SQL Reference
Data Definition Language (DDL)
Tables
This group of DDL SQL statements allows database administrators to manage tables. Database administrators can create, modify, drop, and export tables. You can also truncate segments on a table. Additional statements are available for unique settings such as segment redundancy, compression settings, and streamloader properties. You can view information about tables using the sys.tables system catalog table. For information on other database components, see the pages on docid\ uy5crlqbr0ymvtqzwna6x, docid\ m9zbpvfqpm96jidxyhqvi, docid\ g tzn2zxb4zp8w7za8a1k, and docid\ n7zyxmwfavae6fnmv2uvw.

CREATE TABLE

CREATE TABLE creates a new table in the current database. The table name must be distinct from the name of any existing table in the database unless the REPLACE keyword is specified. To use REPLACE in the CREATE TABLE statement, you must have DELETE privileges. By default, columns are nullable unless otherwise specified.

For faster query results, you can define one timekey for the table, which must be a TIMESTAMP, DATE, or TIME column with a specified bucket resolution. Tables with a specified timekey can perform query operations faster, especially queries that involve time filtering. You can also specify a clustering key composed of one or more fixed-length columns. Designating columns that are frequently referenced in queries as clustering keys can greatly improve performance. For details about defining timekeys and clustering indexes, see docid 7asr7i8qb0jlunvq9tzq. See the docid\ czxgepf3prq9ufwhwbiuy section for supported table data types.

Required Privileges

You must have the CREATE TABLE privilege for the current database. For examples, see docid\ veeoxedu opmz9xm0qqac.

Syntax

CREATE [ OR REPLACE ] TABLE [ IF NOT EXISTS ] table_name
[ ( <column_definition> [, ...] | <clustering_definition> ) ]
[ [ WITH ] <create_option> [, ...] ]
[ AS (query) ]

<column_definition> = column_name [ data_type ] [ <timekey_definition> ] [ <column_constraint> [, ...] ]

<timekey_definition> = TIME KEY BUCKET (bucket_granularity, bucket_value)

<column_constraint> =
  [ NOT ] NULL
| DEFAULT literal
| COMMENT comment
| COMPRESSION GDC [ (compression_value), existing_schema_name ]
| COMPRESSION [ compression_scheme ]

<clustering_definition> = CLUSTERING KEY key_name (ck_col1, ck_col2 [, ...]) [, INDEX index_name (idx_col1, idx_col2 [, ...]) [, INDEX ... ] ]

<create_option> =
  INDEX index_name (index_column) [ USING index_type ]
| RETENTION POLICY AGE retention_granularity retention_value
| STORAGESPACE storage_space_name
| SEGMENTSIZE segment_value
| REDUNDANCY segment_part (redundancy_scheme)
| STREAMLOADER PROPERTIES streamloader_json
| COMMENT '<string>'

Column Definition ( <column_definition> )

The parameters listed here are required for defining each column in a table.

Clustering Key and Index Definition ( <clustering_definition> )

The parameters listed here are required for defining a clustering key or clustering indexes on a table. For details about how to apply clustering columns, see docid 7asr7i8qb0jlunvq9tzq.

CLUSTERING KEY key_name (ck_col1, ck_col2 [, ...]) [, INDEX index_name (idx_col1, idx_col2 [, ...]) [, INDEX ... ] ]

Timekey Definition ( <timekey_definition> )

This syntax example is for a single timekey column, which can be included in the column definition of a CREATE TABLE statement. For the full syntax, see docid\ yhp4b1irv haf8f3df ww. For details about using timekeys, see docid 7asr7i8qb0jlunvq9tzq.

column_name [ data_type ] TIME KEY BUCKET (bucket_granularity, bucket_value [, NOINDEX ] )

Column Constraint ( <column_constraint> )

The parameters listed here include constraints and other configurations for individual columns. For best performance, one column with date or time data in each table should be defined as the time key with a specified bucket value. For details about timekey columns, see docid 7asr7i8qb0jlunvq9tzq.

Create Option ( <create_option> )

The parameters listed here include various options to configure table storage space, segments, redundancy, streamloading, and indexes.
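The parameter forms above can be combined into a compact statement. The following is a minimal sketch, not taken from the official examples; the events table, its columns, and the 30-day retention value are all hypothetical:

```sql
-- Hypothetical minimal table: a NOT NULL column with a default,
-- a timekey bucketed by day, and a retention policy create option.
CREATE TABLE IF NOT EXISTS events (
    event_id   BIGINT NOT NULL,
    status     VARCHAR(20) NOT NULL DEFAULT 'new',
    created_at TIMESTAMP NOT NULL TIME KEY BUCKET(1, DAY)
) WITH RETENTION POLICY AGE 30 DAY;
```

Here created_at follows the <timekey_definition> form, and the retention policy is supplied as a <create_option>.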
Example

This example creates a new table in the current database and schema named trades. The table uses the TIMESTAMP column created_at as the timekey, with the granularity set at 1 hour. The columns ticker_symbol and t_type are defined as the table clustering keys. The example also includes a streamloader property, pagequeryexclusionduration, to delay how soon data pages that were recently added can be included in query results.

CREATE TABLE trades (
    id uuid NOT NULL,
    ticker_symbol VARCHAR(255) NOT NULL COMPRESSION GDC(2),
    t_type VARCHAR(255) NOT NULL COMPRESSION GDC(1),
    raw_ticker_data VARCHAR(255),
    created_at TIMESTAMP NOT NULL TIME KEY BUCKET(1, HOUR),
    array_of_tuples TUPLE<<VARCHAR(2040), BYTE, BIGINT, DOUBLE, TIMESTAMP, DATE, TIME, DECIMAL(3,2), ST_POINT, BOOLEAN, BINARY(6), HASH(8), IP, UUID>>[],
    CLUSTERING INDEX idx_ticker_symbol_type (ticker_symbol, t_type),
    INDEX idx_type (t_type)
)
STORAGESPACE ocient_storage,
REDUNDANCY DATA (PARITY),
REDUNDANCY MANIFEST (COPY),
STREAMLOADER PROPERTIES '{ "pagequeryexclusionduration": "2700s" }';

CREATE TABLE AS SELECT (CTAS)

CTAS provides the ability to create and load a new table from the result of a query on one or more existing tables. The first column of the query result maps to the first column of the new table definition, the second column maps to the second column of the new table, and so on.

The new table is available for querying after it has been created and the entire result set from the query has been loaded into the table. When you receive a response to the CTAS SQL statement, the load is complete and the table is ready.

When you create a table from a SELECT SQL statement, the schema for the table can be automatically determined based on the query results. You can override this behavior with an alternative schema, provided the query results can automatically be cast to the target column types. CTAS also supports all syntax options for the new table. CTAS does not support default values and explicit nullable definitions on the columns of the table. CTAS statements support secondary and prefix indexes.

To create a table, you must have both the CREATE TABLE privilege for the current database and the SELECT privilege on all referenced tables and views. For syntax and parameter information, see docid\ yhp4b1irv haf8f3df ww.

Default Table Definitions

By default, a new table created with a CTAS statement retains column names, data types, and nullable definitions from the queried table. You can override this configuration with alternate table definitions in the CTAS statement. Tables created by a CTAS statement do not inherit some table definitions from the original table, including the following:

- Timekey
- Clustering key and clustering indexes
- Secondary indexes
- Column compression
- Optional table configurations (see docid\ yhp4b1irv haf8f3df ww)

To include these table definitions, you must explicitly specify them in the CTAS statement.

Examples

These CTAS examples select columns from the original_table table that this CREATE TABLE statement defines. This table contains these columns:

- col_int - non-nullable integer with the default value 123456789
- col_bigint - non-nullable 8-byte signed integer
- col_id - non-nullable integer
- col_point - non-nullable point with the default value POINT(0 0)
- col_timestamp - non-nullable timekey with granularity set at 1 day
- col_varchar - variable-length character string with a maximum length of 255 characters and Zstandard compression

The table has a clustering key ck using the col_bigint and col_id columns. It also has two secondary
indexes: a hash index idx_01 on the col_varchar column and a spatial index idx_02 on the col_point column.

CREATE TABLE original_table (
    col_int INT NOT NULL DEFAULT 123456789,
    col_bigint BIGINT NOT NULL,
    col_id INT NOT NULL,
    col_point POINT NOT NULL DEFAULT 'POINT(0 0)',
    col_timestamp TIMESTAMP TIME KEY BUCKET(1, DAY) NOT NULL,
    col_varchar VARCHAR(255) COMPRESSION ZSTD,
    CLUSTERING KEY ck (col_bigint, col_id)
) WITH
INDEX idx_01 (col_varchar) USING HASH,
INDEX idx_02 (col_point) USING SPATIAL;

CTAS Using All Columns from a Base Table

This example shows a basic CTAS statement that inherits most of its table definition from the original_table table.

CREATE TABLE basic_ctas AS (
    SELECT * FROM original_table
);

The new basic_ctas table includes all the columns and data types from the original_table definition. However, it does not include the segment keys, indexes, or the compression on the col_varchar column. The EXPORT TABLE SQL statement shows the differences in the basic_ctas table.

EXPORT TABLE basic_ctas;

Output:

CREATE TABLE basic_ctas (
    "col_int" INT NOT NULL,
    "col_bigint" BIGINT NOT NULL,
    "col_id" INT NOT NULL,
    "col_point" POINT NOT NULL,
    "col_timestamp" TIMESTAMP NOT NULL,
    "col_varchar" VARCHAR(536870912) COMPRESSION DYNAMIC NULL
)
REDUNDANCY CDE (PARITY), REDUNDANCY COLUMN METADATA (COPY), REDUNDANCY DATA (PARITY), REDUNDANCY INDEX (PARITY), REDUNDANCY PDF (PARITY), REDUNDANCY SKIP LISTS (COPY), REDUNDANCY STATS (PARITY), REDUNDANCY SUMMARY STATS (PARITY), REDUNDANCY TABLE OF CONTENTS (COPY), STORAGESPACE "ss0", SEGMENTSIZE 4;

In this output, the table options for REDUNDANCY, STORAGESPACE, and SEGMENTSIZE are all default table settings.

CTAS Using a Full Table Definition

This example CTAS statement includes a more detailed table definition. The definition includes new columns for the timekey, clustering key, and secondary indexes. The example also makes various changes from the original_table schema:

- A different column default value
- A different compression scheme (dynamic compression)
- A new timekey granularity of 1 hour
- Three columns in the clustering key
- Different secondary index types (NGRAM and SPATIAL)

CREATE TABLE complex_ctas (
    col_amt INT NOT NULL,
    col_phone BIGINT NOT NULL COMPRESSION DYNAMIC,
    col_id INT NOT NULL,
    col_point POINT NOT NULL DEFAULT 'POINT(0 0)',
    col_timestamp TIMESTAMP TIME KEY BUCKET(1, HOUR) NOT NULL,
    col_varchar VARCHAR(255) COMPRESSION DYNAMIC,
    CLUSTERING KEY ck (col_amt, col_phone, col_id)
) WITH
INDEX idx_01 (col_varchar) USING NGRAM(3),
INDEX idx_02 (col_point) USING SPATIAL
AS (
    SELECT * FROM original_table
);

CTAS Using a Subset of Table Columns

This example selects only a subset of columns (col_int, col_bigint, and col_id) from the original_table to insert into the new subset table. The example also specifies alternate table options for REDUNDANCY and SEGMENTSIZE.

CREATE TABLE subset (
    col_amt INT NOT NULL,
    col_phone BIGINT NOT NULL,
    col_id INT NOT NULL
) WITH
REDUNDANCY CDE (PARITY),
SEGMENTSIZE 3
AS (
    SELECT col_int, col_bigint, col_id FROM original_table
);

Due to limitations of the JDBC API, the reported modified row count might not be accurate for tables larger than two billion rows.

CTAS Using Transformations on Columns

This example performs various transformation functions on the original columns as it selects them for the new table:

- The col_int_multiply column is the multiplication of the col_int values by 10.
- The col_month_add column is the result of adding three months to each col_timestamp column value.
- The col_year column is the extraction of the year value from each col_timestamp column value.
- The col_substring column contains the first three characters from each col_varchar column value.

CREATE TABLE calc_table (
    col_int_multiply INT NOT NULL,
    col_month_add TIMESTAMP NOT NULL,
    col_year INT NOT NULL,
    col_substring VARCHAR(255)
) AS (
    SELECT col_int * 10, ADD_MONTHS(col_timestamp, 3), YEAR(col_timestamp), SUBSTRING(col_varchar, 1, 3)
    FROM original_table
);

CTAS Using Loaders

Specify one or more
loader nodes for executing the CTAS SQL statement. If you do not use this option, the Ocient system uses all loader nodes that are live to execute the SQL statement. This option is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements. Alternatively, it can also help simplify small batch loads by sourcing the data from a single loader node.

Syntax

For the query to execute successfully, the specified names of the loader nodes must:

- Identify nodes that are live.
- Identify nodes that have the loader role.

Examples

Create a table named my_schema.my_ctas_table_2 with a clustering index named idx on the int_col column, using the values in the int_col column of the table named my_schema.my_table. Use the loader node named stream_loader1 to execute this SQL statement.

CREATE TABLE my_schema.my_ctas_table_2 ( CLUSTERING INDEX idx (int_col) )
USING LOADERS "stream_loader1"
AS (SELECT int_col FROM my_schema.my_table);

In this example, execute the same CTAS SQL statement with two loader nodes named stream_loader2 and stream_loader3.

CREATE TABLE my_schema.my_ctas_table_2 ( CLUSTERING INDEX idx (int_col) )
USING LOADERS "stream_loader2", "stream_loader3"
AS (SELECT int_col FROM my_schema.my_table);

DROP TABLE

DROP TABLE removes one or more existing tables in the current database, along with all associated views. This action cannot be undone. To remove a table, the logged-in user must be a system-level user or have the DELETE TABLE privileges for the table.

Syntax

DROP TABLE [ IF EXISTS ] table_name [, ...]

Examples

This example drops an existing table in the current database and schema named employees.

DROP TABLE employees;

In this example, drop two tables in the current database and schema named employees and departments.

DROP TABLE employees, departments;

When you drop multiple tables and none of them exist in the database, the database returns an error for each missing table. Use the IF EXISTS clause to convert the error to a warning. If you execute the DROP TABLE statement and only some of the tables exist while other tables are missing, the database drops the existing tables and returns warnings for each missing table.

ALTER TABLE

ALTER TABLE RENAME

ALTER TABLE RENAME renames an existing table.

Required Privileges

To rename a table, you must have the ALTER TABLE privilege for the table. The Ocient system requires these privileges if this statement includes a change to the schema:

- VIEW privilege on the current schema of the table
- VIEW TABLE and CREATE TABLE privileges on the target schema (if the schema already exists)
- CREATE TABLE privilege on the database (if the schema does not exist)

Syntax

ALTER TABLE [ IF EXISTS ] old_table_name RENAME TO new_table_name

Example

This example renames an existing table in the current database and schema named us_employees to mid_west_employees.

ALTER TABLE us_employees RENAME TO mid_west_employees;

This example renames an existing table in the current database named us_employees to north_america_employees.

ALTER TABLE us_employees RENAME TO north_america_employees;

ALTER TABLE RENAME COLUMN

ALTER TABLE RENAME COLUMN renames an existing column. To rename a column, you must have the ALTER TABLE privilege for the table.

Syntax

ALTER TABLE [ IF EXISTS ] table_name RENAME COLUMN old_column_name TO new_column_name
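The rename syntax accepts an optional IF EXISTS clause. As a sketch of that form (the employees table comes from the examples in this section; this reference describes the missing-table-becomes-a-warning behavior only for DROP TABLE, so treating IF EXISTS the same way here is an assumption):

```sql
-- Hypothetical use of the optional IF EXISTS clause:
-- rename the column only when the table is present.
ALTER TABLE IF EXISTS employees RENAME COLUMN name TO first_name;
```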
Example

This example renames an existing column named name in the table employees in the current database and schema to first_name.

ALTER TABLE employees RENAME COLUMN name TO first_name;

ALTER TABLE ADD COLUMN

ALTER TABLE ADD COLUMN adds a new column to the table. To add a column, you must have the ALTER TABLE privilege for the table. New columns must either be nullable or specify a default value. For a defined list of column parameters, see docid\ yhp4b1irv haf8f3df ww. For constraints, see docid\ yhp4b1irv haf8f3df ww.

Syntax

ALTER TABLE [ IF EXISTS ] table_name ADD COLUMN <column_definition>;

<column_definition> = column_name [ data_type ] [ <column_constraint> [, ...] ]

<column_constraint> =
  TIME KEY BUCKET(bucket_granularity [, bucket_value ] )
| [ NOT ] NULL
| DEFAULT literal
| COMMENT comment
| COMPRESSION GDC [ (compression_value), existing_schema_name ]
| COMPRESSION [ compression_scheme ]

Examples

This example adds a BIGINT column to the employees table in the current database and schema with the default value of 0.

ALTER TABLE employees ADD COLUMN new_column BIGINT NOT NULL DEFAULT 0;

This example adds a column that is nullable.

ALTER TABLE employees ADD COLUMN new_column BIGINT NULL;

ALTER TABLE ALTER COLUMN COMPRESSION

ALTER TABLE ALTER COLUMN COMPRESSION alters an existing column in the table to change its compression scheme. Supported compression schemes are COMPRESSION NONE, COMPRESSION DYNAMIC, and COMPRESSION ZSTD. Altering the compression setting of a column only affects compression for data loaded after you execute the SQL statement. For details about Ocient-supported compression schemes, see docid\ dfaledfc0jdwsfsar3pav.

Syntax

ALTER TABLE [ IF EXISTS ] table_name ALTER COLUMN column_name SET COMPRESSION [ compression_scheme ];

Examples

This example alters the compression scheme for the column employee_name in the table employees in the current database and schema.

ALTER TABLE employees ALTER COLUMN employee_name SET COMPRESSION lz4;

This example alters the compression scheme to Zstandard for the column employee_name in the table employees in the current database and schema.

ALTER TABLE employees ALTER COLUMN employee_name SET COMPRESSION zstd compression_level=5, dictionary_size=32768;

ALTER TABLE ALTER REDUNDANCY

ALTER TABLE ALTER REDUNDANCY alters the segment part redundancy for future segments of an existing table. Note that altering a segment part redundancy setting only affects data loaded after applying the SQL statement.

Syntax

ALTER TABLE [ IF EXISTS ] table_name ALTER REDUNDANCY segment_part (redundancy_scheme)

Example

This example alters the stats part to copy redundancy.

ALTER TABLE employees ALTER REDUNDANCY STATS (COPY);

ALTER TABLE DROP COLUMN

ALTER TABLE DROP COLUMN drops an existing column from the table. You cannot remove the timekey column or the clustering key columns from the table. When you remove a column, the database does not remove or free any actual data.

Syntax

ALTER TABLE [ IF EXISTS ] table_name
DROP COLUMN column_name [ IF EXISTS ]

Example

This example removes a column named address from the table employees.

ALTER TABLE employees DROP COLUMN address;

ALTER TABLE STREAMLOADER PROPERTIES

ALTER TABLE STREAMLOADER PROPERTIES resets the table streamloader properties to the provided string. The properties string must be in valid JSON format. The database registers streamloader changes dynamically; therefore, you do not need to restart nodes or take other actions for the changes to take effect. Any properties not specified in the string default to the system-wide setting.

Syntax

ALTER TABLE [ IF EXISTS ] table_name STREAMLOADER PROPERTIES streamloader_json

Configuring Streamloader Properties

Streamloader properties is a field on the table metadata that must be written as a JSON string in order to be read properly. The database can dynamically render any changes to streamloader properties with the ALTER TABLE SQL statement. You can set loader node properties for a new table as a parameter in the CREATE TABLE SQL statement. You do not need to restart the database node for the changes to take effect.

Example

This example sets the loader node properties of the table employees to {"pagequeryexclusionduration": "30s"}. This means that any pages added less than 30 seconds ago are not included in query results.

ALTER TABLE employees STREAMLOADER PROPERTIES '{ "pagequeryexclusionduration": "30s" }';

ALTER TABLE DISABLE INDEX

The ALTER TABLE DISABLE INDEX statement instructs future queries not to use the specified indexes, but existing segments and new segments continue to have the index available in case you enable the index again. All secondary indexes except for secondary clustering key indexes can be disabled. Trying to disable other types of indexes generates an error. You can specify the index by name or UUID.

Syntax

ALTER TABLE [ IF EXISTS ] table_name DISABLE INDEX { index_name_or_uuid | IN (index_name_or_uuid [, ...]) }

Examples

This example disables an existing index named current_idx on the table employees.

ALTER TABLE employees DISABLE INDEX current_idx;

This example disables an existing or dropped index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.

ALTER TABLE employees DISABLE INDEX '5c15d8de-36fa-4055-9bdc-3f1750aaeea0';

This example disables both indexes current_idx and other_idx on the table employees.

ALTER TABLE employees DISABLE INDEX IN (current_idx, other_idx);

ALTER TABLE ENABLE INDEX

The ALTER TABLE ENABLE INDEX statement reverts the operation performed by the ALTER TABLE DISABLE INDEX statement.

Syntax

ALTER TABLE [ IF EXISTS ] table_name ENABLE INDEX index_name_or_uuid

Examples

This example enables an existing index named current_idx on the table employees.

ALTER TABLE employees ENABLE INDEX current_idx;

This example enables an existing or dropped index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.

ALTER TABLE employees ENABLE INDEX '5c15d8de-36fa-4055-9bdc-
3f1750aaeea0';

This example enables both indexes current_idx and other_idx on the table employees.

ALTER TABLE employees ENABLE INDEX IN (current_idx, other_idx);

ALTER TABLE ENABLE RETENTION POLICY AGE

This ALTER TABLE SQL statement enacts a new retention policy on the specified table. A table can have only one retention policy. For details about retention policies, see docid 5vdcjrldo1h6fdcwxwkl0.

Required Privileges

You must have the ALTER and DELETE privileges for the specified table.

Enacting a new retention policy on an existing table already loaded with data might cause the system to delete many rows.

Syntax

ALTER TABLE [ IF EXISTS ] table_name ENABLE RETENTION POLICY AGE retention_granularity retention_value

Example

This example creates a new retention policy for my_table that deletes any rows older than 1 day.

ALTER TABLE my_table ENABLE RETENTION POLICY AGE 1 DAY;

ALTER TABLE DISABLE RETENTION POLICY

This ALTER TABLE SQL statement disables a retention policy on the specified table. For details about retention policies, see docid 5vdcjrldo1h6fdcwxwkl0.

Required Privileges

You must have the ALTER TABLE system privilege and the ALTER privilege for the specified table.

Syntax

ALTER TABLE [ IF EXISTS ] table_name DISABLE RETENTION POLICY

Example

This example removes the retention policy for my_table.

ALTER TABLE my_table DISABLE RETENTION POLICY;

DELETE

DELETE FROM table removes rows from the specified table. You can use the WHERE clause to specify the rows to remove. If a DELETE SQL statement lacks the WHERE clause, then the database deletes all rows in the table. To use this statement, you must have the DELETE privilege for the table. For details and examples, see docid\ ubdqnmju5stv aqfavviz.

Delete actions cannot be undone. If a DELETE operation fails during execution, the database rolls back the changes and returns to its original state.

Due to limitations of the JDBC API, the reported modified row count might not be accurate for DELETE operations that are larger than two billion rows.

Syntax

DELETE FROM table_name [ with_cte ] [ WHERE <filter_clause> ]

Examples

Delete Rows from the Table with Filter Criteria

This DELETE SQL statement removes all rows in the movies table that have a budget of less than 10000.

DELETE FROM movies WHERE budget < 10000;

Delete Rows from the Table Using a Common Table Expression

This example uses a common table expression, introduced with the WITH keyword, to find rows representing all transactions that occurred before 2022 and are less than $100. The DELETE SQL statement receives the results from the common table expression. Then, the database executes this statement to delete the corresponding rows.

DELETE FROM transactions
WITH old_transactions AS (
    SELECT transaction_id
    FROM transactions
    WHERE transaction_date < '2022-01-01' AND amount < 100
)
WHERE transaction_id IN (
    SELECT transaction_id
    FROM old_transactions
);

EXPORT TABLE

EXPORT TABLE shows the CREATE TABLE statement for an existing table in the current database. To export a table, you must have the SELECT TABLE privilege for the table.

Syntax

EXPORT TABLE table_name

Example

This
example exports an existing table in the current database and schema named trades.

EXPORT TABLE trades;

Output:

CREATE TABLE "admin@system"."trade_test" (
    "id" UUID NOT NULL,
    "ticker_symbol" VARCHAR(1048576) COMPRESSION GDC(2) NOT NULL,
    "t_type" VARCHAR(1048576) COMPRESSION GDC(1) NOT NULL,
    "raw_ticker_data" VARCHAR(1048576) COMPRESSION DYNAMIC NULL,
    "created_at" TIMESTAMP TIME KEY BUCKET(1, HOUR) NOT NULL,
    "array_of_tuples" TUPLE<<VARCHAR(1048576) COMPRESSION DYNAMIC,TINYINT,BIGINT,DOUBLE PRECISION,TIMESTAMP,DATE,TIME,DECIMAL(3,2),POINT,BOOLEAN,BINARY(6),BINARY(8),IP,UUID>>[] NULL,
    CLUSTERING INDEX "idx_ticker_symbol_type" ("ticker_symbol", "t_type"),
    INDEX "idx_type" ("t_type")
)
REDUNDANCY CDE (PARITY), REDUNDANCY MANIFEST (COPY), REDUNDANCY PDF (PARITY), REDUNDANCY STATS (PARITY), REDUNDANCY COLUMN METADATA (COPY), REDUNDANCY INDEX (PARITY), REDUNDANCY SUMMARY STATS (PARITY), REDUNDANCY SKIP LISTS (COPY), REDUNDANCY DATA (PARITY), STORAGESPACE "storage", SEGMENTSIZE 4, STREAMLOADER PROPERTIES '{"pagequeryexclusionduration": "30s"}';
CREATE INDEX "new_idx" ON "admin@system"."trade_test" ("raw_ticker_data") USING HASH;

INSERT INTO

INSERT INTO inserts rows into a table in the current database using literal values, column references, function executions, computed expressions, or column default values. This SQL statement requires the INSERT privilege for the relevant table.

Due to limitations of the JDBC API, the reported modified row count might not be accurate for INSERT operations that are larger than two billion rows.

Syntax

INSERT INTO table_name [ ( col1, col2 [, ...] ) ]
[ with_cte ]
{ query | DEFAULT VALUES | VALUES [ <rows_to_insert> ] }

<rows_to_insert> = ( row1_exp1, row1_exp2 [, ...] ), ( row2_exp1, row2_exp2 [, ...] ) [, ...]

Using DEFAULT VALUES inserts a single row where each target column is populated with its column default (as defined in the column definition) instead of an explicit VALUES list. For table columns that do not have a defined default value, the inserted value is NULL. If a column has no default and also has the NOT NULL constraint, the INSERT operation generates an error.

Examples

Insert Values from One Column

This example inserts the columns from system_table_b into system_table_a.

INSERT INTO system_table_a SELECT * FROM system_table_b;

Insert Values from Multiple Columns

This example inserts the column system_table_b.id_col_b into system_table_a.id_col_a and system_table_b.int_col_b into system_table_a.int_col_a.

INSERT INTO system_table_a (id_col_a, int_col_a) SELECT id_col_b, int_col_b FROM system_table_b;

Insert Literal Values

Create a table with product, quantity, and date-of-sale information with these non-nullable columns:

- product - product identifier
- quantity - quantity of the product sold
- sale_date - date of sale

CREATE TABLE sales (
    product INT NOT NULL,
    quantity INT NOT NULL,
    sale_date DATE NOT NULL
);

Insert three rows of literal values that represent different sales.

INSERT INTO sales (product, quantity, sale_date)
VALUES (1, 10, '2023-01-15'), (2, 5, '2023-01-20'), (1, 8, '2023-02-05');

Insert Values Using a Common Table Expression

In this example, a common table expression performs calculations on the sales table before inserting rows into the monthly_sales_summary table. The example uses the monthly_sales_summary table created by this CREATE TABLE statement with these non-nullable columns:

- product_id - product identifier
- month - month part of the date
- total_quantity - total quantity of the product

CREATE TABLE monthly_sales_summary (
    "product_id" INT NOT NULL,
    "month" DATE NOT NULL,
    "total_quantity" INT NOT NULL
);

The common table expression following the WITH keyword extracts the month from the sale date (sale_date) and calculates the sum of the quantity sold (total_quantity) of the product from the sales table before inserting this data. Then, the INSERT SQL statement specifies to insert the data into the monthly_sales_summary table.

INSERT INTO monthly_sales_summary (product_id, month, total_quantity)
WITH monthly_totals AS (
    SELECT product, DATE_TRUNC('month', sale_date) AS month, SUM(quantity) AS total_quantity
    FROM sales
    GROUP BY product, DATE_TRUNC('month', sale_date)
)
SELECT product, month, total_quantity FROM monthly_totals;

Insert Columns Using Default Values

This code uses the customers table with these columns:

- customer_id - customer identifier
- name - customer name
- status - customer status with the default 'active' status
- created_at - created date

CREATE TABLE customers (
    customer_id INT,
    name VARCHAR(100),
    status VARCHAR(20) DEFAULT 'active',
    created_at TIMESTAMP
);

Use the DEFAULT VALUES keyword to insert one row of default values into the table. For columns that lack a defined default value, the operation inserts NULL.

INSERT INTO customers DEFAULT VALUES;

The resulting row contains all NULL values except for the status column, which has the 'active' default value.

SELECT * FROM customers;

Output:

| customer_id | name  | status | created_at |
|-------------|-------|--------|------------|
|             |       | active |            |

Alternatively, you can insert default values by using the DEFAULT keyword as one of the row values in the INSERT statement.

INSERT INTO customers (customer_id, name, status, created_at)
VALUES (1, 'alice', DEFAULT, NULL);

Output:

| customer_id | name  | status | created_at |
|-------------|-------|--------|------------|
|             |       | active |            |
| 1           | alice | active |            |

INSERT INTO Table Using Loaders

Specify one or more loader nodes for executing the INSERT INTO SQL statement. If you do not use this option, the Ocient system uses all loader nodes that are live to execute the
sql statement this statement is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements alternatively, this statement can also help simplify small batch loads by sourcing the data from a single loader node syntax insert into table table name using loaders streamloader \[, ] query true 193,117,270 unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type for the query to execute successfully, the specified names must identify nodes that are live identify nodes that have the loader role examples this example inserts the column system table b id col b into system table a id col a and system table b int col b into system table a int col a use the loader node named stream loader1 to execute this sql statement insert into system table a (id col a, int col a) using loaders "stream loader1" select id col b, int col b from system table b; in this example, execute the same sql statement with two loader nodes named stream loader2 and stream loader3 insert into system table a (id col a, int col a) using loaders "stream loader2","stream loader3" select id col b, int col b from system table b; truncate table truncate table removes some or all records from an existing table in the current database the system deletes the truncated data, but the table and its schema remain intact in the system even if all data is deleted if the entire table is truncated, global dictionary compression tables remain in place to truncate a table, you must have the delete privilege for the table to remove a subset of rows from a table, you can use the docid\ yhp4b1irv haf8f3df ww sql statement for details and examples of using truncate , see docid\ ubdqnmju5stv aqfavviz this action cannot be undone and results in data loss syntax truncate table table name truncate table table name where segment group id = \<id> truncate table table name where segment 
group id in (\<id>, ) true 193,193,194 unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type unhandled content type examples this example truncates an existing table in the current database and schema named students truncate table students; this example truncates an existing table in the current database named us students truncate table us students; this example truncates a single segment group from an existing table in the current database named students truncate table students where segment group id = 123456789; this example truncates a number of segment groups from an existing table in the current database named us students truncate table us students where segment group id in (1,2,3,4,5); related links docid nwuyof4 i 7wgmmmbf4j docid\ nw9vavkey2v75moxm muo docid\ jfltms91v2bfledppksy8 docid\ f55ngxtki0f7kkmyatvug https //docs ocient com/system catalog
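The `DEFAULT VALUES` behavior described in the INSERT INTO section above is standard SQL, so it can be demonstrated outside Ocient. The sketch below uses Python's built-in `sqlite3` module, not an Ocient client, and simplifies the `customers` schema to SQLite types; it illustrates that columns without a defined default receive `NULL`, and that a `NOT NULL` column without a default causes the statement to fail.

```python
import sqlite3

# In-memory database; mirrors the "customers" example from the text,
# with types simplified to SQLite's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER,
        name        TEXT,
        status      TEXT DEFAULT 'active',
        created_at  TEXT
    )
""")

# DEFAULT VALUES inserts one row: columns with a default get it,
# the rest become NULL.
conn.execute("INSERT INTO customers DEFAULT VALUES")
row = conn.execute("SELECT * FROM customers").fetchone()
print(row)  # (None, None, 'active', None)

# A NOT NULL column with no default makes DEFAULT VALUES fail.
conn.execute("CREATE TABLE strict_t (id INTEGER NOT NULL)")
try:
    conn.execute("INSERT INTO strict_t DEFAULT VALUES")
except sqlite3.IntegrityError as exc:
    print("error:", exc)
```

The same two outcomes, a defaulted row and a constraint error, are what the Ocient statement produces per the description above.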