SQL Reference > Data Definition Language (DDL)

# Database, Tables, Views, and Indexes
This group of DDL SQL statements allows database administrators to manage elements of the database and schema design. Database administrators can create and modify databases, including SSO authentication settings. These statements also create, modify, drop, and export tables, views, and indexes in the database, and truncate segments on a table. Additional statements are available for unique settings such as segment redundancy, compression settings, and streamloader properties.

## Database

### CREATE DATABASE

`CREATE DATABASE` creates a new database. The database name must be distinct from the name of any existing database in the system. To create a database, you must have the CREATE DATABASE privilege for the current system.

**Syntax**

```sql
CREATE DATABASE [ IF NOT EXISTS ] database_name
```

| Parameter | Type | Description |
|---|---|---|
| database_name | string | A unique identifier for the database. The system generates an error if a duplicate name is provided. `system` is a reserved database name and can neither be created nor dropped. |

**Example**

To create a new database named `ocient`:

```sql
CREATE DATABASE ocient;
```

### DROP DATABASE

`DROP DATABASE` removes an existing database. This SQL statement also disconnects all users currently connected to the database. To remove a database, you must have the DROP DATABASE privilege for the current database. You cannot drop a database while it has any pipeline in a RUNNING status.

The `DROP DATABASE` SQL statement removes the existing database and all created users, tables, and views. This action cannot be undone.

**Syntax**

```sql
DROP DATABASE [ IF EXISTS ] database_name [, ...]
```

| Parameter | Type | Description |
|---|---|---|
| database_name | string | An identifier for the database to be dropped. You can drop multiple databases by specifying additional database names and separating each with commas. `system` is a reserved database name and can neither be created nor dropped. |

**Example**

Remove an existing database named `ocient`:

```sql
DROP DATABASE ocient;
```

### ALTER DATABASE RENAME

`ALTER DATABASE RENAME` renames an existing database. To rename a database, you must have the ALTER DATABASE privilege for the database.

**Syntax**

```sql
ALTER DATABASE old_database_name RENAME TO new_database_name
```

| Parameter | Type | Description |
|---|---|---|
| old_database_name | string | The old identifier of the database for rename. |
| new_database_name | string | The new identifier of the database for rename. |

**Example**

Rename an existing database named `oracle` to `ocient`:

```sql
ALTER DATABASE oracle RENAME TO ocient;
```

### ALTER DATABASE SET SSO INTEGRATION

`ALTER DATABASE SET SSO INTEGRATION` configures the database to authenticate using an external SSO provider. This SSO integration is the default for connections unless you use a connectivity pool or specify a different provider. To set a connection, you must be a system-level user or a database administrator and have an open connection to the database. This SQL statement is an alias for ALTER DATABASE ALTER SSO INTEGRATION (/#alter-database-alter-sso-integration). See docid 5vdoeimcg9i6p xff 6b for details about configuring SSO protocols.

If your Ocient system is version 25.0 or later, you can create multiple SSO integrations for each database. An SSO integration assigned to the database by the `ALTER DATABASE` SQL statement is the primary SSO connection, unless you connect to the database with a connectivity pool that has a different SSO integration assigned to it.

**Syntax**

```sql
ALTER DATABASE database SET SSO INTEGRATION sso_name
```

| Parameter | Type | Description |
|---|---|---|
| database | string | The identifier of the database for configuration. |
| sso_name | string | The identifier of the SSO integration to use. |

**Example**

This example sets an example database to use the SSO integration named `sso_test`:

```sql
ALTER DATABASE example_database SET SSO INTEGRATION sso_test;
```

### ALTER DATABASE ALTER SSO INTEGRATION

`ALTER DATABASE ALTER SSO INTEGRATION` configures the database to authenticate using an external SSO provider. This SSO integration is the default for connections unless you use a connectivity pool or specify a different provider. To alter a connection, you must be a system-level user or a database administrator and have an open connection to the database. This SQL statement is an alias for ALTER DATABASE SET SSO INTEGRATION (/#alter-database-set-sso-integration). See docid 5vdoeimcg9i6p xff 6b for details about configuring SSO protocols.

If your Ocient system is version 25.0 or later, you can create multiple SSO integrations for each database. An SSO integration assigned to the database by the `ALTER DATABASE` SQL statement is the primary SSO connection, unless you connect to the database with a connectivity pool that has a different SSO integration assigned to it.

**Syntax**

```sql
ALTER DATABASE database ALTER SSO INTEGRATION sso_name
```

| Parameter | Type | Description |
|---|---|---|
| database | string | The identifier of the database for configuration. |
| sso_name | string | The identifier of the SSO integration to use. |

**Example**

This example alters an example database to use the SSO integration named `sso_test`:

```sql
ALTER DATABASE example_database ALTER SSO INTEGRATION sso_test;
```

### ALTER DATABASE REMOVE SSO INTEGRATION

`ALTER DATABASE REMOVE SSO INTEGRATION` removes an existing SSO integration as the default connection protocol for the database. This action effectively undoes the ALTER DATABASE SET SSO INTEGRATION (/#alter-database-set-sso-integration) SQL statement. To remove a connection, you must be a system-level user or a database administrator.

**Syntax**

```sql
ALTER DATABASE database REMOVE SSO INTEGRATION
```

| Parameter | Type | Description |
|---|---|---|
| database | string | The identifier of the database for deletion. |

**Example**

Remove the default connection from the database named `example_database`:

```sql
ALTER DATABASE example_database REMOVE SSO INTEGRATION;
```

### ALTER DATABASE ALTER SECURITY

Set the security settings at the database level using the `ALTER DATABASE ALTER SECURITY` SQL statement. Replace `<security_setting>` with the security setting and `<value>` with the value.

**Syntax**

```sql
ALTER DATABASE database ALTER SECURITY <security_setting> [=] <value>
```

| Parameter | Data Type | Description |
|---|---|---|
| database | string | The identifier of the database for setting security settings. |
| security_setting | string | The security setting, with these values: `password_minimum_length`, `password_complexity_level`, `password_no_repeat_count`, `password_lifetime_days`, `password_invalid_attempt_limit`. For details about these values, see docid 3fiusnpipj97zfs1tbm5g. |
| value | numeric | An integer to represent one of the security settings. For details about this value, see docid 3fiusnpipj97zfs1tbm5g. |

**Example**

Set the password lifetime to 20 days for the database `example_db`:

```sql
ALTER DATABASE example_db ALTER SECURITY password_lifetime_days = 20;
```
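The other security settings follow the same pattern. This sketch raises the minimum password length; the setting name comes from the list above, while the database name and the interpretation of the value as a character count are assumptions for illustration.

```sql
-- Assumes password_minimum_length takes the minimum number of
-- password characters as its integer value.
ALTER DATABASE example_db ALTER SECURITY password_minimum_length = 12;
```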
## Table

### CREATE TABLE

`CREATE TABLE` creates a new table in the current database. The table name must be distinct from the name of any existing tables in the database unless the REPLACE keyword is specified. To use REPLACE in the `CREATE TABLE` statement, you must have DELETE privileges. By default, columns are nullable unless otherwise specified.

For faster query results, you can define one timekey for the table, which must be a TIMESTAMP, DATE, or TIME column with a specified bucket resolution. Tables with a specified timekey can perform query operations faster, especially if they involve time filtering. You can also specify a clustering key composed of one or more fixed-length columns; designating columns that are frequently referenced in queries as cluster keys can greatly improve performance. For details about defining timekeys and clustering indexes, see docid 21n4t9o37 dsjnmgsgj1o. See the docid ogtviwl gtbgv0chhrh 3 section for table-supported data types.

To create a table, you must have the CREATE TABLE privilege for the current database. For examples, see docid 0rmcbcyysu 0ej2rmrcqy.

**Syntax**

```sql
CREATE [ OR REPLACE ] TABLE [ IF NOT EXISTS ] table_name
  [ ( <column_definition> [, ...] | <clustering_definition> ) ]
  [ [ WITH ] <create_option> [, ...] ]
  [ AS (query) ]

<column_definition> = column_name [ data_type ] [ <timekey_definition> | <column_constraint> [, ...] ]

<timekey_definition> = TIME KEY BUCKET (bucket_granularity, bucket_value) [ <column_constraint> ]

<column_constraint> =
    TIME KEY BUCKET (bucket_granularity, bucket_value)
  | [ NOT ] NULL
  | DEFAULT literal
  | COMMENT comment
  | COMPRESSION GDC [ (compression_value), EXISTING schema_name ]
  | COMPRESSION [ compression_scheme ]

<clustering_definition> = CLUSTERING KEY key_name (ck_col1, ck_col2 [, ...])
  [, INDEX index_name (idx_col1, idx_col2 [, ...]) [, INDEX ... ] ]

<create_option> =
    STORAGESPACE storage_space_name
  | SEGMENTSIZE segment_value
  | REDUNDANCY segment_part (redundancy_scheme)
  | STREAMLOADER_PROPERTIES streamloader_json
  | COMMENT '<string>'
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | A unique identifier for the table. The name must be distinct from the name of any existing tables in the database unless the REPLACE keyword is specified. |
| query | string | A SELECT query used to load data into the newly created table. For details, see CREATE TABLE AS SELECT (CTAS) (/#create-table-as-select-ctas). |

#### Column Definition (`<column_definition>`)

The parameters listed here are required for defining each column in a table.

| Parameter | Type | Description |
|---|---|---|
| column_name | string | An identifier for a column to be included in the newly created table. |
| data_type | string | The data type of a specified column. For a list of supported data types, see docid ogtviwl gtbgv0chhrh 3. |

#### Clustering Key and Index Definition (`<clustering_definition>`)

The parameters listed here are required for defining a clustering key or clustering indexes on a table. For details about how to apply clustering columns, see docid 21n4t9o37 dsjnmgsgj1o.

```sql
CLUSTERING KEY key_name (ck_col1, ck_col2 [, ...]) [, INDEX index_name (idx_col1, idx_col2 [, ...]) [, INDEX ... ] ]
```

| Parameter | Type | Description |
|---|---|---|
| key_name | string | An identifier for the clustering key. |
| ck_col1, ck_col2 [, ...] | string | A series of specific columns comprising the clustering key. Clustering key columns must not be nullable; specify NOT NULL in the column definition for the respective columns. For details, see https://docs.ocient.com/database-tables-views-and-indexes/#bqmt5. No limit exists on the number of columns for the clustering key. |
| index_name | string | Optional. An identifier for a clustering index. You must include the definition of the columns that comprise the index using the idx_col1, idx_col2 [, ...] parameter. |
| idx_col1, idx_col2 [, ...] | string | Optional. A series of specific columns comprising a clustering index. You can apply clustering indexes only to columns included in the clustering key. Specify any number of columns in any order. You must include the identifier for the index using the index_name parameter. |

#### Timekey Definition (`<timekey_definition>`)

This syntax example is for a single timekey column, which can be included in the column definition of a `CREATE TABLE` statement. For the full syntax, see CREATE TABLE (/#create-table). For details about using timekeys, see docid 21n4t9o37 dsjnmgsgj1o.

```sql
column_name [ data_type ] TIME KEY BUCKET (bucket_granularity, bucket_value)
```

| Parameter | Type | Description |
|---|---|---|
| column_name | string | An identifier for the column. |
| data_type | string | Optional. The data type of the column. The timekey column should be a DATE or TIMESTAMP type. The timekey column can also support the INT or BIGINT type, but these data types do not use a bucket_value argument. |
| bucket_granularity | numeric | The granularity of the timekey column, based on the specified time type in bucket_value. |
| bucket_value | string | The time type to parse the timekey column. Supported values include [ DAY, HOUR, MINUTE, SECOND ]. For example, BUCKET(1, DAY) sets the time bucket granularity to a fixed width of one day. |
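Before the full constraint and option reference that follows, this minimal sketch combines the pieces defined above: a timestamp timekey bucketed by day and a two-column clustering key on non-nullable columns. The table and column names are illustrative only.

```sql
-- Illustrative schema: the clustering key columns must be NOT NULL,
-- and TIME KEY BUCKET(1, DAY) sets a one-day bucket granularity.
CREATE TABLE events (
    event_time TIMESTAMP NOT NULL TIME KEY BUCKET(1, DAY),
    account_id BIGINT NOT NULL,
    event_type INT NOT NULL,
    detail VARCHAR(255),
    CLUSTERING KEY ck_events (account_id, event_type)
);
```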
#### Column Constraint (`<column_constraint>`)

The parameters listed here include constraints and other configurations for individual columns. For best performance, one column with date or time data in each table should be defined as the timekey with a specified bucket value. For details about timekey columns, see docid 21n4t9o37 dsjnmgsgj1o.

| Parameter | Type | Description |
|---|---|---|
| bucket_granularity | numeric | The granularity of the timekey column, based on the specified time type in bucket_value. |
| bucket_value | string | The time type to parse the timekey column. Supported values include [ DAY, HOUR, MINUTE, SECOND ]. For example, BUCKET(1, DAY) sets the time bucket granularity to a fixed width of one day. |
| literal | Depends on the column data type | If specified as a constraint, sets the default value for the column. The value must be a literal enclosed in quotes; do not include an expression with this argument. To cast this value to the data type of the column, the system automatically attempts type coercion. See the DEFAULT example after this table. |
| comment | string | An optional comment for the column. |
| compression_value | numeric | An integer value of 1, 2, or 4 that defines the storage of the GDC compression. A compression_value of 1 can hold up to 255 unique values, a compression_value of 2 can hold up to 65,535 unique values, and a compression_value of 4 can hold millions of unique values. For tuple columns, compression is specified for each tuple value rather than with the other constraints; see the tuple example after this table. Using compression on a column that will contain more than one million unique values is not recommended. |
| schema_name | string | A fully qualified column name, or a column name if you specify a column in the same table. If EXISTING is specified as a compression constraint, the GDC compression reuses the system lookup table for the schema_name column rather than creating a new one. The existing column must be in the same database as the new column. schema_name should include the schema, table, and column names, each separated by periods and enclosed in double quotation marks, for example `"schema_name"."table_name"."column_name"`. If an existing column is specified, any new column that uses the existing column's system lookup table must be deleted before the existing column can be deleted. |
| compression_scheme | string | The type of compression used for the column. Supported values include [ COMPRESSION NONE \| COMPRESSION ZSTD \| COMPRESSION DYNAMIC ]. If no compression setting is specified, the compression defaults to COMPRESSION DYNAMIC for fixed-length columns and COMPRESSION NONE for variable-length columns. COMPRESSION ZSTD applies to VARCHAR columns as well as other data types. For the COMPRESSION DYNAMIC setting, the Ocient system applies LZ4 compression only if the column data is dynamically determined to be compressible; for fixed-length columns, COMPRESSION DYNAMIC applies delta-delta compression. For details about Ocient-supported compression schemes, see docid gac3iwnrtwwnndngn50b. |

For example, this `CREATE TABLE` statement includes a default value for its universally unique identifier (UUID) column:

```sql
CREATE TABLE example_table (
    col1 UUID NOT NULL DEFAULT '00000000-0000-0000-0000-000000000000'
);
```

The supported geospatial data types POINT, LINESTRING, and POLYGON require WKT formatting for default values. For formatting examples, see https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry#Geometric_objects.

This example defines a tuple column with GDC compression only for the first inner VARCHAR value:

```sql
my_tuple TUPLE<<INT, VARCHAR(255) COMPRESSION GDC(1), VARCHAR(255)>>
```
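The EXISTING keyword lets a new GDC-compressed column reuse the system lookup table of a column that already exists. This fragment is a sketch only: the names are hypothetical, and the placement of the EXISTING clause follows the `<column_constraint>` syntax summary above.

```sql
-- Hypothetical names; reuses the GDC lookup table of the existing
-- column "exampledb"."customers"."region", per the syntax summary above.
region VARCHAR(64) NOT NULL COMPRESSION GDC(1), EXISTING "exampledb"."customers"."region"
```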
#### Create Option (`<create_option>`)

The parameters listed here include various options to configure table storage space, segments, redundancy, streamloading, and indexes.

| Parameter | Type | Description |
|---|---|---|
| storage_space_name | string | An identifier for the storagespace. |
| segment_value | numeric | A value to define the size of the segment. |
| segment_part | string | Specifies the segment part redundancy of the table. Supported values include { DATA \| MANIFEST \| INDEX \| SUMMARY_STATS \| STATS }. These settings are defined as follows. DATA: the actual data for the table. MANIFEST: header information stored about the data, which describes how to locate any specified cluster of rows within the data. INDEX: the index of the data, used for quicker lookups and better query performance. SUMMARY_STATS: a collection of statistics on the data that includes compression, row count, and average column size. STATS: used in the optimizer; the probability density function and combinable distinct estimators used to make better optimizations to query plans. |
| redundancy_scheme | string | The redundancy scheme. Supported values include { COPY \| PARITY }. These settings are defined as follows. COPY: a copy of the bytes is stored throughout the storage cluster to ensure redundancy; this option uses more storage but is faster during rebuilds and node outages. PARITY: using the parity encoding specified on the storage cluster, this option uses parity bits to ensure redundancy for the data; this option uses less storage but is slower during rebuilds and node outages. |
| streamloader_json | string | A JSON string that defines the streamloader parameters. For details, see ALTER TABLE STREAMLOADER PROPERTIES (/#alter-table-streamloader-properties). |

**Example**

This example creates a new table in the current database and schema named `trades`. The table uses the timestamp column `created_at` as the timekey, with the granularity set at 1 hour. The columns `ticker_symbol` and `t_type` are defined as the table clustering keys. The example also includes a streamloader property `pageQueryExclusionDuration` to delay how soon recently added data pages can be included in query results.

```sql
CREATE TABLE trades (
    id UUID NOT NULL,
    ticker_symbol VARCHAR(255) NOT NULL COMPRESSION GDC(2),
    t_type VARCHAR(255) NOT NULL COMPRESSION GDC(1),
    raw_ticker_data VARCHAR(255),
    created_at TIMESTAMP NOT NULL TIME KEY BUCKET(1, HOUR),
    array_of_tuples TUPLE<<VARCHAR(2040), BYTE, BIGINT, DOUBLE, TIMESTAMP, DATE, TIME, DECIMAL(3,2), ST_POINT, BOOLEAN, BINARY(6), HASH(8), IP, UUID>>[],
    CLUSTERING INDEX idx_ticker_symbol_type (ticker_symbol, t_type),
    INDEX idx_type (t_type)
)
STORAGESPACE ocient_storage,
REDUNDANCY DATA (PARITY),
REDUNDANCY MANIFEST (COPY),
STREAMLOADER_PROPERTIES '{ "pageQueryExclusionDuration": "2700s" }';
```

### CREATE TABLE AS SELECT (CTAS)

CTAS provides the ability to create and load a new table from the result of a query on one or more existing tables. The first column of the query result maps to the first column of the new table definition, the second column maps to the second column of the new table, and so on.

The new table is available for querying after it has been created and the entire result set from the query has been loaded into the table. When you receive a response to the CTAS SQL statement, the load is complete and the table is ready.

When you create a table from a SELECT SQL statement, the schema for the table can be automatically determined based on the query results. You can override this behavior with an alternative schema, provided the query results can automatically be cast to the target column types. CTAS also supports all syntax options for the new table. CTAS does not support default values and explicit nullable definitions on the columns of the table. CTAS statements support secondary and prefix indexes.

To create a table, you must have both the CREATE TABLE privilege for the current database and the SELECT privilege on all referenced tables and views. For syntax and parameter information, see CREATE TABLE (/#create-table).

#### Default Table Definitions

By default, a new table created with a CTAS statement retains column names, data types, and nullable definitions from the queried table. You can override this configuration with alternate table definitions in the CTAS statement.

Tables created by a CTAS statement do not inherit some table definitions from the original table, including the following:

- Timekey
- Clustering key and clustering indexes
- Secondary indexes
- Column compression
- Optional table configurations (see Create Option (/#create-option-createoption))

To include these table definitions, you must explicitly specify them in the CTAS statement.

#### Examples

These CTAS examples select columns from the `original_table` table that this `CREATE TABLE` statement defines. This table contains these columns:

- col_int: non-nullable integer with the default value 123456789
- col_bigint: non-nullable 8-byte signed integer
- col_id: non-nullable integer
- col_point: non-nullable point with the default value POINT(0 0)
- col_timestamp: non-nullable timekey with granularity set at 1 day
- col_varchar: variable-length character string with a maximum length of 255 characters and Zstandard compression

The table has a clustering key `ck` using the col_bigint and col_id columns. It also has two secondary indexes: a hash index `idx_01` on the col_varchar column and a spatial index `idx_02` on the col_point column.

```sql
CREATE TABLE original_table (
    col_int INT NOT NULL DEFAULT 123456789,
    col_bigint BIGINT NOT NULL,
    col_id INT NOT NULL,
    col_point POINT NOT NULL DEFAULT 'POINT(0 0)',
    col_timestamp TIMESTAMP TIME KEY BUCKET(1, DAY) NOT NULL,
    col_varchar VARCHAR(255) COMPRESSION ZSTD,
    CLUSTERING KEY ck (col_bigint, col_id)
) WITH INDEX idx_01 (col_varchar) USING HASH,
  INDEX idx_02 (col_point) USING SPATIAL;
```

#### CTAS Using All Columns from a Base Table

This example shows a basic CTAS statement that inherits most of its table definition from the `original_table` table.

```sql
CREATE TABLE basic_ctas AS (
    SELECT * FROM original_table
);
```

The new `basic_ctas` table includes all the columns and data types from the `original_table` definition. However, it does not include the segment keys, indexes, or the compression on the col_varchar column. The `EXPORT TABLE` SQL statement shows the differences in the `basic_ctas` table.

```sql
EXPORT TABLE basic_ctas;
```

Output:

```sql
CREATE TABLE basic_ctas (
    "col_int" INT NOT NULL,
    "col_bigint" BIGINT NOT NULL,
    "col_id" INT NOT NULL,
    "col_point" POINT NOT NULL,
    "col_timestamp" TIMESTAMP NOT NULL,
    "col_varchar" VARCHAR(536870912) COMPRESSION DYNAMIC NULL
)
REDUNDANCY CDE (PARITY), REDUNDANCY COLUMN_METADATA (COPY), REDUNDANCY DATA (PARITY),
REDUNDANCY INDEX (PARITY), REDUNDANCY PDF (PARITY), REDUNDANCY SKIP_LISTS (COPY),
REDUNDANCY STATS (PARITY), REDUNDANCY SUMMARY_STATS (PARITY), REDUNDANCY TABLE_OF_CONTENTS (COPY),
STORAGESPACE "ss0", SEGMENTSIZE 4;
```

In this output, the table options for REDUNDANCY, STORAGESPACE, and SEGMENTSIZE are all default table settings.

#### CTAS Using a Full Table Definition

This example CTAS statement includes a more detailed table definition. The definition includes new columns for the timekey, clustering key, and secondary indexes. The example also makes various changes from the `original_table` schema:

- A different column default value
- A different compression scheme (dynamic compression)
- A new timekey granularity of 1 hour
- Three columns in the clustering key
- Different secondary index types (NGRAM and SPATIAL)

```sql
CREATE TABLE complex_ctas (
    col_amt INT NOT NULL,
    col_phone BIGINT NOT NULL COMPRESSION DYNAMIC,
    col_id INT NOT NULL,
    col_point POINT NOT NULL DEFAULT 'POINT(0 0)',
    col_timestamp TIMESTAMP TIME KEY BUCKET(1, HOUR) NOT NULL,
    col_varchar VARCHAR(255) COMPRESSION DYNAMIC,
    CLUSTERING KEY ck (col_amt, col_phone, col_id)
) WITH INDEX idx_01 (col_varchar) USING NGRAM(3),
  INDEX idx_02 (col_point) USING SPATIAL
AS (
    SELECT * FROM original_table
);
```

#### CTAS Using a Subset of Table Columns

This example selects only a subset of columns, col_int, col_bigint, and col_id, from the `original_table` to insert into the new `subset` table. The example also specifies alternate table options for REDUNDANCY and SEGMENTSIZE.

```sql
CREATE TABLE subset (
    col_amt INT NOT NULL,
    col_phone BIGINT NOT NULL,
    col_id INT NOT NULL
) WITH REDUNDANCY CDE (PARITY), SEGMENTSIZE 3
AS (
    SELECT col_int, col_bigint, col_id FROM original_table
);
```

Due to limitations of the JDBC API, the reported modified row count might not be accurate for tables larger than two billion rows.

#### CTAS Using Transformations on Columns

This example performs various transformation functions on the original columns as it selects them for the new table. These include:

- col_int_multiply: the multiplication of the col_int values by 10
- col_month_add: the result of adding three months to each col_timestamp column value
- col_year: the extraction of the year value from each col_timestamp column value
- col_substring: the first three characters from each col_varchar column value

```sql
CREATE TABLE calc_table (
    col_int_multiply INT NOT NULL,
    col_month_add TIMESTAMP NOT NULL,
    col_year INT NOT NULL,
    col_substring VARCHAR(255)
) AS (
    SELECT col_int * 10,
           ADD_MONTHS(col_timestamp, 3),
           YEAR(col_timestamp),
           SUBSTRING(col_varchar, 1, 3)
    FROM original_table
);
```

#### CTAS Using Loaders

Specify one or more loader nodes for executing the CTAS SQL statement. If you do not use this option, the Ocient system uses all loader nodes that are live to execute the SQL statement. This statement is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements. Alternatively, this statement can also help simplify small batch loads by sourcing the data from a single loader node.

**Syntax**

```sql
CREATE TABLE table_name [ ( <column_definition> [, ...] | <clustering_definition> ) ]
USING LOADERS streamloader [, ...]
AS (query)
```

| Parameter | Type | Description |
|---|---|---|
| streamloader | string | A unique name for the loader node. Identify the names of loader nodes from the sys.nodes table by using this query: `SELECT name FROM sys.nodes;` If the name of the loader node contains special characters, you must enclose it in quotes, such as "stream-loader1". |
| query | string | A SELECT query that defines values, or a table and any of its columns, to use for data in the specified table_name table. |

For the query to execute successfully, the specified names of the loader nodes must identify nodes that are live and have the loader role.
**Examples**

Create a table named `my_schema.my_ctas_table_2` with a clustering index named `idx` on the int_col column, using the values in the int_col column of the table named `my_schema.my_table`. Use the loader node named `stream-loader1` to execute this SQL statement.

```sql
CREATE TABLE my_schema.my_ctas_table_2 ( CLUSTERING INDEX idx (int_col) )
USING LOADERS "stream-loader1"
AS (SELECT int_col FROM my_schema.my_table);
```

In this example, execute the same CTAS SQL statement with two loader nodes named `stream-loader2` and `stream-loader3`.

```sql
CREATE TABLE my_schema.my_ctas_table_2 ( CLUSTERING INDEX idx (int_col) )
USING LOADERS "stream-loader2", "stream-loader3"
AS (SELECT int_col FROM my_schema.my_table);
```

### DROP TABLE

`DROP TABLE` removes one or more existing tables in the current database, along with all associated views. This action cannot be undone. To remove a table, the logged-in user must be a system-level user or have the DELETE TABLE privilege for the table.

**Syntax**

```sql
DROP TABLE [ IF EXISTS ] table_name [, ...]
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | A unique identifier for the table. You can drop multiple tables by specifying additional table names and separating each with commas. |

**Examples**

This example drops an existing table in the current database and schema named `employees`.

```sql
DROP TABLE employees;
```

In this example, drop two tables in the current database and schema named `employees` and `departments`.

```sql
DROP TABLE employees, departments;
```

When you drop multiple tables and none of them exist in the database, the database returns an error for each missing table. Use the IF EXISTS clause to convert the error to a warning. If you execute the `DROP TABLE` statement and only some of the tables exist, the database drops the existing tables and returns warnings for each missing table.
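For instance, this variant of the previous example succeeds even if one or both tables have already been dropped:

```sql
-- Missing tables produce warnings rather than errors because of
-- IF EXISTS; any tables that do exist are still dropped.
DROP TABLE IF EXISTS employees, departments;
```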
### ALTER TABLE RENAME

`ALTER TABLE RENAME` renames an existing table. To rename a table, you must have the ALTER TABLE privilege for the table.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] old_table_name RENAME TO new_table_name
```

| Parameter | Type | Description |
|---|---|---|
| old_table_name | string | The name of the table to alter. |
| new_table_name | string | The new name to replace old_table_name. |

**Examples**

This example renames an existing table in the current database and schema named `us_employees` to `mid_west_employees`.

```sql
ALTER TABLE us_employees RENAME TO mid_west_employees;
```

This example renames an existing table in the current database named `us_employees` to `north_america_employees`.

```sql
ALTER TABLE us_employees RENAME TO north_america_employees;
```

### ALTER TABLE RENAME COLUMN

`ALTER TABLE RENAME COLUMN` renames an existing column. To rename a column, you must have the ALTER TABLE privilege for the table.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name RENAME COLUMN old_column_name TO new_column_name
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | A unique identifier for the table. |
| old_column_name | string | The name of the table column to alter. |
| new_column_name | string | The new column name to replace old_column_name. |

**Example**

This example renames an existing column named `name` in the `employees` table in the current database and schema to `first_name`.

```sql
ALTER TABLE employees RENAME COLUMN name TO first_name;
```

### ALTER TABLE ADD COLUMN

`ALTER TABLE ADD COLUMN` adds a new column to the table. To add a column, you must have the ALTER TABLE privilege for the table. New columns must either be nullable or specify a default value. For a defined list of column parameters, see Column Definition (/#column-definition-columndefinition). For constraints, see Column Constraint (/#column-constraint-columnconstraint).

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name ADD COLUMN <column_definition>;

<column_definition> = column_name [ data_type ] [ <column_constraint> [, ...] ]

<column_constraint> =
    TIME KEY BUCKET (bucket_granularity [, bucket_value ])
  | [ NOT ] NULL
  | DEFAULT literal
  | COMMENT comment
  | COMPRESSION GDC [ (compression_value), EXISTING schema_name ]
  | COMPRESSION [ compression_scheme ]
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | A unique identifier for the table. |

**Examples**

This example adds a BIGINT column to the `employees` table in the current database and schema with the default value of 0.

```sql
ALTER TABLE employees ADD COLUMN new_column BIGINT NOT NULL DEFAULT 0;
```

This example adds a column that is nullable.

```sql
ALTER TABLE employees ADD COLUMN new_column BIGINT NULL;
```

### ALTER TABLE ALTER COLUMN COMPRESSION

`ALTER TABLE ALTER COLUMN COMPRESSION` alters an existing column in the table to change its compression scheme. Supported compression schemes are COMPRESSION NONE, COMPRESSION DYNAMIC, and COMPRESSION ZSTD. Altering the compression setting of a column only affects compression for data loaded after you execute the SQL statement. For details about Ocient-supported compression schemes, see docid gac3iwnrtwwnndngn50b.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name ALTER COLUMN column_name SET COMPRESSION [ compression_scheme ];
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table containing the column to alter. |
| column_name | string | The name of the column to alter. |
| compression_scheme | string | Supported values for the compression schemes are as follows. COMPRESSION NONE: specifies no compression applied. COMPRESSION DYNAMIC: applies compression only if the column data is dynamically determined to be compressible. COMPRESSION ZSTD: applies to VARCHAR columns as well as other data types. For this option only, you can specify these additional parameters. COMPRESSION_LEVEL: this value signifies how much compression the data receives; the default value is 0 and the full range of values is from -7 through 15. The database uses less memory when this value is lower and more memory when this value is larger; larger values provide better compression. DICTIONARY_SIZE: a positive integer that signifies the size of the shared compression dictionary in bytes; the default value is 32768 (32 KB) and the full range of values is from 4096 (4 KB) through 1048576 (1 MB). This value denotes the amount of memory consumed during segment generation. In general, larger values provide better compression but use more memory. |

**Examples**

This example alters the compression scheme to dynamic compression for the column `employee_name` in the table `employees` in the current database and schema.

```sql
ALTER TABLE employees ALTER COLUMN employee_name SET COMPRESSION DYNAMIC;
```

This example alters the compression scheme to Zstandard for the column `employee_name` in the table `employees` in the current database and schema.

```sql
ALTER TABLE employees ALTER COLUMN employee_name SET COMPRESSION ZSTD COMPRESSION_LEVEL=5, DICTIONARY_SIZE=32768;
```
### ALTER TABLE ALTER REDUNDANCY

`ALTER TABLE ALTER REDUNDANCY` alters the segment part redundancy for future segments of an existing table. Note that altering a segment part redundancy setting only affects data loaded after applying the SQL statement.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name ALTER REDUNDANCY segment_part (redundancy_scheme)
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table that you want to alter. |
| segment_part | string | Specifies the segment part redundancy of the table. Supported values include { DATA \| MANIFEST \| INDEX \| PDF \| CDE \| STATS }. These settings are defined as follows. DATA: the actual data for the table. MANIFEST: header information stored about the data, which describes how to locate any given cluster of rows within the data. INDEX: the index of the data, used for quicker lookups and better query performance. STATS: used in the optimizer; the probability density function and combinable distinct estimators used to make better optimizations to query plans. |
| redundancy_scheme | string | The redundancy scheme. Supported values include { COPY \| PARITY }. These settings are defined as follows. COPY: the system stores a copy of the bytes throughout the storage cluster to ensure redundancy; this option uses more storage but is faster during rebuilds and node outages. PARITY: the system uses the parity encoding specified on the storage cluster and uses parity bits to ensure redundancy for the data; this option uses less storage but is slower during rebuilds and node outages. |

**Example**

This example alters the STATS part to copy redundancy.

```sql
ALTER TABLE employees ALTER REDUNDANCY STATS (COPY);
```

### ALTER TABLE DROP COLUMN

`ALTER TABLE DROP COLUMN` drops an existing column from the table. You cannot remove the timekey column or the clustering key columns from the table. When you remove a column, the database does not remove or free any actual data.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name DROP COLUMN column_name [ IF EXISTS ]
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table to alter. |
| column_name | string | The name of the column to drop. |

**Example**

This example removes a column named `address` from the table `employees`.

```sql
ALTER TABLE employees DROP COLUMN address;
```

### ALTER TABLE STREAMLOADER PROPERTIES

`ALTER TABLE STREAMLOADER PROPERTIES` resets the table streamloader properties to the provided string. The properties string must be in valid JSON format. The database registers streamloader changes dynamically; therefore, you do not need to restart nodes or take other actions for the changes to take effect. Any properties not specified in the string default to the system-wide setting.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name STREAMLOADER_PROPERTIES streamloader_json
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table to alter. |
| streamloader_json | string | The streamloader properties to alter. See the table of per-table streamloader properties that follows for a list of all supported properties. |

#### Configuring Streamloader Properties

Streamloader properties is a field on the table metadata that must be written as a JSON string in order to be read properly. The database can dynamically render any changes to streamloader properties with the `ALTER TABLE` SQL statement. You can set loader node properties for a new table as a parameter in the `CREATE TABLE` SQL statement. You do not need to restart the database node for the changes to take effect.

Per-table streamloader properties:

| Parameter | Data Type | Description |
|---|---|---|
| pageQueryExclusionDuration | integer | An integer in nanoseconds (ns), or a string with the suffix ns, us, ms, or s appended; for example, "10s" = 10 seconds, "1000us" = 1,000 microseconds. A per-table configuration for the time interval for pages that should be excluded from queries: the database excludes pages that were added to the table more recently than this duration before the query. A value of 0 means the database does not exclude any pages. By default, this value is set to 0 if not specified. |

**Example**

This example sets the loader node properties of the table `employees` to `{"pageQueryExclusionDuration": "30s"}`. This means that any pages added less than 30 seconds ago are not included in query results.

```sql
ALTER TABLE employees STREAMLOADER_PROPERTIES '{ "pageQueryExclusionDuration": "30s" }';
```

### ALTER TABLE DISABLE INDEX

The `ALTER TABLE DISABLE INDEX` statement instructs future queries not to use the specified indexes, but existing segments and new segments continue to have the index available in case you enable the index again. All secondary indexes except for secondary clustering key indexes can be disabled; trying to disable other types of indexes generates an error. You can specify the index by name or UUID.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name DISABLE INDEX { index_name_or_uuid | IN (index_name_or_uuid [, ...]) }
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table to alter. |
| index_name_or_uuid | string | The name or UUID of the index to disable. If you specify the index by name, the name must match an existing index. If you specify a UUID instead, it does not have to match an existing index; therefore, you can disable a dropped index using its UUID, which ensures it is not used within old segments that were loaded with the index. You can get a list of index names and UUIDs in your database by referencing the table at https://docs.ocient.com/system-catalog#eyrdy in the system catalog. |

**Examples**

This example disables an existing index named `current_idx` on the table `employees`.

```sql
ALTER TABLE employees DISABLE INDEX current_idx;
```

This example disables an existing or dropped index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.

```sql
ALTER TABLE employees DISABLE INDEX '5c15d8de-36fa-4055-9bdc-3f1750aaeea0';
```

This example disables both indexes `current_idx` and `other_idx` on the table `employees`.

```sql
ALTER TABLE employees DISABLE INDEX IN (current_idx, other_idx);
```

### ALTER TABLE ENABLE INDEX

The `ALTER TABLE ENABLE INDEX` statement reverts the operation performed by the `ALTER TABLE DISABLE INDEX` statement.

**Syntax**

```sql
ALTER TABLE [ IF EXISTS ] table_name ENABLE INDEX index_name_or_uuid
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table to alter. |
| index_name_or_uuid | string | The name or UUID of the index to enable. If you specify the index by name, the name must match an existing index. If you specify a UUID instead, it does not have to match an existing index; therefore, you can enable a dropped index using its UUID so that old segments that were loaded with the index can use it again. You can get a list of index names and UUIDs in your database by referencing the table at https://docs.ocient.com/system-catalog#eyrdy in the system catalog. |

**Examples**

This example enables an existing index named `current_idx` on the table `employees`.

```sql
ALTER TABLE employees ENABLE INDEX current_idx;
```

This example enables an existing or dropped index with the UUID 5c15d8de-36fa-4055-9bdc-3f1750aaeea0.

```sql
ALTER TABLE employees ENABLE INDEX '5c15d8de-36fa-4055-9bdc-3f1750aaeea0';
```

This example enables both indexes `current_idx` and `other_idx` on the table `employees`.

```sql
ALTER TABLE employees ENABLE INDEX IN (current_idx, other_idx);
```
### DELETE

`DELETE` removes rows from the specified table. You can use the WHERE clause to specify the rows to remove. If a `DELETE` SQL statement lacks the WHERE clause, then the database deletes all rows in the table. To use this statement, you must have the DELETE privilege for the table. For details and examples, see docid mhtrg3 ibhiiqyailb9xj.

Delete actions cannot be undone. If a `DELETE` operation fails during execution, the database rolls back the changes and returns to its original state.

Due to limitations of the JDBC API, the reported modified row count might not be accurate for `DELETE` operations that are larger than two billion rows.

**Syntax**

```sql
DELETE FROM table_name [ WITH cte ] [ WHERE <filter_clause> ]
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table, specified as a string, that indicates where to delete rows. |
| cte | string | A common table expression that defines temporary data for the `DELETE` statement. For details about using common table expressions, see docid qcf0x9ao4a56x id39pkr. |
| `<filter_clause>` | none | A logical combination of predicates that filter the rows to delete based on one or more columns. For details, see the docid qcf0x9ao4a56x id39pkr clause. |

The `DELETE` SQL statement removes all rows from a table if you do not include the WHERE clause.

**Examples**

Delete rows from the table with filter criteria. This `DELETE` SQL statement removes all rows in the `movies` table that have a budget of less than 10000.

```sql
DELETE FROM movies WHERE budget < 10000;
```

Delete rows from the table using a common table expression. This example uses a common table expression with the WITH keyword to find rows representing all transactions that occurred before 2022 and are less than $100. The `DELETE` SQL statement receives the results from the common table expression. Then, the database executes this statement to delete the corresponding rows.

```sql
DELETE FROM transactions
WITH old_transactions AS (
    SELECT transaction_id
    FROM transactions
    WHERE transaction_date < '2022-01-01' AND amount < 100
)
WHERE transaction_id IN (
    SELECT transaction_id
    FROM old_transactions
);
```

### EXPORT TABLE

`EXPORT TABLE` shows the `CREATE TABLE` statement for an existing table in the current database. To export a table, you must have the SELECT TABLE privilege for the table.

**Syntax**

```sql
EXPORT TABLE table_name
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table that you want to export. |

**Example**

This example exports an existing table in the current database and schema named `trades`.

```sql
EXPORT TABLE trades;
```

Output:

```sql
CREATE TABLE "admin@system"."trade_test" (
    "id" UUID NOT NULL,
    "ticker_symbol" VARCHAR(1048576) COMPRESSION GDC(2) NOT NULL,
    "t_type" VARCHAR(1048576) COMPRESSION GDC(1) NOT NULL,
    "raw_ticker_data" VARCHAR(1048576) COMPRESSION DYNAMIC NULL,
    "created_at" TIMESTAMP TIME KEY BUCKET(1, HOUR) NOT NULL,
    "array_of_tuples" TUPLE<<VARCHAR(1048576) COMPRESSION DYNAMIC,TINYINT,BIGINT,DOUBLE PRECISION,TIMESTAMP,DATE,TIME,DECIMAL(3,2),POINT,BOOLEAN,BINARY(6),BINARY(8),IP,UUID>>[] NULL,
    CLUSTERING INDEX "idx_ticker_symbol_type" ("ticker_symbol", "t_type"),
    INDEX "idx_type" ("t_type")
)
REDUNDANCY CDE (PARITY), REDUNDANCY MANIFEST (COPY), REDUNDANCY PDF (PARITY),
REDUNDANCY STATS (PARITY), REDUNDANCY COLUMN_METADATA (COPY), REDUNDANCY INDEX (PARITY),
REDUNDANCY SUMMARY_STATS (PARITY), REDUNDANCY SKIP_LISTS (COPY), REDUNDANCY DATA (PARITY),
STORAGESPACE "storage", SEGMENTSIZE 4,
STREAMLOADER_PROPERTIES '{"pageQueryExclusionDuration": "30s"}';
CREATE INDEX "new_idx" ON "admin@system"."trade_test" ("raw_ticker_data") USING HASH;
```

### INSERT INTO

`INSERT INTO` inserts rows into a table in the current database using the results of a SQL query. The column list of the query must match the columns in the table and the query results. The first column of the query result maps to the first column of the existing table definition, the second column maps to the second column of the existing table, and so on. The column list defaults to all columns in the table if you do not specify any column names.

The database does not require every column in the table to be populated from the query. If you do not choose a column to be populated, the database inserts default values for the column. The database inserts NULL values if the column does not have a default value and the column can contain NULL. If neither a default value nor NULL can be inserted, the operation fails. A sketch after the examples below illustrates this default-filling behavior.

Due to limitations of the JDBC API, the reported modified row count might not be accurate for insert operations that are larger than two billion rows.

**Syntax**

```sql
INSERT INTO table_name [ ( col1, col2 [, ...] ) ] [ WITH cte ] { query | VALUES [ <rows_to_insert> ] }

<rows_to_insert> = ( row1_col1, row1_col2 [, ...] ), ( row2_col1, row2_col2 [, ...] ) [, ...]
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table for insertion. |
| col1, col2 [, ...] | string | The specific columns for insertion of new values. |
| cte | string | A common table expression that defines temporary data for the `INSERT` statement. For details about using common table expressions, see docid qcf0x9ao4a56x id39pkr. |
| query | string | A SELECT query that defines values, or a table and any of its columns, that should be inserted into the specified table_name. |
| row_col1, row_col2 [, ...] | string | The specific values to insert into columns in the table. |

**Examples**

Insert values from one table. This example inserts the columns from `system_table_b` into `system_table_a`.

```sql
INSERT INTO system_table_a SELECT * FROM system_table_b;
```

Insert values from multiple columns. This example inserts the column `system_table_b.id_col_b` into `system_table_a.id_col_a` and `system_table_b.int_col_b` into `system_table_a.int_col_a`.

```sql
INSERT INTO system_table_a (id_col_a, int_col_a)
SELECT id_col_b, int_col_b FROM system_table_b;
```

Insert literal values. You can also insert multiple rows of literal values. The `CREATE TABLE` statement is included to demonstrate the table schema.

```sql
CREATE TABLE sales (
    product INT NOT NULL,
    quantity INT NOT NULL,
    sale_date DATE NOT NULL
);

INSERT INTO sales (product, quantity, sale_date)
VALUES (1, 10, '2023-01-15'),
       (2, 5, '2023-01-20'),
       (1, 8, '2023-02-05');
```

Insert values using a common table expression. In this example, a common table expression performs calculations on the `sales` table before inserting rows into the `monthly_sales_summary` table. The example uses the `monthly_sales_summary` table created by this `CREATE TABLE` statement with these non-nullable columns:

- product_id: product identifier
- month: month part of the date
- total_quantity: total quantity of the product

```sql
CREATE TABLE monthly_sales_summary (
    "product_id" INT NOT NULL,
    "month" DATE NOT NULL,
    "total_quantity" INT NOT NULL
);
```

The common table expression following the WITH keyword extracts the month from the sale date (sale_date) and calculates the sum of the quantity sold (the quantity column, aliased as total_quantity) from the `sales` table before inserting this data. Then, the `INSERT` SQL statement specifies to insert the data into the `monthly_sales_summary` table.

```sql
INSERT INTO monthly_sales_summary (product_id, month, total_quantity)
WITH monthly_totals AS (
    SELECT product,
           DATE_TRUNC('month', sale_date) AS month,
           SUM(quantity) AS total_quantity
    FROM sales
    GROUP BY product, DATE_TRUNC('month', sale_date)
)
SELECT product, month, total_quantity FROM monthly_totals;
```
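The default-filling behavior described at the start of this section can be seen in a short sketch. The `notes` table here is hypothetical: one column has a default value and another is nullable.

```sql
-- Hypothetical table: note_text has a default value, tag is nullable.
CREATE TABLE notes (
    id INT NOT NULL,
    note_text VARCHAR(255) NOT NULL DEFAULT 'n/a',
    tag VARCHAR(64)
);

-- Only id is listed, so note_text receives its default value 'n/a'
-- and tag receives NULL. If tag were NOT NULL with no default value,
-- this statement would fail.
INSERT INTO notes (id) VALUES (1);
```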
#### INSERT INTO Using Loaders

Specify one or more loader nodes for executing the `INSERT INTO` SQL statement. If you do not use this option, the Ocient system uses all loader nodes that are live to execute the SQL statement. This statement is useful for managing loading operations, particularly when balancing multiple loads of different sizes and resource requirements. Alternatively, this statement can also help simplify small batch loads by sourcing the data from a single loader node.

**Syntax**

```sql
INSERT INTO table_name [ ( col1, col2 [, ...] ) ] USING LOADERS streamloader [, ...] query
```

| Parameter | Type | Description |
|---|---|---|
| streamloader | string | A unique name for the loader node. Identify the names of loader nodes from the sys.nodes table by using this query: `SELECT name FROM sys.nodes;` If the name of the streamloader contains special characters, you must enclose it in quotes, such as "stream-loader1". For the query to execute successfully, the specified names must identify nodes that are live and have the loader role. |

**Examples**

This example inserts the column `system_table_b.id_col_b` into `system_table_a.id_col_a` and `system_table_b.int_col_b` into `system_table_a.int_col_a`. Use the loader node named `stream-loader1` to execute this SQL statement.

```sql
INSERT INTO system_table_a (id_col_a, int_col_a)
USING LOADERS "stream-loader1"
SELECT id_col_b, int_col_b FROM system_table_b;
```

In this example, execute the same SQL statement with two loader nodes named `stream-loader2` and `stream-loader3`.

```sql
INSERT INTO system_table_a (id_col_a, int_col_a)
USING LOADERS "stream-loader2", "stream-loader3"
SELECT id_col_b, int_col_b FROM system_table_b;
```

### TRUNCATE TABLE

`TRUNCATE TABLE` removes some or all records from an existing table in the current database. The system deletes the truncated data, but the table and its schema remain intact in the system even if all data is deleted. If the entire table is truncated, global dictionary compression tables remain in place. To truncate a table, you must have the DELETE privilege for the table. To remove a subset of rows from a table, you can use the DELETE (/#delete-from-table) SQL statement. For details and examples of using `TRUNCATE`, see docid mhtrg3 ibhiiqyailb9xj.

This action cannot be undone.

**Syntax**

```sql
TRUNCATE TABLE table_name
TRUNCATE TABLE table_name WHERE segment_group_id = <id>
TRUNCATE TABLE table_name WHERE segment_group_id IN (<id>, ...)
```

| Parameter | Type | Description |
|---|---|---|
| table_name | string | The name of the table to truncate. |

**Examples**

This example truncates an existing table in the current database and schema named `students`.

```sql
TRUNCATE TABLE students;
```

This example truncates an existing table in the current database named `us_students`.

```sql
TRUNCATE TABLE us_students;
```

This example truncates a single segment group from an existing table in the current database named `students`.

```sql
TRUNCATE TABLE students WHERE segment_group_id = 123456789;
```

This example truncates a number of segment groups from an existing table in the current database named `us_students`.

```sql
TRUNCATE TABLE us_students WHERE segment_group_id IN (1,2,3,4,5);
```

## View

### CREATE VIEW

`CREATE VIEW` creates a new view in the current database or replaces an existing view. For view creation, the name of the view must be distinct from the name of any existing views in the database. To create a view, you must have both the CREATE VIEW privilege for the current database and the SELECT privilege on all directly referenced tables and views in the query. To replace an existing view, you must also have DROP privileges on that view.

**Syntax**

```sql
CREATE [ OR REPLACE ] VIEW [ IF NOT EXISTS ] view_name AS query
```

| Parameter | Type | Description |
|---|---|---|
| view_name | string | A distinct identifier used to name the view. |
| query | string | A SELECT query that defines the data from a table used to create the view. |

**Example**

This example creates a view in the current database and schema named `option_trades`.

```sql
CREATE VIEW option_trades AS SELECT * FROM trades WHERE type = 'options';
```
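Because the syntax supports the OR REPLACE keyword, you can redefine the same view in place. A small sketch reusing the `option_trades` view from the previous example, with an assumed extra filter:

```sql
-- Redefines option_trades if it already exists; otherwise creates it.
-- Replacing a view requires DROP privileges on the view.
CREATE OR REPLACE VIEW option_trades AS
SELECT * FROM trades WHERE type = 'options' AND ticker_symbol IS NOT NULL;
```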
### DROP VIEW

`DROP VIEW` removes one or more existing views in the current database. To remove a view, you must have the DROP VIEW privilege for the view.

**Syntax**

```sql
DROP VIEW [ IF EXISTS ] view_name [, ...]
```

| Parameter | Type | Description |
|---|---|---|
| view_name | string | The identifier of the view to drop. You can drop multiple views by specifying additional view names and separating each with commas. |

**Examples**

This example drops an existing view in the current database and schema named `star_employees`.

```sql
DROP VIEW star_employees;
```

This example drops two views named `star_employees` and `bad_employees`.

```sql
DROP VIEW star_employees, bad_employees;
```

### ALTER VIEW RENAME

`ALTER VIEW RENAME` renames an existing view. To rename a view, you must have the ALTER VIEW privilege for the view.

**Syntax**

```sql
ALTER VIEW [ IF EXISTS ] old_view_name RENAME TO new_view_name
```

| Parameter | Type | Description |
|---|---|---|
| old_view_name | string | The identifier of the view to rename. |
| new_view_name | string | The new name for the specified view. |

**Examples**

This example renames an existing view in the current database and schema named `star_employees` to `star_mid_west_employees`.

```sql
ALTER VIEW star_employees RENAME TO star_mid_west_employees;
```

This example renames an existing view in the current database named `us_star_employees` to `us_star_mid_west_employees`.

```sql
ALTER VIEW us_star_employees RENAME TO us_star_mid_west_employees;
```

### ALTER VIEW AS

`ALTER VIEW AS` modifies the inner query of an existing view. To modify the query for an existing view, you must be a system-level user or have the ALTER VIEW privilege for the view. You must also have the SELECT privilege on all referenced tables and views in the new query.

**Syntax**

```sql
ALTER VIEW [ IF EXISTS ] view_name AS query
```

| Parameter | Type | Description |
|---|---|---|
| view_name | string | The identifier for the view to alter. |
| query | string | A SELECT query that defines the new data from a table used to alter the view. |

**Example**

This example alters a view in the current database and schema named `star_employees`.

```sql
ALTER VIEW star_employees AS SELECT * FROM sys.tables;
```

### EXPORT VIEW

`EXPORT VIEW` shows the `CREATE VIEW` statement for an existing view in the current database. To export a view, the logged-in user must be a system-level user or have the read view right for the view.

**Syntax**

```sql
EXPORT VIEW view_name
```

| Parameter | Type | Description |
|---|---|---|
| view_name | string | The identifier for the view to export. |

**Example**

This example exports an existing view in the current database and schema named `students`.

```sql
EXPORT VIEW students;
```

## Index

### CREATE INDEX

`CREATE INDEX` creates a new secondary index. Indexes help optimize database queries when created on columns that are frequently referenced. For more information on how Ocient indexes operate, see docid ssxi4zc3p mqdtr0b7qdn.

Creating an index does not trigger re-indexing of existing segments. Only segments generated after the `CREATE INDEX` statement is issued contain the new index.

Indexes can be created on columns containing various data types as long as the requirements are met. Note that, depending on the data type, the system can assign different index types by default if you decline to specify which index type to use. The index name must be distinct from the name of any existing index on the table. You can apply indexes to columns regardless of whether they have GDC compression.

**Syntax**

```sql
CREATE INDEX [ IF NOT EXISTS ] index_name ON table (column_name) [ USING <index_type> ]

<index_type> = INVERTED | HASH | NGRAM [ (n_value) ] | SPATIAL
```

| Parameter | Type | Description |
|---|---|---|
| index_name | string | An identifier for the index to create. The name must be distinct from the name of any existing index on the table. |
| table | string | The name of the table for the index. |
| column_name | string | The name of the column for the index. Identical indexes on the same column are not allowed; a column can only have multiple indexes if they are of different types or parameters. |
| n_value | integer | Optional. When used with an NGRAM index, this numeric value specifies the character length of the substrings to be indexed. If unspecified, this value defaults to 3. |

#### Index Types (`<index_type>`)

Ocient supports four index types alongside the clustering index: INVERTED, HASH, NGRAM, and SPATIAL. An index notionally stores a mapping of a column value to the rows that contain that value, and the index type differentiates the format in which the column values are stored and accessed. Unless an index type is explicitly specified with a USING clause, the data type of a column determines the default index type that the system creates. For information on index type defaults, see docid ssxi4zc3p mqdtr0b7qdn.

For container data types (e.g., arrays and tuples), the index stores the internal elements of the container and is used on predicates that target the internal values. However, a mapping of NULL column values is generally stored for both scalar and container data types, so the index can always be used for `column IS NULL` predicates.

| Index Type | Primary Data Types | Primary Usage and Description |
|---|---|---|
| INVERTED | Fixed-length numeric columns | Stores the whole column value internally, meaning its storage size is approximately the same as the width of the data type. Supports lookups using strict equality or range comparisons. For details, see the docid ssxi4zc3p mqdtr0b7qdn section. |
| HASH | Variable-length character columns | Stores a hash of the indexed column value rather than the full value. Primarily used for exact comparisons. For details, see the docid ssxi4zc3p mqdtr0b7qdn section. |
| NGRAM | Variable-length character columns | Stores substrings equal in size to its n_value. Storage requirements can vary greatly depending on column data size, width, and cardinality. Supports exact string comparison and filters including LIKE, NOT LIKE, SIMILAR TO, and NOT SIMILAR TO. For details, see the docid ssxi4zc3p mqdtr0b7qdn section. |
| SPATIAL | Geospatial columns (POINT, LINESTRING, POLYGON) | Groups geographic objects for bounding-box filtering. For details, see the docid ssxi4zc3p mqdtr0b7qdn section. |

For further description and examples of the index types, see docid ssxi4zc3p mqdtr0b7qdn.

**Examples**

This example creates an index named `new_idx` on the address column of the `employees` table. Because address is a VARCHAR column, this index defaults to the HASH index type.

```sql
CREATE INDEX new_idx ON employees (address);
```

This example creates an index of type NGRAM on the address column. As the NGRAM index has no specified n_value, it defaults to indexing substrings three characters long.

```sql
CREATE INDEX ngram_address_idx ON employees (address) USING NGRAM;
```

This example creates an index on a component of the tuple_col column. As this component is of data type INT, the index defaults to using the INVERTED type.

```sql
CREATE INDEX tuple_index ON employees (tuple_col[1]);
```

This example creates an index on the point_col column. As this column is of data type POINT, the index defaults to using the SPATIAL type.

```sql
CREATE INDEX spatial_index ON employees (point_col);
```
### DROP INDEX

`DROP INDEX` drops a secondary index on a table. After an index is dropped, new segments that are generated do not contain the index. However, no existing segments are altered; this means that until a segment is rebuilt, the system can still use the removed index internally, and the system does not reclaim the storage space the removed index occupied.

**Syntax**

```sql
DROP INDEX [ IF EXISTS ] index_name ON table_name
```

| Parameter | Type | Description |
|---|---|---|
| index_name | string | An identifier for the index to drop. |
| table_name | string | The name of the table with the index to drop. |

**Example**

This example drops the index named `new_idx` on the `employees` table.

```sql
DROP INDEX new_idx ON employees;
```

**Related Links**

- docid qcf0x9ao4a56x id39pkr
- docid 3fiusnpipj97zfs1tbm5g
- https://docs.ocient.com/system-catalog