# Discover Insights From System Catalog Tables
You can query the Ocient system catalog tables to discover important information about system objects such as databases, segments, tables, and indexes. These statistics can be important for monitoring your system to ensure it operates as intended. However, querying these tables can be challenging because generating insights about your system can involve writing complex queries that merge, group, and filter multiple system catalog tables.

Here, you can find examples of useful queries that you can use as templates for discovering insights about your system. Most of these example queries use the `data_type_coverage` table available with the Ocient Simulator. For details, see Ocient Simulator.

## Segment System Catalog Tables

For specific column definitions of the system catalog tables referenced in these examples, see the System Catalog page.

### Segment Size

Check the size of segments and return insights about the segment storage allocation and the number of table rows contained in each segment.

```sql
SELECT current_timestamp,
       sg.segment_type,
       t.name,
       count(*) AS num_segments,
       sum(s.row_count) AS total_rows,
       avg(s.row_count) AS avg_rows_per_segment,
       double(sum(s.segment_size)) / 1000000000 AS total_size_gb,
       max(s.segment_size / 1000000) AS max_size,
       min(s.segment_size / 1000000) AS min_size,
       avg(s.segment_size) / 1000000 AS avg_size_mb,
       stdev(double(s.segment_size) / 1000000) AS stdev_size_mb
FROM sys.segment_groups AS sg
JOIN sys.tables t ON t.id = sg.table_id
JOIN sys.segments s ON s.segment_group_id = sg.id
GROUP BY sg.segment_type, t.name
ORDER BY total_size_gb DESC, total_rows DESC, sg.segment_type, t.name;
```

Output:

| current_timestamp() | segment_type | name | num_segments | total_rows | avg_rows_per_segment | total_size_gb | max_size | min_size | avg_size_mb | stdev_size_mb |
|---|---|---|---|---|---|---|---|---|---|---|
| 2024-07-31 16:23:22.863 | tkt_segment | data_type_coverage | 3 | 100000 | 33333.333333333336 | 0.189542692 | 113 | 37 | 63.180897333333334 | 43.73338035824336 |

For information on the system catalog tables queried in this example, see sys.segments and sys.tables.
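A related check is to look for tables whose segments are unevenly sized, which can indicate skewed loading. The following sketch is a hedged variant of the query above that reuses only the sys.segments, sys.segment_groups, and sys.tables columns already shown; the 0.5 threshold is an arbitrary assumption, so adjust it for your system.

```sql
-- Sketch: tables whose segment sizes vary widely (possible load skew).
-- Reuses the same catalog columns as the segment size query above.
SELECT t.name,
       count(*) AS num_segments,
       avg(s.segment_size) / 1000000 AS avg_size_mb,
       stdev(double(s.segment_size) / 1000000) AS stdev_size_mb
FROM sys.segment_groups AS sg
JOIN sys.tables t ON t.id = sg.table_id
JOIN sys.segments s ON s.segment_group_id = sg.id
GROUP BY t.name
HAVING stdev(double(s.segment_size)) > 0.5 * avg(s.segment_size)
ORDER BY stdev_size_mb DESC;
```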
### Page and Segment Information

Check the details for pages and segments in the system.

```sql
WITH cte AS (
    SELECT
        coalesce(v.schema, t.schema) AS schema,
        coalesce(v.name, t.name) AS name,
        stored_segments.status,
        stored_segments.kind,
        sum(CASE WHEN segments.segment_type = 'page' THEN 1 ELSE 0 END) AS pagecount,
        sum(CASE WHEN segments.segment_type = 'tkt_segment' THEN 1 ELSE 0 END) AS segcount,
        sum(CASE WHEN segments.segment_type = 'page' THEN segments.row_count ELSE 0 END) AS page_rows,
        sum(CASE WHEN segments.segment_type = 'tkt_segment' THEN segments.row_count ELSE 0 END) AS seg_rows,
        min(CASE WHEN segments.segment_type = 'page' THEN sg.begin_time ELSE NULL END) AS minpagetime,
        max(CASE WHEN segments.segment_type = 'page' THEN sg.end_time ELSE NULL END) AS maxpagetime,
        min(CASE WHEN segments.segment_type = 'tkt_segment' THEN sg.begin_time ELSE NULL END) AS minsegtime,
        max(CASE WHEN segments.segment_type = 'tkt_segment' THEN sg.end_time ELSE NULL END) AS maxsegtime
    FROM sys.segments
    JOIN sys.stored_segments ON segments.segment_group_id = stored_segments.segment_group_id
        AND segments.ida_offset = stored_segments.ida_offset
    JOIN sys.segment_groups sg ON sg.id = segments.segment_group_id
    JOIN sys.tables AS t ON t.id = sg.table_id
    LEFT JOIN sys.views AS v ON v.global_dictionary_compression_table_id = t.id
    WHERE stored_segments.end_osn = 'osn_infinity'
    GROUP BY 1, 2, stored_segments.status, stored_segments.kind
)
SELECT
    schema,
    name,
    status,
    kind,
    pagecount,
    segcount,
    page_rows,
    seg_rows,
    page_rows + seg_rows AS tot_rows,
    ceiling(page_rows / (pagecount + .000001)) AS rows_per_page,
    ceiling(seg_rows / (segcount + .000001)) AS rows_per_seg,
    to_timestamp(minpagetime / 1000000000) AS minpagetime,
    to_timestamp(maxpagetime / 1000000000) AS maxpagetime,
    to_timestamp(minsegtime / 1000000000) AS minsegtime,
    to_timestamp(maxsegtime / 1000000000) AS maxsegtime,
    now()
FROM cte
ORDER BY schema, name;
```

Output:

| schema | name | status | kind | pagecount | segcount | page_rows | seg_rows | tot_rows | rows_per_page | rows_per_seg | minpagetime | maxpagetime | minsegtime | maxsegtime | current_timestamp() |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| loading | data_type_coverage | intact | disk | 0 | 3 | 0 | 100000 | 100000 | 0.0 | 33334.0 | | | 1970-01-01 00:00:00.000 | 1970-01-01 00:00:00.000 | 2024-08-07 21:41:36.800 |

### Storage Spaces on a System

Check all storage spaces on your system.

```sql
SELECT * FROM sys.storage_spaces;
```

Output:

| id | name | is_system_storage_space | block_size | total_width | parity_width | parity_type | parity_cycles | page_replication |
|---|---|---|---|---|---|---|---|---|
| ce9f9c2e-d3fe-4101-a2b2-6f7f2580327b | ss0 | false | 4096 | 3 | 1 | xor | 1 | 2 |
| 7277656e-6465-6c20-7761-732068657265 | SystemStorageSpace | true | 4096 | 3 | 2 | replication | 1 | 3 |

## Query Management System Catalog Tables

For specific column definitions of the system catalog tables referenced in these examples, see the System Catalog page.

### Running Queries

Check all queries currently running on the system. The output of the sys.queries table depends on the database you are logged into and your system role. For details, see Data Control Language (DCL) Statement Reference.

```sql
SELECT
    queries.query_id,
    queries.initial_priority,
    queries.effective_priority,
    queries.user,
    queries.total_time,
    queries.status,
    queries.rows_returned,
    queries.bytes_returned
FROM sys.queries
WHERE queries.sql NOT LIKE '%sys.queries%';
```

```sql
SELECT
    q.query_id,
    q.user,
    q.sql,
    q.status,
    "database_name",
    q.bytes_returned
FROM sys.queries q;
```

Output:

| query_id | user | sql | status | database_name | bytes_returned |
|---|---|---|---|---|---|
| 66b27dde-17df-403a-82d7-65e34b6fd07f | admin@system | example query | running | test | 0 |
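When triaging a busy system, it can help to surface only the longest-running statements first. A minimal sketch, reusing only the sys.queries columns from the queries above and sorting by total_time (confirm the unit reported on your system before you alert on it):

```sql
-- Sketch: the ten currently running queries with the largest total_time.
SELECT q.query_id,
       q.user,
       q.status,
       q.total_time,
       q.sql
FROM sys.queries q
WHERE q.sql NOT LIKE '%sys.queries%'
ORDER BY q.total_time DESC
LIMIT 10;
```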
"with the user query to add a limit to as (select q query id | q user | q sql | q status | ""database name"" | q bytes returned from sys queries q) select from the user query to add a limit to limit 200" | | | | | | | | | \| admin\@system | "2024 08 06t21 41 04 521" | 205 | "00000" | the operation completed successfully | "with the user query to add a limit to as (select sg segment type | ss kind | ss status | t name | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss join sys segment groups sg on sg id = ss segment group id join sys tables t on t id = sg table id where ss status <> 'intact' group by sg segment type | ss kind | ss status | t name | active order by num segments desc | ss kind | ss status | t name | active) select from the user query to add a limit to limit 200" | \| admin\@system | "2024 08 06t21 38 19 740" | 90 | "00000" | the operation completed successfully | with the user query to add a limit to as (select from sys stored segments) select from the user query to add a limit to limit 200 | | | | | | | | | | | | | | \| admin\@system | "2024 08 06t21 37 35 181" | 5 | "42703" | the reference to referenced column is not valid (the column 'ss kind' is being referenced incorrectly maybe a missing group by?) | "with the user query to add a limit to as (select ss kind | ss status | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss) select from the user query to add a limit to limit 200" | | | | | | | | | | | \| admin\@system | "2024 08 06t21 37 33 710" | 5 | "42703" | the reference to referenced column is not valid (the column 'ss kind' is being referenced incorrectly maybe a missing group by?) | "with the user query to add a limit to as (select ss kind | ss status | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss) select from the user query to add a limit to limit 200" | | | | | | | | | | | \| admin\@system | "2024 08 06t21 36 03 692" | 11 | "42703" | the reference to referenced column is not valid (the column 'ss kind' is being referenced incorrectly maybe a missing group by?) 
| "with the user query to add a limit to as (select ss kind | ss status | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss) select from the user query to add a limit to limit 200" | | | | | | | | | | | \| admin\@system | "2024 08 06t21 35 51 301" | 4 | "42703" | the reference to column 't name' is not valid | "with the user query to add a limit to as (select ss kind | ss status | t name | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss) select from the user query to add a limit to limit 200" | | | | | | | | | | \| admin\@system | "2024 08 06t21 35 10 331" | 5 | "42703" | the reference to column 'sg segment type' is not valid | "with the user query to add a limit to as (select sg segment type | ss kind | ss status | t name | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss) select from the user query to add a limit to limit 200" | | | | | | | | | \| admin\@system | "2024 08 06t21 34 42 970" | 208 | "00000" | the operation completed successfully | "with the user query to add a limit to as (select sg segment type | ss kind | ss status | t name | (ss end osn = 'osn infinity') active | count(distinct ss storage id) num segments from sys stored segments ss join sys segment groups sg on sg id = ss segment group id join sys tables t on t id = sg table id where ss status <> 'intact' group by sg segment type | ss kind | ss status | t name | active order by num segments desc | ss kind | ss status | t name | active) select from the user query to add a limit to limit 200" | database and table system catalog tables for specific column definitions of the system catalog tables used in these examples, see the system catalog docid 2zcc9xuscejvt5v ihgy6 section of the system catalog page database sizes check the total size of all databases on your system select to char(sum(sp size), '999,999,999,999,999') as totsize, d name from sys segment parts as sp join sys segment groups as sg on sp segment group id = sg id join sys tables as t on sg table id = t id join sys databases as d on t database id = d id group by d name order by 1 desc; output \| "totsize" | "name" | \| | | \| 204276004 | test | table sizes check the total size of all tables on the system with cte as ( select 	 sg id as segment group id, 	 sp name as segment name, 	 sp part type as partition type, 	 sp size as segment size, 	 sg segment type, 	 sg status, 	 t schema, 	 t name from sys segment parts as sp inner join sys segment groups as sg on 	 sp segment group id = sg id inner join sys tables as t on 	 sg table id = t id ), ctefinal as ( select schema, name, sum(segment size) as total table size from cte group by schema, name ) select schema, name, total table size, sum(total table size) over( order by total table size desc ) as running total from ctefinal order by total table size desc; output \| "schema" | "name" | "total table size" | "running total" | \| | | | | \| loading | data type coverage | 204276004 | 204276004 | table statistics this query provides a breakdown of table statistics and partition information with cte1 as ( 	select 	 sg id as segment group id, 	 sp name as segment name, 	 sp part type as partition type, 	 sp size as segment size, 	 sg segment type, 	 sg status, 	 t schema, 	 t name 	from sys segment parts as sp 	inner join sys segment groups as sg on 	 sp segment group id = sg id 	inner join sys tables as t on 	 sg table id = t id ), cte 
### Table Sizes

Check the total size of all tables on the system.

```sql
WITH cte AS (
    SELECT
        sg.id AS segment_group_id,
        sp.name AS segment_name,
        sp.part_type AS partition_type,
        sp.size AS segment_size,
        sg.segment_type,
        sg.status,
        t.schema,
        t.name
    FROM sys.segment_parts AS sp
    INNER JOIN sys.segment_groups AS sg ON sp.segment_group_id = sg.id
    INNER JOIN sys.tables AS t ON sg.table_id = t.id
),
ctefinal AS (
    SELECT schema, name, sum(segment_size) AS total_table_size
    FROM cte
    GROUP BY schema, name
)
SELECT schema,
       name,
       total_table_size,
       sum(total_table_size) OVER (ORDER BY total_table_size DESC) AS running_total
FROM ctefinal
ORDER BY total_table_size DESC;
```

Output:

| schema | name | total_table_size | running_total |
|---|---|---|---|
| loading | data_type_coverage | 204276004 | 204276004 |

### Table Statistics

This query provides a breakdown of table statistics and partition information.

```sql
WITH cte1 AS (
    SELECT
        sg.id AS segment_group_id,
        sp.name AS segment_name,
        sp.part_type AS partition_type,
        sp.size AS segment_size,
        sg.segment_type,
        sg.status,
        t.schema,
        t.name
    FROM sys.segment_parts AS sp
    INNER JOIN sys.segment_groups AS sg ON sp.segment_group_id = sg.id
    INNER JOIN sys.tables AS t ON sg.table_id = t.id
),
cte AS (
    SELECT
        count(*) AS num_segments,
        sum(segment_size) AS partition_size,
        partition_type,
        sum(sum(segment_size)) OVER (PARTITION BY schema, name) AS total_table_size,
        sum(count(*)) OVER (PARTITION BY schema, name) AS total_table_segments,
        schema,
        name
    FROM cte1
    GROUP BY partition_type, schema, name
)
SELECT name,
       num_segments,
       partition_size,
       partition_type,
       cast(float(partition_size) / (total_table_size + 1) * 100 AS decimal(6,3)) AS percent_of_total,
       total_table_size,
       total_table_segments
FROM cte
ORDER BY name, partition_size DESC;
```

Output:

| name | num_segments | partition_size | partition_type | percent_of_total | total_table_size | total_table_segments |
|---|---|---|---|---|---|---|
| data_type_coverage | 3 | 113397760 | parity data | 59.827 | 189542692 | 42 |
| data_type_coverage | 3 | 75599872 | data | 39.885 | 189542692 | 42 |
| data_type_coverage | 3 | 172032 | skip lists | 0.091 | 189542692 | 42 |
| data_type_coverage | 3 | 172032 | copy skip lists | 0.091 | 189542692 | 42 |
| data_type_coverage | 3 | 90112 | parity stats | 0.048 | 189542692 | 42 |
| data_type_coverage | 3 | 59210 | stats | 0.031 | 189542692 | 42 |
| data_type_coverage | 3 | 12288 | table of contents | 0.006 | 189542692 | 42 |
| data_type_coverage | 3 | 12288 | parity index | 0.006 | 189542692 | 42 |
| data_type_coverage | 3 | 12288 | copy table of contents | 0.006 | 189542692 | 42 |
| data_type_coverage | 3 | 8192 | index | 0.004 | 189542692 | 42 |
| data_type_coverage | 3 | 4096 | parity summary stats | 0.002 | 189542692 | 42 |
| data_type_coverage | 3 | 906 | column metadata | 0.000 | 189542692 | 42 |
| data_type_coverage | 3 | 906 | copy column metadata | 0.000 | 189542692 | 42 |
| data_type_coverage | 3 | 710 | summary stats | 0.000 | 189542692 | 42 |

For information on the system catalog tables queried in this example, see sys.segment_groups and sys.segment_parts.

### Compression

Check the information on the compression for each column in the specified table. In this example, the query checks the `data_type_coverage` table.

```sql
SELECT r."table",
       c.name,
       r.is_rle_enabled,
       r.is_ne_enabled,
       r.raw_size,
       r.compressed_size,
       r.is_deltadelta_enabled,
       r.num_deltadelta_blocks,
       r.num_rle_blocks,
       r.num_nerle_blocks,
       r.num_uncompressed_blocks,
       r.num_total_blocks
FROM sys.columns c,
     (SELECT t.name AS "table",
             t.id,
             cci.is_deltadelta_enabled,
             cci.is_rle_enabled,
             cci.is_ne_enabled,
             cci.raw_size,
             cci.compressed_size,
             cci.num_deltadelta_blocks,
             cci.num_rle_blocks,
             cci.num_nerle_blocks,
             cci.num_uncompressed_blocks,
             cci.num_total_blocks,
             cci.ordinal
      FROM sys.tables AS t,
           sys.columns_compression_info AS cci
      WHERE t.id = cci.table_id
        AND t.name = 'data_type_coverage') AS r
WHERE r.id = c.table_id
  AND r.ordinal = c.ordinal;
```

Output:

| table | name | is_rle_enabled | is_ne_enabled | raw_size | compressed_size | is_deltadelta_enabled | num_deltadelta_blocks | num_rle_blocks | num_nerle_blocks | num_uncompressed_blocks | num_total_blocks |
|---|---|---|---|---|---|---|---|---|---|---|---|
| data_type_coverage | col_double | true | true | 900000 | 905216 | false | 0 | 0 | 0 | 220 | 220 |
| data_type_coverage | col_int | true | true | 500000 | 507904 | true | 0 | 0 | 0 | 123 | 123 |
| data_type_coverage | col_tinyint | true | true | 200000 | 204800 | true | 0 | 0 | 0 | 49 | 49 |
| data_type_coverage | col_date | true | true | 500000 | 294912 | true | 71 | 0 | 0 | 0 | 71 |
| data_type_coverage | col_decimal | true | true | 1100000 | 1105920 | false | 0 | 0 | 0 | 269 | 269 |
| data_type_coverage | col_ip | true | true | 1700000 | 1712128 | false | 0 | 0 | 0 | 417 | 417 |
| data_type_coverage | col_point | true | true | 1700000 | 1712128 | false | 0 | 0 | 0 | 417 | 417 |
| data_type_coverage | col_timestamp | true | true | 900000 | 905216 | true | 0 | 0 | 0 | 220 | 220 |
| data_type_coverage | col_bigint | true | true | 900000 | 905216 | true | 0 | 0 | 0 | 220 | 220 |
| data_type_coverage | col_binary | true | true | 300000 | 307200 | false | 0 | 0 | 0 | 74 | 74 |
| data_type_coverage | col_boolean | true | true | 200000 | 106496 | true | 19 | 0 | 0 | 6 | 25 |
| data_type_coverage | col_ipv4 | true | true | 500000 | 507904 | true | 0 | 0 | 0 | 123 | 123 |
| data_type_coverage | col_time | true | true | 900000 | 905216 | true | 0 | 0 | 0 | 220 | 220 |
| data_type_coverage | col_float | true | true | 500000 | 507904 | false | 0 | 0 | 0 | 123 | 123 |
| data_type_coverage | col_uuid | true | true | 1700000 | 1712128 | false | 0 | 0 | 0 | 417 | 417 |
| data_type_coverage | col_smallint | true | true | 300000 | 307200 | true | 0 | 0 | 0 | 74 | 74 |

For information on the system catalog table queried in this example, see sys.columns_compression_info.

### Compression for the Entire Table

This query summarizes the total compression of the specified table. In the example, replace the table name and schema name accordingly.

```sql
SELECT sum(raw_size), sum(compressed_size)
FROM (
    SELECT t.name,
           c.name AS colname,
           co.raw_size,
           co.compressed_size,
           sum(co.compressed_size) OVER (PARTITION BY t.schema, t.name) AS table_size,
           100 - ((100 * co.compressed_size) / co.raw_size) AS col_compress_percent,
           decimal(float(co.compressed_size) * 100 / sum(co.compressed_size) OVER (PARTITION BY t.schema, t.name), 3, 1) AS percent_of_table
    FROM (
        SELECT table_id, ordinal, raw_size, compressed_size FROM sys.columns_compression_info
        UNION ALL
        SELECT table_id, ordinal, raw_size, compressed_size FROM sys.vl_columns_compression_info
    ) AS co
    JOIN sys.tables AS t ON co.table_id = t.id
    JOIN sys.columns AS c ON c.table_id = t.id AND co.ordinal = c.ordinal
    WHERE t.name = 'data_type_coverage'
      AND t.schema = 'loading'
    ORDER BY co.ordinal
) AS abc;
```

Output:

| sum(raw_size) | sum(compressed_size) |
|---|---|
| 73788914 | 73592306 |

For information on the system catalog table queried in this example, see sys.columns_compression_info.
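To turn those two totals into a single figure, you can compute a ratio in the same statement. A minimal sketch built on the same union of sys.columns_compression_info and sys.vl_columns_compression_info; replace the table and schema names as in the example above:

```sql
-- Sketch: overall compression ratio (raw bytes per compressed byte) for one table.
SELECT sum(raw_size) AS raw_bytes,
       sum(compressed_size) AS compressed_bytes,
       double(sum(raw_size)) / sum(compressed_size) AS compression_ratio
FROM (
    SELECT table_id, raw_size, compressed_size FROM sys.columns_compression_info
    UNION ALL
    SELECT table_id, raw_size, compressed_size FROM sys.vl_columns_compression_info
) AS co
JOIN sys.tables AS t ON co.table_id = t.id
WHERE t.name = 'data_type_coverage'
  AND t.schema = 'loading';
```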
### Index Size

Check the total size of each index across the different segment groups. In this example, the table has three secondary indexes added before loading data.

```sql
SELECT p.name AS uuid,
       p.part_type,
       i.name AS index_name,
       sum(p.size) AS sum_size
FROM (
    SELECT *
    FROM sys.segment_parts
    WHERE part_type LIKE 'index'
) AS p
INNER JOIN sys.indexes i ON p.name = i.id
GROUP BY 1, 2, 3
ORDER BY 1, 2;
```

Output:

| uuid | part_type | index_name | sum_size |
|---|---|---|---|
| 669cbcdf-772f-4c97-b250-0cc9e573cd7b | index | idx01 | 2527232 |
| cccf6f5e-96a8-4621-b947-f318d120f6a0 | index | idx03 | 180224 |
| d75ae71a-207e-4723-a86c-ecffc798c7d0 | index | idx02 | 3190784 |

For information on the system catalog tables queried in this example, see sys.indexes and sys.segment_parts.

### Column Sizes

This query finds the size of all columns for the specified table. Before running this query on your system, replace `table_name` in the example.

```sql
WITH cte AS (
    SELECT table_id, ordinal, compressed_size
    FROM sys.columns_compression_info
    UNION ALL
    SELECT table_id, ordinal, compressed_size
    FROM sys.vl_columns_compression_info
)
SELECT
    (SELECT t.name
     FROM sys.tables AS t
     WHERE t.id = cte.table_id) AS table_name,
    ordinal,
    compressed_size,
    (SELECT name
     FROM sys.columns AS c
     WHERE c.table_id = cte.table_id
       AND c.ordinal = cte.ordinal) AS name,
    double(compressed_size) / sum(compressed_size) OVER () AS percent_of_total
FROM cte
WHERE table_id IN (
    SELECT id
    FROM sys.tables
    WHERE name LIKE 'table_name'
)
ORDER BY compressed_size;
```

Output:

| table_name | ordinal | compressed_size | name | percent_of_total |
|---|---|---|---|---|
| data_type_coverage | 25 | 8192 | | 0.0001082748327179 |
| data_type_coverage | 21 | 8192 | | 0.0001082748327179 |
| data_type_coverage | 2 | 102400 | col_boolean | 0.0013534354089736837 |
| data_type_coverage | 18 | 204800 | col_tinyint | 0.0027068708179473675 |
| data_type_coverage | 19 | 266240 | | 0.003518932063331578 |
| data_type_coverage | 4 | 294912 | col_date | 0.0038978939778442096 |
| data_type_coverage | 12 | 307200 | col_smallint | 0.004060306226921052 |
| data_type_coverage | 1 | 307200 | col_binary | 0.004060306226921052 |
| data_type_coverage | 10 | 507904 | col_ipv4 | 0.0067130396285094715 |
| data_type_coverage | 7 | 507904 | col_float | 0.0067130396285094715 |
| data_type_coverage | 8 | 507904 | col_int | 0.0067130396285094715 |
| data_type_coverage | 3 | 699370 | col_char | 0.009243673066151615 |
| data_type_coverage | 23 | 783147 | col_varbinary | 0.010350965627260875 |
| data_type_coverage | 6 | 905216 | col_double | 0.011964369015327365 |
| data_type_coverage | 17 | 905216 | col_timestamp | 0.011964369015327365 |
| data_type_coverage | 16 | 905216 | col_time | 0.011964369015327365 |
| data_type_coverage | 0 | 905216 | col_bigint | 0.011964369015327365 |
| data_type_coverage | 5 | 1105920 | col_decimal | 0.014617102416915785 |
| data_type_coverage | 13 | 1712128 | col_point | 0.022629440038039995 |
| data_type_coverage | 22 | 1712128 | col_uuid | 0.022629440038039995 |
| data_type_coverage | 11 | 1712128 | col_ip | 0.022629440038039995 |
| data_type_coverage | 20 | 1784388 | | 0.02358451076706771 |
| data_type_coverage | 24 | 1788957 | col_varchar | 0.023644899891907562 |
| data_type_coverage | 14 | 8500000 | col_linestring | 0.11234571265894837 |
| data_type_coverage | 9 | 21477958 | col_int_array | 0.28387723505517193 |
| data_type_coverage | 15 | 27739482 | col_polygon | 0.3666366910682436 |

This output is based on a query that uses the `data_type_coverage` table.

### Column Cardinality

This query finds the cardinality of each column in the specified table. Knowing the cardinality is useful for determining which compression scheme to use for a column.

```sql
SELECT t.schema, t.name, c.name, c.ordinal, cc.cardinality
FROM sys.column_cardinalities AS cc
LEFT JOIN sys.tables AS t ON cc.table_id = t.id
LEFT JOIN sys.columns AS c ON cc.column_id = c.id
WHERE t.name LIKE 'data_type_coverage'
ORDER BY c.ordinal;
```

Output:

| schema | name | name_1 | ordinal | cardinality |
|---|---|---|---|---|
| loading | data_type_coverage | col_bigint | 0 | 98551 |
| loading | data_type_coverage | col_binary | 1 | 48048 |
| loading | data_type_coverage | col_boolean | 2 | 2 |
| loading | data_type_coverage | col_char | 3 | 95206 |
| loading | data_type_coverage | col_date | 4 | 29634 |
| loading | data_type_coverage | col_decimal | 5 | 20059 |
| loading | data_type_coverage | col_double | 6 | 53546 |
| loading | data_type_coverage | col_float | 7 | 75680 |
| loading | data_type_coverage | col_int | 8 | 99319 |
| loading | data_type_coverage | col_int_array | 9 | 5000855 |
| loading | data_type_coverage | col_int_array | 9 | 99899 |
| loading | data_type_coverage | col_ipv4 | 10 | 99908 |
| loading | data_type_coverage | col_ip | 11 | 99875 |
| loading | data_type_coverage | col_smallint | 12 | 51227 |
| loading | data_type_coverage | col_point | 13 | 100000 |
| loading | data_type_coverage | col_linestring | 14 | 100000 |
| loading | data_type_coverage | col_polygon | 15 | 99153 |
| loading | data_type_coverage | col_time | 16 | 99902 |
| loading | data_type_coverage | col_timestamp | 17 | 99829 |
| loading | data_type_coverage | col_tinyint | 18 | 256 |
| loading | data_type_coverage | col_uuid | 22 | 99892 |
| loading | data_type_coverage | col_varbinary | 23 | 68834 |
| loading | data_type_coverage | col_varchar | 24 | 93258 |
| loading | data_type_coverage | | | 10410 |
| loading | data_type_coverage | | | 2 |
| loading | data_type_coverage | | | 1 |
| loading | data_type_coverage | | | 93333 |
| loading | data_type_coverage | | | 1 |

For information on the system catalog table queried in this example, see sys.column_cardinalities.
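Because columns with very few distinct values are good candidates for run-length encoding, one practical follow-up is to shortlist low-cardinality columns. A minimal sketch using the same join as above; the 1,000 cutoff is an arbitrary assumption to adjust for your data:

```sql
-- Sketch: low-cardinality columns in a table (possible run-length encoding candidates).
SELECT t.schema, t.name, c.name, cc.cardinality
FROM sys.column_cardinalities AS cc
LEFT JOIN sys.tables AS t ON cc.table_id = t.id
LEFT JOIN sys.columns AS c ON cc.column_id = c.id
WHERE t.name LIKE 'data_type_coverage'
  AND cc.cardinality < 1000
ORDER BY cc.cardinality;
```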
## Cluster and Node System Catalog Tables

For specific column definitions of the system catalog tables referenced in these examples, see the System section of the System Catalog page.

### Nodes Present on a System

Check the status of all nodes in the system.

```sql
SELECT name, status FROM sys.nodes;
```

Output:

| name | status |
|---|---|
| foundation2 | accepted |
| sql | accepted |
| loader | accepted |
| foundation1 | accepted |
| foundation0 | accepted |

### Node Roles on a System

Check the roles of each node on the system.

```sql
SELECT
    n.name,
    n.status,
    string_agg(DISTINCT c.name, ', ' ORDER BY c.name) in_clusters,
    string_agg(DISTINCT sr.service_role_type, ', ' ORDER BY sr.service_role_type) with_roles
FROM sys.nodes n
LEFT JOIN sys.node_clusters nc ON nc.node_id = n.id
LEFT JOIN sys.clusters c ON c.id = nc.cluster_id
LEFT JOIN sys.service_roles sr ON sr.node_id = n.id
GROUP BY n.name, n.status
ORDER BY n.name;
```

Output:

| name | status | in_clusters | with_roles |
|---|---|---|---|
| foundation0 | accepted | foundation cluster, foundation cluster vm | health, lts, operatorvm |
| foundation1 | accepted | foundation cluster, foundation cluster vm | health, lts, operatorvm |
| foundation2 | accepted | foundation cluster, foundation cluster vm | health, lts, operatorvm |
| loader | accepted | | health, streamloader |
| sql | accepted | initial vm | admin, health, operatorvm, sql |

### Node Status

Check the operational status, software versions, and assigned roles of your nodes. Possible status values include `active`, `starting`, `stopping`, `error`, `unknown`, or `unreachable`.

```sql
SELECT name, operational_status, software_version, array_agg(service_role_type)
FROM sys.node_status AS ns
LEFT JOIN sys.nodes AS n ON ns.node_id = id
LEFT JOIN sys.service_roles AS sr ON sr.node_id = n.id
GROUP BY name, operational_status, software_version;
```

Output:

| name | operational_status | software_version | array_agg(service_role_type) |
|---|---|---|---|
| loader | active | 24.0.0 | ['health','streamloader'] |
| foundation2 | active | 24.0.0 | ['health','lts','operatorvm'] |
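For day-to-day monitoring you usually only want the nodes that are not healthy. A minimal sketch, assuming the lowercase `active` value shown in the output above:

```sql
-- Sketch: nodes reporting any operational status other than active.
SELECT name, operational_status, software_version
FROM sys.node_status AS ns
LEFT JOIN sys.nodes AS n ON ns.node_id = id
WHERE operational_status <> 'active';
```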
### Foundation Nodes on a Specific Cluster

Check for all foundation nodes on your system clusters.

```sql
SELECT n.name, r.name
FROM sys.nodes AS n,
     (SELECT *
      FROM sys.clusters AS c, sys.node_clusters AS nc
      WHERE c.cluster_type = 'foundation'
        AND c.id = nc.cluster_id) AS r
WHERE r.node_id = n.id;
```

Output:

| name | name_1 |
|---|---|
| foundation2 | foundation cluster |
| foundation0 | foundation cluster |
| foundation1 | foundation cluster |

### Drive Status

Check the status of all drives assigned to nodes in the system.

```sql
SELECT n.name AS node_name,
       s.node_id,
       s.id AS serial_number,
       s.pci_address,
       s.device_status,
       s.device_model
FROM sys.nodes AS n
JOIN sys.storage_device_status AS s ON n.id = s.node_id;
```

Output:

| node_name | node_id | serial_number | pci_address | device_status | device_model |
|---|---|---|---|---|---|
| foundation1 | 6a96b307-57bc-477f-952c-e6cab38a7922 | 72062649-396c-4424-9da5-c90cf0589e00 | /var/opt/ocient/72062649-396c-4424-9da5-c90cf0589e00.dat | | non-NVMe drive |
| foundation1 | 6a96b307-57bc-477f-952c-e6cab38a7922 | ee2f0013-c7b7-4750-833e-3d717af85001 | /var/opt/ocient/ee2f0013-c7b7-4750-833e-3d717af85001.dat | | non-NVMe drive |
| loader | 83f0ecf9-9a31-4a56-85b9-7eb610447185 | ff3186fb-502f-417c-bcab-8b7aa3de8800 | /var/opt/ocient/ff3186fb-502f-417c-bcab-8b7aa3de8800.dat | | non-NVMe drive |
| loader | 83f0ecf9-9a31-4a56-85b9-7eb610447185 | 11ff09e2-a6c1-4f76-ae96-3a97eaf02a01 | /var/opt/ocient/11ff09e2-a6c1-4f76-ae96-3a97eaf02a01.dat | | non-NVMe drive |
| foundation0 | 9330a0b3-b3b7-4503-949c-043b196c0cc4 | 6156962f-fcf6-4299-bbd7-2618fc6f1d00 | /var/opt/ocient/6156962f-fcf6-4299-bbd7-2618fc6f1d00.dat | | non-NVMe drive |
| foundation0 | 9330a0b3-b3b7-4503-949c-043b196c0cc4 | e6a4b24f-0d49-4704-b7ce-18af163c0701 | /var/opt/ocient/e6a4b24f-0d49-4704-b7ce-18af163c0701.dat | | non-NVMe drive |
| sql | 1198a86f-549d-4736-ac31-1ba54a2b02e7 | 37203a6c-9deb-47cf-8194-95c4eecd7300 | /var/opt/ocient/37203a6c-9deb-47cf-8194-95c4eecd7300.dat | | non-NVMe drive |
| sql | 1198a86f-549d-4736-ac31-1ba54a2b02e7 | 0921d62f-a31c-4681-9a56-82cb1450a401 | /var/opt/ocient/0921d62f-a31c-4681-9a56-82cb1450a401.dat | | non-NVMe drive |
| foundation2 | babadc9c-f2a3-4512-ab9a-bab3fd88a544 | 842881df-b71c-4cfc-847b-19517e8b2800 | /var/opt/ocient/842881df-b71c-4cfc-847b-19517e8b2800.dat | | non-NVMe drive |
| foundation2 | babadc9c-f2a3-4512-ab9a-bab3fd88a544 | fc0e490a-a295-491d-b6a6-8b3c153a7301 | /var/opt/ocient/fc0e490a-a295-491d-b6a6-8b3c153a7301.dat | | non-NVMe drive |
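To turn this into a health check, filter on device_status. In the example output healthy drives report an empty status string, so this sketch (verify that convention on your own hardware) returns only the drives that report something else:

```sql
-- Sketch: drives whose device status is not empty.
SELECT n.name AS node_name,
       s.id AS serial_number,
       s.pci_address,
       s.device_status
FROM sys.nodes AS n
JOIN sys.storage_device_status AS s ON n.id = s.node_id
WHERE s.device_status <> '';
```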
## Roles, Privileges, and Service Class System Catalog Tables

For specific column definitions of the system catalog tables referenced in these examples, see the User Management section of the System Catalog page.

### Group Assignment for Users

Check for all users on the system and their corresponding assigned group.

```sql
SELECT g.name AS group_name, r.user_name
FROM sys.groups AS g,
     (SELECT u.user_name, ug.group_id
      FROM sys.users AS u, sys.user_groups AS ug
      WHERE u.id = ug.user_id) AS r
ORDER BY g.name ASC;
```

Output:

| group_name | user_name |
|---|---|
| dba | humphrey |
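The reverse check, finding users that are not assigned to any group, can be written with an outer join over the same two tables. A minimal sketch using only the sys.users and sys.user_groups columns referenced above:

```sql
-- Sketch: users that do not belong to any group.
SELECT u.user_name
FROM sys.users AS u
LEFT JOIN sys.user_groups AS ug ON u.id = ug.user_id
WHERE ug.group_id IS NULL;
```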
### User Roles

Check for all users on the system and their assigned role.

```sql
SELECT rl.name AS role_name, r.user_name
FROM sys.roles AS rl,
     (SELECT u.user_name, ur.role_id
      FROM sys.users AS u, sys.user_roles AS ur
      WHERE u.id = ur.user_id) AS r
ORDER BY rl.name ASC;
```

Output:

| role_name | user_name |
|---|---|
| system administrator | humphrey |
| system analyst | grimey |
| database administrator | charles |
| database analyst | edna |

### User Privileges

Check the privileges for the specific table name. Before running on your system, replace `username` in the example.

```sql
SELECT t.schema AS db_name, t.name AS table_name, r.privilege
FROM sys.tables AS t,
     (SELECT object_type, object_id, privilege
      FROM sys.privileges
      WHERE grantee = 'username') AS r
WHERE r.object_id = t.id
ORDER BY t.schema, t.name ASC;
```

Output:

| db_name | table_name | privilege |
|---|---|---|
| loading | data_type_coverage | view |
| loading | data_type_coverage | select |

### Privileges for the Specific Table

Check the granted privileges across all database objects of the specific type, such as group, table, or database. In this example, the query checks for table privileges.

```sql
SELECT p.grantee, t.schema, t.name, p.privilege
FROM sys.tables AS t, sys.privileges AS p
WHERE object_type = 'table'
  AND p.object_id = t.id;
```

Output:

| grantee | schema | name | privilege |
|---|---|---|---|
| admin@system | admin@system | example_table | delete |
| admin@system | admin@system | example_table | drop |
| admin@system | admin@system | example_table | sysauth |
| admin@system | admin@system | example_table | load |
| admin@system | admin@system | example_table | insert |
| admin@system | admin@system | example_table | alter |
| admin@system | admin@system | example_table | select |
| admin@system | admin@system | example_table | view |

### Groups Assigned to Service Classes

Check for the service class assignment for all groups.

```sql
SELECT g.name, sc.name
FROM sys.groups AS g, sys.service_classes AS sc
WHERE g.service_class_id = sc.id;
```

Output:

| group | sc name |
|---|---|
| analyst | high priority |
| dbadmin | low priority |

### Service Class Settings

Check for the settings for the specified service class. Before running on your system, replace the `high priority` service class name in the example. For details about service class settings, see Workload Management and Service Classes.

```sql
SELECT *
FROM sys.groups AS g, sys.service_classes AS sc
WHERE g.service_class_id = sc.id
  AND sc.name = 'high priority';
```

Output:

| id | name | database_id | service_class_id | id_1 | database_id_1 | name_1 | max_temp_disk_usage | max_elapsed_time | max_concurrent_queries | max_rows_returned | scheduling_priority | cache_max_bytes | cache_max_time | max_elapsed_time_for_caching | max_columns_in_result_set | priority_adjustment_factor | priority_adjustment_time | min_priority | max_priority | statement_text | statement_text_matcher_type | half_parallelism | load_balance_shuffle | parallelism | memory_optimal_strategy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| e9c92a1b-32a8-4f88-ab72-485ff8b24f53 | analyst | e80010d7-6f26-438c-8461-11af309ed8a3 | 83b76f87-6634-4cc9-8252-9300caeefdf1 | 83b76f87-6634-4cc9-8252-9300caeefdf1 | e80010d7-6f26-438c-8461-11af309ed8a3 | high priority | 80 | 100 | 10 | 100 | 5 | 1,000 | 25 | 50 | 1 | 0 | 0 | 0 | 1 | | | | | | false |

### Service Classes for the Specific User

Check for the service class of the specific user. Before running on your system, replace the `jmack@test` username in this example with the specific username that you want to query.

```sql
SELECT r2.name, sc.name, r2.user_name
FROM sys.service_classes AS sc,
     (SELECT g.name, g.service_class_id, r.user_name
      FROM sys.groups AS g,
           (SELECT u.user_name, ug.group_id
            FROM sys.users AS u, sys.user_groups AS ug
            WHERE u.id = ug.user_id
              AND u.user_name = 'jmack@test') AS r
     ) AS r2;
```

Output:

| group | service class | user name |
|---|---|---|
| analyst | default | jmack@test |
| analyst | high priority | jmack@test |

## Data Pipelines

### Pipeline Events

While your pipeline is running, the pipeline generates events in the sys.pipeline_events system catalog table to mark significant checkpoints when something has occurred. In this example, you run the CREATE PIPELINE and START PIPELINE SQL statements on the pipeline named my_pipeline and let the pipeline complete. As expected, the created, started, and completed events appear. For details about the lists of files, see the sys.pipeline_files system catalog table.

Pipeline events include messages from many different tasks. These tasks are the background processes that execute across different loader nodes during pipeline operation. For details about tasks, you can query the sys.tasks and sys.subtasks system catalog tables.

```sql
SELECT * FROM sys.pipeline_events ORDER BY event_timestamp;
```

Output:

| pipeline_id | task_id | user_id | event_type | event_message | event_timestamp |
|---|---|---|---|---|---|
| d8848467-9262-4bd6-84a7-1380c95f8b8b | | dde90d1b-bfcf-4b48-a251-b9ea812a7b26 | created | Created pipeline my_pipeline | 2024-02-07 18:04:32.822690 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | | dde90d1b-bfcf-4b48-a251-b9ea812a7b26 | started | Started processing pipeline my_pipeline | 2024-02-07 18:39:52.682276 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | | | file_listing_started | File listing started for pipeline my_pipeline. This may take a while | 2024-02-07 18:39:52.848000 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | | | file_listing_completed | File listing completed for pipeline my_pipeline. 1 files were listed | 2024-02-07 18:39:52.863000 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | 61007c85-6744-42e7-9401-66e1a594e990 | | extraction_started | Extraction started | 2024-02-07 18:39:58.367000 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | 61007c85-6744-42e7-9401-66e1a594e990 | | extraction_completed | Extraction completed | 2024-02-07 18:39:59.417000 |
| d8848467-9262-4bd6-84a7-1380c95f8b8b | 49c17f79-0108-4b05-aae5-9ae9a13abaca | | completed | Completed processing pipeline my_pipeline | 2024-02-07 18:40:03.808543 |

Fetched 7 rows.
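When several pipelines run concurrently, you usually want to scope the event history to one of them. A minimal sketch that filters on pipeline_id, here using the UUID from the output above; substitute your own pipeline's identifier:

```sql
-- Sketch: the event history of a single pipeline, oldest first.
SELECT event_type, event_message, event_timestamp
FROM sys.pipeline_events
WHERE pipeline_id = 'd8848467-9262-4bd6-84a7-1380c95f8b8b'
ORDER BY event_timestamp;
```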
### Pipeline Metrics

Find metrics that measure the performance and activity of a pipeline in the sys.pipeline_metrics system catalog table. This table contains one row for each metric at a specified point in time. A common use of this data is to plot metric values over time to examine the behavior of a pipeline.

#### Metric Types

Metrics are either instantaneous or incremental. Instantaneous metrics reflect the current value of a counter at the time of the metric collection. These values can increase or decrease based on the current state or rate of the metric. Incremental metrics increment over time and reflect the cumulative value for a specified counter.

#### Example

The first snapshot of a metric appears when it is initialized. Afterward, the metric is updated every 10 seconds. The updated_at column shows the metric collection time. For example, if you are interested in the number of transformed record bytes, you can run this query. The value increases over time as the pipeline runs.

```sql
SELECT *
FROM sys.pipeline_metrics
WHERE name = 'count_record_bytes_transformed'
ORDER BY updated_at;
```

Output:

| pipeline_id | extractor_task_id | partition_id | sink_index | name | value | updated_at |
|---|---|---|---|---|---|---|
| 0f51f1c0-f251-4407-8393-0f9bcbf18f28 | 034cf49d-364b-42b5-ad0a-37b7c237f16e | | | count_record_bytes_transformed | 7774214710 | 2024-02-07 22:35:27.546000 |
| 0f51f1c0-f251-4407-8393-0f9bcbf18f28 | 034cf49d-364b-42b5-ad0a-37b7c237f16e | | | count_record_bytes_transformed | 16529202619 | 2024-02-07 22:35:37.546000 |
| 0f51f1c0-f251-4407-8393-0f9bcbf18f28 | 034cf49d-364b-42b5-ad0a-37b7c237f16e | | | count_record_bytes_transformed | 25077088214 | 2024-02-07 22:35:47.546000 |
| 0f51f1c0-f251-4407-8393-0f9bcbf18f28 | 034cf49d-364b-42b5-ad0a-37b7c237f16e | | | count_record_bytes_transformed | 33425410618 | 2024-02-07 22:35:57.546000 |
| 0f51f1c0-f251-4407-8393-0f9bcbf18f28 | 034cf49d-364b-42b5-ad0a-37b7c237f16e | | | count_record_bytes_transformed | 41818859671 | 2024-02-07 22:36:07.546000 |

Fetched 5 rows.
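Because count_record_bytes_transformed is incremental, the difference between consecutive snapshots gives the bytes transformed in each roughly 10-second interval. A minimal sketch, assuming the lag window function is available on your system:

```sql
-- Sketch: per-interval delta of an incremental metric, one row per snapshot.
SELECT pipeline_id,
       updated_at,
       value - lag(value) OVER (PARTITION BY pipeline_id, extractor_task_id, partition_id
                                ORDER BY updated_at) AS bytes_in_interval
FROM sys.pipeline_metrics
WHERE name = 'count_record_bytes_transformed'
ORDER BY updated_at;
```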
#### Metric Scope

Each metric has a scope that defines its uniqueness. Be careful not to aggregate metrics across different scopes. The scope columns are pipeline_id, extractor_task_id, partition_id, and sink_index. If the value of any of these columns is NULL, the scope applies to all values in that dimension. For example, because the partition_id and sink_index columns are NULL for the metric count_record_bytes_transformed, its scope applies to the pipeline indicated by the pipeline_id and extractor_task_id across all sinks and partitions.

An extractor_task_id is the internal identifier used when the Ocient system executes a pipeline process on a loader node. A pipeline might have many of these processes across time and loaders.

#### Example

Retrieve the most recent snapshot for each metric and aggregate the values to produce a single value for a pipeline.

```sql
SELECT pipeline_id, name, sum(most_recent_value) AS total_value
FROM (
    SELECT DISTINCT *
    FROM (
        SELECT pipeline_id,
               extractor_task_id,
               partition_id,
               name,
               first_value(value) OVER (PARTITION BY pipeline_id, extractor_task_id, partition_id, name
                                        ORDER BY updated_at DESC) AS most_recent_value
        FROM sys.pipeline_metrics
        WHERE name LIKE 'count%'
    )
)
GROUP BY pipeline_id, name;
```

This output contains a single row for each metric for each pipeline. If you run a pipeline with 64 files and 100,000 records, the output might look like this.

Output:

| pipeline_id | name | total_value |
|---|---|---|
| 0c7b32f1-37ca-43f7-96b6-69c266c978a5 | count_file_total | 64 |
| 0c7b32f1-37ca-43f7-96b6-69c266c978a5 | count_record_bytes_transformed | 33644961 |
| 0c7b32f1-37ca-43f7-96b6-69c266c978a5 | count_file_processed | 64 |
| 0c7b32f1-37ca-43f7-96b6-69c266c978a5 | count_record_sent | 100000 |
| 0c7b32f1-37ca-43f7-96b6-69c266c978a5 | count_record_durable | 100000 |

Fetched 5 rows.

## Related Links

- System Catalog
- Data Integrity and Storage
- Query Analysis
- Database, Tables, Views, and Indexes
- Cluster and Node Management
- Users, Groups, and Service Classes