# Monitor Data Pipelines
You can monitor data pipelines through a combination of system catalog tables, information schema views, and operational metrics endpoints for use with operations monitoring tools. This tutorial describes how to use these system catalog objects to monitor your pipelines:

- information_schema.pipelines
- information_schema.pipeline_status
- sys.pipelines
- sys.pipeline_events
- sys.pipeline_errors
- sys.pipeline_files
- sys.pipeline_partitions
- sys.pipeline_metrics
- sys.pipeline_metrics_info

A complete reference of system catalog tables and views is available in the docid 2zcc9xuscejvt5v ihgy6.

## Pipelines Information Schema

After you create a pipeline, two views are convenient for observing it at a glance: information_schema.pipelines shows the configuration of your pipeline, and information_schema.pipeline_status shows its status information. Access these views with the SHOW command:

```sql
> SHOW PIPELINES;
+---------------+---------------+--------------+-------------+-------------+----------------------------+------------+-------------------+
| database_name | pipeline_name | loading_mode | source_type | data_format | created_at                 | created_by | table_names       |
+---------------+---------------+--------------+-------------+-------------+----------------------------+------------+-------------------+
| test          | my_pipeline   | BATCH        | FILESYSTEM  | CSV         | 2024-02-07 18:01:22.910745 | mac        | ['test.my_table'] |
+---------------+---------------+--------------+-------------+-------------+----------------------------+------------+-------------------+
Fetched 1 rows
```

```sql
> SHOW PIPELINE STATUS;
+---------------+---------------+----------------------------+-----------+---------------------------------------------+------------------+------------------+-----------------+--------------+-----------------+-------------+-------------------+----------------+----------------+
| database_name | pipeline_name | table_names                | status    | status_message                              | percent_complete | duration_seconds | files_processed | files_failed | files_remaining | files_total | records_processed | records_loaded | records_failed |
+---------------+---------------+----------------------------+-----------+---------------------------------------------+------------------+------------------+-----------------+--------------+-----------------+-------------+-------------------+----------------+----------------+
| test_database | test_pipeline | ['test_schema.test_table'] | COMPLETED | Completed processing pipeline test_pipeline | 1                | 10.5066          | 1               | 0            | 0               | 1           | 4                 | 4              | 0              |
+---------------+---------------+----------------------------+-----------+---------------------------------------------+------------------+------------------+-----------------+--------------+-----------------+-------------+-------------------+----------------+----------------+
Fetched 1 rows
```

For a quick look at the progress of a pipeline, the SHOW PIPELINE STATUS command provides:

- The pipeline name
- The target tables
- The current status
- The last status message that was received in processing
- The percentage completion (from 0.0 to 1.0)
- The duration of the pipeline execution
- The number of files and records processed
- The error counts

To join these two views to other catalog tables, use the pipeline name to join to the sys.pipelines table, and then join other tables by using the pipeline ID, as in the following sketch.
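For example, a query along these lines follows a pipeline from the status view into its event log. This is a minimal sketch rather than the authoritative schema: the join columns (p.name, p.id, ev.pipeline_id) and the ev.message column are assumed names for illustration, so check the system catalog reference for the exact definitions.

```sql
-- Minimal sketch: join the status view to the catalog tables for one pipeline.
-- The join columns (p.name, p.id, ev.pipeline_id) and the message column are
-- assumed names for illustration; see the system catalog reference.
SELECT st.pipeline_name,
       st.status,
       ev.message
FROM information_schema.pipeline_status AS st
JOIN sys.pipelines AS p
  ON p.name = st.pipeline_name   -- join on the pipeline name
JOIN sys.pipeline_events AS ev
  ON ev.pipeline_id = p.id       -- then join on the pipeline ID
WHERE st.pipeline_name = 'test_pipeline';
```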
## Pipeline Catalog Tables

Beyond the information schema views, several catalog tables offer details about your pipelines.

| Table | Description | Importance |
| --- | --- | --- |
| sys.pipelines | List of the pipelines that exist, with their configuration information | High |
| sys.pipeline_events | Event log generated through the lifecycle of a pipeline | High |
| sys.pipeline_errors | Log of errors created by pipelines during execution, for troubleshooting record-level errors or system errors | High |
| sys.pipeline_files | List of the files that each file-based pipeline is loading or has loaded | High |
| sys.pipeline_partitions | List of the {{kafka}} source partitions on each Kafka pipeline, with relevant metadata for offsets | Medium |
| sys.pipeline_metrics | Metrics that are regularly collected for each pipeline to capture performance and progress | Medium |
| sys.pipeline_metrics_info | Descriptive details about the available pipeline metrics; provides information about how to interpret each metric type | Low |

## Monitor Activity and Events

While your pipeline is running, it generates events in the sys.pipeline_events system catalog table to mark significant checkpoints. You can use this table to observe progress and look for detailed messages about changes in the pipeline. For example, the sys.pipeline_events table captures events when a pipeline is created, started, stopped, failed, or completed. The table also captures relevant system events, such as rebalancing Kafka partitions or retrying a process after a transient failure.

Pipeline events include messages from many different tasks. These tasks are the background processes that execute across different Loader Nodes during pipeline operation.

## Monitor Files or Partitions

For file-based loads, the sys.pipeline_files system catalog table contains one row for each file, with the status of the file and other file metadata such as the filename, the creation and modification timestamps, and the file size. The pipeline updates this list of files when it starts and maintains the status information as it processes files. You can use this list to understand which files are loading, which succeeded, and which failed.

For Kafka partition-based loads, the sys.pipeline_partitions system catalog table contains one row for each partition, with the current offsets and record counts. You can use this information to understand how partitions are distributed across Loader Nodes and to observe the lag on each topic and partition.

## Monitor Performance

Pipeline performance metrics are captured in the sys.pipeline_metrics system catalog table. This table contains samples of the metrics over time; the {{ocient}} system collects the samples regularly. You can query the samples with standard SQL to inspect the behavior of a single pipeline, a single task on a Loader Node, or all pipelines at once. The sys.pipeline_metrics_info system catalog table provides metadata that explains the individual metric types.

See docid szkiqnuxled51xsbjf3w for a detailed description of how to use sys.pipeline_metrics and sys.pipeline_metrics_info to monitor the performance of your pipelines. For the system catalog table definitions, see https://docs.ocient.com/system-catalog.

## Metrics Endpoints

In the same way that the Ocient system monitors system performance by using docid jtynlpc rgdksxytttfyh endpoints with operational monitoring tools, data pipelines expose an API that allows operators to capture performance metrics over time.

### Access Metrics Endpoints

You query the metrics by using an HTTP endpoint on each Loader Node, located at `<loader-ip-address-or-hostname>:8090/metrics/v1`. For example, you can retrieve metrics from a Loader Node by executing this command from a shell on the node, or by running a similar request from a monitoring agent on the node:

```shell
curl http://localhost:8090/metrics/v1
```

Each metric is represented as a JSON object that contains:

- The metric name
- The metric value
- Scope information
- The query time
- The value units
- Whether or not the metric is incremental

Metrics can be incremental or instantaneous. Incremental metrics accumulate value over time, so they never decrease in value. Instantaneous metrics capture the current value, so they can decrease.
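For a quick check from a shell, you can filter the response before loading it into a monitoring tool. The following is a minimal sketch, assuming the jq utility is available on the node and that the endpoint returns a JSON array of metric objects (both are assumptions, not confirmed by this page):

```shell
# Minimal sketch, assuming jq is installed and the endpoint returns a JSON
# array of metric objects: keep only the incremental metrics and print
# their name, value, and units.
curl -s http://localhost:8090/metrics/v1 \
  | jq '.[] | select(.incremental == true) | {name, value, units}'
```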
### Metrics Scope

Similar to the sys.pipeline_metrics system catalog table, the Ocient system scopes metrics to a Loader Node, a pipeline task, or a partition. You can determine the scope of a metric from the presence of the pipeline_name, pipeline_name_external, and partition keys, or reference the details in [Pipeline Metrics Details](#pipeline-metrics-details).

| Scope | Description |
| --- | --- |
| Loader | Metrics scoped to a Loader do not have information about pipelines or partitions. Because metrics are collected on each node, this scope contains all activity on the node where you access the endpoint. |
| Pipeline | Metrics scoped to a pipeline have a key named pipeline_name, with a value that corresponds to the identifier of the extractor task with hyphens removed. The metrics also have a key named pipeline_name_external, with a value that corresponds to the identifier of the pipeline. |
| Partition | Metrics scoped to a partition have the same keys as metrics scoped to a pipeline. The metrics also have a key named partition, with a value that is the stream source identifier of the partition. |

### Examples

This JSON structure contains a metric scoped to a Loader Node:

```json
{
  "name": "usage_os_heap",
  "time": 1713993689613976,
  "timestamp": "2024-04-24T21:21:29.613976119Z",
  "value": 317855288,
  "units": "bytes",
  "incremental": false
}
```

This JSON structure contains a metric scoped to a pipeline:

```json
{
  "name": "count_record_sent",
  "time": 1713994532578624,
  "timestamp": "2024-04-24T21:35:32.578624253Z",
  "value": 100000,
  "units": "unitless",
  "incremental": true,
  "pipeline_name_external": "test_rest_external_create",
  "pipeline_name": "test"
}
```

This JSON structure contains a metric scoped to a partition:

```json
{
  "name": "duration_record_processed",
  "time": 1714509826724477,
  "timestamp": "2024-04-30T20:43:46.724477142Z",
  "value": 322432,
  "units": "milliseconds",
  "incremental": true,
  "pipeline_name_external": "test_rest_external_create",
  "pipeline_name": "58ff07cf732846f9830b728b3a8e4a7a",
  "partition": "226979",
  "sink_index": "0"
}
```
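Because scope is indicated by key presence, you can pick out one scope from the endpoint output the same way. This sketch (again assuming jq is available and the response is a JSON array) keeps only the partition-scoped metrics for a single pipeline:

```shell
# Sketch: select partition-scoped metrics for one pipeline. Partition-scoped
# metrics carry a "partition" key in addition to the pipeline keys.
curl -s http://localhost:8090/metrics/v1 \
  | jq '.[]
        | select(has("partition"))
        | select(.pipeline_name_external == "test_rest_external_create")'
```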
### Pipeline Metrics Details

| Name | Scope | Description | Importance |
| --- | --- | --- | --- |
| count_error | Pipeline | The count of failures when processing records during pipeline execution for a pipeline | High |
| count_file_fail | Pipeline | The number of files that have failures | High |
| count_file_processed | Pipeline | The number of files that have been extracted from the source for processing. The Ocient system tracks this number before tokenization, transformation, or sending to the sink. This number does not indicate the durability of the files or whether files load with errors. | Medium |
| count_file_total | Pipeline | The total number of files in the static file list | High |
| count_partition_durable_offset_committed | Partition | The last committed offset of the Kafka partition for the consumer group | High |
| count_partition_offset_lag | Partition | count_partition_offset_max minus count_partition_durable_offset_committed for a specified Kafka partition | Medium |
| count_partition_offset_max | Partition | The maximum offset of the Kafka partition | High |
| count_partition_records_failed | Partition | The count of failures when processing records during pipeline execution for a specified Kafka partition | High |
| count_partition_records_processed | Partition | The number of records that have been extracted for processing on a specified Kafka partition | Medium |
| count_record_bytes_maximum | Partition | The size of the largest record processed, in bytes | Medium |
| count_record_bytes_minimum | Partition | The size of the smallest record processed, in bytes | Medium |
| count_record_bytes_transformed | Pipeline | The number of bytes that have been transformed, excluding delimiters | Medium |
| count_record_sent | Pipeline | The number of records that have been fully transformed and sent to the sink | High |
| count_sink_bytes_queuedepth | Pipeline | The number of transformed records queued to send to the sink | Medium |
| count_stream_paused | Partition | The number of times that the processing of the record stream has been paused due to reaching a high watermark | Medium |
| count_stream_unpaused | Partition | The number of times that the processing of the record stream has been unpaused due to reaching a low watermark | Medium |
| duration_file_listed | Pipeline | The number of milliseconds spent listing files from the data source | High |
| duration_record_durable | Partition | The duration in milliseconds for making records durable. ℹ️ The database takes time to make records durable, so this duration might be longer than the record processing duration. | Medium |
| duration_record_processed | Partition | The duration in milliseconds for reading the records from the source, transforming them, and buffering them for sending to the sink | Medium |
| uptime | Loader Node | The number of seconds since the last restart of the extraction process | High |
| usage_os_heap | Loader Node | The number of bytes used in the heap | High |
| usage_os_percentage_cpu | Loader Node | The percentage of CPU used by the {{jvm}} | High |

## Related Links

- https://docs.ocient.com/system-catalog