Data Pipeline Load of CSV Data from S3
A common batch-loading setup is to load files from an S3 bucket that holds time-partitioned data. Often, you must perform the batch load repeatedly to pick up new files.
Ocient uses data pipelines to transform each document into rows in one or more tables. The loading and transformation capabilities use a simple SQL-like syntax. This tutorial guides you through a simple example load using a small data set in CSV format. The data in this example comes from a sample data set for the Metabase Business Intelligence tool.
This tutorial assumes that:
- The Ocient System has network access to S3 from the Loader Nodes.
- An Ocient System is installed and configured with an active Storage Cluster (See the Ocient Application Configuration guide).
To begin, load two example tables in a database. First, connect to a SQL Node using the Ocient JDBC CLI program (see Commands Supported by the Ocient JDBC CLI Program). Then, execute the CREATE DATABASE SQL statement for a database named metabase.
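A minimal sketch of these first two steps from the CLI prompt. The connect form mirrors the example later in this tutorial; the host name and the initial database in the connection string are placeholders, and your system may also require credentials:

```sql
-- Connect to a SQL Node (host name and initial database are placeholders)
connect to jdbc:ocient://sql-node:4050/system

-- Create the database that holds the tutorial tables
CREATE DATABASE metabase;
```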
Create the orders table in the new database. First, connect to that database (e.g., connect to jdbc:ocient://sql-node:4050/metabase). Then, execute a CREATE TABLE SQL statement that creates a table with these columns and a clustering index based on the user_id and product_id columns (see the sketch after this list):
- created_at as a timestamp that is not nullable.
- id, user_id, and product_id as integers that are not nullable.
- subtotal, tax, total, and discount as floating point numbers.
- quantity as an integer.
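A sketch of that CREATE TABLE statement, assuming TIMESTAMP, INT, and DOUBLE as the concrete types and an arbitrary index name. Your Ocient version might require additional clauses (for example, a time key), so check the CREATE TABLE reference for the exact DDL:

```sql
CREATE TABLE public.orders (
    created_at TIMESTAMP NOT NULL,   -- not-nullable timestamp
    id         INT NOT NULL,
    user_id    INT NOT NULL,
    product_id INT NOT NULL,
    subtotal   DOUBLE,               -- floating-point amounts
    tax        DOUBLE,
    total      DOUBLE,
    discount   DOUBLE,
    quantity   INT,
    CLUSTERING INDEX idx_orders (user_id, product_id)  -- clustering index described above
);
```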
The database creates the orders table, and you can begin loading data.
Create data pipelines using the CREATE PIPELINE SQL statement. To load data, you first create a pipeline with the definition of the source, data format, and transformation rules using a SQL-like declarative syntax. Then, you execute the START PIPELINE command to start the load. You can observe progress and status using system tables and views.
Each Ocient pipeline defines a single data source and the target table or tables into which data loads. A data source includes the location of the source and filters on the source to define the specific data set to load. This example loads data from two data sources, where each source is located in a directory on the same S3 bucket.
First, inspect the data that you plan to load. Each document has a format similar to this example CSV file named orders.csv.
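The file itself is not reproduced here. An illustrative sketch of its shape, using the column names from the table definition (the actual field order and values in the ocient-examples bucket can differ):

```
id,user_id,product_id,subtotal,tax,total,discount,quantity,created_at
...
```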
In this case, Ocient automatically transforms the data to the target columns using some sensible conventions. In other cases, loads require some transformation. Most transformations are identical to functions that already exist in the SQL dialect of the Ocient System.
Create a pipeline named orders_pipeline for the orders data set from your database connection prompt. Use the S3 data source with endpoint https://s3.us-east-1.amazonaws.com, bucket ocient-examples, and filter metabase_samples/csv/orders.csv. Specify the CSV format with one header line. Load the data into the public.orders table. The SELECT part of the SQL statement maps the fields in the CSV file to the target columns in the created table.
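A sketch of what that CREATE PIPELINE statement can look like. The clause keywords below (SOURCE S3, ENDPOINT, BUCKET, FILTER, EXTRACT, FORMAT CSV, NUM_HEADER_LINES, INTO ... SELECT) follow the description in this tutorial rather than a verified syntax reference, and the field ordinals assume the illustrative header shown earlier; adjust both to match your release and the real column order of orders.csv:

```sql
CREATE PIPELINE orders_pipeline
SOURCE S3
    ENDPOINT 'https://s3.us-east-1.amazonaws.com'
    BUCKET 'ocient-examples'
    FILTER 'metabase_samples/csv/orders.csv'  -- a single CSV file; wildcards are also possible
EXTRACT
    FORMAT CSV
    NUM_HEADER_LINES 1                        -- skip the header row
INTO public.orders
SELECT
    $9 AS created_at,   -- field indexes start at $1, not $0
    $1 AS id,
    $2 AS user_id,
    $3 AS product_id,
    $4 AS subtotal,
    $5 AS tax,
    $6 AS total,
    $7 AS discount,
    $8 AS quantity;
```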
The pipeline has three main sections:
- SOURCE — In this case, load data from S3. Specify the endpoint and bucket. The FILTER parameter identifies the file or files to load. The load uses a single CSV file. Options exist to add wildcards or regular expressions to isolate different file sets.
- EXTRACT — Set the format to CSV files and note that there is one header line in the file. This specification skips that row when the Ocient System processes the file. Many other options exist for delimited data such as a record delimiter and field delimiter.
- INTO ... SELECT — Choose the target table public.orders and select the fields from the CSV file. A numeric index identifies each field in the file. Importantly, similar to other SQL syntax, the first field in the file is $1, not $0. Each field maps to a target column using the AS syntax.
After you successfully create the orders_pipeline pipeline, execute the START PIPELINE command.
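For example, referring to the pipeline by name, in the same way that the DROP PIPELINE example later in this tutorial does:

```sql
START PIPELINE orders_pipeline;
```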
With your pipeline running, data immediately begins to load from the S3 files that you defined. If there are many files in each file group, the load process first sorts the files into batches, partitions them for parallel processing, and assigns them to Loader Nodes.
You can check the pipeline status and progress by querying the information_schema.pipeline_status system view or by executing the SHOW PIPELINE_STATUS command.
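For example, either of the following; the columns available in the system view can vary by release, so a SELECT * is a reasonable first look:

```sql
-- Query the system view for pipeline status and progress
SELECT * FROM information_schema.pipeline_status;

-- Or use the equivalent command
SHOW PIPELINE_STATUS;
```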
After the status of the pipeline changes to COMPLETED, all data is available in the target table.
After a few seconds, the data is available for query in the public.orders table.
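A quick sanity check on the loaded table, for example:

```sql
-- Confirm that rows arrived
SELECT COUNT(*) FROM public.orders;

-- Inspect a handful of rows
SELECT * FROM public.orders LIMIT 5;
```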
You can drop the pipeline with the DROP PIPELINE orders_pipeline; SQL statement. Execution of this statement leaves the data in your target table, but removes metadata about the pipeline execution from the system.