
Load JSON Data from S3

A common setup for batch loading files into Ocient is to load from a bucket on S3 with time-partitioned data. In many instances, a batch load is performed on a recurring basis to load new files. The Loading and Transformation (LAT) service transforms each document into rows in one or more tables. Ocient’s Loading and Transformation capabilities use a simple SQL-like syntax for transforming data. This tutorial guides you through a simple example load using a small set of data in JSONL (newline-delimited JSON) format. The data in this example comes from a test set for the Metabase Business Intelligence tool.

Prerequisites

This tutorial assumes that:

  1. The Ocient System has network access to S3 from the Loader Nodes.
  2. An Ocient System is installed and configured with an active storage cluster (See the Ocient Application Configuration guide).
  3. Loading and Transformation is installed on the Loader Nodes.
  4. A default "sink" for the Ocient Loader Nodes is configured on the system.

Step 1: Create a New Database

To begin, you are going to load two example tables in a database. First, connect to a SQL Node using the Ocient JDBC CLI program (see Commands Supported by the Ocient JDBC CLI Program). Then run the following DDL command:

SQL
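A minimal form of this statement, assuming the database name metabase that appears in the connection string in Step 2, is:

  CREATE DATABASE metabase;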


Step 2: Create Tables

To create tables in the new database, first connect to that database (e.g., connect to jdbc:ocient://sql-node:4050/metabase), then run the following DDL commands:

SQL
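As a sketch only, an orders table based on the Metabase sample schema might look like the following. The column list, the TIME KEY BUCKET clause, and the CLUSTERING INDEX are assumptions about one reasonable layout, not the tutorial's exact DDL:

  CREATE TABLE orders (
    -- timekey column: segments are organized by this timestamp
    created_at TIMESTAMP TIME KEY BUCKET(1, DAY) NOT NULL,
    id INT NOT NULL,
    user_id INT,
    product_id INT,
    subtotal DOUBLE,
    tax DOUBLE,
    total DOUBLE,
    discount DOUBLE,
    quantity INT,
    -- optional clustering key to speed up per-user queries
    CLUSTERING INDEX idx_user_id (user_id)
  );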


Now, the database tables are created and you can begin loading data.

Step 3: Create a Data Pipeline

Data pipelines are created using a simple loading configuration that is submitted to the Transformation Nodes to start loading. File Groups designate a batch of files to load. Each File Group is routed to one or more Ocient tables, and each column is the result of a transformation applied to the source document.

First, let’s inspect the data you plan to load. Each document has a format similar to the following example:

JSON
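A representative record, assuming the field names from the Metabase sample orders data set (actual values will differ):

  {"id": 1, "user_id": 1, "product_id": 14, "subtotal": 37.65, "tax": 2.07, "total": 39.72, "discount": null, "created_at": "2019-02-11T21:40:27.892Z", "quantity": 2}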


As you can see, this is similar to the target schema, but it requires some transformation. Most transformations are identical to functions already in Ocient’s SQL dialect. To route data to the tables, you need to create a pipeline.json file that has the following structure:

JSON
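The exact pipeline schema is defined in the LAT documentation, so treat the following only as an illustration of the elements discussed below: an S3 endpoint, a bucket, a prefix, a file matcher, a sort type, and per-column transforms. Every key name here is illustrative rather than authoritative; the bucket and prefix values come from the example file used later in this tutorial:

  {
    "source": {
      "type": "s3",
      "endpoint": "https://s3.us-east-1.amazonaws.com",
      "bucket": "ocient-examples",
      "file_groups": {
        "orders": {
          "prefix": "metabase_samples/jsonl/",
          "file_matcher": "**orders*.jsonl",
          "sort": "lexicographic"
        }
      }
    },
    "transform": {
      "tables": {
        "metabase.public.orders": {
          "columns": {
            "id": "id",
            "user_id": "user_id",
            "created_at": "to_timestamp(created_at)"
          }
        }
      }
    }
  }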


The most interesting part of this pipeline.json file is the way it defines the file groups. Note that each file group sets the S3 endpoint, a bucket, a prefix used to filter the candidate files, and a file matcher. In this case you only have a single file, but if there were many files matching the pattern **orders*.jsonl, they would all be part of the file group.

The final parameter that you supplied is the sort type for the file load. This informs the LAT how you would like data to be ordered when loading. The ideal sort is in time order according to the defined TIMEKEY column, which makes more efficient segments and is much faster to load. In this case, you used the lexicographic sort, which orders files according to the characters in the file name. Other sort types are available to use the file modified time or to extract the timestamp for sorting from the file path or file name.

Step 4: Using the Loading and Transformation CLI

With a pipeline.json file ready to go, you can test this pipeline using the LAT CLI. For these examples, assume that two LATs are configured and that their hosts are set using an environment variable.

First, configure the LAT CLI to use the hosts of your Loading and Transformation service. You can add these to every CLI command as a flag, but for simplicity you can also set them as environment variables. From a command line, run the following command, replacing the IP addresses with those of your LAT processes:

Shell
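The variable name below (LAT_HOSTS), the addresses, and the port are placeholders; check the LAT CLI documentation for the exact variable your version expects:

  # Point the CLI at both LAT processes (replace with your own addresses and port)
  export LAT_HOSTS="https://10.0.0.1:8443,https://10.0.0.2:8443"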


Next, check on the status of the LAT:

Shell
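Assuming the client is invoked as lat (substitute the command name your installation uses), the status check looks like:

  lat pipeline status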


Example response:

Text


This confirms that you can reach the LAT from the CLI. If the status is "Running", the LAT is already executing a pipeline. Next, you are going to update and start the new pipeline.

This example uses secure connections. If you receive an SSL error when testing, your service might not be configured to use TLS, or you might need to use the --no-verify flag if certificate validation fails.

Step 5: Test the Transformation

The CLI supports previewing a transformation with an example document and the pipeline file. This makes it easy to test your transformations.

First, save an example document to your file system to use for this test. For this demo, you can download an example file from https://ocient-examples.s3.amazonaws.com/metabase_samples/jsonl/orders.jsonl and save it to ~/orders.jsonl.

Next, make sure the pipeline.json file that you created is stored at ~/pipeline.json.

Now that both files are available, you can run the CLI to preview the results. Pass the preview command the topic name, the pipeline file, and the sample record file. The response contains the transformed data tied to the destination table and a list of any error records.

Similar to how you can preview records on a Kafka topic, for file loads you can supply any one of the topics you created as file groups to preview the transformations.

Shell
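Using the same illustrative lat invocation, and with flag names that are a sketch rather than the exact CLI interface, the preview call takes the topic (file group) name, the pipeline file, and the sample record file:

  lat preview --topic orders --pipeline ~/pipeline.json --file ~/orders.jsonl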


Example response:

JSON


You can see the transformed data and the columns to which each transformed value will be mapped. If there are issues in the values, they appear in the recordErrors object. You can quickly update your pipeline.json file and preview again. You can also inspect different documents to confirm that various states of data cleanliness, such as missing columns, null values, and special characters, are handled well by your transformations.

Step 6: Configure and Start the Data Pipeline

With a tested transformation, the next step is to set up and start the data pipeline.

First, you must configure the pipeline using the pipeline create command. This validates and creates the pipeline, but it does not take effect until you start the pipeline:

Shell
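Again with the illustrative lat invocation and a sketched flag name, pass the pipeline file to the pipeline create command:

  lat pipeline create --pipeline ~/pipeline.json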


Example response:

Text


If an existing pipeline is already operating, you must stop and remove it before creating and starting the new pipeline.

Now that the pipeline has been created on all LAT Nodes, you can start the LAT by running the pipeline start commands:

Shell
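With the same caveat about the command name, starting the pipeline is then:

  lat pipeline start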


Example responses:

Text


Step 7: Confirm that Loading is Operating Correctly

With the pipeline in place and running, data will immediately begin loading from the S3 file groups you defined. If there were many files per file group, the LAT would first sort the files, then partition them for the fastest loading based on the sorting criteria you provided.

Observing Loading Progress

With the pipeline running, data immediately begins to load into Ocient. To observe this progress, you can use the pipeline status command from the LAT Client or monitor the LAT metrics endpoint of the Loader Nodes.

You can check the status with the pipeline status command, using the --list-files flag to include a summary of the files included in the load.

Shell
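With the illustrative lat invocation, the status check with the file summary is:

  lat pipeline status --list-files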


Example responses:

Text


To monitor the LAT metrics endpoint instead, run a command like this:

Curl
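The host, port, and path below are placeholders; substitute the metrics endpoint of your Loader Node as documented for your LAT version:

  curl -k https://loader-node-1:8443/metrics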


Example response:

JSON


Check Row Counts in Tables

To confirm that you are seeing results in the target tables, you can also run some simple queries to check row counts. Depending on the streamloader role settings, the time for records to become queryable can vary from a few seconds to minutes:

Example Queries:

SQL

SQL
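For example, a simple row count against the orders table loaded in this tutorial (repeat for any other tables your pipeline targets):

  SELECT COUNT(*) FROM orders;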


Now you can explore the data in these four tables with any Ocient SQL queries.



Related Links

LAT Overview