Key Concepts
The system is the collection of all of the nodes within a single environment. This architecture diagram is an example of one Ocient system. Across systems, the number of nodes can vary along with the total storage capacity. Within a single system, users can define varying numbers of databases, storage groups, users, etc.
An Ocient system is made up of a collection of distributed nodes, each of which plays a predefined role in the data warehouse. The roles are generally responsible for storing and analyzing data, administering the system, and moving data into Ocient. These nodes are connected to each other through a high-speed 100 Gbps network connection. In addition to storing user data, the data warehouse also stores metadata about the system and data for its own internal operation and recovery. This collection of nodes, their data, and the interconnections between them make up the Ocient system.
End users analyzing data in Ocient connect to this system through a JDBC, ODBC, or Python-based client. From there, they interact with database objects such as tables and views. The architecture itself is abstracted away, allowing users to focus on SQL queries and results.
While the number of nodes in an environment might vary, the roles and their purposes do not. The following sections briefly describe the components in the architecture.
Ocient supports SQL access to the database through JDBC, ODBC, and pyocient, a native Python module. Using these interfaces, users can connect to Ocient from familiar, industry-standard tools such as database administration, BI, and visualization tools. Connecting through these clients allows the user to issue SQL to interact with the objects and data within the database. End users are not required to understand the organization, nodes, networking, or anything else of the architecture to analyze data in Ocient.
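For example, a minimal sketch of querying Ocient with pyocient, which follows the standard Python DB-API. The connection string format, host, port, credentials, and table name below are placeholders to adapt to your environment:

```python
import pyocient

# Placeholder DSN: substitute your own host, port, credentials, and database.
connection = pyocient.connect("ocient://user:password@sql-node.example.com:4050/mydb")

cursor = connection.cursor()
# Standard DB-API usage: issue SQL and iterate over the result set.
cursor.execute("SELECT COUNT(*) FROM example_table")
for row in cursor.fetchall():
    print(row)

connection.close()
```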
The SQL Node is responsible for receiving incoming SQL requests and parsing the statements. This node also serves as the interface for System Administrators and Database Administrators to configure and maintain an Ocient system. Administrators can connect to a SQL Node using the Ocient command-line interface or their chosen client. SQL Nodes also run the Admin Role.
When a SQL Node receives a query, it creates a plan for it using one of two optimization methods. After there is a plan for the query, the SQL Node distributes it to the different Foundation (LTS) Nodes. When the Foundation Nodes finish their processing, they return intermediate results to the SQL Node, which performs further processing, such as the aggregations and joins that happen across those intermediate results, and packages the final result set. Depending on the query, the SQL Node is responsible for a different proportion of the overall work. When the result set is finalized, the SQL Node returns the data to the client.
Administrators also use DDL or DCL statements to make changes to an Ocient system. When a statement is submitted to a SQL Node, the node either executes the command across the system, if it is also performing the Admin Role, or forwards the command to a node performing the Admin Role.
SQL Nodes typically also run the Admin Role, which connects to and tracks the status of the other nodes in the architecture and maintains system metadata via a Raft-based consensus mechanism. When administrators make changes to the Ocient system through a SQL client, the change is executed across the system via the Admin Role.
The Foundation Nodes both store the user data in Ocient and perform the bulk of query processing. A key architectural concept of Ocient is the Compute Adjacent Storage Architecture™, which collocates data and processing where possible. The Foundation Node is the central element of this architectural principle. When a query is deployed to a Foundation Node, the node performs as much of the query as it can with the data it holds before that data must be joined, aggregated, or compared with other data. It returns that result to the SQL Node for further processing, packaging, and return to the user. Foundation Nodes contain the majority of the storage in an Ocient system and are also typically the largest in number.
The Loader Node is responsible for extracting, transforming, indexing, and loading data ingested by Ocient from batch file sources as well as streaming data sources such as Kafka®. A user can specify the extraction source details and supply a transformation pipeline that manipulates the structured or semi-structured source data before loading it into relational form into Ocient tables.
Loader Nodes operate in a horizontal scale-out fashion, allowing the loading system to scale to fit various ingestion requirements. Transparently to the end user, the Loader Node also manages exactly-once guarantees as data is loaded and converts pages of rows into segments in the Foundation Nodes.
Data and communication flow across the nodes of an Ocient system for a variety of different purposes. Two networks connect the nodes: a 100 Gbps high-speed network and a 10 Gbps network. These are used in different ways for data flows and administrative purposes.
When a user issues a query against the database, their SQL statement moves from the JDBC or ODBC client to the SQL Node. The SQL Node parses and optimizes the query before handing the plan off to the Foundation (LTS) Nodes. The Foundation Nodes do the first level of processing against the data before handing the results back to the SQL Node. The SQL Node further processes the data, constructs the results, and returns them to the client.
- The connection between the SQL Nodes and Foundation Nodes is typically a 100 Gbps network connection. The speed of the connection between the client and the SQL Node is subject to the speed of that network (outside Ocient).
- Every Foundation Node is connected to every SQL Node.
When administrators configure or make changes to an Ocient system, they do so using DDL and DCL via a SQL client. The SQL client relays the statement to a SQL Node, which parses it into commands that the Admin Role on the SQL Node executes. This can impact the SQL Nodes, Foundation Nodes, or Loader Nodes, depending on the type of change or operation.
- The administration flow connections with the Ocient system are 10 Gbps connections.
- Every node is connected to the SQL Nodes that perform the Admin Role.
When a load job is defined and started, the Loader Nodes read data from the source files, stage the data, process that data, and move it into the Foundation Nodes. The Loading and Transformation process manages these steps and ensures data is properly indexed and loaded.
- The network connections between the Loader and the Foundation Nodes are 100 Gbps.
- The Loader Nodes connect to every Foundation Node.
Similar to other database systems, a database is a grouping of tables, views, users, and data within one Ocient system. As mentioned above, a single system can have multiple databases. Tables and views are defined within each database. While Ocient stores data on disk in columnar format, creating and working with tables in Ocient is just like any other relational database. Ocient supports a wide variety of standard SQL datatypes along with an expanding set of geospatial datatypes.
Each table in Ocient can contain a TimeKey® column that is specified when defining the table. The TimeKey column is used to partition the data within the Foundation Nodes. In addition to defining the TimeKey column, users also specify how the time is represented and the TimeKey partition resolution in minutes. Time partitioning is an important performance mechanism used by the Ocient Hyperscale Data Warehouse. Because many, if not most, queries specify some time filter for the results, time partitioning allows Ocient to quickly skip data from irrelevant times.
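As a rough illustration of the idea (a conceptual sketch, not Ocient's internal implementation), time partitioning amounts to bucketing rows by their TimeKey at the configured resolution, so a time-filtered query only has to read the buckets that overlap its filter:

```python
from datetime import datetime, timedelta

# Hypothetical partition resolution of 60 minutes, standing in for the
# TimeKey partition resolution described above.
RESOLUTION_MINUTES = 60

def partition_for(timekey: datetime) -> int:
    """Map a TimeKey value to its time partition bucket."""
    return int(timekey.timestamp()) // (RESOLUTION_MINUTES * 60)

def partitions_for_range(start: datetime, end: datetime) -> range:
    """Partitions a time-filtered query must read; all other partitions
    can be skipped without touching their data."""
    return range(partition_for(start), partition_for(end) + 1)

# A query filtered to the last two hours scans only three partition buckets,
# no matter how much older data the table holds.
now = datetime(2024, 1, 1, 12, 0)
print(list(partitions_for_range(now - timedelta(hours=2), now)))
```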
When defining a table, users can also specify a clustering key containing one or more of the columns to further subdivide records on disk. This allows the database to quickly find records with the same key values.
Read more in Time Keys, Clustering Keys, and Indexes.
In order to provide reliability in the case of a hardware outage, Ocient uses erasure coding. Erasure coding is a mechanism to organize and compute parity blocks so that the system can rebuild missing data. This means that Ocient does not store redundant copies of the data for reliability. As a result, an Ocient System requires less overall storage than if it were storing one or more copies of the data for failover.
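As a greatly simplified sketch of the idea, the single-parity XOR example below shows how a missing block can be rebuilt from the surviving blocks plus parity rather than from a stored copy. Ocient's actual erasure coding supports multiple parity blocks and is more sophisticated than this:

```python
# Single-parity XOR example: one parity block protects a group of data
# blocks, so any one missing block can be rebuilt without storing a
# full second copy of the data.
data_blocks = [b"\x01\x02", b"\x0a\x0b", b"\xff\x00"]

def xor_blocks(blocks):
    """XOR a list of equal-length blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

parity = xor_blocks(data_blocks)

# Simulate losing one block, then rebuild it from the survivors plus parity.
lost_index = 1
survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_blocks[lost_index]
```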
When a user defines their storage configuration, they specify the width of the group and the parity width. These values can vary based on how many Foundation Nodes are in the system and how resilient users want to make the Ocient system.
Segments in Ocient are the storage units that contain rows and are divided into a fixed number of coding blocks with a defined size. Segments are typically sized on the order of gigabytes. You set the segment size when you define tables in Ocient. The size of the segment must be a multiple of the coding block size. The coding block is the unit of parity calculation and the smallest unit of recovery as well.
Multiple segments combine to form Segment Groups. A Segment Group contains a fixed number of segments, called the "width," and a pre-defined number of parity blocks per set of data blocks for resiliency. Each segment has a defined index in the group. The Segment Group is physically stored in a Storage Cluster.
When the user configures an Ocient system, they need to define at least one Storage Space and one Storage Cluster. A Storage Cluster is a set of Foundation Nodes with an associated Storage Space; these nodes coordinate to store Segment Groups in a reliable manner. The Storage Cluster also has a width, measured in nodes, which must match the width of the Segment Groups stored on it. At the Storage Space level, the user defines the width and parity width of the space. Segments and their Segment Groups are stored across these Storage Spaces and Storage Clusters. To learn more about erasure coding, see Compute-Adjacent I/O on Large Working Sets.
An Ocient system must have at least one Storage Space. This defines how data is persisted across a Storage Cluster.
A Storage Space has two settings: Width and Parity Width. Choose the Width and Parity Width settings that provide the data reliability required for your application. A Storage Space is overprovisioned when the system has more nodes than the Width.
Width
Width is the total number of segments to use in each segment group as it is written to disk. Width cannot be greater than the number of nodes in the storage cluster.
Parity Width
Parity Width is the number of parity coding bits to use for each segment group and must be less than Width. Queries still complete when the number of disabled nodes is less than or equal to the Parity Width. In other words, Parity Width is the number of failed nodes that the cluster can tolerate.
Parity width adds a percentage of storage overhead. For example, a parity of 2 with a width of 10 results in the need for 25% more storage because 2 bits of parity information are added for every 8 bits of data. To calculate the storage overhead, the formula is (Parity Width) / (Width - Parity Width).
To query data, you must have at least (Width - Parity Width) Foundation Nodes available.
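As a quick sanity check of that formula, a small sketch (illustrative only) computing the overhead for the width 10, parity width 2 example:

```python
def storage_overhead(width: int, parity_width: int) -> float:
    """Extra storage required for parity: (Parity Width) / (Width - Parity Width)."""
    return parity_width / (width - parity_width)

# Width 10 with Parity Width 2: 2 / (10 - 2) = 0.25, i.e. 25% more storage.
print(f"{storage_overhead(10, 2):.0%}")
```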
Overprovision
Overprovision means that the system has more nodes than the Width of the Storage Space. To load data, you must have at least Width Foundation Nodes available; the same minimum applies when rebuilding data after nodes are lost, so overprovisioning helps both loading and rebuilds continue through a node failure.
Ensure that the width on a production system is smaller than the number of Foundation (LTS) nodes in the cluster so that loading can continue even in the event of a node outage.
These example configurations show various combinations of storage space configuration and nodes.
| Configuration | Width | Parity Width | Overprovision | Foundation Nodes |
|---|---|---|---|---|
| 1 | 10 | 2 | 3 | 13 |
| 2 | 6 | 2 | 2 | 8 |
For example, configuration 1 has 13 Foundation Nodes with an overprovision of 3, a width of 10, and a parity width of 2. With this configuration, loading can continue after the loss of up to 3 Foundation Nodes, and querying can continue after the loss of up to 2 Foundation Nodes. Assuming 1 PB of data on the system, this configuration requires approximately 25% additional storage for parity information.
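A small sketch (illustrative only) applying these rules to the two configurations above, showing the node-loss tolerance for loading versus querying:

```python
def tolerances(width: int, parity_width: int, nodes: int) -> dict:
    """Node-loss tolerance derived from the rules above: loading needs at
    least `width` nodes available, and querying tolerates up to
    `parity_width` failed nodes."""
    return {
        "overprovision": nodes - width,
        "node_losses_loading_survives": nodes - width,
        "node_losses_querying_survives": parity_width,
    }

print(tolerances(width=10, parity_width=2, nodes=13))  # configuration 1
print(tolerances(width=6, parity_width=2, nodes=8))    # configuration 2
```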
This is a simple example of how the storage and redundancy concepts come together in an Ocient system.
A new Ocient system is configured, and the System Administrator needs to define Storage Spaces for her DBAs to use. She wants to store all data across the nine available Foundation Nodes, and she knows that she needs resiliency in the system in case of a hardware outage. First, she creates a Storage Space with a width of nine, covering all of her Foundation Nodes. She specifies a parity width of two, allowing recovery from the failure of up to two Foundation Nodes.
With her Storage Space defined, she can now define her Storage Cluster. She adds a new Storage Cluster with all of the nine Foundation Nodes and specifies the Storage Space that she previously defined. This tells Ocient how data will be stored in the Storage Cluster.
Now, a DBA creates his database. He creates a new table with a segment size of 4 GB and specifies the previously defined Storage Space. He specifies a loading pipeline to download data from S3, transform it into the right columns for his table, and load it into the table.
Ocient takes the incoming rows and creates Pages that accumulate new rows, bucketing them by the TimeKey and Clustering Keys. When enough Pages accumulate to create a segment, the Loader Node converts them to 4 GB Segments, based on the setting used when creating the table, and distributes those Segments, including the required parity information, across the nine Foundation Nodes in the Storage Cluster definition. Ocient rotates the data so that all nodes have a similar amount of data and parity.
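A conceptual sketch of that flow (the names, thresholds, and page structure below are illustrative, not Ocient internals): rows accumulate in pages keyed by TimeKey partition and clustering key, and once enough data accumulates, the pages roll up into a fixed-size segment:

```python
from collections import defaultdict

SEGMENT_SIZE_BYTES = 4 * 1024**3  # 4 GB, matching the table definition above
ESTIMATED_BYTES_PER_ROW = 100     # stand-in for real row sizing

# Pages accumulate incoming rows, bucketed by (TimeKey partition, clustering key).
pages = defaultdict(list)

def ingest(row: dict, timekey_partition: int, clustering_key: str) -> None:
    """Append a row to the page for its TimeKey partition and clustering key."""
    pages[(timekey_partition, clustering_key)].append(row)

def maybe_build_segment():
    """Once enough rows have accumulated to fill a segment, convert the pages
    into a segment to be distributed (with parity) across the Storage Cluster."""
    total_rows = sum(len(rows) for rows in pages.values())
    if total_rows * ESTIMATED_BYTES_PER_ROW >= SEGMENT_SIZE_BYTES:
        segment = dict(pages)  # stand-in for the real columnar segment format
        pages.clear()
        return segment
    return None
```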
Data is made available for query as soon as it is stored in Pages by the Loader Nodes. When a data analyst queries this new table using a SQL client, Pages and Segments are transparently federated into a single result set, ensuring data is available to analysts as soon as it is loaded.