# Join Operations
Join operations in the {{ocient}} system deliver high performance without requiring manual fine-tuning of queries and tables. This topic explains how the Ocient system handles joins to help you better understand performance characteristics and make informed design choices.

For syntax and information about supported join types, see the docid\ nw9vavkey2v75moxm muo reference.

## Join Optimization in Ocient

Before starting any join operation, the Ocient system automatically determines the optimal plan for combining tables. The system determines the best order for joining tables to reduce processing and storage demands while increasing performance. This process includes:

- **Optimizing I/O with early filtering:** The execution engine uses various techniques to dynamically reduce input and output (I/O) for join operations by filtering unneeded values.
- **Determining join order:** The optimizer automatically reorders tables for efficiency, usually from smallest to largest in terms of row count, regardless of how you order tables in the SQL statement.
- **Selecting the join strategy:** The system can override user-specified join types with more performant alternatives. For example, the system can convert an outer join to an inner join when the result remains semantically equivalent.

As a result, you generally do not need to fine-tune join queries or configure table definitions to designate a specific table order. The optimizer ensures efficient execution plans even if you do not specify optimal table or join conditions. Join performance does not generally benefit from other table configurations for query optimization, such as {{timekey}}, clustering keys, secondary indexes, or compression.

## How Ocient Joins Tables

The Ocient system executes all join operations within SQL statements as either hash or product joins.

### Hash Joins

The system uses hash joins automatically whenever your join predicate includes at least one equality comparison (for example, `ON a.column_1 = b.column_2`). The system splits and shuffles
data so that the hash join executes in parallel. This shuffling enables great scalability and performance, even with large tables.

### Product Joins

If your join predicate does not include an equality condition, the system uses a product join (a Cartesian join with a filter). For example, the join predicate `ON a.column_1 > b.column_2` uses a product join because it is not an equality condition. In a product join, the system broadcasts the smaller table of the join to all compute nodes, and then compares every row according to your condition. Product joins perform more slowly when operating on large tables, so it is generally better to use an equality comparison in your join SQL statements when possible.

## Join Storage Considerations

The performance of SQL join operations is limited by the available system memory, specifically the size of the huge pages configured on the server. During the execution of hash joins, the system typically loads the smaller table of the join into memory.

- **Spill to disk:** If the memory required for the join operation exceeds what is physically available, the system spills intermediate data to disk, using swap or temporary files. This reliance on disk I/O can significantly degrade query performance compared to in-memory processing, but it ensures that the system can complete the query without failure.
- **Temporary storage requirements:** When intermediate results or hash tables exceed available memory, the join operation falls back to temporary disk space approximately equal to the combined sizes of the participating tables (both the build and probe sides).
- **Resource contention:** Because joins are resource intensive, heavy join workloads can affect overall system throughput and potentially impact the performance of concurrent queries, particularly in systems without strict workload management configurations.

## Join Best Practices

These best practices help you design join queries that balance performance, scalability, and maintainability. While the Ocient optimizer
handles most join operations automatically, applying these techniques can further reduce overhead and improve query efficiency. Many of these tips are common in OLAP database administration and reflect proven approaches across analytical systems.

### Moderate Denormalization

In columnar analytical databases, moderate denormalization is a recommended best practice. This means selectively embedding low-cardinality, frequently used dimension attributes into a fact table to reduce the number of joins. For example, duplicating a product category column in a sales fact table can remove the need for a separate join to the product dimension, while still leaving high-cardinality tables normalized.

Small dimension tables (less than 1 million rows) can also benefit from Global Dictionary Compression (GDC). With GDC, the system can share dictionaries across tables, improving both performance and storage efficiency.

### Query Design Considerations

Beyond general join strategies, small design choices in your SQL statements can significantly impact performance. These practices help ensure that the optimizer can make efficient decisions and avoid unnecessary overhead during execution.

#### Use Window Aggregate Functions Instead of Self-Joins

In many cases, window aggregate functions can replace self-joins, especially for time-period comparisons.

**Window aggregate example (most performant)**

In this example, the `LAG` function retrieves the revenue from the previous month within the same result set, allowing you to calculate month-over-month changes without requiring a self-join.

```sql
SELECT
  month,
  revenue,
  LAG(revenue) OVER (ORDER BY month) AS prior_month_revenue,
  revenue - LAG(revenue) OVER (ORDER BY month) AS month_change
FROM monthly_sales;
```

You can obtain the same results by using a self-join statement in a query, but this is less efficient.

**Self-join example (less performant)**

The query scans the `monthly_sales` table twice, once as `curr` and again as `prev`. This approach produces the same result as
the window function but requires more overhead.

```sql
SELECT
  curr.month,
  curr.revenue,
  prev.revenue AS prior_month_revenue,
  curr.revenue - prev.revenue AS month_change
FROM monthly_sales curr
JOIN monthly_sales prev
  ON curr.month = prev.month + INTERVAL '1' MONTH;
```

#### Match Data Types for Join Keys

Always ensure that join keys share the same data type. Mismatched types force the system to apply implicit type conversions, which can slow down execution and interfere with query optimization.

#### Materialize Calculations Before Joins

Avoid using functions or expressions directly in join conditions. If you apply a calculation at join time (for example, casting a timestamp to a date), the system must evaluate that function for every row, which can result in losing the ability to use statistics or efficient hash keys. A better method is to precompute the value and store it as a separate column during loading or as part of a staging step. This approach reduces per-row processing costs and keeps joins predictable.

**Materializing calculations example**

Materializing calculations before joins using a common table expression separates data transformation from the join itself, so the join operates on clean, precomputed keys. This approach reduces workload during the join, enables better use of indexes and statistics, and yields simpler, more predictable query plans.

```sql
WITH summarized AS (
  SELECT customer_id, SUM(purchase_amount) AS total_spent
  FROM purchases
  GROUP BY customer_id
)
SELECT c.region, s.total_spent
FROM customers c
JOIN summarized s
  ON c.customer_id = s.customer_id;
```

## Related Links

- docid\ nw9vavkey2v75moxm muo
- docid\ knh6ndwnm hzuab9dwzrz
- docid\ khdpfv6pnr1nk4lwgxe j
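The "Cartesian join with a filter" semantics of a product join can be checked with a minimal sketch. This example uses SQLite via Python's `sqlite3` module as a stand-in for Ocient (not Ocient itself), and the tables `a`, `b` with a few sample values are hypothetical; it shows that a join on a non-equality predicate returns exactly the rows a cross join plus a filter would.

```python
import sqlite3

# Hypothetical two-column schema matching the ON a.column_1 > b.column_2
# predicate used in the product-join discussion. SQLite stands in for Ocient.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (column_1 INTEGER);
    CREATE TABLE b (column_2 INTEGER);
    INSERT INTO a VALUES (1), (5), (9);
    INSERT INTO b VALUES (4), (8);
""")

# Non-equality join predicate: no hash key exists, so an engine must
# compare every (a, b) row pair against the condition.
joined = conn.execute("""
    SELECT a.column_1, b.column_2
    FROM a JOIN b ON a.column_1 > b.column_2
    ORDER BY a.column_1, b.column_2
""").fetchall()

# The same result expressed explicitly as a Cartesian product plus a filter.
filtered_product = conn.execute("""
    SELECT a.column_1, b.column_2
    FROM a CROSS JOIN b
    WHERE a.column_1 > b.column_2
    ORDER BY a.column_1, b.column_2
""").fetchall()

print(joined)            # [(5, 4), (9, 4), (9, 8)]
print(filtered_product)  # identical rows
```

Because every row pair must be compared, the work grows with the product of the two table sizes, which is why the topic recommends an equality predicate whenever one is available.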
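The window-function and self-join formulations of the month-over-month comparison above can be run side by side in a minimal sketch. This uses SQLite via Python's `sqlite3` module as a stand-in for Ocient (not Ocient itself), with months simplified to integers so no `INTERVAL` arithmetic is needed; the `monthly_sales` data values are hypothetical. Note one subtle difference the sketch surfaces: the inner self-join drops the first month entirely, while the window query keeps it with a NULL prior value.

```python
import sqlite3

# SQLite as a stand-in engine; monthly_sales values are made up for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_sales (month INTEGER, revenue INTEGER)")
conn.executemany(
    "INSERT INTO monthly_sales VALUES (?, ?)",
    [(1, 100), (2, 150), (3, 130)],
)

# Window-function version: one scan, LAG pulls the previous month's revenue.
window_rows = conn.execute("""
    SELECT month,
           revenue,
           LAG(revenue) OVER (ORDER BY month) AS prior_month_revenue,
           revenue - LAG(revenue) OVER (ORDER BY month) AS month_change
    FROM monthly_sales
    ORDER BY month
""").fetchall()

# Self-join version: two scans of the same table, joined on adjacent months.
self_join_rows = conn.execute("""
    SELECT curr.month,
           curr.revenue,
           prev.revenue AS prior_month_revenue,
           curr.revenue - prev.revenue AS month_change
    FROM monthly_sales curr
    JOIN monthly_sales prev ON curr.month = prev.month + 1
    ORDER BY curr.month
""").fetchall()

print(window_rows)     # first row has NULL prior/change values
print(self_join_rows)  # matches window_rows minus the first month
```

Beyond matching on the overlapping rows, the single-scan window query avoids reading `monthly_sales` twice, which is the overhead the best practice above is targeting.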