
# Data Extract Manual

The data extract tool is a part of the JDBC driver used to unload data. You can execute the tool directly from the JDBC CLI. The tool extracts a result set to delimited files in the target location. To invoke the JDBC CLI, see the JDBC Manual (docid: wwz1djxrgnnfu96ig5uyl).

To use the data extract tool, you must have JDBC version 2.63 or higher.

## General Command Structure

Here is the general structure of an extract command:

```
extract to <location_type>{ local | s3 } [options([param=value [, ...]])] as <query>
```

The command is case-insensitive. Each extract command must start with `extract to`. The location type must follow and must be either `local` or AWS `s3`. You can enclose additional options within a pair of parentheses following the word `options`. Note that the location type is the only required option. Next, the query follows the word `as`.

This example shows a simple command that follows the general structure:

```
extract to local options( file_prefix="/home/user/out/data_", file_extension=".csv" ) as select c1 from sys.dummy10;
```

For supported options, see the Data Extract Manual (docid: nzqr3pc s c0yl ygm ll).

## Specify Options, Quoting, and Escaping Quotes

Here is the general format of options:

```
key1 = value1, key2 = value2, ..., keyN = valueN
```

You need to follow certain guidelines when you specify options:

- Keys (option names) can only consist of alphanumeric characters and are unquoted.
- Values can be either quoted (with the reserved character `"`) or unquoted.
- If values are unquoted, they can only contain alphanumeric characters. If the value has a non-alphanumeric character, you must quote it with the reserved character `"`. Note that the single quote character does not work.

```
options(file_prefix = "/path/to/dir/result", header_mode = none, file_extension = ".csv")
```

To use the reserved quote character `"` as an argument, you must escape it with the backslash character `\`. To use `\` as an argument, you must escape it with another `\`. This code illustrates both of these scenarios:

```
options(field_optionally_enclosed_by = "\"", escape = "\\")
```

## File Naming Conventions

When you use the extract tool, the tool produces a number of files. If the tool extracts with a single thread, the tool names files in this convention:

```
{file_prefix}_{file_number}{file_extension}{gzip_extension}
```

- `file_prefix`: Option specified by the user.
- `file_number`: This part of the convention is 0 if all results go into one file. However, if `max_rows_per_file` is set, then rows are placed into one file until that limit is hit. Then, another file generates with an incremented file number. `file_number` starts from 0.
- `file_extension`: Option specified by the user.
- `gzip_extension`: If you specify gzip compression, then the tool adds the gzip suffix.

If you use the multithreaded extract, then the naming convention is:

```
{file_prefix}{thread_number}_{file_number}{file_extension}
```

`file_prefix`, `file_extension`, and `gzip_extension` are still determined by the set options. `thread_number` is the number of the thread, going from 0 to n-1, where n is the number of threads specified. The `file_number` is now calculated per thread from the number of rows given to that thread. Recall that rows are distributed to threads in round-robin order, starting with thread 0. So, thread 0 receives rows 0, n, 2n, 3n, etc. Thread 1 receives rows 1, n + 1, 2n + 1, 3n + 1, etc.
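As a sketch of the single-threaded convention, a command along the following lines splits a 25-row result set into three files. The path, row count, and option values are illustrative only (the query follows the `sys.dummy10` pattern from the examples below), and all options used here are documented in the Extract Options table:

```
extract to local
    options( file_prefix = "/tmp/orders", file_extension = ".csv",
             max_rows_per_file = 10, compression = gzip )
    as select c1 from sys.dummy25;
```

Assuming a single extract thread, the convention above yields `/tmp/orders_0.csv` (rows 1-10), `/tmp/orders_1.csv` (rows 11-20), and `/tmp/orders_2.csv` (rows 21-25), with the gzip suffix appended to each name.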
## Extract Options

| Option | Usage | Comments | Default |
|---|---|---|---|
| location_type | local, s3 | Dictates where the results are extracted to. local extracts the results to the local machine. AWS s3 extracts the results to S3. When using S3, other options are required; see the S3 options for details. | None, must be specified |
| file_prefix | local, s3 | Dictates the prefix used on the results. When extracting to local, this is the prefix used to determine the path of the results; this can be a relative or full path. When extracting to S3, this is the prefix for the key. In either case, additional file numbers and file extensions are added to generate the complete filename. | results |
| file_extension | local, s3 | The file extension given to each result file. | .csv |
| max_rows_per_file | local, s3 | If non-zero, the max_rows_per_file modifier splits the results into files with at most max_rows_per_file rows in each file. | null |
| compression | local, s3 | Compression type to use. Supported compression types are: none (no compression), gzip (gzip compression), bzip2 (bzip2 compression), and xz (xz compression). | none |
| record_delimiter | local, s3 | Delimiter to use between records. This supports Java strings, so special characters can be input using escape characters: UTF-16 (`\u[utf-16 value]`) or octal (`\[octal value]`). | `\n` |
| field_delimiter | local, s3 | Delimiter to use between fields within a record. This supports Java strings, so special characters can be input using escape characters: UTF-16 (`\u[utf-16 value]`) or octal (`\[octal value]`). | `,` |
| header_mode | local, s3 | Dictates how to manage headers in result files. Supported values are none, all_files, and first_file. none: the tool writes all output files without an additional header. all_files: the tool adds column names as a header in the first row of each output file; each file has at most max_rows_per_file + 1 total rows. first_file: the tool adds column names as a header in the first row of the first output file and does not add the header to subsequent files; each file has at most max_rows_per_file total rows, inclusive of the header in the first file. | none |
| null_format | local, s3 | Format string to use for writing null values to the output files. | "" (empty string) |
| encoding | local, s3 | Encoding used when writing out data to files. | The default charset of the system, as determined by the Oracle documentation |
| escape | local, s3 | Character used for escaping quoted fields. Set this to the null character (`\0`) to indicate that the escape character is not specified. | `"` |
| field_optionally_enclosed_by | local, s3 | Sometimes, you need to surround fields in a character; for example, the field might have a literal comma. Generally, this character is also known as the quote character. Set this option to the null character (`\0`) to indicate that the quote character is not specified. | `"` |
| binary_format | local, s3 | The format with which to encode the binary data type. Supports utf-8, hexadecimal, and base64. | hexadecimal |
| compression_block_size | local, s3 | The number of bytes that comprise each block to be compressed; larger blocks result in better compression at the expense of more RAM usage when compressing. | 4194304 |
| compression_level | local, s3 | An integer value in [-1, 9]. Use -1 for the gzip default compression level, 0 for no compression, or a value in [1, 9], where 1 indicates fastest compression and 9 indicates best compression. | -1 |
| num_compression_threads | local, s3 | The number of threads to use for compression. Leave unspecified for the default value. | (number of cores / 2) |
| escape_unquoted_values | local, s3 | Dictates whether to write escape sequences in unquoted values. Only applicable when field_delimiter is set to `,`. | false |
| input_escaped | local, s3 | Dictates whether the input is already escaped. When this option is set to true, the tool does not add escape sequences, and data is written without changes to the output file. Only applicable when field_delimiter is set to `,`. Ensure that data is properly escaped; otherwise, the extract might produce invalid CSV data. | false |
| quote_all_fields | local, s3 | Dictates whether all written fields are enclosed with quotes. When this option is set to true, the tool encloses all fields with the field_optionally_enclosed_by character. | false |
| bucket | s3 | S3 bucket to use. Ignored if extracting locally. If extracting to S3, this argument is required. | None, required for S3 |
| aws_key_id | s3 | AWS key ID. If empty, the CLI uses the Java AWS SDK default credentials provider chain documented at https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html. | "" |
| aws_secret_key | s3 | AWS secret key. If empty, the CLI uses the Java AWS SDK default credentials provider chain documented at https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html. | "" |
| region | s3 | S3 region to upload to. Ignored when extracting to local. | us-east-2 |
| endpoint | s3 | Endpoint for the S3 upload. Required when extracting to S3; ignored when extracting to local. See the documentation on endpoint formatting: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/specifying-endpoints.html. | None, required for S3 |
| path_style_access | s3 | Whether path-style access (https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html) should be used to access a bucket. | false |
| translate_characters_mode | local, s3 | Character mode to use for translating characters. Supported values are char and hex. The tool performs character translation only if you specify translate_characters_from and translate_characters_to. The tool replaces the nth character in translate_characters_from with the nth character in translate_characters_to in the extracted records. When translate_characters_mode is set to char, translate_characters_from and translate_characters_to must be equal-length strings of UTF-8 characters, for example: translate_characters_mode="char", translate_characters_from="àëï", translate_characters_to="aei". When translate_characters_mode is set to hex, translate_characters_from and translate_characters_to must be comma-separated lists of hexadecimal UTF-8 code points with the same number of list elements, for example: translate_characters_mode="hex", translate_characters_from="c3a0,c3ab,c3af", translate_characters_to="61,65,69". | char |
| translate_characters_from | local, s3 | Sequence of UTF-8 characters in the source data to translate to a corresponding character in the translate_characters_to option. See the translate_characters_mode option for the expected format. | "" |
| translate_characters_to | local, s3 | Sequence of UTF-8 characters to use as a replacement for the characters included in translate_characters_from. See the translate_characters_mode option for the expected format. | "" |
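As an illustration of how several of these options combine, the following sketch writes tab-delimited files with a header row in every output file. The path, query, and option values are illustrative rather than defaults; the tab character uses the UTF-16 escape form described in the field_delimiter row above:

```
extract to local
    options( file_prefix = "/tmp/report", file_extension = ".tsv",
             field_delimiter = "\u0009", header_mode = "all_files",
             null_format = "NULL", quote_all_fields = true )
    as select c1 from sys.dummy100;
```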
## Examples

`options( )` is only required when you specify additional options.

This example unloads the results of `select c1 from sys.dummy10` to the local machine at the relative path `result_0.csv`:

```
extract to local as select c1 from sys.dummy10;
```

`sys.dummy` creates a virtual table with the specified number of rows. For details, see Generate Tables Using sys.dummy (docid: udbs dxonkysghxlhtbbr).

This example extracts the results of `select c1 from sys.dummy10` to the local machine at the absolute path `/home/user/out/data_0.csv`:

```
extract to local options( file_prefix="/home/user/out/data_", file_extension=".csv" ) as select c1 from sys.dummy10;
```
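The final example writes compressed results to S3. A hypothetical command along these lines could produce the keys described below; the bucket name, endpoint, and query are placeholders, and all options used are documented in the Extract Options table:

```
extract to s3
    options( bucket = "example-bucket", endpoint = "https://s3.us-east-2.amazonaws.com",
             file_prefix = "query_results/query_0/results", file_extension = ".csv",
             max_rows_per_file = 10, compression = gzip )
    as select c1 from sys.dummy100;
```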
In this example, the result set has 100 rows, and max_rows_per_file is set to 10. Thus, the tool writes 10 files to S3 with the keys `query_results/query_0/results_{file_number}.csv.gzip`, where file_number ranges from 0 to 9, inclusive.

## Related Links

- Connect Using JDBC (docid: eql yafaxip4xpah rj90)
- JDBC Manual (docid: wwz1djxrgnnfu96ig5uyl)