Parquet

Description

Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.

Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in INSERT and SELECT queries.

| Parquet data type (INSERT) | ClickHouse data type | Parquet data type (SELECT) |
|----------------------------|----------------------|----------------------------|
| BOOL | Bool | BOOL |
| UINT8, BOOL | UInt8 | UINT8 |
| INT8 | Int8/Enum8 | INT8 |
| UINT16 | UInt16 | UINT16 |
| INT16 | Int16/Enum16 | INT16 |
| UINT32 | UInt32 | UINT32 |
| INT32 | Int32 | INT32 |
| UINT64 | UInt64 | UINT64 |
| INT64 | Int64 | INT64 |
| FLOAT | Float32 | FLOAT |
| DOUBLE | Float64 | DOUBLE |
| DATE | Date32 | DATE |
| TIME (ms) | DateTime | UINT32 |
| TIMESTAMP, TIME (us, ns) | DateTime64 | TIMESTAMP |
| STRING, BINARY | String | BINARY |
| STRING, BINARY, FIXED_LENGTH_BYTE_ARRAY | FixedString | FIXED_LENGTH_BYTE_ARRAY |
| DECIMAL | Decimal | DECIMAL |
| LIST | Array | LIST |
| STRUCT | Tuple | STRUCT |
| MAP | Map | MAP |
| UINT32 | IPv4 | UINT32 |
| FIXED_LENGTH_BYTE_ARRAY, BINARY | IPv6 | FIXED_LENGTH_BYTE_ARRAY |
| FIXED_LENGTH_BYTE_ARRAY, BINARY | Int128/UInt128/Int256/UInt256 | FIXED_LENGTH_BYTE_ARRAY |

Arrays can be nested and can have a value of Nullable type as an argument. Tuple and Map types can also be nested.
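
For instance, a minimal sketch of a table definition that exercises these nested types (the table name and columns here are hypothetical):

CREATE TABLE parquet_nested_demo
(
    tags  Array(Nullable(String)),       -- Array with Nullable values
    point Tuple(x Float64, y Float64),   -- Tuple, written as a Parquet STRUCT
    attrs Map(String, Array(UInt32))     -- Map whose value type is itself nested
)
ENGINE = MergeTree
ORDER BY tuple();

Such a table can be exported with FORMAT Parquet and read back without flattening the structure.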

Unsupported Parquet data types are:

  • FIXED_SIZE_BINARY
  • JSON
  • UUID
  • ENUM

Data types of ClickHouse table columns can differ from the corresponding fields of the inserted Parquet data. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the type declared for the ClickHouse table column.
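
For example, a sketch with a hypothetical table and file: if events.parquet stores its id column as INT64, it can still be inserted into a column declared as UInt16; ClickHouse reads the values as Int64 according to the table above and then casts them to UInt16:

CREATE TABLE events (id UInt16) ENGINE = MergeTree ORDER BY id;

$ cat events.parquet | clickhouse-client --query="INSERT INTO events FORMAT Parquet"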

Example Usage

Inserting and Selecting Data

You can insert Parquet data from a file into a ClickHouse table using the following command:

$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"

You can select data from a ClickHouse table and save it to a file in the Parquet format using the following command:

$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
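
Alternatively, the same exchange can be written from an interactive clickhouse-client session using the INFILE/OUTFILE clauses (the file names here are hypothetical):

INSERT INTO {some_table} FROM INFILE 'data.parquet' FORMAT Parquet;

SELECT * FROM {some_table} INTO OUTFILE 'export.parquet' FORMAT Parquet;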

To exchange data with Hadoop, you can use the HDFS table engine.
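
A minimal sketch of such a table, assuming a namenode reachable at hdfs://namenode:9000 (the host, path, and columns are hypothetical):

CREATE TABLE hdfs_parquet (id UInt64, name String)
ENGINE = HDFS('hdfs://namenode:9000/data/visits.parquet', 'Parquet');

SELECT * FROM hdfs_parquet;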

Format Settings

| Setting | Description | Default |
|---------|-------------|---------|
| input_format_parquet_case_insensitive_column_matching | Ignore case when matching Parquet columns with ClickHouse columns. | 0 |
| input_format_parquet_preserve_order | Avoid reordering rows when reading from Parquet files. Usually makes it much slower. | 0 |
| input_format_parquet_filter_push_down | When reading Parquet files, skip whole row groups based on the WHERE/PREWHERE expressions and min/max statistics in the Parquet metadata. | 1 |
| input_format_parquet_bloom_filter_push_down | When reading Parquet files, skip whole row groups based on the WHERE expressions and bloom filter in the Parquet metadata. | 0 |
| input_format_parquet_use_native_reader | When reading Parquet files, use the native reader instead of the Arrow reader. | 0 |
| input_format_parquet_allow_missing_columns | Allow missing columns while reading Parquet input formats. | 1 |
| input_format_parquet_local_file_min_bytes_for_seek | Minimum number of bytes required for a local file read to do a seek, instead of reading with ignore, in the Parquet input format. | 8192 |
| input_format_parquet_enable_row_group_prefetch | Enable row group prefetching during Parquet parsing. Currently, only single-threaded parsing can prefetch. | 1 |
| input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference | Skip columns with unsupported types during schema inference for the Parquet format. | 0 |
| input_format_parquet_max_block_size | Maximum block size for the Parquet reader. | 65409 |
| input_format_parquet_prefer_block_bytes | Average block bytes output by the Parquet reader. | 16744704 |
| output_format_parquet_row_group_size | Target row group size in rows. | 1000000 |
| output_format_parquet_row_group_size_bytes | Target row group size in bytes, before compression. | 536870912 |
| output_format_parquet_string_as_string | Use the Parquet String type instead of Binary for String columns. | 1 |
| output_format_parquet_fixed_string_as_fixed_byte_array | Use the Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary for FixedString columns. | 1 |
| output_format_parquet_version | Parquet format version for the output format. Supported versions: 1.0, 2.4, 2.6, and 2.latest. | 2.latest |
| output_format_parquet_compression_method | Compression method for the Parquet output format. Supported codecs: snappy, lz4, brotli, zstd, gzip, none (uncompressed). | zstd |
| output_format_parquet_compliant_nested_types | In the Parquet file schema, use the name 'element' instead of 'item' for list elements. This is a historical artifact of the Arrow library implementation. Generally increases compatibility, except perhaps with some old versions of Arrow. | 1 |
| output_format_parquet_use_custom_encoder | Use a faster Parquet encoder implementation. | 1 |
| output_format_parquet_parallel_encoding | Do Parquet encoding in multiple threads. Requires output_format_parquet_use_custom_encoder. | 1 |
| output_format_parquet_data_page_size | Target page size in bytes, before compression. | 1048576 |
| output_format_parquet_batch_size | Check page size every this many rows. Consider decreasing if you have columns with average value size above a few KBs. | 1024 |
| output_format_parquet_write_page_index | Write the page index into Parquet files. | 1 |
| input_format_parquet_import_nested | Obsolete setting; does nothing. | 0 |
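
These settings can be changed for a session with SET, or attached to a single query with a SETTINGS clause. A brief sketch from an interactive clickhouse-client session (the table and file names are hypothetical):

SET output_format_parquet_compression_method = 'snappy';

SELECT * FROM {some_table} INTO OUTFILE 'export.parquet' FORMAT Parquet;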