
MergeTree tables settings

System table system.merge_tree_settings shows the globally set MergeTree settings.

MergeTree settings can be set in the merge_tree section of the server config file, or specified for each MergeTree table individually in the SETTINGS clause of the CREATE TABLE statement.

Example for customizing setting max_suspicious_broken_parts:

Configure the default for all MergeTree tables in the server configuration file:
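
```xml
<merge_tree>
    <!-- the value 5 here is illustrative -->
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</merge_tree>
```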

Set for a particular table:
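
```sql
CREATE TABLE tab
(
    `A` Int64
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS max_suspicious_broken_parts = 500;
```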

Change the settings for a particular table using ALTER TABLE ... MODIFY SETTING:
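
```sql
ALTER TABLE tab MODIFY SETTING max_suspicious_broken_parts = 100;

-- reset to the global default (value from system.merge_tree_settings)
ALTER TABLE tab RESET SETTING max_suspicious_broken_parts;
```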

MergeTree settings

adaptive_write_buffer_initial_size

Type: UInt64
Default: 16384

Initial size of an adaptive write buffer

add_implicit_sign_column_constraint_for_collapsing_engine

Type: Bool
Default: 0

If true, adds an implicit constraint for the sign column of a CollapsingMergeTree or VersionedCollapsingMergeTree table to allow only valid values (1 and -1).

add_minmax_index_for_numeric_columns

Type: Bool
Default: 0

When enabled, min-max (skipping) indices are added for all numeric columns of the table.

add_minmax_index_for_string_columns

Type: Bool
Default: 0

When enabled, min-max (skipping) indices are added for all string columns of the table.

allow_experimental_replacing_merge_with_cleanup

Experimental feature.
Type: Bool
Default: 0

Allow experimental CLEANUP merges for ReplacingMergeTree with is_deleted column. When enabled, allows using OPTIMIZE ... FINAL CLEANUP to manually merge all parts in a partition down to a single part and removing any deleted rows.

Also allows enabling such merges to happen automatically in the background with settings min_age_to_force_merge_seconds, min_age_to_force_merge_on_partition_only and enable_replacing_merge_with_cleanup_for_min_age_to_force_merge.

allow_experimental_reverse_key

Experimental feature.
Type: Bool
Default: 0

Enables support for descending sort order in MergeTree sorting keys. This setting is particularly useful for time series analysis and Top-N queries, allowing data to be stored in reverse chronological order to optimize query performance.

With allow_experimental_reverse_key enabled, you can define descending sort orders within the ORDER BY clause of a MergeTree table. This enables the use of more efficient ReadInOrder optimizations instead of ReadInReverseOrder for descending queries.

Example
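
A sketch of such a table; the table and column names are illustrative:

```sql
CREATE TABLE example
(
    `time` DateTime,
    `key` UInt32,
    `value` String
)
ENGINE = MergeTree
ORDER BY (time DESC, key)  -- descending sort order on `time`
SETTINGS allow_experimental_reverse_key = 1;

SELECT * FROM example WHERE key = 42 ORDER BY time DESC LIMIT 10;
```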

By using ORDER BY time DESC in the query, ReadInOrder is applied.

Default Value: false

allow_floating_point_partition_key

Type: Bool
Default: 0

Enables to allow floating-point number as a partition key.

Possible values:

  • 0 — Floating-point partition key not allowed.
  • 1 — Floating-point partition key allowed.

allow_nullable_key

Type: Bool
Default: 0

Allow Nullable types as primary keys.

allow_reduce_blocking_parts_task

Type: Bool
Default: 1

Background task which reduces blocking parts for shared merge tree tables. Only in ClickHouse Cloud

allow_remote_fs_zero_copy_replication

Beta feature.
Type: Bool
Default: 0

Don't use this setting in production, because it is not ready.

allow_suspicious_indices

Type: Bool
Default: 0

Reject primary/secondary indexes and sorting keys with identical expressions

allow_vertical_merges_from_compact_to_wide_parts

Type: Bool
Default: 1

Allows vertical merges from compact to wide parts. This setting must have the same value on all replicas.

always_fetch_merged_part

Type: Bool
Default: 0

If true, this replica never merges parts and always downloads merged parts from other replicas.

Possible values:

  • true, false

always_use_copy_instead_of_hardlinks

Type: Bool
Default: 0

Always copy data instead of hardlinking during mutations/replaces/detaches and so on.

assign_part_uuids

Type: Bool
Default: 0

When enabled, a unique part identifier will be assigned for every new part. Before enabling, check that all replicas support UUID version 4.

async_block_ids_cache_update_wait_ms

Type: Milliseconds
Default: 100

How long each insert iteration will wait for async_block_ids_cache update

async_insert

Type: Bool
Default: 0

If true, data from INSERT query is stored in queue and later flushed to table in background.

background_task_preferred_step_execution_time_ms

Type: Milliseconds
Default: 50

Target execution time for one step of a merge or mutation. Can be exceeded if one step takes longer.

cache_populated_by_fetch

Type: Bool
Default: 0
note

This setting applies only to ClickHouse Cloud.

When cache_populated_by_fetch is disabled (the default setting), new data parts are loaded into the cache only when a query is run that requires those parts.

If enabled, cache_populated_by_fetch will instead cause all nodes to load new data parts from storage into their cache without requiring a query to trigger such an action.

check_delay_period

Type: UInt64
Default: 60

Obsolete setting, does nothing.

check_sample_column_is_correct

Type: Bool
Default: 1

Enables the check at table creation that the data type of the column used for sampling or the sampling expression is correct. The data type must be one of the unsigned integer types: UInt8, UInt16, UInt32, UInt64.

Possible values:

  • true — The check is enabled.
  • false — The check is disabled at table creation.

Default value: true.

By default, the ClickHouse server checks at table creation the data type of a column for sampling or sampling expression. If you already have tables with incorrect sampling expression and do not want the server to raise an exception during startup, set check_sample_column_is_correct to false.

clean_deleted_rows

Type: CleanDeletedRows
Default: Never

Obsolete setting, does nothing.

cleanup_delay_period

Type: UInt64
Default: 30

Minimum period to clean old queue logs, blocks hashes and parts.

cleanup_delay_period_random_add

Type: UInt64
Default: 10

Add uniformly distributed value from 0 to x seconds to cleanup_delay_period to avoid thundering herd effect and subsequent DoS of ZooKeeper in case of very large number of tables.

cleanup_thread_preferred_points_per_iteration

Type: UInt64
Default: 150

Preferred batch size for background cleanup (points are abstract but 1 point is approximately equivalent to 1 inserted block).

cleanup_threads

Type: UInt64
Default: 128

Threads for cleanup of outdated parts. Only available in ClickHouse Cloud

columns_and_secondary_indices_sizes_lazy_calculation

Type: Bool
Default: 1

Calculate columns and secondary indices sizes lazily on first request instead of on table initialization.

columns_to_prewarm_mark_cache

List of columns to prewarm mark cache for (if enabled). Empty means all columns

compact_parts_max_bytes_to_buffer

Type: UInt64
Default: 134217728

Only available in ClickHouse Cloud. Maximal number of bytes to write in a single stripe in compact parts

compact_parts_max_granules_to_buffer

Type: UInt64
Default: 128

Only available in ClickHouse Cloud. Maximal number of granules to write in a single stripe in compact parts

compact_parts_merge_max_bytes_to_prefetch_part

Type: UInt64
Default: 16777216

Only available in ClickHouse Cloud. Maximal size of compact part to read it in a whole to memory during merge.

compatibility_allow_sampling_expression_not_in_primary_key

Type: Bool
Default: 0

Allow creating a table with a sampling expression not in the primary key. This is needed only to temporarily allow running the server with wrong tables for backward compatibility.

compress_marks

Type: Bool
Default: 1

Marks support compression, reducing the mark file size and speeding up network transmission.

compress_primary_key

Type: Bool
Default: 1

The primary key supports compression, reducing the primary key file size and speeding up network transmission.

concurrent_part_removal_threshold

Type: UInt64
Default: 100

Activate concurrent part removal (see 'max_part_removal_threads') only if the number of inactive data parts is at least this.

deduplicate_merge_projection_mode

Type: DeduplicateMergeProjectionMode
Default: throw

Whether to allow creating projections for tables with non-classic MergeTree engines, that is, not (Replicated, Shared) MergeTree. The ignore option exists purely for compatibility and might result in incorrect answers. If projections are allowed, this setting determines the action taken when projections are merged: either drop or rebuild. Classic MergeTree ignores this setting. It also controls OPTIMIZE DEDUPLICATE, and has an effect on all MergeTree family members. Similar to the option lightweight_mutation_projection_mode, it works at the part level.

Possible values:

  • ignore
  • throw
  • drop
  • rebuild

detach_not_byte_identical_parts

Type: Bool
Default: 0

Enables or disables detaching a data part on a replica after a merge or a mutation, if it is not byte-identical to data parts on other replicas. If disabled, the data part is removed. Activate this setting if you want to analyze such parts later.

The setting is applicable to MergeTree tables with enabled data replication.

Possible values:

  • 0 — Parts are removed.
  • 1 — Parts are detached.

detach_old_local_parts_when_cloning_replica

Type: Bool
Default: 1

Do not remove old local parts when repairing lost replica.

Possible values:

  • true
  • false

disable_detach_partition_for_zero_copy_replication

Type: Bool
Default: 1

Disable DETACH PARTITION query for zero copy replication.

disable_fetch_partition_for_zero_copy_replication

Type: Bool
Default: 1

Disable FETCH PARTITION query for zero copy replication.

disable_freeze_partition_for_zero_copy_replication

Type: Bool
Default: 1

Disable FREEZE PARTITION query for zero copy replication.

disk

Name of storage disk. Can be specified instead of storage policy.

enable_block_number_column

Type: Bool
Default: 0

Enable persisting column _block_number for each row.

enable_block_offset_column

Type: Bool
Default: 0

Persists virtual column _block_offset on merges.

enable_index_granularity_compression

Type: Bool
Default: 1

Compress in-memory values of index granularity if possible

enable_max_bytes_limit_for_min_age_to_force_merge

Type: Bool
Default: 0

Whether settings min_age_to_force_merge_seconds and min_age_to_force_merge_on_partition_only should respect setting max_bytes_to_merge_at_max_space_in_pool.

Possible values:

  • true
  • false

enable_mixed_granularity_parts

Type: Bool
Default: 1

Enables or disables transitioning to control the granule size with the index_granularity_bytes setting. Before version 19.11, there was only the index_granularity setting for restricting granule size. The index_granularity_bytes setting improves ClickHouse performance when selecting data from tables with big rows (tens and hundreds of megabytes). If you have tables with big rows, you can enable this setting for the tables to improve the efficiency of SELECT queries.

enable_replacing_merge_with_cleanup_for_min_age_to_force_merge

Experimental feature.
Type: Bool
Default: 0

Whether to use CLEANUP merges for ReplacingMergeTree when merging partitions down to a single part. Requires allow_experimental_replacing_merge_with_cleanup, min_age_to_force_merge_seconds and min_age_to_force_merge_on_partition_only to be enabled.

Possible values:

  • true
  • false

enable_the_endpoint_id_with_zookeeper_name_prefix

Type: Bool
Default: 0

Enable the endpoint id with zookeeper name prefix for the replicated merge tree table.

enable_vertical_merge_algorithm

Type: UInt64
Default: 1

Enable usage of Vertical merge algorithm.

enforce_index_structure_match_on_partition_manipulation

Type: Bool
Default: 0

If this setting is enabled for destination table of a partition manipulation query (ATTACH/MOVE/REPLACE PARTITION), the indices and projections must be identical between the source and destination tables. Otherwise, the destination table can have a superset of the source table's indices and projections.

exclude_deleted_rows_for_part_size_in_merge

Type: Bool
Default: 0

If enabled, estimated actual size of data parts (i.e., excluding those rows that have been deleted through DELETE FROM) will be used when selecting parts to merge. Note that this behavior is only triggered for data parts affected by DELETE FROM executed after this setting is enabled.

Possible values:

  • true
  • false

execute_merges_on_single_replica_time_threshold

Type: Seconds
Default: 0

When this setting has a value greater than zero, only a single replica starts the merge immediately, and other replicas wait up to that amount of time to download the result instead of doing merges locally. If the chosen replica doesn't finish the merge during that amount of time, fallback to standard behavior happens.

Possible values:

  • Any positive integer.

fault_probability_after_part_commit

Type: Float
Default: 0

For testing. Do not change it.

fault_probability_before_part_commit

Type: Float
Default: 0

For testing. Do not change it.

finished_mutations_to_keep

Type: UInt64
Default: 100

How many records about completed mutations to keep. If zero, keep all of them.

force_read_through_cache_for_merges

Experimental feature.
Type: Bool
Default: 0

Force read-through filesystem cache for merges

fsync_after_insert

Type: Bool
Default: 0

Do fsync for every inserted part. Significantly decreases performance of inserts, not recommended to use with wide parts.

fsync_part_directory

Type: Bool
Default: 0

Do fsync for part directory after all part operations (writes, renames, etc.).

in_memory_parts_enable_wal

Type: Bool
Default: 1

Obsolete setting, does nothing.

in_memory_parts_insert_sync

Type: Bool
Default: 0

Obsolete setting, does nothing.

inactive_parts_to_delay_insert

Type: UInt64
Default: 0

If the number of inactive parts in a single partition in the table exceeds the inactive_parts_to_delay_insert value, an INSERT is artificially slowed down.

tip

It is useful when a server fails to clean up parts quickly enough.

Possible values:

  • Any positive integer.

inactive_parts_to_throw_insert

Type: UInt64
Default: 0

If the number of inactive parts in a single partition exceeds the inactive_parts_to_throw_insert value, INSERT is interrupted with the "Too many inactive parts (N). Parts cleaning are processing significantly slower than inserts" exception.

Possible values:

  • Any positive integer.

index_granularity

Type: UInt64
Default: 8192

Maximum number of data rows between the marks of an index, i.e., how many rows correspond to one primary key value.

index_granularity_bytes

Type: UInt64
Default: 10485760

Maximum size of data granules in bytes.

To restrict the granule size only by number of rows, set to 0 (not recommended).

initialization_retry_period

Type: Seconds
Default: 60

Retry period for table initialization, in seconds.

kill_delay_period

Type: UInt64
Default: 30

Obsolete setting, does nothing.

kill_delay_period_random_add

Type: UInt64
Default: 10

Obsolete setting, does nothing.

kill_threads

Type: UInt64
Default: 128

Obsolete setting, does nothing.

lightweight_mutation_projection_mode

Type: LightweightMutationProjectionMode
Default: throw

By default, lightweight delete DELETE does not work for tables with projections, because rows in a projection may be affected by a DELETE operation. So the default value is throw. However, this option can change the behavior: with the value drop or rebuild, deletes will work with projections. drop deletes the projection, so the current query might be fast because the projection is removed, but future queries may be slow because no projection is attached. rebuild rebuilds the projection, which might affect the performance of the current query but might speed up future queries. Conveniently, these options only work at the part level: projections in parts that are not touched stay intact instead of being dropped or rebuilt.

Possible values:

  • throw
  • drop
  • rebuild

load_existing_rows_count_for_old_parts

Type: Bool
Default: 0

If enabled along with exclude_deleted_rows_for_part_size_in_merge, the deleted rows count for existing data parts will be calculated during table startup. Note that it may slow down table loading at startup.

Possible values:

  • true
  • false

lock_acquire_timeout_for_background_operations

Type: Seconds
Default: 120

How many seconds to wait before failing to acquire table locks for background operations like merges and mutations.

marks_compress_block_size

Type: UInt64
Default: 65536

Mark compress block size, the actual size of the block to compress.

marks_compression_codec

Type: String
Default: ZSTD(3)

Compression codec used by marks; marks are small enough and cached, so the default compression is ZSTD(3).

materialize_skip_indexes_on_merge

Type: Bool
Default: 1

When enabled, merges build and store skip indices for new parts. Otherwise they can be created/stored by explicit MATERIALIZE INDEX

materialize_ttl_recalculate_only

Type: Bool
Default: 0

Only recalculate TTL info when running MATERIALIZE TTL.

max_avg_part_size_for_too_many_parts

Type: UInt64
Default: 1073741824

The 'too many parts' check according to 'parts_to_delay_insert' and 'parts_to_throw_insert' will be active only if the average part size (in the relevant partition) is not larger than the specified threshold. If it is larger than the specified threshold, INSERTs will be neither delayed nor rejected. This makes it possible to have hundreds of terabytes in a single table on a single server if the parts are successfully merged into larger parts. This does not affect the thresholds on inactive parts or total parts.

max_bytes_to_merge_at_max_space_in_pool

Type: UInt64
Default: 161061273600

The maximum total parts size (in bytes) to be merged into one part, if there are enough resources available. Corresponds roughly to the maximum possible part size created by an automatic background merge.

Possible values:

  • Any positive integer.

The merge scheduler periodically analyzes the sizes and number of parts in partitions, and if there are enough free resources in the pool, it starts background merges. Merges occur until the total size of the source parts is larger than max_bytes_to_merge_at_max_space_in_pool.

Merges initiated by OPTIMIZE FINAL ignore max_bytes_to_merge_at_max_space_in_pool (only the free disk space is taken into account).

max_bytes_to_merge_at_min_space_in_pool

Type: UInt64
Default: 1048576

The maximum total part size (in bytes) to be merged into one part, with the minimum available resources in the background pool.

Possible values:

  • Any positive integer.

max_bytes_to_merge_at_min_space_in_pool defines the maximum total size of parts which can be merged despite the lack of available disk space (in pool). This is necessary to reduce the number of small parts and the chance of Too many parts errors. Merges book disk space by doubling the total merged parts sizes. Thus, with a small amount of free disk space, a situation may occur in which there is free space, but this space is already booked by ongoing large merges, so other merges are unable to start, and the number of small parts grows with every insert.

max_cleanup_delay_period

Type: UInt64
Default: 300

Maximum period to clean old queue logs, blocks hashes and parts.

max_compress_block_size

Type: UInt64
Default: 0

Maximum size of blocks of uncompressed data before compressing for writing to a table. You can also specify this setting in the global settings (see max_compress_block_size setting). The value specified when the table is created overrides the global value for this setting.

max_concurrent_queries

Type: UInt64
Default: 0

Max number of concurrently executed queries related to the MergeTree table. Queries will still be limited by other max_concurrent_queries settings.

Possible values:

  • Positive integer.
  • 0 — No limit.

Default value: 0 (no limit).

Example
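
A minimal sketch, assuming a hypothetical table tab and a limit of 50 concurrent queries:

```sql
ALTER TABLE tab MODIFY SETTING max_concurrent_queries = 50;
```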

max_delay_to_insert

Type: UInt64
Default: 1

The value in seconds, which is used to calculate the INSERT delay, if the number of active parts in a single partition exceeds the parts_to_delay_insert value.

Possible values:

  • Any positive integer.

The delay (in milliseconds) for INSERT is calculated by the formula:
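
```text
delay_milliseconds = pow(max_delay_to_insert * 1000,
    (1 + parts_count_in_partition - parts_to_delay_insert) / (parts_to_throw_insert - parts_to_delay_insert))
```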

For example, if a partition has 299 active parts and parts_to_throw_insert = 300, parts_to_delay_insert = 150, max_delay_to_insert = 1, INSERT is delayed for pow( 1 * 1000, (1 + 299 - 150) / (300 - 150) ) = 1000 milliseconds.

Starting from version 23.1, the formula has been changed to:
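
```text
delay_milliseconds = max(min_delay_to_insert_ms,
    max_delay_to_insert * 1000 * (parts_count_in_partition - parts_to_delay_insert + 1) / (parts_to_throw_insert - parts_to_delay_insert))
```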

For example, if a partition has 224 active parts and parts_to_throw_insert = 300, parts_to_delay_insert = 150, max_delay_to_insert = 1, min_delay_to_insert_ms = 10, INSERT is delayed for max( 10, 1 * 1000 * (224 - 150 + 1) / (300 - 150) ) = 500 milliseconds.

max_delay_to_mutate_ms

Type: UInt64
Default: 1000

Max delay of mutating MergeTree table in milliseconds, if there are a lot of unfinished mutations

max_digestion_size_per_segment

Type: UInt64
Default: 268435456

Max number of bytes to digest per segment to build GIN index.

max_file_name_length

Type: UInt64
Default: 127

The maximal length of the file name to keep it as is without hashing. Takes effect only if setting replace_long_file_name_to_hash is enabled. The value of this setting does not include the length of file extension. So, it is recommended to set it below the maximum filename length (usually 255 bytes) with some gap to avoid filesystem errors.

max_files_to_modify_in_alter_columns

Type: UInt64
Default: 75

Do not apply ALTER if the number of files for modification (deletion, addition) is greater than this setting.

Possible values:

  • Any positive integer.

Default value: 75

max_files_to_remove_in_alter_columns

Type: UInt64
Default: 50

Do not apply ALTER, if the number of files for deletion is greater than this setting.

Possible values:

  • Any positive integer.

max_merge_delayed_streams_for_parallel_write

Type: UInt64
Default: 40

The maximum number of streams (columns) that can be flushed in parallel (analog of max_insert_delayed_streams_for_parallel_write for merges). Works only for Vertical merges.

max_merge_selecting_sleep_ms

Type: UInt64
Default: 60000

Maximum time to wait before trying to select parts to merge again after no parts were selected. A lower setting will trigger selecting tasks in background_schedule_pool frequently, which results in a large number of requests to ZooKeeper in large-scale clusters.

max_number_of_merges_with_ttl_in_pool

Type: UInt64
Default: 2

When there are more than the specified number of merges with TTL entries in the pool, do not assign new merges with TTL. This is to leave free threads for regular merges and avoid "Too many parts" errors.

max_number_of_mutations_for_replica

Type: UInt64
Default: 0

Limit the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings).

max_part_loading_threads

Type: MaxThreads
Default: 'auto(4)'

Obsolete setting, does nothing.

max_part_removal_threads

Type: MaxThreads
Default: 'auto(4)'

Obsolete setting, does nothing.

max_partitions_to_read

Type: Int64
Default: -1

Limits the maximum number of partitions that can be accessed in one query.

The setting value specified when the table is created can be overridden via query-level setting.

Possible values:

  • Any positive integer.

You can also specify a query complexity setting max_partitions_to_read at a query / session / profile level.

max_parts_in_total

Type: UInt64
Default: 100000

If the total number of active parts in all partitions of a table exceeds the max_parts_in_total value, INSERT is interrupted with the Too many parts (N) exception.

Possible values:

  • Any positive integer.

A large number of parts in a table reduces performance of ClickHouse queries and increases ClickHouse boot time. Most often this is a consequence of an incorrect design (mistakes when choosing a partitioning strategy, such as partitions that are too small).

max_parts_to_merge_at_once

Type: UInt64
Default: 100

Max amount of parts which can be merged at once (0 - disabled). Doesn't affect OPTIMIZE FINAL query.

max_postpone_time_for_failed_mutations_ms

Type: UInt64
Default: 300000

The maximum postpone time for failed mutations.

max_projections

Type: UInt64
Default: 25

The maximum number of merge tree projections.

max_replicated_fetches_network_bandwidth

Type: UInt64
Default: 0

Limits the maximum speed of data exchange over the network in bytes per second for replicated fetches. This setting is applied to a particular table, unlike the max_replicated_fetches_network_bandwidth_for_server setting, which is applied to the server.

You can limit both server network and network for a particular table, but for this the value of the table-level setting should be less than server-level one. Otherwise the server considers only the max_replicated_fetches_network_bandwidth_for_server setting.

The setting isn't followed perfectly accurately.

Possible values:

  • Positive integer.
  • 0 — Unlimited.

Default value: 0.

Usage

Could be used for throttling speed when replicating data to add or replace new nodes.

max_replicated_logs_to_keep

Type: UInt64
Default: 1000

How many records may be in the ClickHouse Keeper log if there is an inactive replica. An inactive replica becomes lost when this number is exceeded.

Possible values:

  • Any positive integer.

max_replicated_merges_in_queue

Type: UInt64
Default: 1000

How many tasks of merging and mutating parts are allowed simultaneously in ReplicatedMergeTree queue.

max_replicated_merges_with_ttl_in_queue

Type: UInt64
Default: 1

How many tasks of merging parts with TTL are allowed simultaneously in ReplicatedMergeTree queue.

max_replicated_mutations_in_queue

Type: UInt64
Default: 8

How many tasks of mutating parts are allowed simultaneously in ReplicatedMergeTree queue.

max_replicated_sends_network_bandwidth

Type: UInt64
Default: 0

Limits the maximum speed of data exchange over the network in bytes per second for replicated sends. This setting is applied to a particular table, unlike the max_replicated_sends_network_bandwidth_for_server setting, which is applied to the server.

You can limit both server network and network for a particular table, but for this the value of the table-level setting should be less than server-level one. Otherwise the server considers only the max_replicated_sends_network_bandwidth_for_server setting.

The setting isn't followed perfectly accurately.

Possible values:

  • Positive integer.
  • 0 — Unlimited.

Usage

Could be used for throttling speed when replicating data to add or replace new nodes.

max_suspicious_broken_parts

Type: UInt64
Default: 100

If the number of broken parts in a single partition exceeds the max_suspicious_broken_parts value, automatic deletion is denied.

Possible values:

  • Any positive integer.

max_suspicious_broken_parts_bytes

Type: UInt64
Default: 1073741824

Max size of all broken parts; if exceeded, automatic deletion is denied.

Possible values:

  • Any positive integer.

merge_max_block_size

Type: UInt64
Default: 8192

The number of rows that are read from the merged parts into memory.

Possible values:

  • Any positive integer.

Merge reads rows from parts in blocks of merge_max_block_size rows, then merges and writes the result into a new part. The read block is placed in RAM, so merge_max_block_size affects the size of the RAM required for the merge. Thus, merges can consume a large amount of RAM for tables with very wide rows (if the average row size is 100kb, then when merging 10 parts, (100kb * 10 * 8192) = ~ 8GB of RAM). By decreasing merge_max_block_size, you can reduce the amount of RAM required for a merge but slow down a merge.

merge_max_block_size_bytes

Type: UInt64
Default: 10485760

How many bytes in blocks should be formed for merge operations. By default has the same value as index_granularity_bytes.

merge_max_bytes_to_prewarm_cache

Type: UInt64
Default: 1073741824

Only available in ClickHouse Cloud. Maximal size of part (compact or packed) to prewarm cache during merge.

merge_selecting_sleep_ms

Type: UInt64
Default: 5000

Minimum time to wait before trying to select parts to merge again after no parts were selected. A lower setting will trigger selecting tasks in background_schedule_pool frequently, which results in a large number of requests to ZooKeeper in large-scale clusters.

merge_selecting_sleep_slowdown_factor

Type: Float
Default: 1.2

The sleep time for the merge selecting task is multiplied by this factor when there is nothing to merge and divided by it when a merge is assigned.

merge_selector_algorithm

Experimental feature.
Type: MergeSelectorAlgorithm
Default: Simple

The algorithm to select parts for merges assignment

merge_selector_base

Type: Float
Default: 5

Affects write amplification of assigned merges (expert level setting, don't change if you don't understand what it is doing). Works for Simple and StochasticSimple merge selectors

merge_selector_blurry_base_scale_factor

Type: UInt64
Default: 0

Controls when the logic kicks in relative to the number of parts in a partition. The bigger the factor, the more delayed the reaction will be.

merge_selector_enable_heuristic_to_remove_small_parts_at_right

Type: Bool
Default: 1

Enable heuristic for selecting parts for merge which removes parts from right side of range, if their size is less than specified ratio (0.01) of sum_size. Works for Simple and StochasticSimple merge selectors

merge_selector_window_size

Type: UInt64
Default: 1000

How many parts to look at once.

merge_total_max_bytes_to_prewarm_cache

Type: UInt64
Default: 16106127360

Only available in ClickHouse Cloud. Maximal size of parts in total to prewarm cache during merge.

merge_tree_clear_old_broken_detached_parts_ttl_timeout_seconds

Type: UInt64
Default: 2592000

Obsolete setting, does nothing.

merge_tree_clear_old_parts_interval_seconds

Type: UInt64
Default: 1

Sets the interval in seconds for ClickHouse to execute the cleanup of old parts, WALs, and mutations.

Possible values:

  • Any positive integer.

merge_tree_clear_old_temporary_directories_interval_seconds

Type: UInt64
Default: 60

Sets the interval in seconds for ClickHouse to execute the cleanup of old temporary directories.

Possible values:

  • Any positive integer.

merge_tree_enable_clear_old_broken_detached

Type: UInt64
Default: 0

Obsolete setting, does nothing.

merge_with_recompression_ttl_timeout

Type: Int64
Default: 14400

Minimum delay in seconds before repeating a merge with recompression TTL.

merge_with_ttl_timeout

Type: Int64
Default: 14400

Minimum delay in seconds before repeating a merge with delete TTL.

merge_workload

Used to regulate how resources are utilized and shared between merges and other workloads. Specified value is used as workload setting value for background merges of this table. If not specified (empty string), then server setting merge_workload is used instead.

min_absolute_delay_to_close

Type: UInt64
Default: 0

Minimal absolute delay to close, stop serving requests and not return Ok during status check.

min_age_to_force_merge_on_partition_only

Type: Bool
Default: 0

Whether min_age_to_force_merge_seconds should be applied only to the entire partition and not to a subset of its parts.

By default, ignores setting max_bytes_to_merge_at_max_space_in_pool (see enable_max_bytes_limit_for_min_age_to_force_merge).

Possible values:

  • true, false

min_age_to_force_merge_seconds

Type: UInt64
Default: 0

Merge parts if every part in the range is older than the value of min_age_to_force_merge_seconds.

By default, ignores setting max_bytes_to_merge_at_max_space_in_pool (see enable_max_bytes_limit_for_min_age_to_force_merge).

Possible values:

  • Positive integer.

min_bytes_for_compact_part

Type: UInt64
Default: 0

Obsolete setting, does nothing.

min_bytes_for_full_part_storage

Type: UInt64
Default: 0

Only available in ClickHouse Cloud. Minimal uncompressed size in bytes to use full type of storage for data part instead of packed

min_bytes_for_wide_part

Type: UInt64
Default: 10485760

Minimum number of bytes/rows in a data part that can be stored in Wide format. You can set one, both or none of these settings.

min_bytes_to_prewarm_caches

Type: UInt64
Default: 0

Minimal size (uncompressed bytes) to prewarm mark cache and primary index cache for new parts

min_bytes_to_rebalance_partition_over_jbod

Type: UInt64
Default: 0

Sets the minimal number of bytes to enable balancing when distributing new big parts over JBOD volume disks.

Possible values:

  • Positive integer.
  • 0 — Balancing is disabled.

Usage

The value of the min_bytes_to_rebalance_partition_over_jbod setting should not be less than the value of the max_bytes_to_merge_at_max_space_in_pool / 1024. Otherwise, ClickHouse throws an exception.

min_compress_block_size

Type: UInt64
Default: 0

Minimum size of blocks of uncompressed data required for compression when writing the next mark. You can also specify this setting in the global settings (see min_compress_block_size setting). The value specified when the table is created overrides the global value for this setting.

min_compressed_bytes_to_fsync_after_fetch

Type: UInt64
Default: 0

Minimal number of compressed bytes to do fsync for part after fetch (0 - disabled)

min_compressed_bytes_to_fsync_after_merge

Type: UInt64
Default: 0

Minimal number of compressed bytes to do fsync for part after merge (0 - disabled)

min_delay_to_insert_ms

Type: UInt64
Default: 10

Min delay of inserting data into MergeTree table in milliseconds, if there are a lot of unmerged parts in single partition.

min_delay_to_mutate_ms

Type: UInt64
Default: 10

Min delay of mutating MergeTree table in milliseconds, if there are a lot of unfinished mutations

min_free_disk_bytes_to_perform_insert

Type: UInt64
Default: 0

The minimum number of bytes that should be free in disk space in order to insert data. If the number of available free bytes is less than min_free_disk_bytes_to_perform_insert then an exception is thrown and the insert is not executed. Note that this setting:

  • takes into account the keep_free_space_bytes setting.
  • does not take into account the amount of data that will be written by the INSERT operation.
  • is only checked if a positive (non-zero) number of bytes is specified

Possible values:

  • Any positive integer.
note

If both min_free_disk_bytes_to_perform_insert and min_free_disk_ratio_to_perform_insert are specified, ClickHouse respects the value that requires the larger amount of free disk space to perform inserts.

min_free_disk_ratio_to_perform_insert

Type: Float
Default: 0

The minimum free to total disk space ratio to perform an INSERT. Must be a floating point value between 0 and 1. Note that this setting:

  • takes into account the keep_free_space_bytes setting.
  • does not take into account the amount of data that will be written by the INSERT operation.
  • is only checked if a positive (non-zero) ratio is specified

Possible values:

  • Float, 0.0 - 1.0

Note that if both min_free_disk_ratio_to_perform_insert and min_free_disk_bytes_to_perform_insert are specified, ClickHouse respects the value that requires the larger amount of free disk space to perform inserts.

min_index_granularity_bytes

Type: UInt64
Default: 1024

Min allowed size of data granules in bytes.

This provides a safeguard against accidentally creating tables with a very low index_granularity_bytes value.

min_marks_to_honor_max_concurrent_queries

Type: UInt64
Default: 0

The minimal number of marks read by the query for applying the max_concurrent_queries setting.

note

Queries will still be limited by other max_concurrent_queries settings.

Possible values:

  • Positive integer.
  • 0 — Disabled (max_concurrent_queries limit applied to no queries).

Example
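
A minimal sketch, assuming a hypothetical table tab: the limit of 50 concurrent queries applies only to queries that read at least 10 marks:

```sql
ALTER TABLE tab MODIFY SETTING
    max_concurrent_queries = 50,
    min_marks_to_honor_max_concurrent_queries = 10;
```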

min_merge_bytes_to_use_direct_io

Type: UInt64
Default: 10737418240

The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds min_merge_bytes_to_use_direct_io bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (O_DIRECT option). If min_merge_bytes_to_use_direct_io = 0, then direct I/O is disabled.

min_parts_to_merge_at_once

Type: UInt64
Default: 0

Minimal amount of data parts which merge selector can pick to merge at once (expert level setting, don't change if you don't understand what it is doing). 0 - disabled. Works for Simple and StochasticSimple merge selectors.

min_relative_delay_to_close

Type: UInt64
Default: 300

Minimal delay from other replicas to close, stop serving requests and not return Ok during status check.

min_relative_delay_to_measure

Type: UInt64
Default: 120

Calculate relative replica delay only if the absolute delay is not less than this value.

min_relative_delay_to_yield_leadership

Type: UInt64
Default: 120

Obsolete setting, does nothing.

min_replicated_logs_to_keep

Type: UInt64
Default: 10

Keep about this number of the last records in the ZooKeeper log, even if they are obsolete. It doesn't affect the operation of tables: it is used only to diagnose the ZooKeeper log before cleaning.

Possible values:

  • Any positive integer.

min_rows_for_compact_part

Type: UInt64
Default: 0

Obsolete setting, does nothing.

min_rows_for_full_part_storage

Type: UInt64
Default: 0

Only available in ClickHouse Cloud. Minimal number of rows to use full type of storage for data part instead of packed

min_rows_for_wide_part

Type: UInt64
Default: 0

Minimal number of rows to create part in wide format instead of compact

min_rows_to_fsync_after_merge

Type: UInt64
Default: 0

Minimal number of rows to do fsync for part after merge (0 - disabled)

mutation_workload

Used to regulate how resources are utilized and shared between mutations and other workloads. Specified value is used as workload setting value for background mutations of this table. If not specified (empty string), then server setting mutation_workload is used instead.

non_replicated_deduplication_window

Type: UInt64
Default: 0

The number of the most recently inserted blocks in the non-replicated MergeTree table for which hash sums are stored to check for duplicates.

Possible values:

  • Any positive integer.
  • 0 (disable deduplication).

A deduplication mechanism is used, similar to replicated tables (see replicated_deduplication_window setting). The hash sums of the created parts are written to a local file on a disk.

notify_newest_block_number

Experimental feature.
Type: Bool
Default: 0

Notify newest block number to SharedJoin or SharedSet. Only in ClickHouse Cloud.

number_of_free_entries_in_pool_to_execute_mutation

Type: UInt64
Default: 20

When there are fewer than the specified number of free entries in the pool, do not execute part mutations. This is to leave free threads for regular merges and to avoid "Too many parts" errors.

Possible values:

  • Any positive integer.

Usage

The value of the number_of_free_entries_in_pool_to_execute_mutation setting should be less than the value of the background_pool_size

number_of_free_entries_in_pool_to_execute_optimize_entire_partition

Type: UInt64
Default: 25

When there are fewer than the specified number of free entries in the pool, do not execute optimizing an entire partition in the background (this task is generated when min_age_to_force_merge_seconds is set and min_age_to_force_merge_on_partition_only is enabled). This is to leave free threads for regular merges and avoid "Too many parts" errors.

Possible values:

  • Positive integer.

The value of the number_of_free_entries_in_pool_to_execute_optimize_entire_partition setting should be less than the value of the background_pool_size

number_of_free_entries_in_pool_to_lower_max_size_of_merge

Type: UInt64
Default: 8

When there are fewer than the specified number of free entries in the pool (or replicated queue), start to lower the maximum size of merges to process (or to put in the queue). This allows small merges to proceed rather than filling the pool with long-running merges.

Possible values:

  • Any positive integer.

number_of_mutations_to_delay

Type: UInt64
Default: 500

If the table has at least that many unfinished mutations, artificially slow down mutations of the table. Disabled if set to 0.

number_of_mutations_to_throw

Type: UInt64
Default: 1000

If the table has at least that many unfinished mutations, throw a 'Too many mutations' exception. Disabled if set to 0.

number_of_partitions_to_consider_for_merge

Type: UInt64
Default: 10

Only available in ClickHouse Cloud. Up to the top N partitions to consider for merge. Partitions are picked in a randomly weighted way, where the weight is the number of data parts that can be merged in the partition.

old_parts_lifetime

Type: Seconds
Default: 480

The time (in seconds) of storing inactive parts to protect against data loss during spontaneous server reboots.

Possible values:

  • Any positive integer.

After merging several parts into a new part, ClickHouse marks the original parts as inactive and deletes them only after old_parts_lifetime seconds. Inactive parts are removed if they are not used by current queries, i.e. if the refcount of the part is 1.

fsync is not called for new parts, so for some time new parts exist only in the server's RAM (OS cache). If the server is rebooted spontaneously, new parts can be lost or damaged. To protect data, inactive parts are not deleted immediately.

During startup ClickHouse checks the integrity of the parts. If the merged part is damaged ClickHouse returns the inactive parts to the active list, and later merges them again. Then the damaged part is renamed (the broken_ prefix is added) and moved to the detached folder. If the merged part is not damaged, then the original inactive parts are renamed (the ignored_ prefix is added) and moved to the detached folder.

The default dirty_expire_centisecs value (a Linux kernel setting) is 30 seconds (the maximum time that written data is stored only in RAM), but under heavy loads on the disk system data can be written much later. Experimentally, a value of 480 seconds was chosen for old_parts_lifetime, during which a new part is guaranteed to be written to disk.

optimize_row_order

Type: Bool
Default: 0

Controls whether the row order should be optimized during inserts to improve the compressibility of the newly inserted table part.

Only has an effect for ordinary MergeTree-engine tables. Does nothing for specialized MergeTree engine tables (e.g. CollapsingMergeTree).

MergeTree tables are (optionally) compressed using compression codecs. Generic compression codecs such as LZ4 and ZSTD achieve maximum compression rates if the data exposes patterns. Long runs of the same value typically compress very well.

If this setting is enabled, ClickHouse attempts to store the data in newly inserted parts in a row order that minimizes the number of equal-value runs across the columns of the new table part. In other words, a small number of equal-value runs means that individual runs are long and compress well.

Finding the optimal row order is computationally infeasible (NP-hard). Therefore, ClickHouse uses heuristics to quickly find a row order which still improves compression rates over the original row order.

Heuristics for finding a row order

It is generally possible to shuffle the rows of a table (or table part) freely as SQL considers the same table (table part) in different row order equivalent.

This freedom of shuffling rows is restricted when a primary key is defined for the table. In ClickHouse, a primary key C1, C2, ..., CN enforces that the table rows are sorted by columns C1, C2, ..., Cn (clustered index). As a result, rows can only be shuffled within "equivalence classes" of rows, i.e. rows which have the same values in their primary key columns. The intuition is that primary keys with high cardinality, e.g. primary keys involving a DateTime64 timestamp column, lead to many small equivalence classes. Conversely, tables with a low-cardinality primary key create few and large equivalence classes. A table with no primary key represents the extreme case of a single equivalence class which spans all rows.

The fewer and the larger the equivalence classes are, the higher the degree of freedom when re-shuffling rows.

The heuristic applied to find the best row order within each equivalence class was suggested by D. Lemire and O. Kaser in Reordering columns for smaller indexes and is based on sorting the rows within each equivalence class by ascending cardinality of the non-primary-key columns.

It performs three steps:

  1. Find all equivalence classes based on the row values in primary key columns.
  2. For each equivalence class, calculate (usually estimate) the cardinalities of the non-primary-key columns.
  3. For each equivalence class, sort the rows in order of ascending non-primary-key column cardinality.

If enabled, insert operations incur additional CPU costs to analyze and optimize the row order of the new data. INSERTs are expected to take 30-50% longer depending on the data characteristics. Compression rates of LZ4 or ZSTD improve on average by 20-40%.

This setting works best for tables with no primary key or a low-cardinality primary key, i.e. a table with only few distinct primary key values. High-cardinality primary keys, e.g. involving timestamp columns of type DateTime64, are not expected to benefit from this setting.
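
A minimal sketch of enabling the setting; the table and its columns are illustrative, with a low-cardinality sorting key so that the equivalence classes are few and large:

```sql
CREATE TABLE events
(
    `event_type` LowCardinality(String),  -- low-cardinality sorting key
    `browser` String,                     -- non-key columns are sorted by ascending
    `url` String                          -- cardinality within each equivalence class
)
ENGINE = MergeTree
ORDER BY event_type
SETTINGS optimize_row_order = 1;
```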

part_moves_between_shards_delay_seconds

Experimental feature.
Type: UInt64
Default: 30

Time to wait before/after moving parts between shards.

part_moves_between_shards_enable

Experimental feature.
Type: UInt64
Default: 0

Experimental/Incomplete feature to move parts between shards. Does not take into account sharding expressions.

parts_to_delay_insert

Type: UInt64
Default: 1000

If the number of active parts in a single partition exceeds the parts_to_delay_insert value, an INSERT is artificially slowed down.

Possible values:

  • Any positive integer.

ClickHouse artificially executes INSERT longer (adds 'sleep') so that the background merge process can merge parts faster than they are added.

parts_to_throw_insert

Type: UInt64
Default: 3000

If the number of active parts in a single partition exceeds the parts_to_throw_insert value, INSERT is interrupted with the Too many parts (N). Merges are processing significantly slower than inserts exception.

Possible values:

  • Any positive integer.

To achieve maximum performance of SELECT queries, it is necessary to minimize the number of parts processed, see Merge Tree.

Prior to version 23.6 this setting was set to 300. You can set a different, higher value; it will reduce the probability of the Too many parts error, but at the same time SELECT performance might degrade. Also, in case of a merge issue (for example, due to insufficient disk space), you will notice it later than you would with the original 300.

prefer_fetch_merged_part_size_threshold

Type: UInt64
Default: 10737418240

If the sum of the size of parts exceeds this threshold and the time since a replication log entry creation is greater than prefer_fetch_merged_part_time_threshold, then prefer fetching merged part from a replica instead of doing merge locally. This is to speed up very long merges.

Possible values:

  • Any positive integer.

prefer_fetch_merged_part_time_threshold

Type: Seconds
Default: 3600

If the time passed since a replication log (ClickHouse Keeper or ZooKeeper) entry creation exceeds this threshold, and the sum of the size of parts is greater than prefer_fetch_merged_part_size_threshold, then prefer fetching merged part from a replica instead of doing merge locally. This is to speed up very long merges.

Possible values:

  • Any positive integer.

prewarm_mark_cache

Type: Bool
Default: 0

If true, the mark cache will be prewarmed by saving marks to the mark cache on inserts, merges, fetches, and on server startup.

prewarm_primary_key_cache

Type: Bool
Default: 0

If true, the primary index cache will be prewarmed by saving the primary index to the primary index cache on inserts, merges, fetches, and on server startup.

primary_key_compress_block_size

Type: UInt64
Default: 65536

Primary key compress block size, the actual size of the block to compress.

primary_key_compression_codec

Type: String
Default: ZSTD(3)

Compression codec used by the primary key; the primary key is small enough and cached, so the default compression is ZSTD(3).

primary_key_lazy_load

Type: Bool
Default: 1

Load primary key in memory on first use instead of on table initialization. This can save memory in the presence of a large number of tables.

primary_key_ratio_of_unique_prefix_values_to_skip_suffix_columns

Type: Float
Default: 0.9

If the value of a primary key column changes in at least this ratio of rows in a data part, skip loading the next columns in memory. This saves memory by not loading unneeded primary key columns.

ratio_of_defaults_for_sparse_serialization

Type: Float
Default: 0.9375

Minimal ratio of the number of default values to the number of all values in a column. Setting this value causes the column to be stored using sparse serialization.

If a column is sparse (contains mostly zeros), ClickHouse can encode it in a sparse format and automatically optimize calculations - the data does not require full decompression during queries. To enable this sparse serialization, define the ratio_of_defaults_for_sparse_serialization setting to be less than 1.0. If the value is greater than or equal to 1.0, then the columns will be always written using the normal full serialization.

Possible values:

  • Float between 0 and 1 to enable sparse serialization
  • 1.0 (or greater) if you do not want to use sparse serialization

Example

Notice the s column in the following table is an empty string for 95% of the rows. In my_regular_table we do not use sparse serialization, and in my_sparse_table we set ratio_of_defaults_for_sparse_serialization to 0.95:
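
```sql
CREATE TABLE my_regular_table
(
    `id` UInt64,
    `s` String
)
ENGINE = MergeTree
ORDER BY id;

CREATE TABLE my_sparse_table
(
    `id` UInt64,
    `s` String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS ratio_of_defaults_for_sparse_serialization = 0.95;

-- a sketch: populate both tables with data where `s` is empty for ~95% of rows
```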

With such data, the s column in my_sparse_table uses less storage space on disk.

You can verify if a column is using the sparse encoding by viewing the serialization_kind column of the system.parts_columns table:
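
```sql
SELECT column, serialization_kind
FROM system.parts_columns
WHERE table LIKE 'my_sparse_table';
```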

The result shows which parts of s were stored using the sparse serialization.

reduce_blocking_parts_sleep_ms

Type: UInt64
Default: 5000

Only available in ClickHouse Cloud. Minimum time to wait before trying to reduce blocking parts again after no ranges were dropped/replaced. A lower setting will trigger tasks in background_schedule_pool frequently, which results in a large number of requests to ZooKeeper in large-scale clusters.

remote_fs_execute_merges_on_single_replica_time_threshold

Type: Seconds
Default: 10800

When this setting has a value greater than zero, only a single replica starts the merge immediately if the merged part is on shared storage and allow_remote_fs_zero_copy_replication is enabled.

note

Zero-copy replication is not ready for production. Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher.

This feature is not recommended for production use.

Possible values:

  • Any positive integer.

remote_fs_zero_copy_path_compatible_mode

Experimental feature.
Type: Bool
Default: 0

Run zero-copy in compatible mode during conversion process.

remote_fs_zero_copy_zookeeper_path

Experimental feature.
Type: String
Default: /clickhouse/zero_copy

ZooKeeper path for zero-copy table-independent info.

remove_empty_parts

Type: Bool
Default: 1

Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm.

remove_rolled_back_parts_immediately

Experimental feature.
Type: Bool
Default: 1

Setting for an incomplete experimental feature.

replace_long_file_name_to_hash

Type: Bool
Default: 1

If the file name for a column is too long (more than 'max_file_name_length' bytes), replace it with a SipHash128 hash.

replicated_can_become_leader

Type: Bool
Default: 1

If true, replicas of replicated tables on this node will try to acquire leadership.

Possible values:

  • true
  • false

replicated_deduplication_window

Type: UInt64
Default: 1000

The number of most recently inserted blocks for which ClickHouse Keeper stores hash sums to check for duplicates.

Possible values:

  • Any positive integer.
  • 0 (disable deduplication)

The Insert command creates one or more blocks (parts). For insert deduplication, when writing into replicated tables, ClickHouse writes the hash sums of the created parts into ClickHouse Keeper. Hash sums are stored only for the most recent replicated_deduplication_window blocks. The oldest hash sums are removed from ClickHouse Keeper.

A large number for replicated_deduplication_window slows down Inserts because more entries need to be compared. The hash sum is calculated from the composition of the field names and types and the data of the inserted part (stream of bytes).

replicated_deduplication_window_for_async_inserts

Type: UInt64
Default: 10000

The number of most recently async inserted blocks for which ClickHouse Keeper stores hash sums to check for duplicates.

Possible values:

  • Any positive integer.
  • 0 (disable deduplication for async_inserts)

The Async Insert command will be cached in one or more blocks (parts). For insert deduplication, when writing into replicated tables, ClickHouse writes the hash sums of each insert into ClickHouse Keeper. Hash sums are stored only for the most recent replicated_deduplication_window_for_async_inserts blocks. The oldest hash sums are removed from ClickHouse Keeper. A large number of replicated_deduplication_window_for_async_inserts slows down Async Inserts because it needs to compare more entries. The hash sum is calculated from the composition of the field names and types and the data of the insert (stream of bytes).

replicated_deduplication_window_seconds

Type: UInt64
Default: 604800

The number of seconds after which the hash sums of the inserted blocks are removed from ClickHouse Keeper.

Possible values:

  • Any positive integer.

Similar to replicated_deduplication_window, replicated_deduplication_window_seconds specifies how long to store hash sums of blocks for insert deduplication. Hash sums older than replicated_deduplication_window_seconds are removed from ClickHouse Keeper, even if they are less than replicated_deduplication_window.

The time is relative to the time of the most recent record, not to the wall time. If it's the only record it will be stored forever.

replicated_deduplication_window_seconds_for_async_inserts

Type: UInt64
Default: 604800

The number of seconds after which the hash sums of the async inserts are removed from ClickHouse Keeper.

Possible values:

  • Any positive integer.

Similar to replicated_deduplication_window_for_async_inserts, replicated_deduplication_window_seconds_for_async_inserts specifies how long to store hash sums of blocks for async insert deduplication. Hash sums older than replicated_deduplication_window_seconds_for_async_inserts are removed from ClickHouse Keeper, even if they are less than replicated_deduplication_window_for_async_inserts.

The time is relative to the time of the most recent record, not to the wall time. If it's the only record it will be stored forever.

replicated_fetches_http_connection_timeout

Type: Seconds
Default: 0

Obsolete setting, does nothing.

replicated_fetches_http_receive_timeout

Type: Seconds
Default: 0

Obsolete setting, does nothing.

replicated_fetches_http_send_timeout

Type: Seconds
Default: 0

Obsolete setting, does nothing.

replicated_max_mutations_in_one_entry

Type: UInt64
Default: 10000

Max number of mutation commands that can be merged together and executed in one MUTATE_PART entry (0 means unlimited)

replicated_max_parallel_fetches

Type: UInt64
Default: 0

Obsolete setting, does nothing.

replicated_max_parallel_fetches_for_host

Type: UInt64
Default: 15

Obsolete setting, does nothing.

replicated_max_parallel_fetches_for_table

Type: UInt64
Default: 0

Obsolete setting, does nothing.

replicated_max_parallel_sends

Type: UInt64
Default: 0

Obsolete setting, does nothing.

replicated_max_parallel_sends_for_table

Type: UInt64
Default: 0

Obsolete setting, does nothing.

replicated_max_ratio_of_wrong_parts

Type: Float
Default: 0.5

If the ratio of wrong parts to the total number of parts is less than this value, the table is allowed to start.

Possible values:

  • Float, 0.0 - 1.0

shared_merge_tree_create_per_replica_metadata_nodes

Type: Bool
Default: 1

Enables creation of per-replica /metadata and /columns nodes in ZooKeeper. Only available in ClickHouse Cloud

shared_merge_tree_disable_merges_and_mutations_assignment

Type: Bool
Default: 0

Stop merges assignment for shared merge tree. Only available in ClickHouse Cloud

shared_merge_tree_enable_keeper_parts_extra_data

Experimental feature.
Type: Bool
Default: 0

Enables writing attributes into virtual parts and committing blocks in keeper

shared_merge_tree_enable_outdated_parts_check

Type: Bool
Default: 1

Enable outdated parts check. Only available in ClickHouse Cloud

shared_merge_tree_idle_parts_update_seconds

Type: UInt64
Default: 3600

Interval in seconds for parts update without being triggered by ZooKeeper watch in the shared merge tree. Only available in ClickHouse Cloud

shared_merge_tree_initial_parts_update_backoff_ms

Type: UInt64
Default: 50

Initial backoff for parts update. Only available in ClickHouse Cloud

shared_merge_tree_interserver_http_connection_timeout_ms

Type: UInt64
Default: 100

Timeout in milliseconds for interserver HTTP connections. Only available in ClickHouse Cloud.

shared_merge_tree_interserver_http_timeout_ms

Type: UInt64
Default: 10000

Timeout in milliseconds for interserver HTTP communication. Only available in ClickHouse Cloud.

shared_merge_tree_leader_update_period_random_add_seconds

Type: UInt64
Default: 10

Adds a uniformly distributed value between 0 and this many seconds to shared_merge_tree_leader_update_period to avoid a thundering herd effect. Only available in ClickHouse Cloud.

shared_merge_tree_leader_update_period_seconds

Type: UInt64
Default: 30

Maximum period in seconds to recheck leadership for parts updates. Only available in ClickHouse Cloud.

shared_merge_tree_max_outdated_parts_to_process_at_once

Type: UInt64
Default: 1000

Maximum number of outdated parts the leader will try to confirm for removal in one HTTP request. Only available in ClickHouse Cloud.

shared_merge_tree_max_parts_update_backoff_ms

Type: UInt64
Default: 5000

Maximum backoff in milliseconds for parts updates. Only available in ClickHouse Cloud.

shared_merge_tree_max_parts_update_leaders_in_total

Type: UInt64
Default: 6

Maximum total number of parts update leaders. Only available in ClickHouse Cloud.

shared_merge_tree_max_parts_update_leaders_per_az

Type: UInt64
Default: 2

Maximum number of parts update leaders per availability zone. Only available in ClickHouse Cloud.

shared_merge_tree_max_replicas_for_parts_deletion

Type: UInt64
Default: 10

Maximum number of replicas that will participate in parts deletion (killer thread). Only available in ClickHouse Cloud.

shared_merge_tree_max_replicas_to_merge_parts_for_each_parts_range

Type: UInt64
Default: 5

Maximum number of replicas that will try to assign potentially conflicting merges (this helps avoid redundant conflicts in merge assignment). 0 means disabled. Only available in ClickHouse Cloud.

shared_merge_tree_max_suspicious_broken_parts

Type: UInt64
Default: 0

Maximum number of broken parts for shared merge tree tables; if exceeded, automatic detach is denied.

shared_merge_tree_max_suspicious_broken_parts_bytes

Type: UInt64
Default: 0

Maximum total size in bytes of all broken parts for shared merge tree tables; if exceeded, automatic detach is denied.

shared_merge_tree_memo_ids_remove_timeout_seconds

Type: Int64
Default: 1800

How long insert memoization IDs are stored, in seconds, to avoid incorrect actions during insert retries. Only available in ClickHouse Cloud.

shared_merge_tree_partitions_hint_ratio_to_reload_merge_pred_for_mutations

Type: Float
Default: 0.5

Reloads the merge predicate in the merge/mutate selecting task when the ratio of <candidate partitions for mutations only (partitions that cannot be merged)> to <candidate partitions for mutations> is higher than this setting. Only available in ClickHouse Cloud.

shared_merge_tree_parts_load_batch_size

Type: UInt64
Default: 32

Number of fetch parts metadata jobs to schedule at once. Only available in ClickHouse Cloud.

shared_merge_tree_postpone_next_merge_for_locally_merged_parts_ms

Type: UInt64
Default: 0

Time in milliseconds to keep a locally merged part without starting a new merge containing this part. Gives other replicas a chance to fetch the part and start this merge. Only available in ClickHouse Cloud.

shared_merge_tree_postpone_next_merge_for_locally_merged_parts_rows_threshold

Type: UInt64
Default: 1000000

Minimum size of a part (in rows) to postpone assigning the next merge just after merging it locally. Only available in ClickHouse Cloud.

shared_merge_tree_range_for_merge_window_size

Type: UInt64
Default: 10

Time to keep a locally merged part without starting a new merge containing this part. Gives other replicas a chance to fetch the part and start this merge. Only available in ClickHouse Cloud.

shared_merge_tree_read_virtual_parts_from_leader

Type: Bool
Default: 1

Read virtual parts from the leader when possible. Only available in ClickHouse Cloud.

shared_merge_tree_try_fetch_part_in_memory_data_from_replicas

Type: Bool
Default: 0

If enabled, all replicas try to fetch a part's in-memory data (such as the primary key and partition info) from other replicas where it already exists.

shared_merge_tree_use_metadata_hints_cache

Type: Bool
Default: 1

Enables requesting FS cache hints from the in-memory cache on other replicas. Only available in ClickHouse Cloud.

shared_merge_tree_use_outdated_parts_compact_format

Type: Bool
Default: 0

Use a compact format for outdated parts: it reduces the load on Keeper and improves outdated parts processing. Only available in ClickHouse Cloud.

shared_merge_tree_use_too_many_parts_count_from_virtual_parts

Type: Bool
Default: 0

If enabled, the too-many-parts counter will rely on shared data in Keeper rather than on the local replica state. Only available in ClickHouse Cloud.

simultaneous_parts_removal_limit

Type: UInt64
Default: 0

If there are a lot of outdated parts, the cleanup thread will try to delete up to simultaneous_parts_removal_limit parts during one iteration. A simultaneous_parts_removal_limit of 0 means unlimited.

sleep_before_commit_local_part_in_replicated_table_ms

Type: Milliseconds
Default: 0

For testing. Do not change it.

sleep_before_loading_outdated_parts_ms

Type: UInt64
Default: 0

For testing. Do not change it.

storage_policy

Type: String
Default: default

Name of the storage disk policy.
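As a sketch, assuming a storage policy named hot_and_cold has already been declared in the server configuration (both names here are hypothetical), the setting is typically supplied at table creation:

```sql
-- Hypothetical policy `hot_and_cold` defined in the server configuration;
-- the table places its parts on the disks/volumes of that policy
CREATE TABLE metrics
(
    ts DateTime,
    value Float64
)
ENGINE = MergeTree
ORDER BY ts
SETTINGS storage_policy = 'hot_and_cold';
```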

table_disk

Type: Bool
Default: 0

If enabled, the disk is used as a table disk: the path/endpoint should point to the table data, not to the database data. Can only be set for s3_plain/s3_plain_rewritable/web.

temporary_directories_lifetime

Type: Seconds
Default: 86400

How many seconds to keep tmp_ directories. You should not lower this value, because merges and mutations may not be able to work with a low value of this setting.

try_fetch_recompressed_part_timeout

Type: Seconds
Default: 7200

Timeout (in seconds) before starting a merge with recompression. During this time, ClickHouse tries to fetch the recompressed part from the replica that assigned this merge with recompression.

Recompression is slow in most cases, so we do not start a merge with recompression until this timeout expires, and instead try to fetch the recompressed part from the replica that assigned the merge.

Possible values:

  • Any positive integer.
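A minimal sketch of a table where this timeout matters, assuming a hypothetical logs table with a recompressing TTL (the ZooKeeper path and replica macro follow the usual ReplicatedMergeTree convention):

```sql
-- Hypothetical table: recompress month-old data with a heavier codec.
-- For up to try_fetch_recompressed_part_timeout seconds, a replica tries
-- to fetch the already recompressed part instead of recompressing itself.
CREATE TABLE logs
(
    event_time DateTime,
    message String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/logs', '{replica}')
ORDER BY event_time
TTL event_time + INTERVAL 1 MONTH RECOMPRESS CODEC(ZSTD(17))
SETTINGS try_fetch_recompressed_part_timeout = 7200;
```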

ttl_only_drop_parts

Type: Bool
Default: 0

Controls whether data parts are fully dropped in MergeTree tables when all rows in that part have expired according to their TTL settings.

When ttl_only_drop_parts is disabled (the default), only the rows that have expired based on their TTL settings are removed.

When ttl_only_drop_parts is enabled, the entire part is dropped if all rows in that part have expired according to their TTL settings.
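A sketch of the typical pairing, assuming a hypothetical table partitioned by day so that whole parts tend to expire together:

```sql
-- Hypothetical table: with ttl_only_drop_parts = 1, a part whose rows
-- have all expired is dropped whole, which is much cheaper than
-- rewriting the part to strip expired rows
CREATE TABLE events
(
    event_time DateTime,
    payload String
)
ENGINE = MergeTree
PARTITION BY toDate(event_time)
ORDER BY event_time
TTL event_time + INTERVAL 30 DAY
SETTINGS ttl_only_drop_parts = 1;
```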

use_adaptive_write_buffer_for_dynamic_subcolumns

Type: Bool
Default: 1

Allows using adaptive write buffers when writing dynamic subcolumns, to reduce memory usage.

use_async_block_ids_cache

Type: Bool
Default: 1

If true, we cache the hash sums of the async inserts.

Possible values:

  • true
  • false

A block carrying multiple async inserts generates multiple hash sums. When some of the inserts are duplicates, Keeper returns only one duplicated hash sum per RPC, which causes unnecessary RPC retries. This cache watches the hash sums path in Keeper: when updates are observed there, the cache refreshes as soon as possible, so duplicated inserts can be filtered in memory.
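A hedged sketch of how this setting pairs with async-insert deduplication; the table events is hypothetical, and async_insert and async_insert_deduplicate are session-level settings:

```sql
-- Keep the table-level cache enabled (it is the default)
ALTER TABLE events MODIFY SETTING use_async_block_ids_cache = 1;

-- Session-level: send deduplicated async inserts whose hash sums
-- are then tracked by the cache (assumes a DateTime, String schema)
SET async_insert = 1, async_insert_deduplicate = 1;
INSERT INTO events VALUES (now(), 'hello');
```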

use_compact_variant_discriminators_serialization

Type: Bool
Default: 1

Enables compact mode for binary serialization of discriminators in the Variant data type. This mode uses significantly less memory for storing discriminators in parts when there is mostly a single variant or many NULL values.

use_const_adaptive_granularity

Type: Bool
Default: 0

Always use constant granularity for the whole part. This allows compressing the index granularity values in memory and can be useful in extremely large workloads with thin tables.

use_metadata_cache

Type: Bool
Default: 0

Obsolete setting, does nothing.

use_minimalistic_checksums_in_zookeeper

Type: Bool
Default: 1

Use a small format (dozens of bytes) for part checksums in ZooKeeper instead of the ordinary one (dozens of KB). Before enabling it, check that all replicas support the new format.

use_minimalistic_part_header_in_zookeeper

Type: Bool
Default: 1

Storage method of the data parts headers in ZooKeeper. If enabled, ZooKeeper stores less data. For details, see here.

use_primary_key_cache

Type: Bool
Default: 0

Use a cache for the primary index instead of keeping all indexes in memory. Can be useful for very large tables.

vertical_merge_algorithm_min_bytes_to_activate

Type: UInt64
Default: 0

Minimal (approximate) uncompressed size in bytes of the merging parts required to activate the Vertical merge algorithm.

vertical_merge_algorithm_min_columns_to_activate

Type: UInt64
Default: 11

Minimal number of non-PK columns required to activate the Vertical merge algorithm.

vertical_merge_algorithm_min_rows_to_activate

Type: UInt64
Default: 131072

Minimal (approximate) total number of rows in the merging parts required to activate the Vertical merge algorithm.
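To make the interplay of these thresholds concrete, a sketch with hypothetical names and deliberately lowered limits:

```sql
-- Hypothetical wide table: activate the Vertical merge algorithm sooner
-- (fewer rows) and with fewer non-PK columns than the defaults
CREATE TABLE wide_metrics
(
    id UInt64,
    c1 Float64, c2 Float64, c3 Float64, c4 Float64, c5 Float64
)
ENGINE = MergeTree
ORDER BY id
SETTINGS vertical_merge_algorithm_min_rows_to_activate = 16384,
         vertical_merge_algorithm_min_columns_to_activate = 5;
```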

vertical_merge_remote_filesystem_prefetch

Type: Bool
Default: 1

If true, prefetching of data from the remote filesystem is used for the next column during a merge.

wait_for_unique_parts_send_before_shutdown_ms

Type: Milliseconds
Default: 0

Before shutdown, the table will wait the required amount of time for unique parts (parts that exist only on the current replica) to be fetched by other replicas (0 means disabled).

write_ahead_log_bytes_to_fsync

Type: UInt64
Default: 104857600

Obsolete setting, does nothing.

write_ahead_log_interval_ms_to_fsync

Type: UInt64
Default: 100

Obsolete setting, does nothing.

write_ahead_log_max_bytes

Type: UInt64
Default: 1073741824

Obsolete setting, does nothing.

write_final_mark

Type: Bool
Default: 1

Obsolete setting, does nothing.

zero_copy_concurrent_part_removal_max_postpone_ratio

Type: Float
Default: 0.05

Maximum fraction of top-level parts whose removal may be postponed in order to get smaller independent ranges. It is recommended not to change this.

zero_copy_concurrent_part_removal_max_split_times

Type: UInt64
Default: 5

Maximum recursion depth for splitting independent outdated parts ranges into smaller subranges. It is recommended not to change this.

zero_copy_merge_mutation_min_parts_size_sleep_before_lock

Type: UInt64
Default: 1073741824

If zero-copy replication is enabled, sleep for a random amount of time (depending on the parts size) before trying to take the lock for a merge or mutation.

zero_copy_merge_mutation_min_parts_size_sleep_no_scale_before_lock

Type: UInt64
Default: 0

If zero-copy replication is enabled, sleep for a random amount of time up to 500 ms before trying to take the lock for a merge or mutation.

zookeeper_session_expiration_check_period

Type: Seconds
Default: 60

ZooKeeper session expiration check period, in seconds.

Possible values:

  • Any positive integer.
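As a closing sketch, the effective value of this or any other setting on this page can be inspected with a query like the following:

```sql
-- Inspect effective values and whether they were changed from defaults
SELECT name, value, changed
FROM system.merge_tree_settings
WHERE name IN ('zookeeper_session_expiration_check_period',
               'replicated_deduplication_window_seconds');
```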