This check monitors ClickHouse through the Datadog Agent.
Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.
The ClickHouse check is included in the Datadog Agent package. No additional installation is needed on your server.
To configure this check for an Agent running on a host:
To start collecting your ClickHouse performance data, edit the clickhouse.d/conf.yaml file in the conf.d/ folder at the root of your Agent's configuration directory. See the sample clickhouse.d/conf.yaml for all available configuration options.
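For reference, a minimal sketch of that file might look like the following. The parameter names match those shown in the Autodiscovery table later on this page; the host, port, and credentials are placeholders for your environment (9000 assumes the default ClickHouse native TCP port).
init_config:

instances:
    # Address and native TCP port of the ClickHouse server to monitor
  - server: localhost
    port: 9000
    # Credentials of a ClickHouse user allowed to read system tables
    username: <USER>
    password: <PASSWORD>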
Collecting logs is disabled by default in the Datadog Agent. Enable it in your datadog.yaml file:
logs_enabled: true
Add this configuration block to your clickhouse.d/conf.yaml file to start collecting your ClickHouse logs:
logs:
- type: file
path: /var/log/clickhouse-server/clickhouse-server.log
source: clickhouse
service: "<SERVICE_NAME>"
Change the path and service parameter values to match your environment. See the sample clickhouse.d/conf.yaml for all available configuration options. Restart the Agent to apply the changes.
For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below (a sample pod manifest follows the two tables).
Parameter | Value |
---|---|
<INTEGRATION_NAME> | clickhouse |
<INIT_CONFIG> | blank or {} |
<INSTANCE_CONFIG> | {"server": "%%host%%", "port": "%%port%%", "username": "<USER>", "password": "<PASSWORD>"} |
Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes Log Collection.
Parameter | Value |
---|---|
<LOG_CONFIG> | {"source": "clickhouse", "service": "<SERVICE_NAME>"} |
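As a sketch only, assuming the standard ad.datadoghq.com pod-annotation form of Autodiscovery and a container named clickhouse, the parameters from both tables could be wired up roughly as follows; the pod name and image are illustrative placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: clickhouse
  annotations:
    # The identifier after ad.datadoghq.com/ must match the container name below
    ad.datadoghq.com/clickhouse.check_names: '["clickhouse"]'
    ad.datadoghq.com/clickhouse.init_configs: '[{}]'
    ad.datadoghq.com/clickhouse.instances: '[{"server": "%%host%%", "port": "%%port%%", "username": "<USER>", "password": "<PASSWORD>"}]'
    ad.datadoghq.com/clickhouse.logs: '[{"source": "clickhouse", "service": "<SERVICE_NAME>"}]'
spec:
  containers:
    - name: clickhouse
      image: clickhouse/clickhouse-server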
Run the Agent's status subcommand and look for clickhouse under the Checks section.
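On a host install, this typically means running the command below; the exact invocation and required privileges can vary by platform.
sudo datadog-agent status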
Metric | Description |
---|---|
clickhouse.CompiledExpressionCacheCount (gauge) | Total entries in the cache of JIT-compiled code. Shown as item |
clickhouse.MarkCacheFiles (gauge) | The number of mark files cached in the mark cache. Shown as item |
clickhouse.ReplicasMaxInsertsInQueue (gauge) | Maximum number of INSERT operations in the queue (still to be replicated) across Replicated tables. Shown as item |
clickhouse.ReplicasMaxMergesInQueue (gauge) | Maximum number of merge operations in the queue (still to be applied) across Replicated tables. Shown as item |
clickhouse.ReplicasMaxQueueSize (gauge) | Maximum queue size (in the number of operations like get, merge) across Replicated tables. Shown as item |
clickhouse.ReplicasSumInsertsInQueue (gauge) | Sum of INSERT operations in the queue (still to be replicated) across Replicated tables. Shown as item |
clickhouse.ReplicasSumMergesInQueue (gauge) | Sum of merge operations in the queue (still to be applied) across Replicated tables. Shown as item |
clickhouse.UncompressedCacheBytes (gauge) | Total size of uncompressed cache in bytes. Uncompressed cache does not usually improve the performance and should be mostly avoided. Shown as byte |
clickhouse.UncompressedCacheCells (gauge) | Total number of entries in the uncompressed cache. Each entry represents a decompressed block of data. Uncompressed cache does not usually improve performance and should be mostly avoided. Shown as item |
clickhouse.addresses.active (gauge) | Total count of addresses which are used for creation connections with connection pools |
clickhouse.aggregator.threads (gauge) | Number of threads in the Aggregator thread pool. |
clickhouse.aggregator.threads.active (gauge) | Number of threads in the Aggregator thread pool running a task. |
clickhouse.aggregator.threads.scheduled (gauge) | Number of queued or active jobs in the Aggregator thread pool. |
clickhouse.aio.read.count (count) | Number of reads with Linux or FreeBSD AIO interface. Shown as read |
clickhouse.aio.read.size.count (count) | Number of bytes read with Linux or FreeBSD AIO interface. Shown as byte |
clickhouse.aio.read.size.total (gauge) | Total number of bytes read with Linux or FreeBSD AIO interface. Shown as byte |
clickhouse.aio.read.total (gauge) | Total number of reads with Linux or FreeBSD AIO interface. Shown as read |
clickhouse.aio.write.count (count) | Number of writes with Linux or FreeBSD AIO interface. Shown as write |
clickhouse.aio.write.size.count (count) | Number of bytes written with Linux or FreeBSD AIO interface. Shown as byte |
clickhouse.aio.write.size.total (gauge) | Total number of bytes written with Linux or FreeBSD AIO interface. Shown as byte |
clickhouse.aio.write.total (gauge) | Total number of writes with Linux or FreeBSD AIO interface. Shown as write |
clickhouse.async.read.time (gauge) | Time spent in waiting for asynchronous reads in asynchronous local read. Shown as microsecond |
clickhouse.async.reader.ignored.bytes.count (count) | Number of bytes ignored during asynchronous reading |
clickhouse.async.reader.ignored.bytes.total (gauge) | Number of bytes ignored during asynchronous reading |
clickhouse.async.remote_read.time (gauge) | Time spent in waiting for asynchronous remote reads. Shown as microsecond |
clickhouse.attached.database (gauge) | Active database, used by current and upcoming SELECTs. |
clickhouse.attached.table (gauge) | Active table, used by current and upcoming SELECTs. |
clickhouse.azure.blob_storage.copy_object.count (count) | Number of Azure blob storage API CopyObject calls |
clickhouse.azure.blob_storage.copy_object.total (gauge) | Number of Azure blob storage API CopyObject calls |
clickhouse.azure.blob_storage.delete_object.count (count) | Number of Azure blob storage API DeleteObject(s) calls. |
clickhouse.azure.blob_storage.delete_object.total (gauge) | Number of Azure blob storage API DeleteObject(s) calls. |
clickhouse.azure.blob_storage.list_object.count (count) | Number of Azure blob storage API ListObjects calls. |
clickhouse.azure.blob_storage.list_object.total (gauge) | Number of Azure blob storage API ListObjects calls. |
clickhouse.azure.blob_storage.upload_part.count (count) | Number of Azure blob storage API UploadPart calls |
clickhouse.azure.blob_storage.upload_part.total (gauge) | Number of Azure blob storage API UploadPart calls |
clickhouse.background_pool.buffer_flush_schedule.task.active (gauge) | Number of active tasks in BackgroundBufferFlushSchedulePool. This pool is used for periodic Buffer flushes Shown as task |
clickhouse.background_pool.buffer_flush_schedule.task.limit (gauge) | Limit on number of tasks in BackgroundBufferFlushSchedulePool |
clickhouse.background_pool.common.task.active (gauge) | Number of active tasks in an associated background pool Shown as task |
clickhouse.background_pool.common.task.limit (gauge) | Limit on number of tasks in an associated background pool |
clickhouse.background_pool.distributed.task.active (gauge) | Number of active tasks in BackgroundDistributedSchedulePool. This pool is used for distributed sends that is done in background. Shown as task |
clickhouse.background_pool.distributed.task.limit (gauge) | Limit on number of tasks in BackgroundDistributedSchedulePool |
clickhouse.background_pool.fetches.task.active (gauge) | Number of active tasks in BackgroundFetchesPool Shown as task |
clickhouse.background_pool.fetches.task.limit (gauge) | Limit on number of simultaneous fetches in an associated background pool |
clickhouse.background_pool.merges.task.active (gauge) | Number of active merges and mutations in an associated background pool Shown as task |
clickhouse.background_pool.merges.task.limit (gauge) | Limit on number of active merges and mutations in an associated background pool |
clickhouse.background_pool.message_broker.task.active (gauge) | Number of active tasks in BackgroundProcessingPool for message streaming Shown as task |
clickhouse.background_pool.message_broker.task.limit (gauge) | Limit on number of tasks in BackgroundProcessingPool for message streaming |
clickhouse.background_pool.move.memory (gauge) | Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks. Shown as byte |
clickhouse.background_pool.move.task.active (gauge) | The number of active tasks in BackgroundProcessingPool for moves. Shown as task |
clickhouse.background_pool.move.task.limit (gauge) | Limit on number of tasks in BackgroundProcessingPool for moves |
clickhouse.background_pool.processing.memory (gauge) | Total amount of memory allocated in background processing pool (that is dedicated for background merges, mutations and fetches). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks. Shown as byte |
clickhouse.background_pool.processing.task.active (gauge) | The number of active tasks in BackgroundProcessingPool (merges, mutations, fetches, or replication queue bookkeeping) Shown as task |
clickhouse.background_pool.schedule.memory (gauge) | Total amount of memory allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables). Shown as byte |
clickhouse.background_pool.schedule.task.active (gauge) | The number of active tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc. Shown as task |
clickhouse.background_pool.schedule.task.limit (gauge) | Limit on number of tasks in BackgroundSchedulePool. This pool is used for periodic ReplicatedMergeTree tasks, like cleaning old data parts, altering data parts, replica re-initialization, etc. |
clickhouse.backup.post_tasks.time (gauge) | Time spent running post tasks after making backup entries Shown as microsecond |
clickhouse.backup.read.time (gauge) | Time spent reading backup metadata from .backup file Shown as microsecond |
clickhouse.backup.tables.time (gauge) | Time spent making backup entries for tables data Shown as microsecond |
clickhouse.backup.time (gauge) | Time spent making backup entries Shown as microsecond |
clickhouse.backup.write.time (gauge) | Time spent writing backup metadata to .backup file Shown as microsecond |
clickhouse.backups.read.open.count (count) | Number of backups opened for reading |
clickhouse.backups.read.open.total (gauge) | Number of backups opened for reading |
clickhouse.backups.threads.active (gauge) | Number of threads in thread pool for BACKUP running a task. |
clickhouse.backups.threads.scheduled (gauge) | Number of queued or active jobs for BACKUP. |
clickhouse.backups.threads.total (gauge) | Number of threads in the thread pool for BACKUP. |
clickhouse.backups.write.open.count (count) | Number of backups opened for writing |
clickhouse.backups.write.open.total (gauge) | Number of backups opened for writing |
clickhouse.backups_io.threads.active (gauge) | Number of threads in the BackupsIO thread pool running a task. |
clickhouse.backups_io.threads.scheduled (gauge) | Number of queued or active jobs in the BackupsIO thread pool. |
clickhouse.backups_io.threads.total (gauge) | Number of threads in the BackupsIO thread pool. |
clickhouse.buffer.write.discard.count (count) | The number of stack traces dropped by query profiler or signal handler because pipe is full or cannot write to pipe during the last interval. Shown as error |
clickhouse.buffer.write.discard.total (gauge) | The total number of stack traces dropped by query profiler or signal handler because pipe is full or cannot write to pipe. Shown as error |
clickhouse.cache.async.insert (gauge) | Number of async insert hash id in cache |
clickhouse.cache.buffer.time (gauge) | Prepare buffer time Shown as microsecond |
clickhouse.cache.distributed.client_access.count (count) | Number of client access times |
clickhouse.cache.distributed.client_access.total (gauge) | Number of client access times |
clickhouse.cache.distributed.connection.time (gauge) | The time spent to connect to distributed cache Shown as microsecond |
clickhouse.cache.distributed.connections.used.count (count) | The number of used connections to distributed cache |
clickhouse.cache.distributed.connections.used.total (gauge) | The number of used connections to distributed cache |
clickhouse.cache.distributed.new_read_range.time (gauge) | Time spent to start a new read range with distributed cache Shown as microsecond |
clickhouse.cache.distributed.packets.received.count (count) | Total number of packets received from distributed cache |
clickhouse.cache.distributed.packets.received.total (gauge) | Total number of packets received from distributed cache |
clickhouse.cache.distributed.packets.skipped.count (count) | Number of skipped unused packets from distributed cache |
clickhouse.cache.distributed.packets.skipped.total (gauge) | Number of skipped unused packets from distributed cache |
clickhouse.cache.distributed.read.compute.time (gauge) | Time spent to precompute read ranges Shown as microsecond |
clickhouse.cache.distributed.read.time (gauge) | Time spent reading from distributed cache Shown as microsecond |
clickhouse.cache.distributed.read_buffer_next_impl.time (gauge) | Time spent in ReadBufferFromDistributedCache::nextImpl Shown as microsecond |
clickhouse.cache.distributed.registry.update.time (gauge) | Time spent updating distributed cache registry Shown as microsecond |
clickhouse.cache.distributed.registry.updates.count (count) | Number of distributed cache registry updates |
clickhouse.cache.distributed.registry.updates.total (gauge) | Number of distributed cache registry updates |
clickhouse.cache.distributed.registry_lock.time (gauge) | Time spent to take DistributedCacheRegistry lock Shown as microsecond |
clickhouse.cache.distributed.response.time (gauge) | Time spent waiting for a response from distributed cache Shown as microsecond |
clickhouse.cache.distributed.server.switches.count (count) | Number of server switches between distributed cache servers in read/write-through cache |
clickhouse.cache.distributed.server.switches.total (gauge) | Number of server switches between distributed cache servers in read/write-through cache |
clickhouse.cache.file_segments (gauge) | Number of existing cache file segments Shown as segment |
clickhouse.cache.mark.entry.found.count (count) | Number of times an entry has been found in the mark cache, so we didn't have to load a mark file. |
clickhouse.cache.mark.entry.found.total (gauge) | Number of times an entry has been found in the mark cache, so we didn't have to load a mark file. |
clickhouse.cache.mark.entry.missed.count (count) | Number of times an entry has not been found in the mark cache, so we had to load a mark file in memory, which is a costly operation, adding to query latency. |
clickhouse.cache.mark.entry.missed.total (gauge) | Number of times an entry has not been found in the mark cache, so we had to load a mark file in memory, which is a costly operation, adding to query latency. |
clickhouse.cache.mmap.file.found.count (count) | Number of times a file has been found in the MMap cache (for the 'mmap' read_method), so we didn't have to mmap it again. |
clickhouse.cache.mmap.file.found.total (gauge) | Number of times a file has been found in the MMap cache (for the 'mmap' read_method), so we didn't have to mmap it again. |
clickhouse.cache.mmap.file.missed.count (count) | Number of times a file has not been found in the MMap cache (for the 'mmap' read_method), so we had to mmap it again. |
clickhouse.cache.mmap.file.missed.total (gauge) | Number of times a file has not been found in the MMap cache (for the 'mmap' read_method), so we had to mmap it again. |
clickhouse.cache.opened_file.hits.count (count) | Number of times a file has been found in the opened file cache, so we didn't have to open it again. |
clickhouse.cache.opened_file.hits.total (gauge) | Number of times a file has been found in the opened file cache, so we didn't have to open it again. |
clickhouse.cache.opened_file.misses.count (count) | Number of times a file has not been found in the opened file cache, so we had to open it again. |
clickhouse.cache.opened_file.misses.total (gauge) | Number of times a file has not been found in the opened file cache, so we had to open it again. |
clickhouse.cache.opened_file.time (gauge) | Amount of time spent executing OpenedFileCache methods. Shown as microsecond |
clickhouse.cache.page.chunk.evicted.count (count) | Number of times a chunk has been found in the userspace page cache, not in use, but all its pages were evicted by the OS. |
clickhouse.cache.page.chunk.evicted.total (gauge) | Number of times a chunk has been found in the userspace page cache, not in use, but all its pages were evicted by the OS. |
clickhouse.cache.page.chunk.hits.count (count) | Number of times a chunk has been found in the userspace page cache, not in use, with all pages intact. |
clickhouse.cache.page.chunk.hits.partial.count (count) | Number of times a chunk has been found in the userspace page cache, not in use, but some of its pages were evicted by the OS. |
clickhouse.cache.page.chunk.hits.partial.total (gauge) | Number of times a chunk has been found in the userspace page cache, not in use, but some of its pages were evicted by the OS. |
clickhouse.cache.page.chunk.hits.total (gauge) | Number of times a chunk has been found in the userspace page cache, not in use, with all pages intact. |
clickhouse.cache.page.chunk.misses.count (count) | Number of times a chunk has not been found in the userspace page cache. |
clickhouse.cache.page.chunk.misses.total (gauge) | Number of times a chunk has not been found in the userspace page cache. |
clickhouse.cache.page.chunk.shared.count (count) | Number of times a chunk has been found in the userspace page cache, already in use by another thread. |
clickhouse.cache.page.chunk.shared.total (gauge) | Number of times a chunk has been found in the userspace page cache, already in use by another thread. |
clickhouse.cache.page.thread_pool_reader.prepare.time (gauge) | Time spent on preparation (e.g. call to reader seek() method) Shown as microsecond |
clickhouse.cache.page.thread_pool_reader.read.miss.time (gauge) | Time spent reading data inside the asynchronous job in ThreadPoolReader - when read was not done from the page cache. Shown as microsecond |
clickhouse.cache.page.thread_pool_reader.read.time (gauge) | Time spent reading data from page cache in ThreadPoolReader. Shown as microsecond |
clickhouse.cache.query.hits.count (count) | Number of times a query result has been found in the query cache (and query computation was avoided). Only updated for SELECT queries with SETTING use_query_cache = 1. |
clickhouse.cache.query.hits.total (gauge) | Number of times a query result has been found in the query cache (and query computation was avoided). Only updated for SELECT queries with SETTING use_query_cache = 1. |
clickhouse.cache.query.misses.count (count) | Number of times a query result has not been found in the query cache (and required query computation). Only updated for SELECT queries with SETTING use_query_cache = 1. |
clickhouse.cache.query.misses.total (gauge) | Number of times a query result has not been found in the query cache (and required query computation). Only updated for SELECT queries with SETTING use_query_cache = 1. |
clickhouse.cache.read.bytes.count (count) | Bytes read from filesystem cache |
clickhouse.cache.read.bytes.total (gauge) | Bytes read from filesystem cache |
clickhouse.cache.read.hits.count (count) | Number of times the read from filesystem cache hit the cache. |
clickhouse.cache.read.hits.total (gauge) | Number of times the read from filesystem cache hit the cache. |
clickhouse.cache.read.misses.count (count) | Number of times the read from filesystem cache missed the cache. |
clickhouse.cache.read.misses.total (gauge) | Number of times the read from filesystem cache missed the cache. |
clickhouse.cache.read.time (gauge) | Time reading from filesystem cache Shown as microsecond |
clickhouse.cache.remote_file_segments.waiting (gauge) | Total size of remote file segments waiting to be asynchronously loaded into filesystem cache. |
clickhouse.cache.schema.evitcted.count (count) | Number of times a schema from cache was evicted due to overflow |
clickhouse.cache.schema.evitcted.total (gauge) | Number of times a schema from cache was evicted due to overflow |
clickhouse.cache.schema.found.count (count) | Number of times the requested source is found in schema cache |
clickhouse.cache.schema.found.total (gauge) | Number of times the requested source is found in schema cache |
clickhouse.cache.schema.found_schemas.count (count) | Number of times the schema is found in schema cache during schema inference |
clickhouse.cache.schema.found_schemas.total (gauge) | Number of times the schema is found in schema cache during schema inference |
clickhouse.cache.schema.invalid.count (count) | Number of times a schema in cache became invalid due to changes in data |
clickhouse.cache.schema.invalid.total (gauge) | Number of times a schema in cache became invalid due to changes in data |
clickhouse.cache.schema.missed.count (count) | Number of times the requested source is not in schema cache |
clickhouse.cache.schema.missed.total (gauge) | Number of times the requested source is not in schema cache |
clickhouse.cache.schema.missed_schemas.count (count) | Number of times the requested source is in cache but the schema is not in cache during schema inference |
clickhouse.cache.schema.missed_schemas.total (gauge) | Number of times the requested source is in cache but the schema is not in cache during schema inference |
clickhouse.cache.schema.rows.found.count (count) | Number of times the number of rows is found in schema cache during count from files |
clickhouse.cache.schema.rows.found.total (gauge) | Number of times the number of rows is found in schema cache during count from files |
clickhouse.cache.schema.rows.missed.count (count) | Number of times the requested source is in cache but the number of rows is not in cache while count from files |
clickhouse.cache.schema.rows.missed.total (gauge) | Number of times the requested source is in cache but the number of rows is not in cache while count from files |
clickhouse.cache.source.read.bytes.count (count) | Bytes read from filesystem cache source (from remote fs, etc) |
clickhouse.cache.source.read.bytes.total (gauge) | Bytes read from filesystem cache source (from remote fs, etc) |
clickhouse.cache.source.read.time (gauge) | Time reading from filesystem cache source (from remote filesystem, etc) Shown as microsecond |
clickhouse.cache.source.write.bytes.count (count) | Bytes written from source (remote fs, etc) to filesystem cache |
clickhouse.cache.source.write.bytes.total (gauge) | Bytes written from source (remote fs, etc) to filesystem cache |
clickhouse.cache.source.write.time (gauge) | Time spent writing data into filesystem cache Shown as microsecond |
clickhouse.cache.uncompressed.block_data.count (count) | Number of times a block of data has been found in the uncompressed cache (and decompression was avoided). |
clickhouse.cache.uncompressed.block_data.miss.count (count) | Number of times a block of data has not been found in the uncompressed cache (and required decompression). |
clickhouse.cache.uncompressed.block_data.miss.total (gauge) | Number of times a block of data has not been found in the uncompressed cache (and required decompression). |
clickhouse.cache.uncompressed.block_data.total (gauge) | Number of times a block of data has been found in the uncompressed cache (and decompression was avoided). |
clickhouse.cache.write.bytes.count (count) | Bytes written from source (remote fs, etc) to filesystem cache |
clickhouse.cache.write.bytes.total (gauge) | Bytes written from source (remote fs, etc) to filesystem cache |
clickhouse.cache.write.time (gauge) | Time spent writing data into filesystem cache Shown as microsecond |
clickhouse.cache_dictionary.threads.active (gauge) | Number of threads in the CacheDictionary thread pool running a task. |
clickhouse.cache_dictionary.threads.scheduled (gauge) | Number of queued or active jobs in the CacheDictionary thread pool. |
clickhouse.cache_dictionary.threads.total (gauge) | Number of threads in the CacheDictionary thread pool. |
clickhouse.cache_dictionary.update_queue.batches (gauge) | Number of 'batches' (a set of keys) in update queue in CacheDictionaries. |
clickhouse.cache_dictionary.update_queue.keys (gauge) | Exact number of keys in update queue in CacheDictionaries. Shown as key |
clickhouse.cache_file_segments.detached (gauge) | Number of existing detached cache file segments Shown as segment |
clickhouse.cachewarmer.bytes.downloaded.count (count) | Amount of data fetched into filesystem cache by dedicated background threads. |
clickhouse.cachewarmer.bytes.downloaded.total (gauge) | Amount of data fetched into filesystem cache by dedicated background threads. |
clickhouse.compilation.attempt.count (count) | The number of times a compilation of generated C++ code was initiated during the last interval. Shown as event |
clickhouse.compilation.attempt.total (gauge) | The total number of times a compilation of generated C++ code was initiated. Shown as event |
clickhouse.compilation.function.execute.count (count) | The number of times a compiled function was executed during the last interval. Shown as execution |
clickhouse.compilation.function.execute.total (gauge) | The total number of times a compiled function was executed. Shown as execution |
clickhouse.compilation.llvm.attempt.count (count) | The number of times a compilation of generated LLVM code (to create fused function for complex expressions) was initiated during the last interval. Shown as event |
clickhouse.compilation.llvm.attempt.total (gauge) | The total number of times a compilation of generated LLVM code (to create fused function for complex expressions) was initiated. Shown as event |
clickhouse.compilation.regex.count (count) | The number of regular expressions compiled during the last interval. Identical regular expressions are compiled just once and cached forever. Shown as event |
clickhouse.compilation.regex.total (gauge) | The total number of regular expressions compiled. Identical regular expressions are compiled just once and cached forever. Shown as event |
clickhouse.compilation.size.count (count) | The number of bytes used for expressions compilation during the last interval. Shown as byte |
clickhouse.compilation.size.total (gauge) | The total number of bytes used for expressions compilation. Shown as byte |
clickhouse.compilation.success.count (count) | The number of times a compilation of generated C++ code was successful during the last interval. Shown as event |
clickhouse.compilation.success.total (gauge) | The total number of times a compilation of generated C++ code was successful. Shown as event |
clickhouse.compilation.time (gauge) | The percentage of time spent for compilation of expressions to LLVM code during the last interval. Shown as percent |
clickhouse.configuration.main.reloaded.count (count) | Number of times the main configuration was reloaded. |
clickhouse.configuration.main.reloaded.total (gauge) | Number of times the main configuration was reloaded. |
clickhouse.connection.http (gauge) | The number of connections to HTTP server Shown as connection |
clickhouse.connection.http.create.count (count) | The number of created HTTP connections (closed or opened) during the last interval. Shown as connection |
clickhouse.connection.http.create.total (gauge) | The total number of created HTTP connections (closed or opened). Shown as connection |
clickhouse.connection.http.stored (gauge) | Total count of sessions stored in the session pool for http hosts |
clickhouse.connection.http.total (gauge) | Total count of all sessions: stored in the pool and actively used right now for http hosts |
clickhouse.connection.interserver (gauge) | The number of connections from other replicas to fetch parts Shown as connection |
clickhouse.connection.mysql (gauge) | Number of client connections using MySQL protocol. Shown as connection |
clickhouse.connection.send.external (gauge) | The number of connections that are sending data for external tables to remote servers. External tables are used to implement GLOBAL IN and GLOBAL JOIN operators with distributed subqueries. Shown as connection |
clickhouse.connection.send.scalar (gauge) | The number of connections that are sending data for scalars to remote servers. Shown as connection |
clickhouse.connection.tcp (gauge) | The number of connections to TCP server (clients with native interface). Shown as connection |
clickhouse.connections.alive.total (gauge) | Number of alive connections Shown as connection |
clickhouse.connections.http.created.count (count) | Number of created http connections |
clickhouse.connections.http.created.time (gauge) | Total time spent creating http connections Shown as microsecond |
clickhouse.connections.http.created.total (gauge) | Number of created http connections |
clickhouse.connections.http.expired.count (count) | Number of expired http connections |
clickhouse.connections.http.expired.total (gauge) | Number of expired http connections |
clickhouse.connections.http.failed.count (count) | Number of cases when creation of a http connection failed |
clickhouse.connections.http.failed.total (gauge) | Number of cases when creation of a http connection failed |
clickhouse.connections.http.preserved.count (count) | Number of preserved http connections |
clickhouse.connections.http.preserved.total (gauge) | Number of preserved http connections |
clickhouse.connections.http.reset.count (count) | Number of reset http connections |
clickhouse.connections.http.reset.total (gauge) | Number of reset http connections |
clickhouse.connections.http.reused.count (count) | Number of reused http connections |
clickhouse.connections.http.reused.total (gauge) | Number of reused http connections |
clickhouse.connections.outstanding.total (gauge) | Number of outstanding requests Shown as connection |
clickhouse.cpu.time (gauge) | The percentage of CPU time spent, as seen by the OS, during the last interval. Does not include involuntary waits due to virtualization. Shown as percent |
clickhouse.data.part.replicated.obsolete.count (count) | Number of times a data part was covered by another data part that has been fetched from a replica (so, we have marked a covered data part as obsolete and no longer needed). |
clickhouse.data.part.replicated.obsolete.total (gauge) | Number of times a data part was covered by another data part that has been fetched from a replica (so, we have marked a covered data part as obsolete and no longer needed). |
clickhouse.database.total (gauge) | The current number of databases. Shown as instance |
clickhouse.ddl.max_processed (gauge) | Max processed DDL entry of DDLWorker. |
clickhouse.dictionary.cache.keys.expired.count (count) | Number of keys looked up in the dictionaries of 'cache' types and found in the cache but they were obsolete. |
clickhouse.dictionary.cache.keys.expired.total (gauge) | Number of keys looked up in the dictionaries of 'cache' types and found in the cache but they were obsolete. |
clickhouse.dictionary.cache.keys.found.count (count) | Number of keys looked up in the dictionaries of 'cache' types and found in the cache. |
clickhouse.dictionary.cache.keys.found.total (gauge) | Number of keys looked up in the dictionaries of 'cache' types and found in the cache. |
clickhouse.dictionary.cache.keys.not_found.count (count) | Number of keys looked up in the dictionaries of 'cache' types and not found. |
clickhouse.dictionary.cache.keys.not_found.total (gauge) | Number of keys looked up in the dictionaries of 'cache' types and not found. |
clickhouse.dictionary.cache.keys.requested.count (count) | Number of keys requested from the data source for the dictionaries of 'cache' types. |
clickhouse.dictionary.cache.keys.requested.total (gauge) | Number of keys requested from the data source for the dictionaries of 'cache' types. |
clickhouse.dictionary.cache.read.waiting.time (gauge) | Number of nanoseconds spent waiting for the read lock to look up the data for the dictionaries of 'cache' types. Shown as nanosecond |
clickhouse.dictionary.cache.request.time (gauge) | Number of nanoseconds spent querying the external data sources for the dictionaries of 'cache' types. Shown as nanosecond |
clickhouse.dictionary.cache.requests.count (count) | Number of bulk requests to the external data sources for the dictionaries of 'cache' types. |
clickhouse.dictionary.cache.requests.total (gauge) | Number of bulk requests to the external data sources for the dictionaries of 'cache' types. |
clickhouse.dictionary.cache.write.waiting.time (gauge) | Number of nanoseconds spent waiting for the write lock to update the data for the dictionaries of 'cache' types. Shown as nanosecond |
clickhouse.dictionary.item.current (gauge) | The number of items stored in a dictionary. Shown as item |
clickhouse.dictionary.load (gauge) | The percentage filled in a dictionary (for a hashed dictionary, the percentage filled in the hash table). Shown as percent |
clickhouse.dictionary.memory.used (gauge) | The total amount of memory used by a dictionary. Shown as byte |
clickhouse.dictionary.request.cache (gauge) | The number of requests in fly to data sources of dictionaries of cache type. Shown as request |
clickhouse.disk.azure.copy_object.count (count) | Number of Disk Azure blob storage API CopyObject calls |
clickhouse.disk.azure.copy_object.total (gauge) | Number of Disk Azure blob storage API CopyObject calls |
clickhouse.disk.azure.upload_part.count (count) | Number of Disk Azure blob storage API UploadPart calls |
clickhouse.disk.azure.upload_part.total (gauge) | Number of Disk Azure blob storage API UploadPart calls |
clickhouse.disk.connectioned.active (gauge) | Total count of all sessions: stored in the pool and actively used right now for disks |
clickhouse.disk.connections.created.count (count) | Number of created connections for disk |
clickhouse.disk.connections.created.time (gauge) | Total time spent creating connections for disk Shown as microsecond |
clickhouse.disk.connections.created.total (gauge) | Number of created connections for disk |
clickhouse.disk.connections.errors.count (count) | Number of cases when creation of a connection for disk failed |
clickhouse.disk.connections.errors.total (gauge) | Number of cases when creation of a connection for disk failed |
clickhouse.disk.connections.expired.count (count) | Number of expired connections for disk |
clickhouse.disk.connections.expired.total (gauge) | Number of expired connections for disk |
clickhouse.disk.connections.preserved.count (count) | Number of preserved connections for disk |
clickhouse.disk.connections.preserved.total (gauge) | Number of preserved connections for disk |
clickhouse.disk.connections.reset.count (count) | Number of reset connections for disk |
clickhouse.disk.connections.reset.total (gauge) | Number of reset connections for disk |
clickhouse.disk.connections.reused.count (count) | Number of reused connections for disk |
clickhouse.disk.connections.reused.total (gauge) | Number of reused connections for disk |
clickhouse.disk.connections.stored (gauge) | Total count of sessions stored in the session pool for disks |
clickhouse.disk.read.size.count (count) | The number of bytes read from disks or block devices during the last interval. Doesn't include bytes read from page cache. May include excessive data due to block size, readahead, etc. Shown as byte |
clickhouse.disk.read.size.total (gauge) | The total number of bytes read from disks or block devices. Doesn't include bytes read from page cache. May include excessive data due to block size, readahead, etc. Shown as byte |
clickhouse.disk.write.size.count (count) | The number of bytes written to disks or block devices during the last interval. Doesn't include bytes that are in page cache dirty pages. May not include data that was written by OS asynchronously. Shown as byte |
clickhouse.disk.write.size.total (gauge) | The total number of bytes written to disks or block devices. Doesn't include bytes that are in page cache dirty pages. May not include data that was written by OS asynchronously. Shown as byte |
clickhouse.disk_s3.abort_multipart_upload.count (count) | Number of DiskS3 API AbortMultipartUpload calls. |
clickhouse.disk_s3.abort_multipart_upload.total (gauge) | Number of DiskS3 API AbortMultipartUpload calls. |
clickhouse.disk_s3.copy_object.count (count) | Number of DiskS3 API CopyObject calls. |
clickhouse.disk_s3.copy_object.total (gauge) | Number of DiskS3 API CopyObject calls. |
clickhouse.disk_s3.create_multipart_upload.count (count) | Number of DiskS3 API CreateMultipartUpload calls. |
clickhouse.disk_s3.create_multipart_upload.total (gauge) | Number of DiskS3 API CreateMultipartUpload calls. |
clickhouse.disk_s3.delete_object.count (count) | Number of DiskS3 API DeleteObject(s) calls. |
clickhouse.disk_s3.delete_object.total (gauge) | Number of DiskS3 API DeleteObject(s) calls. |
clickhouse.disk_s3.get_object.count (count) | Number of DiskS3 API GetObject calls. |
clickhouse.disk_s3.get_object.total (gauge) | Number of DiskS3 API GetObject calls. |
clickhouse.disk_s3.get_object_attributes.count (count) | Number of DiskS3 API GetObjectAttributes calls. |
clickhouse.disk_s3.get_object_attributes.total (gauge) | Number of DiskS3 API GetObjectAttributes calls. |
clickhouse.disk_s3.get_request.throttler.time (gauge) | Total time a query was sleeping to conform DiskS3 GET and SELECT request throttling. Shown as microsecond |
clickhouse.disk_s3.head_objects.count (count) | Number of DiskS3 API HeadObject calls. |
clickhouse.disk_s3.head_objects.total (gauge) | Number of DiskS3 API HeadObject calls. |
clickhouse.disk_s3.list_objects.count (count) | Number of DiskS3 API ListObjects calls. |
clickhouse.disk_s3.list_objects.total (gauge) | Number of DiskS3 API ListObjects calls. |
clickhouse.disk_s3.put_object.count (count) | Number of DiskS3 API PutObject calls. |
clickhouse.disk_s3.put_object.total (gauge) | Number of DiskS3 API PutObject calls. |
clickhouse.disk_s3.put_request.throttler.time (gauge) | Total time a query was sleeping to conform DiskS3 PUT, COPY, POST and LIST request throttling. Shown as microsecond |
clickhouse.disk_s3.read.requests.count (count) | Number of GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.errors.count (count) | Number of non-throttling errors in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.errors.total (gauge) | Number of non-throttling errors in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.redirects.count (count) | Number of redirects in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.redirects.total (gauge) | Number of redirects in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.throttling.count (count) | Number of 429 and 503 errors in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.throttling.total (gauge) | Number of 429 and 503 errors in GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.requests.total (gauge) | Number of GET and HEAD requests to DiskS3 storage. |
clickhouse.disk_s3.read.time (gauge) | Time of GET and HEAD requests to DiskS3 storage. Shown as microsecond |
clickhouse.disk_s3.upload_part.count (count) | Number of DiskS3 API UploadPart calls. |
clickhouse.disk_s3.upload_part.total (gauge) | Number of DiskS3 API UploadPart calls. |
clickhouse.disk_s3.upload_part_copy.count (count) | Number of DiskS3 API UploadPartCopy calls. |
clickhouse.disk_s3.upload_part_copy.total (gauge) | Number of DiskS3 API UploadPartCopy calls. |
clickhouse.disk_s3.write.requests.count (count) | Number of POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.requests.errors.count (count) | Number of non-throttling errors in POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.requests.errors.total (gauge) | Number of non-throttling errors in POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.requests.redirects.count (count) | Number of redirects in POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.requests.redirects.total (gauge) | Number of redirects in POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.requests.total (gauge) | Number of POST, DELETE, PUT and PATCH requests to DiskS3 storage. |
clickhouse.disk_s3.write.time (gauge) | Time of POST, DELETE, PUT and PATCH requests to DiskS3 storage. Shown as microsecond |
clickhouse.distributed.connection.fail_at_all.count (count) | Count when distributed connection fails after all retries finished Shown as connection |
clickhouse.distributed.connection.fail_at_all.total (gauge) | Total count when distributed connection fails after all retries finished Shown as connection |
clickhouse.distributed.connection.fail_try.count (count) | Count when distributed connection fails with retry Shown as connection |
clickhouse.distributed.connection.fail_try.total (gauge) | Total count when distributed connection fails with retry Shown as connection |
clickhouse.distributed.connection.successful.count (count) | Total count of successful distributed connections to a usable server (with required table, but maybe stale). |
clickhouse.distributed.connection.successful.total (gauge) | Total count of successful distributed connections to a usable server (with required table, but maybe stale). |
clickhouse.distributed.connection.tries.count (count) | Total count of distributed connection attempts. |
clickhouse.distributed.connection.tries.total (gauge) | Total count of distributed connection attempts. |
clickhouse.distributed.delayed.inserts.time (gauge) | Total number of milliseconds spent while the INSERT of a block to a Distributed table was throttled due to high number of pending bytes. Shown as microsecond |
clickhouse.distributed.inserts.delayed.count (count) | Number of times the INSERT of a block to a Distributed table was throttled due to high number of pending bytes. Shown as query |
clickhouse.distributed.inserts.delayed.total (gauge) | Total number of times the INSERT of a block to a Distributed table was throttled due to high number of pending bytes. Shown as query |
clickhouse.distributed.inserts.rejected.count (count) | Number of times the INSERT of a block to a Distributed table was rejected with 'Too many bytes' exception due to high number of pending bytes. Shown as query |
clickhouse.distributed.inserts.rejected.total (gauge) | Total number of times the INSERT of a block to a Distributed table was rejected with 'Too many bytes' exception due to high number of pending bytes. Shown as query |
clickhouse.distributed_cache.clickhouse_server.connections.open (gauge) | Number of open connections to ClickHouse server from Distributed Cache |
clickhouse.distributed_cache.connections.open.total (gauge) | The number of open connections to distributed cache |
clickhouse.distributed_cache.connections.open.used (gauge) | Number of currently used connections to Distributed Cache |
clickhouse.distributed_cache.read.requests (gauge) | Number of executed Read requests to Distributed Cache |
clickhouse.distributed_cache.write.requests (gauge) | Number of executed Write requests to Distributed Cache |
clickhouse.drained_connections.async (gauge) | Number of connections drained asynchronously. Shown as connection |
clickhouse.drained_connections.async.active (gauge) | Number of active connections drained asynchronously. Shown as connection |
clickhouse.drained_connections.sync (gauge) | Number of connections drained synchronously. Shown as connection |
clickhouse.drained_connections.sync.active (gauge) | Number of active connections drained synchronously. Shown as connection |
clickhouse.error.dns.count (count) | Number of errors in DNS resolution Shown as error |
clickhouse.error.dns.total (gauge) | Total count of errors in DNS resolution Shown as error |
clickhouse.file.open.count (count) | The number of files opened during the last interval. Shown as file |
clickhouse.file.open.read (gauge) | The number of files open for reading Shown as file |
clickhouse.file.open.total (gauge) | The total number of files opened. Shown as file |
clickhouse.file.open.write (gauge) | The number of files open for writing Shown as file |
clickhouse.file.read.count (count) | The number of reads (read/pread) from a file descriptor during the last interval. Does not include sockets. Shown as read |
clickhouse.file.read.fail.count (count) | The number of times the read (read/pread) from a file descriptor have failed during the last interval. Shown as read |
clickhouse.file.read.fail.total (gauge) | The total number of times the read (read/pread) from a file descriptor have failed. Shown as read |
clickhouse.file.read.size.count (count) | The number of bytes read from file descriptors during the last interval. If the file is compressed, this will show the compressed data size. Shown as byte |
clickhouse.file.read.size.total (gauge) | The total number of bytes read from file descriptors. If the file is compressed, this will show the compressed data size. Shown as byte |
clickhouse.file.read.slow.count (count) | The number of reads from a file that were slow during the last interval. This indicates system overload. Thresholds are controlled by read_backoff_* settings. Shown as read |
clickhouse.file.read.slow.total (gauge) | The total number of reads from a file that were slow. This indicates system overload. Thresholds are controlled by read_backoff_* settings. Shown as read |
clickhouse.file.read.total (gauge) | The total number of reads (read/pread) from a file descriptor. Does not include sockets. Shown as read |
clickhouse.file.seek.count (count) | The number of times the lseek function was called during the last interval. Shown as operation |
clickhouse.file.seek.total (gauge) | The total number of times the lseek function was called. Shown as operation |
clickhouse.file.write.count (count) | The number of writes (write/pwrite) to a file descriptor during the last interval. Does not include sockets. Shown as write |
clickhouse.file.write.fail.count (count) | The number of times the write (write/pwrite) to a file descriptor have failed during the last interval. Shown as write |
clickhouse.file.write.fail.total (gauge) | The total number of times the write (write/pwrite) to a file descriptor have failed. Shown as write |
clickhouse.file.write.size.count (count) | The number of bytes written to file descriptors during the last interval. If the file is compressed, this will show compressed data size. Shown as byte |
clickhouse.file.write.size.total (gauge) | The total number of bytes written to file descriptors. If the file is compressed, this will show compressed data size. Shown as byte |
clickhouse.file.write.total (gauge) | The total number of writes (write/pwrite) to a file descriptor. Does not include sockets. Shown as write |
clickhouse.file_segment.cache.complete.time (gauge) | Duration of FileSegment::complete() in filesystem cache Shown as microsecond |
clickhouse.file_segment.cache.predownload.time (gauge) | Metric per file segment. Time spent pre-downloading data to cache (pre-downloading - finishing file segment download (after someone who failed to do that) up to the point current thread was requested to do) Shown as microsecond |
clickhouse.file_segment.cache.write.time (gauge) | Metric per file segment. Time spent writing data to cache Shown as microsecond |
clickhouse.file_segment.download.wait_time.count (count) | Wait on DOWNLOADING state |
clickhouse.file_segment.download.wait_time.total (gauge) | Wait on DOWNLOADING state |
clickhouse.file_segment.holder.complete.time (gauge) | File segments holder complete() time Shown as microsecond |
clickhouse.file_segment.lock.time (gauge) | Lock file segment time Shown as microsecond |
clickhouse.file_segment.read.time (gauge) | Metric per file segment. Time spent reading from file Shown as microsecond |
clickhouse.file_segment.remove.time (gauge) | File segment remove() time Shown as microsecond |
clickhouse.file_segment.use.bytes.count (count) | Metric per file segment. How many bytes were actually used from current file segment |
clickhouse.file_segment.use.bytes.total (gauge) | Metric per file segment. How many bytes were actually used from current file segment |
clickhouse.file_segment.use.time (gauge) | File segment use() time Shown as microsecond |
clickhouse.file_segment.write.timex.count (count) | File segment write() time |
clickhouse.file_segment.write.timex.total (gauge) | File segment write() time |
clickhouse.filesystem.cache.buffers.active (gauge) | Number of active cache buffers Shown as buffer |
clickhouse.filesystem.cache.cleanup.queue (gauge) | Filesystem cache elements in background cleanup queue |
clickhouse.filesystem.cache.download.queue (gauge) | Filesystem cache elements in download queue |
clickhouse.filesystem.cache.elements (gauge) | Filesystem cache elements (file segments) |
clickhouse.filesystem.cache.eviction.bytes.count (count) | Number of bytes evicted from filesystem cache |
clickhouse.filesystem.cache.eviction.bytes.total (gauge) | Number of bytes evicted from filesystem cache |
clickhouse.filesystem.cache.eviction.time (gauge) | Filesystem cache eviction time Shown as microsecond |
clickhouse.filesystem.cache.filesegments.hold (gauge) | Filesystem cache file segments count, which were hold |
clickhouse.filesystem.cache.get.time (gauge) | Filesystem cache get() time Shown as microsecond |
clickhouse.filesystem.cache.get_set.time (gauge) | Filesystem cache getOrSet() time Shown as microsecond |
clickhouse.filesystem.cache.limit (gauge) | Filesystem cache size limit in bytes |
clickhouse.filesystem.cache.lock.key.time (gauge) | Lock cache key time Shown as microsecond |
clickhouse.filesystem.cache.lock.metadata.time (gauge) | Lock filesystem cache metadata time Shown as microsecond |
clickhouse.filesystem.cache.lock.time (gauge) | Lock filesystem cache time Shown as microsecond |
clickhouse.filesystem.cache.metadata.load.time (gauge) | Time spent loading filesystem cache metadata Shown as microsecond |
clickhouse.filesystem.cache.reserve.time (gauge) | Filesystem cache space reservation time Shown as microsecond |
clickhouse.filesystem.cache.size (gauge) | Filesystem cache size in bytes |
clickhouse.filesystem.remote.aysnc.read.prefetches.count (count) | Number of prefetches made with asynchronous reading from remote filesystem |
clickhouse.filesystem.remote.aysnc.read.prefetches.total (gauge) | Number of prefetches made with asynchronous reading from remote filesystem |
clickhouse.filesystem.remote.buffer.seeks.count (count) | Total number of seeks for async buffer |
clickhouse.filesystem.remote.buffer.seeks.reset.count (count) | Number of seeks which lead to a new connection |
clickhouse.filesystem.remote.buffer.seeks.reset.total (gauge) | Number of seeks which lead to a new connection |
clickhouse.filesystem.remote.buffer.seeks.total (gauge) | Total number of seeks for async buffer |
clickhouse.filesystem.remote.buffers.count (count) | Number of buffers created for asynchronous reading from remote filesystem |
clickhouse.filesystem.remote.buffers.total (gauge) | Number of buffers created for asynchronous reading from remote filesystem |
clickhouse.filesystem.remote.lazy_seeks.count (count) | Number of lazy seeks |
clickhouse.filesystem.remote.lazy_seeks.total (gauge) | Number of lazy seeks |
clickhouse.filesystem.remote.prefetched.reads.count (count) | Number of reads from prefetched buffer |
clickhouse.filesystem.remote.prefetched.reads.total (gauge) | Number of reads from prefetched buffer |
clickhouse.filesystem.remote.prefetched.size.count (count) | Number of bytes from prefetched buffer |
clickhouse.filesystem.remote.prefetched.size.total (gauge) | Number of bytes from prefetched buffer |
clickhouse.filesystem.remote.prefetches.pending.count (count) | Number of prefetches pending at buffer destruction |
clickhouse.filesystem.remote.prefetches.pending.total (gauge) | Number of prefetches pending at buffer destruction |
clickhouse.filesystem.remote.unprefetched.size.count (count) | Number of bytes from unprefetched buffer |
clickhouse.filesystem.remote.unprefetched.size.total (gauge) | Number of bytes from unprefetched buffer |
clickhouse.fs.read.size.count (count) | The number of bytes read from filesystem (including page cache) during the last interval. Shown as byte |
clickhouse.fs.read.size.total (gauge) | The total number of bytes read from filesystem (including page cache). Shown as byte |
clickhouse.fs.write.size.count (count) | The number of bytes written to filesystem (including page cache) during the last interval. Shown as byte |
clickhouse.fs.write.size.total (gauge) | The total number of bytes written to filesystem (including page cache). Shown as byte |
clickhouse.function.filesync.count (count) | Number of times the F_FULLFSYNC/fsync/fdatasync function was called for files. |
clickhouse.function.filesync.time (gauge) | Total time spent waiting for F_FULLFSYNC/fsync/fdatasync syscall for files. Shown as microsecond |
clickhouse.function.filesync.total (gauge) | Number of times the F_FULLFSYNC/fsync/fdatasync function was called for files. |
clickhouse.hash_table.elements.allocated.aggregation.count (count) | How many elements were preallocated in hash tables for aggregation. |
clickhouse.hash_table.elements.allocated.aggregation.total (gauge) | How many elements were preallocated in hash tables for aggregation. |
clickhouse.http_connection.addresses.expired.count (count) | Total count of expired addresses which are no longer present in dns resolve results for http connections |
clickhouse.http_connection.addresses.expired.total (gauge) | Total count of expired addresses which are no longer present in dns resolve results for http connections |
clickhouse.http_connection.addresses.faulty.count (count) | Total count of addresses which have been marked as faulty due to connection errors for http connections |
clickhouse.http_connection.addresses.faulty.total (gauge) | Total count of addresses which have been marked as faulty due to connection errors for http connections |
clickhouse.http_connection.addresses.new.count (count) | Total count of new addresses in dns resolve results for http connections |
clickhouse.http_connection.addresses.new.total (gauge) | Total count of new addresses in dns resolve results for http connections |
clickhouse.index.usearch.distance.compute.count (count) | Number of times distance was computed when adding vectors to usearch indexes. |
clickhouse.index.usearch.distance.compute.total (gauge) | Number of times distance was computed when adding vectors to usearch indexes. |
clickhouse.index.usearch.search.node.visit.count (count) | Number of nodes visited when searching in usearch indexes. |
clickhouse.index.usearch.search.node.visit.total (gauge) | Number of nodes visited when searching in usearch indexes. |
clickhouse.index.usearch.search.operation.count (count) | Number of search operations performed in usearch indexes. |
clickhouse.index.usearch.search.operation.total (gauge) | Number of search operations performed in usearch indexes. |
clickhouse.index.usearch.vector.add.count (count) | Number of vectors added to usearch indexes. |
clickhouse.index.usearch.vector.add.total (gauge) | Number of vectors added to usearch indexes. |
clickhouse.index.usearch.vector.node.visit.count (count) | Number of nodes visited when adding vectors to usearch indexes. |
clickhouse.index.usearch.vector.node.visit.total (gauge) | Number of nodes visited when adding vectors to usearch indexes. |
clickhouse.insert.query.time (gauge) | Total time of INSERT queries. Shown as microsecond |
clickhouse.insert_queue.async.size (gauge) | Number of pending bytes in the AsynchronousInsert queue. |
clickhouse.insert_queue.async.total (gauge) | Number of pending tasks in the AsynchronousInsert queue. |
clickhouse.insert_threads.async.active (gauge) | Number of threads in the AsynchronousInsert thread pool running a task. |
clickhouse.insert_threads.async.scheduled (gauge) | Number of queued or active jobs in the AsynchronousInsert thread pool. |
clickhouse.insert_threads.async.total (gauge) | Number of threads in the AsynchronousInsert thread pool. |
clickhouse.inserts.async.flush.pending (gauge) | Number of asynchronous inserts that are waiting for flush. |
clickhouse.interface.http.received.bytes.count (count) | Number of bytes received through HTTP interfaces |
clickhouse.interface.http.received.bytes.total (gauge) | Number of bytes received through HTTP interfaces |
clickhouse.interface.http.sent.bytes.count (count) | Number of bytes sent through HTTP interfaces |
clickhouse.interface.http.sent.bytes.total (gauge) | Number of bytes sent through HTTP interfaces |
clickhouse.interface.mysql.received.bytes.count (count) | Number of bytes received through MySQL interfaces |
clickhouse.interface.mysql.received.bytes.total (gauge) | Number of bytes received through MySQL interfaces |
clickhouse.interface.mysql.sent.bytes.count (count) | Number of bytes sent through MySQL interfaces |
clickhouse.interface.mysql.sent.bytes.total (gauge) | Number of bytes sent through MySQL interfaces |
clickhouse.interface.native.received.bytes.count (count) | Number of bytes received through native interfaces |
clickhouse.interface.native.received.bytes.total (gauge) | Number of bytes received through native interfaces |
clickhouse.interface.native.sent.bytes.count (count) | Number of bytes sent through native interfaces |
clickhouse.interface.native.sent.bytes.total (gauge) | Number of bytes sent through native interfaces |
clickhouse.interface.postgresql.sent.bytes.count (count) | Number of bytes sent through PostgreSQL interfaces |
clickhouse.interface.postgresql.sent.bytes.total (gauge) | Number of bytes sent through PostgreSQL interfaces |
clickhouse.interface.prometheus.sent.bytes.count (count) | Number of bytes sent through Prometheus interfaces |
clickhouse.interface.prometheus.sent.bytes.total (gauge) | Number of bytes sent through Prometheus interfaces |
clickhouse.io_buffer.allocated.bytes.count (count) | Number of bytes allocated for IO buffers (for ReadBuffer/WriteBuffer). Shown as byte |
clickhouse.io_buffer.allocated.bytes.total (gauge) | Number of bytes allocated for IO buffers (for ReadBuffer/WriteBuffer). Shown as byte |
clickhouse.io_buffer.allocated.count (count) | Number of allocations of IO buffers (for ReadBuffer/WriteBuffer). |
clickhouse.io_buffer.allocated.total (gauge) | Number of allocations of IO buffers (for ReadBuffer/WriteBuffer). |
clickhouse.io_uring.cqe.completed.count (count) | Total number of successfully completed io_uring CQEs |
clickhouse.io_uring.cqe.completed.total (gauge) | Total number of successfully completed io_uring CQEs |
clickhouse.io_uring.cqe.failed.count (count) | Total number of completed io_uring CQEs with failures |
clickhouse.io_uring.cqe.failed.total (gauge) | Total number of completed io_uring CQEs with failures |
clickhouse.io_uring.sqe.resubmitted.count (count) | Total number of io_uring SQE resubmits performed |
clickhouse.io_uring.sqe.resubmitted.total (gauge) | Total number of io_uring SQE resubmits performed |
clickhouse.io_uring.sqe.submitted.count (count) | Total number of io_uring SQEs submitted |
clickhouse.io_uring.sqe.submitted.total (gauge) | Total number of io_uring SQEs submitted |
clickhouse.jemalloc.active (gauge) | (EXPERIMENTAL) Shown as byte |
clickhouse.jemalloc.allocated (gauge) | The amount of memory allocated by ClickHouse. Shown as byte |
clickhouse.jemalloc.background_thread.num_runs (gauge) | (EXPERIMENTAL) Shown as byte |
clickhouse.jemalloc.background_thread.num_threads (gauge) | (EXPERIMENTAL) Shown as thread |
clickhouse.jemalloc.background_thread.run_interval (gauge) | (EXPERIMENTAL) Shown as byte |
clickhouse.jemalloc.mapped (gauge) | The amount of memory in active extents mapped by the allocator. Shown as byte |
clickhouse.jemalloc.metadata (gauge) | The amount of memory dedicated to metadata, which comprise base allocations used for bootstrap-sensitive allocator metadata structures and internal allocations. Shown as byte |
clickhouse.jemalloc.metadata_thp (gauge) | (EXPERIMENTAL) Shown as byte |
clickhouse.jemalloc.resident (gauge) | The amount of memory in physically resident data pages mapped by the allocator, comprising all pages dedicated to allocator metadata, pages backing active allocations, and unused dirty pages. Shown as byte |
clickhouse.jemalloc.retained (gauge) | The amount of memory in virtual memory mappings that were retained rather than being returned to the operating system. Shown as byte |
clickhouse.kafka.background.reads (gauge) | Number of background reads currently working (populating materialized views from Kafka) Shown as read |
clickhouse.kafka.background.reads.count (count) | Number of background reads currently working (populating materialized views from Kafka) |
clickhouse.kafka.background.reads.total (gauge) | Number of background reads currently working (populating materialized views from Kafka) |
clickhouse.kafka.commit.failed.count (count) | Number of failed commits of consumed offsets to Kafka (usually is a sign of some data duplication) |
clickhouse.kafka.commit.failed.total (gauge) | Number of failed commits of consumed offsets to Kafka (usually is a sign of some data duplication) |
clickhouse.kafka.commit.success.count (count) | Number of successful commits of consumed offsets to Kafka (normally should be the same as KafkaBackgroundReads) |
clickhouse.kafka.commit.success.total (gauge) | Number of successful commits of consumed offsets to Kafka (normally should be the same as KafkaBackgroundReads) |
clickhouse.kafka.consumer.errors.count (count) | Number of errors reported by librdkafka during polls |
clickhouse.kafka.consumer.errors.total (gauge) | Number of errors reported by librdkafka during polls |
clickhouse.kafka.consumers.active (gauge) | Number of active Kafka consumers |
clickhouse.kafka.consumers.assigned (gauge) | Number of active Kafka consumers which have some partitions assigned. |
clickhouse.kafka.consumers.in_use (gauge) | Number of consumers which are currently used by direct or background reads |
clickhouse.kafka.direct.read.count (count) | Number of direct selects from Kafka tables since server start |
clickhouse.kafka.direct.read.total (gauge) | Number of direct selects from Kafka tables since server start |
clickhouse.kafka.inserts.running (gauge) | Number of writes (inserts) to Kafka tables Shown as write |
clickhouse.kafka.messages.failed.count (count) | Number of Kafka messages ClickHouse failed to parse |
clickhouse.kafka.messages.failed.total (gauge) | Number of Kafka messages ClickHouse failed to parse |
clickhouse.kafka.messages.polled.count (count) | Number of Kafka messages polled from librdkafka to ClickHouse |
clickhouse.kafka.messages.polled.total (gauge) | Number of Kafka messages polled from librdkafka to ClickHouse |
clickhouse.kafka.messages.produced.count (count) | Number of messages produced to Kafka |
clickhouse.kafka.messages.produced.total (gauge) | Number of messages produced to Kafka |
clickhouse.kafka.messages.read.count (count) | Number of Kafka messages already processed by ClickHouse |
clickhouse.kafka.messages.read.total (gauge) | Number of Kafka messages already processed by ClickHouse |
clickhouse.kafka.partitions.assigned (gauge) | Number of partitions that Kafka tables are currently assigned to |
clickhouse.kafka.producer.errors.count (count) | Number of errors during producing the messages to Kafka |
clickhouse.kafka.producer.errors.total (gauge) | Number of errors during producing the messages to Kafka |
clickhouse.kafka.producer.flushes.count (count) | Number of explicit flushes to Kafka producer |
clickhouse.kafka.producer.flushes.total (gauge) | Number of explicit flushes to Kafka producer |
clickhouse.kafka.producers.active (gauge) | Number of active Kafka producers created |
clickhouse.kafka.rebalance.assignments.count (count) | Number of partition assignments (the final stage of consumer group rebalance) |
clickhouse.kafka.rebalance.assignments.total (gauge) | Number of partition assignments (the final stage of consumer group rebalance) |
clickhouse.kafka.rebalance.errors.count (count) | Number of failed consumer group rebalances |
clickhouse.kafka.rebalance.errors.total (gauge) | Number of failed consumer group rebalances |
clickhouse.kafka.rebalance.revocations.count (count) | Number of partition revocations (the first stage of consumer group rebalance) |
clickhouse.kafka.rebalance.revocations.total (gauge) | Number of partition revocations (the first stage of consumer group rebalance) |
clickhouse.kafka.rows.read.count (count) | Number of rows parsed from Kafka messages |
clickhouse.kafka.rows.read.total (gauge) | Number of rows parsed from Kafka messages |
clickhouse.kafka.rows.rejected.count (count) | Number of parsed rows which were later rejected (due to rebalances / errors or similar reasons). Those rows will be consumed again after the rebalance. |
clickhouse.kafka.rows.rejected.total (gauge) | Number of parsed rows which were later rejected (due to rebalances / errors or similar reasons). Those rows will be consumed again after the rebalance. |
clickhouse.kafka.rows.written.count (count) | Number of rows inserted into Kafka tables |
clickhouse.kafka.rows.written.total (gauge) | Number of rows inserted into Kafka tables |
clickhouse.kafkta.table.writes.count (count) | Number of writes (inserts) to Kafka tables |
clickhouse.kafkta.table.writes.total (gauge) | Number of writes (inserts) to Kafka tables |
clickhouse.keeper.cache.hit.count (count) | Number of times an object storage metadata request was answered from the cache without making a request to Keeper |
clickhouse.keeper.cache.hit.total (gauge) | Number of times an object storage metadata request was answered from the cache without making a request to Keeper |
clickhouse.keeper.cache.miss.count (count) | Number of times an object storage metadata request had to be answered from Keeper |
clickhouse.keeper.cache.miss.total (gauge) | Number of times an object storage metadata request had to be answered from Keeper |
clickhouse.keeper.cache.update.time (gauge) | Total time spent updating the cache, including waiting for responses from Keeper. Shown as microsecond |
clickhouse.keeper.check.requests.count (count) | Number of check requests |
clickhouse.keeper.check.requests.total (gauge) | Number of check requests |
clickhouse.keeper.commits.count (count) | Number of successful commits |
clickhouse.keeper.commits.failed.count (count) | Number of failed commits |
clickhouse.keeper.commits.failed.total (gauge) | Number of failed commits |
clickhouse.keeper.commits.total (gauge) | Number of successful commits |
clickhouse.keeper.create.requests.count (count) | Number of create requests |
clickhouse.keeper.create.requests.total (gauge) | Number of create requests |
clickhouse.keeper.exists.requests.count (count) | Number of exists requests |
clickhouse.keeper.exists.requests.total (gauge) | Number of exists requests |
clickhouse.keeper.get.requests.count (count) | Number of get requests |
clickhouse.keeper.get.requests.total (gauge) | Number of get requests |
clickhouse.keeper.latency.count (count) | Keeper latency |
clickhouse.keeper.latency.total (gauge) | Keeper latency |
clickhouse.keeper.list.requests.count (count) | Number of list requests |
clickhouse.keeper.list.requests.total (gauge) | Number of list requests |
clickhouse.keeper.log_entry.file.prefetched.count (count) | Number of log entries in Keeper being prefetched from the changelog file |
clickhouse.keeper.log_entry.file.prefetched.total (gauge) | Number of log entries in Keeper being prefetched from the changelog file |
clickhouse.keeper.log_entry.file.read.count (count) | Number of log entries in Keeper being read directly from the changelog file |
clickhouse.keeper.log_entry.file.read.total (gauge) | Number of log entries in Keeper being read directly from the changelog file |
clickhouse.keeper.multi.requests.count (count) | Number of multi requests |
clickhouse.keeper.multi.requests.total (gauge) | Number of multi requests |
clickhouse.keeper.multi_read.requests.count (count) | Number of multi read requests |
clickhouse.keeper.multi_read.requests.total (gauge) | Number of multi read requests |
clickhouse.keeper.packets.received.count (count) | Packets received by keeper server |
clickhouse.keeper.packets.received.total (gauge) | Packets received by keeper server |
clickhouse.keeper.packets.sent.count (count) | Packets sent by keeper server |
clickhouse.keeper.packets.sent.total (gauge) | Packets sent by keeper server |
clickhouse.keeper.reconfig.requests.count (count) | Number of reconfig requests |
clickhouse.keeper.reconfig.requests.total (gauge) | Number of reconfig requests |
clickhouse.keeper.reconnects.count (count) | Number of times a reconnect to Keeper was done |
clickhouse.keeper.reconnects.total (gauge) | Number of times a reconnect to Keeper was done |
clickhouse.keeper.remove.requests.count (count) | Number of remove requests |
clickhouse.keeper.remove.requests.total (gauge) | Number of remove requests |
clickhouse.keeper.requests.count (count) | Number of times a request was made to Keeper |
clickhouse.keeper.requests.total (gauge) | Number of times a request was made to Keeper |
clickhouse.keeper.requests.total.count (count) | Total number of requests on the Keeper server |
clickhouse.keeper.requests.total.total (gauge) | Total number of requests on the Keeper server |
clickhouse.keeper.set.requests.count (count) | Number of set requests |
clickhouse.keeper.set.requests.total (gauge) | Number of set requests |
clickhouse.keeper.snapshot.apply.count (count) | Number of snapshot applications |
clickhouse.keeper.snapshot.apply.failed.count (count) | Number of failed snapshot applications |
clickhouse.keeper.snapshot.apply.failed.total (gauge) | Number of failed snapshot applications |
clickhouse.keeper.snapshot.apply.total (gauge) | Number of snapshot applications |
clickhouse.keeper.snapshot.create.count (count) | Number of snapshot creations |
clickhouse.keeper.snapshot.create.total (gauge) | Number of snapshot creations |
clickhouse.keeper.snapshot.read.count (count) | Number of snapshot reads (serialization) |
clickhouse.keeper.snapshot.read.total (gauge) | Number of snapshot reads (serialization) |
clickhouse.keeper.snapshot.save.count (count) | Number of snapshot saves |
clickhouse.keeper.snapshot.save.total (gauge) | Number of snapshot saves |
clickhouse.keerper.snapshot.create.failed.count (count) | Number of failed snapshot creations |
clickhouse.keerper.snapshot.create.failed.total (gauge) | Number of failed snapshot creations |
clickhouse.lock.context.acquisition.count (count) | The number of times the lock of Context was acquired or an acquisition was attempted during the last interval. This is a global lock. Shown as event |
clickhouse.lock.context.acquisition.total (gauge) | The total number of times the lock of Context was acquired or an acquisition was attempted. This is a global lock. Shown as event |
clickhouse.lock.context.wait_time.count (count) | Context lock wait time in microseconds |
clickhouse.lock.context.wait_time.total (gauge) | Context lock wait time in microseconds |
clickhouse.lock.read.rwlock.acquired.count (count) | Number of times a read lock was acquired (in a heavy RWLock). |
clickhouse.lock.read.rwlock.acquired.time (gauge) | Total time spent waiting for a read lock to be acquired (in a heavy RWLock). Shown as microsecond |
clickhouse.lock.read.rwlock.acquired.total (gauge) | Number of times a read lock was acquired (in a heavy RWLock). |
clickhouse.lock.write.rwlock.acquired.count (count) | Number of times a write lock was acquired (in a heavy RWLock). |
clickhouse.lock.write.rwlock.acquired.time (gauge) | Total time spent waiting for a write lock to be acquired (in a heavy RWLock). Shown as microsecond |
clickhouse.lock.write.rwlock.acquired.total (gauge) | Number of times a write lock was acquired (in a heavy RWLock). |
clickhouse.log.entry.merge.created.count (count) | Successfully created log entry to merge parts in ReplicatedMergeTree. Shown as event |
clickhouse.log.entry.merge.created.total (gauge) | Total number of successfully created log entries to merge parts in ReplicatedMergeTree. Shown as event |
clickhouse.log.entry.merge.not_created.count (count) | Log entry to merge parts in ReplicatedMergeTree is not created due to concurrent log update by another replica. Shown as event |
clickhouse.log.entry.merge.not_created.total (gauge) | Total log entries to merge parts in ReplicatedMergeTree not created due to concurrent log update by another replica. Shown as event |
clickhouse.log.entry.mutation.created.count (count) | Successfully created log entry to mutate parts in ReplicatedMergeTree. Shown as event |
clickhouse.log.entry.mutation.created.total (gauge) | Total number of successfully created log entries to mutate parts in ReplicatedMergeTree. Shown as event |
clickhouse.log.entry.mutation.not_created.count (count) | Log entry to mutate parts in ReplicatedMergeTree is not created due to concurrent log update by another replica. Shown as event |
clickhouse.log.entry.mutation.not_created.total (gauge) | Total log entries to mutate parts in ReplicatedMergeTree not created due to concurrent log update by another replica. Shown as event |
clickhouse.log.messages.debug.count (count) | Number of log messages with level Debug |
clickhouse.log.messages.debug.total (gauge) | Number of log messages with level Debug |
clickhouse.log.messages.error.count (count) | Number of log messages with level Error |
clickhouse.log.messages.error.total (gauge) | Number of log messages with level Error |
clickhouse.log.messages.fatal.count (count) | Number of log messages with level Fatal |
clickhouse.log.messages.fatal.total (gauge) | Number of log messages with level Fatal |
clickhouse.log.messages.info.count (count) | Number of log messages with level Info |
clickhouse.log.messages.info.total (gauge) | Number of log messages with level Info |
clickhouse.log.messages.test.count (count) | Number of log messages with level Test |
clickhouse.log.messages.test.total (gauge) | Number of log messages with level Test |
clickhouse.log.messages.trace.count (count) | Number of log messages with level Trace |
clickhouse.log.messages.trace.total (gauge) | Number of log messages with level Trace |
clickhouse.log.messages.warning.count (count) | Number of log messages with level Warning |
clickhouse.log.messages.warning.total (gauge) | Number of log messages with level Warning |
clickhouse.marks.load.time (gauge) | Time spent loading marks Shown as microsecond |
clickhouse.marks.loaded.bytes.count (count) | Size of in-memory representations of loaded marks. |
clickhouse.marks.loaded.bytes.total (gauge) | Size of in-memory representations of loaded marks. |
clickhouse.marks.loaded.count.count (count) | Number of marks loaded (total across columns). |
clickhouse.marks.loaded.count.total (gauge) | Number of marks loaded (total across columns). |
clickhouse.memory.allocator.purge.count (count) | Total number of times memory allocator purge was requested |
clickhouse.memory.allocator.purge.time (gauge) | Total time spent in memory allocator purges. Shown as microsecond |
clickhouse.memory.allocator.purge.total (gauge) | Total number of times memory allocator purge was requested |
clickhouse.memory.allocator.purge.wait.time (gauge) | Total time spent in waiting for memory to be freed in OvercommitTracker. Shown as microsecond |
clickhouse.memory.arena.bytes.count (count) | Number of bytes allocated for memory Arena (used for GROUP BY and similar operations) Shown as byte |
clickhouse.memory.arena.bytes.total (gauge) | Number of bytes allocated for memory Arena (used for GROUP BY and similar operations) Shown as byte |
clickhouse.memory.arena.chunks.count (count) | Number of chunks allocated for memory Arena (used for GROUP BY and similar operations) |
clickhouse.memory.arena.chunks.total (gauge) | Number of chunks allocated for memory Arena (used for GROUP BY and similar operations) |
clickhouse.memory.external.join.files.merged.count (count) | Number of times temporary files were merged for JOIN in external memory. |
clickhouse.memory.external.join.files.merged.total (gauge) | Number of times temporary files were merged for JOIN in external memory. |
clickhouse.memory.external.join.files.num_written.count (count) | Number of times a temporary file was written to disk for JOIN in external memory. |
clickhouse.memory.external.join.files.num_written.total (gauge) | Number of times a temporary file was written to disk for JOIN in external memory. |
clickhouse.memory.external.sort.files.num_written.count (count) | Number of times a temporary file was written to disk for sorting in external memory. |
clickhouse.memory.external.sort.files.num_written.total (gauge) | Number of times a temporary file was written to disk for sorting in external memory. |
clickhouse.merge.active (gauge) | The number of executing background merges Shown as merge |
clickhouse.merge.count (count) | The number of launched background merges during the last interval. Shown as merge |
clickhouse.merge.disk.reserved (gauge) | Disk space reserved for currently running background merges. It is slightly more than the total size of currently merging parts. Shown as byte |
clickhouse.merge.memory (gauge) | Total amount of memory allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks. Shown as byte |
clickhouse.merge.parts.compact.count (count) | Number of parts merged into Compact format. |
clickhouse.merge.parts.compact.total (gauge) | Number of parts merged into Compact format. |
clickhouse.merge.parts.wide.count (count) | Number of parts merged into Wide format. |
clickhouse.merge.parts.wide.total (gauge) | Number of parts merged into Wide format. |
clickhouse.merge.read.size.uncompressed.count (count) | The number of uncompressed bytes (for columns as they are stored in memory) that were read for background merges during the last interval. This is the number before the merge. Shown as byte |
clickhouse.merge.read.size.uncompressed.total (gauge) | The total number of uncompressed bytes (for columns as they are stored in memory) that were read for background merges. This is the number before the merge. Shown as byte |
clickhouse.merge.row.read.count (count) | The number of rows read for background merges during the last interval. This is the number of rows before merge. Shown as row |
clickhouse.merge.row.read.total (gauge) | The total number of rows read for background merges. This is the number of rows before merge. Shown as row |
clickhouse.merge.time (gauge) | The percentage of time spent for background merges during the last interval. Shown as percent |
clickhouse.merge.total (gauge) | The total number of launched background merges. Shown as merge |
clickhouse.merge_tree.announcements.sent (gauge) | The number of announcements sent from the remote server to the initiator server about the set of data parts (for MergeTree tables). Measured on the remote server side. |
clickhouse.merge_tree.read_task.requests.sent (gauge) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for MergeTree tables). Measured on the remote server side. |
clickhouse.merges_mutations.bytes.total (gauge) | Total amount of memory (bytes) allocated by background tasks (merges and mutations). |
clickhouse.mmapped.file.current (gauge) | Total number of mmapped files. Shown as file |
clickhouse.mmapped.file.size (gauge) | Sum size of mmapped file regions. Shown as byte |
clickhouse.moves.executing.currently (gauge) | Number of currently executing moves |
clickhouse.network.receive.elapsed.time (gauge) | Total time spent waiting for data to receive or receiving data from the network. Shown as microsecond |
clickhouse.network.receive.size.count (count) | The number of bytes received from network. Shown as byte |
clickhouse.network.receive.size.total (gauge) | The total number of bytes received from network. Shown as byte |
clickhouse.network.send.elapsed.time (gauge) | Total time spent waiting for data to send to network or sending data to network. Shown as microsecond |
clickhouse.network.send.size.count (count) | The number of bytes sent to the network. Shown as byte |
clickhouse.network.send.size.total (gauge) | The total number of bytes sent to the network. Shown as byte |
clickhouse.network.threads.receive (gauge) | Number of threads receiving data from the network. Shown as thread |
clickhouse.network.threads.send (gauge) | Number of threads sending data to the network. Shown as thread |
clickhouse.node.remove.count (count) | The number of times an error happened while trying to remove an ephemeral node during the last interval. This is usually not an issue, because ClickHouse's implementation of the ZooKeeper library guarantees that the session will expire and the node will be removed. Shown as error |
clickhouse.node.remove.total (gauge) | The total number of times an error happened while trying to remove an ephemeral node. This is usually not an issue, because ClickHouse's implementation of the ZooKeeper library guarantees that the session will expire and the node will be removed. Shown as error |
clickhouse.part.max (gauge) | The maximum number of active parts in partitions. Shown as item |
clickhouse.parts.active (gauge) | [Only versions >= 22.7.1] Active data part used by current and upcoming SELECTs. Shown as item |
clickhouse.parts.committed (gauge) | Active data part, used by current and upcoming SELECTs. Shown as item |
clickhouse.parts.compact (gauge) | Compact parts. Shown as item |
clickhouse.parts.compact.inserted.count (count) | Number of parts inserted in Compact format. Shown as item |
clickhouse.parts.compact.inserted.total (gauge) | Number of parts inserted in Compact format. Shown as item |
clickhouse.parts.delete_on_destroy (gauge) | Part was moved to another disk and should be deleted in its own destructor. Shown as item |
clickhouse.parts.deleting (gauge) | Inactive data part with an identity refcounter; it is being deleted right now by a cleaner. Shown as item |
clickhouse.parts.inmemory (gauge) | In-memory parts. Shown as item |
clickhouse.parts.mutations.applied.fly.count (count) | Total number of parts for which any mutation was applied on the fly |
clickhouse.parts.mutations.applied.fly.total (gauge) | Total number of parts for which any mutation was applied on the fly |
clickhouse.parts.outdated (gauge) | Inactive data part that can be used only by current SELECTs and can be deleted after those SELECTs finish. Shown as item |
clickhouse.parts.pre_active (gauge) | [Only versions >= 22.7.1] The part is in data_parts but not used for SELECTs. Shown as item |
clickhouse.parts.precommitted (gauge) | The part is in data_parts, but not used for SELECTs. Shown as item |
clickhouse.parts.temporary (gauge) | The part is being generated now; it is not in the data_parts list. Shown as item |
clickhouse.parts.wide (gauge) | Wide parts. Shown as item |
clickhouse.parts.wide.inserted.count (count) | Number of parts inserted in Wide format. |
clickhouse.parts.wide.inserted.total (gauge) | Number of parts inserted in Wide format. |
clickhouse.perf.alignment.faults.count (count) | Number of alignment faults. These happen when unaligned memory accesses happen; the kernel can handle these but it reduces performance. This happens only on some architectures (never on x86). Shown as event |
clickhouse.perf.alignment.faults.total (gauge) | Total number of alignment faults. These happen when unaligned memory accesses happen; the kernel can handle these but it reduces performance. This happens only on some architectures (never on x86). Shown as event |
clickhouse.perf.branch.instructions.count (count) | Retired branch instructions. Prior to Linux 2.6.35, this used the wrong event on AMD processors. Shown as unit |
clickhouse.perf.branch.instructions.total (gauge) | Total retired branch instructions. Prior to Linux 2.6.35, this used the wrong event on AMD processors. Shown as unit |
clickhouse.perf.branch.misses.count (count) | Mispredicted branch instructions. Shown as unit |
clickhouse.perf.branch.misses.total (gauge) | Total mispredicted branch instructions. Shown as unit |
clickhouse.perf.bus.cycles.count (count) | Bus cycles, which can be different from total cycles. Shown as unit |
clickhouse.perf.bus.cycles.total (gauge) | Total bus cycles, which can be different from total cycles. Shown as unit |
clickhouse.perf.cache.misses.count (count) | Cache misses. Usually this indicates Last Level Cache misses; this is intended to be used in conjunction with the PERF_COUNT_HW_CACHE_REFERENCES event to calculate cache miss rates. Shown as miss |
clickhouse.perf.cache.misses.total (gauge) | Cache misses. Usually this indicates total Last Level Cache misses; this is intended to be used in conjunction with the PERF_COUNT_HW_CACHE_REFERENCES event to calculate cache miss rates. Shown as miss |
clickhouse.perf.cache.references.count (count) | Cache accesses. Usually this indicates Last Level Cache accesses but this may vary depending on your CPU. This may include prefetches and coherency messages; again this depends on the design of your CPU. Shown as unit |
clickhouse.perf.cache.references.total (gauge) | Cache accesses. Usually this indicates total Last Level Cache accesses but this may vary depending on your CPU. This may include prefetches and coherency messages; again this depends on the design of your CPU. Shown as unit |
clickhouse.perf.context.switches.count (count) | Number of context switches |
clickhouse.perf.context.switches.total (gauge) | Total number of context switches |
clickhouse.perf.cpu.clock (gauge) | The CPU clock, a high-resolution per-CPU timer. Shown as unit |
clickhouse.perf.cpu.cycles.count (count) | CPU cycles. Be wary of what happens during CPU frequency scaling. Shown as unit |
clickhouse.perf.cpu.cycles.total (gauge) | Total CPU cycles. Be wary of what happens during CPU frequency scaling. Shown as unit |
clickhouse.perf.cpu.migrations.count (count) | Number of times the process has migrated to a new CPU Shown as unit |
clickhouse.perf.cpu.migrations.total (gauge) | Total number of times the process has migrated to a new CPU Shown as unit |
clickhouse.perf.cpu.ref_cycles.count (count) | CPU cycles; not affected by CPU frequency scaling. Shown as unit |
clickhouse.perf.cpu.ref_cycles.total (gauge) | Total cycles; not affected by CPU frequency scaling. Shown as unit |
clickhouse.perf.data.tlb.misses.count (count) | Data TLB misses Shown as miss |
clickhouse.perf.data.tlb.misses.total (gauge) | Total data TLB misses Shown as miss |
clickhouse.perf.data.tlb.references.count (count) | Data TLB references Shown as unit |
clickhouse.perf.data.tlb.references.total (gauge) | Total data TLB references Shown as unit |
clickhouse.perf.emulation.faults.count (count) | Number of emulation faults. The kernel sometimes traps on unimplemented instructions and emulates them for user space. This can negatively impact performance. Shown as fault |
clickhouse.perf.emulation.faults.total (gauge) | Total number of emulation faults. The kernel sometimes traps on unimplemented instructions and emulates them for user space. This can negatively impact performance. Shown as fault |
clickhouse.perf.instruction.tlb.misses.count (count) | Instruction TLB misses Shown as miss |
clickhouse.perf.instruction.tlb.misses.total (gauge) | Total instruction TLB misses Shown as miss |
clickhouse.perf.instruction.tlb.references.count (count) | Instruction TLB references Shown as unit |
clickhouse.perf.instruction.tlb.references.total (gauge) | Total instruction TLB references Shown as unit |
clickhouse.perf.instructions.count (count) | Retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts. Shown as unit |
clickhouse.perf.instructions.total (gauge) | Total retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts. Shown as unit |
clickhouse.perf.local_memory.misses.count (count) | Local NUMA node memory read misses Shown as miss |
clickhouse.perf.local_memory.misses.total (gauge) | Total local NUMA node memory read misses Shown as miss |
clickhouse.perf.local_memory.references.count (count) | Local NUMA node memory reads Shown as unit |
clickhouse.perf.local_memory.references.total (gauge) | Total local NUMA node memory reads Shown as unit |
clickhouse.perf.min_enabled.min_time (gauge) | For all events, minimum time that an event was enabled. Used to track event multiplexing influence. Shown as microsecond |
clickhouse.perf.min_enabled.running_time (gauge) | Running time for event with minimum enabled time. Used to track the amount of event multiplexing Shown as microsecond |
clickhouse.perf.stalled_cycles.backend.count (count) | Stalled cycles during retirement. Shown as unit |
clickhouse.perf.stalled_cycles.backend.total (gauge) | Total stalled cycles during retirement. Shown as unit |
clickhouse.perf.stalled_cycles.frontend.count (count) | Stalled cycles during issue. Shown as unit |
clickhouse.perf.stalled_cycles.frontend.total (gauge) | Total stalled cycles during issue. Shown as unit |
clickhouse.perf.task.clock (gauge) | A clock count specific to the task that is running |
clickhouse.pool.polygon.added.count (count) | A polygon has been added to the cache (pool) for the 'pointInPolygon' function. |
clickhouse.pool.polygon.added.total (gauge) | A polygon has been added to the cache (pool) for the 'pointInPolygon' function. |
clickhouse.pool.polygon.bytes.count (count) | The number of bytes for polygons added to the cache (pool) for the 'pointInPolygon' function. |
clickhouse.pool.polygon.bytes.total (gauge) | The number of bytes for polygons added to the cache (pool) for the 'pointInPolygon' function. |
clickhouse.postgresql.connection (gauge) | Number of client connections using PostgreSQL protocol Shown as connection |
clickhouse.processing.external.files.total.count (count) | Number of files used by external processing (sorting/aggregating/joining) |
clickhouse.processing.external.files.total.total (gauge) | Number of files used by external processing (sorting/aggregating/joining) |
clickhouse.queries.read.new_parts.ignored.count (count) | See setting ignore_cold_parts_seconds. Number of times read queries ignored very new parts that weren't pulled into cache by CacheWarmer yet. |
clickhouse.queries.read.new_parts.ignored.total (gauge) | See setting ignore_cold_parts_seconds. Number of times read queries ignored very new parts that weren't pulled into cache by CacheWarmer yet. |
clickhouse.queries.read.outdated.parts.count (count) | See setting prefer_warmed_unmerged_parts_seconds. Number of times read queries used outdated pre-merge parts that are in cache instead of a merged part that wasn't pulled into cache by CacheWarmer yet. |
clickhouse.queries.read.outdated.parts.total (gauge) | See setting prefer_warmed_unmerged_parts_seconds. Number of times read queries used outdated pre-merge parts that are in cache instead of a merged part that wasn't pulled into cache by CacheWarmer yet. |
clickhouse.query.active (gauge) | The number of executing queries Shown as query |
clickhouse.query.async.insert.bytes.count (count) | Data size in bytes of asynchronous INSERT queries. |
clickhouse.query.async.insert.bytes.total (gauge) | Data size in bytes of asynchronous INSERT queries. |
clickhouse.query.async.insert.count (count) | Same as InsertQuery, but only for asynchronous INSERT queries. |
clickhouse.query.async.insert.failed.count (count) | Number of failed ASYNC INSERT queries. |
clickhouse.query.async.insert.failed.total (gauge) | Number of failed ASYNC INSERT queries. |
clickhouse.query.async.insert.hash_id.duplicate.count (count) | Number of times a duplicate hash id has been found in asynchronous INSERT hash id cache. |
clickhouse.query.async.insert.hash_id.duplicate.total (gauge) | Number of times a duplicate hash id has been found in asynchronous INSERT hash id cache. |
clickhouse.query.async.insert.rows.count (count) | Number of rows inserted by asynchronous INSERT queries. |
clickhouse.query.async.insert.rows.total (gauge) | Number of rows inserted by asynchronous INSERT queries. |
clickhouse.query.async.insert.total (gauge) | Same as InsertQuery, but only for asynchronous INSERT queries. |
clickhouse.query.async.loader.wait.time (gauge) | Total time a query was waiting for async loader jobs. Shown as microsecond |
clickhouse.query.count (count) | The number of queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.failed.count (count) | Number of failed queries. Shown as query |
clickhouse.query.failed.total (gauge) | Total number of failed queries. Shown as query |
clickhouse.query.initial.count (count) | Same as Query, but only counts initial queries (see is_initial_query). |
clickhouse.query.initial.total (gauge) | Same as Query, but only counts initial queries (see is_initial_query). |
clickhouse.query.insert.count (count) | The number of INSERT queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.insert.delayed (gauge) | The number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table. Shown as query |
clickhouse.query.insert.failed.count (count) | Same as FailedQuery, but only for INSERT queries. Shown as query |
clickhouse.query.insert.failed.total (gauge) | Same as FailedQuery, but only for INSERT queries. Shown as query |
clickhouse.query.insert.subqueries.count (count) | Count INSERT queries with all subqueries |
clickhouse.query.insert.subqueries.total (gauge) | Count INSERT queries with all subqueries |
clickhouse.query.insert.total (gauge) | The total number of INSERT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.local_timers.active (gauge) | Number of created thread-local timers in QueryProfiler |
clickhouse.query.mask.match.count (count) | The number of times query masking rules were successfully matched during the last interval. Shown as occurrence |
clickhouse.query.mask.match.total (gauge) | The total number of times query masking rules were successfully matched. Shown as occurrence |
clickhouse.query.memory (gauge) | Total amount of memory allocated in currently executing queries. Note that some memory allocations may not be accounted. Shown as byte |
clickhouse.query.memory.limit_exceeded.count (count) | Number of times the memory limit was exceeded for a query. |
clickhouse.query.memory.limit_exceeded.total (gauge) | Total number of times the memory limit was exceeded for a query. |
clickhouse.query.mutation (gauge) | The number of mutations (ALTER DELETE/UPDATE) Shown as query |
clickhouse.query.other.time (gauge) | Total time of queries that are not SELECT or INSERT. Shown as microsecond |
clickhouse.query.overflow.any.count (count) | Number of times approximate GROUP BY was in effect: when aggregation was performed only on top of the first 'max_rows_to_group_by' unique keys and other keys were ignored due to 'group_by_overflow_mode' = 'any'. |
clickhouse.query.overflow.any.total (gauge) | Number of times approximate GROUP BY was in effect: when aggregation was performed only on top of the first 'max_rows_to_group_by' unique keys and other keys were ignored due to 'group_by_overflow_mode' = 'any'. |
clickhouse.query.overflow.break.count (count) | Number of times data processing was cancelled by a query complexity limitation with the setting '*_overflow_mode' = 'break' and the result is incomplete. |
clickhouse.query.overflow.break.total (gauge) | Number of times data processing was cancelled by a query complexity limitation with the setting '*_overflow_mode' = 'break' and the result is incomplete. |
clickhouse.query.overflow.throw.count (count) | Number of times data processing was cancelled by a query complexity limitation with the setting '*_overflow_mode' = 'throw' and an exception was thrown. |
clickhouse.query.overflow.throw.total (gauge) | Number of times data processing was cancelled by a query complexity limitation with the setting '*_overflow_mode' = 'throw' and an exception was thrown. |
clickhouse.query.profiler.runs.count (count) | Number of times QueryProfiler had been run. |
clickhouse.query.profiler.runs.total (gauge) | Number of times QueryProfiler had been run. |
clickhouse.query.read.backoff.count (count) | The number of times the number of query processing threads was lowered due to slow reads during the last interval. Shown as occurrence |
clickhouse.query.read.backoff.total (gauge) | The total number of times the number of query processing threads was lowered due to slow reads. Shown as occurrence |
clickhouse.query.select.count (count) | The number of SELECT queries to be interpreted and potentially executed during the last interval. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.select.subqueries.count (count) | Count SELECT queries with all subqueries |
clickhouse.query.select.subqueries.total (gauge) | Count SELECT queries with all subqueries |
clickhouse.query.select.time (gauge) | Total time of SELECT queries. Shown as microsecond |
clickhouse.query.select.total (gauge) | The total number of SELECT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.signal.dropped.count (count) | The number of times the processing of a signal was dropped due to overrun plus the number of signals that the OS has not delivered due to overrun during the last interval. Shown as occurrence |
clickhouse.query.signal.dropped.total (gauge) | The total number of times the processing of a signal was dropped due to overrun plus the number of signals that the OS has not delivered due to overrun. Shown as occurrence |
clickhouse.query.sleep.time (gauge) | The percentage of time a query was sleeping to conform to the max_network_bandwidth setting during the last interval. Shown as percent |
clickhouse.query.subqueries.count (count) | Count queries with all subqueries |
clickhouse.query.subqueries.total (gauge) | Count queries with all subqueries |
clickhouse.query.time (gauge) | Total time of all queries. Shown as microsecond |
clickhouse.query.timers.active (gauge) | Number of active thread-local timers in QueryProfiler |
clickhouse.query.total (gauge) | The total number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries. Shown as query |
clickhouse.query.waiting (gauge) | The number of queries that are stopped and waiting due to 'priority' setting. Shown as query |
clickhouse.read.buffer.mmap.created.count (count) | Number of times a read buffer using 'mmap' was created for reading data (while choosing among other read methods). |
clickhouse.read.buffer.mmap.created.total (gauge) | Number of times a read buffer using 'mmap' was created for reading data (while choosing among other read methods). |
clickhouse.read.buffer.mmap.failed.count (count) | Number of times a read buffer with 'mmap' was attempted to be created for reading data (while choosing among other read methods), but the OS did not allow it (due to lack of filesystem support or other reasons) and we fell back to the ordinary reading method. |
clickhouse.read.buffer.mmap.failed.total (gauge) | Number of times a read buffer with 'mmap' was attempted to be created for reading data (while choosing among other read methods), but the OS did not allow it (due to lack of filesystem support or other reasons) and we fell back to the ordinary reading method. |
clickhouse.read.buffer.o_direct.created.count (count) | Number of times a read buffer with O_DIRECT was created for reading data (while choosing among other read methods). |
clickhouse.read.buffer.o_direct.created.total (gauge) | Number of times a read buffer with O_DIRECT was created for reading data (while choosing among other read methods). |
clickhouse.read.buffer.o_direct.failed.count (count) | Number of times a read buffer with O_DIRECT was attempted to be created for reading data (while choosing among other read methods), but the OS did not allow it (due to lack of filesystem support or other reasons) and we fell back to the ordinary reading method. |
clickhouse.read.buffer.o_direct.failed.total (gauge) | Number of times a read buffer with O_DIRECT was attempted to be created for reading data (while choosing among other read methods), but the OS did not allow it (due to lack of filesystem support or other reasons) and we fell back to the ordinary reading method. |
clickhouse.read.buffer.ordinary.created.count (count) | Number of times ordinary read buffer was created for reading data (while choosing among other read methods). |
clickhouse.read.buffer.ordinary.created.total (gauge) | Number of times ordinary read buffer was created for reading data (while choosing among other read methods). |
clickhouse.read.compressed.block.count (count) | The number of compressed blocks (the blocks of data that are compressed independent of each other) read from compressed sources (files, network) during the last interval. Shown as block |
clickhouse.read.compressed.block.total (gauge) | The total number of compressed blocks (the blocks of data that are compressed independent of each other) read from compressed sources (files, network). Shown as block |
clickhouse.read.compressed.raw.size.count (count) | The number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network) during the last interval. Shown as byte |
clickhouse.read.compressed.raw.size.total (gauge) | The total number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network). Shown as byte |
clickhouse.read.compressed.size.count (count) | The number of bytes (the number of bytes before decompression) read from compressed sources (files, network) during the last interval. Shown as byte |
clickhouse.read.compressed.size.total (gauge) | The total number of bytes (the number of bytes before decompression) read from compressed sources (files, network). Shown as byte |
clickhouse.read.connections.new.count (count) | Number of seeks that led to a new connection (S3, HTTP) |
clickhouse.read.connections.new.total (gauge) | Number of seeks that led to a new connection (S3, HTTP) |
clickhouse.read.synchronous.wait.time (gauge) | Time spent in waiting for synchronous reads in asynchronous local read. Shown as microsecond |
clickhouse.remote.query.read_throttler.sleep.time (gauge) | Total time a query was sleeping to conform to 'max_remote_read_network_bandwidth_for_server'/'max_remote_read_network_bandwidth' throttling. Shown as microsecond |
clickhouse.remote.query.write_throttler.sleep.time (gauge) | Total time a query was sleeping to conform to 'max_remote_write_network_bandwidth_for_server'/'max_remote_write_network_bandwidth' throttling. Shown as microsecond |
clickhouse.remote.read.synchronous.wait.time (gauge) | Time spent in waiting for synchronous remote reads. Shown as microsecond |
clickhouse.remote.read_throttler.bytes.count (count) | Bytes passed through 'max_remote_read_network_bandwidth_for_server'/'max_remote_read_network_bandwidth' throttler. |
clickhouse.remote.read_throttler.bytes.total (gauge) | Bytes passed through 'max_remote_read_network_bandwidth_for_server'/'max_remote_read_network_bandwidth' throttler. |
clickhouse.remote.write_throttler.bytes.count (count) | Bytes passed through 'max_remote_write_network_bandwidth_for_server'/'max_remote_write_network_bandwidth' throttler. |
clickhouse.remote.write_throttler.bytes.total (gauge) | Bytes passed through 'max_remote_write_network_bandwidth_for_server'/'max_remote_write_network_bandwidth' throttler. |
clickhouse.remote_reader.total (gauge) | Number of reads with a remote reader in flight |
clickhouse.replica.delay.absolute (gauge) | The maximum replica queue delay relative to current time. Shown as millisecond |
clickhouse.replica.delay.relative (gauge) | The maximum difference of absolute delay from any other replica. Shown as millisecond |
clickhouse.replica.leader.election (gauge) | The number of replicas participating in leader election. Equals the total number of replicas in usual cases. Shown as shard |
clickhouse.replica.queue.size (gauge) | The number of replication tasks in queue. Shown as task |
clickhouse.replicas.parralel.announcement.handle.time (gauge) | Time spent processing replicas announcements Shown as microsecond |
clickhouse.replicas.parralel.available.count (count) | Number of replicas available to execute a query with task-based parallel replicas |
clickhouse.replicas.parralel.available.total (gauge) | Number of replicas available to execute a query with task-based parallel replicas |
clickhouse.replicas.parralel.collect_segment.time (gauge) | Time spent collecting segments meant by hash Shown as microsecond |
clickhouse.replicas.parralel.hash.stealing.time (gauge) | Time spent collecting segments meant for stealing by hash Shown as microsecond |
clickhouse.replicas.parralel.leftover_segment.stealing.time (gauge) | Time spent collecting orphaned segments Shown as microsecond |
clickhouse.replicas.parralel.processing.time (gauge) | Time spent processing data parts Shown as microsecond |
clickhouse.replicas.parralel.request.handle.time (gauge) | Time spent processing requests for marks from replicas Shown as microsecond |
clickhouse.replicas.parralel.requests.count (count) | Number of requests to the initiator. |
clickhouse.replicas.parralel.requests.total (gauge) | Number of requests to the initiator. |
clickhouse.replicas.parralel.used.count (count) | Number of replicas used to execute a query with task-based parallel replicas |
clickhouse.replicas.parralel.used.total (gauge) | Number of replicas used to execute a query with task-based parallel replicas |
clickhouse.s3.abort_multipart_upload.count (count) | Number of S3 API AbortMultipartUpload calls. |
clickhouse.s3.abort_multipart_upload.total (gauge) | Number of S3 API AbortMultipartUpload calls. |
clickhouse.s3.client.copy.reuse.count (count) | Number of S3 client copies that reuse an existing auth provider from another client. |
clickhouse.s3.client.copy.reuse.total (gauge) | Number of S3 client copies that reuse an existing auth provider from another client. |
clickhouse.s3.clients.created.count (count) | Number of created S3 clients. |
clickhouse.s3.clients.created.total (gauge) | Number of created S3 clients. |
clickhouse.s3.complete_multipart_upload.count (count) | Number of S3 API CompleteMultipartUpload calls. |
clickhouse.s3.complete_multipart_upload.total (gauge) | Number of S3 API CompleteMultipartUpload calls. |
clickhouse.s3.connect.time (gauge) | Time spent initializing connection to S3. Shown as microsecond |
clickhouse.s3.copy_object.count (count) | Number of S3 API CopyObject calls. |
clickhouse.s3.copy_object.total (gauge) | Number of S3 API CopyObject calls. |
clickhouse.s3.create_multipart_upload.count (count) | Number of S3 API CreateMultipartUpload calls. |
clickhouse.s3.create_multipart_upload.total (gauge) | Number of S3 API CreateMultipartUpload calls. |
clickhouse.s3.delete_obkect.count (count) | Number of S3 API DeleteObject(s) calls. |
clickhouse.s3.delete_obkect.total (gauge) | Number of S3 API DeleteObject(s) calls. |
clickhouse.s3.get_object.count (count) | Number of S3 API GetObject calls. |
clickhouse.s3.get_object.total (gauge) | Number of S3 API GetObject calls. |
clickhouse.s3.get_object_attribute.count (count) | Number of S3 API GetObjectAttributes calls. |
clickhouse.s3.get_object_attribute.total (gauge) | Number of S3 API GetObjectAttributes calls. |
clickhouse.s3.get_request.throttled.count (count) | Number of S3 GET and SELECT requests passed through throttler. |
clickhouse.s3.get_request.throttled.time (gauge) | Total time a query was sleeping to conform S3 GET and SELECT request throttling. Shown as microsecond |
clickhouse.s3.get_request.throttled.total (gauge) | Number of S3 GET and SELECT requests passed through throttler. |
clickhouse.s3.head_object.count (count) | Number of S3 API HeadObject calls. |
clickhouse.s3.head_object.total (gauge) | Number of S3 API HeadObject calls. |
clickhouse.s3.list_object.count (count) | Number of S3 API ListObjects calls. |
clickhouse.s3.list_object.total (gauge) | Number of S3 API ListObjects calls. |
clickhouse.s3.lock_localfile_status.time (gauge) | Time spent to lock local file statuses Shown as microsecond |
clickhouse.s3.put_object.count (count) | Number of S3 API PutObject calls. |
clickhouse.s3.put_object.total (gauge) | Number of S3 API PutObject calls. |
clickhouse.s3.put_request.throttled.count (count) | Number of S3 PUT, COPY, POST and LIST requests passed through throttler. |
clickhouse.s3.put_request.throttled.time (gauge) | Total time a query was sleeping to conform S3 PUT, COPY, POST and LIST request throttling. Shown as microsecond |
clickhouse.s3.put_request.throttled.total (gauge) | Number of S3 PUT, COPY, POST and LIST requests passed through throttler. |
clickhouse.s3.read.bytes.count (count) | Read bytes (incoming) in GET and HEAD requests to S3 storage. Shown as byte |
clickhouse.s3.read.bytes.total (gauge) | Total read bytes (incoming) in GET and HEAD requests to S3 storage. Shown as byte |
clickhouse.s3.read.errors.count (count) | Number of exceptions while reading from S3. |
clickhouse.s3.read.errors.total (gauge) | Number of exceptions while reading from S3. |
clickhouse.s3.read.file.time (gauge) | Time spent to read file data Shown as microsecond |
clickhouse.s3.read.requests.count (count) | Number of GET and HEAD requests to S3 storage. Shown as request |
clickhouse.s3.read.requests.errors.count (count) | Number of non-throttling errors in GET and HEAD requests to S3 storage. Shown as error |
clickhouse.s3.read.requests.errors.total (gauge) | Total number of non-throttling errors in GET and HEAD requests to S3 storage. Shown as error |
clickhouse.s3.read.requests.redirects.count (count) | Number of redirects in GET and HEAD requests to S3 storage. Shown as unit |
clickhouse.s3.read.requests.redirects.total (gauge) | Total number of redirects in GET and HEAD requests to S3 storage. Shown as unit |
clickhouse.s3.read.requests.throttling.count (count) | Number of 429 and 503 errors in GET and HEAD requests to S3 storage. Shown as error |
clickhouse.s3.read.requests.throttling.total (gauge) | Total number of 429 and 503 errors in GET and HEAD requests to S3 storage. Shown as error |
clickhouse.s3.read.requests.total (gauge) | Total number of GET and HEAD requests to S3 storage. Shown as request |
clickhouse.s3.read.reset.count (count) | Number of HTTP sessions that were reset in ReadBufferFromS3. |
clickhouse.s3.read.reset.total (gauge) | Number of HTTP sessions that were reset in ReadBufferFromS3. |
clickhouse.s3.read.sessions.preserved..count (count) | Number of HTTP sessions that were preserved in ReadBufferFromS3. |
clickhouse.s3.read.sessions.preserved..total (gauge) | Number of HTTP sessions that were preserved in ReadBufferFromS3. |
clickhouse.s3.read.size.count (count) | Bytes read from S3. |
clickhouse.s3.read.size.total (gauge) | Bytes read from S3. |
clickhouse.s3.read.time (gauge) | Time spent on reading from S3. Shown as microsecond |
clickhouse.s3.requests.count (gauge) | S3 requests count Shown as request |
clickhouse.s3.set.file.failed.time (gauge) | Time spent to set file as failed Shown as microsecond |
clickhouse.s3.set.file.processed.time (gauge) | Time spent to set file as processed Shown as microsecond |
clickhouse.s3.set.file.processing.time (gauge) | Time spent to set file as processing Shown as microsecond |
clickhouse.s3.set_file.failed.time (gauge) | Time spent to set file as failed Shown as microsecond |
clickhouse.s3.upload_part.count (count) | Number of S3 API UploadPart calls. |
clickhouse.s3.upload_part.total (gauge) | Number of S3 API UploadPart calls. |
clickhouse.s3.upload_part_copy.count (count) | Number of S3 API UploadPartCopy calls. |
clickhouse.s3.upload_part_copy.total (gauge) | Number of S3 API UploadPartCopy calls. |
clickhouse.s3.write.bytes.count (count) | Write bytes (outgoing) in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as byte |
clickhouse.s3.write.bytes.total (gauge) | Total write bytes (outgoing) in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as byte |
clickhouse.s3.write.errors.count (count) | Number of exceptions while writing to S3. |
clickhouse.s3.write.errors.total (gauge) | Number of exceptions while writing to S3. |
clickhouse.s3.write.requests.count (count) | Number of POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.errors.count (count) | Number of non-throttling errors in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.errors.total (gauge) | Total number of non-throttling errors in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.redirects.count (count) | Number of redirects in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.redirects.total (gauge) | Total number of redirects in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.throttling.count (count) | Number of 429 and 503 errors in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.throttling.total (gauge) | Total number of 429 and 503 errors in POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.requests.total (gauge) | Total number of POST, DELETE, PUT and PATCH requests to S3 storage. Shown as request |
clickhouse.s3.write.size.count (count) | Bytes written to S3. |
clickhouse.s3.write.size.total (gauge) | Bytes written to S3. |
clickhouse.s3.write.time (gauge) | Time spent on writing to S3. Shown as microsecond |
clickhouse.s3.write.wait.time (gauge) | Time spent waiting while some of the current requests complete, once their number has reached the limit defined by s3_max_inflight_parts_for_one_file. Shown as microsecond |
clickhouse.select.query.select.failed.count (count) | Same as FailedQuery, but only for SELECT queries. Shown as query |
clickhouse.select.query.select.failed.total (gauge) | Same as FailedQuery, but only for SELECT queries. Shown as query |
clickhouse.selected.bytes.count (count) | Number of bytes (uncompressed; for columns as they are stored in memory) SELECTed from all tables. Shown as byte |
clickhouse.selected.bytes.total (gauge) | Total number of bytes (uncompressed; for columns as they are stored in memory) SELECTed from all tables. Shown as byte |
clickhouse.selected.rows.count (count) | Number of rows SELECTed from all tables. Shown as row |
clickhouse.selected.rows.total (gauge) | Total number of rows SELECTed from all tables. Shown as row |
clickhouse.server.startup.time (gauge) | Time elapsed from starting the server to listening to sockets, in milliseconds. Shown as millisecond |
clickhouse.sessions_pool.storage.active (gauge) | Total count of all sessions: stored in the pool and actively used right now for storages |
clickhouse.sessions_pool.storage.total (gauge) | Total count of sessions stored in the session pool for storages |
clickhouse.shard.send_query.suspend.count (count) | Total count of times sending a query to a shard was suspended when async_query_sending_for_remote is enabled. |
clickhouse.shard.send_query.suspend.total (gauge) | Total count of times sending a query to a shard was suspended when async_query_sending_for_remote is enabled. |
clickhouse.shared_merge_tree.fetches.total (gauge) | Number of fetches in progress |
clickhouse.shell_command.executions.count (count) | Number of shell command executions. |
clickhouse.shell_command.executions.total (gauge) | Number of shell command executions. |
clickhouse.sleep_function.sleep.time (gauge) | Time spent sleeping in a sleep function (sleep, sleepEachRow). Shown as microsecond |
clickhouse.sqe.io_uring.inflight (gauge) | Number of io_uring SQEs in flight |
clickhouse.sqe.io_uring.waiting (gauge) | Number of io_uring SQEs waiting to be submitted |
clickhouse.sql.ordinary.function.calls.count (count) | Number of SQL ordinary function calls (SQL functions are called on per-block basis, so this number represents the number of blocks). Shown as block |
clickhouse.sql.ordinary.function.calls.total (gauge) | Number of SQL ordinary function calls (SQL functions are called on per-block basis, so this number represents the number of blocks). Shown as block |
clickhouse.storage.buffer.flush.count (count) | Number of times a buffer in a 'Buffer' table was flushed. |
clickhouse.storage.buffer.flush.total (gauge) | Number of times a buffer in a 'Buffer' table was flushed. |
clickhouse.storage.buffer.flush_error.count (count) | Number of times a buffer in the 'Buffer' table could not be flushed due to an error writing to the destination table. |
clickhouse.storage.buffer.flush_error.total (gauge) | Number of times a buffer in the 'Buffer' table could not be flushed due to an error writing to the destination table. |
clickhouse.storage.connection.create.error.count (count) | Number of cases when creation of a connection for storage failed |
clickhouse.storage.connection.create.error.total (gauge) | Number of cases when creation of a connection for storage failed |
clickhouse.storage.connection.create.expired.count (count) | Number of expired connections for storages |
clickhouse.storage.connection.create.expired.total (gauge) | Number of expired connections for storages |
clickhouse.storage.connection.created.count (count) | Number of created connections for storages |
clickhouse.storage.connection.created.time (gauge) | Total time spent on creating connections for storages Shown as microsecond |
clickhouse.storage.connection.created.total (gauge) | Number of created connections for storages |
clickhouse.storage.connection.preserved.count (count) | Number of preserved connections for storages |
clickhouse.storage.connection.preserved.total (gauge) | Number of preserved connections for storages |
clickhouse.storage.connection.reused.count (count) | Number of reused connections for storages |
clickhouse.storage.connection.reused.total (gauge) | Number of reused connections for storages |
clickhouse.storeage.connection.reset.count (count) | Number of reset connections for storages |
clickhouse.storeage.connection.reset.total (gauge) | Number of reset connections for storages |
clickhouse.subquery.scalar.read.cache.miss.count (count) | Number of times a read from a scalar subquery was not cached and had to be calculated completely |
clickhouse.subquery.scalar.read.cache.miss.total (gauge) | Number of times a read from a scalar subquery was not cached and had to be calculated completely |
clickhouse.syscall.directory.sync.count (count) | Number of times the F_FULLFSYNC/fsync/fdatasync function was called for directories. |
clickhouse.syscall.directory.sync.time (gauge) | Total time spent waiting for F_FULLFSYNC/fsync/fdatasync syscall for directories. Shown as microsecond |
clickhouse.syscall.directory.sync.total (gauge) | Number of times the F_FULLFSYNC/fsync/fdatasync function was called for directories. |
clickhouse.syscall.read (gauge) | The number of read (read, pread, io_getevents, etc.) syscalls in flight. Shown as read |
clickhouse.syscall.read.wait (gauge) | The percentage of time spent waiting for read syscall during the last interval. This includes reads from page cache. Shown as percent |
clickhouse.syscall.write (gauge) | The number of write (write, pwrite, io_getevents, etc.) syscalls in flight. Shown as write |
clickhouse.syscall.write.wait (gauge) | The percentage of time spent waiting for write syscall during the last interval. This includes writes to page cache. Shown as percent |
clickhouse.table.buffer.row (gauge) | The number of rows in buffers of Buffer tables. Shown as row |
clickhouse.table.buffer.size (gauge) | Size of buffers of Buffer tables. Shown as byte |
clickhouse.table.distributed.bytes.insert.broken (gauge) | Number of bytes for asynchronous insertion into Distributed tables that have been marked as broken. Number of bytes for every shard is summed. |
clickhouse.table.distributed.bytes.insert.pending (gauge) | Number of pending bytes to process for asynchronous insertion into Distributed tables. Number of bytes for every shard is summed. |
clickhouse.table.distributed.connection.inserted (gauge) | The number of connections to remote servers sending data that was INSERTed into Distributed tables. Both synchronous and asynchronous mode. Shown as connection |
clickhouse.table.distributed.file.insert.broken (gauge) | Number of files for asynchronous insertion into Distributed tables that have been marked as broken. This metric starts from 0 on server start. Number of files for every shard is summed. Shown as file |
clickhouse.table.distributed.file.insert.pending (gauge) | The number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed. Shown as file |
clickhouse.table.function.count (count) | Number of table function calls. |
clickhouse.table.function.total (gauge) | Number of table function calls. |
clickhouse.table.insert.row.count (count) | The number of rows INSERTed to all tables during the last interval. Shown as row |
clickhouse.table.insert.row.total (gauge) | The total number of rows INSERTed to all tables. Shown as row |
clickhouse.table.insert.size.count (count) | The number of bytes (uncompressed; for columns as they are stored in memory) INSERTed to all tables during the last interval. Shown as byte |
clickhouse.table.insert.size.total (gauge) | The total number of bytes (uncompressed; for columns as they are stored in memory) INSERTed to all tables. Shown as byte |
clickhouse.table.mergetree.announcements.sent.time (gauge) | Time spent in sending the announcement from the remote server to the initiator server about the set of data parts (for MergeTree tables). Measured on the remote server side. Shown as microsecond |
clickhouse.table.mergetree.calculating.projections.time (gauge) | Time spent calculating projections Shown as microsecond |
clickhouse.table.mergetree.calculating.skip_indices.time (gauge) | Time spent calculating skip indices Shown as microsecond |
clickhouse.table.mergetree.calculating.sorting.time (gauge) | Time spent sorting blocks Shown as microsecond |
clickhouse.table.mergetree.calculating.statistics.time (gauge) | Time spent calculating statistics Shown as microsecond |
clickhouse.table.mergetree.insert.block.already_sorted.count (count) | The number of blocks INSERTed to MergeTree tables that appeared to be already sorted during the last interval. Shown as block |
clickhouse.table.mergetree.insert.block.already_sorted.projection.total (gauge) | Total number of blocks INSERTed to MergeTree tables projection that appeared to be already sorted. Shown as block |
clickhouse.table.mergetree.insert.block.already_sorted.total (gauge) | The total number of blocks INSERTed to MergeTree tables that appeared to be already sorted. Shown as block |
clickhouse.table.mergetree.insert.block.count (count) | The number of blocks INSERTed to MergeTree tables during the last interval. Each block forms a data part of level zero. Shown as block |
clickhouse.table.mergetree.insert.block.projection.count (count) | Number of blocks INSERTed to MergeTree tables projection. Each block forms a data part of level zero. Shown as block |
clickhouse.table.mergetree.insert.block.projection.total (gauge) | Total number of blocks INSERTed to MergeTree tables projection. Each block forms a data part of level zero. Shown as block |
clickhouse.table.mergetree.insert.block.rejected.count (count) | The number of times the INSERT of a block to a MergeTree table was rejected with a 'Too many parts' exception due to a high number of active data parts for the partition during the last interval. Shown as block |
clickhouse.table.mergetree.insert.block.rejected.total (gauge) | The total number of times the INSERT of a block to a MergeTree table was rejected with a 'Too many parts' exception due to a high number of active data parts for the partition. Shown as block |
clickhouse.table.mergetree.insert.block.size.compressed.projection.count (count) | Number of blocks INSERTed to MergeTree tables projection that appeared to be already sorted. Shown as block |
clickhouse.table.mergetree.insert.block.total (gauge) | The total number of blocks INSERTed to MergeTree tables. Each block forms a data part of level zero. Shown as block |
clickhouse.table.mergetree.insert.delayed.count (count) | The number of times the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition during the last interval. Shown as throttle |
clickhouse.table.mergetree.insert.delayed.time (gauge) | The percentage of time spent while the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition during the last interval. Shown as percent |
clickhouse.table.mergetree.insert.delayed.total (gauge) | The total number of times the INSERT of a block to a MergeTree table was throttled due to high number of active data parts for partition. Shown as throttle |
clickhouse.table.mergetree.insert.row.count (count) | The number of rows INSERTed to MergeTree tables during the last interval. Shown as row |
clickhouse.table.mergetree.insert.row.total (gauge) | The total number of rows INSERTed to MergeTree tables. Shown as row |
clickhouse.table.mergetree.insert.write.row.projection.count (count) | Number of rows INSERTed to MergeTree tables projection. Shown as row |
clickhouse.table.mergetree.insert.write.row.projection.total (gauge) | Total number of rows INSERTed to MergeTree tables projection. Shown as row |
clickhouse.table.mergetree.insert.write.size.compressed.count (count) | The number of bytes written to filesystem for data INSERTed to MergeTree tables during the last interval. Shown as byte |
clickhouse.table.mergetree.insert.write.size.compressed.total (gauge) | The total number of bytes written to filesystem for data INSERTed to MergeTree tables. Shown as byte |
clickhouse.table.mergetree.insert.write.size.uncompressed.count (count) | The number of uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables during the last interval. Shown as byte |
clickhouse.table.mergetree.insert.write.size.uncompressed.projection.count (count) | Uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables projection. Shown as byte |
clickhouse.table.mergetree.insert.write.size.uncompressed.projection.total (gauge) | Total uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables projection. Shown as byte |
clickhouse.table.mergetree.insert.write.size.uncompressed.total (gauge) | The total number of uncompressed bytes (for columns as they are stored in memory) INSERTed to MergeTree tables. Shown as byte |
clickhouse.table.mergetree.mark.selected.count (count) | The number of marks (index granules) selected to read from a MergeTree table during the last interval. Shown as index |
clickhouse.table.mergetree.mark.selected.total (gauge) | The total number of marks (index granules) selected to read from a MergeTree table. Shown as index |
clickhouse.table.mergetree.merging.blocks.time (gauge) | Time spent merging input blocks (for special MergeTree engines) Shown as microsecond |
clickhouse.table.mergetree.merging.projection.time (gauge) | Time spent merging blocks Shown as microsecond |
clickhouse.table.mergetree.mutation.delayed.count (count) | Number of times the mutation of a MergeTree table was throttled due to high number of unfinished mutations for table. |
clickhouse.table.mergetree.mutation.delayed.total (gauge) | Number of times the mutation of a MergeTree table was throttled due to high number of unfinished mutations for table. |
clickhouse.table.mergetree.mutation.rejected.count (count) | Number of times the mutation of a MergeTree table was rejected with 'Too many mutations' exception due to high number of unfinished mutations for table. |
clickhouse.table.mergetree.mutation.rejected.total (gauge) | Number of times the mutation of a MergeTree table was rejected with 'Too many mutations' exception due to high number of unfinished mutations for table. |
clickhouse.table.mergetree.part.current (gauge) | The total number of data parts of a MergeTree table. Shown as object |
clickhouse.table.mergetree.part.selected.count (count) | The number of data parts selected to read from a MergeTree table during the last interval. Shown as item |
clickhouse.table.mergetree.part.selected.total (gauge) | The total number of data parts selected to read from a MergeTree table. Shown as item |
clickhouse.table.mergetree.partslock.hold.time (gauge) | Total time spent holding data parts lock in MergeTree tables Shown as microsecond |
clickhouse.table.mergetree.partslock.wait.time (gauge) | Total time spent waiting for data parts lock in MergeTree tables Shown as microsecond |
clickhouse.table.mergetree.prefetched_read_pool.tasks.time (gauge) | Time spent preparing tasks in MergeTreePrefetchedReadPool Shown as microsecond |
clickhouse.table.mergetree.range.selected.count (count) | The number of non-adjacent ranges in all data parts selected to read from a MergeTree table during the last interval. Shown as item |
clickhouse.table.mergetree.range.selected.total (gauge) | The total number of non-adjacent ranges in all data parts selected to read from a MergeTree table. Shown as item |
clickhouse.table.mergetree.read_task_requests.sent.time (gauge) | Time spent in callbacks requested from the remote server back to the initiator server to choose the read task (for MergeTree tables). Measured on the remote server side. Shown as microsecond |
clickhouse.table.mergetree.replicated.fetch.merged.count (count) | The number of times ClickHouse prefers to download already merged part from replica of ReplicatedMergeTree table instead of performing a merge itself (usually it prefers doing a merge itself to save network traffic) during the last interval. This happens when ClickHouse does not have all source parts to perform a merge or when the data part is old enough. Shown as fetch |
clickhouse.table.mergetree.replicated.fetch.merged.total (gauge) | The total number of times ClickHouse prefers to download already merged part from replica of ReplicatedMergeTree table instead of performing a merge itself (usually it prefers doing a merge itself to save network traffic). This happens when ClickHouse does not have all source parts to perform a merge or when the data part is old enough. Shown as fetch |
clickhouse.table.mergetree.replicated.fetch.replica.count (count) | The number of times a data part was downloaded from replica of a ReplicatedMergeTree table during the last interval. Shown as fetch |
clickhouse.table.mergetree.replicated.fetch.replica.fail.count (count) | The number of times a data part failed to download from a replica of a ReplicatedMergeTree table during the last interval. Shown as byte |
clickhouse.table.mergetree.replicated.fetch.replica.fail.total (gauge) | The total number of times a data part failed to download from a replica of a ReplicatedMergeTree table. Shown as byte |
clickhouse.table.mergetree.replicated.fetch.replica.total (gauge) | The total number of times a data part was downloaded from replica of a ReplicatedMergeTree table. Shown as fetch |
clickhouse.table.mergetree.replicated.insert.deduplicate.count (count) | The number of times the INSERTed block to a ReplicatedMergeTree table was deduplicated during the last interval. Shown as operation |
clickhouse.table.mergetree.replicated.insert.deduplicate.total (gauge) | The total number of times the INSERTed block to a ReplicatedMergeTree table was deduplicated. Shown as operation |
clickhouse.table.mergetree.replicated.leader.elected.count (count) | The number of times a ReplicatedMergeTree table became a leader during the last interval. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks. Shown as event |
clickhouse.table.mergetree.replicated.leader.elected.total (gauge) | The total number of times a ReplicatedMergeTree table became a leader. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks. Shown as event |
clickhouse.table.mergetree.replicated.merge.count (count) | The number of times data parts of ReplicatedMergeTree tables were successfully merged during the last interval. Shown as byte |
clickhouse.table.mergetree.replicated.merge.total (gauge) | The total number of times data parts of ReplicatedMergeTree tables were successfully merged. Shown as byte |
clickhouse.table.mergetree.replicated.mutated.count (count) | Number of times data parts of ReplicatedMergeTree tables were successfully mutated. |
clickhouse.table.mergetree.replicated.mutated.total (gauge) | Number of times data parts of ReplicatedMergeTree tables were successfully mutated. |
clickhouse.table.mergetree.row.current (gauge) | The total number of rows in a MergeTree table. Shown as row |
clickhouse.table.mergetree.size (gauge) | The total size of all data part files of a MergeTree table. Shown as byte |
clickhouse.table.mergetree.sorting.projection.time (gauge) | Time spent sorting blocks (for projection it might be a key different from table's sorting key) Shown as microsecond |
clickhouse.table.mergetree.storage.mark.cache (gauge) | The size of the cache of marks for StorageMergeTree. Shown as byte |
clickhouse.table.replica.change.hedged_requests.count (gauge) | Count when timeout for changing replica expired in hedged requests. Shown as timeout |
clickhouse.table.replica.change.hedged_requests.total (gauge) | Total count when timeout for changing replica expired in hedged requests. Shown as timeout |
clickhouse.table.replica.partial.shutdown.count (count) | How many times Replicated table has to deinitialize its state due to session expiration in ZooKeeper. The state is reinitialized every time when ZooKeeper is available again. |
clickhouse.table.replica.partial.shutdown.total (gauge) | Total times Replicated table has to deinitialize its state due to session expiration in ZooKeeper. The state is reinitialized every time when ZooKeeper is available again. |
clickhouse.table.replicated.active (gauge) | The number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas). Shown as table |
clickhouse.table.replicated.leader (gauge) | The number of Replicated tables that are leaders. Leader replica is responsible for assigning merges, cleaning old blocks for deduplications and a few more bookkeeping tasks. There may be no more than one leader across all replicas at any moment in time. If there is no leader, one will be elected soon, or it indicates an issue. Shown as table |
clickhouse.table.replicated.leader.yield.count (count) | The number of times Replicated table yielded its leadership due to large replication lag relative to other replicas during the last interval. Shown as event |
clickhouse.table.replicated.leader.yield.total (gauge) | The total number of times Replicated table yielded its leadership due to large replication lag relative to other replicas. Shown as event |
clickhouse.table.replicated.log.max (gauge) | Maximum entry number in the log of general activity. Shown as item |
clickhouse.table.replicated.log.pointer (gauge) | Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. If this is much smaller than clickhouse.table.replicated.log.max, something is wrong. Shown as item |
clickhouse.table.replicated.part.check (gauge) | The number of data parts checking for consistency Shown as item |
clickhouse.table.replicated.part.check.count (count) | The number of data parts checking for consistency Shown as item |
clickhouse.table.replicated.part.check.failed.count (count) | Number of times the advanced search for a data part on replicas did not give result or when unexpected part has been found and moved away. |
clickhouse.table.replicated.part.check.failed.total (gauge) | Number of times the advanced search for a data part on replicas did not give result or when unexpected part has been found and moved away. |
clickhouse.table.replicated.part.check.total (gauge) | The number of data parts checking for consistency Shown as item |
clickhouse.table.replicated.part.fetch (gauge) | The number of data parts being fetched from replica Shown as item |
clickhouse.table.replicated.part.future (gauge) | The number of data parts that will appear as the result of INSERTs or merges that haven't been done yet. Shown as item |
clickhouse.table.replicated.part.loss.count (count) | The number of times a data part that we wanted doesn't exist on any replica (even on replicas that are offline right now) during the last interval. Those data parts are definitely lost. This is normal due to asynchronous replication (if quorum inserts were not enabled): the replica on which the data part was written failed, and when it came back online it no longer contained that data part. Shown as item |
clickhouse.table.replicated.part.loss.total (gauge) | The total number of times a data part that we wanted doesn't exist on any replica (even on replicas that are offline right now). Those data parts are definitely lost. This is normal due to asynchronous replication (if quorum inserts were not enabled): the replica on which the data part was written failed, and when it came back online it no longer contained that data part. Shown as item |
clickhouse.table.replicated.part.send (gauge) | The number of data parts being sent to replicas Shown as item |
clickhouse.table.replicated.part.suspect (gauge) | The number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged. Shown as item |
clickhouse.table.replicated.queue.insert (gauge) | The number of inserts of blocks of data that need to be made. Insertions are usually replicated fairly quickly. If this number is large, it means something is wrong. Shown as operation |
clickhouse.table.replicated.queue.merge (gauge) | The number of merges waiting to be made. Sometimes merges are lengthy, so this value may be greater than zero for a long time. Shown as merge |
clickhouse.table.replicated.queue.size (gauge) | Size of the queue for operations waiting to be performed. Operations include inserting blocks of data, merges, and certain other actions. It usually coincides with clickhouse.table.replicated.part.future. Shown as operation |
clickhouse.table.replicated.readonly (gauge) | The number of Replicated tables that are currently in readonly state due to re-initialization after ZooKeeper session loss or due to startup without ZooKeeper configured. Shown as table |
clickhouse.table.replicated.total (gauge) | The total number of known replicas of this table. Shown as table |
clickhouse.table.replicated.version (gauge) | Version number of the table structure indicating how many times ALTER was performed. If replicas have different versions, it means some replicas haven't made all of the ALTERs yet. Shown as operation |
clickhouse.table.total (gauge) | The current number of tables. Shown as table |
clickhouse.table_engines.files.read.count (count) | Number of files read in table engines working with files (like File/S3/URL/HDFS). |
clickhouse.table_engines.files.read.total (gauge) | Number of files read in table engines working with files (like File/S3/URL/HDFS). |
clickhouse.tables_to_drop.queue.total (gauge) | Number of dropped tables that are waiting for background data removal. Shown as table |
clickhouse.task.mutate.calculate.projections.time (gauge) | Time spent calculating projections Shown as microsecond |
clickhouse.task.prefetch.reader.wait.time (gauge) | Time spent waiting for prefetched reader Shown as microsecond |
clickhouse.task.read.requests.received.count (count) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the initiator server side. |
clickhouse.task.read.requests.received.total (gauge) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the initiator server side. |
clickhouse.task.read.requests.sent.count (count) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side. |
clickhouse.task.read.requests.sent.time (gauge) | Time spent in callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side. Shown as microsecond |
clickhouse.task.read.requests.sent.total (gauge) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side. |
clickhouse.task.requests.callback (gauge) | The number of callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side. |
clickhouse.task.thread_pool_reader.cache.time (gauge) | How much time we spent checking if content is cached Shown as microsecond |
clickhouse.task.thread_pool_reader.read.count (count) | Bytes read from a threadpool task in asynchronous reading |
clickhouse.task.thread_pool_reader.read.size.count (count) | Bytes read from a threadpool task in asynchronous reading |
clickhouse.task.thread_pool_reader.read.size.total (gauge) | Bytes read from a threadpool task in asynchronous reading |
clickhouse.task.thread_pool_reader.read.sync.time (gauge) | How much time we spent reading synchronously Shown as microsecond |
clickhouse.task.thread_pool_reader.read.time (gauge) | Time spent getting the data in asynchronous reading Shown as microsecond |
clickhouse.task.thread_pool_reader.read.total (gauge) | Bytes read from a threadpool task in asynchronous reading |
clickhouse.tasks.background.loading_marks.count (count) | Number of background tasks for loading marks |
clickhouse.tasks.background.loading_marks.total (gauge) | Number of background tasks for loading marks |
clickhouse.temporary_files.aggregation.total (gauge) | Number of temporary files created for external aggregation |
clickhouse.temporary_files.join.total (gauge) | Number of temporary files created for JOIN |
clickhouse.temporary_files.sort.total (gauge) | Number of temporary files created for external sorting |
clickhouse.temporary_files.total (gauge) | Number of temporary files created |
clickhouse.temporary_files.unknown.total (gauge) | Number of temporary files created without known purpose |
clickhouse.thread.cpu.wait (gauge) | The percentage of time a thread was ready for execution but waiting to be scheduled by OS (from the OS point of view) during the last interval. Shown as percent |
clickhouse.thread.global.active (gauge) | The number of threads in global thread pool running a task. Shown as thread |
clickhouse.thread.global.scheduled (gauge) | Number of queued or active jobs in global thread pool. |
clickhouse.thread.global.total (gauge) | The number of threads in global thread pool. Shown as thread |
clickhouse.thread.io.wait (gauge) | The percentage of time a thread spent waiting for a result of IO operation (from the OS point of view) during the last interval. This is real IO that doesn't include page cache. Shown as percent |
clickhouse.thread.local.active (gauge) | The number of threads in local thread pools running a task. Shown as thread |
clickhouse.thread.local.scheduled (gauge) | Number of queued or active jobs in local thread pools. |
clickhouse.thread.local.total (gauge) | The number of threads in local thread pools. Should be similar to GlobalThreadActive. Shown as thread |
clickhouse.thread.lock.context.waiting (gauge) | The number of threads waiting for lock in Context. This is global lock. Shown as thread |
clickhouse.thread.lock.rw.active.read (gauge) | The number of threads holding read lock in a table RWLock. Shown as thread |
clickhouse.thread.lock.rw.active.write (gauge) | The number of threads holding write lock in a table RWLock. Shown as thread |
clickhouse.thread.lock.rw.waiting.read (gauge) | The number of threads waiting for read on a table RWLock. Shown as thread |
clickhouse.thread.lock.rw.waiting.write (gauge) | The number of threads waiting for write on a table RWLock. Shown as thread |
clickhouse.thread.process_time (gauge) | The percentage of time spent processing (queries and other tasks) threads during the last interval. Shown as percent |
clickhouse.thread.query (gauge) | The number of query processing threads Shown as thread |
clickhouse.thread.system.process_time (gauge) | The percentage of time spent processing (queries and other tasks) threads executing CPU instructions in OS kernel space during the last interval. This includes time CPU pipeline was stalled due to cache misses, branch mispredictions, hyper-threading, etc. Shown as percent |
clickhouse.thread.user.process_time (gauge) | The percentage of time spent processing (queries and other tasks) threads executing CPU instructions in user space during the last interval. This includes time CPU pipeline was stalled due to cache misses, branch mispredictions, hyper-threading, etc. Shown as percent |
clickhouse.threads.async.disk_object_storage.active (gauge) | Obsolete metric, shows nothing. |
clickhouse.threads.async.disk_object_storage.total (gauge) | Obsolete metric, shows nothing. |
clickhouse.threads.async.read (gauge) | Number of threads waiting for asynchronous read. Shown as thread |
clickhouse.threads.azure_object_storage.active (gauge) | Number of threads in the AzureObjectStorage thread pool running a task. |
clickhouse.threads.azure_object_storage.scheduled (gauge) | Number of queued or active jobs in the AzureObjectStorage thread pool. |
clickhouse.threads.azure_object_storage.total (gauge) | Number of threads in the AzureObjectStorage thread pool. |
clickhouse.threads.database_catalog.active (gauge) | Number of threads in the DatabaseCatalog thread pool running a task. |
clickhouse.threads.database_catalog.scheduled (gauge) | Number of queued or active jobs in the DatabaseCatalog thread pool. |
clickhouse.threads.database_catalog.total (gauge) | Number of threads in the DatabaseCatalog thread pool. |
clickhouse.threads.database_ondisk.active (gauge) | Number of threads in the DatabaseOnDisk thread pool running a task. |
clickhouse.threads.database_ondisk.scheduled (gauge) | Number of queued or active jobs in the DatabaseOnDisk thread pool. |
clickhouse.threads.database_ondisk.total (gauge) | Number of threads in the DatabaseOnDisk thread pool. |
clickhouse.threads.database_replicated.active (gauge) | Number of active threads in the threadpool for table creation in DatabaseReplicated. |
clickhouse.threads.database_replicated.scheduled (gauge) | Number of queued or active jobs in the threadpool for table creation in DatabaseReplicated. |
clickhouse.threads.database_replicated.total (gauge) | Number of threads in the threadpool for table creation in DatabaseReplicated. |
clickhouse.threads.ddl_worker.active (gauge) | Number of threads in the DDLWorker thread pool for ON CLUSTER queries running a task. |
clickhouse.threads.ddl_worker.scheduled (gauge) | Number of queued or active jobs in the DDLWorker thread pool for ON CLUSTER queries. |
clickhouse.threads.ddl_worker.total (gauge) | Number of threads in the DDLWorker thread pool for ON CLUSTER queries. |
clickhouse.threads.destroy_aggregates.active (gauge) | Number of threads in the thread pool for destroy aggregate states running a task. |
clickhouse.threads.destroy_aggregates.scheduled (gauge) | Number of queued or active jobs in the thread pool for destroy aggregate states. |
clickhouse.threads.destroy_aggregates.total (gauge) | Number of threads in the thread pool for destroy aggregate states. |
clickhouse.threads.distribured.insert.active (gauge) | Number of threads used for INSERT into Distributed running a task. |
clickhouse.threads.distribured.insert.scheduled (gauge) | Number of queued or active jobs used for INSERT into Distributed. |
clickhouse.threads.distribured.insert.total (gauge) | Number of threads used for INSERT into Distributed. |
clickhouse.threads.dwarf.active (gauge) | Number of threads in the DWARFBlockInputFormat thread pool running a task. |
clickhouse.threads.dwarf.scheduled (gauge) | Number of queued or active jobs in the DWARFBlockInputFormat thread pool. |
clickhouse.threads.dwarf.total (gauge) | Number of threads in the DWARFBlockInputFormat thread pool. |
clickhouse.threads.hashed_dictionary.active (gauge) | Number of threads in the HashedDictionary thread pool running a task. |
clickhouse.threads.hashed_dictionary.scheduled (gauge) | Number of queued or active jobs in the HashedDictionary thread pool. |
clickhouse.threads.hashed_dictionary.total (gauge) | Number of threads in the HashedDictionary thread pool. |
clickhouse.threads.idisk.copier.active (gauge) | Number of threads for copying data between disks of different types running a task. |
clickhouse.threads.idisk.copier.scheduled (gauge) | Number of queued or active jobs for copying data between disks of different types. |
clickhouse.threads.idisk.copier.total (gauge) | Number of threads for copying data between disks of different types. |
clickhouse.threads.in_overcommit_tracker.total (gauge) | Number of waiting threads inside of OvercommitTracker |
clickhouse.threads.io.active (gauge) | Number of threads in the IO thread pool running a task. |
clickhouse.threads.io.scheduled (gauge) | Number of queued or active jobs in the IO thread pool. |
clickhouse.threads.io.total (gauge) | Number of threads in the IO thread pool. |
clickhouse.threads.io_prefetch.active (gauge) | Number of threads in the IO prefetch thread pool running a task. |
clickhouse.threads.io_prefetch.scheduled (gauge) | Number of queued or active jobs in the IO prefetch thread pool. |
clickhouse.threads.io_prefetch.total (gauge) | Number of threads in the IO prefetch thread pool. |
clickhouse.threads.io_writer.active (gauge) | Number of threads in the IO writer thread pool running a task. |
clickhouse.threads.io_writer.scheduled (gauge) | Number of queued or active jobs in the IO writer thread pool. |
clickhouse.threads.io_writer.total (gauge) | Number of threads in the IO writer thread pool. |
clickhouse.threads.librdkafka.active (gauge) | Number of active librdkafka threads Shown as thread |
clickhouse.threads.marks_loader.active (gauge) | Number of threads in the thread pool for loading marks running a task. |
clickhouse.threads.marks_loader.scheduled (gauge) | Number of queued or active jobs in the thread pool for loading marks. |
clickhouse.threads.marks_loader.total (gauge) | Number of threads in thread pool for loading marks. |
clickhouse.threads.merge_tree_background_executor.active (gauge) | Number of threads in the MergeTreeBackgroundExecutor thread pool running a task. |
clickhouse.threads.merge_tree_background_executor.scheduled (gauge) | Number of queued or active jobs in the MergeTreeBackgroundExecutor thread pool. |
clickhouse.threads.merge_tree_background_executor.total (gauge) | Number of threads in the MergeTreeBackgroundExecutor thread pool. |
clickhouse.threads.merge_tree_data_selector_executor.active (gauge) | Number of threads in the MergeTreeDataSelectExecutor thread pool running a task. |
clickhouse.threads.merge_tree_data_selector_executor.scheduled (gauge) | Number of queued or active jobs in the MergeTreeDataSelectExecutor thread pool. |
clickhouse.threads.merge_tree_data_selector_executor.total (gauge) | Number of threads in the MergeTreeDataSelectExecutor thread pool. |
clickhouse.threads.merge_tree_outdated_parts_loader.active (gauge) | Number of active threads in the threadpool for loading Outdated data parts. |
clickhouse.threads.merge_tree_outdated_parts_loader.scheduled (gauge) | Number of queued or active jobs in the threadpool for loading Outdated data parts. |
clickhouse.threads.merge_tree_outdated_parts_loader.total (gauge) | Number of threads in the threadpool for loading Outdated data parts. |
clickhouse.threads.merge_tree_parts_cleaner.active (gauge) | Number of threads in the MergeTree parts cleaner thread pool running a task. |
clickhouse.threads.merge_tree_parts_cleaner.scheduled (gauge) | Number of queued or active jobs in the MergeTree parts cleaner thread pool. |
clickhouse.threads.merge_tree_parts_cleaner.total (gauge) | Number of threads in the MergeTree parts cleaner thread pool. |
clickhouse.threads.merge_tree_parts_loader.active (gauge) | Number of threads in the MergeTree parts loader thread pool running a task. |
clickhouse.threads.merge_tree_parts_loader.scheduled (gauge) | Number of queued or active jobs in the MergeTree parts loader thread pool. |
clickhouse.threads.merge_tree_parts_loader.total (gauge) | Number of threads in the MergeTree parts loader thread pool. |
clickhouse.threads.outdated_parts_loading.active (gauge) | Number of active threads in the threadpool for loading Outdated data parts. |
clickhouse.threads.outdated_parts_loading.scheduled (gauge) | Number of queued or active jobs in the threadpool for loading Outdated data parts. |
clickhouse.threads.outdated_parts_loading.total (gauge) | Number of threads in the threadpool for loading Outdated data parts. |
clickhouse.threads.parallel_formatting_output.active (gauge) | Number of threads in the ParallelFormattingOutputFormatThreads thread pool running a task. |
clickhouse.threads.parallel_formatting_output.scheduled (gauge) | Number of queued or active jobs in the ParallelFormattingOutputFormatThreads thread pool. |
clickhouse.threads.parallel_formatting_output.total (gauge) | Number of threads in the ParallelFormattingOutputFormatThreads thread pool. |
clickhouse.threads.parallel_parsing_input.active (gauge) | Number of threads in the ParallelParsingInputFormat thread pool running a task. |
clickhouse.threads.parallel_parsing_input.scheduled (gauge) | Number of queued or active jobs in the ParallelParsingInputFormat thread pool. |
clickhouse.threads.parallel_parsing_input.total (gauge) | Number of threads in the ParallelParsingInputFormat thread pool. |
clickhouse.threads.parquet_decoder.active (gauge) | Number of threads in the ParquetBlockInputFormat thread pool running a task. |
clickhouse.threads.parquet_decoder.scheduled (gauge) | Number of queued or active jobs in the ParquetBlockInputFormat thread pool. |
clickhouse.threads.parquet_decoder.total (gauge) | Number of threads in the ParquetBlockInputFormat thread pool. |
clickhouse.threads.parquet_encoder.active (gauge) | Number of threads in ParquetBlockOutputFormat thread pool running a task. |
clickhouse.threads.parquet_encoder.scheduled (gauge) | Number of queued or active jobs in ParquetBlockOutputFormat thread pool. |
clickhouse.threads.parquet_encoder.total (gauge) | Number of threads in ParquetBlockOutputFormat thread pool. |
clickhouse.threads.pool.fs_reader.active (gauge) | Number of threads in the thread pool for local_filesystem_read_method=threadpool running a task. |
clickhouse.threads.pool.fs_reader.scheduled (gauge) | Number of queued or active jobs in the thread pool for local_filesystem_read_method=threadpool. |
clickhouse.threads.pool.fs_reader.total (gauge) | Number of threads in the thread pool for local_filesystem_read_method=threadpool. |
clickhouse.threads.pool.remote_fs_reader.active (gauge) | Number of threads in the thread pool for remote_filesystem_read_method=threadpool running a task. |
clickhouse.threads.pool.remote_fs_reader.scheduled (gauge) | Number of queued or active jobs in the thread pool for remote_filesystem_read_method=threadpool. |
clickhouse.threads.pool.remote_fs_reader.total (gauge) | Number of threads in the thread pool for remote_filesystem_read_method=threadpool. |
clickhouse.threads.query.execution.hard_page_faults.count (count) | The number of hard page faults in query execution threads. High values indicate either that you forgot to turn off swap on your server, or eviction of memory pages of the ClickHouse binary during very high memory pressure, or successful usage of the 'mmap' read method for the tables data. Shown as thread |
clickhouse.threads.query.execution.hard_page_faults.total (gauge) | The number of hard page faults in query execution threads. High values indicate either that you forgot to turn off swap on your server, or eviction of memory pages of the ClickHouse binary during very high memory pressure, or successful usage of the 'mmap' read method for the tables data. Shown as thread |
clickhouse.threads.query.soft_page_faults.count (count) | The number of soft page faults in query execution threads. Soft page fault usually means a miss in the memory allocator cache, which requires a new memory mapping from the OS and subsequent allocation of a page of physical memory. |
clickhouse.threads.query.soft_page_faults.total (gauge) | The number of soft page faults in query execution threads. Soft page fault usually means a miss in the memory allocator cache, which requires a new memory mapping from the OS and subsequent allocation of a page of physical memory. |
clickhouse.threads.query_pipeline_executor.active (gauge) | Number of threads in the PipelineExecutor thread pool running a task. |
clickhouse.threads.query_pipeline_executor.scheduled (gauge) | Number of queued or active jobs in the PipelineExecutor thread pool. |
clickhouse.threads.query_pipeline_executor.total (gauge) | Number of threads in the PipelineExecutor thread pool. |
clickhouse.threads.restart_replica.active (gauge) | Number of threads in the RESTART REPLICA thread pool running a task. |
clickhouse.threads.restart_replica.scheduled (gauge) | Number of queued or active jobs in the RESTART REPLICA thread pool. |
clickhouse.threads.restore.active (gauge) | Number of threads in the thread pool for RESTORE running a task. |
clickhouse.threads.restore.scheduled (gauge) | Number of queued or active jobs for RESTORE. |
clickhouse.threads.restore.total (gauge) | Number of threads in the thread pool for RESTORE. |
clickhouse.threads.s3_object_storage.active (gauge) | Number of threads in the S3ObjectStorage thread pool running a task. |
clickhouse.threads.s3_object_storage.scheduled (gauge) | Number of queued or active jobs in the S3ObjectStorage thread pool. |
clickhouse.threads.s3_object_storage.total (gauge) | Number of threads in the S3ObjectStorage thread pool. |
clickhouse.threads.shared_merge_tree.active (gauge) | Number of threads in the thread pools in internals of SharedMergeTree running a task |
clickhouse.threads.shared_merge_tree.scheduled (gauge) | Number of queued or active threads in the thread pools in internals of SharedMergeTree |
clickhouse.threads.shared_merge_tree.total (gauge) | Number of threads in the thread pools in internals of SharedMergeTree |
clickhouse.threads.startup_system_tables.active (gauge) | Number of threads in the StartupSystemTables thread pool running a task. |
clickhouse.threads.startup_system_tables.scheduled (gauge) | Number of queued or active jobs in the StartupSystemTables thread pool. |
clickhouse.threads.startup_system_tables.total (gauge) | Number of threads in the StartupSystemTables thread pool. |
clickhouse.threads.storage_buffer_flush.active (gauge) | Number of threads for background flushes in StorageBuffer running a task |
clickhouse.threads.storage_buffer_flush.scheduled (gauge) | Number of queued or active threads for background flushes in StorageBuffer |
clickhouse.threads.storage_buffer_flush.total (gauge) | Number of threads for background flushes in StorageBuffer |
clickhouse.threads.storage_distributed.active (gauge) | Number of threads in the StorageDistributed thread pool running a task. |
clickhouse.threads.storage_distributed.scheduled (gauge) | Number of queued or active jobs in the StorageDistributed thread pool. |
clickhouse.threads.storage_distributed.total (gauge) | Number of threads in the StorageDistributed thread pool. |
clickhouse.threads.storage_hive.active (gauge) | Number of threads in the StorageHive thread pool running a task. |
clickhouse.threads.storage_hive.scheduled (gauge) | Number of queued or active jobs in the StorageHive thread pool. |
clickhouse.threads.storage_hive.total (gauge) | Number of threads in the StorageHive thread pool. |
clickhouse.threads.storage_s3.active (gauge) | Number of threads in the StorageS3 thread pool running a task. |
clickhouse.threads.storage_s3.scheduled (gauge) | Number of queued or active jobs in the StorageS3 thread pool. |
clickhouse.threads.storage_s3.total (gauge) | Number of threads in the StorageS3 thread pool. |
clickhouse.threads.system_replicas.active (gauge) | Number of threads in the system.replicas thread pool running a task. |
clickhouse.threads.system_replicas.scheduled (gauge) | Number of queued or active jobs in the system.replicas thread pool. |
clickhouse.threads.system_replicas.total (gauge) | Number of threads in the system.replicas thread pool. |
clickhouse.threads.tables_loader_background.active (gauge) | Number of threads in the tables loader background thread pool running a task. |
clickhouse.threads.tables_loader_background.scheduled (gauge) | Number of queued or active jobs in the tables loader background thread pool. |
clickhouse.threads.tables_loader_background.total (gauge) | Number of threads in the tables loader background thread pool. |
clickhouse.threads.tables_loader_foreground.active (gauge) | Number of threads in the tables loader foreground thread pool running a task. |
clickhouse.threads.tables_loader_foreground.scheduled (gauge) | Number of queued or active jobs in the tables loader foreground thread pool. |
clickhouse.threads.tables_loader_foreground.total (gauge) | Number of threads in the tables loader foreground thread pool. |
clickhouse.throttler.local_read.bytes.count (count) | Bytes passed through 'max_local_read_bandwidth_for_server'/'max_local_read_bandwidth' throttler. |
clickhouse.throttler.local_read.bytes.total (gauge) | Bytes passed through 'max_local_read_bandwidth_for_server'/'max_local_read_bandwidth' throttler. |
clickhouse.throttler.local_read.sleep.time (gauge) | Total time a query was sleeping to conform to 'max_local_read_bandwidth_for_server'/'max_local_read_bandwidth' throttling. Shown as microsecond |
clickhouse.throttler.local_write.bytes.count (count) | Bytes passed through 'max_local_write_bandwidth_for_server'/'max_local_write_bandwidth' throttler. |
clickhouse.throttler.local_write.bytes.total (gauge) | Bytes passed through 'max_local_write_bandwidth_for_server'/'max_local_write_bandwidth' throttler. |
clickhouse.throttler.local_write.sleep.time (gauge) | Total time a query was sleeping to conform to 'max_local_write_bandwidth_for_server'/'max_local_write_bandwidth' throttling. Shown as microsecond |
clickhouse.uptime (gauge) | The amount of time ClickHouse has been active. Shown as second |
clickhouse.views.refreshing.current (gauge) | Number of materialized views currently executing a refresh |
clickhouse.views.refreshing.total (gauge) | Number of materialized views with periodic refreshing (REFRESH) |
clickhouse.zk.check.count (count) | Number of 'check' requests to ZooKeeper. Usually they don't make sense in isolation, only as part of a complex transaction. |
clickhouse.zk.check.total (gauge) | Number of 'check' requests to ZooKeeper. Usually they don't make sense in isolation, only as part of a complex transaction. |
clickhouse.zk.close.count (count) | Number of times the connection with ZooKeeper has been closed voluntarily. |
clickhouse.zk.close.total (gauge) | Number of times the connection with ZooKeeper has been closed voluntarily. |
clickhouse.zk.connection (gauge) | The number of sessions (connections) to ZooKeeper. Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to the lack of linearizability (stale reads) that the ZooKeeper consistency model allows. Shown as connection |
clickhouse.zk.connection.established.count (count) | Number of times connection with ZooKeeper has been established. |
clickhouse.zk.connection.established.total (gauge) | Number of times connection with ZooKeeper has been established. |
clickhouse.zk.create.count (count) | Number of 'create' requests to ZooKeeper. Shown as request |
clickhouse.zk.create.total (gauge) | Number of 'create' requests to ZooKeeper. Shown as request |
clickhouse.zk.data.exception.count (count) | Number of exceptions while working with ZooKeeper related to the data (no node, bad version or similar). |
clickhouse.zk.data.exception.total (gauge) | Number of exceptions while working with ZooKeeper related to the data (no node, bad version or similar). |
clickhouse.zk.ddl_entry.max (gauge) | Max DDL entry of DDLWorker that was pushed to ZooKeeper. |
clickhouse.zk.exist.count (count) | Number of 'exists' requests to ZooKeeper. Shown as request |
clickhouse.zk.exist.total (gauge) | Number of 'exists' requests to ZooKeeper. Shown as request |
clickhouse.zk.get.count (count) | Number of 'get' requests to ZooKeeper. Shown as request |
clickhouse.zk.get.total (gauge) | Number of 'get' requests to ZooKeeper. Shown as request |
clickhouse.zk.list.count (count) | Number of 'list' (getChildren) requests to ZooKeeper. Shown as request |
clickhouse.zk.list.total (gauge) | Number of 'list' (getChildren) requests to ZooKeeper. Shown as request |
clickhouse.zk.multi.count (count) | Number of 'multi' requests to ZooKeeper (compound transactions). Shown as request |
clickhouse.zk.multi.total (gauge) | Number of 'multi' requests to ZooKeeper (compound transactions). Shown as request |
clickhouse.zk.network.exception.count (count) | Number of exceptions while working with ZooKeeper related to network (connection loss or similar). |
clickhouse.zk.network.exception.total (gauge) | Number of exceptions while working with ZooKeeper related to network (connection loss or similar). |
clickhouse.zk.node.ephemeral (gauge) | The number of ephemeral nodes hold in ZooKeeper. Shown as node |
clickhouse.zk.operation.count (count) | Number of ZooKeeper operations, which include both read and write operations as well as multi-transactions. Shown as operation |
clickhouse.zk.operation.total (gauge) | Number of ZooKeeper operations, which include both read and write operations as well as multi-transactions. Shown as operation |
clickhouse.zk.other.exception.count (count) | Number of exceptions while working with ZooKeeper other than ZooKeeperUserExceptions and ZooKeeperHardwareExceptions. |
clickhouse.zk.other.exception.total (gauge) | Number of exceptions while working with ZooKeeper other than ZooKeeperUserExceptions and ZooKeeperHardwareExceptions. |
clickhouse.zk.parts.covered.count (count) | For debugging purposes. Number of parts in ZooKeeper that have a covering part but don't exist on disk. Checked on server start. |
clickhouse.zk.parts.covered.total (gauge) | For debugging purposes. Number of parts in ZooKeeper that have a covering part but don't exist on disk. Checked on server start. |
clickhouse.zk.received.size.count (count) | Number of bytes received over network while communicating with ZooKeeper. Shown as byte |
clickhouse.zk.received.size.total (gauge) | Number of bytes received over network while communicating with ZooKeeper. Shown as byte |
clickhouse.zk.reconfig.count (count) | Number of 'reconfig' requests to ZooKeeper. |
clickhouse.zk.reconfig.total (gauge) | Number of 'reconfig' requests to ZooKeeper. |
clickhouse.zk.remove.count (count) | Number of 'remove' requests to ZooKeeper. Shown as request |
clickhouse.zk.remove.total (gauge) | Number of 'remove' requests to ZooKeeper. Shown as request |
clickhouse.zk.request (gauge) | The number of requests to ZooKeeper in flight. Shown as request |
clickhouse.zk.sent.size.count (count) | Number of bytes sent over network while communicating with ZooKeeper. Shown as byte |
clickhouse.zk.sent.size.total (gauge) | Number of bytes sent over network while communicating with ZooKeeper. Shown as byte |
clickhouse.zk.set.count (count) | Number of 'set' requests to ZooKeeper. Shown as request |
clickhouse.zk.set.total (gauge) | Number of 'set' requests to ZooKeeper. Shown as request |
clickhouse.zk.sync.count (count) | Number of 'sync' requests to ZooKeeper. These requests are rarely needed or usable. |
clickhouse.zk.sync.total (gauge) | Number of 'sync' requests to ZooKeeper. These requests are rarely needed or usable. |
clickhouse.zk.wait.time (gauge) | Number of microseconds spent waiting for responses from ZooKeeper after creating a request, summed across all the requesting threads. Shown as microsecond |
clickhouse.zk.watch (gauge) | The number of watches (event subscriptions) in ZooKeeper. Shown as event |
clickhouse.zk.watch.count (count) | The number of watches (event subscriptions) in ZooKeeper. Shown as event |
clickhouse.zk.watch.total (gauge) | The number of watches (event subscriptions) in ZooKeeper. Shown as event |
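Most of the counters and gauges above are derived from ClickHouse's own system.metrics, system.events, and system.asynchronous_metrics tables. If a value reported by the Agent looks surprising, one way to cross-check it is to read the raw counters directly over ClickHouse's HTTP interface. The sketch below is a minimal, illustrative example only: it assumes the default HTTP port 8123, a user that can SELECT from the system database, and a couple of example metric and event names that may vary by ClickHouse version.

```python
# Minimal sketch (not part of the integration): cross-check raw ClickHouse counters
# over the HTTP interface. Host, port, credentials, and the metric/event names
# below are placeholders -- adjust them for your environment.
import requests

CLICKHOUSE_URL = "http://localhost:8123/"   # assumption: default HTTP interface
AUTH = ("default", "")                      # assumption: default user, empty password

QUERY = """
SELECT 'metric' AS source, metric AS name, toFloat64(value) AS value
FROM system.metrics
WHERE metric IN ('ZooKeeperRequest', 'ReplicatedSend')
UNION ALL
SELECT 'event', event, toFloat64(value)
FROM system.events
WHERE event IN ('SelectedRows', 'InsertedRows')
FORMAT TSV
"""

resp = requests.get(CLICKHOUSE_URL, params={"query": QUERY}, auth=AUTH, timeout=10)
resp.raise_for_status()

for line in resp.text.strip().splitlines():
    source, name, value = line.split("\t")
    print(f"{source:6} {name:20} {value}")
```

Counters in system.events reset when the server restarts, so compare trends rather than absolute values when matching them against the Agent's count metrics.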
The ClickHouse check does not include any events.
clickhouse.can_connect
Returns CRITICAL if the Agent is unable to connect to the monitored ClickHouse database; otherwise returns OK.
Statuses: ok, critical
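Once clickhouse.can_connect reports OK, a quick way to confirm that the metrics listed above are flowing is to query one of them through the Datadog API. The sketch below is a minimal example, not part of the integration itself: it assumes the v1 metrics query endpoint on datadoghq.com, that your keys are exported as DD_API_KEY and DD_APP_KEY, and it uses clickhouse.table.replicated.queue.size purely as an example metric from the table above.

```python
# Minimal sketch (not part of the integration): query one of the ClickHouse metrics
# listed above through Datadog's v1 metrics query API.
# Assumes DD_API_KEY / DD_APP_KEY are set and the account is on datadoghq.com.
import os
import time

import requests

API_URL = "https://api.datadoghq.com/api/v1/query"
now = int(time.time())

resp = requests.get(
    API_URL,
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    params={
        "from": now - 3600,  # last hour
        "to": now,
        "query": "avg:clickhouse.table.replicated.queue.size{*}",  # example metric
    },
    timeout=10,
)
resp.raise_for_status()

for series in resp.json().get("series", []):
    if series.get("pointlist"):
        last_ts, last_val = series["pointlist"][-1]
        print(f"{series['metric']} [{series.get('scope', '*')}]: {last_val} @ {int(last_ts / 1000)}")
```

If no series comes back, the chosen metric may simply not be emitted in your environment (for example, if there are no Replicated tables); pick any metric from the table that applies to your setup.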
Need help? Contact Datadog support.