Version: 1.3.0

ClickHouse


Synopsis

Creates a ClickHouse target that sends log data to a ClickHouse database server for analytics and storage. Supports batch processing and field normalization.

Schema

- name: <string>
  description: <string>
  type: clickhouse
  pipelines: <pipeline[]>
  status: <boolean>
  properties:
    address: <string>
    port: <integer>
    username: <string>
    password: <string>
    database: <string>
    table: <string>
    batch_size: <integer>
    field_format: <string>

Configuration

The following fields are used to define the target:

|Field|Required|Default|Description|
|---|---|---|---|
|`name`|Y|-|Target name|
|`description`|N|-|Optional description|
|`type`|Y|-|Must be `clickhouse`|
|`pipelines`|N|-|Optional post-processor pipelines|
|`status`|N|`true`|Enable/disable the target|

Connection

|Field|Required|Default|Description|
|---|---|---|---|
|`address`|Y|-|ClickHouse server address|
|`port`|N|`9000`|ClickHouse server port (native protocol)|
|`username`|Y|-|ClickHouse username|
|`password`|Y|-|ClickHouse password|
|`database`|Y|-|ClickHouse database name|
|`table`|Y|-|ClickHouse table name|

Processing

|Field|Required|Default|Description|
|---|---|---|---|
|`batch_size`|N|-|Number of log entries to batch before sending|
|`field_format`|N|-|Data normalization format. See the applicable Normalization section|

Details

The ClickHouse target uses the native ClickHouse protocol to efficiently send log data in batches. Logs are accumulated until the batch size is reached, then sent to the server. The default batch size is defined by the service configuration, but can be overridden.
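As a minimal sketch, a target could override the default with an explicit batch size; the target name and the value 10000 below are illustrative, not recommendations:

targets:
  - name: batched_clickhouse
    type: clickhouse
    properties:
      address: "clickhouse.example.com"
      username: "logger"
      password: "secure_password"
      database: "logs"
      table: "bulk_logs"
      # Illustrative value; when omitted, the default comes from the service configuration
      batch_size: 10000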

The target supports field normalization to convert log data into standard formats like Elastic Common Schema (ECS), Common Information Model (CIM), or Advanced Security Information Model (ASIM) before sending it to ClickHouse.

Prerequisites

  1. A running ClickHouse server
  2. A database and table already created in ClickHouse
  3. A user with write permissions to the specified table
Note

The ClickHouse table should have a schema compatible with the log data being sent. At minimum, it should include columns for timestamp and message fields.

Warning

For high-volume logging, ensure your ClickHouse server is properly configured for performance, including appropriate settings for inserts, memory usage, and disk I/O.

Examples

Basic

Minimum configuration for sending logs to ClickHouse:

targets:
  - name: basic_clickhouse
    type: clickhouse
    properties:
      address: "192.168.1.100"
      username: "default"
      password: "password"
      database: "logs"
      table: "system_logs"

Custom Port

Configuration with a non-default port:

targets:
  - name: custom_port_clickhouse
    type: clickhouse
    properties:
      address: "clickhouse.example.com"
      port: 9440
      username: "logger"
      password: "secure_password"
      database: "logs"
      table: "application_logs"

With Normalization

Configuration using field normalization:

targets:
  - name: normalized_clickhouse
    type: clickhouse
    properties:
      address: "clickhouse.example.com"
      username: "logger"
      password: "secure_password"
      database: "logs"
      table: "security_logs"
      field_format: "ecs"

With Pipeline

Using a pipeline for additional log processing:

targets:
  - name: pipeline_clickhouse
    type: clickhouse
    pipelines:
      - enrich_logs
    properties:
      address: "clickhouse.example.com"
      username: "logger"
      password: "secure_password"
      database: "logs"
      table: "enriched_logs"

Secure Configuration

Using environment variables for credentials:

targets:
  - name: secure_clickhouse
    type: clickhouse
    properties:
      address: "${CLICKHOUSE_ADDRESS}"
      username: "${CLICKHOUSE_USER}"
      password: "${CLICKHOUSE_PASSWORD}"
      database: "logs"
      table: "security_logs"