Elasticsearch

Experimental · Observability

Synopsis

Creates an Elasticsearch target that sends data using the Bulk API. Supports multiple endpoints, authentication, compression, and ingest pipelines. Data is batched for efficient delivery and can be automatically routed to different indices.

Schema

```yaml
- id: <numeric>
  name: <string>
  description: <string>
  type: elastic
  status: <boolean>
  properties:
    index: <string>
    max_payload_size_kb: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    insecure_skip_verify: <boolean>
    use_compression: <boolean>
    version: <string>
    write_action: <string>
    filter_path: <string>
    pipeline: <string>
    endpoints:
      - endpoint: <string>
        username: <string>
        password: <string>
```

Configuration

The following are the minimum requirements to define the target.

| Field | Required | Default | Description |
|---|---|---|---|
| id | Y | - | Unique identifier |
| name | Y | - | Target name |
| description | N | - | Optional description |
| type | Y | - | Must be elastic |
| status | N | true | Enable/disable the target |

Elasticsearch

| Field | Required | Default | Description |
|---|---|---|---|
| index | Y | - | Default Elasticsearch index name |
| max_payload_size_kb | N | 4096 | Maximum bulk request size in KB |
| batch_size | N | 10000 | Maximum number of events per batch |
| timeout | N | 30 | Connection timeout in seconds |
| insecure_skip_verify | N | false | Skip TLS certificate verification |
| use_compression | N | true | Enable GZIP compression |
| version | N | auto | Elasticsearch version |
| write_action | N | create | Bulk API action (index, create, update, delete) |
| filter_path | N | errors,items.*.error,items.*._index,items.*.status | Response filter path |
| pipeline | N | - | Ingest pipeline name |
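The write_action field maps to the action line that precedes each document in the Bulk API's NDJSON body. The following sketch shows how such a body is shaped; it illustrates the standard Bulk API format, not this target's internal code (the `build_bulk_body` helper is hypothetical):

```python
import json

def build_bulk_body(events, index, write_action="create"):
    """Build an NDJSON bulk body: one action line, then one document line,
    per event. (The "delete" action takes no document line; not shown.)
    Illustrative sketch only, not the target's actual implementation."""
    lines = []
    for event in events:
        # Action line names the operation and the destination index.
        lines.append(json.dumps({write_action: {"_index": index}}))
        # Document line carries the event itself.
        lines.append(json.dumps(event))
    # The Bulk API requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

body = build_bulk_body([{"msg": "hello"}], "logs-2024.01.01")
print(body)
```

Each batch the target flushes is one such body, POSTed to an endpoint's `/_bulk` URL.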

Endpoint

| Field | Required | Default | Description |
|---|---|---|---|
| endpoint | Y | - | Elasticsearch URL |
| username | N | - | Basic auth username |
| password | N | - | Basic auth password |

Details

Endpoint URLs are automatically appended with /_bulk if the suffix is not already present. Events are accumulated until either the batch-size or the payload-size limit is reached, then flushed as a single bulk request.

For load balancing, events are automatically spread across all configured endpoints.
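The batching and load-balancing behavior can be sketched as follows. This is a minimal illustration under stated assumptions (round-robin endpoint selection; the `BulkBatcher` class and its method names are hypothetical, not the target's actual implementation):

```python
from itertools import cycle

class BulkBatcher:
    """Sketch of batch-and-flush behavior; not the target's real code."""

    def __init__(self, endpoints, batch_size=10000, max_payload_size_kb=4096):
        self.endpoints = cycle(endpoints)        # round-robin load balancing (assumed)
        self.batch_size = batch_size
        self.max_payload_bytes = max_payload_size_kb * 1024
        self.events, self.payload_bytes = [], 0
        self.flushed = []                        # (endpoint, events) pairs

    def add(self, event: str):
        self.events.append(event)
        self.payload_bytes += len(event.encode())
        # Flush when either limit is reached, as described above.
        if (len(self.events) >= self.batch_size
                or self.payload_bytes >= self.max_payload_bytes):
            self.flush()

    def flush(self):
        if self.events:
            # Each batch goes to the next endpoint in round-robin order.
            self.flushed.append((next(self.endpoints), self.events))
            self.events, self.payload_bytes = [], 0

batcher = BulkBatcher(["http://es1:9200", "http://es2:9200"], batch_size=2)
for i in range(5):
    batcher.add(f'{{"msg": "event {i}"}}')
batcher.flush()
print([ep for ep, _ in batcher.flushed])  # three batches: es1, es2, es1
```

With `batch_size: 2`, five events produce three batches (2 + 2 + 1), alternating between the two endpoints.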

Examples

Basic

Simple Elasticsearch output with a single endpoint:

```yaml
- id: 1
  name: elastic_output
  type: elastic
  properties:
    index: "logs-%Y.%m.%d"
    endpoints:
      - endpoint: "http://elasticsearch:9200"
```

Secure

Secure configuration with authentication and TLS:

```yaml
- id: 2
  name: secure_elastic
  type: elastic
  properties:
    index: "secure-logs"
    use_compression: true
    insecure_skip_verify: false
    endpoints:
      - endpoint: "https://elasticsearch:9200"
        username: "elastic"
        password: "password"
```
**Warning:** In production environments, setting insecure_skip_verify to true is not recommended.

Ingest Pipeline

Send data through an ingest pipeline:

```yaml
- id: 3
  name: pipeline_elastic
  type: elastic
  properties:
    index: "processed-logs"
    pipeline: "log-processor"
    write_action: "create"
    endpoints:
      - endpoint: "http://elasticsearch:9200"
```
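The pipeline named in the config must already exist in Elasticsearch. It can be created with the standard Ingest API (PUT _ingest/pipeline/&lt;name&gt;); the processor below is purely illustrative:

```shell
# Create a "log-processor" ingest pipeline (illustrative example processor).
curl -X PUT "http://elasticsearch:9200/_ingest/pipeline/log-processor" \
  -H "Content-Type: application/json" \
  -d '{
    "description": "Example pipeline that adds an ingest timestamp",
    "processors": [
      { "set": { "field": "ingested_at", "value": "{{_ingest.timestamp}}" } }
    ]
  }'
```

Once the pipeline exists, every document sent by this target is run through it before indexing.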

High-Volume

Optimized for high-volume data ingestion:

```yaml
- id: 4
  name: highvol_elastic
  type: elastic
  properties:
    index: "metrics"
    batch_size: 20000
    max_payload_size_kb: 8192
    use_compression: true
    timeout: 60
    endpoints:
      - endpoint: "http://es1:9200"
      - endpoint: "http://es2:9200"
```
**Warning:** Long timeout values may lead to connection pooling issues.

**Warning:** Setting max_payload_size_kb too high may cause memory pressure.