Version: 1.4.0

Elasticsearch

Experimental Observability

Synopsis

Creates an Elasticsearch target that sends data using the Bulk API. Supports multiple endpoints, field normalization, and customizable batch sizing.

Schema

- name: <string>
  description: <string>
  type: elastic
  status: <boolean>
  pipelines: <pipeline[]>
  properties:
    index: <string>
    max_payload_size_kb: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    insecure_skip_verify: <boolean>
    use_compression: <boolean>
    version: <string>
    write_action: <string>
    filter_path: <string>
    pipeline: <string>
    field_format: <string>
    endpoints:
      - endpoint: <string>
        username: <string>
        password: <string>

Configuration

The following are the fields used to define the target:

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| name | Y | - | Target name |
| description | N | - | Optional description |
| type | Y | - | Must be elastic |
| pipelines | N | - | Optional post-processor pipelines |
| status | N | true | Enable/disable the target |
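
For orientation, here is a minimal sketch that combines the target-level fields above. The description, pipeline name, and index are hypothetical placeholders, and it assumes pipelines are referenced by name:

targets:
  - name: my_elastic
    description: "Primary log store"   # hypothetical description
    type: elastic
    status: true                       # enabled (the default)
    pipelines:
      - normalize_logs                 # hypothetical pipeline name
    properties:
      index: "logs"
      endpoints:
        - endpoint: "http://elasticsearch:9200"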

Elasticsearch

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| index | Y | - | Default Elasticsearch index name |
| max_payload_size_kb | N | 4096 | Maximum bulk request size in KB |
| batch_size | N | 10000 | Maximum number of events per batch |
| timeout | N | 30 | Connection timeout in seconds |
| insecure_skip_verify | N | false | Skip TLS certificate verification |
| use_compression | N | true | Enable GZIP compression |
| version | N | auto | Elasticsearch version |
| write_action | N | create | Bulk API action (index, create, update, delete) |
| filter_path | N | errors,items.*.error,items.*._index,items.*.status | Response filter path |
| pipeline | N | - | Ingest pipeline name |
| field_format | N | - | Data normalization format. See the Field Normalization section below |
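
As a hedged illustration of the bulk-behavior settings above, the sketch below overrides the write action, trims the Bulk API response, and shortens the timeout. All values are examples, not recommendations:

targets:
  - name: tuned_elastic
    type: elastic
    properties:
      index: "app-logs"
      write_action: "index"                 # index adds or replaces documents; create fails on duplicates
      filter_path: "errors,items.*.error"   # keep only error details in the bulk response
      timeout: 15                           # connection timeout in seconds
      endpoints:
        - endpoint: "http://elasticsearch:9200"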

Endpoint

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| endpoint | Y | - | Elasticsearch URL |
| username | N | - | Basic auth username |
| password | N | - | Basic auth password |

Details

The target supports multiple endpoints, authentication, compression, and ingest pipelines. Data is batched for efficient delivery and can be automatically routed to different indices.

The /_bulk suffix is appended to endpoint URLs automatically if it is not already present. Events are batched until either the batch-size or the payload-size limit is reached.

Multiple endpoints provide failover: events are sent to the endpoints in the order listed, and subsequent endpoints are used only if the previous ones fail.
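
To illustrate, a minimal failover sketch with two endpoints; the host names are placeholders, and the /_bulk suffix is added automatically as described above:

targets:
  - name: failover_elastic
    type: elastic
    properties:
      index: "logs"
      batch_size: 5000              # flush after 5000 events...
      max_payload_size_kb: 2048     # ...or once the bulk body reaches ~2 MB, whichever comes first
      endpoints:
        - endpoint: "http://es-primary:9200"   # always tried first
        - endpoint: "http://es-standby:9200"   # used only if the primary fails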

Each event is automatically enriched with a timestamp in RFC3339 format based on the log's epoch time. You can route events to different indices by setting the SystemS3 field in your logs to the desired index name.

warning

Long timeout values may lead to connection pooling issues.

warning

Setting max_payload_size_kb too high might cause memory pressure.

Field Normalization

The field_format property allows normalizing log data to standard formats:

  • ecs - Elastic Common Schema
  • cim - Common Information Model
  • asim - Advanced Security Information Model

Field normalization is applied before the logs are sent to Elasticsearch, ensuring consistent indexing and search capabilities.

Examples

Basic

Simple Elasticsearch output with a single endpoint...

targets:
  - name: elastic_output
    type: elastic
    properties:
      index: "logs-%Y.%m.%d"
      endpoints:
        - endpoint: "http://elasticsearch:9200"

Secure

Secure configuration with authentication and TLS...

targets:
  - name: secure_elastic
    type: elastic
    properties:
      index: "secure-logs"
      use_compression: true
      endpoints:
        - endpoint: "https://elasticsearch:9200"
          username: "elastic"
          password: "password"
      insecure_skip_verify: false

warning

In production environments, setting insecure_skip_verify to true is not recommended.

Ingest Pipeline

Send data through an ingest pipeline...

targets:
  - name: pipeline_elastic
    type: elastic
    properties:
      index: "processed-logs"
      pipeline: "log-processor"
      write_action: "create"
      endpoints:
        - endpoint: "http://elasticsearch:9200"

High-Volume

Optimized for high-volume data ingestion...

targets:
  - name: highvol_elastic
    type: elastic
    properties:
      index: "metrics"
      batch_size: 20000
      max_payload_size_kb: 8192
      use_compression: true
      timeout: 60
      endpoints:
        - endpoint: "http://es1:9200"
        - endpoint: "http://es2:9200"

Field Normalization

Using ECS field normalization for enhanced compatibility with Elastic Stack...

targets:
  - name: ecs_elastic
    type: elastic
    properties:
      index: "normalized-logs"
      field_format: "ecs"
      endpoints:
        - endpoint: "http://elasticsearch:9200"