Elasticsearch
Synopsis
Creates an Elasticsearch target that sends data using the Bulk API. Supports multiple endpoints, field normalization, and customizable batch sizing.
Schema
```yaml
- name: <string>
  description: <string>
  type: elastic
  status: <boolean>
  pipelines: <pipeline[]>
  properties:
    index: <string>
    max_payload_size_kb: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    insecure_skip_verify: <boolean>
    use_compression: <boolean>
    version: <string>
    write_action: <string>
    filter_path: <string>
    pipeline: <string>
    field_format: <string>
    endpoints:
      - endpoint: <string>
        username: <string>
        password: <string>
```
Configuration
The following are the fields used to define the target:
Field | Required | Default | Description |
---|---|---|---|
name | Y | - | Target name |
description | N | - | Optional description |
type | Y | - | Must be elastic |
pipelines | N | - | Optional post-processor pipelines |
status | N | true | Enable/disable the target |
Elasticsearch
Field | Required | Default | Description |
---|---|---|---|
index | Y | - | Default Elasticsearch index name |
max_payload_size_kb | N | 4096 | Maximum bulk request size in KB |
batch_size | N | 10000 | Maximum number of events per batch |
timeout | N | 30 | Connection timeout in seconds |
insecure_skip_verify | N | false | Skip TLS certificate verification |
use_compression | N | true | Enable GZIP compression |
version | N | auto | Elasticsearch version |
write_action | N | create | Bulk API action (`index`, `create`, `update`, `delete`) |
filter_path | N | errors,items.*.error,items.*._index,items.*.status | Response filter path |
pipeline | N | - | Ingest pipeline name |
field_format | N | - | Data normalization format. See the Field Normalization section below |
Endpoint
Field | Required | Default | Description |
---|---|---|---|
endpoint | Y | - | Elasticsearch URL |
username | N | - | Basic auth username |
password | N | - | Basic auth password |
Details
The target supports multiple endpoints, authentication, compression, and ingest pipelines. Data is batched for efficient delivery and can be automatically routed to different indices.
URLs are automatically appended with `/_bulk` if the suffix is not present. Events are batched until either the batch size or the payload size limit is reached.
For load balancing or failover, events are sent to the endpoints in order, with subsequent endpoints used only if the previous ones fail.
Each event is automatically enriched with a timestamp in RFC3339 format based on the log's epoch time. You can route events to different indices by setting the `SystemS3` field in your logs to the desired index name.
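For instance, an event shaped like the following sketch (all field names except `SystemS3` are illustrative) would be written to the `security-events` index instead of the default:

```yaml
# Hypothetical event sketch: only SystemS3 is meaningful to the target;
# the other field names are illustrative.
Message: "Failed login for user admin"
EpochTime: 1735689600
SystemS3: "security-events"   # routes this event to the security-events index
```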
Long `timeout` values may lead to connection pooling issues.
Setting `max_payload_size_kb` too high might cause memory pressure.
Field Normalization
The `field_format` property allows normalizing log data to standard formats:

- `ecs` - Elastic Common Schema
- `cim` - Common Information Model
- `asim` - Advanced Security Information Model
Field normalization is applied before the logs are sent to Elasticsearch, ensuring consistent indexing and search capabilities.
Examples
Basic
Simple Elasticsearch output with a single endpoint:
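A minimal sketch based on the schema above; the target name and endpoint URL are illustrative:

```yaml
- name: basic_elastic
  type: elastic
  properties:
    index: "logs"
    endpoints:
      - endpoint: "http://localhost:9200"
```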
Secure
Secure configuration with authentication and TLS:
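A sketch assuming basic authentication over HTTPS; the endpoint URL and credentials are placeholders:

```yaml
- name: secure_elastic
  type: elastic
  properties:
    index: "logs"
    insecure_skip_verify: false
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
        username: "elastic"
        password: "changeme"   # placeholder credential
```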
In production environments, setting `insecure_skip_verify` to `true` is not recommended.
Ingest Pipeline
Send data through an ingest pipeline:
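A sketch that forwards events through a named ingest pipeline; the pipeline name and endpoint are illustrative:

```yaml
- name: pipeline_elastic
  type: elastic
  properties:
    index: "logs"
    pipeline: "logs-enrichment"   # illustrative ingest pipeline name
    endpoints:
      - endpoint: "http://localhost:9200"
```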
High-Volume
Optimized for high-volume data ingestion:
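A sketch that raises the batching limits above their defaults; the specific values are illustrative and should be tuned to the cluster:

```yaml
- name: highvolume_elastic
  type: elastic
  properties:
    index: "logs"
    batch_size: 50000            # more events per bulk request than the 10000 default
    max_payload_size_kb: 8192    # larger payload ceiling than the 4096 KB default
    use_compression: true
    timeout: 60
    endpoints:
      - endpoint: "http://es-node-1:9200"
      - endpoint: "http://es-node-2:9200"
```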
Field Normalization
Using ECS field normalization for enhanced compatibility with the Elastic Stack:
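A sketch that enables ECS normalization via `field_format`; the target name, index, and endpoint are illustrative:

```yaml
- name: ecs_elastic
  type: elastic
  properties:
    index: "logs-ecs"
    field_format: "ecs"
    endpoints:
      - endpoint: "http://localhost:9200"
```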