Elasticsearch
Synopsis
Creates an Elasticsearch target that sends data using the Bulk API. Supports multiple endpoints, field normalization, customizable batch sizing, and automatic load balancing across Elasticsearch nodes.
Schema
```yaml
- name: <string>
  description: <string>
  type: elastic
  status: <boolean>
  pipelines: <pipeline[]>
  properties:
    index: <string>
    max_payload_size_kb: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    insecure_skip_verify: <boolean>
    use_compression: <boolean>
    write_action: <string>
    filter_path: <string>
    pipeline: <string>
    field_format: <string>
    endpoints:
      - endpoint: <string>
        username: <string>
        password: <string>
    interval: <string|numeric>
    cron: <string>
    debug:
      status: <boolean>
      dont_send_logs: <boolean>
```
Configuration
The following are the fields used to define the target:
| Field | Required | Default | Description |
|---|---|---|---|
| `name` | Y | - | Target name |
| `description` | N | - | Optional description |
| `type` | Y | - | Must be `elastic` |
| `pipelines` | N | - | Optional post-processor pipelines |
| `status` | N | `true` | Enable/disable the target |
Elasticsearch
| Field | Required | Default | Description |
|---|---|---|---|
| `index` | Y | - | Default Elasticsearch index name |
| `max_payload_size_kb` | N | `4096` | Maximum bulk request size in KB |
| `batch_size` | N | `10000` | Maximum number of events per batch |
| `timeout` | N | `30` | Connection timeout in seconds |
| `insecure_skip_verify` | N | `false` | Skip TLS certificate verification |
| `use_compression` | N | `true` | Enable GZIP compression |
| `write_action` | N | `create` | Bulk API action (`index`, `create`, `update`, `delete`) |
| `filter_path` | N | `errors,items.*.error,items.*._index,items.*.status` | Response filter path |
| `pipeline` | N | - | Ingest pipeline name |
| `field_format` | N | - | Data normalization format. See the applicable Normalization section |
Endpoint
| Field | Required | Default | Description |
|---|---|---|---|
| `endpoint` | Y | - | Elasticsearch URL (`/_bulk` is appended automatically if not present) |
| `username` | N | - | Basic auth username |
| `password` | N | - | Basic auth password |
Scheduler
| Field | Required | Default | Description |
|---|---|---|---|
| `interval` | N | `realtime` | Execution frequency. See Interval for details |
| `cron` | N | - | Cron expression for scheduled execution. See Cron for details |
Debug Options
| Field | Required | Default | Description |
|---|---|---|---|
| `debug.status` | N | `false` | Enable debug logging |
| `debug.dont_send_logs` | N | `false` | Process logs but don't send to the target (for testing) |
Details
The target supports multiple endpoints, authentication, compression, and ingest pipelines. Data is batched for efficient delivery and can be automatically routed to different indices.
The `/_bulk` suffix is appended to endpoint URLs automatically when it is not already present. Events are batched until either the batch size or the payload size limit is reached.
For load balancing, events are sent to randomly selected endpoints. If an endpoint fails, the next endpoint in the randomized list is tried until successful delivery or all endpoints fail.
Each event is automatically enriched with a timestamp in RFC3339 format based on the log's epoch time. You can route events to different indices by setting the `index` field in a pipeline processor.
Long timeout values may lead to connection pooling issues and increased resource consumption.
Setting `max_payload_size_kb` too high might cause memory pressure and can exceed Elasticsearch's `http.max_content_length` setting (default 100MB).
Load Balancing and Failover
When multiple endpoints are configured, the target uses randomized load balancing. For each batch:
- Endpoints are randomly shuffled
- The batch is sent to the first endpoint
- If it fails, the next endpoint in the shuffled list is tried
- This continues until successful delivery or all endpoints fail
If only some endpoints fail but delivery eventually succeeds, the batch is cleared and a partial error is logged. If all endpoints fail, the batch is retained for retry and a complete failure error is returned.
JSON Message Handling
The target intelligently handles messages that are already in JSON format:
- If a message contains an `@timestamp` field or is ECS-normalized, it is treated as a structured JSON document
- The JSON is parsed and sent as-is to Elasticsearch
- If parsing fails, the message is sent as plain text with an auto-generated timestamp
This allows you to send both structured and unstructured logs through the same target.
Dynamic Index Routing
Route events to different indices using pipeline processors by setting the `index` field:
```yaml
pipelines:
  - name: route_by_type
    processors:
      - set:
          field: index
          value: "error-logs"
          if: "level == 'error'"
      - set:
          field: index
          value: "metrics"
          if: "type == 'metric'"
```
This allows flexible routing without creating multiple target configurations.
Bulk API Error Handling
The target parses the bulk API response to detect individual document errors:
- Uses `filter_path` to reduce response size and focus on error details
- Extracts error type, reason, and HTTP status for failed documents
- Returns detailed error messages indicating which documents failed and why
Common errors include:
- Document version conflicts (for the `create` action)
- Mapping errors (field type mismatches)
- Index not found or closed
- Pipeline failures (when using ingest pipelines)
Write Actions
The `write_action` field determines how documents are indexed:
- `create` (default): Only index if the document doesn't exist. Fails on duplicates.
- `index`: Index or replace the existing document. Always succeeds unless there's a system error.
- `update`: Update an existing document. Fails if the document doesn't exist.
- `delete`: Remove the document. Use carefully.
Response Filtering
The `filter_path` parameter filters the bulk API response to reduce network overhead:
- `errors`: Boolean indicating if any operations failed
- `items.*.error`: Error details for failed operations
- `items.*._index`: Index name for each operation
- `items.*.status`: HTTP status code for each operation
For high-volume scenarios, this filtering significantly reduces response size and parsing overhead.
Field Normalization
The `field_format` property allows normalizing log data to standard formats:
- `ecs` - Elastic Common Schema
Field normalization is applied before the logs are sent to Elasticsearch, ensuring consistent indexing and search capabilities. ECS normalization maps common fields to Elasticsearch's standard schema for improved compatibility with Kibana dashboards and detection rules.
Compression
Compression is enabled by default and uses gzip to reduce network bandwidth. This adds minimal CPU overhead but can significantly improve throughput for high-volume scenarios. Disable compression only if you have bandwidth to spare and want to reduce CPU usage.
Examples
Basic
Simple Elasticsearch output with a single endpoint:
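A minimal sketch using only the required fields; the target name, index, and endpoint URL are illustrative placeholders:

```yaml
- name: basic_elastic
  type: elastic
  properties:
    index: "logs"
    endpoints:
      - endpoint: "http://localhost:9200"
```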
Secure
Secure configuration with authentication and TLS:
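A sketch using basic auth over HTTPS with certificate verification left enabled; the hostname and credentials are placeholders:

```yaml
- name: secure_elastic
  type: elastic
  properties:
    index: "secure-logs"
    insecure_skip_verify: false
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
        username: "elastic"
        password: "changeme"
```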
In production environments, setting `insecure_skip_verify` to `true` is not recommended.
Ingest Pipeline
Send data through an ingest pipeline for server-side processing:
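A sketch routing documents through a server-side ingest pipeline; the pipeline name is a placeholder and must already exist in Elasticsearch:

```yaml
- name: pipeline_elastic
  type: elastic
  properties:
    index: "enriched-logs"
    pipeline: "geoip-enrichment"
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
```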
High-Volume
Optimized for high-volume data ingestion with load balancing:
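A sketch with three endpoints for randomized load balancing and larger batches; the batch and payload figures are illustrative starting points, not recommendations:

```yaml
- name: highvolume_elastic
  type: elastic
  properties:
    index: "firehose"
    batch_size: 20000
    max_payload_size_kb: 8192
    use_compression: true
    endpoints:
      - endpoint: "https://es-node1.example.com:9200"
      - endpoint: "https://es-node2.example.com:9200"
      - endpoint: "https://es-node3.example.com:9200"
```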
Field Normalization
Using ECS field normalization for enhanced compatibility with Elastic Stack:
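A sketch enabling ECS normalization via `field_format`; the name, index, and URL are placeholders:

```yaml
- name: ecs_elastic
  type: elastic
  properties:
    index: "ecs-logs"
    field_format: "ecs"
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
```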
Index Action
Using the index action to allow document updates and overwrites:
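A sketch switching `write_action` to `index` so existing documents are replaced rather than producing duplicate errors; index name and URL are placeholders:

```yaml
- name: upsert_elastic
  type: elastic
  properties:
    index: "state-documents"
    write_action: "index"
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
```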
Minimal Response
Optimize for minimal response size by filtering to only errors:
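A sketch restricting the bulk response to the top-level error flag only; name, index, and URL are placeholders:

```yaml
- name: minimal_elastic
  type: elastic
  properties:
    index: "logs"
    filter_path: "errors"
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
```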
Performance Tuning
Batch Size vs Payload Size
Events are batched until either limit is reached:
- `batch_size`: Number of events per batch
- `max_payload_size_kb`: Total size in kilobytes
Tune these based on your average event size:
- Small events (<1 KB): Increase `batch_size`, keep the default `max_payload_size_kb`
- Large events (>10 KB): Keep the default `batch_size`, increase `max_payload_size_kb`
- Mixed sizes: Monitor both limits and adjust based on actual batch sizes
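For instance, a sketch of the small-event case; the figures are illustrative and should be validated against observed batch sizes:

```yaml
- name: tuned_elastic
  type: elastic
  properties:
    index: "logs"
    batch_size: 50000          # more small events per batch
    max_payload_size_kb: 4096  # default payload ceiling left in place
    endpoints:
      - endpoint: "https://elastic.example.com:9200"
```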
Timeout
Setting appropriate timeouts helps balance reliability and performance:
- Short timeouts (10-30s): Fail fast, better for real-time scenarios
- Long timeouts (60s+): More tolerant of network issues, but may cause connection pooling problems
Compression
Enable compression (default) for high-volume scenarios to reduce network bandwidth. Disable only if CPU is constrained and network bandwidth is abundant.
Filter Path
The default `filter_path` provides detailed error information while minimizing response size. For even better performance in high-volume scenarios with low error rates, use `filter_path: "errors"` to return only the error flag.
Troubleshooting
Bulk API Errors
Check logs for detailed error messages including:
- Document index and position in batch
- Error type and reason
- HTTP status code
Common issues:
- Version conflicts: Switch to the `index` action or handle conflicts in your application
- Mapping errors: Ensure field types match the index mapping
- Pipeline errors: Verify ingest pipeline configuration
Payload Size Exceeded
If you see "bulk request size exceeds limit" errors:
- Reduce `batch_size`
- Reduce `max_payload_size_kb`
- Check Elasticsearch's `http.max_content_length` setting
Partial Endpoint Failures
If some endpoints fail but delivery succeeds, check logs for partial failure errors indicating which endpoints are problematic. Verify network connectivity and Elasticsearch node health.
All Endpoints Failed
If all endpoints fail:
- Verify network connectivity
- Check Elasticsearch cluster health
- Ensure endpoints are accessible and not rate-limited
- Review Elasticsearch logs for errors