Amazon OpenSearch
Synopsis
Creates an Amazon OpenSearch target that sends data using the Bulk API with AWS IAM authentication. Supports multiple endpoints, field normalization, customizable batch sizing, and automatic load balancing across OpenSearch nodes.
Schema
- name: <string>
  description: <string>
  type: amazonopensearch
  status: <boolean>
  pipelines: <pipeline[]>
  properties:
    index: <string>
    max_payload_size_kb: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    insecure_skip_verify: <boolean>
    use_compression: <boolean>
    write_action: <string>
    filter_path: <string>
    pipeline: <string>
    field_format: <string>
    endpoints:
      - endpoint: <string>
        use_iam: <boolean>
        region: <string>
        key: <string>
        secret: <string>
        session: <string>
        username: <string>
        password: <string>
    interval: <string|numeric>
    cron: <string>
    debug:
      status: <boolean>
      dont_send_logs: <boolean>
Configuration
The following are the fields used to define the target:
| Field | Required | Default | Description | 
|---|---|---|---|
| name | Y | - | Target name |
| description | N | - | Optional description |
| type | Y | - | Must be amazonopensearch |
| pipelines | N | - | Optional post-processor pipelines |
| status | N | true | Enable/disable the target |
OpenSearch
| Field | Required | Default | Description | 
|---|---|---|---|
| index | Y | - | Default OpenSearch index name |
| max_payload_size_kb | N | 4096 | Maximum bulk request size in KB |
| batch_size | N | 10000 | Maximum number of events per batch |
| timeout | N | 30 | Connection timeout in seconds |
| insecure_skip_verify | N | false | Skip TLS certificate verification |
| use_compression | N | true | Enable GZIP compression |
| write_action | N | create | Bulk API action (index, create, update, delete) |
| filter_path | N | errors,items.*.error,items.*._index,items.*.status | Response filter path |
| pipeline | N | - | Ingest pipeline name |
| field_format | N | - | Data normalization format. See applicable Normalization section |
Endpoint
| Field | Required | Default | Description | 
|---|---|---|---|
| endpoint | Y | - | OpenSearch domain URL (/_bulk is appended automatically if not present) |
| use_iam | N | true | Use AWS IAM authentication (recommended) |
| region | Y* | - | AWS region for IAM authentication |
| key | N* | - | AWS access key ID |
| secret | N* | - | AWS secret access key |
| session | N | - | AWS session token for temporary credentials |
| username | N | - | Basic auth username (alternative to IAM) |
| password | N | - | Basic auth password (alternative to IAM) |
* = Applicable when use_iam is true: region is required, while key and secret are optional. If key and secret are not provided, the target uses the default AWS credential chain (environment variables, IAM role, etc.).
Scheduler
| Field | Required | Default | Description | 
|---|---|---|---|
| interval | N | realtime | Execution frequency. See Interval for details |
| cron | N | - | Cron expression for scheduled execution. See Cron for details |
Debug Options
| Field | Required | Default | Description | 
|---|---|---|---|
| debug.status | N | false | Enable debug logging |
| debug.dont_send_logs | N | false | Process logs but don't send to target (testing) |
Details
The target supports AWS IAM authentication using Signature V4, multiple endpoints, compression, and ingest pipelines. Data is batched for efficient delivery and can be automatically routed to different indices.
URLs are automatically appended with /_bulk if the suffix is not present. Events are batched until either the batch size or payload size limit is reached.
For load balancing, events are sent to randomly selected endpoints. If an endpoint fails, the next endpoint in the randomized list is tried until successful delivery or all endpoints fail.
Each event is automatically enriched with a timestamp in RFC3339 format based on the log's epoch time. You can route events to different indices by setting the index field in a pipeline processor.
Long timeout values may lead to connection pooling issues and increased resource consumption.
Setting max_payload_size_kb too high might cause memory pressure and can exceed OpenSearch's http.max_content_length setting (default 100MB).
AWS IAM Authentication
Amazon OpenSearch Service supports two authentication methods:
IAM Authentication (recommended)
- Uses AWS Signature V4 to sign HTTP requests
- No passwords stored in configuration
- Integrates with AWS IAM policies and roles
- Supports temporary credentials from STS
- Default authentication method for this target

Basic Authentication
- Traditional username and password
- Requires fine-grained access control enabled on the domain
- Less secure than IAM authentication
When use_iam is true, the target signs all HTTP requests with AWS Signature V4. The signing process uses the service name es as required by Amazon OpenSearch Service.
AWS Credential Chain
If key and secret are not provided in the endpoint configuration, the target uses the AWS SDK default credential chain:
- Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
- Shared credentials file (~/.aws/credentials)
- IAM role for EC2 instances
- IAM role for ECS tasks
- IAM role for Lambda functions
This allows seamless integration when running on AWS infrastructure without hardcoding credentials.
Load Balancing and Failover
When multiple endpoints are configured, the target uses randomized load balancing. For each batch:
- Endpoints are randomly shuffled
- The batch is sent to the first endpoint
- If it fails, the next endpoint in the shuffled list is tried
- This continues until successful delivery or all endpoints fail
If only some endpoints fail but delivery eventually succeeds, the batch is cleared and a partial error is logged. If all endpoints fail, the batch is retained for retry and a complete failure error is returned.
JSON Message Handling
The target intelligently handles messages that are already in JSON format:
- If a message contains an @timestamp field or is ECS-normalized, it is treated as a structured JSON document
- The JSON is parsed and sent as-is to OpenSearch
- If parsing fails, the message is sent as plain text with an auto-generated timestamp
This allows you to send both structured and unstructured logs through the same target.
Dynamic Index Routing
Route events to different indices using pipeline processors by setting the index field:
pipelines:
  - name: route_by_type
    processors:
      - set:
          field: index
          value: "error-logs"
          if: "level == 'error'"
      - set:
          field: index
          value: "metrics"
          if: "type == 'metric'"
This allows flexible routing without creating multiple target configurations.
Bulk API Error Handling
The target parses the bulk API response to detect individual document errors:
- Uses filter_path to reduce response size and focus on error details
- Extracts error type, reason, and HTTP status for failed documents
- Returns detailed error messages indicating which documents failed and why
Common errors include:
- Document version conflicts (for the create action)
- Mapping errors (field type mismatches)
- Index not found or closed
- Pipeline failures (when using ingest pipelines)
Write Actions
The write_action field determines how documents are indexed:
- create (default): Only index if the document doesn't exist. Fails on duplicates.
- index: Index or replace the existing document. Always succeeds unless there's a system error.
- update: Update an existing document. Fails if the document doesn't exist.
- delete: Remove the document. Use carefully.
Response Filtering
The filter_path parameter filters the bulk API response to reduce network overhead:
- errors: Boolean indicating if any operations failed
- items.*.error: Error details for failed operations
- items.*._index: Index name for each operation
- items.*.status: HTTP status code for each operation
For high-volume scenarios, this filtering significantly reduces response size and parsing overhead.
Field Normalization
The field_format property allows normalizing log data to standard formats:
- ecs - Elastic Common Schema
Field normalization is applied before the logs are sent to OpenSearch, ensuring consistent indexing and search capabilities. ECS normalization maps common fields to OpenSearch's standard schema for improved compatibility with OpenSearch Dashboards and detection rules.
Compression
Compression is enabled by default and uses gzip to reduce network bandwidth. This adds minimal CPU overhead but can significantly improve throughput for high-volume scenarios. Disable compression only if you have bandwidth to spare and want to reduce CPU usage.
Examples
Basic with IAM
Simple OpenSearch output with IAM authentication.
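A minimal sketch with explicit access keys; the domain URL, region, and credential values are placeholders to replace with your own:

- name: opensearch_iam
  type: amazonopensearch
  properties:
    index: "logs"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"
        key: "<aws-access-key-id>"
        secret: "<aws-secret-access-key>"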
IAM Role
Using IAM role credentials (no explicit keys needed).
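A sketch relying on the default AWS credential chain; with no key or secret configured, the attached role supplies credentials (domain URL and region are placeholders):

- name: opensearch_iam_role
  type: amazonopensearch
  properties:
    index: "logs"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"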
When using IAM role authentication on EC2, ECS, or Lambda, credentials are automatically retrieved from the instance metadata service or task role.
Temporary Credentials
Using temporary STS credentials with a session token.
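A sketch with temporary credentials; the key, secret, and session values are placeholders for credentials issued by STS:

- name: opensearch_sts
  type: amazonopensearch
  properties:
    index: "logs"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"
        key: "<temporary-access-key-id>"
        secret: "<temporary-secret-access-key>"
        session: "<session-token>"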
Basic Authentication
Using basic authentication instead of IAM.
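A sketch with use_iam disabled; the username and password are placeholders, and fine-grained access control must be enabled on the domain:

- name: opensearch_basic_auth
  type: amazonopensearch
  properties:
    index: "logs"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: false
        username: "<username>"
        password: "<password>"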
Ingest Pipeline
Send data through an ingest pipeline for server-side processing.
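A sketch routing documents through a server-side ingest pipeline; the pipeline name logs-enrichment is hypothetical and must already exist on the domain:

- name: opensearch_ingest
  type: amazonopensearch
  properties:
    index: "logs"
    pipeline: "logs-enrichment"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"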
High-Volume
Optimized for high-volume data ingestion with load balancing.
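A sketch tuned for throughput across two domains; the batch and payload values are illustrative starting points, and the domain URLs are placeholders:

- name: opensearch_high_volume
  type: amazonopensearch
  properties:
    index: "logs"
    batch_size: 20000
    max_payload_size_kb: 8192
    use_compression: true
    endpoints:
      - endpoint: "https://search-domain-a.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"
      - endpoint: "https://search-domain-b.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"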
Field Normalization
Using ECS field normalization for enhanced compatibility.
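A sketch with ECS normalization enabled; the index name and endpoint values are placeholders:

- name: opensearch_ecs
  type: amazonopensearch
  properties:
    index: "logs-ecs"
    field_format: "ecs"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"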
Index Action
Using the index action to allow document updates and overwrites.
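A sketch using the index write action so existing documents are replaced rather than rejected; placeholders as above:

- name: opensearch_index_action
  type: amazonopensearch
  properties:
    index: "logs"
    write_action: "index"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"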
Minimal Response
Optimize for minimal response size by filtering to only errors.
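A sketch narrowing filter_path to just the error flag; placeholders as above:

- name: opensearch_minimal
  type: amazonopensearch
  properties:
    index: "logs"
    filter_path: "errors"
    endpoints:
      - endpoint: "https://search-mydomain.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"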
Cross-Region
Configuration with OpenSearch domains in different regions.
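A sketch with one domain per region, each endpoint signed for its own region; both URLs are placeholders:

- name: opensearch_cross_region
  type: amazonopensearch
  properties:
    index: "logs"
    endpoints:
      - endpoint: "https://search-us.us-east-1.es.amazonaws.com"
        use_iam: true
        region: "us-east-1"
      - endpoint: "https://search-eu.eu-west-1.es.amazonaws.com"
        use_iam: true
        region: "eu-west-1"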
Performance Tuning
Batch Size vs Payload Size
Events are batched until either limit is reached:
- batch_size: Number of events per batch
- max_payload_size_kb: Total size in kilobytes
Tune these based on your average event size:
- Small events (<1KB): Increase batch_size, keep the default max_payload_size_kb
- Large events (>10KB): Keep the default batch_size, increase max_payload_size_kb
- Mixed sizes: Monitor both limits and adjust based on actual batch sizes
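For example, a sketch of properties for predominantly small events (the values are illustrative starting points, not benchmarks):

properties:
  batch_size: 20000            # more small events per batch
  max_payload_size_kb: 4096    # keep the default payload cap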
 
Timeout
Setting appropriate timeouts helps balance reliability and performance:
- Short timeouts (10-30s): Fail fast, better for real-time scenarios
- Long timeouts (60s+): More tolerant of network issues, but may cause connection pooling problems
Compression
Enable compression (default) for high-volume scenarios to reduce network bandwidth. Disable only if CPU is constrained and network bandwidth is abundant.
Filter Path
The default filter_path provides detailed error information while minimizing response size. For even better performance in high-volume scenarios with low error rates, use filter_path: "errors" to only return the error flag.
Troubleshooting
IAM Authentication Errors
If you encounter IAM authentication errors:
- Verify AWS credentials are valid and not expired
- Check that the IAM policy allows the es:ESHttpPost and es:ESHttpPut actions
- Ensure the region matches your OpenSearch domain region
- For temporary credentials, verify the session token is included
Common IAM error messages:
- Access denied: IAM policy doesn't allow required actions
- Invalid signature: Check system clock is synchronized
- Credential expired: Refresh temporary credentials
Bulk API Errors
Check logs for detailed error messages including:
- Document index and position in batch
- Error type and reason
- HTTP status code
Common issues:
- Version conflicts: Switch to the index action or handle conflicts in your application
- Mapping errors: Ensure field types match the index mapping
- Pipeline errors: Verify ingest pipeline configuration
Payload Size Exceeded
If you see "bulk request size exceeds limit" errors:
- Reduce batch_size
- Reduce max_payload_size_kb
- Check OpenSearch's http.max_content_length setting
Partial Endpoint Failures
If some endpoints fail but delivery succeeds, check logs for partial failure errors indicating which endpoints are problematic. Verify network connectivity and OpenSearch domain health.
All Endpoints Failed
If all endpoints fail:
- Verify network connectivity to OpenSearch domains
- Check OpenSearch cluster health in AWS Console
- Ensure security groups allow traffic from your source
- Review OpenSearch domain logs for errors
- Verify IAM permissions for the credentials being used