Splunk HEC
Synopsis
Creates a Splunk HTTP Event Collector (HEC) target that sends events to one or more Splunk instances. Supports batching, compression, field normalization, and automatic load balancing across multiple endpoints.
Schema
- name: <string>
  description: <string>
  type: splunk
  pipelines: <pipeline[]>
  status: <boolean>
  properties:
    endpoints:
      - endpoint: <string>
        auth_type: <string>
        token: <string>
        secret: <string>
    index: <string>
    sourcetype: <string>
    source: <string>
    batch_size: <numeric>
    timeout: <numeric>
    tcp_routing: <boolean>
    use_compression: <boolean>
    insecure_skip_verify: <boolean>
    field_format: <string>
    interval: <string|numeric>
    cron: <string>
    debug:
      status: <boolean>
      dont_send_logs: <boolean>
Configuration
The following fields are used to define the target:
| Field | Required | Default | Description | 
|---|---|---|---|
| name | Y | - | Target name |
| description | N | - | Optional description |
| type | Y | - | Must be splunk |
| pipelines | N | - | Optional post-processor pipelines |
| status | N | true | Enable/disable the target |
Endpoint
| Field | Required | Default | Description | 
|---|---|---|---|
| endpoint | Y | - | Splunk HEC endpoint URL |
| auth_type | N | token | Authentication type: token or secret |
| token | N | - | HEC token when using token auth |
| secret | N | - | Bearer token when using secret auth |
Event
| Field | Required | Default | Description | 
|---|---|---|---|
| index | N | - | Default Splunk index |
| sourcetype | N | - | Default sourcetype for events |
| source | N | - | Default source for events |
| batch_size | N | 10000 | Number of events to batch before sending |
| timeout | N | 30 | Connection timeout in seconds |
Connection
| Field | Required | Default | Description | 
|---|---|---|---|
| tcp_routing | N | false | Enable TCP routing header |
| use_compression | N | true | Enable gzip compression |
| insecure_skip_verify | N | false | Skip TLS certificate verification |
| field_format | N | - | Data normalization format. See applicable Normalization section |
Scheduler
| Field | Required | Default | Description | 
|---|---|---|---|
| interval | N | realtime | Execution frequency. See Interval for details |
| cron | N | - | Cron expression for scheduled execution. See Cron for details |
Debug Options
| Field | Required | Default | Description | 
|---|---|---|---|
| debug.status | N | false | Enable debug logging |
| debug.dont_send_logs | N | false | Process logs but don't send to target (testing) |
Details
The Splunk HEC target sends log data to Splunk using the HTTP Event Collector (HEC) protocol. It supports multiple authentication methods, batching, compression, and automatic load balancing between endpoints.
Ensure your HEC tokens have the appropriate permissions and indexes enabled in Splunk. Invalid tokens or insufficient permissions will result in ingestion failures.
Events are automatically batched and compressed by default for optimal performance. When multiple endpoints are configured, the target randomly selects an endpoint for each batch to distribute load evenly across all available Splunk instances.
Setting insecure_skip_verify to true is not recommended for production environments.
Load Balancing and Failover
When multiple endpoints are configured, the target uses randomized load balancing. For each batch:
- Endpoints are randomly shuffled
- The batch is sent to the first endpoint
- If it fails, the next endpoint in the shuffled list is tried
- This continues until successful delivery or all endpoints fail
If only some endpoints fail but delivery eventually succeeds, the batch is cleared and a partial error is logged. If all endpoints fail, the batch is retained for retry and a complete failure error is returned.
Dynamic Routing
The target supports dynamic routing of events to different indexes, sourcetypes, and sources using pipeline processors:
- Set the source field in a pipeline to override the default source
- Set the schema field in a pipeline to override the default sourcetype
- Set the index field in a pipeline to override the default index
This allows sending different event types to appropriate indexes without creating multiple target configurations.
Example pipeline configuration:
pipelines:
  - name: route_by_severity
    processors:
      - set:
          field: source
          value: "production-app"
      - set:
          field: schema
          value: "app:error"
          if: "severity == 'error'"
      - set:
          field: index
          value: "critical-logs"
          if: "severity == 'critical'"
Compression
Compression is enabled by default and uses gzip to reduce network bandwidth. This adds minimal CPU overhead but can significantly improve throughput for high-volume scenarios. Disable compression only if you have bandwidth to spare and want to reduce CPU usage.
Field Normalization
Field normalization helps standardize log data before sending it to Splunk, ensuring consistent data formats that can be easily correlated:
- cim: Common Information Model
Normalization is applied before batching and sending to Splunk.
Examples
Basic
Send events to a single HEC endpoint:
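A minimal sketch based on the schema above; the target name, endpoint URL, token, and index are placeholder values:

- name: splunk_basic
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        token: "YOUR-HEC-TOKEN"
    index: "main"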
Load Balanced
Configure load balancing and failover across multiple endpoints:
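A sketch with several endpoints; the hostnames and tokens are placeholders. Each batch goes to a randomly selected endpoint, with the remaining endpoints tried in turn if delivery fails:

- name: splunk_load_balanced
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk-01.example.com:8088"
        token: "TOKEN-01"
      - endpoint: "https://splunk-02.example.com:8088"
        token: "TOKEN-02"
      - endpoint: "https://splunk-03.example.com:8088"
        token: "TOKEN-03"
    index: "main"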
High-Volume
Configure for high throughput with larger batches and extended timeout:
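A sketch that raises batch_size and timeout above their defaults (10000 and 30); the values shown are illustrative starting points rather than recommendations, and the endpoint and token are placeholders:

- name: splunk_high_volume
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        token: "YOUR-HEC-TOKEN"
    batch_size: 20000      # larger batches increase throughput and memory usage
    timeout: 60            # allow more time for larger payloads
    use_compression: true  # default, shown here for clarity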
With Field Normalization
Using CIM field normalization for better Splunk integration:
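A sketch that sets field_format to cim so events are normalized to the Common Information Model before batching; the endpoint, token, and index are placeholders:

- name: splunk_cim
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        token: "YOUR-HEC-TOKEN"
    field_format: cim
    index: "main"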
Secure
Using secret-based auth with TLS verification and custom source:
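A sketch using secret-based authentication with TLS verification left enabled (insecure_skip_verify defaults to false) and a custom source; the secret, source, and index values are placeholders:

- name: splunk_secure
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        auth_type: secret
        secret: "YOUR-BEARER-TOKEN"
    insecure_skip_verify: false
    source: "production-app"
    index: "secure_logs"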
No Compression
Disable compression to reduce CPU overhead when bandwidth is not a concern:
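A sketch that turns off the default gzip compression; the endpoint and token are placeholders:

- name: splunk_no_compression
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        token: "YOUR-HEC-TOKEN"
    use_compression: false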
Performance Tuning
Batch Size
- Small batches (1000-5000): Lower latency, more frequent network calls
- Medium batches (10000): Balanced approach, suitable for most use cases
- Large batches (20000+): Higher throughput, increased memory usage and latency
Timeout
Setting appropriate timeouts helps balance reliability and performance:
- Short timeouts (10-30s): Fail fast, better for real-time scenarios
- Long timeouts (60s+): More tolerant of network issues, but may cause connection pooling problems
Compression
Enable compression (default) for high-volume scenarios to reduce network bandwidth. Disable only if CPU is constrained and network bandwidth is abundant.
Troubleshooting
Authentication Failures
Ensure your HEC token has proper permissions and the target index exists in Splunk.
Partial Endpoint Failures
If some endpoints fail but delivery succeeds, check logs for partial failure errors indicating which endpoints are problematic.
All Endpoints Failed
If all endpoints fail, verify network connectivity, endpoint URLs, and Splunk HEC service status.
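When diagnosing delivery problems, the debug options from the schema can be used to trace processing without actually sending events. This is a sketch with placeholder endpoint and token values:

- name: splunk_debug
  type: splunk
  properties:
    endpoints:
      - endpoint: "https://splunk.example.com:8088"
        token: "YOUR-HEC-TOKEN"
    debug:
      status: true          # enable debug logging
      dont_send_logs: true  # process events but do not deliver them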