Handling Success
The processors in a pipeline run sequentially. When one of them completes successfully, Director provides options to control what happens next through the `on_success` field.
Notification
The simplest use of `on_success` is to log a message or set a field when a processor completes successfully.
```yaml
- rename:
    field: message
    target_field: event.message
    on_success:
      - set:
          field: status
          value: "Field renamed successfully"
```
Chaining
To run additional processors after a successful execution, list them in the `on_success` field.
```yaml
- grok:
    field: message
    patterns: ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}"]
    on_success:
      - date:
          field: timestamp
          target_field: "@timestamp"
          formats: ["ISO8601"]
      - remove:
          field: timestamp
```
Nesting
For complex processing flows, `on_success` handlers can be nested, so that each handler runs only when the processor directly above it succeeds.
```yaml
- grok:
    field: message
    patterns: ["%{IP:client.ip} %{WORD:http.method} %{URIPATHPARAM:http.url}"]
    on_success:
      - geoip:
          field: client.ip
          target_field: client.geo
          on_success:
            - set:
                field: enrichment_status
                value: "IP and Geo information extracted"
```
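Nesting scopes a handler to the processor that precedes it, which is not the same as flat chaining. As a hedged sketch (a hypothetical rearrangement of the snippet above, not an example from the Director documentation), listing both handlers flat under `grok` would set `enrichment_status` whenever `grok` succeeds, even if `geoip` fails:

```yaml
# Flat chaining variant: both handlers are tied to grok's success,
# so enrichment_status is set even when the geoip lookup fails.
- grok:
    field: message
    patterns: ["%{IP:client.ip} %{WORD:http.method} %{URIPATHPARAM:http.url}"]
    on_success:
      - geoip:
          field: client.ip
          target_field: client.geo
      - set:
          field: enrichment_status
          value: "IP and Geo information extracted"
```

Use nesting when a handler should depend on the success of the enrichment itself, and flat chaining when it should depend only on the original match.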
Pipeline Branching
The `on_success` handlers can trigger processors in other pipelines, enabling conditional processing flows.
```yaml
pipelines:
  parse_apache:
    processors:
      - grok:
          field: message
          patterns: ["%{COMBINEDAPACHELOG}"]
          on_success:
            - pipeline: enrich_apache
  enrich_apache:
    processors:
      - geoip:
          field: clientip
          target_field: geoip
```
Metadata
Success context information is available in metadata fields that can be accessed within an `on_success` block.
```yaml
- grok:
    field: message
    patterns: ["%{DATA:event.type}"]
    on_success:
      - set:
          field: processor_info
          value: "Successfully processed by {{ _pipeline.name }} pipeline"
```
Unlike error handling, success handlers do not interrupt the normal pipeline flow. After executing the `on_success` processors, Director continues with the next processor in the pipeline.
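As a minimal sketch of this behavior (field names and values here are illustrative, not part of the Director documentation), a processor that follows one with an `on_success` block still runs as part of the normal flow:

```yaml
processors:
  - rename:
      field: message
      target_field: event.message
      on_success:
        - set:
            field: status
            value: "renamed"
  # This processor runs next in the normal pipeline flow,
  # after the on_success handlers above have executed.
  - set:
      field: pipeline_stage
      value: "post-rename"
```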