Azure Blob Storage

Microsoft Azure Long Term Storage

Synopsis

Creates a target that writes log messages to Azure Blob Storage with support for various file formats, authentication methods, and retry mechanisms. Inherits file format capabilities from the base target.

Schema

- id: <numeric>
  name: <string>
  description: <string>
  type: azblob
  pipelines: <pipeline[]>
  status: <boolean>
  properties:
    account: <string>
    tenant_id: <string>
    client_id: <string>
    client_secret: <string>
    container: <string>
    name: <string>
    type: <string>
    compression: <string>
    schema: <string>
    format: <string>
    no_buffer: <boolean>
    max_retry: <numeric>
    retry_interval: <numeric>
    timeout: <numeric>
    max_size: <numeric>

Configuration

The following are the minimum requirements to define the target.

| Field | Required | Default | Description |
|---|---|---|---|
| `id` | Y | | Unique identifier |
| `name` | Y | | Target name |
| `description` | N | - | Optional description |
| `type` | Y | | Must be `azblob` |
| `pipelines` | N | - | Optional post-processor pipelines |
| `status` | N | `true` | Enable/disable the target |

Azure

| Field | Required | Default | Description |
|---|---|---|---|
| `account` | Y | | Azure storage account name |
| `tenant_id` | Y | | Azure tenant ID |
| `client_id` | Y | | Azure client ID |
| `client_secret` | Y | | Azure client secret |
| `container` | N | `"vmetric"` | Blob container name |

Connection

| Field | Required | Default | Description |
|---|---|---|---|
| `max_retry` | N | `5` | Maximum number of upload retries |
| `retry_interval` | N | `10` | Base interval between retries, in seconds |
| `timeout` | N | `30` | Connection timeout, in seconds |
| `max_size` | N | `0` | Maximum file size, in bytes |
note

When max_size is reached, the current file is uploaded to blob storage and a new file is created. For unlimited file size, set the field to 0.
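The rotation rule above can be sketched as a simple size check. This is only an illustration of the documented behavior, not the target's actual implementation; the function name is hypothetical.

```python
# Minimal sketch of the max_size rotation rule described above
# (illustrative only; should_rotate is a hypothetical helper).
def should_rotate(current_size: int, max_size: int) -> bool:
    # A max_size of 0 means unlimited: never rotate on size.
    return max_size > 0 and current_size >= max_size

print(should_rotate(600_000_000, 536_870_912))  # True: over the 512 MB cap
print(should_rotate(600_000_000, 0))            # False: unlimited file size
```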

Files

| Field | Required | Default | Description |
|---|---|---|---|
| `name` | N | `"vmetric.{{.Timestamp}}.{{.Extension}}"` | Blob name template |
| `type` | N | `"json"` | File format (`json`, `avro`, `ocf`, `parquet`) |
| `compression` | N | `"zstd"` | Compression algorithm |
| `schema` | N | - | Data schema for Avro/OCF/Parquet formats |
| `no_buffer` | N | `false` | Disable write buffering |
| `format` | N | - | Field normalization format (`ecs`, `cim`, `asim`, `cef`, `leef`, or `csl`) |
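To show the kind of blob path a template produces, the sketch below expands the date and timestamp placeholders for a fixed moment in time. The product's actual template engine and variable semantics are assumptions here; this only mimics the partitioned-name pattern used in the Parquet example further down.

```python
from datetime import datetime, timezone

# Illustrative expansion of the blob-name placeholders
# ({{.Year}}, {{.Month}}, {{.Day}}, {{.Timestamp}}); not the
# target's real template engine.
now = datetime(2024, 3, 5, tzinfo=timezone.utc)
blob_name = "logs/year={y}/month={m:02d}/day={d:02d}/data_{ts}.parquet".format(
    y=now.year, m=now.month, d=now.day, ts=int(now.timestamp())
)
print(blob_name)  # logs/year=2024/month=03/day=05/data_1709596800.parquet
```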

Examples

The following upload configurations are available.

JSON

The minimum configuration for a JSON blob storage:

- id: 1
  name: basic_blob
  type: azblob
  properties:
    account: "mystorageaccount"
    tenant_id: "00000000-0000-0000-0000-000000000000"
    client_id: "00000000-0000-0000-0000-000000000000"
    client_secret: "your-client-secret"

Parquet

Configuration for daily partitioned Parquet files:

- id: 2
  name: parquet_blob
  type: azblob
  properties:
    account: "mystorageaccount"
    tenant_id: "00000000-0000-0000-0000-000000000000"
    client_id: "00000000-0000-0000-0000-000000000000"
    client_secret: "your-client-secret"
    container: "logs"
    type: "parquet"
    compression: "zstd"
    name: "logs/year={{.Year}}/month={{.Month}}/day={{.Day}}/data_{{.Timestamp}}.parquet"
    max_size: 536870912 # 512MB

High Reliability

Configuration with enhanced retry settings:

- id: 3
  name: reliable_blob
  type: azblob
  pipelines:
    - checkpoint
  properties:
    account: "mystorageaccount"
    tenant_id: "00000000-0000-0000-0000-000000000000"
    client_id: "00000000-0000-0000-0000-000000000000"
    client_secret: "your-client-secret"
    max_retry: 10
    retry_interval: 30
    timeout: 60
    no_buffer: true
warning

Failed uploads are retried automatically based on the max_retry and retry_interval settings. The wait before each retry follows an exponential backoff of retry_interval * 2 ^ attempt, so every retry waits twice as long as the previous one.
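The backoff schedule can be worked through numerically. The sketch below assumes attempts are counted from zero; that numbering is an assumption, not stated by the source.

```python
# Worked example of the exponential backoff described above:
# wait before attempt n (counting from zero) = retry_interval * 2 ** n.
def backoff_seconds(retry_interval: int, attempt: int) -> int:
    return retry_interval * 2 ** attempt

# With the default retry_interval of 10 seconds:
waits = [backoff_seconds(10, n) for n in range(4)]
print(waits)  # [10, 20, 40, 80]
```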

note

Files are only deleted after successful upload.