Overview

Use the Observability Pipelines Worker to send your processed logs to different destinations.

Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process:

  1. Navigate to Observability Pipelines.
  2. Select a template.
  3. Select and set up your source.
  4. Select and set up your destinations.
  5. Set up your processors.
  6. Install the Observability Pipelines Worker.

Event batching

Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when any one of the following limits is reached:

  • Maximum number of events
  • Maximum number of bytes
  • Timeout (seconds)

For example, if a destination’s parameters are:

  • Maximum number of events = 2
  • Maximum number of bytes = 100,000
  • Timeout (seconds) = 5

If the destination receives 1 event in a 5-second window, it flushes the batch at the 5-second timeout.

If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events and then flushes a second batch with the remaining event after 5 seconds. If the destination receives 1 event that is more than 100,000 bytes, it flushes this batch with the 1 event.
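
The following Python sketch illustrates these flush rules using the example parameter values above (maximum 2 events, 100,000 bytes, 5-second timeout). It is an illustration only; the `Batcher` class and its method names are hypothetical and do not reflect the Worker's actual implementation.

```python
import time

# Example parameter values from the scenario above (hypothetical destination).
MAX_EVENTS = 2          # maximum number of events per batch
MAX_BYTES = 100_000     # maximum number of bytes per batch
TIMEOUT_SECONDS = 5     # flush a non-empty batch after this many seconds


class Batcher:
    """Sketch of batching behavior: flush on max events, max bytes, or timeout."""

    def __init__(self):
        self.events = []
        self.batch_bytes = 0
        self.batch_started_at = None

    def add(self, event: bytes):
        if self.batch_started_at is None:
            self.batch_started_at = time.monotonic()
        self.events.append(event)
        self.batch_bytes += len(event)
        # Flush as soon as either size limit is reached.
        if len(self.events) >= MAX_EVENTS or self.batch_bytes >= MAX_BYTES:
            self.flush()

    def tick(self):
        # Called periodically; flushes a non-empty batch once the timeout elapses.
        if (
            self.batch_started_at is not None
            and time.monotonic() - self.batch_started_at >= TIMEOUT_SECONDS
        ):
            self.flush()

    def flush(self):
        if self.events:
            print(f"flushing {len(self.events)} event(s), {self.batch_bytes} bytes")
        self.events = []
        self.batch_bytes = 0
        self.batch_started_at = None
```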

The batching parameters for each destination are listed in the following table:

| Destination | Maximum Events | Maximum Bytes | Timeout (seconds) |
|---|---|---|---|
| Amazon OpenSearch | None | 10,000,000 | 1 |
| Amazon S3 (Datadog Log Archives) | None | 100,000,000 | 900 |
| Azure Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
| Datadog Logs | 1,000 | 4,250,000 | 5 |
| Elasticsearch | None | 10,000,000 | 1 |
| Google Chronicle | None | 1,000,000 | 15 |
| Google Cloud Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
| OpenSearch | None | 10,000,000 | 1 |
| Splunk HTTP Event Collector (HEC) | None | 1,000,000 | 1 |
| Sumo Logic Hosted Collector | None | 10,000,000 | 1 |

Note: The rsyslog and syslog-ng destinations do not batch events.
