Overview
Use the Observability Pipelines Worker to send your processed logs to different destinations.
Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process:
- Navigate to Observability Pipelines.
- Select a template.
- Select and set up your source.
- Select and set up your destinations.
- Set up your processors.
- Install the Observability Pipelines Worker.
Select a destination for more information.
Event batching
Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following limits is reached:
- Maximum number of events
- Maximum number of bytes
- Timeout (seconds)
For example, if a destination’s parameters are:
- Maximum number of events = 2
- Maximum number of bytes = 100,000
- Timeout (seconds) = 5
If the destination receives 1 event within the 5-second window, it flushes the batch when the timeout elapses.
If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events and then flushes a second batch with the remaining event after 5 seconds. If the destination receives 1 event that is more than 100,000 bytes, it flushes a batch with that 1 event.
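To make the flush rules concrete, here is a minimal Python sketch of the batching logic. This is not Observability Pipelines Worker code; the `BatchBuffer` class, its field names, and its methods are hypothetical and only mirror the three limits (maximum events, maximum bytes, timeout) from the example above.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BatchBuffer:
    """Illustrative buffer that flushes on an event count, byte size, or timeout limit."""

    max_events: Optional[int] = 2   # None means no event-count limit
    max_bytes: int = 100_000
    timeout_secs: float = 5.0
    _events: List[bytes] = field(default_factory=list)
    _size: int = 0
    _started: Optional[float] = None

    def add(self, event: bytes) -> Optional[List[bytes]]:
        """Buffer an event; return a flushed batch if the event or byte limit is reached."""
        if self._started is None:
            self._started = time.monotonic()
        self._events.append(event)
        self._size += len(event)
        hit_events = self.max_events is not None and len(self._events) >= self.max_events
        hit_bytes = self._size >= self.max_bytes
        if hit_events or hit_bytes:
            return self._flush()
        return None

    def poll(self) -> Optional[List[bytes]]:
        """Call periodically; return the buffered batch once the timeout expires."""
        if self._started is not None and time.monotonic() - self._started >= self.timeout_secs:
            return self._flush()
        return None

    def _flush(self) -> List[bytes]:
        batch = self._events
        self._events, self._size, self._started = [], 0, None
        return batch
```

With these parameters, a single small event is only flushed by `poll()` after 5 seconds, a third event arriving within 2 seconds triggers a 2-event flush from `add()`, and one event larger than 100,000 bytes is flushed on its own.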
| Destination | Maximum Events | Maximum Bytes | Timeout (seconds) |
|---|---|---|---|
| Amazon OpenSearch | None | 10,000,000 | 1 |
| Amazon S3 (Datadog Log Archives) | None | 100,000,000 | 900 |
| Azure Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
| Datadog Logs | 1,000 | 4,250,000 | 5 |
| Elasticsearch | None | 10,000,000 | 1 |
| Google Chronicle | None | 1,000,000 | 15 |
| Google Cloud Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
| OpenSearch | None | 10,000,000 | 1 |
| Splunk HTTP Event Collector (HEC) | None | 1,000,000 | 1 |
| Sumo Logic Hosted Collector | None | 10,000,000 | 1 |
Note: The rsyslog and syslog-ng destinations do not batch events.