Overview
Use the Observability Pipelines Worker to send your processed logs to different destinations.
Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process:
1. Navigate to Observability Pipelines.
2. Select a template.
3. Select and set up your source.
4. Select and set up your destinations.
5. Set up your processors.
6. Install the Observability Pipelines Worker.
The available Observability Pipelines destinations are:
- Amazon OpenSearch
- Datadog Log Management
- Elasticsearch
- Google Chronicle
- OpenSearch
- rsyslog or syslog-ng
- Splunk HTTP Event Collector (HEC)
- Sumo Logic Hosted Collector
Event batching
Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following parameters is met:
- Maximum number of events
- Maximum number of bytes
- Timeout (seconds)
For example, if a destination’s parameters are:
- Maximum number of events = 2
- Maximum number of bytes = 100,000
- Timeout (seconds) = 5
If the destination receives 1 event in the 5-second window, it flushes the batch at the 5-second timeout.
If the destination receives 3 events within 2 seconds, it immediately flushes a batch with 2 events and then flushes a second batch with the remaining event after 5 seconds. If the destination receives 1 event that is larger than 100,000 bytes, it flushes a batch with that single event.
Note: The Syslog destination does not batch events.
Amazon OpenSearch
Set up the Amazon OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
- Optionally, enter the name of the Amazon OpenSearch index.
- Select an authentication strategy, Basic or AWS. For AWS, enter the AWS region.
Set the environment variables
- Amazon OpenSearch authentication username: stored in the environment variable `DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME`.
- Amazon OpenSearch authentication password: stored in the environment variable `DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD`.
- Amazon OpenSearch endpoint URL: stored in the environment variable `DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL`.
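For example, when running the Worker directly on a host, these variables can be exported in the shell before starting the Worker. The username, password, and endpoint below are placeholders, not real values:

```shell
# Placeholder credentials for illustration only; substitute your own.
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME="op-user"
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD="op-password"
# Endpoint URL of your Amazon OpenSearch domain:
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL="https://search-example.us-east-1.es.amazonaws.com"
```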
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 10,000,000 | 1 |
Datadog Log Management
Set up the destination
There are no configuration steps for your Datadog destination.
Set the environment variables
No environment variables required.
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| 1,000 | 4,250,000 | 5 |
Elasticsearch
Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
The following fields are optional:
- Enter the name for the Elasticsearch index.
- Enter the Elasticsearch version.
Set the environment variables
- Elasticsearch authentication username: stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_USERNAME`.
- Elasticsearch authentication password: stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD`.
- Elasticsearch endpoint URL: stored in the environment variable `DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL`.
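As an illustrative sketch, the variables can be exported before starting the Worker; the credentials and endpoint below are placeholders:

```shell
# Placeholder values; substitute your Elasticsearch credentials and endpoint.
export DD_OP_DESTINATION_ELASTICSEARCH_USERNAME="es-user"
export DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD="es-password"
export DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL="https://elasticsearch.example.com:9200"
```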
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 10,000,000 | 1 |
Google Chronicle
Set up the Google Chronicle destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under `DD_OP_DATA_DIR/config`. See Getting API authentication credential for more information.
Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.
To set up the Worker’s Google Chronicle destination:
- Enter the customer ID for your Google Chronicle instance.
- Enter the path to the credentials JSON file you downloaded earlier.
- Select JSON or Raw encoding in the dropdown menu.
- Select the appropriate Log Type in the dropdown menu.
Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label `A10_LOAD_BALANCER`. See Google Cloud's Support log types with a default parser for a list of available log types and their respective ingestion labels.
Set the environment variables
- Google Chronicle endpoint URL: stored in the environment variable `DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL`.
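A minimal sketch of preparing the credentials directory and setting the endpoint variable. The data directory path and endpoint URL below are assumptions for illustration, not real values:

```shell
# Assumed data directory for illustration; use your actual DD_OP_DATA_DIR.
export DD_OP_DATA_DIR="/tmp/op-worker-data"
mkdir -p "$DD_OP_DATA_DIR/config"
# Copy the Google Developer Service Account Credential JSON you downloaded:
# cp ./chronicle-credential.json "$DD_OP_DATA_DIR/config/"
# Placeholder endpoint URL; substitute your Google Chronicle ingestion endpoint.
export DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL="https://chronicle-endpoint.example.com"
```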
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 1,000,000 | 15 |
OpenSearch
Set up the OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
Optionally, enter the name of the OpenSearch index.
Set the environment variables
- OpenSearch authentication username: stored in the environment variable `DD_OP_DESTINATION_OPENSEARCH_USERNAME`.
- OpenSearch authentication password: stored in the environment variable `DD_OP_DESTINATION_OPENSEARCH_PASSWORD`.
- OpenSearch endpoint URL: stored in the environment variable `DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL`.
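For example, exported in the shell before starting the Worker on a host; the values below are placeholders:

```shell
# Placeholder credentials and endpoint; substitute your own OpenSearch values.
export DD_OP_DESTINATION_OPENSEARCH_USERNAME="op-user"
export DD_OP_DESTINATION_OPENSEARCH_PASSWORD="op-password"
export DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL="https://opensearch.example.com:9200"
```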
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 10,000,000 | 1 |
rsyslog or syslog-ng
Set up the rsyslog or syslog-ng destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
The rsyslog and syslog-ng destinations support the RFC 5424 format.
The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:
| Log Event | Syslog Field | Default |
|---|---|---|
| log["message"] | MESSAGE | NIL |
| log["procid"] | PROCID | The running Worker's process ID. |
| log["appname"] | APP-NAME | observability_pipelines |
| log["facility"] | FACILITY | 8 (log_user) |
| log["msgid"] | MSGID | NIL |
| log["severity"] | SEVERITY | info |
| log["host"] | HOSTNAME | NIL |
| log["timestamp"] | TIMESTAMP | Current UTC time. |
The following destination settings are optional:
- Toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
  - Server Certificate Path: the path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
  - CA Certificate Path: the path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
  - Private Key Path: the path to the `.key` private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
- Enter the number of seconds to wait before sending TCP keepalive probes on an idle connection.
Set the environment variables
- The rsyslog or syslog-ng endpoint URL, for example `127.0.0.1:9997`. The Observability Pipelines Worker sends logs to this address and port.
  - Stored in the environment variable `DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL`.
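For example, exported in the shell before starting the Worker; the address and port below are illustrative:

```shell
# The Worker sends logs to this address and port; 127.0.0.1:9997 is an example.
export DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL="127.0.0.1:9997"
```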
How the destination works
Event batching
The Syslog destination does not batch events.
Splunk HTTP Event Collector (HEC)
Set up the Splunk HEC destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
The following fields are optional:
- Enter the name of the Splunk index you want your data in. It must be an allowed index for your HEC.
- Select whether the timestamp should be auto-extracted. If set to `true`, Splunk extracts the timestamp from the message using the expected format `yyyy-mm-dd hh:mm:ss`.
- Set the `sourcetype` to override Splunk's default value, which is `httpevent` for HEC data.
Set the environment variables
- Splunk HEC token:
  - The Splunk HEC token for the Splunk indexer.
  - Stored in the environment variable `DD_OP_DESTINATION_SPLUNK_HEC_TOKEN`.
- Base URL of the Splunk instance:
  - The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to, for example `https://hec.splunkcloud.com:8088`. Note: The `/services/collector/event` path is automatically appended to the endpoint.
  - Stored in the environment variable `DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL`.
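A hypothetical example of setting both variables before starting the Worker; the token below is a placeholder, not a real HEC token:

```shell
# Placeholder token; use the HEC token generated for your Splunk indexer.
export DD_OP_DESTINATION_SPLUNK_HEC_TOKEN="00000000-0000-0000-0000-000000000000"
# Base URL only; /services/collector/event is appended automatically.
export DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL="https://hec.splunkcloud.com:8088"
```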
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 1,000,000 | 1 |
Sumo Logic Hosted Collector
Set up the Sumo Logic destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Set up the destination
The following fields are optional:
- In the Encoding dropdown menu, select whether to encode your pipeline's output as `JSON`, `Logfmt`, or `Raw` text. If no encoding is selected, it defaults to JSON.
- Enter a source name to override the default `name` value configured for your Sumo Logic collector's source.
- Enter a host name to override the default `host` value configured for your Sumo Logic collector's source.
- Enter a category name to override the default `category` value configured for your Sumo Logic collector's source.
- Click Add Header to add any custom header fields and values.
Set the environment variables
- Unique URL generated for the HTTP Logs and Metrics Source to receive log data:
  - The Sumo Logic HTTP Source endpoint the Observability Pipelines Worker sends processed logs to, for example `https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`, where:
    - `<ENDPOINT>` is your Sumo collection endpoint.
    - `<UNIQUE_HTTP_COLLECTOR_CODE>` is the string that follows the last forward slash (`/`) in the upload URL for the HTTP source.
  - Stored in the environment variable `DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL`.
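For example, exported before starting the Worker; the URL below is a placeholder, not a real collector URL:

```shell
# Placeholder collector URL; copy the upload URL shown for your HTTP source.
export DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL="https://collectors.example.sumologic.com/receiver/v1/http/EXAMPLE_CODE"
```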
How the destination works
Event batching
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 10,000,000 | 1 |