Use the Observability Pipelines Worker to send your processed logs to different destinations.
Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process.
The available Observability Pipelines destinations are:

- Amazon OpenSearch
- Datadog
- Elasticsearch
- Google Chronicle
- OpenSearch
- rsyslog or syslog-ng
- Splunk HTTP Event Collector (HEC)
- Sumo Logic
Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following parameters is met:
For example, if a destination's parameters are:

Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
2 | 100,000 | 5 |

And the destination receives 1 event in a 5-second window, it flushes the batch at the 5-second timeout.

If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events and then flushes a second batch with the remaining event after 5 seconds. If the destination receives 1 event that is more than 100,000 bytes, it flushes a batch with that 1 event.
Note: The Syslog destination does not batch events.
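A minimal sketch of this flush logic, using the example parameters above; this is an illustration of the batching behavior described in this section, not the Worker's actual implementation:

```python
class Batcher:
    """Illustrative event batcher: flushes when Max Events or Max Bytes
    is reached, or when the oldest buffered event exceeds the timeout."""

    def __init__(self, max_events, max_bytes, timeout_secs):
        self.max_events = max_events      # None means no event-count limit
        self.max_bytes = max_bytes
        self.timeout_secs = timeout_secs
        self.buffer = []
        self.buffer_bytes = 0
        self.first_event_at = None
        self.flushed = []                 # completed batches, oldest first

    def add(self, event: bytes, now: float):
        if not self.buffer:
            self.first_event_at = now
        self.buffer.append(event)
        self.buffer_bytes += len(event)
        events_full = (self.max_events is not None
                       and len(self.buffer) >= self.max_events)
        if events_full or self.buffer_bytes >= self.max_bytes:
            self._flush()

    def tick(self, now: float):
        """Call periodically: flushes a pending batch once the timeout elapses."""
        if self.buffer and now - self.first_event_at >= self.timeout_secs:
            self._flush()

    def _flush(self):
        self.flushed.append(self.buffer)
        self.buffer = []
        self.buffer_bytes = 0
        self.first_event_at = None
```

Feeding this sketch 3 small events within 2 seconds reproduces the example: one batch of 2 events flushes immediately on the Max Events limit, and the remaining event flushes later on the timeout.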
Set up the Amazon OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
- DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME
- DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD
- DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 10,000,000 | 1 |
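As a sanity check before starting the Worker, you can confirm that these variables are present in its environment. A minimal sketch; the variable names are the ones listed above, while the helper itself is illustrative:

```python
import os

# Environment variables for the Amazon OpenSearch destination (from above).
REQUIRED = [
    "DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME",
    "DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD",
    "DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL",
]

def missing_vars(environ=os.environ):
    """Return the names of any destination variables that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]
```

The same pattern applies to the other destinations below; only the variable names change.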
There are no configuration steps for your Datadog destination.
No environment variables required.
A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
1,000 | 4,250,000 | 5 |
Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
The following fields are optional:
- DD_OP_DESTINATION_ELASTICSEARCH_USERNAME
- DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD
- DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 10,000,000 | 1 |
Set up the Google Chronicle destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.
Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.
To set up the Worker’s Google Chronicle destination:
Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label A10_LOAD_BALANCER. See Google Cloud's Support log types with a default parser for a list of available log types and their respective ingestion labels.
- DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 1,000,000 | 15 |
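Since a missing or malformed credential file is an easy mistake to make, a quick preflight check can save a failed Worker start. A minimal sketch; the filename shown is an example, not a required name:

```python
import json
import os

def check_chronicle_credential(data_dir, filename="chronicle-credential.json"):
    """Confirm the service account credential exists under <data_dir>/config
    and parses as JSON. The filename is hypothetical; use whatever name
    your credential file actually has."""
    path = os.path.join(data_dir, "config", filename)
    if not os.path.isfile(path):
        return False
    with open(path) as f:
        json.load(f)  # raises if the file is not valid JSON
    return True
```

Pass the value of DD_OP_DATA_DIR as `data_dir` to check the location the Worker reads from.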
Set up the OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
Optionally, enter the name of the OpenSearch index.
- DD_OP_DESTINATION_OPENSEARCH_USERNAME
- DD_OP_DESTINATION_OPENSEARCH_PASSWORD
- DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 10,000,000 | 1 |
Set up the rsyslog or syslog-ng destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:
Log Event | Syslog Field | Default |
---|---|---|
log["message"] | MESSAGE | NIL |
log["procid"] | PROCID | The running Worker's process ID. |
log["appname"] | APP-NAME | observability_pipelines |
log["facility"] | FACILITY | 8 (log_user) |
log["msgid"] | MSGID | NIL |
log["severity"] | SEVERITY | info |
log["host"] | HOSTNAME | NIL |
log["timestamp"] | TIMESTAMP | Current UTC time. |
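The mapping above can be sketched as a small function that applies the table's defaults when a log field is absent; this is an illustration of the documented mapping, not the Worker's code:

```python
import os
from datetime import datetime, timezone

def syslog_fields(log: dict) -> dict:
    """Map a log event to syslog fields, applying the defaults
    from the table above when a field is absent."""
    return {
        "MESSAGE":  log.get("message"),                       # NIL when absent
        "PROCID":   log.get("procid", os.getpid()),           # default: Worker's PID
        "APP-NAME": log.get("appname", "observability_pipelines"),
        "FACILITY": log.get("facility", 8),                   # 8 = log_user
        "MSGID":    log.get("msgid"),                         # NIL when absent
        "SEVERITY": log.get("severity", "info"),
        "HOSTNAME": log.get("host"),                          # NIL when absent
        "TIMESTAMP": log.get("timestamp",
                             datetime.now(timezone.utc).isoformat()),
    }
```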
The following destination settings are optional:

- Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
- CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
- Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.

Store the rsyslog or syslog-ng endpoint, such as 127.0.0.1:9997, in the environment variable:

- DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL

The Syslog destination does not batch events.
Set up the Splunk HEC destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
The following fields are optional:

- If set to true, Splunk extracts the timestamp from the message with the expected format of yyyy-mm-dd hh:mm:ss.
- A sourcetype to override Splunk's default value, which is httpevent for HEC data.

Store the Splunk HEC token in the environment variable:

- DD_OP_DESTINATION_SPLUNK_HEC_TOKEN

Store the base URL of the Splunk instance, such as https://hec.splunkcloud.com:8088, in the environment variable:

- DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL

The /services/collector/event path is automatically appended to the endpoint.

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 1,000,000 | 1 |
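To see how the endpoint and token fit together, here is a sketch that builds (but does not send) a standard Splunk HEC request, appending the /services/collector/event path the same way the destination does. The URL and token values are placeholders, and this shows the general HEC request shape rather than the Worker's internals:

```python
import json
import urllib.request

def build_hec_request(base_url: str, token: str, event: dict) -> urllib.request.Request:
    """Build a Splunk HEC event request: POST to
    <base_url>/services/collector/event with a Splunk token header."""
    return urllib.request.Request(
        base_url.rstrip("/") + "/services/collector/event",
        data=json.dumps({"event": event}).encode(),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` against your HEC endpoint is a quick way to verify the token and URL independently of the Worker.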
Set up the Sumo Logic destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.
The following fields are optional:

- JSON, Logfmt, or Raw text encoding for your pipeline's output. If no decoding is selected, the decoding defaults to JSON.
- The name value configured for your Sumo Logic collector's source.
- The host value configured for your Sumo Logic collector's source.
- The category value configured for your Sumo Logic collector's source.

The HTTP collector URL has the format https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:

- <ENDPOINT> is your Sumo collection endpoint.
- <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.

Store the collector URL in the environment variable:

- DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL

A batch of events is flushed when one of these parameters is met. See event batching for more information.
Max Events | Max Bytes | Timeout (seconds) |
---|---|---|
None | 10,000,000 | 1 |
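The two parts of the collector URL can be worked with as plain strings; a small sketch with placeholder values (the endpoint and code shown are examples, not real credentials):

```python
def sumo_collector_url(endpoint: str, code: str) -> str:
    """Assemble the HTTP collector URL from its two parts,
    following the format described above."""
    return (f"https://{endpoint}.collection.sumologic.com"
            f"/receiver/v1/http/{code}")

def collector_code(upload_url: str) -> str:
    """Return the string after the last forward slash (/) of the
    HTTP source's upload URL -- the <UNIQUE_HTTP_COLLECTOR_CODE>."""
    return upload_url.rstrip("/").rsplit("/", 1)[-1]
```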