Overview

Use the Observability Pipelines Worker to send your processed logs to different destinations.

Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process:

  1. Navigate to Observability Pipelines.
  2. Select a template.
  3. Select and set up your source.
  4. Select and set up your destinations.
  5. Set up your processors.
  6. Install the Observability Pipelines Worker.

Event batching

Observability Pipelines destinations send events in batches to the downstream integration. A batch of events is flushed when one of the following parameters is met:

  • Maximum number of events
  • Maximum number of bytes
  • Timeout (seconds)

For example, if a destination’s parameters are:

  • Maximum number of events = 2
  • Maximum number of bytes = 100,000
  • Timeout (seconds) = 5

If the destination receives only 1 event within the 5-second window, it flushes the batch at the 5-second timeout.

If the destination receives 3 events within 2 seconds, it flushes a batch with 2 events and then flushes a second batch with the remaining event at the 5-second timeout. If the destination receives 1 event that is more than 100,000 bytes, it immediately flushes a batch containing that 1 event.
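
The flush logic can be sketched in a few lines. The following is a minimal illustration of the three flush conditions using the example parameters above; it is not the Worker's actual implementation, and the Batcher class and its names are hypothetical.

```python
import time

class Batcher:
    """Illustrative only: flush as soon as any one of the three limits is met."""

    def __init__(self, max_events=2, max_bytes=100_000, timeout_secs=5):
        self.max_events = max_events
        self.max_bytes = max_bytes
        self.timeout_secs = timeout_secs
        self.events, self.bytes, self.started = [], 0, None

    def add(self, event: bytes):
        if not self.events:
            self.started = time.monotonic()  # the timeout window opens with the first event
        self.events.append(event)
        self.bytes += len(event)
        # Flush immediately when the event or byte limit is reached.
        if len(self.events) >= self.max_events or self.bytes >= self.max_bytes:
            self.flush()

    def tick(self):
        # Call periodically: flush a non-empty batch once the timeout elapses.
        if self.events and time.monotonic() - self.started >= self.timeout_secs:
            self.flush()

    def flush(self):
        print(f"flushing {len(self.events)} event(s), {self.bytes} byte(s)")
        self.events, self.bytes, self.started = [], 0, None
```

Feeding three small events into add() reproduces the two-batch behavior described above: the first two flush immediately, and the third flushes when tick() observes the 5-second timeout.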

Note: The Syslog destination does not batch events.

Amazon OpenSearch

Set up the Amazon OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

  1. Optionally, enter the name of the Amazon OpenSearch index.
  2. Select an authentication strategy: Basic or AWS. If you select AWS, enter the AWS region.

Set the environment variables

  • Amazon OpenSearch authentication username:
    • Stored in the environment variable DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME.
  • Amazon OpenSearch authentication password:
    • Stored in the environment variable DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD.
  • Amazon OpenSearch endpoint URL:
    • Stored in the environment variable DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL.
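
If you use the Basic authentication strategy, you can sanity-check the endpoint and credentials before exporting them for the Worker. The following is a minimal sketch, not part of the product, assuming the environment variables above are already set; the AWS strategy signs requests with SigV4 instead and is not covered here.

```python
import os

import requests  # third-party: pip install requests

url = os.environ["DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL"]
auth = (
    os.environ["DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME"],
    os.environ["DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD"],
)

# GET / on an OpenSearch cluster returns basic cluster metadata.
resp = requests.get(url, auth=auth, timeout=10)
resp.raise_for_status()
print(resp.json().get("version", {}))
```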

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 10,000,000 | 1

Datadog Log Management

Set up the destination

There are no configuration steps for your Datadog destination.

Set the environment variables

No environment variables are required.

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
1,000 | 4,250,000 | 5

Elasticsearch

Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. Enter the name for the Elasticsearch index.
  2. Enter the Elasticsearch version.

Set the environment variables

  • Elasticsearch authentication username:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
  • Elasticsearch authentication password:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
  • Elasticsearch endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
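
Because the Elasticsearch version field above is optional but version-specific behavior can matter, you can look the version up directly. The following is a minimal sketch, not part of the product, assuming the environment variables above are already set.

```python
import os

import requests  # third-party: pip install requests

url = os.environ["DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL"]
auth = (
    os.environ["DD_OP_DESTINATION_ELASTICSEARCH_USERNAME"],
    os.environ["DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD"],
)

# GET / on an Elasticsearch cluster returns its version metadata.
resp = requests.get(url, auth=auth, timeout=10)
resp.raise_for_status()
print(resp.json()["version"]["number"])  # for example, "8.12.0"
```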

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 10,000,000 | 1

Google Chronicle

Set up the Google Chronicle destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

To authenticate the Observability Pipelines Worker for Google Chronicle, contact your Google Security Operations representative for a Google Developer Service Account Credential. This credential is a JSON file and must be placed under DD_OP_DATA_DIR/config. See Getting API authentication credential for more information.

Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.

To set up the Worker’s Google Chronicle destination:

  1. Enter the customer ID for your Google Chronicle instance.
  2. Enter the path to the credentials JSON file you downloaded earlier.
  3. Select JSON or Raw encoding in the dropdown menu.
  4. Select the appropriate Log Type in the dropdown menu.

Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label A10_LOAD_BALANCER. See Google Cloud’s Supported log types with a default parser for a list of available log types and their respective ingestion labels.
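
Before starting the Worker, you can confirm that the credential file is readable and looks like a standard Google service account key. The following is a minimal sketch, not part of the product; the chronicle-credentials.json file name is hypothetical, and the "service_account" check assumes a standard Google service account key file.

```python
import json
import os
import pathlib

# Hypothetical file name; use whatever name you gave the downloaded credential.
cred_path = pathlib.Path(os.environ["DD_OP_DATA_DIR"]) / "config" / "chronicle-credentials.json"

with open(cred_path) as f:
    creds = json.load(f)

# Standard Google service account key files carry these fields.
assert creds.get("type") == "service_account", "unexpected credential type"
print("credential OK for:", creds.get("client_email"))
```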

Set the environment variables

  • Google Chronicle endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL.

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 1,000,000 | 15

OpenSearch

Set up the OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

Optionally, enter the name of the OpenSearch index.

Set the environment variables

  • OpenSearch authentication username:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_USERNAME.
  • OpenSearch authentication password:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_PASSWORD.
  • OpenSearch endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 10,000,000 | 1

rsyslog or syslog-ng

Set up the rsyslog or syslog-ng destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The rsyslog and syslog-ng destinations support the RFC5424 format.

The rsyslog and syslog-ng destinations match these log fields to the following Syslog fields:

Log Event | SYSLOG FIELD | Default
log["message"] | MESSAGE | NIL
log["procid"] | PROCID | The running Worker’s process ID.
log["appname"] | APP-NAME | observability_pipelines
log["facility"] | FACILITY | 8 (log_user)
log["msgid"] | MSGID | NIL
log["severity"] | SEVERITY | info
log["host"] | HOSTNAME | NIL
log["timestamp"] | TIMESTAMP | Current UTC time.
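
To make the mapping concrete, the following sketch builds an RFC 5424 line from a log event using the defaults in the table above and sends it over plain TCP. It is illustrative only, not the Worker's implementation; the event contents and the 127.0.0.1:9997 address are hypothetical. In RFC 5424, PRI is facility * 8 + severity (info is severity 6), and NIL fields are written as "-".

```python
import socket
from datetime import datetime, timezone

log = {"message": "hello", "appname": "observability_pipelines"}  # hypothetical event

facility = log.get("facility", 8)   # table default: 8 (log_user)
severity = log.get("severity", 6)   # table default: info (numeric severity 6)
pri = facility * 8 + severity
timestamp = log.get("timestamp", datetime.now(timezone.utc).isoformat())
hostname = log.get("host", "-")     # NIL default
appname = log.get("appname", "observability_pipelines")
procid = log.get("procid", "-")     # the Worker defaults this to its own process ID
msgid = log.get("msgid", "-")       # NIL default

# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
line = f"<{pri}>1 {timestamp} {hostname} {appname} {procid} {msgid} - {log['message']}"
print(line)

# Send over plain TCP to the configured endpoint (hypothetical address).
with socket.create_connection(("127.0.0.1", 9997)) as sock:
    sock.sendall((line + "\n").encode())
```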

The following destination settings are optional:

  1. Toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
    • Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
    • CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
    • Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
  2. Enter the number of seconds to wait before sending TCP keepalive probes on an idle connection.

Set the environment variables

  • The rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997.
    • The Observability Pipelines Worker sends logs to this address and port.
    • Stored as the environment variable: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.

How the destination works

Event batching

The Syslog destination does not batch events.

Splunk HTTP Event Collector (HEC)

Set up the Splunk HEC destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. Enter the name of the Splunk index you want to send your data to. This must be an allowed index for your HEC.
  2. Select whether the timestamp should be auto-extracted. If set to true, Splunk extracts the timestamp from the message using the expected format yyyy-mm-dd hh:mm:ss.
  3. Set the sourcetype to override Splunk’s default value, which is httpevent for HEC data.

Set the environment variables

  • Splunk HEC token:
    • The Splunk HEC token for the Splunk indexer.
    • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_TOKEN.
  • Base URL of the Splunk instance:
    • The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, https://hec.splunkcloud.com:8088.
      Note: The /services/collector/event path is automatically appended to the endpoint.
    • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL.
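
You can verify the token and endpoint before exporting them for the Worker. The following is a minimal sketch, not part of the product, assuming the environment variables above are already set; it uses Splunk's standard /services/collector/event API.

```python
import os

import requests  # third-party: pip install requests

base_url = os.environ["DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL"]
token = os.environ["DD_OP_DESTINATION_SPLUNK_HEC_TOKEN"]

resp = requests.post(
    f"{base_url.rstrip('/')}/services/collector/event",
    headers={"Authorization": f"Splunk {token}"},
    json={"event": "observability pipelines connectivity test"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # {"text": "Success", "code": 0} on success
```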

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 1,000,000 | 1

Sumo Logic Hosted Collector

Set up the Sumo Logic destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. In the Encoding dropdown menu, select whether to encode your pipeline’s output in JSON, Logfmt, or Raw text. If no encoding is selected, the encoding defaults to JSON.
  2. Enter a source name to override the default name value configured for your Sumo Logic collector’s source.
  3. Enter a host name to override the default host value configured for your Sumo Logic collector’s source.
  4. Enter a category name to override the default category value configured for your Sumo Logic collector’s source.
  5. Click Add Header to add any custom header fields and values.

Set the environment variables

  • Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
    • The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:
      • <ENDPOINT> is your Sumo collection endpoint.
      • <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
    • Stored in the environment variable DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL.
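
You can confirm the HTTP Source URL accepts data before exporting it for the Worker. The following is a minimal sketch, not part of the product, assuming the environment variable above is already set; the X-Sumo-Name, X-Sumo-Host, and X-Sumo-Category headers correspond to the optional source name, host, and category overrides above, and the values shown are hypothetical.

```python
import os

import requests  # third-party: pip install requests

url = os.environ["DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL"]

resp = requests.post(
    url,
    data="observability pipelines connectivity test",
    headers={
        "X-Sumo-Name": "op-test",      # hypothetical override values
        "X-Sumo-Host": "op-worker",
        "X-Sumo-Category": "op/test",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)  # 200 on success
```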

How the destination works

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events | Max Bytes | Timeout (seconds)
None | 10,000,000 | 1