Overview

Use the Observability Pipelines Worker to send your processed logs to different destinations.

Select and set up your destinations when you set up a pipeline. This is step 4 in the pipeline setup process:

  1. Navigate to Observability Pipelines.
  2. Select a template.
  3. Select and set up your source.
  4. Select and set up your destinations.
  5. Set up your processors.
  6. Install the Observability Pipelines Worker.

Amazon OpenSearch

Set up the Amazon OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

  1. Optionally, enter the name of the Amazon OpenSearch index.
  2. Select an authentication strategy: Basic or AWS. If you select AWS, enter the AWS region.

Set the environment variables

  • Amazon OpenSearch authentication username:
    • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME.
  • Amazon OpenSearch authentication password:
    • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD.
  • Amazon OpenSearch endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL.
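
For example, if you run the Worker directly on a host, you can export these variables in the shell that starts the Worker. The values below are placeholders; substitute your own credentials and your OpenSearch domain endpoint:

```shell
# Placeholder values; substitute your own credentials and endpoint.
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME="op-user"
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD="op-password"
export DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL="https://search-my-domain.us-east-1.es.amazonaws.com"
```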

Datadog Log Management

Set up the destination

There are no configuration steps for your Datadog destination.

Set the environment variables

No environment variables required.

Elasticsearch

Set up the Elasticsearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. Enter the name for the Elasticsearch index.
  2. Enter the Elasticsearch version.

Set the environment variables

  • Elasticsearch authentication username:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
  • Elasticsearch authentication password:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
  • Elasticsearch endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
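
As a sketch, if you run the Worker as a Docker container, the same variables can be passed with -e flags. The credentials and endpoint below are placeholders, and other required Worker settings (such as your Datadog API key and pipeline ID) are omitted for brevity:

```shell
# Placeholder credentials and endpoint; other required Worker variables omitted.
docker run -d \
  -e DD_OP_DESTINATION_ELASTICSEARCH_USERNAME="op-user" \
  -e DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD="op-password" \
  -e DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL="https://elasticsearch.example.com:9200" \
  datadog/observability-pipelines-worker run
```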

Google Chronicle

Set up the Google Chronicle destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

To authenticate the Worker for Google Chronicle:

  1. Create a Google Cloud Storage service account.
  2. Follow these instructions to create a service account key and download the service account key file in JSON format. This is the credentials JSON file, and it must be placed under DD_OP_DATA_DIR/config (see the example below).

Note: If you are installing the Worker in Kubernetes, see Referencing files in Kubernetes for information on how to reference the credentials file.
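
For example, on a Linux host you can copy the downloaded key file into place as follows. This sketch assumes DD_OP_DATA_DIR is /var/lib/observability-pipelines-worker and that the key file is named chronicle-service-account.json; adjust both to match your installation:

```shell
# Assumes DD_OP_DATA_DIR=/var/lib/observability-pipelines-worker; adjust for your install.
sudo mkdir -p /var/lib/observability-pipelines-worker/config
sudo cp chronicle-service-account.json /var/lib/observability-pipelines-worker/config/
```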

To set up the Worker’s Google Chronicle destination:

  1. Enter the customer ID for your Google Chronicle instance.
  2. Enter the path to the credentials JSON file you downloaded earlier.
  3. Select JSON or Raw encoding in the dropdown menu.
  4. Select the appropriate Log Type in the dropdown menu.

Note: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label A10_LOAD_BALANCER. See Google Cloud’s Support log types with a default parser for a list of available log types and their respective ingestion labels.

Set the environment variables

  • Google Chronicle endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL.
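
For example, exported in the shell. The URL below is a placeholder; use the ingestion endpoint for your Google Chronicle instance:

```shell
# Placeholder; use your Chronicle instance's ingestion endpoint.
export DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL="https://malachiteingestion-pa.googleapis.com"
```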

OpenSearch

Set up the OpenSearch destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

Optionally, enter the name of the OpenSearch index.

Set the environment variables

  • OpenSearch authentication username:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_USERNAME.
  • OpenSearch authentication password:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_PASSWORD.
  • OpenSearch endpoint URL:
    • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.
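
If you are installing the Worker in Kubernetes, one approach is to keep these credentials in a Kubernetes Secret instead of in plain-text manifests. The secret and key names below are arbitrary, and how the Secret is wired to the DD_OP_DESTINATION_OPENSEARCH_* variables depends on your manifests or Helm values:

```shell
# Arbitrary secret and key names; map them to the Worker's environment variables
# in your deployment manifests or Helm values.
kubectl create secret generic opensearch-credentials \
  --from-literal=username="op-user" \
  --from-literal=password="op-password"
```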

Splunk HEC

Set up the Splunk HEC destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. Enter the name of the Splunk index you want to send your data to. This must be an allowed index for your HEC.
  2. Select whether the timestamp should be auto-extracted. If enabled, Splunk extracts the timestamp from the message, which is expected to be in the format yyyy-mm-dd hh:mm:ss.
  3. Set the sourcetype to override Splunk’s default value, which is httpevent for HEC data.

Set the environment variables

  • Splunk HEC token:
    • The Splunk HEC token for the Splunk indexer.
    • Stored in the environment variable: DD_OP_DESTINATION_SPLUNK_HEC_TOKEN.
  • Base URL of the Splunk instance:
    • The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, https://hec.splunkcloud.com:8088.
      Note: The /services/collector/event path is automatically appended to the endpoint.
    • Stored in the environment variable: DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL.
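
For example, with a placeholder token. The endpoint is the base URL only, because the Worker appends the /services/collector/event path automatically:

```shell
# Base URL only; the Worker appends /services/collector/event.
export DD_OP_DESTINATION_SPLUNK_HEC_TOKEN="xxxx-xxxx-xxxx-xxxx"
export DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL="https://hec.splunkcloud.com:8088"
```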

Sumo Logic

Set up the Sumo Logic destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The following fields are optional:

  1. In the Encoding dropdown menu, select whether you want to encode your pipeline’s output in JSON, Logfmt, or Raw text. If no encoding is selected, the encoding defaults to JSON.
  2. Enter a source name to override the default name value configured for your Sumo Logic collector’s source.
  3. Enter a host name to override the default host value configured for your Sumo Logic collector’s source.
  4. Enter a category name to override the default category value configured for your Sumo Logic collector’s source.
  5. Click Add Header to add any custom header fields and values.

Set the environment variables

  • Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
    • The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:
      • <ENDPOINT> is your Sumo collection endpoint.
      • <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
    • Stored in the environment variable: DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL.
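
For example, keeping the placeholders from the URL template above:

```shell
# Replace <ENDPOINT> and <UNIQUE_HTTP_COLLECTOR_CODE> with the values from
# your HTTP source's upload URL.
export DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL="https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>"
```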

Rsyslog or Syslog-ng

Set up the Rsyslog or Syslog-ng destination and its environment variables when you set up a pipeline. The information below is configured in the pipelines UI.

Set up the destination

The Rsyslog and Syslog-ng destinations support the RFC5424 format.

The Rsyslog and Syslog-ng destinations match these log fields to the following Syslog fields:

| Log event field | Syslog field | Default |
|-----------------|--------------|---------|
| log["message"] | MESSAGE | NIL |
| log["procid"] | PROCID | The running Worker’s process ID. |
| log["appname"] | APP-NAME | observability_pipelines |
| log["facility"] | FACILITY | 8 (log_user) |
| log["msgid"] | MSGID | NIL |
| log["severity"] | SEVERITY | info |
| log["host"] | HOSTNAME | NIL |
| log["timestamp"] | TIMESTAMP | Current UTC time. |
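
As an illustrative example of this mapping (not literal Worker output), a log event that sets appname to myapp and message to "User logged in", with every other field left at its default, could be rendered as an RFC 5424 line like:

```
<70>1 2024-05-01T12:00:00.000Z - myapp 1234 - - User logged in
```

Here <70> is the PRI value, computed as facility × 8 + severity (8 × 8 + 6 for the default facility 8 and severity info). 1234 stands in for the Worker’s process ID, and the - characters are RFC 5424 NIL values for the unset HOSTNAME, MSGID, and structured data fields.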

The following destination settings are optional:

  1. Toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required:
    • Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
    • CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
    • Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
  2. Enter the number of seconds to wait before sending TCP keepalive probes on an idle connection.

Set the environment variables

  • The Rsyslog or Syslog-ng endpoint URL. For example, 127.0.0.1:9997.
    • The Observability Pipelines Worker sends logs to this address and port.
    • Stored in the environment variable: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.
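
For example, using the sample address above:

```shell
# Address and port of your Rsyslog or Syslog-ng receiver.
export DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL="127.0.0.1:9997"
```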