Observability Pipelines is not available on the US1-FED Datadog site.
If you upgrade OP Workers from version 1.8 or below to version 2.0 or above, your existing pipelines break. If you want to continue using OP Worker version 1.8 or below, do not upgrade your OP Workers. If you want to use OP Worker 2.0 or above, you must migrate your OP Worker 1.8 or earlier pipelines to OP Worker 2.x.
Datadog recommends that you update to OP Worker version 2.0 or above. Upgrading to the latest major OP Worker version and keeping it updated is the only supported way to get the latest OP Worker functionality, fixes, and security updates.
The Observability Pipelines Worker can collect, process, and route logs from any source to any destination. Using Datadog, you can build and manage all of your Observability Pipelines Worker deployments at scale.
There are several ways to get started with the Observability Pipelines Worker.
Quickstart: Install the Worker with a simple pipeline that emits demo data to get started quickly.
Datadog setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from your Datadog Agents to Datadog.
Datadog archiving setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from your Datadog Agents to Datadog and S3.
Splunk setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from Splunk HEC to both Splunk and Datadog.
This document walks you through the quickstart installation steps and then provides resources for next steps. Use and operation of this software is governed by the End User License Agreement.
Remote configuration for Observability Pipelines is in private beta. Contact Datadog support or your Customer Success Manager for access.
If you are enrolled in the private beta of Remote Configuration, you can remotely roll out changes to your Workers from the Datadog UI, rather than making updates to your pipeline configuration in a text editor and then manually rolling out your changes. Choose your deployment method when you create a pipeline and install your Workers.
The Observability Pipelines Worker Docker image is published on Docker Hub.
Download the sample pipeline configuration file. This configuration emits demo data, parses and structures the data, and then sends it to the console and to Datadog. See Configurations for more information about the source, transform, and sink used in the sample configuration.
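If you want a sense of the file's shape before downloading it, the following is a minimal sketch, assuming the Vector-style sources/transforms/sinks schema that OP Worker configurations follow; the component names and options here are illustrative, and the downloaded sample file is authoritative.
# Illustrative sketch only -- use the downloaded sample file for real deployments.
sources:
  demo:
    type: demo_logs            # emits synthetic demo log lines
    format: apache_common
transforms:
  parse:
    type: remap                # parses and structures each event using VRL
    inputs:
      - demo
    source: |
      . = parse_apache_log!(string!(.message), format: "common")
sinks:
  console:
    type: console              # writes structured events to stdout
    inputs:
      - parse
    encoding:
      codec: json
  datadog:
    type: datadog_logs         # forwards the same events to Datadog
    inputs:
      - parse
    default_api_key: ${DD_API_KEY}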
Run the following command to start the Observability Pipelines Worker with Docker:
docker run -i -e DD_API_KEY=<API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINES_ID> \
-e DD_SITE=<SITE> \
-p 8282:8282 \
-v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
datadog/observability-pipelines-worker run
Replace <API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines configuration ID, and <SITE> with datadoghq.com. Note: ./pipeline.yaml must be the relative or absolute path to the configuration you downloaded in step 1.
Download the Helm chart values file for AWS EKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the values file, set datadog.apiKey and datadog.pipelineId to match your pipeline, and use datadoghq.com for the site value. Then, install the chart in your cluster with the following commands:
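A typical invocation looks like the following sketch, assuming the chart is published in the Datadog Helm repository as datadog/observability-pipelines-worker and the values file is saved as aws_eks.yaml; confirm the chart name and release details against the in-app instructions.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f aws_eks.yaml \
--namespace observability-pipelines-worker \
--create-namespace \
datadog/observability-pipelines-worker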
Download the Helm chart values file for Azure AKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the values file, set datadog.apiKey and datadog.pipelineId to match your pipeline, and use datadoghq.com for the site value. Then, install the chart in your cluster with the following commands:
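As with EKS, a typical invocation looks like the following sketch, assuming the chart name datadog/observability-pipelines-worker and a values file saved as azure_aks.yaml; confirm both against the in-app instructions.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f azure_aks.yaml \
--namespace observability-pipelines-worker \
--create-namespace \
datadog/observability-pipelines-worker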
Download the Helm chart values file for Google GKE. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the values file, set datadog.apiKey and datadog.pipelineId to match your pipeline, and use datadoghq.com for the site value. Then, install the chart in your cluster with the following commands:
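As with EKS and AKS, a typical invocation looks like the following sketch, assuming the chart name datadog/observability-pipelines-worker and a values file saved as google_gke.yaml; confirm both against the in-app instructions.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f google_gke.yaml \
--namespace observability-pipelines-worker \
--create-namespace \
datadog/observability-pipelines-worker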
APT-based Linux: Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with datadoghq.com.
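The command typically takes the following form; the install script URL below is an assumption based on Datadog's standard install scripts, so confirm it against the in-app instructions.
DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_op_worker1.sh)"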
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
RPM-based Linux: Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with datadoghq.com.
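As with APT-based systems, the command typically takes this form (install script URL assumed, as above).
DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_op_worker1.sh)"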
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
Terraform (AWS): Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
Set up the Worker module in your existing Terraform using the sample configuration. Make sure to update the values of vpc-id, subnet-ids, and region in the configuration to match your AWS deployment. Also, update the values of datadog-api-key and pipeline-id to match your pipeline.
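As an illustration of where those values go, a module block along these lines is the expected shape; the module source placeholder and example values below are hypothetical, so take the real source and full argument list from the sample configuration.
module "op_worker" {
  # Hypothetical placeholder -- use the module source from the sample configuration.
  source = "<OP_WORKER_MODULE_SOURCE>"

  vpc-id          = "<VPC_ID>"       # your AWS VPC
  subnet-ids      = ["<SUBNET_ID>"]  # your AWS subnets
  region          = "<REGION>"       # your AWS region
  datadog-api-key = "<DD_API_KEY>"
  pipeline-id     = "<PIPELINES_ID>"
}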
See Configurations for more information about the source, transform, and sink used in the sample configuration.
See Working with Data for more information on transforming your data.
After deploying a pipeline, you can also switch deployment methods, for example from a manually managed pipeline to a remote configuration enabled pipeline, or the other way around.
If you want to switch from a remote configuration deployment to a manually managed deployment:
Navigate to Observability Pipelines and select the pipeline.
Click the settings cog.
In Deployment Mode, select Manual to enable it.
Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to false and restart the Worker, as in the Docker example below. Workers that are not restarted with this flag remain remote configuration enabled, which means they cannot be updated manually through a local configuration file.
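For example, with the Docker deployment from the quickstart, the flag is just another environment variable on the container; a sketch of the restart, reusing the earlier command:
docker run -i -e DD_API_KEY=<API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINES_ID> \
-e DD_SITE=<SITE> \
-e DD_OP_REMOTE_CONFIGURATION_ENABLED=false \
-p 8282:8282 \
-v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
datadog/observability-pipelines-worker run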
If you want to switch from a manually managed deployment to a remote configuration deployment:
Navigate to Observability Pipelines and select the pipeline.
Click the settings cog.
In Deployment Mode, select Remote Configuration to enable it.
Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to true and restart the Worker; for the Docker deployment, see the example after this list. Workers that are not restarted with this flag are not polled for configurations deployed in the UI.
Deploy a version from your version history so that the Workers receive the new configuration: click a version, click Edit as Draft, and then click Deploy.
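For the Docker deployment, the restart with the flag set to true mirrors the earlier sketch:
docker run -i -e DD_API_KEY=<API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINES_ID> \
-e DD_SITE=<SITE> \
-e DD_OP_REMOTE_CONFIGURATION_ENABLED=true \
-p 8282:8282 \
-v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
datadog/observability-pipelines-worker run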
The quickstart walked you through installing the Worker and deploying a sample pipeline configuration. For instructions on how to install the Worker to receive and route data from your Datadog Agents to Datadog or to receive and route data from your Splunk HEC to Splunk and Datadog, select your specific use case:
For recommendations on deploying and scaling multiple Workers:
See Deployment Design and Principles for information on what to consider when designing your Observability Pipelines architecture.