Join the Preview!

The Datadog Agent with embedded OpenTelemetry Collector is in Preview. To request access, fill out this form.

If you are already using a standalone OpenTelemetry (OTel) Collector for your OTel-instrumented applications, you can migrate to the Datadog Agent with embedded OpenTelemetry Collector. The embedded OTel Collector allows you to leverage Datadog’s enhanced capabilities, including optimized configurations, seamless integrations, and additional features tailored for the Datadog ecosystem.

To migrate to the Datadog Agent with embedded OpenTelemetry Collector, install the Datadog Agent and configure your applications to send their telemetry data to it.

This guide covers migrating the OpenTelemetry Collector deployed as an agent. The Gateway deployment pattern is not supported.

Prerequisites

Before starting the migration process, ensure you have:

  • A valid Datadog account
  • An OpenTelemetry-instrumented application ready to send telemetry data
  • Access to your current OpenTelemetry Collector configurations
  • Administrative access to your Kubernetes cluster (Kubernetes v1.29+ is required)
    • Note: EKS Fargate environments are not supported
  • Helm v3+
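
You can quickly confirm that your cluster and Helm versions meet these requirements:

kubectl version        # Server Version must report v1.29 or later
helm version --short   # must report v3.x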

Review existing configuration

Before you begin, review your existing Collector configuration to see whether it is supported by the Agent's default components:

  1. Examine your existing OpenTelemetry Collector configuration file (otel-config.yaml).
  2. Compare it to the list of components included in the Datadog Agent by default.
  3. If your setup uses components not included in the Agent by default, follow Use Custom OpenTelemetry Components with Datadog Agent.

The default configuration settings in Datadog's embedded collector may differ from the standard OpenTelemetry Collector configuration defaults. This can affect behavior of components like the filelogreceiver. Review the configuration closely when migrating from a standalone collector.
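
To take an inventory of the components your pipelines actually reference, you can extract them directly from the configuration file. This is a minimal sketch that assumes yq v4 is installed and that your file is named otel-config.yaml:

# List every receiver, processor, exporter, and connector used by your pipelines
yq '.service.pipelines[] | .receivers[], .processors[], .exporters[]' otel-config.yaml | sort -u

Compare the resulting list against the components included in the Datadog Agent by default.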

Example configuration

Here are two example Collector configuration files:

This example uses the metricstransform processor, a component that is not included in the Agent by default:

collector-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
processors:
  infraattributes:
    cardinality: 2
  batch:
    timeout: 10s
  metricstransform:
    transforms:
      - include: system.cpu.usage
        action: insert
        new_name: host.cpu.utilization
connectors:
  datadog/connector:
    traces:
      compute_top_level_by_span_kind: true
      peer_tags_aggregation: true
      compute_stats_by_span_kind: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog/connector, datadog]
    metrics:
      receivers: [otlp, datadog/connector]
      processors: [metricstransform, infraattributes, batch]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog]

In this case, you need to follow Use Custom OpenTelemetry Components with Datadog Agent.

This example only uses components included in the Datadog Agent by default:

collector-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
processors:
  infraattributes:
    cardinality: 2
  batch:
    timeout: 10s
connectors:
  datadog/connector:
    traces:
      compute_top_level_by_span_kind: true
      peer_tags_aggregation: true
      compute_stats_by_span_kind: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog/connector, datadog]
    metrics:
      receivers: [otlp, datadog/connector]
      processors: [infraattributes, batch]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog]

In this case, you can proceed with installing the Agent with the embedded OpenTelemetry Collector.

Install the Agent with OpenTelemetry Collector

Follow these steps to install the Agent with embedded OpenTelemetry Collector.

Add the Datadog Helm repository

To add the Datadog repository to your Helm repositories:

helm repo add datadog https://helm.datadoghq.com
helm repo update
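
Optionally, confirm that the repository was added and check which chart versions are available:

helm search repo datadog/datadog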

Set up Datadog API and application keys

  1. Get the Datadog API and application keys.
  2. Store the keys as a Kubernetes secret:
    kubectl create secret generic datadog-secret \
      --from-literal api-key=<DD_API_KEY> \
      --from-literal app-key=<DD_APP_KEY>
    
    Replace <DD_API_KEY> and <DD_APP_KEY> with your actual Datadog API and application keys.
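
To confirm that the secret was created with both keys, you can describe it; the key names and sizes are listed without exposing the values:

kubectl describe secret datadog-secret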

Configure the Datadog Agent

Use a YAML file to specify the Helm chart parameters for the Datadog Agent chart.

  1. Create an empty datadog-values.yaml file:

    touch datadog-values.yaml
    
    Unspecified parameters use defaults from values.yaml.
  2. Configure the Datadog API and application key secrets:

    datadog-values.yaml

    datadog:
      site: datadoghq.com
      apiKeyExistingSecret: datadog-secret
      appKeyExistingSecret: datadog-secret
      logLevel: info
       
    Set datadog.site to your Datadog site. Otherwise, it defaults to datadoghq.com, the US1 site.

    Set the datadog.logLevel parameter value in lowercase. Valid log levels are: trace, debug, info, warn, error, critical, off.
  3. Switch the Datadog Agent image tag to use builds with embedded OpenTelemetry collector:

    datadog-values.yaml

    agents:
      image:
        repository: gcr.io/datadoghq/agent
        tag: 7.62.2-ot-beta-jmx
        doNotCheckTag: true
    ...
       

    This guide uses a Java application example. The -jmx suffix in the image tag enables JMX utilities. For non-Java applications, use nightly-ot-beta-main instead.
    For more details, see Autodiscovery and JMX integration guide.
  4. Enable the OpenTelemetry Collector and configure the essential ports:

    datadog-values.yaml

    datadog:
      ...
      otelCollector:
        enabled: true
        ports:
          - containerPort: "4317" # default port for OpenTelemetry gRPC receiver.
            hostPort: "4317"
            name: otel-grpc
          - containerPort: "4318" # default port for OpenTelemetry HTTP receiver
            hostPort: "4318"
            name: otel-http
       
    You must set the hostPort for the container port to be exposed to the external network. This lets you configure the OTLP exporter to point to the IP address of the node the Datadog Agent is assigned to.

    If you don’t want to expose the port, you can use the Agent service instead:

    1. Remove the hostPort entries from your datadog-values.yaml file.
    2. In your application’s deployment file (deployment.yaml), configure the OTLP exporter to use the Agent service:
      env:
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: 'http://<SERVICE_NAME>.<SERVICE_NAMESPACE>.svc.cluster.local'
        - name: OTEL_EXPORTER_OTLP_PROTOCOL
          value: 'grpc'
      
  5. (Optional) Enable additional Datadog features:

    Enabling these features may incur additional charges. Review the pricing page and talk to your CSM before proceeding.

    datadog-values.yaml

    datadog:
      ...
      apm:
        portEnabled: true
        peer_tags_aggregation: true
        compute_stats_by_span_kind: true
        peer_service_aggregation: true
      orchestratorExplorer:
        enabled: true
      processAgent:
        enabled: true
        processCollection: true
       
  6. (Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs:

    Custom metrics may impact billing. See the custom metrics billing page for more information.

    datadog-values.yaml

    datadog:
      ...
      podLabelsAsTags:
        app: kube_app
        release: helm_release

Your datadog-values.yaml file should look something like this:

datadog-values.yaml

agents:
  image:
    repository: gcr.io/datadoghq/agent
    tag: 7.62.2-ot-beta-jmx
    doNotCheckTag: true

datadog:
  site: datadoghq.com
  apiKeyExistingSecret: datadog-secret
  appKeyExistingSecret: datadog-secret
  logLevel: info

  otelCollector:
    enabled: true
    ports:
      - containerPort: "4317"
        hostPort: "4317"
        name: otel-grpc
      - containerPort: "4318"
        hostPort: "4318"
        name: otel-http
  apm:
    portEnabled: true
    peer_tags_aggregation: true
    compute_stats_by_span_kind: true
    peer_service_aggregation: true
  orchestratorExplorer:
    enabled: true
  processAgent:
    enabled: true
    processCollection: true

  podLabelsAsTags:
    app: kube_app
    release: helm_release
   

Deploy the Agent with OpenTelemetry Collector

  1. Install or upgrade the Datadog Agent with OpenTelemetry Collector to your Kubernetes environment:
    helm upgrade -i <RELEASE_NAME> datadog/datadog \
      -f datadog-values.yaml \
      --set-file datadog.otelCollector.config=collector-config.yaml
    
  2. Navigate to Integrations > Fleet Automation.
  3. Select the OTel Collector Version facet.
  4. Select an Agent and inspect its configuration to verify the new Agent with OpenTelemetry Collector is installed successfully.
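
You can also verify the rollout from the command line. The exact DaemonSet and pod names depend on your Helm release name, so this is only a quick sanity check:

# The chart deploys the Agent as a DaemonSet; expect one Agent pod per node in Running state
kubectl get daemonset
kubectl get pods -o wide | grep datadog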

Configure your application

To configure your existing application to use the Datadog Agent instead of the standalone Collector, ensure that it uses the correct OTLP endpoint hostname. The Datadog Agent with embedded Collector is deployed as a DaemonSet, so your application needs to target the host it runs on.

  1. Go to your application’s Deployment manifest file (deployment.yaml).
  2. Add the following environment variables to configure the OTLP endpoint:

    deployment.yaml

    env:
      ...
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTLP_GRPC_PORT
        value: "4317"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: 'http://$(HOST_IP):$(OTLP_GRPC_PORT)'
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: 'grpc'
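
Once the application is redeployed (see Verify data flow below), you can confirm that the variables resolved as expected from inside an application pod. The pod name is a placeholder:

kubectl exec <APP_POD_NAME> -- env | grep OTEL

The OTEL_EXPORTER_OTLP_ENDPOINT value should show the IP address of the node the pod is running on.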

Operation name mapping differences

If you previously used span_name_as_resource_name or span_name_remappings configurations in your standalone Collector, you need to adapt your configuration.

  1. Remove these configurations from your Datadog Exporter and Connector settings.
  2. Enable the enable_operation_and_resource_name_logic_v2 feature flag in your Agent configuration.

For detailed instructions on migrating to the new operation name mappings, see Migrate to New Operation Name Mappings.
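
As a reference, one way to surface an APM feature flag through the Helm chart is an Agent-level environment variable. The snippet below is only a sketch that assumes the flag is read from the DD_APM_FEATURES environment variable; confirm the exact mechanism in Migrate to New Operation Name Mappings before relying on it:

datadog-values.yaml

datadog:
  env:
    # Assumption: the flag is picked up from DD_APM_FEATURES
    - name: DD_APM_FEATURES
      value: "enable_operation_and_resource_name_logic_v2"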

Correlate observability data

Unified service tagging ties observability data together in Datadog so you can navigate across metrics, traces, and logs with consistent tags.

To configure your application with unified service tagging, set the OTEL_RESOURCE_ATTRIBUTES environment variable:

  1. Go to your application’s Deployment manifest file.
  2. Add the following lines to enable correlation between application traces and other observability data:

    deployment.yaml

    env:
      ...
      - name: OTEL_SERVICE_NAME
        value: {{ include "calendar.fullname" . }}
      - name: OTEL_K8S_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: OTEL_K8S_NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: spec.nodeName
      - name: OTEL_K8S_POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: 'grpc'
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: >-
          service.name=$(OTEL_SERVICE_NAME),
          k8s.namespace.name=$(OTEL_K8S_NAMESPACE),
          k8s.node.name=$(OTEL_K8S_NODE_NAME),
          k8s.pod.name=$(OTEL_K8S_POD_NAME),
          k8s.container.name={{ .Chart.Name }},
          host.name=$(OTEL_K8S_NODE_NAME),
          deployment.environment=$(OTEL_K8S_NAMESPACE)      

Verify data flow

After configuring your application, verify that data is flowing correctly to Datadog:

  1. Apply the configuration changes by redeploying your applications.
    kubectl apply -f deployment.yaml
    
  2. Confirm that telemetry data is being received in your Datadog account. Check logs, traces, and metrics to ensure data is collected and correlated correctly.
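
If data does not appear, the embedded Collector's logs are a good first check. This assumes the Collector runs in a container named otel-agent inside the Agent pod; adjust the pod name and namespace to your deployment:

kubectl logs <AGENT_POD_NAME> -c otel-agent --tail=100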

Uninstall standalone Collector

After you’ve confirmed that all data is being collected correctly in Datadog, you can remove the standalone OpenTelemetry Collector:

  1. Ensure all required data is being collected and displayed in Datadog.
  2. Uninstall the open source OpenTelemetry Collector from your environment:
    kubectl delete deployment old-otel-collector
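    
    If your standalone Collector was installed with the OpenTelemetry Helm chart rather than a plain Deployment manifest, remove it with Helm instead. The release name is a placeholder:
    helm uninstall <OTEL_COLLECTOR_RELEASE_NAME>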
    

Further reading
