Join the Preview!
The Datadog Agent with embedded OpenTelemetry Collector is in Preview. To request access, fill out this form.
FedRAMP customers should not enable or use the embedded OpenTelemetry Collector.
Overview
Follow this guide to install the Datadog Agent with the OpenTelemetry Collector using Helm.
If you need OpenTelemetry components beyond what's provided in the default package, follow Use Custom OpenTelemetry Components to bring your own OpenTelemetry components and extend the Datadog Agent's capabilities. For a list of components included by default, see Included components.
Install the Datadog Agent with OpenTelemetry Collector
Select installation method
Choose one of the following installation methods:
Datadog Operator: A Kubernetes-native approach that automatically reconciles and maintains your Datadog setup. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
Helm chart: A straightforward way to deploy the Datadog Agent. It provides versioning, rollback, and templating capabilities, making deployments consistent and easier to replicate.
Replace <DD_API_KEY> and <DD_APP_KEY> with your actual Datadog API and application keys.
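If you prefer not to put the keys directly in your configuration, one common approach is to store them in a Kubernetes secret that the DatadogAgent resource or Helm values can reference. The secret and key names below are illustrative, not prescribed by this guide:

kubectl create secret generic datadog-secret \
  --from-literal api-key=<DD_API_KEY> \
  --from-literal app-key=<DD_APP_KEY>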
Configure the Datadog Agent
After deploying the Datadog Operator, create the DatadogAgent resource that triggers the deployment of the Datadog Agent, Cluster Agent and Cluster Checks Runners (if used) in your Kubernetes cluster. The Datadog Agent deploys as a DaemonSet, running a pod on every node of your cluster.
Use the datadog-agent.yaml file to specify your DatadogAgent deployment configuration.
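As a starting point, a minimal datadog-agent.yaml might look like the following sketch; it assumes a datadog-secret like the one shown earlier, and <DATADOG_SITE> is a placeholder for your Datadog site:

datadog-agent.yaml

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    site: <DATADOG_SITE>
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  override:
    nodeAgent:
      image:
        name: agent
        # -jmx enables JMX utilities; see the note below for non-Java applications
        tag: 7.62.2-ot-beta-jmx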
This guide uses a Java application example. The -jmx suffix in the image tag enables JMX utilities. For non-Java applications, use 7.62.2-ot-beta instead. For more details, see Autodiscovery and JMX integration guide.
By default, the Agent image is pulled from Google Artifact Registry (gcr.io/datadoghq). If Artifact Registry is not accessible in your deployment region, use another registry.
Enable the OpenTelemetry Collector and configure the essential ports:
datadog-values.yaml
datadog:
  ...
  otelCollector:
    enabled: true
    ports:
      - containerPort: "4317" # default port for OpenTelemetry gRPC receiver.
        hostPort: "4317"
        name: otel-grpc
      - containerPort: "4318" # default port for OpenTelemetry HTTP receiver
        hostPort: "4318"
        name: otel-http
Set the hostPort to expose the container port to the external network. This lets you configure the OTLP exporter to point to the IP address of the node where the Datadog Agent pod is scheduled.
If you don’t want to expose the port, you can use the Agent service instead:
Remove the hostPort entries from your datadog-values.yaml file.
In your application’s deployment file (deployment.yaml), configure the OTLP exporter to use the Agent service:
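For example, a sketch of the relevant part of deployment.yaml; <SERVICE_NAME> and <SERVICE_NAMESPACE> are placeholders for the Agent service created in your cluster:

env:
  # Point the OTLP exporter at the Agent service instead of a node hostPort
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://<SERVICE_NAME>.<SERVICE_NAMESPACE>.svc.cluster.local:4317"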
The Datadog Operator provides a sample OpenTelemetry Collector configuration that you can use as a starting point. If you need to modify this configuration, the Datadog Operator supports two ways of providing a custom Collector configuration:
Inline configuration: Add your custom Collector configuration directly in the features.otelCollector.conf.configData field.
ConfigMap-based configuration: Store your Collector configuration in a ConfigMap and reference it in the features.otelCollector.conf.configMap field. This approach allows you to keep Collector configuration decoupled from the DatadogAgent resource.
Inline Collector configuration
In the snippet below, the Collector configuration is placed directly under the features.otelCollector.conf.configData parameter:
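For example, a trimmed sketch with a minimal OTLP-to-Datadog pipeline in configData (not the chart's full sample configuration):

spec:
  features:
    otelCollector:
      enabled: true
      conf:
        configData: |-
          receivers:
            otlp:
              protocols:
                grpc:
                  endpoint: 0.0.0.0:4317
                http:
                  endpoint: 0.0.0.0:4318
          exporters:
            datadog:
              api:
                key: ""
          service:
            pipelines:
              traces:
                receivers: [otlp]
                exporters: [datadog]
              metrics:
                receivers: [otlp]
                exporters: [datadog]
              logs:
                receivers: [otlp]
                exporters: [datadog]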
When you apply the datadog-agent.yaml file containing this DatadogAgent resource, the Operator automatically mounts the Collector configuration into the Agent DaemonSet.
Completed datadog-agent.yaml file
The completed datadog-agent.yaml with inline Collector configuration should look something like this:
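A rough sketch, combining the base resource from earlier with the inline Collector configuration above (placeholders and a minimal pipeline rather than the full sample configuration):

datadog-agent.yaml

apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    site: <DATADOG_SITE>
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  features:
    otelCollector:
      enabled: true
      # Exposes the default OTLP ports on the node; field shape mirrors the Helm values shown earlier
      ports:
        - containerPort: 4317
          hostPort: 4317
          name: otel-grpc
        - containerPort: 4318
          hostPort: 4318
          name: otel-http
      conf:
        configData: |-
          receivers:
            otlp:
              protocols:
                grpc:
                  endpoint: 0.0.0.0:4317
                http:
                  endpoint: 0.0.0.0:4318
          exporters:
            datadog:
              api:
                key: ""
          service:
            pipelines:
              traces:
                receivers: [otlp]
                exporters: [datadog]
              metrics:
                receivers: [otlp]
                exporters: [datadog]
              logs:
                receivers: [otlp]
                exporters: [datadog]
  override:
    nodeAgent:
      image:
        name: agent
        tag: 7.62.2-ot-beta-jmx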
The Datadog Helm chart provides a sample OpenTelemetry Collector configuration that you can use as a starting point. This section walks you through the predefined pipelines and included OpenTelemetry components.
This is the full OpenTelemetry Collector configuration in otel-config.yaml:
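The file itself is not reproduced here. As a representative sketch of its shape, assuming the components shipped by default (the exact default values in the chart may differ):

otel-config.yaml

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
        # Scrapes the Collector's own telemetry endpoint (see Prometheus receiver below)
        - job_name: "otelcol"
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"]
processors:
  # Enriches telemetry with Kubernetes infrastructure attributes
  infraattributes: {}
  batch:
    timeout: 10s
exporters:
  datadog:
    api:
      key: ""
      site: ""
connectors:
  # Computes APM stats metrics from traces
  datadog/connector: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog, datadog/connector]
    metrics:
      receivers: [otlp, datadog/connector, prometheus]
      processors: [infraattributes, batch]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [infraattributes, batch]
      exporters: [datadog]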
Note: If key is not specified or is set to a secret, or if site is not specified, the values from the core Agent configuration are used. By default, the core Agent sets site to datadoghq.com (US1).
Prometheus receiver
The Prometheus receiver collects health metrics from the OpenTelemetry Collector for the metrics pipeline.
Deploy the Datadog Agent with the configuration file:
kubectl apply -f datadog-agent.yaml
This deploys the Datadog Agent as a DaemonSet with the embedded OpenTelemetry Collector. The Collector runs on the same host as your application, following the Agent deployment pattern. The Gateway deployment pattern is not supported.
To install or upgrade the Datadog Agent with OpenTelemetry Collector in your Kubernetes environment, use one of the following Helm commands:
For default OpenTelemetry Collector configuration:
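For example (assuming the Datadog Helm repository has already been added and datadog-values.yaml is the values file from the previous steps):

helm upgrade -i <RELEASE_NAME> datadog/datadog -f datadog-values.yaml

For a custom Collector configuration, pass your own file as well; the datadog.otelCollector.config values key below is an assumption, not confirmed by this guide:

helm upgrade -i <RELEASE_NAME> datadog/datadog \
  -f datadog-values.yaml \
  --set-file datadog.otelCollector.config=otel-config.yaml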
This command allows you to specify your own otel-config.yaml file.
Replace <RELEASE_NAME> with the Helm release name you are using.
You may see warnings during the deployment process. These warnings can be ignored.
This Helm chart deploys the Datadog Agent with OpenTelemetry Collector as a DaemonSet. The Collector is deployed on the same host as your application, following the Agent deployment pattern. The Gateway deployment pattern is not supported.
To configure your application container, ensure that the correct OTLP endpoint hostname is used. The Datadog Agent with OpenTelemetry Collector is deployed as a DaemonSet, so the OTLP endpoint must target the node that the application pod is running on.
The Calendar application container is already configured with the correct OTEL_EXPORTER_OTLP_ENDPOINT environment variable, as defined in the Helm chart:
Go to the Calendar application’s Deployment manifest file:
./deploys/calendar/templates/deployment.yaml
The following environment variables configure the OTLP endpoint:
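For example, a sketch using the Kubernetes downward API to target the node the pod is scheduled on; the HOST_IP variable name is illustrative:

env:
  # Resolve the IP of the node this pod runs on (where the Agent DaemonSet pod also runs)
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  # Send OTLP data to the embedded Collector's gRPC receiver on that node
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(HOST_IP):4317"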
This Helm chart deploys the sample Calendar application as a ReplicaSet.
To test that the Calendar application is running correctly, execute the following command from another terminal window:
curl localhost:9090/calendar
Verify that you receive a response like:
{"date":"2024-12-30"}
Each call to the Calendar application results in metrics, traces, and logs being forwarded to the Datadog backend.
Explore observability data in Datadog
Use Datadog to explore the observability data for the sample Calendar app.
Fleet automation
Explore your Datadog Agent and Collector configuration.
Live container monitoring
Monitor your container health using Live Container Monitoring capabilities.
Infrastructure node health
View runtime and infrastructure metrics to visualize, monitor, and measure the performance of your nodes.
Logs
View logs to monitor and troubleshoot application and system operations.
Traces
View traces and spans to observe the status and performance of requests processed by your application, with infrastructure metrics correlated in the same trace.
Runtime metrics
Monitor your runtime (JVM) metrics for your applications.
Collector health metrics
View metrics from the embedded Collector to monitor the Collector health.
Included components
By default, the Datadog Agent with embedded Collector ships with the following Collector components. You can also see the list in YAML format.