The Datadog Agent with embedded OpenTelemetry Collector is in Preview. To request access, fill out this form.
Follow this guide to install the Datadog Agent with the OpenTelemetry Collector using Helm.
To complete this guide, you need the following:
- Datadog account: a Datadog account with your Datadog API key and application key.
- Software: install and set up Helm and kubectl on your machine.
Choose one of the following installation methods:
You can install the Datadog Operator in your cluster using the Datadog Operator Helm chart:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-operator datadog/datadog-operator
To add the Datadog repository to your Helm repositories:
helm repo add datadog https://helm.datadoghq.com
helm repo update
Create a Kubernetes secret that stores your Datadog API and application keys:
kubectl create secret generic datadog-secret \
  --from-literal api-key=<DD_API_KEY> \
  --from-literal app-key=<DD_APP_KEY>
Replace <DD_API_KEY> and <DD_APP_KEY> with your actual Datadog API and application keys.
After deploying the Datadog Operator, create the DatadogAgent resource that triggers the deployment of the Datadog Agent, Cluster Agent, and Cluster Checks Runners (if used) in your Kubernetes cluster. The Datadog Agent deploys as a DaemonSet, running a pod on every node of your cluster.
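Before creating the DatadogAgent resource, you can confirm the Operator is running. This is an optional quick check; the label selector assumes the Operator chart's default labels:
kubectl get pods -l app.kubernetes.io/name=datadog-operator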
Create a datadog-agent.yaml file to specify your DatadogAgent deployment configuration.
datadog-agent.yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
name: datadog
spec:
global:
clusterName: <CLUSTER_NAME>
site: <DATADOG_SITE>
credentials:
apiSecret:
secretName: datadog-secret
keyName: api-key
appSecret:
secretName: datadog-secret
keyName: app-key
Replace <CLUSTER_NAME> with a name for your cluster, and <DATADOG_SITE> with your Datadog site (for example, datadoghq.com for US1 or datadoghq.eu for EU).
datadog-agent.yaml
...
override:
# Node Agent configuration
nodeAgent:
image:
name: "gcr.io/datadoghq/agent:7.62.2-ot-beta"
pullPolicy: Always
The -jmx suffix in the image tag enables JMX utilities. For non-Java applications, use 7.62.2-ot-beta instead.
By default, the Agent image is pulled from Google Artifact Registry (gcr.io/datadoghq). If Artifact Registry is not accessible in your deployment region, use another registry.
datadog-agent.yaml
...
# Enable Features
features:
otelCollector:
enabled: true
The Datadog Operator automatically binds the OpenTelemetry Collector to ports 4317 (named otel-grpc) and 4318 (named otel-http) by default.
To explicitly override the default ports, use the features.otelCollector.ports parameter:
datadog-agent.yaml
...
# Enable Features
features:
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
When overriding ports 4317 and 4318, you must use the default names otel-grpc and otel-http respectively to avoid port conflicts.
Optionally, enable additional Agent features in the same file:
datadog-agent.yaml
# Enable Features
features:
...
apm:
enabled: true
orchestratorExplorer:
enabled: true
processDiscovery:
enabled: true
liveProcessCollection:
enabled: true
usm:
enabled: true
clusterChecks:
enabled: true
Completed datadog-agent.yaml file
Your datadog-agent.yaml file should look something like this:
datadog-agent.yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
name: datadog
spec:
global:
clusterName: <CLUSTER_NAME>
site: <DATADOG_SITE>
credentials:
apiSecret:
secretName: datadog-secret
keyName: api-key
appSecret:
secretName: datadog-secret
keyName: app-key
override:
# Node Agent configuration
nodeAgent:
image:
name: "gcr.io/datadoghq/agent:7.62.2-ot-beta"
pullPolicy: Always
# Enable Features
features:
apm:
enabled: true
orchestratorExplorer:
enabled: true
processDiscovery:
enabled: true
liveProcessCollection:
enabled: true
usm:
enabled: true
clusterChecks:
enabled: true
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
Use a YAML file to specify the Helm chart parameters for the Datadog Agent chart.
Create an empty datadog-values.yaml file:
touch datadog-values.yaml
datadog-values.yaml
datadog:
site: <DATADOG_SITE>
apiKeyExistingSecret: datadog-secret
appKeyExistingSecret: datadog-secret
logLevel: info
Set <DATADOG_SITE> to your Datadog site (for example, datadoghq.eu for EU). Otherwise, it defaults to datadoghq.com, the US1 site.
The datadog.logLevel parameter value should be set in lowercase. Valid log levels are: trace, debug, info, warn, error, critical, off.
datadog-values.yaml
agents:
image:
repository: gcr.io/datadoghq/agent
tag: 7.62.2-ot-beta-jmx
doNotCheckTag: true
...
The -jmx suffix in the image tag enables JMX utilities. For non-Java applications, use 7.62.2-ot-beta instead.
By default, the Agent image is pulled from Google Artifact Registry (gcr.io/datadoghq). If Artifact Registry is not accessible in your deployment region, use another registry.
Next, enable the OpenTelemetry Collector and bind its receiver ports:
datadog-values.yaml
datadog:
...
otelCollector:
enabled: true
ports:
- containerPort: "4317" # default port for OpenTelemetry gRPC receiver.
hostPort: "4317"
name: otel-grpc
- containerPort: "4318" # default port for OpenTelemetry HTTP receiver
hostPort: "4318"
name: otel-http
Set the hostPort to expose the container port to the external network. This enables configuring the OTLP exporter to point to the IP address of the node where the Datadog Agent is assigned.
If you don't want to expose the port, you can use the Agent service instead:
1. Remove the hostPort entries from your datadog-values.yaml file.
2. In your application's deployment file (deployment.yaml), configure the OTLP exporter to use the Agent service:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: 'http://<SERVICE_NAME>.<SERVICE_NAMESPACE>.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
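As a hypothetical filled-in example, if your Helm release is named datadog and installed in the default namespace, the endpoint might look like this (the actual service name depends on your release; verify it with kubectl get svc):
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: 'http://datadog.default.svc.cluster.local'
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: 'grpc'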
Optionally, enable additional Datadog features:
datadog-values.yaml
datadog:
...
apm:
portEnabled: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
peer_service_aggregation: true
orchestratorExplorer:
enabled: true
processAgent:
enabled: true
processCollection: true
Optionally, map Kubernetes pod labels to Datadog tags using podLabelsAsTags:
datadog-values.yaml
datadog:
...
podLabelsAsTags:
app: kube_app
release: helm_release
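With this mapping, a pod carrying the label app: my-app is tagged kube_app:my-app on the telemetry it emits, and a release: stable label becomes helm_release:stable (my-app and stable are placeholder values for illustration).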
Completed datadog-values.yaml file
Your datadog-values.yaml file should look something like this:
datadog-values.yaml
agents:
image:
repository: gcr.io/datadoghq/agent
tag: 7.62.2-ot-beta-jmx
doNotCheckTag: true
datadog:
site: datadoghq.com
apiKeyExistingSecret: datadog-secret
appKeyExistingSecret: datadog-secret
logLevel: info
otelCollector:
enabled: true
ports:
- containerPort: "4317"
hostPort: "4317"
name: otel-grpc
- containerPort: "4318"
hostPort: "4318"
name: otel-http
apm:
portEnabled: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
peer_service_aggregation: true
orchestratorExplorer:
enabled: true
processAgent:
enabled: true
processCollection: true
podLabelsAsTags:
app: kube_app
release: helm_release
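Optionally, you can render the chart locally to validate your values file before installing. This is a quick sanity check, assuming the datadog Helm repository was added earlier:
helm template datadog datadog/datadog -f datadog-values.yaml > /dev/null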
The Datadog Operator provides a sample OpenTelemetry Collector configuration that you can use as a starting point. If you need to modify this configuration, the Datadog Operator supports two ways of providing a custom Collector configuration:
- Inline: add your custom Collector configuration directly to the features.otelCollector.conf.configData field.
- ConfigMap: reference your Collector configuration from a ConfigMap using the features.otelCollector.conf.configMap field. This approach allows you to keep the Collector configuration decoupled from the DatadogAgent resource.
In the snippet below, the Collector configuration is placed directly under the features.otelCollector.conf.configData parameter:
datadog-agent.yaml
...
# Enable Features
features:
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
conf:
configData: |-
receivers:
prometheus:
config:
scrape_configs:
- job_name: "otelcol"
scrape_interval: 10s
static_configs:
- targets:
- 0.0.0.0:8888
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
debug:
verbosity: detailed
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog, datadog/connector]
metrics:
receivers: [otlp, datadog/connector, prometheus]
processors: [infraattributes, batch]
exporters: [debug, datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog]
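Note that datadog/connector appears as an exporter in the traces pipeline and as a receiver in the metrics pipeline: traces flow into the connector, which computes APM trace metrics and feeds them into the metrics pipeline.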
When you apply the datadog-agent.yaml file containing this DatadogAgent resource, the Operator automatically mounts the Collector configuration into the Agent DaemonSet.
Completed datadog-agent.yaml file
The completed datadog-agent.yaml with inline Collector configuration should look something like this:
datadog-agent.yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
name: datadog
spec:
global:
clusterName: <CLUSTER_NAME>
site: <DATADOG_SITE>
credentials:
apiSecret:
secretName: datadog-secret
keyName: api-key
appSecret:
secretName: datadog-secret
keyName: app-key
override:
# Node Agent configuration
nodeAgent:
image:
name: "gcr.io/datadoghq/agent:7.62.2-ot-beta"
pullPolicy: Always
# Enable Features
features:
apm:
enabled: true
orchestratorExplorer:
enabled: true
processDiscovery:
enabled: true
liveProcessCollection:
enabled: true
usm:
enabled: true
clusterChecks:
enabled: true
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
conf:
configData: |-
receivers:
prometheus:
config:
scrape_configs:
- job_name: "datadog-agent"
scrape_interval: 10s
static_configs:
- targets:
- 0.0.0.0:8888
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
debug:
verbosity: detailed
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog, datadog/connector]
metrics:
receivers: [otlp, datadog/connector, prometheus]
processors: [infraattributes, batch]
exporters: [debug, datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog]
For more complex or frequently updated configurations, storing the Collector configuration in a ConfigMap can simplify version control.
Create a ConfigMap containing your Collector configuration:
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-config-map
namespace: system
data:
# must be named otel-config.yaml
otel-config.yaml: |-
receivers:
prometheus:
config:
scrape_configs:
- job_name: "datadog-agent"
scrape_interval: 10s
static_configs:
- targets:
- 0.0.0.0:8888
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
debug:
verbosity: detailed
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog, datadog/connector]
metrics:
receivers: [otlp, datadog/connector, prometheus]
processors: [infraattributes, batch]
exporters: [debug, datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog]
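Apply the ConfigMap to your cluster before (or together with) the DatadogAgent resource. A minimal sketch:
kubectl apply -f configmap.yaml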
The Collector configuration key in the ConfigMap must be named otel-config.yaml.
Reference the otel-agent-config-map ConfigMap in your DatadogAgent resource using the features.otelCollector.conf.configMap parameter:
datadog-agent.yaml
...
# Enable Features
features:
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
conf:
configMap:
name: otel-agent-config-map
The Operator automatically mounts otel-config.yaml from the ConfigMap into the Agent's OpenTelemetry Collector DaemonSet.
Completed datadog-agent.yaml file
The completed datadog-agent.yaml with the Collector configuration defined as a ConfigMap should look something like this:
datadog-agent.yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
name: datadog
spec:
global:
clusterName: <CLUSTER_NAME>
site: <DATADOG_SITE>
credentials:
apiSecret:
secretName: datadog-secret
keyName: api-key
appSecret:
secretName: datadog-secret
keyName: app-key
override:
# Node Agent configuration
nodeAgent:
image:
name: "gcr.io/datadoghq/agent:7.62.2-ot-beta"
pullPolicy: Always
# Enable Features
features:
apm:
enabled: true
orchestratorExplorer:
enabled: true
processDiscovery:
enabled: true
liveProcessCollection:
enabled: true
usm:
enabled: true
clusterChecks:
enabled: true
otelCollector:
enabled: true
ports:
- containerPort: 4317
hostPort: 4317
name: otel-grpc
- containerPort: 4318
hostPort: 4318
name: otel-http
conf:
configMap:
name: otel-agent-config-map
---
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-config-map
namespace: system
data:
# must be named otel-config.yaml
otel-config.yaml: |-
receivers:
prometheus:
config:
scrape_configs:
- job_name: "datadog-agent"
scrape_interval: 10s
static_configs:
- targets:
- 0.0.0.0:8888
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
debug:
verbosity: detailed
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog, datadog/connector]
metrics:
receivers: [otlp, datadog/connector, prometheus]
processors: [infraattributes, batch]
exporters: [debug, datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [debug, datadog]
The Datadog Helm chart provides a sample OpenTelemetry Collector configuration that you can use as a starting point. This section walks you through the predefined pipelines and included OpenTelemetry components.
This is the full OpenTelemetry Collector configuration in otel-config.yaml:
otel-config.yaml
receivers:
prometheus:
config:
scrape_configs:
- job_name: "otelcol"
scrape_interval: 10s
static_configs:
- targets: ["0.0.0.0:8888"]
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
debug:
verbosity: detailed
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog, datadog/connector]
metrics:
receivers: [otlp, datadog/connector, prometheus]
processors: [infraattributes, batch]
exporters: [datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog]
To send telemetry data to Datadog, the following components are defined in the configuration:
The Datadog connector computes Datadog APM trace metrics.
otel-config.yaml
connectors:
datadog/connector:
traces:
compute_top_level_by_span_kind: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
The Datadog exporter exports traces, metrics, and logs to Datadog.
otel-config.yaml
exporters:
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
Note: If key is not specified or is set to a secret, or if site is not specified, the system uses values from the core Agent configuration. By default, the core Agent sets the site to datadoghq.com (US1).
The Prometheus receiver collects health metrics from the OpenTelemetry Collector for the metrics pipeline.
otel-config.yaml
receivers:
prometheus:
config:
scrape_configs:
- job_name: "otelcol"
scrape_interval: 10s
static_configs:
- targets: ["0.0.0.0:8888"]
For more information, see the Collector Health Metrics documentation.
Deploy the Datadog Agent with the configuration file:
kubectl apply -f datadog-agent.yaml
This deploys the Datadog Agent as a DaemonSet with the embedded OpenTelemetry Collector. The Collector runs on the same host as your application, following the Agent deployment pattern. The Gateway deployment pattern is not supported.
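To verify the rollout, you can inspect the DatadogAgent resource and the Agent pods. This is a quick check, not part of the official steps; pod names vary by cluster:
kubectl get datadogagent datadog
kubectl get pods -o wide | grep datadog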
To install or upgrade the Datadog Agent with OpenTelemetry Collector in your Kubernetes environment, use one of the following Helm commands:
For default OpenTelemetry Collector configuration:
helm upgrade -i <RELEASE_NAME> datadog/datadog -f datadog-values.yaml
For custom OpenTelemetry Collector configuration:
helm upgrade -i <RELEASE_NAME> datadog/datadog \
-f datadog-values.yaml \
--set-file datadog.otelCollector.config=otel-config.yaml
This command allows you to specify your own otel-config.yaml
file.
Replace <RELEASE_NAME> with the Helm release name you are using.
This Helm chart deploys the Datadog Agent with OpenTelemetry Collector as a DaemonSet. The Collector is deployed on the same host as your application, following the Agent deployment pattern. The Gateway deployment pattern is not supported.
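To verify the installation, you can check the release status and the Agent pods. A quick check under the same assumptions as above:
helm status <RELEASE_NAME>
kubectl get pods -o wide | grep datadog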
Deployment diagram
To send your telemetry data to Datadog:
Instrument your application using the OpenTelemetry API.
As an example, you can use the Calendar sample application that’s already instrumented for you.
1. Clone the opentelemetry-examples repository to your device:
git clone https://github.com/DataDog/opentelemetry-examples.git
2. Navigate to the /calendar directory:
cd opentelemetry-examples/apps/rest-services/java/calendar
The getDate() method in CalendarService.java is instrumented with a custom span:
CalendarService.java
@WithSpan(kind = SpanKind.CLIENT)
public String getDate() {
Span span = Span.current();
span.setAttribute("peer.service", "random-date-service");
...
}
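The @WithSpan annotation comes from OpenTelemetry's instrumentation annotations and creates a span around the getDate() call; the peer.service attribute identifies the downstream service the span talks to.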
To configure your application container, ensure that the correct OTLP endpoint hostname is used. The Datadog Agent with OpenTelemetry Collector is deployed as a DaemonSet, so the current host needs to be targeted.
The Calendar application container is already configured with the correct OTEL_EXPORTER_OTLP_ENDPOINT environment variable, as defined in the Helm chart:
./deploys/calendar/templates/deployment.yaml
deployment.yaml
env:
...
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: OTLP_GRPC_PORT
value: "4317"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: 'http://$(HOST_IP):$(OTLP_GRPC_PORT)'
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
Unified service tagging ties observability data together in Datadog so you can navigate across metrics, traces, and logs with consistent tags.
In this example, the Calendar application is already configured with unified service tagging, as defined in the Helm chart:
./deploys/calendar/templates/deployment.yaml
deployment.yaml
env:
...
- name: OTEL_SERVICE_NAME
value: {{ include "calendar.fullname" . }}
- name: OTEL_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: OTEL_K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_K8S_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
- name: OTEL_RESOURCE_ATTRIBUTES
value: >-
service.name=$(OTEL_SERVICE_NAME),
k8s.namespace.name=$(OTEL_K8S_NAMESPACE),
k8s.node.name=$(OTEL_K8S_NODE_NAME),
k8s.pod.name=$(OTEL_K8S_POD_NAME),
k8s.container.name={{ .Chart.Name }},
host.name=$(OTEL_K8S_NODE_NAME),
deployment.environment=$(OTEL_K8S_NAMESPACE)
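These resource attributes are translated into Datadog tags: in particular, service.name becomes the service tag and deployment.environment becomes the env tag, which is what unified service tagging relies on.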
To start generating and forwarding observability data to Datadog, you need to deploy the Calendar application with the OpenTelemetry SDK using Helm.
1. Run the following helm command from the calendar/ folder:
helm upgrade -i <CALENDAR_RELEASE_NAME> ./deploys/calendar/
2. Send a request to the Calendar application:
curl localhost:9090/calendar
The response looks like:
{"date":"2024-12-30"}
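If the request fails because the Calendar service is not reachable from your machine, forward the port first. A sketch; <CALENDAR_SERVICE_NAME> is a placeholder, so check kubectl get svc for the actual service name:
kubectl port-forward svc/<CALENDAR_SERVICE_NAME> 9090:9090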
Each call to the Calendar application results in metrics, traces, and logs being forwarded to the Datadog backend.
Use Datadog to explore the observability data for the sample Calendar app.
Explore your Datadog Agent and Collector configuration.
Monitor your container health using Live Container Monitoring capabilities.
View runtime and infrastructure metrics to visualize, monitor, and measure the performance of your nodes.
View logs to monitor and troubleshoot application and system operations.
View traces and spans to observe the status and performance of requests processed by your application, with infrastructure metrics correlated in the same trace.
Monitor your runtime (JVM) metrics for your applications.
View metrics from the embedded Collector to monitor the Collector health.
By default, the Datadog Agent with embedded Collector ships with the following Collector components. You can also see the list in YAML format.
Receivers
Processors
Connectors
Additional helpful documentation, links, and articles: