If you are already using a standalone OpenTelemetry (OTel) Collector for your OTel-instrumented applications, you can migrate to the Datadog Distribution of OpenTelemetry (DDOT) Collector. The DDOT Collector allows you to leverage Datadog's enhanced capabilities, including optimized configurations, seamless integrations, and additional features tailored for the Datadog ecosystem.
To migrate to the DDOT Collector, you need to install the Datadog Agent and configure your applications to report the telemetry data.
Before starting the migration process, ensure you have:
- Access to a Kubernetes cluster with Helm and kubectl installed
- Your Datadog API and application keys
- Your existing Collector configuration file (for example, otel-config.yaml)

Before you begin, review whether your existing config is supported by default:
- If you use span_name_as_resource_name or span_name_remappings, review the New Operation Name Mappings guide. The DDOT Collector enables these new mappings by default.
- If your configuration uses the filelogreceiver, review the configuration closely when migrating from a standalone collector.

Here are two example Collector configuration files:
This example uses a custom metricstransform component, which inserts a copy of the system.cpu.usage metric under the new name host.cpu.utilization:
collector-config.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
metricstransform:
transforms:
- include: system.cpu.usage
action: insert
new_name: host.cpu.utilization
connectors:
datadog/connector:
traces:
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog/connector, datadog]
metrics:
receivers: [otlp, datadog/connector]
processors: [metricstransform, infraattributes, batch]
exporters: [datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog]
In this case, you need to follow the Use Custom OpenTelemetry Components with Datadog Agent guide.
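To decide which of the two cases applies, it can help to list every component name your existing config references and compare it against the Agent's default set. Below is a minimal sketch; the file path and the two-space-indented config shape are assumptions based on the examples in this guide, not part of the official tooling:

```shell
# Write a small sample config resembling the examples above (assumed shape:
# top-level sections with two-space-indented component names).
cat > /tmp/otel-config.yaml <<'EOF'
receivers:
  otlp:
processors:
  batch:
  metricstransform:
exporters:
  datadog:
EOF

# List every component name referenced in the config, one per line.
grep -E '^  [a-z0-9/]+:' /tmp/otel-config.yaml | tr -d ' :' | sort -u
```

Any name in this output that is not shipped with the Datadog Agent by default (here, metricstransform) means you need the custom-components path.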
This example only uses components included in the Datadog Agent by default:
collector-config.yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
exporters:
datadog:
api:
key: ${env:DD_API_KEY}
site: ${env:DD_SITE}
processors:
infraattributes:
cardinality: 2
batch:
timeout: 10s
connectors:
datadog/connector:
traces:
service:
pipelines:
traces:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog/connector, datadog]
metrics:
receivers: [otlp, datadog/connector]
processors: [infraattributes, batch]
exporters: [datadog]
logs:
receivers: [otlp]
processors: [infraattributes, batch]
exporters: [datadog]
In this case, you can proceed to installing the DDOT Collector.
Follow these steps to install the DDOT Collector.
Add the Datadog repository to your Helm repositories:
helm repo add datadog https://helm.datadoghq.com
helm repo update
Create a Kubernetes secret to store your Datadog API and application keys:
kubectl create secret generic datadog-secret \
--from-literal api-key=<DD_API_KEY> \
--from-literal app-key=<DD_APP_KEY>
Replace <DD_API_KEY> and <DD_APP_KEY> with your actual Datadog API and application keys.

Use a YAML file to specify the Helm chart parameters for the Datadog Agent chart.
Create an empty datadog-values.yaml
file:
touch datadog-values.yaml
Configure the Datadog API and application key secrets:
datadog-values.yaml
datadog:
site: <DATADOG_SITE>
apiKeyExistingSecret: datadog-secret
appKeyExistingSecret: datadog-secret
Set <DATADOG_SITE> to your Datadog site. Otherwise, it defaults to datadoghq.com, the US1 site.

Use the Datadog Agent image tag with the embedded DDOT Collector:
datadog-values.yaml
agents:
image:
repository: gcr.io/datadoghq/agent
tag: 7.65.0-full
doNotCheckTag: true
...
Enable the OpenTelemetry Collector and configure the essential ports:
datadog-values.yaml
datadog:
...
otelCollector:
enabled: true
ports:
- containerPort: "4317" # default port for OpenTelemetry gRPC receiver.
hostPort: "4317"
name: otel-grpc
- containerPort: "4318" # default port for OpenTelemetry HTTP receiver
hostPort: "4318"
name: otel-http
The hostPort is required for the container port to be exposed to the external network. This enables configuring the OTLP exporter to point to the IP address of the node to which the Datadog Agent is assigned.

If you don't want to expose the port, you can use the Agent service instead:
- Remove the hostPort entries from your datadog-values.yaml file.
- In your application's deployment file (deployment.yaml), configure the OTLP exporter to use the Agent service:
env:
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: 'http://<SERVICE_NAME>.<SERVICE_NAMESPACE>.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
(Optional) Enable additional Datadog features:
datadog-values.yaml
datadog:
...
apm:
portEnabled: true
peer_tags_aggregation: true
compute_stats_by_span_kind: true
peer_service_aggregation: true
orchestratorExplorer:
enabled: true
processAgent:
enabled: true
processCollection: true
(Optional) Collect pod labels and use them as tags to attach to metrics, traces, and logs:
datadog-values.yaml
datadog:
...
podLabelsAsTags:
app: kube_app
release: helm_release
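Putting the preceding snippets together, the assembled datadog-values.yaml might look like the following. This is a sketch combining the steps above; adjust the site, image tag, and optional sections for your environment:

```yaml
datadog:
  site: <DATADOG_SITE>
  apiKeyExistingSecret: datadog-secret
  appKeyExistingSecret: datadog-secret
  apm:
    portEnabled: true
    peer_tags_aggregation: true
    compute_stats_by_span_kind: true
    peer_service_aggregation: true
  orchestratorExplorer:
    enabled: true
  processAgent:
    enabled: true
    processCollection: true
  podLabelsAsTags:
    app: kube_app
    release: helm_release
  otelCollector:
    enabled: true
    ports:
      - containerPort: "4317" # default port for OpenTelemetry gRPC receiver
        hostPort: "4317"
        name: otel-grpc
      - containerPort: "4318" # default port for OpenTelemetry HTTP receiver
        hostPort: "4318"
        name: otel-http
agents:
  image:
    repository: gcr.io/datadoghq/agent
    tag: 7.65.0-full
    doNotCheckTag: true
```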
Deploy the Datadog Agent with your configuration:
helm upgrade -i <RELEASE_NAME> datadog/datadog \
-f datadog-values.yaml \
--set-file datadog.otelCollector.config=collector-config.yaml
To configure your existing application to use the Datadog Agent instead of the standalone Collector, ensure that the correct OTLP endpoint hostname is used. The Datadog Agent with DDOT Collector is deployed as a DaemonSet, so the local host (the node the application pod runs on) needs to be targeted.
Configure these environment variables in your application's deployment file (for example, deployment.yaml):
deployment.yaml
env:
...
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: OTLP_GRPC_PORT
value: "4317"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: 'http://$(HOST_IP):$(OTLP_GRPC_PORT)'
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
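As a sanity check, here is how those variables compose at runtime. The node IP 10.0.0.12 is a made-up example value; in the cluster it is injected through the downward API (status.hostIP):

```shell
# Example values: hardcoded here only to show the composition.
HOST_IP=10.0.0.12
OTLP_GRPC_PORT=4317

# The OTel SDK reads this endpoint and sends OTLP over gRPC
# to the Agent running on the same node as the pod.
OTEL_EXPORTER_OTLP_ENDPOINT="http://${HOST_IP}:${OTLP_GRPC_PORT}"
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"   # prints http://10.0.0.12:4317
```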
If you previously used span_name_as_resource_name
or span_name_remappings
configurations in your standalone Collector, you need to adapt your configuration.
The new naming behavior is controlled by the enable_operation_and_resource_name_logic_v2 feature flag in your Agent configuration. For detailed instructions on migrating to the new operation name mappings, see Migrate to New Operation Name Mappings.
Unified service tagging ties observability data together in Datadog so you can navigate across metrics, traces, and logs with consistent tags.
To configure your application with unified service tagging, set the OTEL_RESOURCE_ATTRIBUTES
environment variable:
deployment.yaml
env:
...
- name: OTEL_SERVICE_NAME
value: {{ include "calendar.fullname" . }}
- name: OTEL_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: OTEL_K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: OTEL_K8S_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: 'grpc'
- name: OTEL_RESOURCE_ATTRIBUTES
value: >-
service.name=$(OTEL_SERVICE_NAME),
k8s.namespace.name=$(OTEL_K8S_NAMESPACE),
k8s.node.name=$(OTEL_K8S_NODE_NAME),
k8s.pod.name=$(OTEL_K8S_POD_NAME),
k8s.container.name={{ .Chart.Name }},
host.name=$(OTEL_K8S_NODE_NAME),
deployment.environment=$(OTEL_K8S_NAMESPACE)
After configuring your application, apply the updated deployment and verify that data is flowing correctly to Datadog:
kubectl apply -f deployment.yaml
After you’ve confirmed that all data is being collected correctly in Datadog, you can remove the standalone OpenTelemetry Collector:
kubectl delete deployment old-otel-collector