Supported OS: Linux
This check submits metrics exposed by the NVIDIA DCGM Exporter in Datadog Agent format. For more information on NVIDIA Data Center GPU Manager (DCGM), see NVIDIA DCGM.
Starting from Agent release 7.47.0, the DCGM check is included in the Datadog Agent package. However, you need to run the DCGM Exporter container to expose the GPU metrics so that the Agent can collect them. Because the default counters are not sufficient, Datadog recommends using the following DCGM configuration, which covers the same ground as the NVML integration and adds other useful metrics.
# Format
# If line starts with a '#' it is considered a comment
# DCGM FIELD ,Prometheus metric type ,help message
# Clocks
DCGM_FI_DEV_SM_CLOCK ,gauge ,SM clock frequency (in MHz).
DCGM_FI_DEV_MEM_CLOCK ,gauge ,Memory clock frequency (in MHz).
# Temperature
DCGM_FI_DEV_MEMORY_TEMP ,gauge ,Memory temperature (in C).
DCGM_FI_DEV_GPU_TEMP ,gauge ,GPU temperature (in C).
# Power
DCGM_FI_DEV_POWER_USAGE ,gauge ,Power draw (in W).
DCGM_FI_DEV_TOTAL_ENERGY_CONSUMPTION ,counter ,Total energy consumption since boot (in mJ).
# PCIE
DCGM_FI_DEV_PCIE_REPLAY_COUNTER ,counter ,Total number of PCIe retries.
# Utilization (the sample period varies depending on the product)
DCGM_FI_DEV_GPU_UTIL ,gauge ,GPU utilization (in %).
DCGM_FI_DEV_MEM_COPY_UTIL ,gauge ,Memory utilization (in %).
DCGM_FI_DEV_ENC_UTIL ,gauge ,Encoder utilization (in %).
DCGM_FI_DEV_DEC_UTIL ,gauge ,Decoder utilization (in %).
# Errors and violations
DCGM_FI_DEV_XID_ERRORS ,gauge ,Value of the last XID error encountered.
# Memory usage
DCGM_FI_DEV_FB_FREE ,gauge ,Framebuffer memory free (in MiB).
DCGM_FI_DEV_FB_USED ,gauge ,Framebuffer memory used (in MiB).
# NVLink
DCGM_FI_DEV_NVLINK_BANDWIDTH_TOTAL ,counter ,Total number of NVLink bandwidth counters for all lanes.
# VGPU License status
DCGM_FI_DEV_VGPU_LICENSE_STATUS ,gauge ,vGPU License status
# Remapped rows
DCGM_FI_DEV_UNCORRECTABLE_REMAPPED_ROWS ,counter ,Number of remapped rows for uncorrectable errors
DCGM_FI_DEV_CORRECTABLE_REMAPPED_ROWS ,counter ,Number of remapped rows for correctable errors
DCGM_FI_DEV_ROW_REMAP_FAILURE ,gauge ,Whether remapping of rows has failed
# DCP metrics
DCGM_FI_PROF_PCIE_TX_BYTES ,counter ,The number of bytes of active pcie tx data including both header and payload.
DCGM_FI_PROF_PCIE_RX_BYTES ,counter ,The number of bytes of active pcie rx data including both header and payload.
DCGM_FI_PROF_GR_ENGINE_ACTIVE ,gauge ,Ratio of time the graphics engine is active (in %).
DCGM_FI_PROF_SM_ACTIVE ,gauge ,The ratio of cycles an SM has at least 1 warp assigned (in %).
DCGM_FI_PROF_SM_OCCUPANCY ,gauge ,The ratio of number of warps resident on an SM (in %).
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE ,gauge ,Ratio of cycles the tensor (HMMA) pipe is active (in %).
DCGM_FI_PROF_DRAM_ACTIVE ,gauge ,Ratio of cycles the device memory interface is active sending or receiving data (in %).
DCGM_FI_PROF_PIPE_FP64_ACTIVE ,gauge ,Ratio of cycles the fp64 pipes are active (in %).
DCGM_FI_PROF_PIPE_FP32_ACTIVE ,gauge ,Ratio of cycles the fp32 pipes are active (in %).
DCGM_FI_PROF_PIPE_FP16_ACTIVE ,gauge ,Ratio of cycles the fp16 pipes are active (in %).
# Datadog additional recommended fields
DCGM_FI_DEV_COUNT ,counter ,Number of Devices on the node.
DCGM_FI_DEV_FAN_SPEED ,gauge ,Fan speed for the device in percent 0-100.
DCGM_FI_DEV_SLOWDOWN_TEMP ,gauge ,Slowdown temperature for the device.
DCGM_FI_DEV_POWER_MGMT_LIMIT ,gauge ,Current power limit for the device.
DCGM_FI_DEV_PSTATE ,gauge ,Performance state (P-State) 0-15. 0=highest
DCGM_FI_DEV_FB_TOTAL ,gauge ,
DCGM_FI_DEV_FB_RESERVED ,gauge ,
DCGM_FI_DEV_FB_USED_PERCENT ,gauge ,
DCGM_FI_DEV_CLOCK_THROTTLE_REASONS ,gauge ,Current clock throttle reasons (bitmask of DCGM_CLOCKS_THROTTLE_REASON_*)
DCGM_FI_PROCESS_NAME ,label ,The Process Name.
DCGM_FI_CUDA_DRIVER_VERSION ,label ,
DCGM_FI_DEV_NAME ,label ,
DCGM_FI_DEV_MINOR_NUMBER ,label ,
DCGM_FI_DRIVER_VERSION ,label ,
DCGM_FI_DEV_BRAND ,label ,
DCGM_FI_DEV_SERIAL ,label ,
To configure the exporter in a Docker environment:

1. Create the file $PWD/default-counters.csv, which contains the default fields from NVIDIA etc/default-counters.csv as well as the additional Datadog-recommended fields above. To add more fields for collection, follow these instructions. For the complete list of fields, see the DCGM API reference manual.

2. Run the DCGM Exporter container:

sudo docker run --pid=host --privileged -e DCGM_EXPORTER_INTERVAL=5000 --gpus all -d -v /proc:/proc -v $PWD/default-counters.csv:/etc/dcgm-exporter/default-counters.csv -p 9400:9400 --name dcgm-exporter nvcr.io/nvidia/k8s/dcgm-exporter:3.1.7-3.1.4-ubuntu20.04
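Once the container is running, you can sanity-check that the exporter is serving metrics before configuring the Agent. A minimal sketch, assuming the exporter listens on the default port 9400 on the same host:

# Each field enabled in default-counters.csv should appear as a Prometheus-style metric line
curl -s http://localhost:9400/metrics | grep DCGM_FI_DEV_GPU_TEMP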
The DCGM exporter can quickly be installed in a Kubernetes environment using the NVIDIA DCGM Exporter Helm chart. The instructions below are derived from the template provided by NVIDIA here.
First, add the gpu-helm-charts Helm repository and update it:

helm repo add gpu-helm-charts https://nvidia.github.io/dcgm-exporter/helm-charts && helm repo update
1. Create the ConfigMap containing the Datadog-recommended metrics from the Installation section, as well as the RoleBinding and Role used by the DCGM pods to retrieve the ConfigMap, using the manifest below:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dcgm-exporter-read-datadog-cm
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["datadog-dcgm-exporter-configmap"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dcgm-exporter-datadog
  namespace: default
subjects:
- kind: ServiceAccount
  name: dcgm-datadog-dcgm-exporter
  namespace: default
roleRef:
  kind: Role
  name: dcgm-exporter-read-datadog-cm
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: datadog-dcgm-exporter-configmap
  namespace: default
data:
  metrics: |
    # Copy the content from the Installation section.
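Save the manifest and apply it; the file name below is only a placeholder:

kubectl apply -f dcgm-exporter-datadog-rbac.yaml
# Confirm that the ConfigMap exists in the target namespace
kubectl get configmap datadog-dcgm-exporter-configmap -n default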
2. Create the file dcgm-values.yaml with the following content:

# Exposing more metrics than the default for additional monitoring - this requires the use of a dedicated ConfigMap for which the Kubernetes ServiceAccount used by the exporter has access thanks to step 1.
# Ref: https://github.com/NVIDIA/dcgm-exporter/blob/e55ec750def325f9f1fdbd0a6f98c932672002e4/deployment/values.yaml#L38
arguments: ["-m", "default:datadog-dcgm-exporter-configmap"]
# Datadog Autodiscovery V2 annotations
podAnnotations:
  ad.datadoghq.com/exporter.checks: |-
    {
      "dcgm": {
        "instances": [
          {
            "openmetrics_endpoint": "http://%%host%%:9400/metrics"
          }
        ]
      }
    }
# Optional - Disabling the ServiceMonitor which requires Prometheus CRD - can be re-enabled if Prometheus CRDs are installed in your cluster
serviceMonitor:
  enabled: false
3. Install the DCGM Exporter chart in the default namespace with the following command, run from the directory containing your dcgm-values.yaml:

helm install dcgm-datadog gpu-helm-charts/dcgm-exporter -n default -f dcgm-values.yaml
Note: You can modify the release name dcgm-datadog as well as the namespace, but you must update the manifest from step 1 accordingly.
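To confirm the rollout, check that the exporter pods are running and carry the Autodiscovery annotation. A sketch, assuming the release name and namespace above; the label selector may vary with the chart version:

kubectl get pods -n default -l app.kubernetes.io/name=dcgm-exporter
# The pod annotations should include ad.datadoghq.com/exporter.checks
kubectl describe pods -n default -l app.kubernetes.io/name=dcgm-exporter | grep -A 3 Annotations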
The DCGM exporter can be installed in a Kubernetes environment by using NVIDIA GPU Operator. The instructions below are derived from the template provided by NVIDIA here.
First, add the nvidia Helm repository and update it:

helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
1. Create the metrics file dcgm-metrics.csv from NVIDIA's DCP metrics template:

curl https://raw.githubusercontent.com/NVIDIA/dcgm-exporter/main/etc/dcp-metrics-included.csv > dcgm-metrics.csv
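The downloaded file contains only NVIDIA's defaults. If you also want the Datadog-recommended fields from the Installation section, you can append them, for example:

# Append two of the Datadog additional recommended fields shown above
cat <<'EOF' >> dcgm-metrics.csv
# Datadog additional recommended fields
DCGM_FI_DEV_COUNT ,counter ,Number of Devices on the node.
DCGM_FI_DEV_FAN_SPEED ,gauge ,Fan speed for the device in percent 0-100.
EOF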
2. Create the namespace gpu-operator if one is not already present:

kubectl create namespace gpu-operator

3. Create a ConfigMap named metrics-config from your metrics file:

kubectl create configmap metrics-config -n gpu-operator --from-file=dcgm-metrics.csv
4. Create the file dcgm-values.yaml with the following content:

# Refer to NVIDIA documentation for the driver and toolkit for your GPU-enabled nodes - example below for Amazon Linux 2 g5.xlarge
driver:
  enabled: true
toolkit:
  version: v1.13.5-centos7
# Using custom metrics configuration to collect recommended Datadog additional metrics - requires the creation of the metrics-config ConfigMap from the previous step
# Ref: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html#custom-metrics-config
dcgmExporter:
  config:
    name: metrics-config
  env:
  - name: DCGM_EXPORTER_COLLECTORS
    value: /etc/dcgm-exporter/dcgm-metrics.csv
# Adding Datadog autodiscovery V2 annotations
daemonsets:
  annotations:
    ad.datadoghq.com/nvidia-dcgm-exporter.checks: |-
      {
        "dcgm": {
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:9400/metrics"
            }
          ]
        }
      }
5. Install the GPU Operator in the gpu-operator namespace with the following command, run from the directory containing your dcgm-values.yaml:

helm install datadog-dcgm-gpu-operator -n gpu-operator nvidia/gpu-operator -f dcgm-values.yaml
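Once the operator is up, it deploys an nvidia-dcgm-exporter DaemonSet on the GPU nodes. A quick check, assuming the gpu-operator namespace used above:

kubectl get daemonsets -n gpu-operator
kubectl get pods -n gpu-operator | grep dcgm-exporter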
Edit the dcgm.d/conf.yaml file (located in the conf.d/ folder at the root of your Agent’s configuration directory) to start collecting your GPU metrics. See the sample dcgm.d/conf.yaml for all available configuration options.
instances:
  ## @param openmetrics_endpoint - string - required
  ## The URL exposing metrics in the OpenMetrics format.
  ##
  ## Set this to <listenAddress>/<handlerPath> as configured in your DCGM Server.
  #
  - openmetrics_endpoint: http://localhost:9400/metrics
Use the extra_metrics configuration field to add metrics that go beyond the ones Datadog supports out of the box. See the NVIDIA docs for the full list of metrics that dcgm-exporter can collect. Make sure to enable these fields in the dcgm-exporter configuration as well.
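For illustration, a sketch of an instance combining the endpoint with extra_metrics; DCGM_FI_DEV_NEW_METRIC is a placeholder field name, and the matching line must also be enabled in the exporter's counters CSV:

instances:
  - openmetrics_endpoint: http://localhost:9400/metrics
    # Hypothetical field, remapped to dcgm.new_metric
    extra_metrics:
      - DCGM_FI_DEV_NEW_METRIC: new_metric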
Set Autodiscovery Integrations Templates as Docker labels on your DCGM exporter container:
LABEL "com.datadoghq.ad.check_names"='["dcgm"]'
LABEL "com.datadoghq.ad.init_configs"='[{}]'
LABEL "com.datadoghq.ad.instances"='[{"openmetrics_endpoint": "http://%%host%%:9400/metrics"}]'
Note: If you followed the instructions for the DCGM Exporter Helm chart or GPU Operator, the annotations are already applied to the pods and the instructions below can be ignored.
Set Autodiscovery Integrations Templates as pod annotations on your application container. Aside from this, templates can also be configured with a file, a configmap, or a key-value store.
Annotations v2 (for Datadog Agent v7.47+)
apiVersion: v1
kind: Pod
metadata:
  name: '<POD_NAME>'
  annotations:
    ad.datadoghq.com/dcgm.checks: |
      {
        "dcgm": {
          "init_config": {},
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:9400/metrics"
            }
          ]
        }
      }
spec:
  containers:
    - name: dcgm
When you’re finished making configuration changes, restart the Agent.
Run the Agent’s status subcommand and look for dcgm under the Checks section.
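For example, on a host install (the binary name and required privileges may vary by platform):

sudo datadog-agent status | grep -A 10 dcgm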
The out-of-the-box monitors that come with this integration ship with default alert thresholds. For example, the GPU temperature threshold is based on an acceptable range for industrial devices. Datadog recommends that you check these values to make sure they suit your particular needs.
dcgm.clock_throttle_reasons (gauge) | Current clock throttle reasons (bitmask of DCGM_CLOCKS_THROTTLE_REASON_*) |
dcgm.correctable_remapped_rows.count (count) | Number of remapped rows for correctable errors. Shown as row |
dcgm.dec_utilization (gauge) | Decoder utilization (in %). Shown as percent |
dcgm.device.count (count) | Number of Devices on the node. Shown as device |
dcgm.dram.active (gauge) | Ratio of cycles the device memory interface is active sending or receiving data (in %). Shown as fraction |
dcgm.enc_utilization (gauge) | Encoder utilization (in %). Shown as percent |
dcgm.fan_speed (gauge) | Fan speed for the device in percent 0-100. Shown as percent |
dcgm.frame_buffer.free (gauge) | Free Frame Buffer in MB. Shown as megabyte |
dcgm.frame_buffer.reserved (gauge) | Reserved Frame Buffer in MB. Shown as megabyte |
dcgm.frame_buffer.total (gauge) | Total Frame Buffer of the GPU in MB. Shown as megabyte |
dcgm.frame_buffer.used (gauge) | Used Frame Buffer in MB. Shown as megabyte |
dcgm.frame_buffer.used_percent (gauge) | Percentage used of Frame Buffer: Used/(Total - Reserved). Range 0.0-1.0 Shown as fraction |
dcgm.gpu_utilization (gauge) | GPU utilization (in %). Shown as percent |
dcgm.gr_engine_active (gauge) | Ratio of time the graphics engine is active (in %). Shown as fraction |
dcgm.mem.clock (gauge) | Memory clock frequency (in MHz). Shown as megahertz |
dcgm.mem.copy_utilization (gauge) | Memory utilization (in %). Shown as percent |
dcgm.mem.temperature (gauge) | Memory temperature (in C). Shown as degree celsius |
dcgm.nvlink_bandwidth.count (count) | Total number of NVLink bandwidth counters for all lanes |
dcgm.pcie_replay.count (count) | Total number of PCIe retries. |
dcgm.pcie_rx_throughput.count (count) | PCIe Rx utilization information. |
dcgm.pcie_tx_throughput.count (count) | PCIe Tx utilization information. |
dcgm.pipe.fp16_active (gauge) | Ratio of cycles the fp16 pipes are active (in %). Shown as fraction |
dcgm.pipe.fp32_active (gauge) | Ratio of cycles the fp32 pipes are active (in %). Shown as fraction |
dcgm.pipe.fp64_active (gauge) | Ratio of cycles the fp64 pipes are active (in %). Shown as fraction |
dcgm.pipe.tensor_active (gauge) | Ratio of cycles the tensor (HMMA) pipe is active (in %). Shown as fraction |
dcgm.power_management_limit (gauge) | Current power limit for the device. Shown as watt |
dcgm.power_usage (gauge) | Power draw (in W). Shown as watt |
dcgm.pstate (gauge) | Performance state (P-State) 0-15. 0=highest |
dcgm.row_remap_failure (gauge) | Whether remapping of rows has failed. |
dcgm.slowdown_temperature (gauge) | Slowdown temperature for the device. Shown as degree celsius |
dcgm.sm_active (gauge) | The ratio of cycles an SM has at least 1 warp assigned (in %). Shown as fraction |
dcgm.sm_clock (gauge) | SM clock frequency (in MHz). Shown as megahertz |
dcgm.sm_occupancy (gauge) | The ratio of number of warps resident on an SM (in %). Shown as fraction |
dcgm.temperature (gauge) | GPU temperature (in C). Shown as degree celsius |
dcgm.total_energy_consumption.count (count) | Total energy consumption since boot (in mJ). Shown as millijoule |
dcgm.uncorrectable_remapped_rows.count (count) | Number of remapped rows for uncorrectable errors. Shown as row |
dcgm.vgpu_license_status (gauge) | vGPU License status |
dcgm.xid_errors (gauge) | Value of the last XID error encountered. |
The DCGM integration does not include any events.
See service_checks.json for a list of service checks that this integration provides.
If you have added some metrics that don’t appear in the metadata.csv above but appear in your account with the format DCGM_FI_DEV_NEW_METRIC, remap these metrics in the dcgm.d/conf.yaml configuration file:
## @param extra_metrics - (list of string or mapping) - optional
## This list defines metrics to collect from the `openmetrics_endpoint`, in addition to
## what the check collects by default. If the check already collects a metric, then
## metric definitions here take precedence. Metrics may be defined in 3 ways:
...
The example below appends the NEW_METRIC part to the namespace (dcgm.), giving dcgm.new_metric:
extra_metrics:
- DCGM_FI_DEV_NEW_METRIC: new_metric
If a field is not being collected even after enabling it in default-counters.csv and performing a curl request to host:9400/metrics, the dcgm-exporter developers recommend checking the log file at var/log/nv-hostengine.log.
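If you deployed the exporter with the Docker command above, one way to inspect that log from the host is a sketch like the following (container name as used in this guide; the log path inside the container may differ):

sudo docker exec dcgm-exporter tail -n 50 /var/log/nv-hostengine.log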
Note: The dcgm-exporter is a thin wrapper around lower-level libraries and drivers which do the actual reporting.
In some cases, the DCGM_FI_DEV_GPU_UTIL metric can cause heavier resource consumption. If you’re experiencing this issue:

1. Disable DCGM_FI_DEV_GPU_UTIL in default-counters.csv.
2. Enable the following DCP metrics in default-counters.csv (a sketch of the edit follows this list):

DCGM_FI_PROF_DRAM_ACTIVE
DCGM_FI_PROF_GR_ENGINE_ACTIVE
DCGM_FI_PROF_PCIE_RX_BYTES
DCGM_FI_PROF_PCIE_TX_BYTES
DCGM_FI_PROF_PIPE_FP16_ACTIVE
DCGM_FI_PROF_PIPE_FP32_ACTIVE
DCGM_FI_PROF_PIPE_FP64_ACTIVE
DCGM_FI_PROF_PIPE_TENSOR_ACTIVE
DCGM_FI_PROF_SM_ACTIVE
DCGM_FI_PROF_SM_OCCUPANCY
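A sketch of the corresponding default-counters.csv edit, commenting out the utilization counter and enabling two of the DCP fields (the full list is above):

# DCGM_FI_DEV_GPU_UTIL ,gauge ,GPU utilization (in %).  <- disabled due to resource consumption
DCGM_FI_PROF_GR_ENGINE_ACTIVE ,gauge ,Ratio of time the graphics engine is active (in %).
DCGM_FI_PROF_SM_ACTIVE ,gauge ,The ratio of cycles an SM has at least 1 warp assigned (in %).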
Contact Datadog support.