Track disk utilization and failed volumes on each of your HDFS DataNodes. This Agent check collects disk- and volume-related metrics, as well as block- and cache-related metrics.
Use this check (hdfs_datanode) and its counterpart check (hdfs_namenode), not the older two-in-one check (hdfs), which is deprecated.
Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.
The HDFS DataNode check is included in the Datadog Agent package, so you don’t need to install anything else on your DataNodes.
To configure this check for an Agent running on a host:
Edit the `hdfs_datanode.d/conf.yaml` file in the `conf.d/` folder at the root of your Agent's configuration directory. See the sample `hdfs_datanode.d/conf.yaml` for all available configuration options:
init_config:
instances:
## @param hdfs_datanode_jmx_uri - string - required
## The HDFS DataNode check retrieves metrics from the HDFS DataNode's JMX
## interface via HTTP(S) (not a JMX remote connection). This check must be
## installed on an HDFS DataNode. The HDFS DataNode JMX URI is composed of
## the DataNode's hostname and port.
##
## The hostname and port can be found in the hdfs-site.xml conf file under
## the property dfs.datanode.http.address
## https://hadoop.apache.org/docs/r3.1.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
#
- hdfs_datanode_jmx_uri: http://localhost:9864
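For reference, the relevant property in hdfs-site.xml typically looks like the following sketch; 0.0.0.0:9864 is the Hadoop 3 default, and your cluster's value may differ:

<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:9864</value>
</property>

You can also confirm that the endpoint is reachable from the Agent host with curl http://localhost:9864/jmx, which should return the DataNode's MBeans as JSON.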
For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.
| Parameter | Value |
| --- | --- |
| `<INTEGRATION_NAME>` | `hdfs_datanode` |
| `<INIT_CONFIG>` | blank or `{}` |
| `<INSTANCE_CONFIG>` | `{"hdfs_datanode_jmx_uri": "http://%%host%%:9864"}` |
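As a sketch, on Kubernetes these parameters map to Autodiscovery pod annotations such as the following; the pod and container name datanode and the image reference are examples, not required values:

apiVersion: v1
kind: Pod
metadata:
  name: datanode
  annotations:
    ad.datadoghq.com/datanode.check_names: '["hdfs_datanode"]'
    ad.datadoghq.com/datanode.init_configs: '[{}]'
    ad.datadoghq.com/datanode.instances: '[{"hdfs_datanode_jmx_uri": "http://%%host%%:9864"}]'
spec:
  containers:
    - name: datanode
      image: example/hadoop-datanode  # example image reference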
Log collection is available for Agent versions >6.0.
Collecting logs is disabled by default in the Datadog Agent. Enable it in the `datadog.yaml` file with:
logs_enabled: true
Add this configuration block to your `hdfs_datanode.d/conf.yaml` file to start collecting your DataNode logs:
logs:
- type: file
path: /var/log/hadoop-hdfs/*.log
source: hdfs_datanode
service: <SERVICE_NAME>
Change the `path` and `service` parameter values to match your environment.
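After updating the configuration, restart the Agent so the changes take effect. The exact command depends on your platform; on systemd-based Linux hosts it is typically:

sudo systemctl restart datadog-agent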
Run the Agent's status subcommand and look for `hdfs_datanode` under the Checks section.
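On most Linux installs, the status subcommand can be run as follows (the invocation may differ on other platforms):

sudo datadog-agent status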
| Metric | Type | Description |
| --- | --- | --- |
| `hdfs.datanode.cache_capacity` | gauge | Cache capacity in bytes. Shown as byte. |
| `hdfs.datanode.cache_used` | gauge | Cache used in bytes. Shown as byte. |
| `hdfs.datanode.dfs_capacity` | gauge | Disk capacity in bytes. Shown as byte. |
| `hdfs.datanode.dfs_remaining` | gauge | The remaining disk space in bytes. Shown as byte. |
| `hdfs.datanode.dfs_used` | gauge | Disk usage in bytes. Shown as byte. |
| `hdfs.datanode.estimated_capacity_lost_total` | gauge | The estimated capacity lost in bytes. Shown as byte. |
| `hdfs.datanode.last_volume_failure_date` | gauge | The date/time of the last volume failure, in milliseconds since epoch. Shown as millisecond. |
| `hdfs.datanode.num_blocks_cached` | gauge | The number of blocks cached. Shown as block. |
| `hdfs.datanode.num_blocks_failed_to_cache` | gauge | The number of blocks that failed to cache. Shown as block. |
| `hdfs.datanode.num_blocks_failed_to_uncache` | gauge | The number of blocks that failed to be removed from the cache. Shown as block. |
| `hdfs.datanode.num_failed_volumes` | gauge | Number of failed volumes. |
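As an illustration, a metric monitor that alerts when any DataNode reports a failed volume could use a query like the following; the aggregation, 5-minute window, and threshold are example choices, not recommendations:

max(last_5m):max:hdfs.datanode.num_failed_volumes{*} by {host} > 0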
The HDFS DataNode check does not include any events.
`hdfs.datanode.jmx.can_connect`
Returns `CRITICAL` if the Agent cannot connect to the DataNode's JMX interface for any reason; returns `OK` otherwise.
Statuses: ok, critical
Need help? Contact Datadog support.