Collect your exposed Prometheus and OpenMetrics metrics from your application running inside Kubernetes by using the Datadog Agent and the OpenMetrics or Prometheus integrations. By default, all metrics retrieved by the generic Prometheus check are considered custom metrics.
Starting with version 6.5.0, the Agent includes OpenMetrics and Prometheus checks capable of scraping Prometheus endpoints. Datadog recommends using the OpenMetrics check since it is more efficient and fully supports the Prometheus text format. For more advanced usage of the OpenMetricsCheck interface, including writing a custom check, see the Developer Tools section. Use the Prometheus check only when the metrics endpoint does not support a text format.
This page explains the basic usage of these checks, which enable you to scrape custom metrics from Prometheus endpoints. For an explanation of how Prometheus and OpenMetrics metrics map to Datadog metrics, see the Mapping Prometheus Metrics to Datadog Metrics guide.
Configure your OpenMetrics or Prometheus check using Autodiscovery, by applying the following annotations to your pod exposing the OpenMetrics/Prometheus metrics:
Note: AD Annotations v2 was introduced in Datadog Agent version 7.36 to simplify integration configuration. For previous versions of the Datadog Agent, use AD Annotations v1.
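A minimal sketch of the AD Annotations v2 format is shown below (the pod and container names are placeholders; verify the exact annotation shape against the OpenMetrics integration documentation for your Agent version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: '<POD_NAME>'
  annotations:
    # AD Annotations v2: one "checks" annotation per container name
    ad.datadoghq.com/<CONTAINER_NAME>.checks: |
      {
        "openmetrics": {
          "init_config": {},
          "instances": [
            {
              "openmetrics_endpoint": "http://%%host%%:%%port%%/<PROMETHEUS_ENDPOINT>",
              "namespace": "<METRICS_NAMESPACE_PREFIX_FOR_DATADOG>",
              "metrics": [{"<METRIC_TO_FETCH>": "<NEW_METRIC_NAME>"}]
            }
          ]
        }
      }
spec:
  containers:
    - name: '<CONTAINER_NAME>'
```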
With the following configuration placeholder values:
| Placeholder | Description |
| --- | --- |
| `<CONTAINER_NAME>` | Matches the name of the container that exposes the metrics. |
| `<PROMETHEUS_ENDPOINT>` | URL path for the metrics served by the container, in Prometheus format. |
| `<METRICS_NAMESPACE_PREFIX_FOR_DATADOG>` | Namespace prefixed to every metric when viewed in Datadog. |
| `<METRIC_TO_FETCH>` | Prometheus metric key to be fetched from the Prometheus endpoint. |
| `<NEW_METRIC_NAME>` | Transforms the `<METRIC_TO_FETCH>` metric key to `<NEW_METRIC_NAME>` in Datadog. |
The metrics configuration is a list of metrics to retrieve as custom metrics. Include each metric to fetch and the desired metric name in Datadog as key-value pairs, for example, {"<METRIC_TO_FETCH>":"<NEW_METRIC_NAME>"}. To prevent excess custom metrics charges, Datadog recommends limiting the scope to only the metrics you need. Alternatively, you can provide a list of metric name strings, interpreted as regular expressions, to fetch the desired metrics under their current names. To match all metrics, use ".*" rather than "*".
Note: Regular expressions can potentially send a lot of custom metrics.
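For example, the following metrics value mixes the two forms: it renames one metric (the promhttp.requests name is illustrative), fetches another under its current name, and uses a regular expression to match every metric starting with go_memory:

```json
"metrics": [
  {"promhttp_metric_handler_requests": "promhttp.requests"},
  "promhttp_metric_handler_requests_in_flight",
  "go_memory.*"
]
```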
Use the Prometheus prometheus.yaml to launch an example Prometheus Deployment with the Autodiscovery configuration on the pod:
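A sketch of what such a prometheus.yaml can contain is shown below. The image tag, port, and namespace value are illustrative, and the metric list matches the metrics discussed on this page:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
      annotations:
        # AD Annotations v2 configuration for the "prometheus" container
        ad.datadoghq.com/prometheus.checks: |
          {
            "openmetrics": {
              "init_config": {},
              "instances": [
                {
                  "openmetrics_endpoint": "http://%%host%%:%%port%%/metrics",
                  "namespace": "documentation_example_kubernetes",
                  "metrics": [
                    "promhttp_metric_handler_requests",
                    "promhttp_metric_handler_requests_in_flight",
                    "go_memory.*"
                  ]
                }
              ]
            }
          }
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          ports:
            - containerPort: 9090
```

Apply it with `kubectl apply -f prometheus.yaml`.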
Go to your Metrics Summary page to see the metrics collected from this example pod. This configuration collects the metrics promhttp_metric_handler_requests, promhttp_metric_handler_requests_in_flight, and all exposed metrics starting with go_memory.
Metric collection with Prometheus annotations
With Prometheus Autodiscovery, the Datadog Agent detects native Prometheus annotations (for example: prometheus.io/scrape, prometheus.io/path, prometheus.io/port) and automatically schedules OpenMetrics checks to collect Prometheus metrics in Kubernetes.
Requirements
Datadog Agent v7.27+ or v6.27+ (for Pod checks)
Datadog Cluster Agent v1.11+ (for service and endpoint checks)
Configuration
Before enabling this feature, check which pods and services have the prometheus.io/scrape=true annotation by running the following commands:
kubectl get pods -o=jsonpath='{.items[?(@.metadata.annotations.prometheus\.io/scrape=="true")].metadata.name}' --all-namespaces
kubectl get services -o=jsonpath='{.items[?(@.metadata.annotations.prometheus\.io/scrape=="true")].metadata.name}' --all-namespaces
Once the Prometheus scrape feature is enabled, the Datadog Agent collects custom metrics from these resources. If you do not want to collect the custom metrics from these resources, you can remove this annotation or update the Autodiscovery rules as described in the advanced configuration section.
Note: Enabling this feature without advanced configuration can cause a significant increase in custom metrics, which can have billing implications. See the advanced configuration section to learn how to collect metrics from only a subset of containers, pods, or services.
Basic configuration
Update your Datadog Operator configuration to contain the following:
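A sketch of the corresponding DatadogAgent resource is shown below. The field names follow the v2alpha1 CRD; verify them against the CRD reference for your Operator version:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  features:
    prometheusScrape:
      enabled: true
      # Also schedule checks for annotated services (requires the Cluster Agent)
      enableServiceEndpoints: true
```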
If the Cluster Agent is enabled, inside its manifest cluster-agent-deployment.yaml, add the following environment variables for the Cluster Agent container:
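A sketch of the environment variables to add (the names mirror the prometheus_scrape.* configuration keys; verify them against your Cluster Agent version):

```yaml
env:
  - name: DD_PROMETHEUS_SCRAPE_ENABLED
    value: "true"
  # Enable checks for annotated services and endpoints
  - name: DD_PROMETHEUS_SCRAPE_SERVICE_ENDPOINTS
    value: "true"
```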
This instructs the Datadog Agent to detect the pods that have native Prometheus annotations and generate corresponding OpenMetrics checks.
It also instructs the Datadog Cluster Agent (if enabled) to detect the services that have native Prometheus annotations and generate corresponding OpenMetrics checks.
prometheus.io/scrape=true: Required.
prometheus.io/path: Optional, defaults to /metrics.
prometheus.io/port: Optional, default is %%port%%, a template variable that is replaced by the container/service port.
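Put together, a pod that should be scraped by this feature can carry annotations like the following (the path and port values are illustrative):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"  # optional, this is the default
    prometheus.io/port: "9090"      # optional, defaults to %%port%%
```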
This configuration generates a check that collects all metrics exposed using the default configuration of the OpenMetrics integration.
Advanced configuration
You can further configure metric collection (beyond native Prometheus annotations) with the additionalConfigs field.
Additional OpenMetrics check configurations
Use additionalConfigs.configurations to define additional OpenMetrics check configurations. See the list of supported OpenMetrics parameters that you can pass in additionalConfigs.
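For example, a sketch of passing one extra OpenMetrics parameter through additionalConfigs in the Operator configuration (the max_returned_metrics value is illustrative; additionalConfigs is a multi-line string):

```yaml
spec:
  features:
    prometheusScrape:
      enabled: true
      additionalConfigs: |-
        - configurations:
            # Raise the per-check metric limit for large endpoints
            - max_returned_metrics: 20000
```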
Custom Autodiscovery rules
Use additionalConfigs.autodiscovery to define custom Autodiscovery rules. These rules can be based on container names, Kubernetes annotations, or both.
If both kubernetes_container_names and kubernetes_annotations are defined, AND logic is used (both rules must match).
Examples
The following configuration targets a container named my-app running in a pod with the annotation app=my-app. The OpenMetrics check configuration is customized to enable the send_distribution_buckets option and define a custom timeout of 5 seconds.
Update your Datadog Operator configuration to contain the following:
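A sketch of a DatadogAgent resource implementing the example above (field names follow the v2alpha1 CRD; verify against your Operator version):

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  features:
    prometheusScrape:
      enabled: true
      additionalConfigs: |-
        # Both autodiscovery rules must match (AND logic)
        - autodiscovery:
            kubernetes_container_names:
              - my-app
            kubernetes_annotations:
              include:
                app: my-app
          configurations:
            - send_distribution_buckets: true
              timeout: 5
```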
By default, all metrics retrieved by the generic Prometheus check are considered custom metrics. If you are monitoring off-the-shelf software and think it deserves an official integration, don’t hesitate to contribute!
Official integrations have their own dedicated directories. There’s a default instance mechanism in the generic check to hardcode the default configuration and metrics metadata. For example, reference the kube-proxy integration.
Further Reading
Additional helpful documentation, links, and articles: