Overview

Synthetic tests generate estimated usage metrics that allow you to track your usage. These metrics enable you to:

  • Understand how your usage evolves over time.
  • Visualize which teams, applications, or services are contributing the most to your Synthetic Monitoring usage.
  • Alert on unexpected usage spikes that can impact your billing.

To visualize or alert on your Synthetic Monitoring usage, use the following queries:

  • Single and Multistep API tests: sum:datadog.estimated_usage.synthetics.api_test_runs{*}.as_count()

  • Browser tests: sum:datadog.estimated_usage.synthetics.browser_test_runs{*}.as_count()

    Note: The pricing for browser tests is based on the number of steps (a browser test run is a simulation of a web transaction, up to 25 steps). See the Pricing documentation for more information.
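For example, to understand how your usage evolves over time, you can roll these metrics up into daily totals. The query below is a minimal sketch that sums API test runs per day (86400 seconds); swap in the browser test metric or another rollup interval as needed:

  sum:datadog.estimated_usage.synthetics.api_test_runs{*}.as_count().rollup(sum, 86400)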

For a higher level of refinement, scope or group these metrics by tags associated with your test, such as team or application.
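For instance, if your tests carry team and application tags, queries like the following break usage down per team or filter it to a single application (the tag names and the checkout value are hypothetical placeholders):

  sum:datadog.estimated_usage.synthetics.browser_test_runs{*} by {team}.as_count()
  sum:datadog.estimated_usage.synthetics.api_test_runs{application:checkout}.as_count()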

You can graph and monitor these metrics against static thresholds, or use machine-learning-based algorithms such as anomaly detection or forecasting to avoid being alerted on expected usage growth.
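As an illustration, a metric monitor with a static threshold could use a query like the first line below (the 24-hour window and the 100000 threshold are hypothetical placeholders), while the anomalies() and forecast() functions wrap the same base query for the machine-learning options:

  sum(last_1d):sum:datadog.estimated_usage.synthetics.api_test_runs{*}.as_count() > 100000
  anomalies(sum:datadog.estimated_usage.synthetics.api_test_runs{*}.as_count(), 'basic', 2)
  forecast(sum:datadog.estimated_usage.synthetics.browser_test_runs{*}.as_count(), 'linear', 1)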

Further Reading
