Use this guide to get started monitoring your Google Cloud environment. This approach simplifies the setup for Google Cloud environments with multiple projects, allowing you to maximize your monitoring coverage.
If you don’t see an integration for a specific Google Cloud service, reach out to Datadog Support.
Integration | Description |
---|---|
App Engine | PaaS (platform as a service) to build scalable applications |
BigQuery | Enterprise data warehouse |
Bigtable | NoSQL Big Data database service |
Cloud SQL | MySQL database service |
Cloud APIs | Programmatic interfaces for all Google Cloud Platform services |
Cloud Armor | Network security service to help protect against denial-of-service and web attacks |
Cloud Composer | A fully managed workflow orchestration service |
Cloud Dataproc | A cloud service for running Apache Spark and Apache Hadoop clusters |
Cloud Dataflow | A fully-managed service for transforming and enriching data in stream and batch modes |
Cloud Filestore | High-performance, fully managed file storage |
Cloud Firestore | A flexible, scalable database for mobile, web, and server development |
Cloud Interconnect | Hybrid connectivity |
Cloud IoT | Secure device connection and management |
Cloud Load Balancing | Distribute load-balanced compute resources |
Cloud Logging | Real-time log management and analysis |
Cloud Memorystore for Redis | A fully managed in-memory data store service |
Cloud Router | Exchange routes between your VPC and on-premises networks by using BGP |
Cloud Run | Managed compute platform that runs stateless containers over HTTP |
Cloud Security Command Center | Threat reporting service |
Cloud Tasks | Distributed task queues |
Cloud TPU | Train and run machine learning models |
Compute Engine | High performance virtual machines |
Container Engine | Kubernetes, managed by Google |
Datastore | NoSQL database |
Firebase | Mobile platform for application development |
Functions | Serverless platform for building event-based microservices |
Kubernetes Engine | Cluster manager and orchestration system |
Machine Learning | Machine learning services |
Private Service Connect | Access managed services with private VPC connections |
Pub/Sub | Real-time messaging service |
Spanner | Horizontally scalable, globally consistent, relational database service |
Storage | Unified object storage |
Vertex AI | Build, train, and deploy custom machine learning (ML) models |
VPN | Managed network functionality |
Set up Datadog’s Google Cloud integration to collect metrics and logs from your Google Cloud services.
1. Service account impersonation and automatic project discovery rely on certain roles and APIs being enabled in order to monitor projects. Before you start, ensure the required APIs are enabled for each of the projects you want to monitor; a sketch for enabling them follows this list.
2. Ensure that any projects being monitored are not configured as scoping projects that pull in metrics from multiple other projects.
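The exact list of required APIs is shown in the in-app instructions and is not reproduced here. As a hedged sketch, the following gcloud command enables APIs that are commonly required for monitoring; <PROJECT_ID> is a placeholder, and the precise set should be confirmed in the integration tile:

# Sketch only: enable APIs commonly needed by the integration.
# Confirm the exact API list in Datadog's in-app instructions.
gcloud services enable \
    monitoring.googleapis.com \
    compute.googleapis.com \
    cloudasset.googleapis.com \
    cloudresourcemanager.googleapis.com \
    --project=<PROJECT_ID>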
Organization-level (or folder-level) monitoring is recommended for comprehensive coverage of all projects, including any future projects that may be created in an org or folder.
Note: Your Google Cloud Identity user account must have the Admin role assigned to it at the desired scope to complete the setup in Google Cloud (for example, Organization Admin).
Note: The Browser role is only required in the default project of the service account. Other projects require only the other listed roles.
Note: If you previously configured access using a shared Datadog principal, you can revoke the permission for that principal after you complete these steps.
Note: Keep this window open for Section 4.
The service account email has the format <SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com. Metrics appear in Datadog approximately 15 minutes after setup.
By default, Google Cloud attributes the cost of monitoring API calls, as well as API quota usage, to the project containing the service account for this integration. As a best practice for Google Cloud environments with multiple projects, enable per-project cost attribution of monitoring API calls and API quota usage. With this enabled, costs and quota usage are attributed to the project being queried, rather than the project containing the service account. This provides visibility into the monitoring costs incurred by each project, and also helps to prevent reaching API rate limits.
To enable this feature:
You can use service account impersonation and automatic project discovery to integrate Datadog with Google Cloud.
This method enables you to monitor all projects visible to a service account by assigning IAM roles in the relevant projects. You can assign these roles to projects individually, or you can configure Datadog to monitor groups of projects by assigning these roles at the organization or folder level. Assigning roles in this way allows Datadog to automatically discover and monitor all projects in the given scope, including any new projects that may be added to the group in the future.
In Datadog, navigate to Integrations > Google Cloud Platform.
Click Add GCP Account. If you have no configured projects, you are automatically redirected to this page.
If you have not generated a Datadog principal for your org, click the Generate Principal button.
Copy your Datadog principal and keep it for the next section.
Note: Keep this window open for the next section.
In the Google Cloud console, under the Service Accounts menu, find the service account you created in the first section.
Go to the Permissions tab and click Grant Access.
Paste your Datadog principal into the New principals text box.
Assign the role of Service Account Token Creator and click Save.
Note: If you previously configured access using a shared Datadog principal, you can revoke the permission for that principal after you complete these steps.
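If you prefer the CLI over the console, a roughly equivalent grant is sketched below; the service account email and Datadog principal values are placeholders:

# Sketch: allow the Datadog principal to impersonate the service account.
gcloud iam service-accounts add-iam-policy-binding <SA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \
    --member="serviceAccount:<DATADOG_PRINCIPAL>" \
    --role="roles/iam.serviceAccountTokenCreator"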
The service account email has the format <sa-name>@<project-id>.iam.gserviceaccount.com. In approximately fifteen minutes, metrics appear in Datadog.
To view your metrics, use the left menu to navigate to Metrics > Summary and search for gcp:
Optionally, you can choose which Google Cloud services you monitor with Datadog. Configuring metrics collection for specific Google services lets you optimize your Google Cloud Monitoring API costs, while retaining visibility into your critical services.
Under the Metric Collection tab in Datadog’s Google Cloud integration page, unselect the metric namespaces to exclude. You can also choose to disable collection of all metric namespaces.
By default, you’ll see all your Google Compute Engine (GCE) instances in Datadog’s infrastructure overview. Datadog automatically tags them with GCE host tags and any GCE labels you may have added.
Optionally, you can use tags to limit the instances that are pulled into Datadog. Under a project's Metric Collection tab, enter the tags in the Limit Metric Collection Filters textbox. Only hosts that match one of the defined tags are imported into Datadog. You can use wildcards (? for a single character, * for multiple characters) to match many hosts, or ! to exclude certain hosts. This example includes all c1* sized instances, but excludes staging hosts:
datadog:monitored,env:production,!env:staging,instance-type:c1.*
See Google’s Organize resources using labels page for more details.
Use the Datadog Agent to collect the most granular, low-latency metrics from your infrastructure. Install the Agent on any host, including GKE, to get deeper insights from the traces and logs it can collect. For more information, see Why should I install the Datadog Agent on my cloud instances?
Forward logs from your Google Cloud services to Datadog using Google Cloud Dataflow and the Datadog template. This method provides both compression and batching of events before forwarding to Datadog.
You can use the terraform-gcp-datadog-integration module to manage this infrastructure through Terraform, or follow the instructions in this section to:
You have full control over which logs are sent to Datadog through the logging filters you create in the log sink, including GCE and GKE logs. See Google’s Logging query language page for information about writing filters. For a detailed examination of the created architecture, see Stream logs from Google Cloud to Datadog in the Cloud Architecture Center.
Note: You must enable the Dataflow API to use Google Cloud Dataflow. See Enabling APIs in the Google Cloud documentation for more information.
To collect logs from applications running in GCE or GKE, you can also use the Datadog Agent.
Go to the Cloud Pub/Sub console and create a new topic. Select the option Add a default subscription to simplify the setup.
Note: You can also manually configure a Cloud Pub/Sub subscription with the Pull delivery type. If you manually create your Pub/Sub subscription, leave the Enable dead lettering box unchecked. For more details, see Unsupported Pub/Sub features.
Give that topic an explicit name such as export-logs-to-datadog and click Create.
Create an additional topic and default subscription to handle any log messages rejected by the Datadog API. The name of this topic is used within the Datadog Dataflow template as part of the path configuration for the outputDeadletterTopic template parameter. When you have inspected and corrected any issues in the failed messages, send them back to the original export-logs-to-datadog topic by running a Pub/Sub to Pub/Sub template job.
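The console steps above can also be sketched with gcloud; the subscription and deadletter names below are illustrative choices, not required values:

# Sketch: create the export topic and pull subscription,
# plus a deadletter topic and subscription for rejected messages.
gcloud pubsub topics create export-logs-to-datadog
gcloud pubsub subscriptions create export-logs-to-datadog-sub \
    --topic=export-logs-to-datadog
gcloud pubsub topics create export-logs-to-datadog-deadletter
gcloud pubsub subscriptions create export-logs-to-datadog-deadletter-sub \
    --topic=export-logs-to-datadog-deadletter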
Datadog recommends creating a secret in Secret Manager with your valid Datadog API key value, for later use in the Datadog Dataflow template.
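As a sketch, the secret can be created from the CLI; the secret name datadog-api-key is an arbitrary placeholder:

# Sketch: store the Datadog API key value in Secret Manager.
gcloud secrets create datadog-api-key --replication-policy="automatic"
printf '%s' "<DD_API_KEY>" | gcloud secrets versions add datadog-api-key --data-file=-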
Warning: Cloud Pub/Sub is subject to Google Cloud quotas and limitations. If your log volume exceeds those limitations, Datadog recommends splitting your logs over several topics. See the Monitor the Pub/Sub Log Forwarding section for information on setting up monitor notifications if you approach those limits.
The default behavior for Dataflow pipeline workers is to use your project’s Compute Engine default service account, which grants permissions to all resources in the project. If you are forwarding logs from a Production environment, you should instead create a custom worker service account with only the necessary roles and permissions, and assign this service account to your Dataflow pipeline workers.
roles/dataflow.admin
roles/dataflow.worker
roles/pubsub.viewer
roles/pubsub.subscriber
roles/pubsub.publisher
roles/secretmanager.secretAccessor
roles/storage.objectAdmin
Note: If you don’t create a custom service account for the Dataflow pipeline workers, ensure that the default Compute Engine service account has the required permissions above.
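A minimal sketch of creating such a worker service account follows; the account name is a placeholder, and the role binding must be repeated for each role listed above:

# Sketch: create a custom Dataflow worker service account.
gcloud iam service-accounts create dataflow-datadog-worker \
    --display-name="Dataflow worker for Datadog log forwarding"
# Bind one required role; repeat for each role in the list above.
gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member="serviceAccount:dataflow-datadog-worker@<PROJECT_ID>.iam.gserviceaccount.com" \
    --role="roles/dataflow.worker"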
Go to the Logs Explorer page in the Google Cloud console.
From the Log Router tab, select Create Sink.
Provide a name for the sink.
Choose Cloud Pub/Sub as the destination and select the Cloud Pub/Sub topic that was created for that purpose. Note: The Cloud Pub/Sub topic can be located in a different project.
Choose the logs you want to include in the sink with an optional inclusion or exclusion filter. You can filter the logs with a search query, or use the sample function. For example, to include only 10% of the logs with a severity level of ERROR, create an inclusion filter with severity="ERROR" AND sample(insertId, 0.1).
Click Create Sink.
Note: It is possible to create several exports from Google Cloud Logging to the same Cloud Pub/Sub topic with different sinks.
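The same sink can be sketched from the CLI; the sink name and filter below are illustrative, and the sink's writer identity still needs permission to publish to the topic:

# Sketch: route logs matching a filter to the Pub/Sub topic.
gcloud logging sinks create datadog-log-sink \
    pubsub.googleapis.com/projects/<PROJECT_ID>/topics/export-logs-to-datadog \
    --log-filter='severity>=ERROR'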
Go to the Create job from template page in the Google Cloud console.
Give the job a name and select a Dataflow regional endpoint.
Select Pub/Sub to Datadog in the Dataflow template dropdown, and the Required parameters section appears.
a. Select the input subscription in the Pub/Sub input subscription dropdown.
b. Enter the following in the Datadog Logs API URL field:
https://
Note: Ensure that the Datadog site selector on the right of the page is set to your Datadog site before copying the URL above.
c. Select the topic created to receive message failures in the Output deadletter Pub/Sub topic dropdown.
d. Specify a path for temporary files in your storage bucket in the Temporary location field.
Under Optional Parameters, check Include full Pub/Sub message in the payload.
If you created a secret in Secret Manager with your Datadog API key value as mentioned in step 1, enter the resource name of the secret in the Google Cloud Secret Manager ID field.
See Template parameters in the Dataflow template for details on using the other available options:

- apiKeySource=KMS with apiKeyKMSEncryptionKey set to your Cloud KMS key ID and apiKey set to the encrypted API key
- apiKeySource=PLAINTEXT with apiKey set to the plaintext API key

Note: If you have a shared VPC, see the Specify a network and subnetwork page in the Dataflow documentation for guidelines on specifying the Network and Subnetwork parameters.
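For reference, launching the job from the CLI might look like the sketch below. The template path and parameter names follow Google's Pub/Sub to Datadog template as commonly documented, but they are assumptions here; verify them against the Template parameters reference before use:

# Sketch: run the Pub/Sub to Datadog template with an API key read from Secret Manager.
# Template path and parameter names are assumptions; verify before use.
gcloud dataflow jobs run export-logs-to-datadog-job \
    --gcs-location=gs://dataflow-templates-<REGION>/latest/Cloud_PubSub_to_Datadog \
    --region=<REGION> \
    --service-account-email=<WORKER_SA_EMAIL> \
    --staging-location=gs://<BUCKET>/temp \
    --parameters=inputSubscription=projects/<PROJECT_ID>/subscriptions/export-logs-to-datadog-sub,url=https://<DATADOG_LOGS_INTAKE_URL>,apiKeySource=SECRET_MANAGER,apiKeySecretId=projects/<PROJECT_ID>/secrets/datadog-api-key/versions/latest,outputDeadletterTopic=projects/<PROJECT_ID>/topics/export-logs-to-datadog-deadletter,includePubsubMessage=true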
New logging events delivered to the Cloud Pub/Sub topic appear in the Datadog Log Explorer.
Note: You can use the Google Cloud Pricing Calculator to calculate potential costs.
The Google Cloud Pub/Sub integration provides helpful metrics to monitor the status of the log forwarding:
- gcp.pubsub.subscription.num_undelivered_messages for the number of messages pending delivery
- gcp.pubsub.subscription.oldest_unacked_message_age for the age of the oldest unacknowledged message in a subscription

Use these metrics with a metric monitor to receive alerts for the messages in your input and deadletter subscriptions.
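For example, a metric monitor query along the lines of the sketch below alerts when the input subscription backs up; the threshold and the subscription_id tag value are illustrative placeholders:

avg(last_10m):avg:gcp.pubsub.subscription.num_undelivered_messages{subscription_id:<SUBSCRIPTION_ID>} > 1000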
Use Datadog’s Google Cloud Dataflow integration to monitor all aspects of your Dataflow pipelines. You can see all your key Dataflow metrics on the out-of-the-box dashboard, enriched with contextual data such as information about the GCE instances running your Dataflow workloads, and your Pub/Sub throughput.
You can also use a preconfigured Recommended Monitor to set up notifications for increases in backlog time in your pipeline. For more information, read Monitor your Dataflow pipelines with Datadog in the Datadog blog.
Collecting Google Cloud logs with a Pub/Sub Push subscription is in the process of being deprecated.
The above documentation for the Push subscription is only maintained for troubleshooting or modifying legacy setups.
Use a Pull subscription with the Datadog Dataflow template as described under Dataflow Method to forward your Google Cloud logs to Datadog instead.
Expanded BigQuery monitoring is in Preview. Use this form to sign up to start gaining insights into your query performance.
Expanded BigQuery monitoring provides granular visibility into your BigQuery environments.
To monitor the performance of your BigQuery jobs, grant the BigQuery Resource Viewer role to the Datadog service account for each Google Cloud project.
Notes:
BigQuery data quality monitoring provides quality metrics from your BigQuery tables (such as freshness and updates to row count and size). Explore the data from your tables in depth on the Data Quality Monitoring page.
To collect quality metrics, grant the BigQuery Metadata Viewer role to the Datadog Service Account for each BigQuery table you are using.
Note: BigQuery Metadata Viewer can be applied at a BigQuery table, dataset, project, or organization level.
Datadog recommends setting up a new logs index called data-observability-queries, and indexing your BigQuery job logs for 15 days. Use the following index filter to pull in the logs:
service:data-observability @platform:*
See the Log Management pricing page for cost estimation.
Resource changes collection is in Preview! To request access, use the attached form.
Select Enable Resource Collection in the Resource Collection tab of the Google Cloud integration page. This allows you to receive resource events in Datadog when Google's Cloud Asset Inventory detects changes in your cloud resources.
Then, follow the steps below to forward change events from a Pub/Sub topic to the Datadog Event Explorer.
Create a Pub/Sub topic and a pull subscription to receive the change events, using export-asset-changes-to-datadog for the subscription name.

To read from this Pub/Sub subscription, the Google Cloud service account used by the integration needs the pubsub.subscriptions.consume permission for the subscription. A default role with minimal permissions that allows this is the Pub/Sub Subscriber role. Grant this role to the integration's service account on the export-asset-changes-to-datadog subscription, as sketched below.
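A CLI sketch of this grant, with the service account email as a placeholder:

# Sketch: grant the integration's service account the Pub/Sub Subscriber role.
gcloud pubsub subscriptions add-iam-policy-binding export-asset-changes-to-datadog \
    --member="serviceAccount:<SERVICE_ACCOUNT_EMAIL>" \
    --role="roles/pubsub.subscriber"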
Run one of the commands below in Cloud Shell or the gcloud CLI to create a Cloud Asset Inventory Feed that sends change events to the Pub/Sub topic created above, at the project, folder, or organization level.
To create the feed at the project level, run:

gcloud asset feeds create <FEED_NAME> \
    --project=<PROJECT_ID> \
    --pubsub-topic=projects/<PROJECT_ID>/topics/<TOPIC_NAME> \
    --asset-names=<ASSET_NAMES> \
    --asset-types=<ASSET_TYPES> \
    --content-type=<CONTENT_TYPE>
Update the placeholder values as indicated:
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <PROJECT_ID>: Your Google Cloud project ID.
- <TOPIC_NAME>: The name of the Pub/Sub topic linked with the export-asset-changes-to-datadog subscription.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

To create the feed at the folder level, run:

gcloud asset feeds create <FEED_NAME> \
    --folder=<FOLDER_ID> \
    --pubsub-topic=projects/<PROJECT_ID>/topics/<TOPIC_NAME> \
    --asset-names=<ASSET_NAMES> \
    --asset-types=<ASSET_TYPES> \
    --content-type=<CONTENT_TYPE>
Update the placeholder values as indicated:
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <FOLDER_ID>: Your Google Cloud folder ID.
- <TOPIC_NAME>: The name of the Pub/Sub topic linked with the export-asset-changes-to-datadog subscription.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

To create the feed at the organization level, run:

gcloud asset feeds create <FEED_NAME> \
    --organization=<ORGANIZATION_ID> \
    --pubsub-topic=projects/<PROJECT_ID>/topics/<TOPIC_NAME> \
    --asset-names=<ASSET_NAMES> \
    --asset-types=<ASSET_TYPES> \
    --content-type=<CONTENT_TYPE>
Update the placeholder values as indicated:
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <ORGANIZATION_ID>: Your Google Cloud organization ID.
- <TOPIC_NAME>: The name of the Pub/Sub topic linked with the export-asset-changes-to-datadog subscription.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

Alternatively, copy the following Terraform template and substitute the necessary arguments. For a project-level feed:
locals {
  project_id = "<PROJECT_ID>"
}

resource "google_pubsub_topic" "pubsub_topic" {
  project = local.project_id
  name    = "<TOPIC_NAME>"
}

resource "google_pubsub_subscription" "pubsub_subscription" {
  project = local.project_id
  name    = "export-asset-changes-to-datadog"
  topic   = google_pubsub_topic.pubsub_topic.id
}

resource "google_pubsub_subscription_iam_member" "subscriber" {
  project      = local.project_id
  subscription = google_pubsub_subscription.pubsub_subscription.id
  role         = "roles/pubsub.subscriber"
  member       = "serviceAccount:<SERVICE_ACCOUNT_EMAIL>"
}

resource "google_cloud_asset_project_feed" "project_feed" {
  project      = local.project_id
  feed_id      = "<FEED_NAME>"
  content_type = "<CONTENT_TYPE>"    # Optional. Remove if unused.
  asset_names  = ["<ASSET_NAMES>"]   # Optional if specifying asset_types. Remove if unused.
  asset_types  = ["<ASSET_TYPES>"]   # Optional if specifying asset_names. Remove if unused.

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.pubsub_topic.id
    }
  }
}
Update the placeholder values as indicated:
- <PROJECT_ID>: Your Google Cloud project ID.
- <TOPIC_NAME>: The name of the Pub/Sub topic to be linked with the export-asset-changes-to-datadog subscription.
- <SERVICE_ACCOUNT_EMAIL>: The service account email used by the Datadog Google Cloud integration.
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

For a folder-level feed:

locals {
  project_id = "<PROJECT_ID>"
}

resource "google_pubsub_topic" "pubsub_topic" {
  project = local.project_id
  name    = "<TOPIC_NAME>"
}

resource "google_pubsub_subscription" "pubsub_subscription" {
  project = local.project_id
  name    = "export-asset-changes-to-datadog"
  topic   = google_pubsub_topic.pubsub_topic.id
}

resource "google_pubsub_subscription_iam_member" "subscriber" {
  project      = local.project_id
  subscription = google_pubsub_subscription.pubsub_subscription.id
  role         = "roles/pubsub.subscriber"
  member       = "serviceAccount:<SERVICE_ACCOUNT_EMAIL>"
}

resource "google_cloud_asset_folder_feed" "folder_feed" {
  billing_project = local.project_id
  folder          = "<FOLDER_ID>"
  feed_id         = "<FEED_NAME>"
  content_type    = "<CONTENT_TYPE>"    # Optional. Remove if unused.
  asset_names     = ["<ASSET_NAMES>"]   # Optional if specifying asset_types. Remove if unused.
  asset_types     = ["<ASSET_TYPES>"]   # Optional if specifying asset_names. Remove if unused.

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.pubsub_topic.id
    }
  }
}
Update the placeholder values as indicated:
- <PROJECT_ID>: Your Google Cloud project ID.
- <FOLDER_ID>: The ID of the folder this feed should be created in.
- <TOPIC_NAME>: The name of the Pub/Sub topic to be linked with the export-asset-changes-to-datadog subscription.
- <SERVICE_ACCOUNT_EMAIL>: The service account email used by the Datadog Google Cloud integration.
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

For an organization-level feed:

locals {
  project_id = "<PROJECT_ID>"
}

resource "google_pubsub_topic" "pubsub_topic" {
  project = local.project_id
  name    = "<TOPIC_NAME>"
}

resource "google_pubsub_subscription" "pubsub_subscription" {
  project = local.project_id
  name    = "export-asset-changes-to-datadog"
  topic   = google_pubsub_topic.pubsub_topic.id
}

resource "google_pubsub_subscription_iam_member" "subscriber" {
  project      = local.project_id
  subscription = google_pubsub_subscription.pubsub_subscription.id
  role         = "roles/pubsub.subscriber"
  member       = "serviceAccount:<SERVICE_ACCOUNT_EMAIL>"
}

resource "google_cloud_asset_organization_feed" "organization_feed" {
  billing_project = local.project_id
  org_id          = "<ORGANIZATION_ID>"
  feed_id         = "<FEED_NAME>"
  content_type    = "<CONTENT_TYPE>"    # Optional. Remove if unused.
  asset_names     = ["<ASSET_NAMES>"]   # Optional if specifying asset_types. Remove if unused.
  asset_types     = ["<ASSET_TYPES>"]   # Optional if specifying asset_names. Remove if unused.

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.pubsub_topic.id
    }
  }
}
Update the placeholder values as indicated:
- <PROJECT_ID>: Your Google Cloud project ID.
- <TOPIC_NAME>: The name of the Pub/Sub topic to be linked with the export-asset-changes-to-datadog subscription.
- <SERVICE_ACCOUNT_EMAIL>: The service account email used by the Datadog Google Cloud integration.
- <ORGANIZATION_ID>: Your Google Cloud organization ID.
- <FEED_NAME>: A descriptive name for the Cloud Asset Inventory Feed.
- <ASSET_NAMES>: Comma-separated list of resource full names to receive change events from. Optional if specifying asset-types.
- <ASSET_TYPES>: Comma-separated list of asset types to receive change events from. Optional if specifying asset-names.
- <CONTENT_TYPE>: Optional asset content type to receive change events from.

Datadog recommends setting the asset-types parameter to the regular expression .* to collect changes for all resources.
Note: You must specify at least one value for either the asset-names or asset-types parameter.
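For instance, a hypothetical project-level feed that captures resource-content changes for all asset types could look like the following sketch; the feed name is an arbitrary placeholder:

# Sketch: capture changes for all resource types in one project.
gcloud asset feeds create datadog-resource-changes \
    --project=<PROJECT_ID> \
    --pubsub-topic=projects/<PROJECT_ID>/topics/<TOPIC_NAME> \
    --asset-types=".*" \
    --content-type=resource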
See the gcloud asset feeds create reference for the full list of configurable parameters.
Click Enable Resource Changes Collection in the Resource Collection tab of the Google Cloud integration page.
Find your asset change events in the Datadog Event Explorer.
Use the Google Cloud Private Service Connect integration to visualize connections, data transferred, and dropped packets through Private Service Connect. This gives you visibility into important metrics from your Private Service Connect connections, both for producers as well as consumers. Private Service Connect (PSC) is a Google Cloud networking product that enables you to access Google Cloud services, third-party partner services, and company-owned applications directly from your Virtual Private Cloud (VPC).
See Access Datadog privately and monitor your Google Cloud Private Service Connect usage in the Datadog blog for more information.
Metric | Description |
---|---|
gcp.gce.instance.cpu.utilization (gauge) | Fraction of the allocated CPU that is currently in use on the instance. Some machine types allow bursting above 100% usage. Shown as fraction |
Cumulative metrics are imported into Datadog with a .delta metric for each metric name. A cumulative metric is a metric whose value constantly increases over time. For example, a metric for sent bytes might be cumulative: each value records the total number of bytes sent by a service at that time. The delta value represents the change since the previous measurement.
For example:
gcp.gke.container.restart_count is a CUMULATIVE metric. While importing this metric as a cumulative metric, Datadog adds the gcp.gke.container.restart_count.delta metric, which includes the delta values (as opposed to the aggregate value emitted as part of the CUMULATIVE metric). See Google Cloud metric kinds for more information.
All service events generated by Google Cloud Platform are forwarded to your Datadog Events Explorer.
The Google Cloud Platform integration does not include any service checks.
Tags are automatically assigned based on a variety of Google Cloud Platform and Google Compute Engine configuration options. The project_id tag is added to all metrics. Additional tags are collected from Google Cloud Platform when available, and vary based on metric type.
Additionally, Datadog collects the following as tags:

- Any <key>:<value> labels assigned to your Google Cloud resources.
labels.For non-standard gcp.logging metrics, such as metrics beyond Datadog’s out of the box logging metrics, the metadata applied may not be consistent with Google Cloud Logging.
In these cases, the metadata should be manually set by navigating to the metric summary page, searching and selecting the metric in question, and clicking the pencil icon next to the metadata.
Need help? Contact Datadog support.
Additional helpful documentation, links, and articles: