Manage Logs and Metrics with Terraform

Overview

You can use Terraform to interact with the Datadog API and manage your logs and metrics. This guide provides example use cases and includes links to commonly used Datadog resources and data sources in the Terraform registry.

You can also import your existing resources into your Terraform configuration, and reference existing resources as Terraform data sources.

Set up the Datadog Terraform Provider

If you haven’t already, configure the Datadog Terraform provider to interact with Datadog APIs through a Terraform configuration.
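
As a starting point, a provider configuration similar to the following sketch can be used; the variable names holding the API and application keys are placeholders for your own setup.

```hcl
# Require the Datadog provider from the Terraform registry.
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }
}

# Authenticate with your Datadog API and application keys
# (the variable names here are placeholders for your own configuration).
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}
```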

Log configuration

Set up multiple indexes

Set up multiple indexes if you want to segment your logs for different retention periods or daily quotas, or for usage monitoring and billing. For example, if some of your logs only need to be retained for 7 days while others need to be retained for 30 days, use multiple indexes to separate the logs by retention period. See the Inclusion filters and Exclusion filters documentation for information on defining their queries. Because ingested logs go into the first index whose filter they match, order your indexes according to your use case.
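
For example, a sketch of two indexes with different retention periods, plus an index order resource, might look like the following; the index names, filter queries, and retention values are illustrative.

```hcl
# Short-retention index for logs that only need 7 days of retention
# (names, filter queries, and retention values are illustrative).
resource "datadog_logs_index" "seven_day_index" {
  name           = "seven-day-index"
  retention_days = 7

  filter {
    query = "env:staging"
  }
}

# Longer-retention index that catches everything else.
resource "datadog_logs_index" "thirty_day_index" {
  name           = "thirty-day-index"
  retention_days = 30

  filter {
    query = "*"
  }
}

# Logs land in the first index whose filter they match, so order matters.
resource "datadog_logs_index_order" "index_order" {
  name = "index_order"
  indexes = [
    datadog_logs_index.seven_day_index.id,
    datadog_logs_index.thirty_day_index.id,
  ]
}
```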

Set up a custom pipeline

A log pipeline is a chain of sequential processors that extract meaningful information or attributes from log content for reuse as facets. Each log that goes through the pipelines is matched against each pipeline's filter; if it matches, all of that pipeline's processors are applied to the log before it moves on to the next pipeline. Set up a custom pipeline to parse and enrich your logs. See the Processors documentation for details on the available processors. You can also reorder your pipelines to make sure logs are processed in the correct order.
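
A minimal sketch of a custom pipeline with a Grok parser and a status remapper could look like the following; the filter query, sample log, and parsing rules are illustrative and would need to match your own log format.

```hcl
# A custom pipeline that parses a simple request line and remaps the status
# (the filter query, sample, and Grok rules are illustrative).
resource "datadog_logs_custom_pipeline" "web_pipeline" {
  name       = "web application pipeline"
  is_enabled = true

  filter {
    query = "source:web-app"
  }

  processor {
    grok_parser {
      name       = "parse request line"
      is_enabled = true
      source     = "message"
      samples    = ["GET /health 200 12"]

      grok {
        support_rules = ""
        match_rules   = "rule %%{word:http.method} %%{notSpace:http.url} %%{number:http.status_code} %%{number:duration}"
      }
    }
  }

  processor {
    status_remapper {
      name       = "remap status from status code"
      is_enabled = true
      sources    = ["http.status_code"]
    }
  }
}
```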

Integration pipelines are automatically installed when you send logs from certain sources (for example, the NGINX integration). You can reorder these pipelines with the logs integration pipelines resource.
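
For example, assuming an NGINX integration pipeline already exists in your account and has been imported into Terraform state, a sketch of ordering it ahead of the custom pipeline above might look like this:

```hcl
# Reference the automatically installed NGINX integration pipeline; since Datadog
# creates it for you, import the existing pipeline into Terraform state first.
resource "datadog_logs_integration_pipeline" "nginx" {
  is_enabled = true
}

# Run the integration pipeline before the custom pipeline defined above.
resource "datadog_logs_pipeline_order" "pipeline_order" {
  name = "pipeline_order"
  pipelines = [
    datadog_logs_integration_pipeline.nginx.id,
    datadog_logs_custom_pipeline.web_pipeline.id,
  ]
}
```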

Set up multiple archives for long-term storage

Set up Log Archives if you want to store your logs for longer periods of time. Log Archives forwards your logs to a storage-optimized system, such as Amazon S3, Azure Storage, or Google Cloud Storage. You can also reorder your archives as needed.
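
As an illustration, a sketch of an S3 archive and an archive order resource might look like the following; the bucket, path, account ID, and role name are placeholders.

```hcl
# Archive production logs to an S3 bucket
# (bucket, path, account ID, and role name are placeholders).
resource "datadog_logs_archive" "s3_archive" {
  name  = "production-archive"
  query = "env:production"

  s3_archive {
    bucket     = "example-log-archive-bucket"
    path       = "/archive"
    account_id = "123456789012"
    role_name  = "datadog-log-archive-role"
  }
}

# Control the order in which archives are matched against incoming logs.
resource "datadog_logs_archive_order" "archive_order" {
  archive_ids = [
    datadog_logs_archive.s3_archive.id,
  ]
}
```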

Generate metrics from ingested logs

Generate log-based metrics to summarize log data from your ingested logs. For example, you can generate a count metric of logs that match a query, or a distribution metric of a numeric value contained in the logs, such as request duration. See Generate Metrics from Ingested Logs for more information.
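
For instance, a sketch of a distribution metric built from a @duration attribute might look like this; the metric name, filter query, and attribute paths are examples.

```hcl
# A distribution metric of request duration generated from ingested logs
# (the metric name, query, and attribute paths are examples).
resource "datadog_logs_metric" "request_duration" {
  name = "logs.request.duration"

  compute {
    aggregation_type = "distribution"
    path             = "@duration"
  }

  filter {
    query = "service:web-app"
  }

  group_by {
    path     = "@http.status_code"
    tag_name = "status_code"
  }
}
```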

Metric configuration

A metric's metadata includes its name, description, and unit. Use the metric metadata resource to modify this information.
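
A sketch of updating an existing metric's metadata might look like the following; the metric name and values are placeholders.

```hcl
# Update the metadata of an existing metric
# (the metric name and values are placeholders).
resource "datadog_metric_metadata" "request_time" {
  metric      = "request.time"
  short_name  = "Request time"
  description = "99th percentile request time in milliseconds"
  type        = "gauge"
  unit        = "millisecond"
}
```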

Tags add dimensions to your metrics so that they can be filtered, aggregated, and compared in visualizations. Use the metric tag configuration resource to modify your metric tags in Terraform. See Getting Started with Tags for more information on using tags.
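
For example, a sketch of limiting the queryable tags on a distribution metric might look like this; the metric name and tag keys are placeholders.

```hcl
# Limit which tags can be used to query a distribution metric
# (the metric name and tag keys are placeholders).
resource "datadog_metric_tag_configuration" "request_duration_tags" {
  metric_name         = "logs.request.duration"
  metric_type         = "distribution"
  tags                = ["service", "status_code"]
  include_percentiles = false
}
```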
