You can use Terraform to interact with the Datadog API and manage your logs and metrics. This guide provides example use cases and includes links to commonly used Datadog resources and data sources in the Terraform registry.
You can also import your existing resources into your Terraform configuration and reference them as Terraform data sources.
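For instance, a minimal sketch of adopting an existing log index into Terraform using a config-driven `import` block (Terraform 1.5+; earlier versions use the `terraform import` CLI command instead). This assumes your account already has an index named `main`:

```hcl
# Config-driven import (Terraform 1.5+). Assumption: logs indexes
# are imported by index name.
import {
  to = datadog_logs_index.main
  id = "main"
}

resource "datadog_logs_index" "main" {
  name = "main"
  filter {
    query = "*"
  }
}
```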
If you haven’t already, configure the Datadog Terraform provider to interact with Datadog APIs through a Terraform configuration.
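If you are starting from scratch, a minimal provider configuration looks roughly like the following; the variable names `datadog_api_key` and `datadog_app_key` are placeholders for however you manage credentials:

```hcl
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }
}

variable "datadog_api_key" {
  type      = string
  sensitive = true
}

variable "datadog_app_key" {
  type      = string
  sensitive = true
}

provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}
```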
Set up multiple indexes if you want to segment your logs for different retention periods or daily quotas, usage monitoring, and billing. For example, if some logs only need to be retained for 7 days while others need to be retained for 30 days, use two indexes to separate the logs by retention period. See the Inclusion filters and Exclusion filters documentation for information on defining their queries. Since an ingested log goes into the first index whose filter it matches, order your indexes according to your use case.
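A sketch of the two-retention-period example using the logs index and index order resources; the index names and filter queries are illustrative:

```hcl
# Logs matching "env:dev" are kept for 7 days.
resource "datadog_logs_index" "short_retention" {
  name           = "short-retention"
  retention_days = 7
  filter {
    query = "env:dev"
  }
}

# All remaining logs are kept for 30 days.
resource "datadog_logs_index" "long_retention" {
  name           = "long-retention"
  retention_days = 30
  filter {
    query = "*"
  }
}

# A log lands in the first index whose filter it matches,
# so the more specific index comes first.
resource "datadog_logs_index_order" "order" {
  name = "index-order"
  indexes = [
    datadog_logs_index.short_retention.id,
    datadog_logs_index.long_retention.id,
  ]
}
```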
A log pipeline is a chain of sequential processors that extract meaningful information or attributes from log content to reuse as facets. Each log that goes through the pipelines is matched against each pipeline's filter. If it matches, all of that pipeline's processors are applied to the log before it moves on to the next pipeline. Set up a custom pipeline to parse and enrich your logs. See the Processors documentation for details on the available processors. You can also reorder your pipelines to make sure logs are processed in the correct order.
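For example, a minimal sketch of a custom pipeline with a single grok parser processor, plus an explicit pipeline order; the filter query, grok rule, and attribute names are assumptions about your log format:

```hcl
resource "datadog_logs_custom_pipeline" "web_app" {
  name       = "web application pipeline"
  is_enabled = true

  # Only logs matching this query enter the pipeline.
  filter {
    query = "source:web-app"
  }

  # Extract fields from the raw message into attributes.
  processor {
    grok_parser {
      name       = "parse request logs"
      is_enabled = true
      source     = "message"
      grok {
        support_rules = ""
        match_rules   = "rule %%{word:http.method} %%{number:http.status_code}"
      }
    }
  }
}

# The order resource must list every pipeline in the account,
# including integration pipelines.
resource "datadog_logs_pipeline_order" "order" {
  name = "pipeline-order"
  pipelines = [
    datadog_logs_custom_pipeline.web_app.id,
  ]
}
```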
Integration pipelines are installed automatically when you send logs from certain sources (for example, the NGINX integration). You can manage these pipelines with the logs integration pipeline resource and position them alongside your custom pipelines with the pipeline order resource.
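Since the integration pipeline resource only exposes an enable/disable switch, a sketch of managing the NGINX pipeline might look like the following; you typically import the existing pipeline by its ID rather than creating it, because integration pipelines are created by Datadog, not by Terraform:

```hcl
# Enables (or disables) the automatically installed NGINX pipeline.
resource "datadog_logs_integration_pipeline" "nginx" {
  is_enabled = true
}
```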
Set up Log Archives if you want to store your logs for longer periods of time. Archives forward your logs to a storage-optimized system such as Amazon S3, Azure Storage, or Google Cloud Storage. You can also reorder your archives as needed.
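A rough example of an S3 archive plus an explicit archive order; the bucket, path, account ID, and role name are placeholders for your own AWS setup:

```hcl
resource "datadog_logs_archive" "s3_archive" {
  name  = "all-logs-archive"
  query = "*"

  s3_archive {
    bucket     = "my-log-archive-bucket"   # placeholder
    path       = "/datadog/archives"       # placeholder
    account_id = "123456789012"            # placeholder
    role_name  = "datadog-archive-role"    # placeholder
  }
}

resource "datadog_logs_archive_order" "order" {
  archive_ids = [
    datadog_logs_archive.s3_archive.id,
  ]
}
```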
Generate log-based metrics to summarize log data from your ingested logs. For example, you can generate a count metric of logs that match a query, or a distribution metric of a numeric value contained in the logs, such as request duration. See Generate Metrics from Ingested Logs for more information.
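For example, a sketch of both kinds of log-based metric described above; the metric names, filter queries, and the `@duration` and `@status` attributes are assumptions about your log schema:

```hcl
# Count metric: number of error logs, grouped by status.
resource "datadog_logs_metric" "error_count" {
  name = "logs.errors.count"

  compute {
    aggregation_type = "count"
  }

  filter {
    query = "status:error"
  }

  group_by {
    path     = "@status"
    tag_name = "status"
  }
}

# Distribution metric: request duration taken from a log attribute.
resource "datadog_logs_metric" "request_duration" {
  name = "logs.request.duration"

  compute {
    aggregation_type = "distribution"
    path             = "@duration"
  }

  filter {
    query = "service:web"
  }
}
```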
A metric’s metadata includes the metric name, description, and unit. Use the metric metadata resource to modify this information.
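A minimal sketch, assuming a metric named `request.duration` already reports data to your account:

```hcl
resource "datadog_metric_metadata" "request_duration" {
  metric      = "request.duration"   # assumed existing metric
  short_name  = "Request duration"
  description = "Time taken to serve a request"
  type        = "gauge"
  unit        = "millisecond"
}
```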
Tags add dimensions to your metrics so that they can be filtered, aggregated, and compared in visualizations. Use the metric tag configuration resource to modify your metric tags in Terraform. See Getting Started with Tags for more information on using tags.
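For example, a sketch restricting which tags are queryable on a distribution metric; the metric name reuses the hypothetical log-based metric from the earlier example, and the tag names are placeholders:

```hcl
resource "datadog_metric_tag_configuration" "request_duration_tags" {
  metric_name         = "logs.request.duration"  # placeholder
  metric_type         = "distribution"
  tags                = ["service", "env"]
  include_percentiles = false
}
```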