(LEGACY) Observability Pipelines Documentation

Observability Pipelines is not available on the US1-FED Datadog site.

If you upgrade OP Worker version 1.8 or below to version 2.0 or above, your existing pipelines will break. Do not upgrade if you want to continue using OP Worker version 1.8 or below. To use OP Worker 2.0 or above, you must migrate your OP Worker 1.8 or earlier pipelines to OP Worker 2.x.

Datadog recommends that you update to OP Worker versions 2.0 or above. Upgrading to a major OP Worker version and keeping it updated is the only supported way to get the latest OP Worker functionality, fixes, and security updates.

The following documents are for Observability Pipelines Worker versions 1.8 and older.



  • Working with Data
  • Reference: Configurations
  • Reference: Datadog Processing Language

Legacy Observability Pipelines

A graphic showing different data sources on the left that flow into three hexagons named transform, reduce, and route, with arrows pointing to different destinations for the modified data

Overview

Observability Pipelines allows you to collect, process, and route logs from any source to any destination in infrastructure that you own or manage.

With Observability Pipelines, you can:

  • Control your data volume before routing to manage costs.
  • Route data anywhere to reduce vendor lock-in and simplify migrations.
  • Transform logs by adding, parsing, enriching, and removing fields and tags.
  • Redact sensitive data from your telemetry data.

The Observability Pipelines Worker is the software that runs in your infrastructure. It aggregates your data, processes it centrally, and routes it. More specifically, the Worker can do the following (see the configuration sketch after this list):

  • Receive or pull all your observability data collected by your agents, collectors, or forwarders.
  • Transform ingested data (for example: parse, filter, sample, enrich, and more).
  • Route the processed data to any destination.
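
For illustration, the following is a minimal sketch of a legacy Worker pipeline configuration. It assumes the Vector-based sources/transforms/sinks YAML schema that OP Worker 1.8 and older used (see Reference: Configurations); the component names, port, and remap expression are illustrative assumptions, not a definitive setup.

    sources:
      datadog_agents:
        type: datadog_agent            # receive logs forwarded by Datadog Agents
        address: 0.0.0.0:8282          # illustrative listen address

    transforms:
      parse_json:
        type: remap                    # transform events with a remap expression
        inputs:
          - datadog_agents
        source: |
          # merge JSON fields embedded in the message into the event
          . = merge(., parse_json!(string!(.message)))

    sinks:
      datadog_logs:
        type: datadog_logs             # route the processed logs to Datadog
        inputs:
          - parse_json
        default_api_key: ${DD_API_KEY}

Each component names its upstream components in inputs; this is how data flows from sources, through transforms, to sinks.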

The Datadog UI provides a control plane to manage your Observability Pipelines Workers. You can monitor your pipelines to understand their health, identify bottlenecks and latencies, fine-tune performance, validate data delivery, and investigate your largest volume contributors. You can also build or edit pipelines, whether that means routing a subset of data to a new destination or introducing a new sensitive data redaction rule, and roll out these changes to your active pipelines from the Datadog UI.

Get started

  1. Set up the Observability Pipelines Worker.
  2. Create pipelines to collect, transform, and route your data.
  3. Discover how to deploy Observability Pipelines at production scale (a minimal deployment sketch follows this list).
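
As a concrete starting point, the sketch below shows one way to run a single Worker with Docker Compose. The datadog/observability-pipelines-worker image and the DD_API_KEY variable come from the legacy setup docs, but the tag, mount path, and port here are illustrative assumptions; check the setup documentation for the exact values.

    # docker-compose.yaml -- hypothetical single-node Worker deployment
    services:
      op-worker:
        image: datadog/observability-pipelines-worker:1.8.0   # example tag
        environment:
          - DD_API_KEY=${DD_API_KEY}    # your Datadog API key
        volumes:
          # assumed location of the pipeline configuration inside the container
          - ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml
        ports:
          - "8282:8282"                 # port the datadog_agent source listens on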

Explore Observability Pipelines

Start getting insights into your Observability Pipelines:

Collect data from any source and route data to any destination

Collect data from any source and route it to any destination to reduce vendor lock-in and simplify migrations.

The Datadog Logs component side panel showing a line graph of events in/out per second and a line graph of bytes in/out per second

Control your data volume before it gets routed

Optimize volume and reduce the size of your observability data by sampling, filtering, deduplicating, and aggregating your logs.
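
As a sketch of what this can look like in a legacy Worker configuration, the snippet below chains a filter transform and a sample transform. Both component types exist in the Worker's Vector-based configuration; the component names, the condition, and the sampling rate are illustrative assumptions.

    transforms:
      drop_debug:
        type: filter                    # drop events you do not need downstream
        inputs:
          - datadog_agents
        condition: '.status != "debug"' # keep everything that is not debug-level
      sample_ten_percent:
        type: sample                    # keep roughly 1 in 10 of the remaining events
        inputs:
          - drop_debug
        rate: 10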

The list of transforms side panel showing the available transforms, such as aggregate, Amazon EC2 Metadata, dedupe, and more.

Redact sensitive data from your telemetry data

Redact sensitive data before it is routed outside of your infrastructure, using out-of-the-box patterns to scan for PII, PCI data, private keys, and more.
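
As a hedged sketch of a redaction step in a legacy Worker configuration, the snippet below uses a remap transform to mask card-like numbers before events leave your infrastructure. The regex and field name are illustrative assumptions; in practice, the out-of-the-box Sensitive Data Scanner rules cover patterns like these.

    transforms:
      redact_cards:
        type: remap                     # rewrite events before routing them onward
        inputs:
          - sample_ten_percent
        source: |
          # mask 16-digit sequences that look like card numbers (illustrative pattern)
          .message = replace(string!(.message), r'\b\d{16}\b', "[REDACTED]")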

The sensitive data scanner rules library panel showing the available rules for personal identifiable information and network and device information

Monitor the health of your pipelines

Get a holistic view of all of your pipelines’ topologies and monitor key performance indicators, such as average load, error rate, and throughput for each of your flows.

The pipeline configuration page showing a warning because components are experiencing errors and an event ingestion delay was detected
