Setup Data Streams Monitoring for Python


Data Streams Monitoring is not supported in the AP1 region.

Prerequisites

To get started with Data Streams Monitoring, you need recent versions of the Datadog Agent and the Datadog Python tracing library (ddtrace).

Installation

The Python tracing library uses auto-instrumentation to inject and extract the additional metadata that Data Streams Monitoring requires to measure end-to-end latencies and map the relationships between queues and services. To enable Data Streams Monitoring, set the DD_DATA_STREAMS_ENABLED environment variable to true on services that send messages to (or consume messages from) Kafka.

For example:

environment:
  - DD_DATA_STREAMS_ENABLED=true
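
When a producer service runs under ddtrace-run with this variable set, no application code changes are needed. As a minimal sketch (the broker address and topic name below are hypothetical), a producer could look like this:

from confluent_kafka import Producer

# Hypothetical broker address and topic name, for illustration only.
producer = Producer({"bootstrap.servers": "localhost:9092"})

# With DD_DATA_STREAMS_ENABLED=true, auto-instrumentation injects the
# Data Streams Monitoring metadata into the message headers on produce.
producer.produce("orders", value=b'{"order_id": 1}')
producer.flush()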

Libraries Supported

Data Streams Monitoring supports the confluent-kafka library and the kombu package.
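
On the consumer side, auto-instrumentation extracts the same metadata from message headers. As a minimal sketch (the broker address, consumer group, and topic name are hypothetical), a confluent-kafka consumer might look like this:

from confluent_kafka import Consumer

# Hypothetical broker address, consumer group, and topic name.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "order-processors",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

# Auto-instrumentation reads the Data Streams Monitoring metadata from
# each consumed message, linking this service to its upstream producer.
while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    print(msg.value())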

Monitoring SQS Pipelines

Data Streams Monitoring uses one message attribute to track a message's path through an SQS queue. Because Amazon SQS allows a maximum of 10 message attributes per message, every message streamed through your data pipelines must have nine or fewer message attributes set, leaving the remaining attribute free for Data Streams Monitoring.
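
For example, a boto3 producer that stays within this limit (the queue URL and attribute name below are hypothetical) could look like this:

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL and attribute, for illustration only. Set at
# most 9 custom message attributes so Data Streams Monitoring can add
# its tracking attribute without exceeding the SQS limit of 10.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
    MessageBody='{"order_id": 1}',
    MessageAttributes={
        "content_type": {"DataType": "String", "StringValue": "application/json"},
    },
)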

Monitoring Kinesis Pipelines

Kinesis has no message attributes with which to propagate context and track a message's full path through a stream. As a result, Data Streams Monitoring approximates end-to-end latency by summing the latency measured on each segment of a message's path, from the producing service through a Kinesis stream to the consuming service. Throughput metrics are measured over the same segments. The full topology of your data streams can still be visualized by instrumenting your services.
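
For example, a boto3 producer writing to an instrumented Kinesis pipeline (the stream name and partition key below are hypothetical) might look like this:

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name and partition key, for illustration only.
# Kinesis records carry no message attributes, so the tracer measures
# latency on each segment of the path (producer -> stream, stream ->
# consumer) and sums the segments to approximate end-to-end latency.
kinesis.put_record(
    StreamName="orders-stream",
    Data=b'{"order_id": 1}',
    PartitionKey="order-1",
)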

Further Reading

More useful links, articles, and documentation:
