Set Up Data Streams Monitoring for .NET

Prerequisites

Supported libraries

| Technology | Library | Minimal tracer version | Recommended tracer version |
|---|---|---|---|
| Kafka | Confluent.Kafka | 2.28.0 | 2.41.0 or later |
| RabbitMQ | RabbitMQ.Client | 2.28.0 | 2.37.0 or later |
| Amazon SQS | Amazon SQS SDK | 2.48.0 | 2.48.0 or later |
| Amazon SNS | Amazon SNS SDK | 3.6.0 | 3.6.0 or later |
| IBM MQ | IBMMQDotnetClient | 2.49.0 | 2.49.0 or later |
| Azure Service Bus (requires additional setup) | Azure.Messaging.ServiceBus | 2.53.0 | 2.53.0 or later |

Installation

.NET uses auto-instrumentation to inject and extract the additional metadata required by Data Streams Monitoring for measuring end-to-end latencies and the relationships between queues and services. To enable Data Streams Monitoring, set the DD_DATA_STREAMS_ENABLED environment variable to true on services that send messages to, or consume messages from, a supported messaging technology such as Kafka or RabbitMQ.

For example:

environment:
  - DD_DATA_STREAMS_ENABLED=true
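
No Data Streams Monitoring-specific code is needed in the application itself. As a minimal sketch, assuming the service runs with the Datadog .NET auto-instrumentation attached (the broker address, topic name, and message value below are placeholders), an ordinary Confluent.Kafka producer is picked up as-is:

using System.Threading.Tasks;
using Confluent.Kafka;

class Producer
{
    static async Task Main()
    {
        // A plain Confluent.Kafka producer with no Data Streams Monitoring-specific code.
        // With DD_DATA_STREAMS_ENABLED=true and the Datadog auto-instrumentation attached,
        // pathway metadata is injected into the message headers automatically.
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" }; // placeholder broker

        using var producer = new ProducerBuilder<Null, string>(config).Build();
        await producer.ProduceAsync("orders", new Message<Null, string> { Value = "hello" }); // placeholder topic
    }
}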

Monitoring SQS pipelines

Data Streams Monitoring uses one message attribute to track a message’s path through an SQS queue. Because Amazon SQS allows a maximum of 10 message attributes per message, every message sent through your data pipelines must use 9 or fewer message attributes, leaving the remaining attribute available for Data Streams Monitoring.
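
For illustration, a hypothetical producer that stays within that budget might look like the following sketch (the queue URL and attribute name are placeholders, and the AWSSDK.SQS client shown is just one way to send messages):

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class SqsSender
{
    public static async Task SendAsync()
    {
        var sqs = new AmazonSQSClient();

        var request = new SendMessageRequest
        {
            QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue", // placeholder
            MessageBody = "order created",
            // Keep this dictionary to 9 entries or fewer so one attribute slot
            // remains free for the Data Streams Monitoring context.
            MessageAttributes = new Dictionary<string, MessageAttributeValue>
            {
                ["order-id"] = new MessageAttributeValue { DataType = "String", StringValue = "42" }
            }
        };

        await sqs.SendMessageAsync(request);
    }
}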

Monitoring SNS-to-SQS pipelines

To monitor a data pipeline where Amazon SNS talks directly to Amazon SQS, you must enable Amazon SNS raw message delivery.
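You can enable raw message delivery from the AWS console, or, as a rough sketch using the AWS SDK for .NET (the subscription ARN below is a placeholder), programmatically on the SNS-to-SQS subscription:

using System.Threading.Tasks;
using Amazon.SimpleNotificationService;

class EnableRawDelivery
{
    public static async Task RunAsync()
    {
        var sns = new AmazonSimpleNotificationServiceClient();

        // Enable raw message delivery on the SNS-to-SQS subscription so that
        // message attributes pass through the SNS hop unchanged.
        await sns.SetSubscriptionAttributesAsync(
            "arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id", // placeholder ARN
            "RawMessageDelivery",
            "true");
    }
}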

Monitoring Azure Service Bus

Setting up Data Streams Monitoring for Azure Service Bus requires additional configuration on the instrumented application.

  1. Either set the environment variable AZURE_EXPERIMENTAL_ENABLE_ACTIVITY_SOURCE to true, or set the Azure.Experimental.EnableActivitySource context switch to true in your application code (see the sketch after this list). This instructs the Azure Service Bus library to generate tracing information. See the Azure SDK documentation for more details.
  2. Set the DD_TRACE_OTEL_ENABLED environment variable to true. This instructs the .NET auto-instrumentation to listen to the tracing information generated by the Azure Service Bus library and enables the inject and extract operations required for Data Streams Monitoring.
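
As a minimal sketch of the in-code option from step 1 (the connection string, queue name, and message body are placeholders; the environment variables from steps 1 and 2 are still set outside the application), the context switch must be flipped before any Service Bus clients are created:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class Program
{
    static async Task Main()
    {
        // Step 1 (in-code variant): tell the Azure SDK to emit ActivitySource-based
        // tracing. This must run before any ServiceBusClient is created.
        AppContext.SetSwitch("Azure.Experimental.EnableActivitySource", true);

        // DD_TRACE_OTEL_ENABLED=true and DD_DATA_STREAMS_ENABLED=true are set as
        // environment variables on the service (step 2); no further code is needed.
        await using var client = new ServiceBusClient("<connection-string>"); // placeholder
        ServiceBusSender sender = client.CreateSender("my-queue");            // placeholder queue
        await sender.SendMessageAsync(new ServiceBusMessage("hello"));
    }
}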

Monitoring connectors

Confluent Cloud connectors

Data Streams Monitoring can automatically discover your Confluent Cloud connectors and visualize them within the context of your end-to-end streaming data pipeline.

Setup
  1. Install and configure the Datadog-Confluent Cloud integration.

  2. In Datadog, open the Confluent Cloud integration tile.

    (Screenshot: the Confluent Cloud integration tile in Datadog, on the Configure tab, showing a table of autodiscovered resources with checkboxes under the Actions heading.)

    Under Actions, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors every time you view this integration tile.

  3. Select the resources you want to add.

  4. Click Add Resources.

  5. Navigate to Data Streams Monitoring to visualize the connectors and track connector status and throughput.

Further reading
