To get started with Data Streams Monitoring, you need recent versions of the Datadog Agent and the Data Streams Monitoring libraries. **Note**: This documentation uses v2 of the Go tracer, which Datadog recommends for all users. If you are using v1, see the [migration guide][1] to upgrade to v2. Data Streams Monitoring is unchanged between v1 and v2 of the tracer.

[1]: /tracing/trace_collection/custom_instrumentation/go/migration
Wrap the producer creation with `ddkafka.NewProducer`, and use the `ddkafka.WithDataStreams()` configuration option:
```go
// CREATE PRODUCER WITH THIS WRAPPER
producer, err := ddkafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers": bootStrapServers,
}, ddkafka.WithDataStreams())
```
If a service consumes data from one point and produces to another, propagate context between the two operations through Go's `context.Context`:
Data Streams Monitoring can automatically discover your Confluent Cloud connectors and visualize them within the context of your end-to-end streaming data pipeline.
1. Under **Actions**, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors each time you view this integration tile.
2. Select the resources you want to add.
3. Click **Add Resources**.
4. Navigate to Data Streams Monitoring to visualize the connectors and track connector status and throughput.
Further reading
Additional helpful documentation, links, and articles: