",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
`,t+=`The following instrumentation types are available:
To start with Data Streams Monitoring, you need recent versions of the Datadog Agent and Data Streams Monitoring libraries:
| Technology | Library | Minimal tracer version | Recommended tracer version |
|---|---|---|---|
| Kafka | confluent-kafka-go | 1.56.1 | 1.66.0 or later |
| Kafka | Sarama | 1.56.1 | 1.66.0 or later |
Data Streams Monitoring uses message headers to propagate context through Kafka streams. If `log.message.format.version` is set in the Kafka broker configuration, it must be set to `0.11.0.0` or higher. Data Streams Monitoring is not supported for versions lower than this.
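For example, if the property is set at all in the broker's `server.properties`, it should look like this illustrative excerpt:

```properties
# Kafka broker server.properties
# Only relevant if explicitly set: must be 0.11.0.0 or higher so that the
# record headers used by Data Streams Monitoring are supported.
log.message.format.version=0.11.0.0
```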
The RabbitMQ integration can provide detailed monitoring and metrics of your RabbitMQ deployments. For full compatibility with Data Streams Monitoring, Datadog recommends configuring the integration as follows:
```yaml
instances:
  - prometheus_plugin:
      url: http://<HOST>:15692
      unaggregated_endpoint: detailed?family=queue_coarse_metrics&family=queue_consumer_count&family=channel_exchange_metrics&family=channel_queue_exchange_metrics&family=node_coarse_metrics
```
This ensures that all RabbitMQ graphs populate, and that you see detailed metrics for individual exchanges as well as queues.
Automatic instrumentation uses Orchestrion to install dd-trace-go at compile time, and supports both the Sarama and Confluent Kafka libraries.

To automatically instrument your service:

1. Build your service with Orchestrion.
2. Set the `DD_DATA_STREAMS_ENABLED=true` environment variable, as sketched below.
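A minimal sketch of that workflow, assuming the standard Orchestrion CLI commands and a hypothetical `your-service` binary:

```sh
# Install Orchestrion and pin it in your project's go.mod
go install github.com/DataDog/orchestrion@latest
orchestrion pin

# Build the service with compile-time instrumentation injected
orchestrion go build -o your-service .

# Run with Data Streams Monitoring enabled
DD_DATA_STREAMS_ENABLED=true ./your-service
```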
To manually instrument the Sarama Kafka client with Data Streams Monitoring:
1. Import the `ddsarama` go library:

```go
import (
	ddsarama "gopkg.in/DataDog/dd-trace-go.v1/contrib/Shopify/sarama" // 1.x
	// ddsarama "github.com/DataDog/dd-trace-go/contrib/Shopify/sarama/v2" // 2.x
)
```
2. Wrap the producer with `ddsarama.WrapAsyncProducer`:

```go
...
config := sarama.NewConfig()
producer, err := sarama.NewAsyncProducer([]string{bootStrapServers}, config)
// ADD THIS LINE
producer = ddsarama.WrapAsyncProducer(config, producer, ddsarama.WithDataStreams())
```
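The example above covers the producing side. A consumer can be wrapped the same way; a minimal sketch, assuming `ddsarama.WrapConsumer` from the same contrib package (the topic name and `bootStrapServers` are illustrative):

```go
import (
	"log"

	"github.com/Shopify/sarama"
	ddsarama "gopkg.in/DataDog/dd-trace-go.v1/contrib/Shopify/sarama"
)

func consume(bootStrapServers string) {
	consumer, err := sarama.NewConsumer([]string{bootStrapServers}, sarama.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Wrap the consumer so consume checkpoints are read from message headers.
	consumer = ddsarama.WrapConsumer(consumer, ddsarama.WithDataStreams())

	pc, err := consumer.ConsumePartition("my-topic", 0, sarama.OffsetNewest)
	if err != nil {
		log.Fatal(err)
	}
	for msg := range pc.Messages() {
		log.Printf("consumed offset %d", msg.Offset)
	}
}
```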
To manually instrument Confluent Kafka with Data Streams Monitoring:
1. Import the `ddkafka` go library:

```go
import (
	ddkafka "gopkg.in/DataDog/dd-trace-go.v1/contrib/confluentinc/confluent-kafka-go/kafka.v2" // 1.x
	// ddkafka "github.com/DataDog/dd-trace-go/contrib/confluentinc/confluent-kafka-go/kafka.v2/v2" // 2.x
)
```
2. Create a producer with `ddkafka.NewProducer`, using the `ddkafka.WithDataStreams()` configuration option:

```go
// CREATE PRODUCER WITH THIS WRAPPER
producer, err := ddkafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers": bootStrapServers,
}, ddkafka.WithDataStreams())
```
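Consumers can be created the same way. A minimal sketch, assuming `ddkafka.NewConsumer` from the same contrib package (the group id and topic name are illustrative):

```go
import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
	ddkafka "gopkg.in/DataDog/dd-trace-go.v1/contrib/confluentinc/confluent-kafka-go/kafka.v2"
)

func consume(bootStrapServers string) {
	// CREATE CONSUMER WITH THIS WRAPPER so consume checkpoints are recorded.
	consumer, err := ddkafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": bootStrapServers,
		"group.id":          "my-consumer-group",
	}, ddkafka.WithDataStreams())
	if err != nil {
		log.Fatal(err)
	}
	if err := consumer.SubscribeTopics([]string{"my-topic"}, nil); err != nil {
		log.Fatal(err)
	}
	for {
		msg, err := consumer.ReadMessage(-1) // block until the next message
		if err != nil {
			log.Printf("consume error: %v", err)
			continue
		}
		log.Printf("consumed %v", msg.TopicPartition)
	}
}
```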
If a service consumes data from one point and produces to another, propagate context between the two operations using the Go context structure:

1. Extract the context from the consumed message headers:

```go
ctx = datastreams.ExtractFromBase64Carrier(ctx, ddsarama.NewConsumerMessageCarrier(message))
```

2. Inject the context into the message headers before producing:

```go
datastreams.InjectToBase64Carrier(ctx, ddsarama.NewProducerMessageCarrier(message))
```
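Put together, a consume-then-produce handler might look like the following sketch (the `forward` function, topic name, and pass-through payload are illustrative):

```go
import (
	"context"

	"github.com/Shopify/sarama"
	ddsarama "gopkg.in/DataDog/dd-trace-go.v1/contrib/Shopify/sarama"
	"gopkg.in/DataDog/dd-trace-go.v1/datastreams"
)

// forward republishes a consumed message downstream, carrying the
// Data Streams pathway context across the hop.
func forward(ctx context.Context, in *sarama.ConsumerMessage, producer sarama.AsyncProducer) {
	// Restore the upstream pathway context from the consumed message headers.
	ctx = datastreams.ExtractFromBase64Carrier(ctx, ddsarama.NewConsumerMessageCarrier(in))

	out := &sarama.ProducerMessage{
		Topic: "downstream-topic",
		Value: sarama.ByteEncoder(in.Value),
	}

	// Propagate the pathway context into the outgoing message headers.
	datastreams.InjectToBase64Carrier(ctx, ddsarama.NewProducerMessageCarrier(out))
	producer.Input() <- out
}
```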
You can also use manual instrumentation. For example, to propagate context through Kinesis:

To set a produce checkpoint:

```go
ctx, ok := tracer.SetDataStreamsCheckpointWithParams(ctx, options.CheckpointParams{PayloadSize: getProducerMsgSize(msg)}, "direction:out", "type:kinesis", "topic:kinesis_arn")
if ok {
	datastreams.InjectToBase64Carrier(ctx, message)
}
```

To set a consume checkpoint:

```go
ctx, ok := tracer.SetDataStreamsCheckpointWithParams(datastreams.ExtractFromBase64Carrier(context.Background(), message), options.CheckpointParams{PayloadSize: payloadSize}, "direction:in", "type:kinesis", "topic:kinesis_arn")
```
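Both Base64 carrier helpers operate on any value that implements the datastreams text-map carrier methods. A minimal sketch of a map-backed carrier (the `messageCarrier` type is hypothetical, and the `Set`/`ForeachKey` method shapes are an assumption; check the `datastreams` package in your tracer version):

```go
// messageCarrier stores the base64-encoded pathway context, for example in
// an envelope you serialize into the Kinesis record payload yourself.
type messageCarrier struct {
	attrs map[string]string
}

// Set implements the writer side used by InjectToBase64Carrier.
func (c messageCarrier) Set(key, val string) {
	c.attrs[key] = val
}

// ForeachKey implements the reader side used by ExtractFromBase64Carrier.
func (c messageCarrier) ForeachKey(handler func(key, val string) error) error {
	for k, v := range c.attrs {
		if err := handler(k, v); err != nil {
			return err
		}
	}
	return nil
}
```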
Data Streams Monitoring can automatically discover your Confluent Cloud connectors and visualize them within the context of your end-to-end streaming data pipeline.
1. Install and configure the Datadog-Confluent Cloud integration.
2. In Datadog, open the Confluent Cloud integration tile.
3. Under **Actions**, a list of resources populates with detected clusters and connectors. Datadog attempts to discover new connectors every time you view this integration tile.
4. Select the resources you want to add.
5. Click **Add Resources**.
6. Navigate to Data Streams Monitoring to visualize the connectors and track connector status and throughput.