Monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots or data extraction tools, using Amazon Bedrock.
If you are building LLM applications, use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
See the LLM Observability tracing view video for an example of how you can investigate a trace.
Amazon Bedrock is a fully managed service that makes foundation models (FMs) from Amazon and leading AI startups available through an API, so you can choose from various FMs to find the model that’s best suited for your use case.
Enable this integration to see all your Bedrock metrics in Datadog.
You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:
For agentless setups that send data directly to Datadog (no Agent required), install the ddtrace package:

pip install ddtrace

Then run your application with LLM Observability enabled:

DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
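If you prefer enabling LLM Observability in code rather than through environment variables, ddtrace exposes an equivalent setup call. The sketch below is a configuration fragment; the ml_app name, site, and API key are placeholders:

```python
from ddtrace.llmobs import LLMObs

# In-code equivalent of the environment-variable setup above.
# Call this once at application startup, before any Bedrock requests.
LLMObs.enable(
    ml_app="<YOUR_ML_APP_NAME>",
    api_key="<YOUR_API_KEY>",
    site="<YOUR_DATADOG_SITE>",
    agentless_enabled=True,
)
```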
If you run the Datadog Agent in a container, start it with APM enabled:

docker run -d \
--cgroupns host \
--pid host \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=<DATADOG_API_KEY> \
-p 127.0.0.1:8126:8126/tcp \
-p 127.0.0.1:8125:8125/udp \
-e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
-e DD_APM_ENABLED=true \
gcr.io/datadoghq/agent:latest
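The same Agent configuration can be expressed in Docker Compose. This is a sketch: the service name is arbitrary, and the image, ports, volumes, and variables mirror the docker run command above.

```yaml
services:
  datadog-agent:
    image: gcr.io/datadoghq/agent:latest
    pid: host
    cgroup: host
    environment:
      - DD_API_KEY=<DATADOG_API_KEY>
      - DD_APM_ENABLED=true
      - DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
    ports:
      - "127.0.0.1:8126:8126/tcp"
      - "127.0.0.1:8125:8125/udp"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
```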
If you are using the Datadog Agent, install the ddtrace package:

pip install ddtrace

Then start your application with the ddtrace-run command to automatically enable tracing:

DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
Note: If the Agent is running on a custom host or port, set DD_AGENT_HOST and DD_TRACE_AGENT_PORT accordingly.
For serverless environments such as AWS Lambda, set the following environment variables on your function:

DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>
Note: In serverless environments, Datadog automatically flushes spans when the Lambda function finishes running.
The Amazon Bedrock integration is automatically enabled when LLM Observability is configured. This captures latency, errors, input and output messages, as well as token usage for Amazon Bedrock calls.
The following methods are traced for both synchronous and streamed Amazon Bedrock operations:
InvokeModel()
InvokeModelWithResponseStream()
No additional setup is required for these methods.
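As a sketch of what gets traced, the snippet below prepares an InvokeModel call through boto3. The model ID and the Anthropic Claude request schema are assumptions; adjust them to the model you use. With LLM Observability enabled, no extra instrumentation code is needed around the call.

```python
import json

# Build an InvokeModel request body. The Anthropic Claude "messages" schema
# shown here is an assumption; other model families expect different bodies.
def build_claude_body(prompt: str, max_tokens: int = 256) -> str:
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_body("Summarize our return policy in one sentence.")

# With ddtrace enabled (for example via ddtrace-run), the invoke_model call
# below is traced automatically. It requires AWS credentials, so it is shown
# commented out here:
#
#   import boto3
#   bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = bedrock.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=body,
#   )
```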
Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:

ddtrace-run --info
Look for the following message to confirm the setup:
Agent error: None
If you encounter issues during setup, enable debug logging by passing the --debug flag:

ddtrace-run --debug
This displays any errors related to data transmission or instrumentation, including issues with Amazon Bedrock traces.
If you haven’t already, set up the Amazon Web Services integration first.
Ensure that Bedrock is enabled under the Metric Collection tab.

| Metric | Type | Description |
| --- | --- | --- |
| aws.bedrock.content_filtered_count | count | The total number of times the text output content was filtered. Shown as time |
| aws.bedrock.input_token_count | gauge | The average number of input tokens used in prompts invoked for a model. Shown as token |
| aws.bedrock.input_token_count.maximum | gauge | The maximum number of input tokens used in prompts invoked for a model. Shown as token |
| aws.bedrock.input_token_count.minimum | gauge | The minimum number of input tokens used in prompts invoked for a model. Shown as token |
| aws.bedrock.input_token_count.sum | count | The total number of input tokens used in prompts invoked for a model. Shown as token |
| aws.bedrock.invocation_client_errors | count | The number of client invocation errors. Shown as error |
| aws.bedrock.invocation_latency | gauge | Average latency of the invocations in milliseconds. Shown as millisecond |
| aws.bedrock.invocation_latency.maximum | gauge | The maximum invocation latency over a 1 minute period. Shown as millisecond |
| aws.bedrock.invocation_latency.minimum | gauge | The minimum invocation latency over a 1 minute period. Shown as millisecond |
| aws.bedrock.invocation_latency.p90 | gauge | The 90th percentile of invocation latency over a 1 minute period. Shown as millisecond |
| aws.bedrock.invocation_latency.p95 | gauge | The 95th percentile of invocation latency over a 1 minute period. Shown as millisecond |
| aws.bedrock.invocation_latency.p99 | gauge | The 99th percentile of invocation latency over a 1 minute period. Shown as millisecond |
| aws.bedrock.invocation_server_errors | count | The number of server invocation errors. Shown as error |
| aws.bedrock.invocation_throttles | count | The number of invocation throttles. Shown as throttle |
| aws.bedrock.invocations | count | The number of invocations sent to a model endpoint. Shown as invocation |
| aws.bedrock.output_image_count | gauge | The average number of output images returned by model invocations over a 1 minute period. Shown as item |
| aws.bedrock.output_token_count | gauge | The average number of output tokens returned by model invocations over a 1 minute period. Shown as token |
| aws.bedrock.output_token_count.maximum | gauge | The maximum number of output tokens returned by model invocations over a 1 minute period. Shown as token |
| aws.bedrock.output_token_count.minimum | gauge | The minimum number of output tokens returned by model invocations over a 1 minute period. Shown as token |
| aws.bedrock.output_token_count.sum | count | The total number of output tokens returned by all model invocations. Shown as token |
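As an illustration of how these metrics combine, a server-side error rate can be expressed with a standard Datadog metric query. The `{*}` scope is a placeholder; narrow it with tags such as model or region for a per-model view:

```
100 * sum:aws.bedrock.invocation_server_errors{*}.as_count() / sum:aws.bedrock.invocations{*}.as_count()
```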
The Amazon Bedrock integration does not include any events.
The Amazon Bedrock integration does not include any service checks.
Need help? Contact Datadog support.
Additional helpful documentation, links, and articles: