Amazon Bedrock


Overview

Monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots or data extraction tools, using Amazon Bedrock.

If you are building LLM applications, use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.

See the LLM Observability tracing view video for an example of how you can investigate a trace.

Amazon Bedrock is a fully managed service that makes foundation models (FMs) from Amazon and leading AI startups available through an API, so you can choose from various FMs to find the model that’s best suited for your use case.

Enable this integration to see all your Bedrock metrics in Datadog.

Setup

LLM Observability: Get end-to-end visibility into your LLM application using Amazon Bedrock

You can enable LLM Observability in different environments. Follow the appropriate setup based on your scenario:

Installation for Python

If you do not have the Datadog Agent:
  1. Install the ddtrace package:
  pip install ddtrace
  2. Start your application with the following command, enabling Agentless mode:
  DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_AGENTLESS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py
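As an alternative to the environment variables above, LLM Observability can be enabled in code. A minimal sketch, assuming Agentless mode; the ML app name below is a placeholder:

```python
# In-code setup sketch: enable LLM Observability in Agentless mode.
# The values below are placeholders; LLMObs.enable() accepts the same
# settings as the DD_LLMOBS_* environment variables.
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="my-bedrock-app",       # placeholder ML app name
    api_key="<YOUR_API_KEY>",      # or set DD_API_KEY
    site="<YOUR_DATADOG_SITE>",    # for example, datadoghq.com
    agentless_enabled=True,
)
```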
If you already have the Datadog Agent installed:
  1. Make sure the Agent is running and that APM and StatsD are enabled. For example, use the following command with Docker:
docker run -d \
  --cgroupns host \
  --pid host \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc/:/host/proc/:ro \
  -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
  -e DD_API_KEY=<DATADOG_API_KEY> \
  -p 127.0.0.1:8126:8126/tcp \
  -p 127.0.0.1:8125:8125/udp \
  -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true \
  -e DD_APM_ENABLED=true \
  gcr.io/datadoghq/agent:latest
  2. If you haven’t already, install the ddtrace package:
  pip install ddtrace
  3. Start your application using the ddtrace-run command to automatically enable tracing:
   DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME> ddtrace-run python <YOUR_APP>.py

Note: If the Agent is running on a custom host or port, set DD_AGENT_HOST and DD_TRACE_AGENT_PORT accordingly.
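For example, if the Agent is reachable at a non-default address (the host and port below are placeholder values):

```shell
# Point the tracer at a custom Agent host and port (example values).
export DD_AGENT_HOST=agent.internal.example
export DD_TRACE_AGENT_PORT=8136
```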

If you are running LLM Observability in a serverless environment (AWS Lambda):
  1. Install the Datadog-Python and Datadog-Extension Lambda layers as part of your AWS Lambda setup.
  2. Enable LLM Observability by setting the following environment variables:
   DD_SITE=<YOUR_DATADOG_SITE> DD_API_KEY=<YOUR_API_KEY> DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=<YOUR_ML_APP_NAME>

Note: In serverless environments, Datadog automatically flushes spans when the Lambda function finishes running.
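The serverless flow can be sketched with a minimal Lambda handler. The handler name and event shape below are illustrative; with the layers attached and the environment variables above set, any Bedrock calls made inside the handler are traced, and spans are flushed when the invocation ends:

```python
import json

# Minimal Lambda handler sketch (names and event shape are assumptions).
# A real handler would call Bedrock here, for example:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=..., body=...)
# This sketch echoes the prompt instead of calling Bedrock.
def handler(event, context):
    prompt = event.get("prompt", "")
    return {"statusCode": 200, "body": json.dumps({"echo": prompt})}
```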

Automatic Amazon Bedrock tracing

The Amazon Bedrock integration is automatically enabled when LLM Observability is configured, capturing latency, errors, input and output messages, and token usage for Amazon Bedrock calls.

The following methods are traced for both synchronous and streamed Amazon Bedrock operations:

  • InvokeModel()
  • InvokeModelWithResponseStream()

No additional setup is required for these methods.
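For example, an InvokeModel call made through boto3 is traced automatically once LLM Observability is enabled. The sketch below builds a request body for an Anthropic Claude model; the model ID and body schema are assumptions, and the actual Bedrock call is shown in comments:

```python
import json

# Hypothetical helper: build an InvokeModel request body for an Anthropic
# Claude model on Bedrock (the body schema here is an assumption).
def build_claude_body(prompt: str, max_tokens: int = 256) -> str:
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_body("Summarize this document.")
payload = json.loads(body)
print(payload["messages"][0]["role"])  # user

# With LLM Observability enabled, the call below would be traced
# automatically -- no extra instrumentation needed (model ID is an example):
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=body,
#   )
```

Streamed calls made through InvokeModelWithResponseStream() are traced the same way.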

Validation

Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:

ddtrace-run --info

Look for the following message to confirm the setup:

Agent error: None

Debugging

If you encounter issues during setup, enable debug logging by passing the --debug flag:

ddtrace-run --debug

This displays any errors related to data transmission or instrumentation, including issues with Amazon Bedrock traces.

APM: Get Usage Metrics for Python Applications

If you haven’t already, set up the Amazon Web Services integration first.

Metric collection

  1. In the AWS integration page, ensure that Bedrock is enabled under the Metric Collection tab.
  2. Install the Datadog - Amazon Bedrock integration.

Data Collected

Metrics

| Metric | Type | Description | Shown as |
|---|---|---|---|
| aws.bedrock.content_filtered_count | count | The total number of times the text output content was filtered. | time |
| aws.bedrock.input_token_count | gauge | The average number of input tokens used in prompts invoked for a model. | token |
| aws.bedrock.input_token_count.maximum | gauge | The maximum number of input tokens used in prompts invoked for a model. | token |
| aws.bedrock.input_token_count.minimum | gauge | The minimum number of input tokens used in prompts invoked for a model. | token |
| aws.bedrock.input_token_count.sum | count | The total number of input tokens used in prompts invoked for a model. | token |
| aws.bedrock.invocation_client_errors | count | The number of client invocation errors. | error |
| aws.bedrock.invocation_latency | gauge | Average latency of the invocations in milliseconds. | millisecond |
| aws.bedrock.invocation_latency.maximum | gauge | The maximum invocation latency over a 1 minute period. | millisecond |
| aws.bedrock.invocation_latency.minimum | gauge | The minimum invocation latency over a 1 minute period. | millisecond |
| aws.bedrock.invocation_latency.p90 | gauge | The 90th percentile of invocation latency over a 1 minute period. | millisecond |
| aws.bedrock.invocation_latency.p95 | gauge | The 95th percentile of invocation latency over a 1 minute period. | millisecond |
| aws.bedrock.invocation_latency.p99 | gauge | The 99th percentile of invocation latency over a 1 minute period. | millisecond |
| aws.bedrock.invocation_server_errors | count | The number of server invocation errors. | error |
| aws.bedrock.invocation_throttles | count | The number of invocation throttles. | throttle |
| aws.bedrock.invocations | count | The number of invocations sent to a model endpoint. | invocation |
| aws.bedrock.output_image_count | gauge | The average number of output images returned by model invocations over a 1 minute period. | item |
| aws.bedrock.output_token_count | gauge | The average number of output tokens returned by model invocations over a 1 minute period. | token |
| aws.bedrock.output_token_count.maximum | gauge | The maximum number of output tokens returned by model invocations over a 1 minute period. | token |
| aws.bedrock.output_token_count.minimum | gauge | The minimum number of output tokens returned by model invocations over a 1 minute period. | token |
| aws.bedrock.output_token_count.sum | count | The total number of output tokens returned by all model invocations. | token |

Events

The Amazon Bedrock integration does not include any events.

Service Checks

The Amazon Bedrock integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.

Further Reading

Additional helpful documentation, links, and articles:
