Monitor, troubleshoot, and evaluate your LLM-powered applications that use Google Gemini, such as chatbots or data extraction tools.
If you are building LLM applications, use Datadog’s LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
The Google Gemini integration provides automatic tracing for the Google AI Python SDK's content generation calls, capturing latency, errors, input and output messages, and token usage for Google Gemini operations.
The following methods are traced for both synchronous and asynchronous Google Gemini operations:
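Once the traced methods above are in place, tracing is typically enabled by launching the application under `ddtrace-run` with LLM Observability turned on. The sketch below assumes your entry point is a file named `app.py` (a hypothetical name) and that the `ddtrace` package is installed; the environment variable names follow Datadog's LLM Observability setup:

```shell
# Run the Gemini application with automatic tracing enabled.
# DD_LLMOBS_ENABLED turns on LLM Observability span collection;
# DD_LLMOBS_ML_APP names your application in Datadog.
# <your-datadog-api-key> and app.py are placeholders for your own values.
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-gemini-app \
DD_API_KEY=<your-datadog-api-key> \
ddtrace-run python app.py
```

With this in place, calls to the traced Gemini methods are captured without any code changes to the application itself.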
Validate that LLM Observability is capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the `ddtrace` integration:
ddtrace-run --info
Look for the following message to confirm the setup: