Monitor, troubleshoot, and evaluate your LLM-powered applications (for example, a chatbot or a data extraction tool) built using LangChain.
Use LLM Observability to investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
Get cost estimation, prompt and completion sampling, error tracking, performance metrics, and more out of LangChain Python library requests using Datadog metrics and APM.
The LangChain integration is automatically enabled when LLM Observability is configured. It captures latency, errors, input and output messages, and token usage for LangChain operations.
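For reference, here is a minimal setup sketch, assuming the `ddtrace` library is installed; the application name and API key below are placeholders, and `agentless_enabled=True` sends data directly to Datadog without a local Agent:

```python
# A minimal setup sketch. The ml_app name and API key are placeholders.
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="my-langchain-app",         # placeholder application name
    api_key="<YOUR_DATADOG_API_KEY>",  # placeholder API key
    site="datadoghq.com",
    agentless_enabled=True,            # no local Agent required
)
```

The same setup is commonly done from the command line instead (the script name is a placeholder):

```shell
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=my-langchain-app DD_LLMOBS_AGENTLESS_ENABLED=1 DD_API_KEY=<YOUR_DATADOG_API_KEY> ddtrace-run python app.py
```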
Both synchronous and asynchronous LangChain operations are traced, as illustrated in the sketch below.
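As an illustration, here is a minimal sketch of both call styles; the chain, model name, and prompts are placeholders, and LLM Observability is assumed to be enabled as shown above:

```python
# A minimal sketch: a LangChain chain invoked synchronously and
# asynchronously. Both calls are captured automatically once
# LLM Observability is enabled.
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")  # placeholder model

# Synchronous call -- traced as a LangChain operation.
result = chain.invoke({"text": "LangChain integrates with Datadog."})
print(result.content)

# Asynchronous call -- also traced.
async def main():
    result = await chain.ainvoke({"text": "Async calls are traced too."})
    print(result.content)

asyncio.run(main())
```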
Validate that LLM Observability is properly capturing spans by checking your application logs for successful span creation. You can also run the following command to check the status of the ddtrace integration:
ddtrace-run --info
Look for the following message to confirm the setup:

Agent error: None
Pass the --debug flag to ddtrace-run to enable debug logging.
ddtrace-run --debug
This displays any errors encountered while sending data, for example:
ERROR:ddtrace.internal.writer.writer:failed to send, dropping 1 traces to intake at http://localhost:8126/v0.5/traces after 3 retries ([Errno 61] Connection refused)
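In this example, `Connection refused` indicates that nothing is listening at the default Agent address `localhost:8126`. As a sketch (the hostname is a placeholder; these are standard `ddtrace` environment variables), either point the tracer at a reachable Agent or switch LLM Observability to agentless mode:

```shell
# Point the tracer at an Agent running on another host (hostname is a placeholder)
export DD_TRACE_AGENT_URL="http://my-agent-host:8126"

# Or skip the Agent entirely and send LLM Observability data directly to Datadog
export DD_LLMOBS_AGENTLESS_ENABLED=1
export DD_API_KEY="<YOUR_DATADOG_API_KEY>"
```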