With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
Each request fulfilled by your application is represented as a trace on the LLM Observability page in Datadog.
A trace can represent:
- An individual LLM inference, including tokens, error information, and latency
- A predetermined LLM workflow, which is a grouping of LLM calls and their contextual operations, such as tool calls or preprocessing steps
- A dynamic LLM workflow executed by an LLM agent
Each trace contains spans representing each choice made by an agent or each step of a given workflow. A given trace can also include input and output, latency, privacy issues, errors, and more. For more information, see Terms and Concepts.
View every step of your LLM application chains and calls to pinpoint problematic requests and identify the root cause of errors.
Monitor the cost, latency, performance, and usage trends for all your LLM applications with out-of-the-box dashboards.
Identify problematic clusters and monitor response quality over time with topical clustering and quality checks such as sentiment and failure to answer.
Automatically scan and redact any sensitive data in your AI applications and identify prompt injections, among other evaluations.
The LLM Observability SDK for Python integrates with libraries and frameworks such as OpenAI, LangChain, AWS Bedrock, and Anthropic. It automatically traces and annotates LLM calls, capturing latency, errors, and token usage metrics with no code changes required.
For more information, see the Auto Instrumentation documentation.
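The following is a minimal sketch of what auto-instrumentation can look like in code, not an official snippet: it assumes the `ddtrace` and `openai` packages are installed, and the application name `my-chatbot`, the API key placeholder, and the model name are illustrative values you would replace with your own.

```python
# Sketch: enable LLM Observability and let the OpenAI integration be traced automatically.
from ddtrace.llmobs import LLMObs
from openai import OpenAI

# Enable LLM Observability in agentless mode. Supported integrations (OpenAI,
# LangChain, AWS Bedrock, Anthropic) are patched automatically, so the
# completion call below is captured without further code changes.
LLMObs.enable(
    ml_app="my-chatbot",        # placeholder ML application name
    api_key="<DD_API_KEY>",     # your Datadog API key
    site="datadoghq.com",
    agentless_enabled=True,
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",        # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Alternatively, the same behavior can typically be achieved without touching application code by running the process under `ddtrace-run` with the LLM Observability environment variables (for example `DD_LLMOBS_ENABLED` and `DD_LLMOBS_ML_APP`) set.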
See the Setup documentation for instructions on instrumenting your LLM application or follow the Trace an LLM Application guide to generate a trace using the LLM Observability SDK for Python.
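As a rough illustration of manual instrumentation with the SDK, the sketch below uses the `workflow` and `task` decorators and `LLMObs.annotate` to build a trace with nested spans. It assumes `LLMObs.enable()` has already been called (as in the snippet above); the function names and strings are illustrative placeholders, not part of the official guide.

```python
# Sketch: manually trace a workflow and a child task with the LLM Observability SDK.
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import task, workflow


@task
def preprocess(question: str) -> str:
    # Appears as a child span under the workflow span in the trace.
    return question.strip()


@workflow
def answer_question(question: str) -> str:
    cleaned = preprocess(question)
    answer = f"You asked: {cleaned}"  # stand-in for an LLM call
    # Attach input and output to the active workflow span so they are
    # visible on the trace in the LLM Observability page.
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer


print(answer_question("  What is LLM Observability?  "))
```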
Additional helpful documentation, links, and articles: