With LLM Observability, you can monitor, troubleshoot, and evaluate your LLM-powered applications, such as chatbots. You can investigate the root cause of issues, monitor operational performance, and evaluate the quality, privacy, and safety of your LLM applications.
Each request fulfilled by your application is represented as a trace on the LLM Observability traces page in Datadog. A trace can represent:

- an individual LLM inference
- a predetermined LLM workflow, which groups LLM calls with supporting steps such as tool calls
- a dynamic workflow executed by an LLM agent

Each trace contains spans representing each choice made by an agent or each step of a given workflow. A given trace can also include input and output, latency, privacy issues, errors, and more.
You can instrument your application with the LLM Observability SDK for Python or by calling the LLM Observability API.
To get started with LLM Observability, you can build a simple example with the Quickstart, or follow the guide for instrumenting your LLM application.
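If you use the Python SDK, instrumenting a request can be as simple as enabling LLM Observability and decorating the functions involved. The sketch below assumes the `ddtrace` library's `LLMObs` interface; the application name, model details, and the `call_model` placeholder are illustrative, not prescriptive.

```python
# Minimal sketch of SDK-based instrumentation, assuming ddtrace's LLMObs
# interface. "my-llm-app" and the model details are placeholders.
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm, workflow

# Enable LLM Observability. In agentless mode the SDK reads DD_API_KEY
# from the environment; available options may vary by ddtrace version.
LLMObs.enable(ml_app="my-llm-app", agentless_enabled=True)

@llm(model_name="gpt-4", model_provider="openai")
def call_model(prompt: str) -> str:
    # Call your LLM provider here, then record the span's input/output.
    completion = "..."  # placeholder for the provider's response
    LLMObs.annotate(input_data=prompt, output_data=completion)
    return completion

@workflow
def answer_question(question: str) -> str:
    # Each decorated call becomes a span within this request's trace.
    return call_model(question)

answer_question("What can I monitor with LLM Observability?")
```

Each decorated function appears as a span under the workflow's trace, so the resulting trace mirrors the structure described above.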
LLM Observability lets you:

- View every step of your LLM application's chains and calls to pinpoint problematic requests and identify the root cause of errors.
- Monitor throughput, latency, and token usage trends across all of your LLM applications.
- Identify problematic clusters and monitor response quality over time with topical clustering and checks such as sentiment and failure to answer.
- Automatically scan and redact sensitive data in your AI applications and identify prompt injections.
By using LLM Observability, you acknowledge that Datadog is authorized to share your Company’s data with OpenAI LLC for the purpose of providing and improving LLM Observability. OpenAI will not use your data for training or tuning purposes. If you have any questions or want to opt out of features that depend on OpenAI, reach out to your account representative.