Like any traditional application, LLM applications can be implemented across multiple different microservices. With LLM Observability, if one of these services is an LLM proxy or gateway service, you can trace the LLM calls made by individual LLM applications in a complete end-to-end trace.
To enable LLM Observability for a proxy or gateway service that might be called from several different ML applications, enable it without specifying an ML application name. In its place, specify the service name, which can then be used in LLM Observability to filter for spans specific to the proxy or gateway service.
# proxy.py
from ddtrace.llmobs import LLMObs
LLMObs.enable(service="chat-proxy")
# proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
// proxy.js
const tracer = require('dd-trace').init({
  llmobs: true,
  service: "chat-proxy"
});
const llmobs = tracer.llmobs;
// proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
In each application that orchestrates the ML application's calls to the proxy or gateway service, enable LLM Observability with the ML application name:
# application.py
import requests

from ddtrace.llmobs import LLMObs

LLMObs.enable(ml_app="my-ml-app")

if __name__ == "__main__":
    with LLMObs.workflow(name="run-chat"):
        # other application-specific logic - RAG steps, parsing, etc.
        response = requests.post("http://localhost:8080/chat", json={
            # data to pass to the proxy service
        })
        # other application-specific logic handling the response
// application.js
const tracer = require('dd-trace').init({
  llmobs: {
    mlApp: 'my-ml-app'
  }
});
const llmobs = tracer.llmobs;
const axios = require('axios');

async function main () {
  await llmobs.trace({ name: 'run-chat', kind: 'workflow' }, async () => {
    // other application-specific logic - RAG steps, parsing, etc.
    // wrap the proxy call in a task span (see the sketch below)
    const response = await axios.post('http://localhost:8080/chat', {
      // data to pass to the proxy service
    });
    // other application-specific logic handling the response
  });
}

main();
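The comment in the JavaScript example suggests wrapping the outbound proxy call in a task span so that the request appears as its own step inside the workflow. The following is a minimal Python sketch of that pattern; it assumes the LLMObs.task context manager from the SDK, and the span name is illustrative:

# application.py (sketch): wrap the proxy call in a task span
import requests

from ddtrace.llmobs import LLMObs

LLMObs.enable(ml_app="my-ml-app")

with LLMObs.workflow(name="run-chat"):
    # other application-specific logic - RAG steps, parsing, etc.
    with LLMObs.task(name="call-chat-proxy"):
        # the task span represents the outbound call to the proxy service
        response = requests.post("http://localhost:8080/chat", json={})
    # other application-specific logic handling the response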
When making requests to the proxy or gateway service, the LLM Observability SDKs automatically propagate the ML application name from the original LLM application. The propagated ML application name takes precedence over the ML application name specified in the proxy or gateway service.
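Propagation relies on the tracing context, including the ML application name, being carried in the HTTP request headers; auto-instrumented clients and web frameworks handle this automatically. If an intermediate hop is not auto-instrumented, the context can be passed along manually. The following is a rough sketch under the assumption that the Python SDK's inject_distributed_headers helper returns the updated header dict and that activate_distributed_headers activates an incoming context:

# application.py (sketch): manually propagate the tracing context to the proxy
import requests

from ddtrace.llmobs import LLMObs

LLMObs.enable(ml_app="my-ml-app")

with LLMObs.workflow(name="run-chat"):
    # assumed helper: injects the active context, including the ML application name, into the headers
    headers = LLMObs.inject_distributed_headers({})
    response = requests.post("http://localhost:8080/chat", json={}, headers=headers)

On the proxy side, the incoming headers would be activated (for example, with LLMObs.activate_distributed_headers(request.headers)) before starting the proxy's own spans; with auto-instrumented frameworks, neither step is needed.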
To view all requests to the proxy service as top-level spans, wrap the entrypoint of the proxy service endpoint in a workflow span:
# proxy.py
from flask import Flask
from ddtrace.llmobs import LLMObs

LLMObs.enable(service="chat-proxy")
app = Flask(__name__)

@app.route('/chat')
def chat():
    with LLMObs.workflow(name="chat-proxy-entrypoint"):
        # proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
        ...
// proxy.js
const tracer = require('dd-trace').init({
  llmobs: true,
  service: "chat-proxy"
});
const llmobs = tracer.llmobs;
const express = require('express');

const app = express();

app.post('/chat', async (req, res) => {
  await llmobs.trace({ name: 'chat-proxy-entrypoint', kind: 'workflow' }, async () => {
    // proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
    res.send("Hello, world!");
  });
});

app.listen(8080);
All requests to the proxy service can now be viewed as top-level spans within the LLM trace view:
- Select All Applications from the top-left dropdown.
- Select the All Spans view in the top-right dropdown.
- Filter by the service tag and the workflow name.

The workflow span name can also be filtered by a facet on the left-hand side of the trace view:
To instead monitor only the LLM calls made within a proxy or gateway service, filter by llm spans in the trace view:

This can also be done by filtering by the span kind facet on the left-hand side of the trace view:
Both processes described above (filtering for top-level calls to the proxy service and for LLM calls made within the proxy or gateway service) can also be applied to a specific ML application to view its interactions with the proxy or gateway service.
- Switch from the All Spans view to the Traces view in the top-right dropdown.
- Switch from the List view to a Timeseries view in the Traces view while maintaining the All Spans filter in the top-right dropdown.

To observe the complete end-to-end usage of an LLM application that makes calls to a proxy or gateway service, you can filter for traces with that ML application name:

- Select the Traces view in the top-right dropdown.