This guide uses the LLM Observability SDKs for Python and Node.js. If your application is written in another language, you can create traces by calling the API instead.
To better understand LLM Observability terms and concepts, you can explore the examples in the LLM Observability Jupyter Notebooks repository. These notebooks provide a hands-on experience and allow you to apply these concepts in real time.
To generate an LLM Observability trace, you can run a Python or Node.js script. The examples below require an OpenAI API key stored in the environment variable OPENAI_API_KEY. To create one, see Account Setup and Set up your API key in the official OpenAI documentation.
Install the SDK and OpenAI packages:
pip install ddtrace
pip install openai
Create a script, quickstart.py, which makes a single OpenAI call.
# quickstart.py
import os
from openai import OpenAI

oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

completion = oai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
        {"role": "user", "content": "I'd like to buy a chair for my living room."},
    ],
)
Run the script with the following shell command. This sends a trace of the OpenAI call to Datadog.
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE=<YOUR_DD_SITE> \
DD_LLMOBS_AGENTLESS_ENABLED=1 ddtrace-run python quickstart.py
Replace <YOUR_DATADOG_API_KEY> with your Datadog API key, and replace <YOUR_DD_SITE> with your Datadog site.
For more information about required environment variables, see the SDK documentation.
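If you prefer to configure the SDK in code rather than through environment variables, the call below is a minimal sketch of that alternative; the parameter names assume the current ddtrace LLMObs.enable API, and the placeholder values must be replaced with your real key and site as above.

```python
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="onboarding-quickstart",
    api_key="<YOUR_DATADOG_API_KEY>",  # replace with your Datadog API key
    site="<YOUR_DD_SITE>",             # replace with your Datadog site
    agentless_enabled=True,            # only when no Datadog Agent is running
)
```

With this in place, the script can be run with plain `python quickstart.py` instead of passing the `DD_*` variables on the command line.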
Install the SDK and OpenAI packages:
npm install dd-trace
npm install openai
Create a script, quickstart.js, which makes a single OpenAI call.
// quickstart.js
const { OpenAI } = require('openai');

const oaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main () {
  const completion = await oaiClient.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful customer assistant for a furniture store.' },
      { role: 'user', content: 'I\'d like to buy a chair for my living room.' },
    ]
  });
}
main();
Run the script with the following shell command. This sends a trace of the OpenAI call to Datadog.
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE=<YOUR_DD_SITE> \
DD_LLMOBS_AGENTLESS_ENABLED=1 NODE_OPTIONS="--import dd-trace/initialize.mjs" node quickstart.js
Replace <YOUR_DATADOG_API_KEY> with your Datadog API key, and replace <YOUR_DD_SITE> with your Datadog site.
For more information about required environment variables, see the SDK documentation.
Note: DD_LLMOBS_AGENTLESS_ENABLED is only required if you do not have the Datadog Agent running. If the Agent is running in your production environment, make sure this environment variable is unset.
View the trace of your LLM call on the Traces tab of the LLM Observability page in Datadog.
The trace you see is composed of a single LLM span. The ddtrace-run or NODE_OPTIONS="--import dd-trace/initialize.mjs" command automatically traces your LLM calls from Datadog's list of supported integrations.
If your application consists of more elaborate prompting or complex chains or workflows involving LLMs, you can trace it using the Setup documentation and the SDK documentation.
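As a sketch of what that custom instrumentation can look like, the snippet below (assuming a current ddtrace release, with LLM Observability enabled as in the quickstart) uses the SDK's workflow and task decorators together with LLMObs.annotate. The parsing step and the stubbed answer are illustrative placeholders, not part of the quickstart itself.

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import task, workflow

@task
def parse_question(text):
    # illustrative application-specific parsing step
    return text.strip().lower()

@workflow
def answer_question(question):
    parsed = parse_question(question)
    # a supported-integration LLM call would normally go here;
    # the string below is a stand-in for a real completion
    answer = f"stubbed answer to: {parsed}"
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer
```

Run under ddtrace-run as in the quickstart, each call to answer_question() should produce a workflow span with a nested task span.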
Like any traditional application, LLM applications can be implemented across multiple microservices. With LLM Observability, if one of these services is an LLM proxy or gateway, you can trace the LLM calls made by individual LLM applications in a complete end-to-end trace.
To enable LLM Observability for a proxy or gateway service that might be called from several different ML applications, you can enable LLM Observability without specifying an ML application name.
In the proxy service, enable LLM Observability without specifying an ML application name. Optionally, you can specify a service name.
# proxy.py
from ddtrace.llmobs import LLMObs
LLMObs.enable(service="chat-proxy")
# proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
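To make the proxy side concrete, here is a minimal, standard-library-only sketch of the /chat endpoint such a service might expose. The blocklist guardrail and the echoed completion are illustrative stand-ins (a real proxy would call an LLM here), and the port matches the http://localhost:8080/chat URL used by the application code.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

BLOCKLIST = {"ssn", "credit card"}  # hypothetical sensitive-data guardrail

class ChatProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/chat":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        prompt = payload.get("prompt", "")
        # Guardrail: reject prompts that mention blocked terms.
        if any(term in prompt.lower() for term in BLOCKLIST):
            body = json.dumps({"error": "blocked by guardrail"}).encode()
            self.send_response(400)
        else:
            # In a real proxy, the (traced) LLM call would happen here.
            body = json.dumps({"completion": f"echo: {prompt}"}).encode()
            self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port=8080):
    # Start the proxy in a background thread and return the server handle.
    server = HTTPServer(("127.0.0.1", port), ChatProxyHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

serve() returns the running server so a caller can shut it down; in a real deployment this handler is where the proxy's guardrails, sensitive-data scans, and the LLM call would live.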
// proxy.js
const tracer = require('dd-trace').init({
  llmobs: true
});
const llmobs = tracer.llmobs;

// proxy-specific logic, including guardrails, sensitive data scans, and the LLM call
In your specific applications that orchestrate the ML applications that make calls to the proxy or gateway service, enable LLM Observability with the ML application name, and wrap the proxy call in a task span:
# application.py
import requests

from ddtrace.llmobs import LLMObs

LLMObs.enable(ml_app="my-ml-app")

if __name__ == "__main__":
    with LLMObs.workflow(name="run-chat"):
        # other application-specific logic - RAG steps, parsing, etc.
        with LLMObs.task(name="chat-proxy"):  # wrap the proxy call in a task span
            response = requests.post("http://localhost:8080/chat", json={
                # data to pass to the proxy service
            })
        # other application-specific logic handling the response
// application.js
const tracer = require('dd-trace').init({
  llmobs: {
    mlApp: 'my-ml-app'
  }
});
const llmobs = tracer.llmobs;
const axios = require('axios');

async function main () {
  await llmobs.trace({ name: 'run-chat', kind: 'workflow' }, async () => {
    // other application-specific logic - RAG steps, parsing, etc.
    // wrap the proxy call in a task span
    const response = await llmobs.trace({ name: 'chat-proxy', kind: 'task' }, async () => {
      return await axios.post('http://localhost:8080/chat', {
        // data to pass to the proxy service
      });
    });
    // other application-specific logic handling the response
  });
}
main();
When making requests to the proxy or gateway service, the LLM Observability SDKs automatically propagate the ML application name from the original LLM application. Additionally, the LLM Observability SDKs automatically tag the task span capturing the proxy or gateway service call with the tag ml-proxy:custom.
To observe the tasks performed by a variety of ML applications in the proxy service:
1. Select All Applications from the top-left dropdown.
2. Select the All Spans view in the top-right dropdown.
3. Filter by the ml-proxy:custom tag.
The spans listed are the parent spans of any operations executed by the ML applications. To see all spans from the proxy service, and not just the top-level task spans, you can instead filter by the service tag:
To observe the complete end-to-end usage of an LLM application that makes calls to a proxy or gateway service, you can filter for traces with that ML application name:
1. Select the ML application name from the top-left dropdown.
2. Select the Traces view in the top-right dropdown.