Our quickstart docs make use of the LLM Observability SDK for Python. For detailed usage, see the SDK documentation. If your application is written in another language, you can create traces by calling the API instead.
To run examples from Jupyter notebooks, see the LLM Observability Jupyter Notebooks repository.
Use the steps below to run a simple Python script that generates an LLM Observability trace.
This example requires an OpenAI API key stored in the OPENAI_API_KEY environment variable. To create one, see Account Setup and Set up your API key in the OpenAI documentation.
Install the ddtrace and openai packages:
pip install ddtrace
pip install openai
The Python script below makes a single OpenAI call. Save it as quickstart.py.
quickstart.py
import os
from openai import OpenAI

# Read the OpenAI key from the environment.
oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

completion = oai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
        {"role": "user", "content": "I'd like to buy a chair for my living room."},
    ],
)
Run the Python script with the following shell command, sending a trace of the OpenAI call to Datadog:
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE=<YOUR_DATADOG_SITE> \
DD_LLMOBS_AGENTLESS_ENABLED=1 ddtrace-run python quickstart.py
For details on the required environment variables, see the SDK documentation.
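If you prefer not to pass environment variables on the command line, the SDK can also be enabled in code. The sketch below assumes agentless mode and the LLMObs.enable() helper from the ddtrace SDK; the placeholder values must be replaced with your own credentials.

```python
from ddtrace.llmobs import LLMObs

# Equivalent in-code setup for the DD_LLMOBS_* environment variables above.
LLMObs.enable(
    ml_app="onboarding-quickstart",
    api_key="<YOUR_DATADOG_API_KEY>",
    site="<YOUR_DATADOG_SITE>",
    agentless_enabled=True,
)
```

With this in place at the top of quickstart.py, the script can be run with plain python quickstart.py instead of ddtrace-run.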
A trace of your LLM call should appear in the Traces tab of LLM Observability in Datadog.
The trace you see is composed of a single LLM span. The ddtrace-run command automatically traces LLM calls made through any of Datadog's supported integrations.
If your application involves more elaborate prompting, or chains and workflows of multiple LLM calls, you can trace it using the instrumentation guide and the SDK documentation.