Your application can submit data to LLM Observability in two ways: with LLM Observability’s Python SDK, or with the LLM Observability API.
Each request fulfilled by your application is represented as a trace on the LLM Observability traces page in Datadog:
If you’re new to LLM Observability traces, read the Core Concepts before proceeding to decide which instrumentation options best suit your application.
Datadog provides auto-instrumentation to capture LLM calls for specific LLM provider libraries. However, manually instrumenting your LLM application using the Python SDK can unlock even more of Datadog’s LLM Observability features.
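For reference, running an application under `ddtrace-run` with LLM Observability enabled typically looks like the sketch below. The environment variable names are assumptions based on common Datadog conventions, and `app.py` is a placeholder; verify both against the SDK setup instructions.

```shell
# Run the application under ddtrace-run with LLM Observability enabled.
# Variable names are assumptions; verify against the SDK setup instructions.
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-llm-app \
DD_API_KEY=<YOUR_DATADOG_API_KEY> \
ddtrace-run python app.py
```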
To trace an LLM application:

1. Install the SDK and enable LLM Observability.
2. Create spans to represent the operations of your application.
3. Annotate your spans with input data, output data, metadata (such as `temperature`), metrics (such as `input_tokens`), and key-value tags (such as `version:1.0.0`).

Note: If you enable LLM Observability in code, you do not need to run your application with `ddtrace-run`, as described in those instructions.

To create a span, the LLM Observability SDK provides two options:

- Use `ddtrace.llmobs.decorators.<SPAN_KIND>()` as a decorator on the function you'd like to trace, replacing `<SPAN_KIND>` with the desired span kind.
- Use `ddtrace.llmobs.LLMObs.<SPAN_KIND>()` as a context manager to trace any inline code, replacing `<SPAN_KIND>` with the desired span kind.

The examples below create a workflow span.
```python
from ddtrace.llmobs.decorators import workflow

@workflow
def process_message():
    ...  # user application logic
    return
```
```python
from ddtrace.llmobs import LLMObs

def process_message():
    with LLMObs.workflow() as span:
        ...  # user application logic
    return
```
Starting a new span before the current span finishes automatically creates a parent-child relationship between the two spans. The parent span represents the larger operation, while the child span represents a smaller sub-operation nested within it.
The examples below create a trace with two spans.
```python
from ddtrace.llmobs.decorators import task, workflow

@workflow
def process_message():
    perform_preprocessing()
    ...  # user application logic
    return

@task
def perform_preprocessing():
    ...  # user application logic
    return
```
```python
from ddtrace.llmobs import LLMObs

def process_message():
    with LLMObs.workflow(name="process_message") as workflow_span:
        with LLMObs.task(name="perform_preprocessing") as task_span:
            ...  # user application logic
    return
```
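The parent-child rule can be illustrated with a small stdlib-only sketch. This is a toy tracer, not the ddtrace SDK: a span opened while another span is still active records that active span as its parent.

```python
import contextlib

# Toy tracer (not the ddtrace SDK) illustrating the nesting rule:
# a span started while another span is active becomes its child.
_active = []   # stack of currently open spans
finished = []  # spans in the order they finish

@contextlib.contextmanager
def toy_span(name):
    record = {"name": name, "parent": _active[-1]["name"] if _active else None}
    _active.append(record)
    try:
        yield record
    finally:
        _active.pop()
        finished.append(record)

with toy_span("process_message"):            # no active span: parent is None
    with toy_span("perform_preprocessing"):  # parent: process_message
        pass  # user application logic
```

The inner span finishes first and records `process_message` as its parent, mirroring how the SDK links the task span to the enclosing workflow span.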
To add extra information to a span, such as inputs, outputs, metadata, metrics, or tags, use the LLM Observability SDK's `LLMObs.annotate()` method.
The examples below annotate the workflow span created in the above example:
```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

@workflow(name="process_message")
def process_message():
    ...  # user application logic
    LLMObs.annotate(
        input_data="<ARGUMENT>",
        output_data="<OUTPUT>",
        metadata={},
        metrics={"input_tokens": 15, "output_tokens": 24},
        tags={},
    )
    return
```
```python
from ddtrace.llmobs import LLMObs

def process_message():
    with LLMObs.workflow() as span:
        ...  # user application logic
        LLMObs.annotate(
            span=span,
            input_data="<ARGUMENT>",
            output_data="<OUTPUT>",
            metadata={},
            metrics={"input_tokens": 15, "output_tokens": 24},
            tags={},
        )
    return
```
For more information on alternative tracing methods and tracing features, see the SDK documentation.
Depending on the complexity of your LLM application, you can also track user sessions by providing a `session_id`.
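Conceptually, a `session_id` lets the backend group traces that belong to the same user interaction. A toy illustration (not the ddtrace SDK):

```python
from collections import defaultdict

# Toy illustration (not the ddtrace SDK): traces carrying the same
# session_id are grouped into one session view.
traces = [
    {"root_span": "process_message", "session_id": "session-a"},
    {"root_span": "process_message", "session_id": "session-b"},
    {"root_span": "process_message", "session_id": "session-a"},
]

sessions = defaultdict(list)
for trace in traces:
    sessions[trace["session_id"]].append(trace["root_span"])
```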