You can visualize the interactions and performance data of your LLM applications on the LLM Observability Traces page, where each request fulfilled by your application is represented as a trace.
For more information about traces, see Terms and Concepts. Then decide which instrumentation option best suits your application's needs.
Instrument an LLM application
Datadog provides auto-instrumentation to capture LLM calls for specific LLM provider libraries. However, manually instrumenting your LLM application using the LLM Observability SDK for Python enables access to additional LLM Observability features.
These instructions use the LLM Observability SDK for Python. If your application is running in a serverless environment, follow the serverless setup instructions. If your application is not written in Python, you can complete the steps below with API requests instead of SDK function calls.
Configure the SDK by providing the required environment variables in your application startup command, or programmatically in-code. Ensure you have configured your Datadog API key, Datadog site, and machine learning (ML) app name.
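For example, an environment-variable startup command might look like the following sketch. This assumes agentless mode; `my-ml-app` and `app.py` are placeholder names, and `<YOUR_API_KEY>` should be replaced with your own key:

```shell
# Enable LLM Observability and identify the ML app, then launch the
# application under ddtrace instrumentation.
DD_SITE=datadoghq.com \
DD_API_KEY=<YOUR_API_KEY> \
DD_LLMOBS_ENABLED=1 \
DD_LLMOBS_AGENTLESS_ENABLED=1 \
DD_LLMOBS_ML_APP=my-ml-app \
ddtrace-run python app.py
```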
Trace an LLM application
To trace an LLM application:
Create spans in your LLM application code to represent your application’s operations. For more information about spans, see Terms and Concepts.
Annotate your spans with input data, output data, metadata (such as temperature), metrics (such as input_tokens), and key-value tags (such as version:1.0.0).
To create a span, the LLM Observability SDK provides two options: using a function decorator or using a context manager inline.
Using a function decorator is the preferred method. Using a context manager is more advanced and allows more fine-grained control over tracing.
Decorators
Use ddtrace.llmobs.decorators.<SPAN_KIND>() as a decorator on the function you’d like to trace, replacing <SPAN_KIND> with the desired span kind.
Inline
Use ddtrace.llmobs.LLMObs.<SPAN_KIND>() as a context manager to trace any inline code, replacing <SPAN_KIND> with the desired span kind.
The examples below create a workflow span.
from ddtrace.llmobs.decorators import workflow

@workflow
def extract_data(document):
    ...  # LLM-powered workflow that extracts structured data from a document
    return
from ddtrace.llmobs import LLMObs

def extract_data(document):
    with LLMObs.workflow(name="extract_data") as span:
        ...  # LLM-powered workflow that extracts structured data from a document
        return
Annotating spans
To add extra information to a span such as inputs, outputs, metadata, metrics, or tags, use the LLM Observability SDK’s LLMObs.annotate() method.
The examples below annotate the workflow span created in the example above:
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

@workflow
def extract_data(document: str, generate_summary: bool):
    extracted_data = ...  # user application logic
    LLMObs.annotate(
        input_data=document,
        output_data=extracted_data,
        metadata={"generate_summary": generate_summary},
        tags={"env": "dev"},
    )
    return extracted_data
from ddtrace.llmobs import LLMObs

def extract_data(document: str, generate_summary: bool):
    with LLMObs.workflow(name="extract_data") as span:
        ...  # user application logic
        extracted_data = ...  # user application logic
        LLMObs.annotate(
            input_data=document,
            output_data=extracted_data,
            metadata={"generate_summary": generate_summary},
            tags={"env": "dev"},
        )
        return extracted_data
Nesting spans
Starting a new span before the current span is finished automatically creates a parent-child relationship between the two spans. The parent span represents the larger operation, while the child span represents a smaller nested sub-operation within it.
The examples below create a trace with two spans.
from ddtrace.llmobs.decorators import task, workflow

@workflow
def extract_data(document):
    preprocess_document(document)
    ...  # performs data extraction on the document
    return

@task
def preprocess_document(document):
    ...  # preprocesses a document for data extraction
    return
from ddtrace.llmobs import LLMObs

def extract_data():
    with LLMObs.workflow(name="extract_data") as workflow_span:
        with LLMObs.task(name="preprocess_document") as task_span:
            ...  # preprocesses a document for data extraction
        ...  # performs data extraction on the document
        return
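The nesting behavior can be illustrated with a minimal toy tracer. This is a hypothetical sketch, not the ddtrace implementation: it keeps a stack of open spans, so any span started while another is unfinished becomes that span's child.

```python
# Toy tracer sketch: a stack of open spans assigns each new span the
# currently open span as its parent (hypothetical, for illustration only).
from contextlib import contextmanager

_open_spans = []   # stack of currently unfinished spans
finished = []      # finished spans, in the order they complete

@contextmanager
def span(name):
    record = {
        "name": name,
        "parent": _open_spans[-1]["name"] if _open_spans else None,
    }
    _open_spans.append(record)
    try:
        yield record
    finally:
        _open_spans.pop()
        finished.append(record)

with span("extract_data"):
    with span("preprocess_document"):
        pass  # started before "extract_data" finished, so it becomes a child

# finished[0] is {"name": "preprocess_document", "parent": "extract_data"}
# finished[1] is {"name": "extract_data", "parent": None}
```

The child span finishes (and is recorded) before its parent, which is why it appears first in `finished`.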
For more information on alternative tracing methods and tracing features, see the SDK documentation.
Advanced tracing
Depending on the complexity of your LLM application, you can also: