LLM Observability Terms and Concepts

Overview

The LLM Observability UI provides many tools to troubleshoot conversation performance and correlate data throughout the product, enabling you to find and resolve issues in your large language model (LLM) applications.

  • Spans: A span is a unit of work representing an operation in your LLM application, and is the building block of a trace.
  • Traces: A trace represents the work involved in processing a request in your LLM application, and consists of one or more nested spans. A root span is the first span in a trace, and marks the beginning and end of the trace.
  • Evaluations: Evaluations are a method for measuring the performance of your LLM application. For example, quality checks like failure to answer or topic relevancy are different types of evaluations that you can track for your LLM application.

Spans

A span consists of the following attributes:

  • Name
  • Start time and duration
  • Error type, message, and traceback
  • Inputs and outputs, such as LLM prompts and completions
  • Metadata (for example, LLM parameters such as temperature, max_tokens)
  • Metrics, such as input_tokens and output_tokens
  • Tags
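
For illustration, here is a minimal sketch of setting several of these attributes with the Python SDK's LLMObs.annotate method; the span name and all values are hypothetical:

    from ddtrace.llmobs import LLMObs

    # Hypothetical task span created inline; annotate() attaches inputs,
    # outputs, metadata, metrics, and tags to it. An exception raised inside
    # the block is recorded as the span's error type, message, and traceback.
    with LLMObs.task(name="clean_input") as span:
        LLMObs.annotate(
            span=span,
            input_data="raw user text",
            output_data="cleaned user text",
            metadata={"step": "sanitization"},
            metrics={"input_tokens": 12, "output_tokens": 38},
            tags={"env": "staging"},
        )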

Span kinds

LLM Observability categorizes spans by their span kind, which defines the type of work the span is performing. This gives you more granular insight into the operations your LLM application performs.

LLM Observability supports the following span kinds:

  • LLM: A call to an LLM. Valid root span: yes. Example: a call to a model, such as OpenAI GPT-4.
  • Workflow: Any predetermined sequence of operations that includes LLM calls and any surrounding contextual operations. Valid root span: yes. Example: a service that takes a URL and returns a summary of the page, requiring a tool call to fetch the page, some text processing tasks, and an LLM summarization.
  • Agent: A series of decisions and operations made by an autonomous agent, usually consisting of nested workflow, LLM, tool, and task calls. Valid root span: yes. Example: a chatbot that answers customer questions.
  • Tool: A call to a program or service where the call arguments are generated by an LLM. Valid root span: no. Example: a call to a web search API or calculator.
  • Task: A standalone step that does not involve a call to an external service. Valid root span: no. Example: a data preprocessing step.
  • Embedding: A call to a model or function that returns an embedding. Valid root span: no. Example: a call to text-embedding-ada-002.
  • Retrieval: A data retrieval operation from an external knowledge base. Valid root span: no. Example: a call to a vector database that returns an array of ranked documents.

For instructions on creating spans from your application, including code examples, see Tracing spans in the LLM Observability SDK for Python documentation.

LLM span

LLM spans represent a call to an LLM where inputs and outputs are represented as text.

A trace can contain a single LLM span, in which case the trace represents an LLM inference operation.

LLM spans typically do not have child spans, as they are standalone operations representing a direct call to an LLM.
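
As a sketch, assuming the Python SDK's llm decorator and a placeholder in place of a real provider call:

    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import llm

    @llm(model_name="gpt-4", model_provider="openai")
    def ask_model(question):
        answer = "..."  # placeholder for a real provider call
        # Record the prompt and completion as role/content messages.
        LLMObs.annotate(
            input_data=[{"role": "user", "content": question}],
            output_data=[{"role": "assistant", "content": answer}],
        )
        return answer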

Workflow span

Workflow spans represent any static sequence of operations. Use workflows to group together an LLM call with its supporting contextual operations, such as tool calls, data retrievals, and other tasks.

Workflow spans are frequently the root span of a trace consisting of a standard sequence. For example, a function might take an arXiv paper link and return a summary. This process might involve a tool call to fetch the paper, some text-processing tasks, and an LLM summarization.

Workflow spans may have any spans as children, which represent child steps in the workflow sequence.
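
A minimal sketch of the arXiv example above, assuming hypothetical fetch_paper, chunk_text, and summarize helpers; each decorated call becomes a child span of the workflow:

    from ddtrace.llmobs.decorators import workflow, task, tool, llm

    @tool
    def fetch_paper(url):
        return "..."  # hypothetical HTTP fetch of the paper

    @task
    def chunk_text(text):
        return text[:4000]  # hypothetical text-processing step

    @llm(model_name="gpt-4", model_provider="openai")
    def summarize(text):
        return "..."  # placeholder for a real model call

    @workflow
    def summarize_paper(url):
        # A static sequence: fetch, preprocess, summarize.
        return summarize(chunk_text(fetch_paper(url)))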

Agent span

Agent spans represent a dynamic sequence of operations where a large language model determines and executes operations based on the inputs. For example, an agent span might represent a series of reasoning steps controlled by a ReAct agent.

Agent spans are frequently the root span for traces representing autonomous agents or reasoning agents.

Agent spans may have any spans as children, which represent child steps orchestrated by a reasoning engine.
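
A sketch of an agent span wrapping a ReAct-style loop; the planning and tool helpers are hypothetical:

    from ddtrace.llmobs.decorators import agent, llm, tool

    @llm(model_name="gpt-4", model_provider="openai")
    def plan_next_step(history):
        # Hypothetical model-driven decision: either act or finish.
        return {"action": "finish", "answer": "..."}

    @tool
    def run_tool(step):
        return "..."  # hypothetical tool execution

    @agent
    def answer_question(question):
        history = [question]
        # The model decides the next operation on each iteration, so the
        # sequence of child spans is dynamic rather than predetermined.
        while True:
            step = plan_next_step(history)
            if step["action"] == "finish":
                return step["answer"]
            history.append(run_tool(step))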

Tool span

Tool spans represent a standalone step in a workflow or agent that involves a call to an external program or service, such as a web API or database.

Tool spans typically do not have child spans, as they are standalone operations representing a tool execution.
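
A sketch, assuming a hypothetical web search helper:

    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import tool

    @tool
    def web_search(query):
        results = "..."  # placeholder for a call to a web search API
        LLMObs.annotate(input_data={"query": query}, output_data=results)
        return results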

Task span

Task spans represent a standalone step in a workflow or agent that does not involve a call to an external service, such as a data sanitization step before a prompt is submitted to an LLM.

Task spans typically do not have child spans, as they are standalone steps in the workflow or agent.
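
A sketch of a task span around a hypothetical sanitization step:

    from ddtrace.llmobs.decorators import task

    @task(name="sanitize_input")
    def sanitize_input(text):
        # Hypothetical cleanup before the prompt is sent to the LLM; no
        # external service is called, so a task span fits here.
        return text.replace("\x00", "").strip()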

Embedding span

Embedding spans are a subcategory of tool spans and represent a standalone call to an embedding model or function to create an embedding. For example, an embedding span could be used to trace a call to OpenAI’s embedding endpoint.

Embedding spans can have task spans as children, but typically do not have children.
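
A sketch, with a placeholder vector standing in for the provider's response; the document-style input and summary-string output shown here are assumptions about how embedding spans are annotated:

    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import embedding

    @embedding(model_name="text-embedding-ada-002", model_provider="openai")
    def embed(text):
        vector = [0.0] * 1536  # placeholder for the provider's embedding
        LLMObs.annotate(
            input_data=[{"text": text}],  # the document(s) being embedded
            output_data="[1 embedding returned with size 1536]",
        )
        return vector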

Retrieval span

Retrieval spans are a subcategory of tool spans and represent a vector search operation involving a list of documents being returned from an external knowledge base. For example, a retrieval span could be used to trace a similarity search to a vector store to collect relevant documents for augmenting a user prompt for a given topic.

When used alongside embedding spans, retrieval spans can provide visibility into retrieval augmented generation (RAG) operations.

Retrieval spans typically do not have child spans, as they represent a standalone retrieval step.
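
A sketch, with a hypothetical vector-store lookup returning ranked documents:

    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import retrieval

    @retrieval
    def search_docs(query):
        hits = [{"text": "...", "name": "doc-1", "id": "1", "score": 0.87}]
        # Annotate the query and the ranked documents that came back.
        LLMObs.annotate(input_data=query, output_data=hits)
        return hits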

Traces

LLM Observability supports LLM applications of varying complexity. Based on the structure and complexity of your traces, you can use the following features of LLM Observability:

LLM Inference Monitoring

LLM inference traces are composed of a single LLM span.

(Image: a trace consisting of a single LLM span)

Tracing individual LLM inferences unlocks basic LLM Observability features, allowing you to:

  1. Track inputs and outputs to your LLM calls.
  2. Track token usage, error rates, and latencies for your LLM calls.
  3. Break down important metrics by model and model provider.

For a detailed example, see the LLM Monitoring Jupyter notebook, which demonstrates how to create and trace an LLM call.

The SDK provides integrations to automatically capture LLM calls to specific providers. See Auto-instrumentation for more information. If you are using an LLM provider that is not supported, you must manually instrument your application.
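
As a sketch, in-code enablement might look like the following; the ml_app and site values are placeholders, and the same settings can come from environment variables such as DD_API_KEY and DD_SITE:

    from ddtrace.llmobs import LLMObs

    # Enable LLM Observability for this process. After this call, spans from
    # supported provider integrations (for example, the openai library) are
    # captured automatically; unsupported providers need the manual
    # decorators or context managers shown above.
    LLMObs.enable(
        ml_app="my-llm-app",     # placeholder application name
        site="datadoghq.com",    # placeholder Datadog site
        agentless_enabled=True,  # send directly, without a local Agent
    )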

LLM Workflow Monitoring

A workflow trace is composed of a root workflow span with nested LLM, task, tool, embedding, and retrieval spans.

(Image: a trace visualizing a more complex LLM workflow)

Most LLM applications include operations that surround LLM calls and play a large role in your overall application performance: for example, tool calls to external APIs or preprocessing task steps.

By tracing LLM calls and contextual task or tool operations together under workflow spans, you can unlock more granular insights and a more holistic view of your LLM application.
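
For example, a minimal sketch of a RAG workflow that nests a retrieval span and an LLM span under one workflow span (all helpers hypothetical):

    from ddtrace.llmobs.decorators import workflow, retrieval, llm

    @retrieval
    def search_docs(query):
        return [{"text": "...", "id": "1", "score": 0.9}]  # hypothetical search

    @llm(model_name="gpt-4", model_provider="openai")
    def ask_model(prompt):
        return "..."  # placeholder for a real model call

    @workflow(name="rag_answer")
    def rag_answer(question):
        # The retrieval and LLM calls become children of this workflow span.
        docs = search_docs(question)
        context = "\n".join(d["text"] for d in docs)
        return ask_model(context + "\n\n" + question)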

For detailed examples, see the LLM Monitoring Jupyter notebooks, which demonstrate how to create and trace a complex, static series of steps involving a tool call and an LLM call, and how to create, trace, and evaluate a RAG workflow.

LLM Agent Monitoring

An agent monitoring trace is composed of a root agent span with nested LLM, task, tool, embedding, retrieval, and workflow spans.

(Image: a trace visualizing an LLM agent)

If your LLM application has complex autonomous logic, such as decision-making that can’t be captured by a static workflow, you are likely using an LLM Agent. Agents may execute multiple different workflows depending on the user input.

You can instrument your LLM application to trace and group together all workflows and contextual operations run by a single LLM agent as an agent trace.
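
As a sketch, an agent span can sit above several workflows so that everything the agent runs lands in a single agent trace; the routing logic here is hypothetical:

    from ddtrace.llmobs.decorators import agent, workflow

    @workflow
    def billing_workflow(question):
        return "..."  # hypothetical static billing sequence

    @workflow
    def support_workflow(question):
        return "..."  # hypothetical static support sequence

    @agent
    def customer_chatbot(question):
        # Each chosen workflow becomes a child span of the agent span, so
        # the whole interaction is grouped into one trace.
        if "invoice" in question.lower():
            return billing_workflow(question)
        return support_workflow(question)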

For a detailed example, see the LLM Monitoring Jupyter notebook which demonstrates how to create and trace an LLM-powered agent that calls tools and makes decisions based on the data.

Evaluations

LLM Observability offers out-of-the-box evaluations and quality checks to evaluate the quality, safety, and effectiveness of your LLM conversations. With evaluations, you can understand the performance of conversations and enhance your LLM application’s responses. This improves the user experience and ensures valuable, accurate outputs.

Datadog provides a variety of options for your evaluations, including the out-of-the-box evaluations described above and custom evaluations that you submit with the LLM Observability SDK or API.
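
For custom evaluations, here is a minimal sketch of attaching an evaluation to the active span with the Python SDK; the label and value are hypothetical:

    from ddtrace.llmobs import LLMObs
    from ddtrace.llmobs.decorators import llm

    @llm(model_name="gpt-4", model_provider="openai")
    def answer(question):
        response = "..."  # placeholder for a real model call
        # Export the active span's context so the evaluation can be joined
        # to it, then submit a categorical evaluation for that span.
        span_context = LLMObs.export_span(span=None)
        LLMObs.submit_evaluation(
            span_context=span_context,
            label="topic_relevancy",    # hypothetical evaluation label
            metric_type="categorical",  # "categorical" or "score"
            value="relevant",
        )
        return response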

In addition, Datadog’s Sensitive Data Scanner is natively integrated with LLM Observability, so you can ensure any sensitive data in your input and output is scanned and redacted.

Further Reading
