---
title: Configuration
description: Learn how to configure topics and evaluations for your LLM applications on the Configuration page.
---
You can configure your LLM applications on the Configuration page with settings that optimize your application's performance and security.
Select an LLM application set up with LLM Observability to start customizing its topics and evaluations.
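If your application is not yet reporting to LLM Observability, it is typically instrumented through one of Datadog's SDKs before it appears on this page. The following is a minimal sketch using the Python `ddtrace` SDK; the application name `my-llm-app`, the `answer_question` function, and the model details are illustrative placeholders, not values from this page.

```python
# A minimal sketch of instrumenting an app for LLM Observability
# with Datadog's Python SDK (ddtrace). Names below are placeholders;
# adjust them for your own application.
import os

from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm

# Enable LLM Observability; "my-llm-app" (a hypothetical name) is the
# application that would then be selectable on the Configuration page.
LLMObs.enable(
    ml_app="my-llm-app",
    api_key=os.environ["DD_API_KEY"],  # your Datadog API key
    site="datadoghq.com",              # your Datadog site
    agentless_enabled=True,            # send directly, without a local Agent
)

@llm(model_name="gpt-4o-mini", model_provider="openai")
def answer_question(question: str) -> str:
    # Replace with a real model call; annotate() records the
    # input/output pair that evaluations later assess.
    answer = "..."  # model response placeholder
    LLMObs.annotate(input_data=question, output_data=answer)
    return answer
```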
Enabling any of the out-of-the-box evaluations other than Language Mismatch shares your input and output with OpenAI.
To enter a topic, click the Edit icon and add keywords. For example, for an LLM application designed for incident management, add `observability`, `software engineering`, or `incident resolution`.
Topics can contain multiple words and should be as specific and descriptive as possible. For example, if your application handles customer inquiries for an e-commerce store, you can use “Customer questions about purchasing furniture on an e-commerce store”.
To enable evaluations, click the toggle for each evaluation you want to assess your LLM application against in the **Quality** and **Security and Safety** sections. For more information about evaluations, see Terms and Concepts.
Enabling evaluations results in your prompt-response data being shared with OpenAI. Under the zero data retention (ZDR) policy, OpenAI does not use any data sent from Datadog for training purposes.
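In addition to the built-in toggles above, evaluation results can also be attached to spans programmatically. The sketch below assumes the `ddtrace` SDK's custom evaluations API (`LLMObs.export_span` and `LLMObs.submit_evaluation`); the `user_feedback` label and the score value are hypothetical examples, not part of this page's built-in evaluations.

```python
# A hedged sketch of submitting a custom evaluation for the current
# LLM span with ddtrace. The label and value are illustrative.
from ddtrace.llmobs import LLMObs

def record_user_feedback(score: float) -> None:
    # Export the active span's context (trace/span IDs) so the
    # evaluation can be joined to that span in LLM Observability.
    span_context = LLMObs.export_span(span=None)
    LLMObs.submit_evaluation(
        span_context=span_context,
        label="user_feedback",  # hypothetical evaluation name
        metric_type="score",    # "score" or "categorical"
        value=score,
    )
```

Custom metrics submitted this way appear alongside the out-of-the-box evaluations on the spans they are attached to.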
Additional helpful documentation, links, and articles: