The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler.
For a summary of the minimum and recommended runtime and tracer versions across all languages, read Supported Language and Tracer Versions.
The Datadog Profiler requires Python 2.7+.
The following profiling features are available depending on your Python version. For more details, read Profile Types:
| Feature | Supported Python versions |
|---|---|
| Wall time profiling | Python 2.7+ |
| CPU time profiling | Python 2.7+ on POSIX platforms |
| Exception profiling | Python 3.7+ on POSIX platforms |
| Lock profiling | Python 2.7+ |
| Memory profiling | Python 3.5+ |
The installation requires pip version 18 or above.
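To check your current pip version and upgrade it if needed, you can run the following (a minimal sketch; adjust for your Python environment):
python -m pip --version
python -m pip install --upgrade pip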
The following profiling features are available starting with these minimum versions of the dd-trace-py library:
| Feature | Required dd-trace-py version |
|---|---|
| Code Hotspots | 0.44.0+ |
| Endpoint Profiling | 0.54.0+ |
| Timeline | 2.10.5+ |
Continuous Profiler has beta support for some serverless platforms, such as AWS Lambda.
Ensure Datadog Agent v6+ is installed and running. Datadog recommends using Datadog Agent v7+.
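To verify which Agent version a host is running, you can query the Agent's CLI (assuming the datadog-agent binary is on your PATH):
datadog-agent version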
Install ddtrace, which provides both tracing and profiling functionality:
pip install ddtrace
Note: Profiling requires the ddtrace library version 0.40+.
If you are using a platform where a ddtrace binary distribution is not available, first install a development environment.
For example, on Alpine Linux, this can be done with:
apk add gcc musl-dev linux-headers
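On Debian-based images, the equivalent build dependencies can typically be installed with apt; the package names below are an assumption about your base image:
apt-get update && apt-get install -y gcc python3-dev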
To automatically profile your code, set the DD_PROFILING_ENABLED environment variable to true when you use ddtrace-run:
DD_PROFILING_ENABLED=true \
DD_ENV=prod \
DD_SERVICE=my-web-app \
DD_VERSION=1.0.3 \
ddtrace-run python app.py
See Configuration for more advanced usage.
Optionally, set up Source Code Integration to connect your profiling data with your Git repositories.
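If your environment does not expose Git metadata automatically, you can supply it through the DD_GIT_REPOSITORY_URL and DD_GIT_COMMIT_SHA environment variables; the repository URL below is a placeholder:
DD_GIT_REPOSITORY_URL=https://github.com/your-org/my-web-app \
DD_GIT_COMMIT_SHA=$(git rev-parse HEAD) \
DD_PROFILING_ENABLED=true \
ddtrace-run python app.py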
After a couple of minutes, visualize your profiles on the Datadog APM > Profiler page.
If you want to manually control the lifecycle of the profiler, use the ddtrace.profiling.Profiler object:
from ddtrace.profiling import Profiler
prof = Profiler(
env="prod", # if not specified, falls back to environment variable DD_ENV
service="my-web-app", # if not specified, falls back to environment variable DD_SERVICE
version="1.0.3", # if not specified, falls back to environment variable DD_VERSION
)
prof.start() # Should be as early as possible, e.g., before other imports, to ensure everything is profiled
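If you also need to stop the profiler explicitly, for example during a graceful shutdown, you can register the profiler's stop() method to run at exit (a minimal sketch, assuming the prof object created above):
import atexit

# Flush outstanding profiles and stop the profiler when the process exits
atexit.register(prof.stop)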
When your process forks using os.fork, the profiler needs to be started in the child process. In Python 3.7+, this is done automatically. In Python < 3.7, you need to manually start a new profiler in your child process:
# For ddtrace-run users, call this in your child process
ddtrace.profiling.auto.start_profiler() # Should be as early as possible, e.g., before other imports, to ensure everything is profiled
# Alternatively, for manual instrumentation,
# create a new profiler in your child process:
from ddtrace.profiling import Profiler
prof = Profiler(...)
prof.start() # Should be as early as possible, e.g., before other imports, to ensure everything is profiled
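As a concrete illustration of the manual path, a parent process can start a fresh profiler immediately after os.fork() returns in the child (a minimal sketch; the child workload is a placeholder):
import os
from ddtrace.profiling import Profiler

pid = os.fork()
if pid == 0:
    # Child process: start a new profiler before doing any other work
    prof = Profiler()
    prof.start()
    # ... run the child workload here ...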
You can configure the profiler using environment variables.
The Python profiler supports code provenance reporting, which provides insight into the library that is running the code. While this is disabled by default, you can turn it on by setting DD_PROFILING_ENABLE_CODE_PROVENANCE=1.
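For example, profiling and code provenance reporting can both be enabled from the command line; the service name and version below are placeholders:
DD_PROFILING_ENABLED=true \
DD_PROFILING_ENABLE_CODE_PROVENANCE=1 \
DD_SERVICE=my-web-app \
DD_VERSION=1.0.3 \
ddtrace-run python app.py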
The Getting Started with Profiler guide takes a sample service with a performance problem and shows you how to use Continuous Profiler to understand and fix the problem.