Go to APM -> Profiles and select a service to view its profiles. Select a profile type to view different resources (for example, CPU, Memory, Exception, and I/O).
You can filter by infrastructure tags or by application tags set up from your environment tracing configuration. By default, the following facets are available:
Facet | Definition |
---|---|
Env | The environment your application is running in (`production`, `staging`). |
Service | The name of the service running your code. |
Version | The version of your code. |
Host | The hostname your profiled process is running on. |
Runtime | The type of runtime the profiled process runs in (`JVM`, `CPython`). |
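The `Env`, `Service`, and `Version` facets above typically come from unified service tagging on the profiled process. As a sketch, the standard Datadog environment variables look like this (the values shown are illustrative, not defaults):

```shell
# Unified service tagging: these populate the Env, Service, and Version
# facets for both traces and profiles. Values below are examples only.
export DD_ENV=production
export DD_SERVICE=my-web-service
export DD_VERSION=1.4.2
```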
The following measures are available:
Measure | Definition |
---|---|
CPU | CPU usage, measured in cores. |
Memory Allocation | Memory allocation rate over the course of the profile. This value can be above the amount of memory on your system because allocated memory can be garbage collected during the profile. |
Wall time | The elapsed time used by the code. Elapsed time includes time when code is running on CPU, waiting for I/O, and anything else that happens while it is running. |
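The difference between CPU time and wall time in the table above can be illustrated with Python's standard clocks. This is a standalone sketch, not part of the profiler: sleeping (like waiting for I/O) consumes wall time but almost no CPU time, while a busy loop consumes both.

```python
import time

def io_like_work():
    """Simulates I/O-bound work: sleeping uses wall time but almost no CPU."""
    time.sleep(0.2)

def cpu_like_work():
    """Simulates CPU-bound work: a busy loop uses both wall and CPU time."""
    total = 0
    for i in range(2_000_000):
        total += i
    return total

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time of this process

io_like_work()
cpu_like_work()

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# Wall time includes the 0.2 s sleep; CPU time does not.
print(f"wall: {wall_elapsed:.3f}s, cpu: {cpu_elapsed:.3f}s")
```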
For each runtime, a broader set of metrics is also available, which you can view as timeseries.
In the Profiles tab, you can see all profile types available for a given language. Depending on the language, the information collected about your profile differs. See Profile types for a list of profile types available for each language.
The flame graph is the default visualization for Continuous Profiler. It shows how much CPU each method used (since this is a CPU profile) and how each method was called.
For example, starting from the first row in the previous image, `Thread.run()` called `ThreadPoolExecutor$Worker.run()`, which called `ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)`, and so on.
The width of a frame represents how much of the total CPU it consumed. On the right, you can see a CPU time by Method top list that only accounts for self time, which is the time a method spent on CPU without calling another method.
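The self-time notion used by the top list can be sketched in a few lines: a frame's total time includes time spent in its callees, while its self time excludes them. The frame names and durations below are hypothetical, purely to show the arithmetic.

```python
# Minimal sketch (not Datadog code): self time = total time of a frame
# minus the total time of its direct children.

def self_time(frame):
    """Time the frame spent on CPU without calling another method."""
    return frame["total"] - sum(child["total"] for child in frame["children"])

# Hypothetical CPU profile: the root spends most of its time in callees.
root = {
    "name": "Thread.run",
    "total": 100,  # ms on CPU, including children
    "children": [
        {"name": "parseRequest", "total": 30, "children": []},
        {"name": "renderResponse", "total": 65, "children": []},
    ],
}

print(self_time(root))  # 100 - (30 + 65) = 5 ms of self time
```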
Flame graphs can be included in Dashboards and Notebooks with the Profiling Flame Graph Widget.
By default, profiles are uploaded once a minute. Depending on the language, each process is profiled for between 15 and 60 seconds.
To view a specific profile, set the Visualize as option to Profile List and click an item in the list:
The header contains information associated with your profile, like the service that generated it, or the environment and code version associated with it.
Four tabs are below the profile header:
Tab | Definition |
---|---|
Profiles | A flame graph and summary table of the profile you are looking at. You can switch between profile types (for example, `CPU`, `Memory allocation`). |
Analysis | A set of heuristics that suggest potential issues or areas of improvement in your code. Only available for Java. |
Metrics | Profiler metrics coming from all profiles of the same service. |
Runtime Info | Runtime properties in supported languages, and profile tags. |
Note: In the upper right corner of each profile, there are options to:
The timeline view is an alternative to the flame graph that shows time-based patterns and work distribution over the period of a single profile, a single process in the profiling explorer, or a trace.
Compared to the flame graph, the timeline view can help you:
To access the timeline view:
Depending on the runtime and language, the timeline lanes vary:
See prerequisites to learn how to enable this feature for Python.
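As a sketch of what the Python setup typically involves (confirm the exact settings against the linked prerequisites), the profiler is commonly enabled through `ddtrace`. The timeline-specific variable name and the `my_app.py` entry point below are assumptions, not confirmed by this page:

```shell
# Enable the Datadog Python profiler via ddtrace-run.
export DD_PROFILING_ENABLED=true
# Assumed name of the timeline setting; verify in the prerequisites docs.
export DD_PROFILING_TIMELINE_ENABLED=true
# my_app.py is a placeholder for your application's entry point.
ddtrace-run python my_app.py
```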
Each lane represents a thread. Threads from a common pool are grouped together. You can expand the pool to view details for each thread.