If your application shows performance problems in production, integrating distributed tracing with code-level stack trace data from profiling is a powerful way to identify performance bottlenecks. Application processes that have both APM distributed tracing and the continuous profiler enabled are automatically linked.
You can move directly from span information to profiling data on the Profiles tab and find the specific lines of code related to performance issues. You can also debug slow and resource-consuming endpoints directly in the Profiling UI.
The Trace to Profiling integration is enabled by default when you turn on profiling for your Java service on Linux and macOS. The feature is not available on Windows.
For manually instrumented code, the continuous profiler requires scope activation of spans:
```java
final Span span = tracer.buildSpan("ServiceHandlerSpan").start();
try (final Scope scope = tracer.activateSpan(span)) {
    // Scope activation is mandatory for the Datadog continuous profiler to link profiling data with the span.
    // ... worker thread implementation ...
} finally {
    // Finish the span when the work is complete.
    span.finish();
}
```
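Activating the span in the try-with-resources block makes it the active span on the current thread while the work runs, which is what allows the profiler to attribute its samples to that span. A span that is only started and finished, without being activated, is still traced but is not linked to profiling data.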
The Trace to Profiling integration is enabled by default when you turn on profiling for your Node.js service on Linux and macOS. The feature is not available on Windows.
Requires dd-trace-js 5.11.0+, 4.35.0+, or 3.56.0+ (the minimum version for each supported release line).
Setting these variables will record up to 1 minute (or 5 MiB) of execution tracing data every 15 minutes.
You can find this data:
In the Profile List by adding go_execution_traced:yes to your search query. Click on a profile to view the Profile Timeline. To go even deeper, download the profile and use go tool trace or gotraceui to view the contained go.trace files.
In the Trace Explorer by adding @go_execution_traced:yes (note the @) to your search query. Click on a span and then select the Profiles tab to view the Span Timeline.
While execution traces are being recorded, your application may see an increase in CPU usage similar to that of a garbage collection. This should not have a significant impact for most applications, and Go 1.21 includes patches that eliminate this overhead.
This capability requires dd-trace-go version 1.37.0+ (1.52.0+ for timeline view) and works best with Go version 1.18 or later (1.21 or later for timeline view).
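As with the other runtimes, spans and profiles are linked only when the tracer and the continuous profiler run in the same process. Below is a minimal sketch of that setup with dd-trace-go v1; the service name and profile types are illustrative placeholders, and execution trace recording itself is controlled by the environment variables mentioned above.

```go
package main

import (
	"log"

	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
	"gopkg.in/DataDog/dd-trace-go.v1/profiler"
)

func main() {
	// Start the tracer so requests produce spans.
	tracer.Start(tracer.WithService("my-service")) // "my-service" is a placeholder
	defer tracer.Stop()

	// Start the continuous profiler in the same process so spans and
	// profiles from this process can be linked.
	if err := profiler.Start(
		profiler.WithService("my-service"),
		profiler.WithProfileTypes(profiler.CPUProfile, profiler.HeapProfile),
	); err != nil {
		log.Fatal(err)
	}
	defer profiler.Stop()

	// ... run your application code here ...
}
```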
See prerequisites to learn how to enable this feature for Python.
Each lane represents a thread. Threads from a common pool are grouped together. You can expand the pool to view details for each thread.
Each lane represents a goroutine. This includes the goroutine that started the selected span, as well as any goroutines it created and their descendants. Goroutines created by the same go statement are grouped together. You can expand the group to view details for each goroutine.
Lanes on top are runtime activities that may add extra latency. They can be unrelated to the request itself.
See prerequisites to learn how to enable this feature for Ruby.
Each lane represents a thread. Threads from a common pool are grouped together. You can expand the pool to view details for each thread.
Each lane represents a thread. Threads with the same name are grouped together. You can expand a group to view details for each thread. Note that threads that are explicitly created by code are grouped under Managed Threads.
Lanes on top are runtime activities that may add extra latency. They can be unrelated to the request itself.
See prerequisites to learn how to enable this feature for Node.js.
There is one lane for the JavaScript thread.
Lanes on the top are garbage collector runtime activities that may add extra latency to your request.
See prerequisites to learn how to enable this feature for PHP.
There is one lane for each PHP thread (in PHP NTS, there is only one lane). Fibers that run in a thread are represented in the same lane as that thread.
Lanes on the top are runtime activities, such as file compilation and garbage collection, that may add extra latency to your request.
From the timeline, click Open in Profiling to see the same data on a new page. From there, you can change the visualization to a flame graph.
Click the Focus On selector to define the scope of the data:
Span & Children scopes the profiling data to the selected span and all descendant spans in the same service.
Span only scopes the profiling data to the selected span, excluding its descendants.
Span time period scopes the profiling data to all threads during the time period the span was active.
Full profile scopes the data to 60 seconds of the whole service process that executed the selected span.
Endpoint profiling allows you to scope your flame graphs by any endpoint of your web service to find endpoints that are slow, latency-heavy, and causing poor end-user experience. Such endpoints can be tricky to debug because it is hard to see why they are slow; the slowness is often caused by unintended resource consumption, such as the endpoint burning a large number of CPU cycles.
With endpoint profiling you can:
Identify the bottleneck methods that are slowing down your endpoint’s overall response time.
Isolate the top endpoints responsible for the consumption of valuable resources such as CPU, memory, or exceptions. This is particularly helpful when you are trying to optimize your service for performance gains.
Understand whether third-party code or runtime libraries are the reason your endpoints are slow or resource-heavy.
It is useful to track the top endpoints that consume valuable resources such as CPU and wall time. The list can help you identify whether your endpoints have regressed, or whether newly introduced endpoints are consuming drastically more resources and slowing down your overall service.
The following image shows that GET /store_history is periodically impacting this service by consuming 20% of its CPU and 50% of its allocated memory:
Select Per endpoint call to see behavior changes even as traffic shifts over time. This is useful for progressive rollout sanity checks or analyzing daily traffic patterns.
The following example shows that CPU per request increased for GET /train: