Supported languages:
Language | Version |
---|---|
Python 2 | >= 2.7 |
Python 3 | >= 3.6 |
Supported test frameworks:
Test Framework | Version |
---|---|
pytest | >= 3.0.0 |
pytest-benchmark | >= 3.1.0 |
unittest | >= 3.7 (Python version) |
To report test results to Datadog, you need to configure the Datadog Python library:
We support auto-instrumentation for the following CI providers:
CI Provider | Auto-Instrumentation method |
---|---|
GitHub Actions | Datadog Test Visibility GitHub Action |
Jenkins | UI-based configuration with Datadog Jenkins plugin |
GitLab | Datadog Test Visibility GitLab Script |
CircleCI | Datadog Test Visibility CircleCI Orb |
If you are using auto-instrumentation for one of these providers, you can skip the rest of the setup steps below.
If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use Agentless mode by setting the following environment variables:
DD_CIVISIBILITY_AGENTLESS_ENABLED (Required): Set to true to enable Agentless mode. Default: false
DD_API_KEY (Required): The Datadog API key used to upload the test results. Default: (empty)
Additionally, configure the Datadog site to which you want to send data:
DD_SITE (Required): The Datadog site to which results are sent. Default: datadoghq.com
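As a minimal sketch, an Agentless setup in a cloud CI job might export these variables before the test command shown later in this guide (the API key value below is a placeholder; store the real key in your CI provider's secrets mechanism):
export DD_CIVISIBILITY_AGENTLESS_ENABLED=true
export DD_API_KEY=<YOUR_DATADOG_API_KEY>
export DD_SITE=datadoghq.com
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace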
If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the Agent installation instructions. This is the recommended option as it allows you to automatically link test results to logs and underlying host metrics.
If you are using a Kubernetes executor, Datadog recommends using the Datadog Operator. The operator includes the Datadog Admission Controller, which can automatically inject the tracer library into the build pods. Note: If you use the Datadog Operator, there is no need to download and inject the tracer library because the Admission Controller does this for you, so you can skip the corresponding step below. However, you still need to make sure that your pods set the environment variables or command-line parameters necessary to enable Test Visibility, as sketched below.
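As a minimal sketch of that last point, assuming the Admission Controller has already injected the tracer into the build pod, the pod's test command still needs the service, environment, and instrumentation settings described later in this guide:
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace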
If you are not using Kubernetes or can't use the Datadog Admission Controller, and the CI provider is using a container-based executor, set the DD_TRACE_AGENT_URL environment variable (which defaults to http://localhost:8126) in the build container running the tracer to an endpoint that is accessible from within that container. Note: Inside the build container, localhost references the container itself, not the underlying worker node or any container the Agent might be running in.
DD_TRACE_AGENT_URL includes the protocol and port (for example, http://localhost:8126), takes precedence over DD_AGENT_HOST and DD_TRACE_AGENT_PORT, and is the recommended parameter for configuring the Datadog Agent's URL for CI Visibility.
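For example, if the Agent runs on the underlying worker node and the build container can reach it through a host gateway, you could point the tracer at it before running the tests. This is only a sketch: the host.docker.internal hostname is an assumption about your container runtime, so substitute any address that resolves to the Agent from inside the build container.
export DD_TRACE_AGENT_URL=http://host.docker.internal:8126
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace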
If you still have issues connecting to the Datadog Agent, use the Agentless Mode. Note: When using this method, tests are not correlated with logs and infrastructure metrics.
Install the Python tracer by running:
pip install -U ddtrace
For more information, see the Python tracer installation documentation.
To enable instrumentation of pytest tests, add the --ddtrace option when running pytest, specifying the name of the service or library under test in the DD_SERVICE environment variable, and the environment where tests are being run (for example, local when running tests on a developer workstation, or ci when running them on a CI provider) in the DD_ENV environment variable:
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace
If you also want to enable the rest of the APM integrations to get more information in your flamegraph, add the --ddtrace-patch-all option:
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace --ddtrace-patch-all
To add custom tags to your tests, declare ddspan as an argument in your test:
from ddtrace import tracer
# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your tags
    ddspan.set_tag("test_owner", "my_team")
    # test continues normally
    # ...
To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Python custom instrumentation documentation.
Just like tags, to add custom measures to your tests, use the current active span:
from ddtrace import tracer
# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your measures
    ddspan.set_metric("memory_allocations", 16)
    # test continues normally
    # ...
Read more about custom measures in the Add Custom Measures Guide.
To instrument your benchmark tests with pytest-benchmark, run your benchmark tests with the --ddtrace option when running pytest, and Datadog detects metrics from pytest-benchmark automatically:
def square_value(value):
    return value * value

def test_square_value(benchmark):
    result = benchmark(square_value, 5)
    assert result == 25
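For example, assuming the benchmark above is saved in a file named test_square.py (a hypothetical name), run it the same way as any other instrumented pytest run and the pytest-benchmark measurements are reported along with the test results:
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace test_square.py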
To enable instrumentation of unittest tests, run your tests with ddtrace-run prepended to your unittest command. Make sure to specify the name of the service or library under test in the DD_SERVICE environment variable. Additionally, you may declare the environment where tests are being run in the DD_ENV environment variable:
DD_SERVICE=my-python-app DD_ENV=ci ddtrace-run python -m unittest
Alternatively, if you wish to enable unittest instrumentation manually, use patch() to enable the integration:
from ddtrace import patch
import unittest

patch(unittest=True)

class MyTest(unittest.TestCase):
    def test_will_pass(self):
        assert True
The following is a list of the most important configuration settings that can be used with the tracer, either in code or using environment variables:
DD_SERVICE: Name of the service or library under test. Default: pytest. Example: my-python-app
DD_ENV: Environment where tests are being run. Default: none. Examples: local, ci
For more information about the service and env reserved tags, see Unified Service Tagging.
The following environment variable can be used to configure the location of the Datadog Agent:
DD_TRACE_AGENT_URL: URL of the Datadog Agent, in the form http://hostname:port. Default: http://localhost:8126
All other Datadog Tracer configuration options can also be used.
Datadog uses Git information for visualizing your test results and grouping them by repository, branch, and commit. Git metadata is automatically collected by the test instrumentation from CI provider environment variables and the local .git folder in the project path, if available.
If you are running tests in unsupported CI providers or with no .git folder, you can set the Git information manually using environment variables. These environment variables take precedence over any auto-detected information. Set the following environment variables to provide Git information:
DD_GIT_REPOSITORY_URL: URL of the repository. Examples: git@github.com:MyCompany/MyApp.git, https://github.com/MyCompany/MyApp.git
DD_GIT_BRANCH: Git branch under test. Example: develop
DD_GIT_TAG: Git tag under test, if any. Example: 1.0.1
DD_GIT_COMMIT_SHA: Full commit SHA. Example: a18ebf361cc831f5535e58ec4fae04ffd98d8152
DD_GIT_COMMIT_MESSAGE: Commit message. Example: Set release number
DD_GIT_COMMIT_AUTHOR_NAME: Commit author name. Example: John Smith
DD_GIT_COMMIT_AUTHOR_EMAIL: Commit author email. Example: john@example.com
DD_GIT_COMMIT_AUTHOR_DATE: Commit author date in ISO 8601 format. Example: 2021-03-12T16:00:28Z
DD_GIT_COMMIT_COMMITTER_NAME: Commit committer name. Example: Jane Smith
DD_GIT_COMMIT_COMMITTER_EMAIL: Commit committer email. Example: jane@example.com
DD_GIT_COMMIT_COMMITTER_DATE: Commit committer date in ISO 8601 format. Example: 2021-03-12T16:00:28Z
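As a minimal sketch, a CI job on an unsupported provider might export a few of these variables before the test command (the values are illustrative, taken from the examples above; replace them with your repository's actual Git metadata):
export DD_GIT_REPOSITORY_URL=https://github.com/MyCompany/MyApp.git
export DD_GIT_BRANCH=develop
export DD_GIT_COMMIT_SHA=a18ebf361cc831f5535e58ec4fae04ffd98d8152
DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace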
Plugins for pytest that alter test execution may cause unexpected behavior.
Plugins that introduce parallelization to pytest (such as pytest-xdist or pytest-forked) create one session event for each parallelized instance. Multiple module or suite events may be created if tests from the same package or module execute in different processes. The overall count of test events (and their correctness) remains unaffected, but individual session, module, or suite events may have results that are inconsistent with other events in the same pytest run.
Plugins that change the ordering of test execution (such as pytest-randomly) can create multiple module or suite events. The duration and results of module or suite events may also be inconsistent with the results reported by pytest. The overall count of test events (and their correctness) remains unaffected.
In some cases, running your unittest tests in parallel may break the instrumentation and affect test visibility. Datadog recommends running tests in a single process to avoid affecting test visibility.