Compatibility
Supported languages:
Language | Version
Python 2 | >= 2.7
Python 3 | >= 3.6
Supported test frameworks:
Test Framework | Version
pytest | >= 3.0.0
pytest-benchmark | >= 3.1.0
unittest | >= 3.7
Configuring reporting method
To report test results to Datadog, you need to configure the Datadog Python library:
We support auto-instrumentation for the following CI providers:
If you are using auto-instrumentation for one of these providers, you can skip the rest of the setup steps below.
If you are using a cloud CI provider without access to the underlying worker nodes, such as GitHub Actions or CircleCI, configure the library to use Agentless mode. To do this, set the following environment variables:
DD_CIVISIBILITY_AGENTLESS_ENABLED=true (required)
Enables or disables Agentless mode. Default: false
DD_API_KEY (required)
The Datadog API key used to upload the test results. Default: (empty)
Additionally, configure which Datadog site to send data to.
DD_SITE (required)
The Datadog site to upload results to. Default: datadoghq.com
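For example, you can export these variables in your CI job's shell before running your tests (the API key value is a placeholder; set the site to the one you actually use):

export DD_CIVISIBILITY_AGENTLESS_ENABLED=true
export DD_API_KEY=<your_datadog_api_key>
export DD_SITE=datadoghq.com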
If you are running tests on an on-premises CI provider, such as Jenkins or self-managed GitLab CI, install the Datadog Agent on each worker node by following the Agent installation instructions. This is the recommended option, as it allows you to automatically link test results to logs and underlying host metrics.
If you are using a Kubernetes executor, Datadog recommends using the Datadog Operator. The Operator includes the Datadog Admission Controller, which can automatically inject the tracer library into the build pods. Note: If you use the Datadog Operator, there is no need to download and inject the tracer library because the Admission Controller does it for you, so you can skip the corresponding step below. However, you still need to make sure your pods set the environment variables or command-line parameters required to enable Test Visibility.
If you are not using Kubernetes, or cannot use the Datadog Admission Controller, and the CI provider uses a container-based executor, set the DD_TRACE_AGENT_URL environment variable (default: http://localhost:8126) in the build container running the tracer to an endpoint that is reachable from within that container. Note: Inside the container, localhost refers to the container itself, not the underlying worker node or any container the Agent may be running in.
DD_TRACE_AGENT_URL includes the protocol and port (for example, http://localhost:8126) and takes precedence over DD_AGENT_HOST and DD_TRACE_AGENT_PORT. It is the recommended setting for configuring the Datadog Agent URL for CI Visibility.
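For example, if the Agent is reachable from the build container under a hostname such as datadog-agent (a hypothetical name; use whatever endpoint applies to your setup), you could set:

export DD_TRACE_AGENT_URL=http://datadog-agent:8126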
If you still cannot reach the Datadog Agent, use Agentless mode. Note: With this method, tests are not correlated with logs and infrastructure metrics.
To enable instrumentation of pytest tests, add the --ddtrace option when running pytest, specifying the name of the service or library under test in the DD_SERVICE environment variable, and the environment where tests are being run (for example, local when running tests on a developer workstation, or ci when running them on a CI provider) in the DD_ENV environment variable:
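For example (the service and environment names are placeholders):

DD_SERVICE=my-python-app DD_ENV=ci pytest --ddtrace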
To add custom tags to your tests, declare ddspan as an argument in your test:
from ddtrace import tracer

# Declare `ddspan` as argument to your test
def test_simple_case(ddspan):
    # Set your tags
    ddspan.set_tag("test_owner", "my_team")
    # test continues normally
    # ...
To create filters or group by fields for these tags, you must first create facets. For more information about adding tags, see the Adding Tags section of the Python custom instrumentation documentation.
Adding custom measures to tests
Just like tags, to add custom measures to your tests, declare ddspan as an argument in your test:
from ddtrace import tracer

# Declare `ddspan` as an argument to your test
def test_simple_case(ddspan):
    # Set your measures
    ddspan.set_metric("memory_allocations", 16)
    # test continues normally
    # ...
To instrument your benchmark tests with pytest-benchmark, run your benchmark tests with the --ddtrace option when running pytest, and Datadog detects metrics from pytest-benchmark automatically:
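For example, a minimal pytest-benchmark test might look like the following sketch (the test body is illustrative only); run it with pytest --ddtrace as described above:

def test_sort_performance(benchmark):
    # The benchmark fixture runs the callable repeatedly and records timing statistics,
    # which Datadog picks up automatically when --ddtrace is enabled.
    benchmark(sorted, [5, 3, 1, 4, 2])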
To enable instrumentation of unittest tests, run your tests by prepending ddtrace-run to your unittest command.
Make sure to specify the name of the service or library under test in the DD_SERVICE environment variable.
Additionally, you may declare the environment where tests are being run in the DD_ENV environment variable:
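For example (the service and environment names are placeholders):

DD_SERVICE=my-python-app DD_ENV=ci ddtrace-run python -m unittest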
Note: The Test Optimization manual testing API is in beta and subject to change.
As of version 2.13.0, the Datadog Python tracer provides the Test Optimization API (ddtrace.ext.test_visibility) to submit test optimization results as needed.
API execution
The API uses classes to provide namespaced methods to submit test optimization events.
Test execution has two phases:
Discovery: inform the API what items to expect
Execution: submit results (using start and finish calls)
The distinct discovery and execution phases allow for a gap between the test runner process collecting the tests and the tests starting.
API users must provide consistent identifiers (described below) that are used as references for Test Optimization items within the API’s state storage.
Enable test_visibility
You must call the ddtrace.ext.test_visibility.api.enable_test_visibility() function before using the Test Optimization API.
Call the ddtrace.ext.test_visibility.api.disable_test_visibility() function before process shutdown to ensure proper flushing of data.
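A minimal skeleton might look like this sketch (what happens between the two calls is up to your test runner):

from ddtrace.ext.test_visibility import api

# Enable the Test Optimization service before any other API call
api.enable_test_visibility()

# ... discover, start, and finish sessions, modules, suites, and tests here ...

# Flush remaining data before the process exits
api.disable_test_visibility()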
Domain model
The API is based around four concepts: test session, test module, test suite, and test.
Modules, suites, and tests form a hierarchy in the Python Test Optimization API, represented by the item identifier’s parent relationship.
Test session
A test session represents a project's test execution, typically corresponding to the execution of a test command. Only one session can be discovered, started, and finished in the execution of a Test Optimization program.
Call ddtrace.ext.test_visibility.api.TestSession.discover() to discover the session, passing the test command, a given framework name, and version.
Call ddtrace.ext.test_visibility.api.TestSession.start() to start the session.
When tests have completed, call ddtrace.ext.test_visibility.api.TestSession.finish().
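For example (a sketch; the test command, framework name, and version are placeholders, and enable_test_visibility() is assumed to have been called beforehand):

from ddtrace.ext.test_visibility import api

# Discover the session with the test command, framework name, and framework version
api.TestSession.discover("run_my_tests", "my_framework", "1.0.0")
api.TestSession.start()
# ... discover and run modules, suites, and tests ...
api.TestSession.finish()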
Test module
A test module represents a smaller unit of work within a project's test run (a directory, for example).
Call ddtrace.ext.test_visibility.api.TestModuleId(), providing the module name as a parameter, to create a TestModuleId.
Call ddtrace.ext.test_visibility.api.TestModule.discover(), passing the TestModuleId object as an argument, to discover the module.
Call ddtrace.ext.test_visibility.api.TestModule.start(), passing the TestModuleId object as an argument, to start the module.
After all the child items within the module have completed, call ddtrace.ext.test_visibility.api.TestModule.finish(), passing the TestModuleId object as an argument.
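For example (a fragment continuing the session sketch above; the module name is a placeholder):

from ddtrace.ext.test_visibility import api

module_id = api.TestModuleId("module_1")
api.TestModule.discover(module_id)
api.TestModule.start(module_id)
# ... discover and run the module's suites and tests ...
api.TestModule.finish(module_id)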
Test suite
A test suite represents a subset of tests within a project's modules (a .py file, for example).
Call ddtrace.ext.test_visibility.api.TestSuiteId(), providing the parent module’s TestModuleId and the suite’s name as arguments, to create a TestSuiteId.
Call ddtrace.ext.test_visibility.api.TestSuite.discover(), passing the TestSuiteId object as an argument, to discover the suite.
Call ddtrace.ext.test_visibility.api.TestSuite.start(), passing the TestSuiteId object as an argument, to start the suite.
After all the child items within the suite have completed, call ddtrace.ext.test_visibility.api.TestSuite.finish(), passing the TestSuiteId object as an argument.
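For example (a fragment; module_id refers to the TestModuleId created in the module sketch above, and the suite name is a placeholder):

from ddtrace.ext.test_visibility import api

suite_id = api.TestSuiteId(module_id, "test_code.py")
api.TestSuite.discover(suite_id)
api.TestSuite.start(suite_id)
# ... discover and run the suite's tests ...
api.TestSuite.finish(suite_id)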
Test
A test represents a single test case that is executed as part of a test suite.
Call ddtrace.ext.test_visibility.api.TestId(), providing the parent suite’s TestSuiteId and the test’s name as arguments, to create a TestId. The TestId() method accepts a JSON-parseable string as the optional parameters argument. The parameters argument can be used to distinguish parametrized tests that have the same name, but different parameter values.
Call ddtrace.ext.test_visibility.api.Test.discover(), passing the TestId object as an argument, to discover the test. The Test.discover() classmethod accepts a string as the optional resource parameter, which defaults to the TestId’s name.
Call ddtrace.ext.test_visibility.api.Test.start(), passing the TestId object as an argument, to start the test.
Call ddtrace.ext.test_visibility.api.Test.mark_pass(), passing the TestId object as an argument, to mark that the test has passed successfully.
Call ddtrace.ext.test_visibility.api.Test.mark_fail(), passing the TestId object as an argument, to mark that the test has failed. mark_fail() accepts an optional TestExcInfo object as the exc_info parameter.
Call ddtrace.ext.test_visibility.api.Test.mark_skip(), passing the TestId object as an argument, to mark that the test was skipped. mark_skip() accepts an optional string as the skip_reason parameter.
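For example (a fragment; suite_id refers to the TestSuiteId created in the suite sketch above, and the test name is a placeholder):

from ddtrace.ext.test_visibility import api

test_id = api.TestId(suite_id, "test_addition")
api.Test.discover(test_id)
api.Test.start(test_id)
api.Test.mark_pass(test_id)  # or mark_fail(...) / mark_skip(...)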
Exception information
The TestExcInfo object passed to the ddtrace.ext.test_visibility.api.Test.mark_fail() classmethod holds information about exceptions encountered during a test's failure.
The ddtrace.ext.test_visibility.api.TestExcInfo() method takes three positional parameters:
exc_type: the type of the exception encountered
exc_value: the BaseException object for the exception
exc_traceback: the Traceback object for the exception
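For example, the tuple returned by sys.exc_info() can be unpacked directly into TestExcInfo (a fragment; test_id refers to a previously discovered and started test, and run_the_test() is a hypothetical test body):

import sys
from ddtrace.ext.test_visibility import api

try:
    run_the_test()
except Exception:
    api.Test.mark_fail(test_id, exc_info=api.TestExcInfo(*sys.exc_info()))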
Codeowner information
The ddtrace.ext.test_visibility.api.Test.discover() classmethod accepts an optional list of strings as the codeowners parameter.
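For example (a fragment; test_id refers to a TestId created earlier, and the team names are placeholders):

from ddtrace.ext.test_visibility import api

api.Test.discover(test_id, codeowners=["team_backend", "team_qa"])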
Test source file information
The ddtrace.ext.test_visibility.api.Test.discover() classmethod accepts an optional TestSourceFileInfo object as the source_file_info parameter. A TestSourceFileInfo object represents the path and optionally, the start and end lines for a given test.
The ddtrace.ext.test_visibility.api.TestSourceFileInfo() method accepts three positional parameters:
path: a pathlib.Path object (made relative to the repo root by the Test Optimization API)
start_line: an optional integer representing the start line of the test in the file
end_line: an optional integer representing the end line of the test in the file
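For example (a fragment; test_id refers to a TestId created earlier, and the path and line numbers are placeholders):

import pathlib
from ddtrace.ext.test_visibility import api

source_info = api.TestSourceFileInfo(pathlib.Path("tests/test_math.py"), 10, 24)
api.Test.discover(test_id, source_file_info=source_info)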
Setting parameters after test discovery
The ddtrace.ext.test_visibility.api.Test.set_parameters() classmethod accepts a TestId object and a JSON-parseable string as arguments, and sets the parameters for the test.
Note: this overwrites the parameters associated with the test, but does not modify the TestId object’s parameters field.
Setting parameters after a test has been discovered requires that the TestId object be unique even without the parameters field being set.
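For example (a fragment; test_id refers to a previously discovered TestId, and the JSON string is a placeholder):

from ddtrace.ext.test_visibility import api

api.Test.set_parameters(test_id, '{"browser": "firefox"}')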
Code example
from ddtrace.ext.test_visibility import api
import pathlib
import sys

if __name__ == "__main__":
    # Enable the Test Optimization service
    api.enable_test_visibility()

    # Discover items
    api.TestSession.discover("manual_test_api_example", "my_manual_framework", "1.0.0")

    test_module_1_id = api.TestModuleId("module_1")
    api.TestModule.discover(test_module_1_id)

    test_suite_1_id = api.TestSuiteId(test_module_1_id, "suite_1")
    api.TestSuite.discover(test_suite_1_id)

    test_1_id = api.TestId(test_suite_1_id, "test_1")
    api.Test.discover(test_1_id)

    # A parameterized test with codeowners and a source file
    test_2_codeowners = ["team_1", "team_2"]
    test_2_source_info = api.TestSourceFileInfo(pathlib.Path("/path/to_my/tests.py"), 16, 35)

    parametrized_test_2_a_id = api.TestId(test_suite_1_id, "test_2", parameters='{"parameter_1": "value_is_a"}')
    api.Test.discover(
        parametrized_test_2_a_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overriden resource name A",
    )
    parametrized_test_2_b_id = api.TestId(test_suite_1_id, "test_2", parameters='{"parameter_1": "value_is_b"}')
    api.Test.discover(
        parametrized_test_2_b_id,
        codeowners=test_2_codeowners,
        source_file_info=test_2_source_info,
        resource="overriden resource name B",
    )

    test_3_id = api.TestId(test_suite_1_id, "test_3")
    api.Test.discover(test_3_id)

    test_4_id = api.TestId(test_suite_1_id, "test_4")
    api.Test.discover(test_4_id)

    # Start and execute items
    api.TestSession.start()
    api.TestModule.start(test_module_1_id)
    api.TestSuite.start(test_suite_1_id)

    # test_1 passes successfully
    api.Test.start(test_1_id)
    api.Test.mark_pass(test_1_id)

    # test_2's first parametrized test succeeds, but the second fails without attaching exception info
    api.Test.start(parametrized_test_2_a_id)
    api.Test.mark_pass(parametrized_test_2_a_id)
    api.Test.start(parametrized_test_2_b_id)
    api.Test.mark_fail(parametrized_test_2_b_id)

    # test_3 is skipped
    api.Test.start(test_3_id)
    api.Test.mark_skip(test_3_id, skip_reason="example skipped test")

    # test_4 fails, and attaches exception info
    api.Test.start(test_4_id)
    try:
        raise ValueError("this test failed")
    except:
        api.Test.mark_fail(test_4_id, exc_info=api.TestExcInfo(*sys.exc_info()))

    # Finish suites and modules
    api.TestSuite.finish(test_suite_1_id)
    api.TestModule.finish(test_module_1_id)
    api.TestSession.finish()
Configuration settings
The following is a list of the most important configuration settings that can be used with the tracer, either in code or using environment variables:
DD_SERVICE
Name of the service or library under test.
Environment variable: DD_SERVICE
Default: pytest
Example: my-python-app
DD_ENV
Name of the environment where tests are being run.
Environment variable: DD_ENV
Default: none
Examples: local, ci
Datadog uses Git information to present your test results and group them by repository, branch, and commit. Git metadata is automatically collected by the test instrumentation from the CI provider's environment variables and from the local .git folder in the project path, if available.
If you are running tests in unsupported CI providers, or with no .git folder, you can provide Git information manually using environment variables. These variables take precedence over any automatically detected information. Set the following environment variables to provide Git information:
DD_GIT_REPOSITORY_URL
URL of the repository where the code is stored. Both HTTP and SSH URLs are supported. Example: git@github.com:MyCompany/MyApp.git, https://github.com/MyCompany/MyApp.git
DD_GIT_BRANCH
Git branch being tested. Leave empty if providing tag information instead. Example: develop
DD_GIT_TAG
Git tag being tested (if applicable). Leave empty if providing branch information instead. Example: 1.0.1
DD_GIT_COMMIT_SHA
Full commit hash. Example: a18ebf361cc831f5535e58ec4fae04ffd98d8152
DD_GIT_COMMIT_MESSAGE
Commit message. Example: Set release number
DD_GIT_COMMIT_AUTHOR_NAME
Commit author name. Example: John Smith
DD_GIT_COMMIT_AUTHOR_EMAIL
Commit author email. Example: john@example.com
DD_GIT_COMMIT_AUTHOR_DATE
Commit author date in ISO 8601 format. Example: 2021-03-12T16:00:28Z
DD_GIT_COMMIT_COMMITTER_NAME
Commit committer name. Example: Jane Smith
DD_GIT_COMMIT_COMMITTER_EMAIL
Commit committer email. Example: jane@example.com
DD_GIT_COMMIT_COMMITTER_DATE
Commit committer date in ISO 8601 format. Example: 2021-03-12T16:00:28Z
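For example, you can export these variables in your CI job's shell (the values are illustrative; replace them with your repository's information):

export DD_GIT_REPOSITORY_URL="https://github.com/MyCompany/MyApp.git"
export DD_GIT_BRANCH="develop"
export DD_GIT_COMMIT_SHA="a18ebf361cc831f5535e58ec4fae04ffd98d8152"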
Known limitations
Plugins for pytest that alter test execution may cause unexpected behavior.
Parallelization
Plugins that introduce parallelization to pytest (such as pytest-xdist or pytest-forked) create one session event for each parallelized instance. Multiple module or suite events may be created if tests from the same package or module execute in different processes.
The overall count of test events (and their correctness) remains unaffected. Individual session, module, or suite events may have results that are inconsistent with other events in the same pytest run.
Test ordering
Plugins that change the ordering of test execution (such as pytest-randomly) can create multiple module or suite events. The duration and results of module or suite events may also be inconsistent with the results reported by pytest.
The overall count of test events (and their correctness) remains unaffected.
In some cases, running your unittest tests in parallel may break the instrumentation and affect Test Optimization.
Datadog recommends running unittest with at most one process at a time to avoid affecting Test Optimization.
Further reading
Additional helpful documentation, links, and articles: