Create an Agent Integration


Overview

This page walks Technology Partners through how to create a Datadog Agent integration, which you can list as out-of-the-box on the Integrations page, or for a price on the Marketplace page.

Agent-based integrations

Agent-based integrations use the Datadog Agent to submit data through checks written by the developer. Checks can emit metrics, events, and service checks into a customer’s Datadog account. The Agent itself can submit logs as well, but that is configured outside of the check.

The implementation code for these integrations is hosted by Datadog. Agent integrations are best suited for collecting data from systems or applications that live in a local area network (LAN) or virtual private cloud (VPC). Creating an Agent integration requires you to publish and deploy your solution as a Python wheel (.whl).

You can include out-of-the-box assets such as monitors, dashboards, and log pipelines with your Agent-based integration. When a user clicks Install on your integration tile, they are prompted to follow the setup instructions, and all out-of-the-box dashboards will appear in their account. Other assets, such as log pipelines, will appear for users after proper installation and configuration of the integration.

Development process

The process to build an Agent-based integration looks like this:

  1. Once you’ve been accepted to the Datadog Partner Network, you will meet with the Datadog Technology Partner team to discuss your offering and use cases.
  2. Request a Datadog sandbox account for development through the Datadog Partner Network portal.
  3. Begin development of your integration, which includes writing the integration code on your end as well as building and installing a Python wheel (.whl).
  4. Test your integration in your Datadog sandbox account.
  5. Once your development work is tested and complete, populate your tile assets with the setup instructions, images, support information, and other content that makes up the integration tile displayed on the Integrations or Marketplace page.
  6. Once your pull request is submitted and approved, the Datadog Technology Partner team will schedule a demo for a final review of your integration.
  7. You will have the option of testing the tile and integration in your Datadog sandbox account before publishing, or immediately publishing the integration for all customers.

Prerequisites

The required Datadog Agent integration development tools include the following:

  • Python 3 and a working development environment.
  • Docker, which is used to run the full test suite.
  • The Datadog Agent Integration Developer Tool (ddev).

Follow the instructions below to build either an out-of-the-box Agent-based integration for the Integrations page, or an Agent-based integration for the Marketplace page.

To build an out-of-the-box integration:

Create a dd directory:

mkdir $HOME/dd && cd $HOME/dd

The Datadog Development Toolkit expects you to work in the $HOME/dd/ directory. This is not mandatory, but working in a different directory requires additional configuration steps.

  1. Fork the integrations-extras repository.

  2. Clone your fork into the dd directory:

    git clone git@github.com:<YOUR USERNAME>/integrations-extras.git
    
  3. Create a feature branch to work in:

    git switch -c <YOUR INTEGRATION NAME> origin/master
    

Configure the developer tool

The Agent Integration Developer Tool allows you to create scaffolding when you are developing an integration by generating a skeleton of your integration tile’s assets and metadata. For instructions on installing the tool, see Install the Datadog Agent Integration Developer Tool.

To configure the tool for the integrations-extras repository:

  1. Optionally, if your integrations-extras repo is somewhere other than $HOME/dd/, adjust the ddev configuration file:

    ddev config set extras "/path/to/integrations-extras"
    
  2. Set integrations-extras as the default working repository:

    ddev config set repo extras
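
To confirm the settings, you can print the current ddev configuration:

ddev config show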
    

To build a Marketplace integration:

  1. See Build a Marketplace Offering to request access to the Marketplace repository.

  2. Create a dd directory:

    mkdir $HOME/dd
    
    The Datadog Development Toolkit command expects you to be working in the `$HOME/dd/` directory. This is not mandatory, but working in a different directory requires additional configuration steps.
    
  3. Once you have been granted access to the Marketplace repository, clone it into the dd directory:

    git clone git@github.com:DataDog/marketplace.git
    
  4. Create a feature branch to work in:

    git switch -c <YOUR INTEGRATION NAME> origin/master
    

Install and configure the Datadog development toolkit

The Agent Integration Developer Tool allows you to create scaffolding when you are developing an integration by generating a skeleton of your integration tile’s assets and metadata. For instructions on installing the tool, see Install the Datadog Agent Integration Developer Tool.

Once you have installed the Agent Integration Developer Tool, configure it for the Marketplace repository.

  1. Set marketplace as the default working repository:

    
    ddev config set marketplace $HOME/dd/marketplace
    ddev config set repo marketplace
    
  2. If you used a directory other than $HOME/dd to clone the marketplace directory, use the following command to set your working repository:

    
    ddev config set marketplace <PATH/TO/MARKETPLACE>
    ddev config set repo marketplace
    

Create your integration

Once you’ve downloaded Docker, installed an appropriate version of Python, and prepared your development environment, you can start creating an Agent-based integration.

The following instructions use an example integration called Awesome. Follow along using the Awesome code, or substitute your own code and integration name in the commands. For example, use ddev create <your-integration-name> instead of ddev create Awesome.

Create scaffolding for your integration

The ddev create command runs an interactive tool that creates the basic file and path structure (or scaffolding) necessary for an Agent-based integration.

  1. Before you create your first integration directory, try a dry-run using the -n/--dry-run flag, which doesn’t write anything to the disk:

    ddev create -n Awesome
    

    This command displays the path where the files would have been written, as well as the structure itself. Make sure the path in the first line of output matches your repository location.

  2. Run the command without the -n flag. The tool asks you for an email and name and then creates the files you need to get started with an integration:

    ddev create Awesome
    
    If you are creating an integration for the Datadog Marketplace, ensure that your directory name follows the pattern {partner name}_{integration name}.
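
The generated layout looks roughly like the following sketch; the exact contents can vary with the ddev version:

    awesome/
    ├── CHANGELOG.md
    ├── README.md
    ├── assets/
    ├── datadog_checks/
    │   └── awesome/
    │       ├── __about__.py
    │       ├── __init__.py
    │       └── check.py
    ├── manifest.json
    ├── metadata.csv
    ├── pyproject.toml
    └── tests/
        ├── conftest.py
        └── test_awesome.py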
    

Write an Agent check

At the core of each Agent-based integration is an Agent Check that periodically collects information and sends it to Datadog.

Checks inherit their logic from the AgentCheck base class and have the following requirements:

  • Integrations running on the Datadog Agent v7 or later must be compatible with Python 3. Integrations running on the Datadog Agent v5 and v6 still use Python 2.7.
  • Checks must derive from AgentCheck.
  • Checks must provide a method with this signature: check(self, instance).
  • Checks are organized in regular Python packages under the datadog_checks namespace. For example, the code for Awesome lives in the awesome/datadog_checks/awesome/ directory.
  • The name of the package must be the same as the check name.
  • There are no restrictions on the name of the Python modules within that package, nor on the name of the class implementing the check.

Implement check logic

For Awesome, the Agent Check is composed of a service check named awesome.search that searches for a string on a web page. It results in OK if the string is present, WARNING if the page is accessible but the string was not found, and CRITICAL if the page is inaccessible.

To learn how to submit metrics with your Agent Check, see Custom Agent Check.

The code contained within awesome/datadog_checks/awesome/check.py looks something like this:

check.py

import requests

from datadog_checks.base import AgentCheck, ConfigurationError


class AwesomeCheck(AgentCheck):
    """AwesomeCheck derives from AgentCheck, and provides the required check method."""

    def check(self, instance):
        url = instance.get('url')
        search_string = instance.get('search_string')

        # It's a very good idea to do some basic sanity checking.
        # Try to be as specific as possible with the exceptions.
        if not url or not search_string:
            raise ConfigurationError('Configuration error, please fix awesome.yaml')

        try:
            response = requests.get(url)
            response.raise_for_status()
        # Something went horribly wrong
        except Exception as e:
            # Ideally we'd use a more specific message...
            self.service_check('awesome.search', self.CRITICAL, message=str(e))
        # Page is accessible
        else:
            # search_string is present
            if search_string in response.text:
                self.service_check('awesome.search', self.OK)
            # search_string was not found
            else:
                self.service_check('awesome.search', self.WARNING)
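
A check is not limited to service checks. If your integration also collects metrics, the same check method can submit them through the AgentCheck helper methods. For example, a hypothetical response-time gauge could be added to the success path above (the metric name and tag are illustrative):

    # inside the else branch, after the page was retrieved successfully
    self.gauge('awesome.response_time', response.elapsed.total_seconds(), tags=['url:{}'.format(url)])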

To learn more about the base Python class, see Anatomy of a Python Check.
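
The ConfigurationError above points users at the check's configuration file. A minimal awesome.yaml with a single instance might look like this (the values are illustrative; the shipped example file is generated from spec.yaml, described below):

awesome.yaml

init_config:

instances:
  - url: http://example.com
    search_string: "Example Domain"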

Write validation tests

There are two types of tests:

  • Unit tests for specific elements of your code, such as configuration handling.
  • Integration tests that execute the check method and verify that data is collected properly.

pytest and hatch are used to run the tests. Tests are required in order to publish your integration.

Write a unit test

The first part of the check method for Awesome retrieves and verifies two elements from the configuration file. This is a good candidate for a unit test.

Open the file at awesome/tests/test_awesome.py and replace the contents with the following:

test_awesome.py

import pytest

# Don't forget to import your integration
from datadog_checks.awesome import AwesomeCheck
from datadog_checks.base import ConfigurationError


@pytest.mark.unit
def test_config():
    instance = {}
    c = AwesomeCheck('awesome', {}, [instance])

    # empty instance
    with pytest.raises(ConfigurationError):
        c.check(instance)

    # only the url
    with pytest.raises(ConfigurationError):
        c.check({'url': 'http://foobar'})

    # only the search string
    with pytest.raises(ConfigurationError):
        c.check({'search_string': 'foo'})

    # this should not raise: the URL is unreachable, but the check catches the
    # exception and reports a CRITICAL service check instead of raising
    c.check({'url': 'http://foobar', 'search_string': 'foo'})

pytest has the concept of markers that can be used to group tests into categories. Notice that test_config is marked as a unit test.

The scaffolding is set up to run all the tests located in awesome/tests. To run the tests, run the following command:

ddev test awesome

Write an integration test

The unit test above doesn’t check the collection logic. To test the logic, you need to create an environment for an integration test and write an integration test.

Create an environment for the integration test

The toolkit uses Docker to spin up an NGINX container and lets the check retrieve the default welcome page.

To create an environment for the integration test, create a docker-compose file at awesome/tests/docker-compose.yml with the following contents:

docker-compose.yml

version: "3"

services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "8000:80"

Next, open the file at awesome/tests/conftest.py and replace the contents with the following:

conftest.py

import os

import pytest

from datadog_checks.dev import docker_run, get_docker_hostname, get_here

URL = 'http://{}:8000'.format(get_docker_hostname())
SEARCH_STRING = 'Thank you for using nginx.'
INSTANCE = {'url': URL, 'search_string': SEARCH_STRING}


@pytest.fixture(scope='session')
def dd_environment():
    compose_file = os.path.join(get_here(), 'docker-compose.yml')

    # This does 3 things:
    #
    # 1. Spins up the services defined in the compose file
    # 2. Waits for the url to be available before running the tests
    # 3. Tears down the services when the tests are finished
    with docker_run(compose_file, endpoints=[URL]):
        yield INSTANCE


@pytest.fixture
def instance():
    return INSTANCE.copy()

Add an integration test

After you’ve set up an environment for the integration test, add an integration test to the awesome/tests/test_awesome.py file:

test_awesome.py

@pytest.mark.integration
@pytest.mark.usefixtures('dd_environment')
def test_service_check(aggregator, instance):
    c = AwesomeCheck('awesome', {}, [instance])

    # the check should send OK
    c.check(instance)
    aggregator.assert_service_check('awesome.search', AwesomeCheck.OK)

    # the check should send WARNING
    instance['search_string'] = 'Apache'
    c.check(instance)
    aggregator.assert_service_check('awesome.search', AwesomeCheck.WARNING)
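
If you also want to cover the CRITICAL state, you can extend the same test with a URL that is assumed to be unreachable (the closed port below is an arbitrary choice):

    # the check should send CRITICAL when the page is unreachable
    instance['url'] = 'http://localhost:1'
    c.check(instance)
    aggregator.assert_service_check('awesome.search', AwesomeCheck.CRITICAL)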

To speed up development, use the -m/--marker option to run integration tests only:

ddev test -m integration awesome

Your integration is almost complete. Next, add the necessary check assets.

Populate integration assets

The following set of assets created by the ddev scaffolding must be populated with information relevant to your integration:

README.md
This contains the documentation for your Agent Check, how to set it up, which data it collects, and support information.
spec.yaml
This is used to generate the conf.yaml.example using the ddev tooling. For more information, see Configuration Specification.
conf.yaml.example
This contains default (or example) configuration options for your Agent Check. Do not edit this file by hand. It is generated from the contents of spec.yaml. For more information, see the Configuration file reference documentation.
manifest.json
This contains the metadata for your Agent Check such as the title and categories. For more information, see the Manifest file reference documentation.
metadata.csv
This contains the list of all metrics collected by your Agent Check. For more information, see the Metrics metadata file reference documentation.
service_checks.json
This contains the list of all Service Checks collected by your Agent Check. For more information, see the Service check file reference documentation.

For more information about the README.md and manifest.json files, see Create a Tile and Integrations Asset Reference.
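
For example, the Awesome integration emits a single service check, so its service_checks.json could look something like the following (the field values are illustrative):

service_checks.json

[
    {
        "agent_version": "6.0.0",
        "integration": "Awesome",
        "check": "awesome.search",
        "statuses": ["ok", "warning", "critical"],
        "groups": [],
        "name": "Awesome search",
        "description": "Returns CRITICAL if the page is inaccessible, WARNING if the search string is not found, and OK otherwise."
    }
]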

Build the wheel

The pyproject.toml file provides the metadata that is used to package and build the wheel. The wheel contains the files necessary for the functioning of the integration itself, which includes the Agent Check, configuration example file, and artifacts generated during the wheel build.

Additional elements, such as the metadata files, are not meant to be contained within the wheel; they are used elsewhere by the Datadog platform and ecosystem.

To learn more about Python packaging, see Packaging Python Projects.

Once your pyproject.toml is ready, create a wheel using one of the following options:

  • (Recommended) With the ddev tooling: ddev release build <INTEGRATION_NAME>.
  • Without the ddev tooling: cd <INTEGRATION_DIR> && pip wheel . --no-deps --wheel-dir dist.
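
Either option should produce a .whl file under the integration's dist/ directory. The exact filename depends on the package name and version, along the lines of the following (illustrative):

    dist/datadog_awesome-1.0.0-py3-none-any.whl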

Install the wheel

The wheel is installed using the Agent integration command, available in Agent v6.10.0 or later. Depending on your environment, you may need to execute this command as a specific user or with specific privileges:

Linux (as dd-agent):

sudo -u dd-agent datadog-agent integration install -w /path/to/wheel.whl

macOS (as admin):

sudo datadog-agent integration install -w /path/to/wheel.whl

Windows PowerShell (ensure that your shell session has administrator privileges):

Agent v6.11 or earlier:

& "C:\Program Files\Datadog\Datadog Agent\embedded\agent.exe" integration install -w /path/to/wheel.whl

Agent v6.12 or later:

& "C:\Program Files\Datadog\Datadog Agent\bin\agent.exe" integration install -w /path/to/wheel.whl

To install your wheel for testing in Kubernetes environments:

  1. Mount the .whl file into an initContainer.
  2. Run the wheel install in the initContainer.
  3. Mount the volume written by the initContainer into the Agent container so the installed integration is available while the Agent runs.

For customer install commands for both host and container environments, see the Community and Marketplace Integrations documentation.

Populate your tile and publish your integration

Once you have created your Agent-based integration, see the Create a tile documentation for information on populating the remaining required assets that appear on your integration tile, and opening a pull request.

Update your integration

To update your integration, edit the relevant files and open a new pull request to your integration’s directory in the integrations-extras or marketplace repository.

  • If you are editing or adding new integration code, a version bump is required.

  • If you are editing or adding new README content, manifest information, or assets such as dashboards and recommended monitors, a version bump is not needed.

After pull requests that only update assets (such as dashboards and recommended monitors) or non-code files (such as README.md and manifest.json) are merged, no further action is needed from the developer: the changes show up for customers automatically.

Bumping an integration version

In addition to any code changes, the following is required when bumping an integration version:

  1. Update __about__.py to reflect the new version number. This file can be found in your integration’s directory under /datadog_checks/<your_check_name>/__about__.py.
  2. Add an entry to the CHANGELOG.md file that adheres to the following format:
    ## Version Number / Date
    
    ***Added***: 
    
    * New feature
    * New feature
    
    ***Fixed***:
    
    * Bug fix
    * Bug fix
    
  3. Update all references to the version number mentioned in README.md and elsewhere. Installation instructions in README.md often include the version number, which needs to be updated.
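
For step 1, the version bump itself is a one-line change (the version number below is illustrative):

__about__.py

__version__ = '1.0.1'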

Further reading
