Integration Backup and Restore Tool

Supported OS: Linux, Windows, macOS

Integration version: 1.0.1

Overview

Take your Datadog experience to the next level with the Integration Backup and Restore Tool (IBRT). This powerful tool helps you streamline your workflow by allowing you to easily create backups of your Datadog configuration, ensuring that you can quickly restore your setup after Agent upgrades or migrations.

Features

  • Creates a complete backup of your Datadog configuration, including:
    • Integrations
    • Dependencies
    • Configuration files (for example, the Agent's datadog.yaml and each integration's conf.yaml)
  • Supports multiple backup locations, allowing you to store your backups in the locations that best suit your needs
  • Flexible backup scheduling:
    • Run on-demand backups as needed
    • Schedule periodic backups to run automatically, based on your specific requirements
  • Provides options during restoration:
    1. Agent migration or reinstallation: Installs all integrations and copies YAML files, including integration conf.yaml and datadog.yaml, for a seamless migration experience.
    2. Agent upgrade: Installs integrations according to the backed-up YAML configurations and preserves dependencies during the upgrade process.

Supported backup locations

  • Local machine
  • Remote machine
  • Cloud services:
    • AWS S3 buckets
    • Azure Blob storage
    • Google Cloud Storage

Ease of use

Unlike traditional backup methods that require manual installation and configuration, IBRT provides a simple and convenient way to create backups. You can easily create a backup of your Datadog configuration by running an on-demand command at the Agent level, or by scheduling periodic backups to run automatically. Additionally, restoring your backup is just as easy, requiring only a single script to get your configuration up and running again.
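
For example, assuming an on-demand backup is triggered by running the check once through the Agent's check subcommand (the same command used in the Validation section later on this page), a Linux invocation might look like this:

    # Run the backup check once, on demand, as the dd-agent user (Linux).
    # Assumes the integration is already configured in its conf.yaml.
    sudo -Hu dd-agent datadog-agent check crest_data_systems_integration_backup_and_restore_tool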

Configure the check by editing the integration's conf.yaml file. Every instance is scheduled independently of the others.

instances:

    ## @param backup_storage_platform - list of strings - required
    ## The platform where you want to create a backup zip file.
    ## Possible values: local, vm, aws, gcp, azure
    ## To store on the local machine => local
    ## To store on another VM => vm
    ## AWS S3 bucket => aws
    ## Azure Storage accounts => azure
    ## Google Cloud Storage => gcp
    ##
    ## If you want to store backups in multiple places, like local, aws, and vm
    ## at the same time, then you can provide values in this format:
    ##
    ## - backup_storage_platform:
    ##   - local
    ##   - vm
    ##   - aws
    ##
    ## Note: If you select "vm" and your target machine runs Windows, make sure
    ## the OpenSSH server is enabled and running. If you don't want to enable the
    ## OpenSSH server, store the backup on a cloud service, like an AWS S3 bucket,
    ## Azure Blob Storage, or Google Cloud Storage, and manually download the
    ## backup to your target machine at the time of restoration.
    ## On Linux and macOS, the OpenSSH server is enabled by default.
    #
  - backup_storage_platform:
      - local

    ## @param backup_zip_path_local - string - optional
    ## The path where the backup zip file is created on your local machine.
    ## Required if backup_storage_platform is 'local'.
    ## Note: Users must have write permissions to this directory, and should not
    ## manually alter any files in this folder.
    #
    backup_zip_path_local: <backup_zip_path_local>

    ## @param backup_zip_path_vm - string - optional
    ## The path where the backup zip file is created on the remote machine.
    ## Required if backup_storage_platform is 'vm'.
    ## Note: Users must have write permissions to this directory, and should not
    ## manually alter any files in this folder.
    #
    backup_zip_path_vm: <backup_zip_path_vm>

    ## @param vm_hostname - string - optional
    ## The hostname of the VM on which the zip is saved.
    #
    vm_hostname: <ip_address OR hostname>

    ## @param vm_username - string - optional
    ## The username of the VM on which the zip is saved.
    #
    vm_username:

    ## @param vm_password - string - optional
    ## The password of the VM on which the zip is saved.
    #
    vm_password:

    ## @param aws_s3_bucket - string - optional
    ## If backup_storage_platform is aws, the name of the S3 bucket that stores
    ## the backup files.
    #
    aws_s3_bucket: <aws_s3_bucket>

    ## @param aws_access_key - string - optional
    ## If backup_storage_platform is aws, the access key used to authenticate
    ## with the AWS API to upload to S3.
    #
    aws_access_key: <aws_access_key>

    ## @param aws_secret_key - string - optional
    ## If backup_storage_platform is aws, the secret key used to authenticate
    ## with the AWS API to upload to S3.
    #
    aws_secret_key: <aws_secret_key>

    ## @param aws_bucket_region - string - optional
    ## If backup_storage_platform is aws, the region used to identify the S3 bucket.
    #
    aws_bucket_region: <aws_bucket_region>

    ## @param azure_connection_string - string - optional
    ## If backup_storage_platform is azure, the storage account's connection
    ## string for authentication.
    #
    azure_connection_string: <azure_connection_string>

    ## @param azure_container_name - string - optional
    ## If backup_storage_platform is azure, the name of the container that holds
    ## the blob storage.
    #
    azure_container_name: <azure_container_name>

    ## @param gcp_service_account_key_path - string - optional
    ## If backup_storage_platform is gcp, the path of the JSON file that contains
    ## the GCP service account key.
    #
    gcp_service_account_key_path: <gcp_service_account_key_path>

    ## @param gcs_bucket_name - string - optional
    ## If backup_storage_platform is gcp, the Google Cloud Storage bucket name.
    #
    gcs_bucket_name: <gcs_bucket_name>

    ## @param proxy_type - string - optional
    ## Type of the proxy server. The only allowed proxy type is http.
    ## Required if 'proxy_host' is provided, and vice versa.
    #
    proxy_type: http

    ## @param proxy_host - string - optional
    ## Host of the proxy server.
    ## Required if 'proxy_type' is provided, and vice versa.
    #
    proxy_host: 10.0.0.1

    ## @param proxy_port - integer - optional - default: 3128
    ## Port of the proxy server.
    #
    proxy_port: 3128

    ## @param proxy_username - string - optional
    ## The username for the proxy server.
    ## Required if 'proxy_password' is provided, and vice versa.
    #
    proxy_username: <PROXY_USERNAME>

    ## @param proxy_password - string - optional
    ## The password for the proxy server.
    ## Required if 'proxy_username' is provided, and vice versa.
    #
    proxy_password: <PROXY_PASSWORD>

    ## @param tags - list of strings - optional
    ## A list of tags to attach to every metric and service check emitted by this
    ## instance.
    ## Learn more about tagging at https://docs.datadoghq.com/tagging
    #
    tags:
      - <KEY_1>:<VALUE_1>
      - <KEY_2>:<VALUE_2>

    ## @param service - string - optional
    ## Attach the tag service:<SERVICE> to every metric, event, and service check
    ## emitted by this integration.
    ## Overrides any service defined in the init_config section.
    #
    service:

    ## @param min_collection_interval - number - required - default: 1296000
    ## Backups should only need to be generated every 15 days at most.
    ## The default value for min_collection_interval is 15 days
    ## (15 x 86,400 = 1,296,000 seconds).
    ## The minimum allowed value is 3600 seconds (1 hour).
    #
    min_collection_interval: 1296000

    ## @param empty_default_hostname - boolean - optional - default: false
    ## Forces the check to send metrics with no hostname. This is useful for
    ## cluster-level checks.
    #
    empty_default_hostname: false

    ## @param metric_patterns - mapping - optional
    ## A mapping of metrics to include or exclude, with each entry being a
    ## regular expression.
    ## Metrics defined in exclude take precedence in case of overlap.
    #
    metric_patterns:
      include:
        - <INCLUDE_REGEX>
      exclude:
        - <EXCLUDE_REGEX>
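
As a minimal sketch, an instance that writes each backup both to a local directory and to an AWS S3 bucket every 15 days might look like the following; the local path, bucket name, and region are hypothetical placeholders:

    instances:
      - backup_storage_platform:
          - local
          - aws
        backup_zip_path_local: /opt/datadog-agent/ibrt_backups  # hypothetical local path
        aws_s3_bucket: my-ibrt-backups                           # hypothetical bucket name
        aws_access_key: <AWS_ACCESS_KEY>
        aws_secret_key: <AWS_SECRET_KEY>
        aws_bucket_region: us-east-1                             # example region
        min_collection_interval: 1296000                         # 15 days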

After updating the configuration, restart the Agent.
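
How you restart the Agent depends on your platform. On a systemd-based Linux host, for example:

    sudo systemctl restart datadog-agent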

Validation

Run the Agent’s status subcommand and look for crest_data_systems_integration_backup_and_restore_tool under the Checks section.

Alternatively, use one of the following commands to obtain detailed information about the integration:

  • Linux

    sudo -Hu dd-agent datadog-agent check crest_data_systems_integration_backup_and_restore_tool --log-level debug
    
  • Windows

    "%programfiles%\Datadog\Datadog Agent\bin\agent.exe" check crest_data_systems_integration_backup_and_restore_tool --log-level debug
    
  • macOS

    sudo datadog-agent check crest_data_systems_integration_backup_and_restore_tool --log-level debug
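
To scan the full status output for this check on Linux, one option is:

    sudo datadog-agent status | grep -A 3 crest_data_systems_integration_backup_and_restore_tool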
    

Restore the backup

Follow these steps after the backup is created (a shell sketch follows the note below):

  1. Download or copy the backup to the machine that runs the target Agent.
  2. Unzip the backup zip file.
  3. Restore the Datadog configuration by running the script cds_ibrt_restore_script.py, which is provided in the backup.

Note: Do not manually alter the content of the unzipped backup directory.
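
As a sketch of steps 1 and 2 on Linux, assuming a hypothetical backup file name, host, and paths (substitute the actual zip produced by your backup):

    # Copy the backup zip to the target machine (host, path, and file name are placeholders).
    scp user@backup-host:/backups/cds_ibrt_backup.zip /opt/datadog-agent/

    # Unzip it; the extracted directory contains cds_ibrt_restore_script.py.
    cd /opt/datadog-agent && unzip cds_ibrt_backup.zip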

Follow the steps below to restore the configuration, based on your Agent type:

Linux

  • To run the script, the dd-agent user needs ownership of the unzipped directory in which cds_ibrt_restore_script.py is located.

  • Grant ownership by running the following command, replacing <directory> with the directory that contains the cds_ibrt_restore_script.py script:

    sudo chown -R dd-agent:dd-agent <directory>

  • Then run the following command to execute the script:

    sudo -Hu dd-agent /opt/datadog-agent/embedded/bin/python cds_ibrt_restore_script.py
    

Note for RHEL and CentOS users: If you are running this script on RHEL or CentOS, ensure that all parent directories are accessible by the dd-agent user. If the script is stored in your home directory (e.g., /home/devuser/), make sure that directory has permission 755:

sudo chmod 755 /home/devuser

If you prefer not to change permissions, choose a storage location where the dd-agent user already has write access, such as /opt/datadog-agent.
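
To verify that every directory along the path is traversable by the dd-agent user, you can inspect the permissions of each path component, for example with namei from util-linux (the path below is a hypothetical example):

    # Long listing of owner and mode for every component of the path;
    # each parent directory needs at least execute (x) permission for "other".
    namei -l /home/devuser/cds_ibrt_backup/cds_ibrt_restore_script.py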

Windows

Open the command prompt as administrator and run the following command:

"%programfiles%\Datadog\Datadog Agent\embedded3\python.exe" cds_ibrt_restore_script.py

macOS

sudo /opt/datadog-agent/embedded/bin/python cds_ibrt_restore_script.py

Docker

To restore a backup inside your Docker container, run the following command in the container:

/opt/datadog-agent/embedded/bin/python cds_ibrt_restore_script.py
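
If the backup was created outside the container, one way to get it in and run the script is with docker cp and docker exec (the container name and paths here are hypothetical placeholders):

    # Copy the unzipped backup directory into the Agent container.
    docker cp ./cds_ibrt_backup datadog-agent:/tmp/cds_ibrt_backup

    # Run the restore script interactively inside the container (it prompts for a scenario).
    docker exec -it datadog-agent sh -c "cd /tmp/cds_ibrt_backup && /opt/datadog-agent/embedded/bin/python cds_ibrt_restore_script.py"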

After running the script, choose the option that best fits your scenario:

  • Select 1 for Agent migration/reinstallation: Installs all integrations and copies YAML files, including integration conf.yaml and datadog.yaml, for a seamless migration experience.
  • Select 2 for Agent upgrade: Installs integrations according to the backed-up YAML configurations and preserves dependencies during the upgrade process.

The script displays a prompt like the following; once you select an option, your Datadog environment is configured accordingly:

    What is your Datadog agent installation scenario?
    1. Fresh agent installation/reinstallation or agent migration
    2. Just upgraded existing datadog-agent
    Enter your choice (1/2):

Troubleshooting

Here’s how to fix some common issues you might encounter.

  • If you encounter issues storing backup files on a remote VM through a proxy, ensure that the SSH port is allowed on your proxy server. For example, if you are using a Squid proxy server, allow the SSH port in the squid.conf file and then restart the proxy server (see the restart sketch after this list):
    acl SSL_ports port 22
    http_access allow SSL_ports
    
  • If you encounter any permission issues, ensure that you have followed all the steps mentioned in the Notes to avoid permission issues section.
  • Here are some things to keep in mind while restoring your backup:
    • Core integrations are skipped during the restore process with the following log message, because these integrations ship with the Datadog Agent:
      • INFO - <integration_name> is core integration, hence skipping...
    • Custom-created integrations, which are not on the Datadog Marketplace, are skipped during the restore process with the following log message:
      • INFO - <integration_name> is custom integration, hence skipping...
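
For the Squid example above, after editing squid.conf, restarting the proxy might look like this (assuming a systemd-managed Squid service; adjust for your init system):

    sudo systemctl restart squid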

Support

For support or feature requests, contact Crest Data through the following channels:


This application is made available through the Datadog Marketplace and is supported by a Datadog Technology Partner. To purchase this application, visit the Datadog Marketplace.
