
Observability Pipelines is not available on the US1-FED Datadog site.

If you upgrade Observability Pipelines Worker version 1.8 or earlier to version 2.0 or later, your existing pipelines will break. Do not upgrade your Observability Pipelines Worker if you want to keep using version 1.8 or earlier. If you want to use Observability Pipelines Worker 2.0 or later, you must migrate your version 1.8 or earlier pipelines to 2.x.

Datadog recommends that you update to version 2.0 or later. Upgrading to a major version and keeping it up to date is the only way to get the latest functionality, fixes, and security updates.

Overview

The Observability Pipelines Worker can collect, process, and route logs from any source to any destination. Using Datadog, you can build and manage all of your Observability Pipelines Worker deployments at scale.

There are several ways to get started with the Observability Pipelines Worker.

  • Quickstart: Install the Worker with a simple pipeline that emits demo data to get started quickly.
  • Datadog setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from your Datadog Agents to Datadog.
  • Datadog archiving setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from your Datadog Agents to Datadog and S3.
  • Splunk setup guide: Install the Worker with an out-of-the-box pipeline for receiving and routing data from Splunk HEC to both Splunk and Datadog.

This document walks you through the quickstart installation steps and then provides resources for next steps. Use and operation of this software is governed by the End User License Agreement.

Deployment Modes

Remote configuration for Observability Pipelines is in private beta. Contact Datadog support or your account manager to request access.

If you are enrolled in the remote configuration private beta, you can deploy changes to your Workers remotely from the Datadog UI, rather than updating your pipeline configuration in a text editor and manually deploying the changes. Choose your deployment method when you create a pipeline and install your Workers.

See Updating deployment modes to learn how to change the deployment mode after a pipeline has been deployed.

Prerequisites

To install the Observability Pipelines Worker, you need the following:

  • A Datadog API key.
  • An Observability Pipelines pipeline ID.

To generate a new API key and pipeline:

  1. Navigate to Observability Pipelines.
  2. Click New Pipeline.
  3. Enter a name for your pipeline.
  4. Click Next.
  5. Select the template you want and follow the instructions.

Quickstart

Follow the instructions below to install the Worker and deploy a sample pipeline configuration that uses demo data.

Install the Observability Pipelines Worker

The Observability Pipelines Worker Docker image is published on Docker Hub.

  1. Download the sample pipeline configuration file. This configuration emits demo data, parses and structures the data, and then sends it to the console and Datadog. See Configurations for more information about the source, transform, and sink used in the sample configuration. A sketch of this configuration's shape follows these steps.

  2. Run the following command to start the Observability Pipelines Worker with Docker:

    docker run -i -e DD_API_KEY=<API_KEY> \
      -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
      -e DD_SITE=<SITE> \
      -p 8282:8282 \
      -v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
      datadog/observability-pipelines-worker run
    

    Replace <API_KEY> with your Datadog API key, <PIPELINE_ID> with your Observability Pipelines configuration ID, and <SITE> with your Datadog site. Note: ./pipeline.yaml must be the relative or absolute path to the configuration you downloaded in step 1.
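
The downloaded file is the source of truth. As a rough illustration of its shape only, a pipeline like the one described above might look as follows in the Worker's Vector-style YAML; every component name and option here is illustrative and may differ from the actual sample file:

    # Sketch only -- use the sample file downloaded in step 1 as the source of truth.
    sources:
      demo:
        type: demo_logs            # emits a stream of demo log events
        format: apache_common
        interval: 1.0
    transforms:
      parse:
        type: remap                # parses and structures each event
        inputs:
          - demo
        source: |
          . = parse_apache_log!(string!(.message), format: "common")
    sinks:
      console_output:
        type: console              # prints the structured events to stdout
        inputs:
          - parse
        encoding:
          codec: json
      datadog_logs_output:
        type: datadog_logs         # forwards the structured events to Datadog
        inputs:
          - parse
        default_api_key: ${DD_API_KEY}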

  1. Download the Helm chart values file for AWS EKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  2. In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and set the site value to your Datadog site. Then, install it in your cluster with the following commands (a sketch of these values follows this step):

    helm repo add datadog https://helm.datadoghq.com
    
    helm repo update
    
    helm upgrade --install \
        opw datadog/observability-pipelines-worker \
        -f aws_eks.yaml
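
    For reference, the edited section of the values file might look like the following sketch; the exact key names come from the file you downloaded, and the site shown is only an example:

    # Sketch of the values to edit -- keep the rest of the downloaded values file as-is.
    datadog:
      apiKey: "<API_KEY>"           # your Datadog API key
      pipelineId: "<PIPELINE_ID>"   # your Observability Pipelines configuration ID
      site: "datadoghq.com"         # replace with your Datadog site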
    
  1. Download the Helm chart values file for Azure AKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  2. In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and set the site value to your Datadog site. Then, install it in your cluster with the following commands:

    helm repo add datadog https://helm.datadoghq.com
    
    helm repo update
    
    helm upgrade --install \
      opw datadog/observability-pipelines-worker \
      -f azure_aks.yaml
    
  1. Download the Helm chart values file for Google GKE. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  2. In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and set the site value to your Datadog site. Then, install it in your cluster with the following commands:

    helm repo add datadog https://helm.datadoghq.com
    
    helm repo update
    
    helm upgrade --install \
      opw datadog/observability-pipelines-worker \
      -f google_gke.yaml
    

Install the Worker with the one-line install script or manually.

One-line installation script

  1. Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with your Datadog site.

    DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker1.sh)"
    
  2. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  3. Start the Worker:

    sudo systemctl restart observability-pipelines-worker
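
    To confirm the Worker came up cleanly, you can inspect the systemd unit; these are standard systemd commands rather than anything specific to the Worker:

    sudo systemctl status observability-pipelines-worker
    sudo journalctl -u observability-pipelines-worker -f    # follow the Worker's logs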
    

Manual installation

  1. Run the following commands to set up APT to download through HTTPS:

    sudo apt-get update
    sudo apt-get install apt-transport-https curl gnupg
    
  2. Run the following commands to set up the Datadog deb repo on your system and create a Datadog archive keyring:

    sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-1' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
    sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
    sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    
  3. Run the following commands to update your local apt repo and install the Worker:

    sudo apt-get update
    sudo apt-get install observability-pipelines-worker datadog-signing-keys
    
  4. Add your API key, pipeline ID, and Datadog site to the Worker’s environment variables:

    sudo tee /etc/default/observability-pipelines-worker <<EOF
    DD_API_KEY=<API_KEY>
    DD_OP_PIPELINE_ID=<PIPELINE_ID>
    DD_SITE=<SITE>
    EOF
    
  5. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host.

  6. Start the Worker:

    sudo systemctl restart observability-pipelines-worker
    

Install the Worker with the one-line install script or manually.

One-line installation script

  1. Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with your Datadog site.

    DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker1.sh)"
    
  2. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  3. Run the following command to start the Worker:

    sudo systemctl restart observability-pipelines-worker
    

Manual installation

  1. Run the following commands to set up the Datadog rpm repo on your system:

    sudo tee /etc/yum.repos.d/datadog-observability-pipelines-worker.repo <<EOF
    [observability-pipelines-worker]
    name = Observability Pipelines Worker
    baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-1/\$basearch/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_4F09D16B.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_FD4BF915.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_E09422B3.public
    EOF
    

    Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration above.

  2. Update your packages and install the Worker:

    sudo yum makecache
    sudo yum install observability-pipelines-worker
    
  3. Add your API key, pipeline ID, and Datadog site to the Worker’s environment variables:

    sudo tee /etc/default/observability-pipelines-worker <<EOF
    DD_API_KEY=<API_KEY>
    DD_OP_PIPELINE_ID=<PIPELINE_ID>
    DD_SITE=<SITE>
    EOF
    
  4. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.

  5. Run the following command to start the Worker:

    sudo systemctl restart observability-pipelines-worker
    
  1. Download the sample configuration.
  2. Set up the Worker module in your existing Terraform using the sample configuration. Make sure to update the values in vpc-id, subnet-ids, and region to match your AWS deployment in the configuration. Also, update the values in datadog-api-key and pipeline-id to match your pipeline, as sketched below.
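
As a rough sketch, wiring the module into existing Terraform might look like the following; the module source path and all values are hypothetical placeholders, and only the input names are taken from the step above:

    module "observability_pipelines_worker" {
      # Hypothetical local path to the downloaded sample configuration
      source = "./observability-pipelines-worker"

      # Match these to your AWS deployment
      vpc-id     = "vpc-0123456789abcdef0"
      subnet-ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
      region     = "us-east-1"

      # Match these to your pipeline
      datadog-api-key = "<API_KEY>"
      pipeline-id     = "<PIPELINE_ID>"
    }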

See Configurations for more information about the source, transform, and sink used in the sample configuration.

See Working with Data for more information on transforming your data.

Updating deployment modes

After you deploy a pipeline, you can also switch deployment methods, for example from manual management to remote configuration, or the other way around.

To switch from a remote configuration deployment to a manually managed deployment:

  1. Navigate to Observability Pipelines and select the pipeline.
  2. Click the settings (gear) icon.
  3. In Deployment Mode, select manual to enable manual management.
  4. Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to false and restart the Worker. If you do not restart a Worker with this flag set, remote configuration remains enabled for that Worker, so it is not updated manually through a local configuration file.

To switch from a manually managed deployment to a remote configuration deployment:

  1. Navigate to Observability Pipelines and select the pipeline.
  2. Click the settings (gear) icon.
  3. In Deployment Mode, select Remote Configuration to enable remote configuration.
  4. Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to true and restart the Worker. If you do not restart a Worker with this flag set, it does not pull configurations deployed in the UI.
  5. Deploy a version from your version history so that the Workers receive that version's configuration. Click a version, click Edit as Draft, and then click Deploy.
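
On a host-based install, toggling the flag can be as simple as appending it to the environment file used earlier in this guide and restarting the service; a minimal sketch, assuming that file location:

    # Sketch: enable remote configuration for a host-installed Worker.
    echo "DD_OP_REMOTE_CONFIGURATION_ENABLED=true" | sudo tee -a /etc/default/observability-pipelines-worker
    sudo systemctl restart observability-pipelines-worker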

Next steps

The quickstart walked you through installing the Worker and deploying a sample pipeline configuration. For instructions on how to install the Worker to receive and route data from your Datadog Agents to Datadog, or to receive and route data from your Splunk HEC to Splunk and Datadog, select your specific use case:

Datadog
Splunk

For recommendations on deploying and scaling multiple Workers:

Further reading
