Run Multiple Pipelines on a Host

Overview

If you want to run multiple pipelines on a single host to send logs from different sources, you need to manually add the Worker files for any additional Workers. This document explains which files you need to add and modify to run those Workers.

Prerequisites

Set up the first pipeline and install the Worker on your host.

Create an additional pipeline

Set up another pipeline for the additional Worker that you want to run on the same host. When you reach the Install page, follow the steps below to run the Worker for this pipeline.

Run the Worker for the additional pipeline

After you installed the first Worker, you have the following by default:

  • A service binary: /usr/bin/observability-pipelines-worker
  • A service definition file that looks like:

    /lib/systemd/system/observability-pipelines-worker.service

        [Unit]
        Description="Observability Pipelines Worker"
        Documentation=https://docs.datadoghq.com/observability_pipelines/
        After=network-online.target
        Wants=network-online.target
    
        [Service]
        User=observability-pipelines-worker
        Group=observability-pipelines-worker
        ExecStart=/usr/bin/observability-pipelines-worker run
        Restart=always
        AmbientCapabilities=CAP_NET_BIND_SERVICE
        EnvironmentFile=-/etc/default/observability-pipelines-worker
    
        [Install]
        WantedBy=multi-user.target
        
  • An environment file that looks like:

    /etc/default/observability-pipelines-worker

        DD_API_KEY=<datadog_api_key>
        DD_SITE=<dd_site>
        DD_OP_PIPELINE_ID=<pipeline_id>
        
  • A data directory: /var/lib/observability-pipelines-worker
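
You can confirm these defaults before adding another Worker. A minimal check, assuming the default installation paths listed above:

    # Show the installed unit file for the first Worker
    systemctl cat observability-pipelines-worker

    # Confirm the environment file and data directory exist
    cat /etc/default/observability-pipelines-worker
    ls -ld /var/lib/observability-pipelines-worker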

Configure the additional Worker

For this example, another pipeline was created with the Fluent source. To configure a Worker for this pipeline:

  1. Run the following command to create a new data directory, replacing op-fluent with a directory name that fits your use case:

    sudo mkdir /var/lib/op-fluent
    
  2. Run the following command to change the owner of the data directory to observability-pipelines-worker:observability-pipelines-worker. Make sure to update op-fluent to your data directory’s name.

    sudo chown -R observability-pipelines-worker:observability-pipelines-worker /var/lib/op-fluent/
    
  3. Create an environment file for the new systemd service, such as /etc/default/op-fluent, where op-fluent is replaced with a filename that fits your use case. Example file content:

    /etc/default/op-fluent

        DD_API_KEY=<datadog_api_key>
        DD_OP_PIPELINE_ID=<pipeline_id>
        DD_SITE=<dd_site>
        <destination_environment_variables>
        DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091
        DD_OP_DATA_DIR=/var/lib/op-fluent
        
    In this example:

    • DD_OP_DATA_DIR is set to /var/lib/op-fluent. Replace /var/lib/op-fluent with the path to your data directory.
    • DD_OP_SOURCE_FLUENT_ADDRESS=0.0.0.0:9091 is the environment variable required for the Fluent source in this example. Replace it with the environment variable for your source.

    Also, make sure to replace <datadog_api_key> with your Datadog API key, <pipeline_id> with the ID of this additional pipeline, <dd_site> with your Datadog site, and <destination_environment_variables> with the environment variables for your destination.

  4. Create a new systemd service entry, such as /lib/systemd/system/op-fluent.service. Example content for the entry:

    /lib/systemd/system/op-fluent.service

        [Unit]
        Description="OPW for Fluent Pipeline"
        Documentation=https://docs.datadoghq.com/observability_pipelines/
        After=network-online.target
        Wants=network-online.target
    
        [Service]
        User=observability-pipelines-worker
        Group=observability-pipelines-worker
        ExecStart=/usr/bin/observability-pipelines-worker run
        Restart=always
        AmbientCapabilities=CAP_NET_BIND_SERVICE
        EnvironmentFile=-/etc/default/op-fluent
    
        [Install]
        WantedBy=multi-user.target
        
    In this example:

    • The service name is op-fluent because the pipeline is using the Fluent source. Replace op-fluent.service with a service name for your use case.
    • The Description is OPW for Fluent Pipeline. Replace OPW for Fluent Pipeline with a description for your use case.
    • EnvironmentFile is set to -/etc/default/op-fluent. Replace -/etc/default/op-fluent with the environment file you created for your Worker. The leading - tells systemd to skip the file if it does not exist instead of failing to start the service.
  5. Run this command to reload systemd:

    sudo systemctl daemon-reload
    
  6. Run this command to start the new service:

    sudo systemctl enable --now op-fluent
    
  7. Run this command to verify the service is running:

    sudo systemctl status op-fluent
    

Additionally, you can use the command sudo journalctl -u op-fluent.service to help you debug any issues.
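
To further verify the new Worker, here is a minimal sketch, assuming the example service name and Fluent source address used above:

    # Check the new unit file for syntax problems
    systemd-analyze verify /lib/systemd/system/op-fluent.service

    # Confirm the Worker is listening on the Fluent source address (0.0.0.0:9091 in this example)
    sudo ss -lntp | grep 9091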

Deploy the pipeline

  1. Navigate to the additional pipeline’s Install page.
  2. In the Deploy your pipeline section, you should see your additional Worker detected. Click Deploy.
