
Overview

For existing pipelines in Observability Pipelines, you can update and deploy changes for source settings, destination settings, and processors in the Observability Pipelines UI. However, if you want to update source or destination environment variables, you need to manually update the Worker with the new values.

Update an existing pipeline

  1. Navigate to Observability Pipelines.
  2. Select the pipeline you want to update.
  3. Click Edit Pipeline in the top right corner.
  4. Make changes to the pipeline.
    • If you are updating the source or destination settings shown in the tiles, or updating and adding processors, make the changes and then click Deploy Changes.
    • To update source or destination environment variables, click Go to Worker Installation Steps and see Update source or destination variables for instructions.

Update source or destination variables

On the Worker installation page:

  1. Select your platform in the Choose your installation platform dropdown menu.
  2. If you want to update source environment variables, update the information for your log source.
    • Datadog Agent address:
      • The Observability Pipelines Worker listens to this socket address to receive logs from the Datadog Agent.
      • Stored in the environment variable DD_OP_SOURCE_DATADOG_AGENT_ADDRESS.
    • Fluent socket address and port:
      • The Observability Pipelines Worker listens on this address for incoming log messages.
      • Stored in the environment variable DD_OP_SOURCE_FLUENT_ADDRESS.

    There are no environment variables for the Google Pub/Sub source.

    • HTTP/s endpoint URL:
      • The Observability Pipelines Worker collects log events from this endpoint. For example, https://127.0.0.8/logs.
      • Stored as the environment variable: DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL.
    • If you are using basic authentication:
      • HTTP/S endpoint authentication username and password.
      • Stored as the environment variables: DD_OP_SOURCE_HTTP_CLIENT_USERNAME and DD_OP_SOURCE_HTTP_CLIENT_PASSWORD.
    • If you are using bearer authentication:
      • HTTP/S endpoint bearer token.
      • Stored as the environment variable: DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN.
    • HTTP/S server address:
      • The Observability Pipelines Worker listens to this socket address, such as 0.0.0.0:9997, for your HTTP client logs.
      • Stored in the environment variable: DD_OP_SOURCE_HTTP_SERVER_ADDRESS.
    • Logstash address and port:
      • The Observability Pipelines Worker listens on this address, such as 0.0.0.0:9997, for incoming log messages.
      • Stored in the environment variable DD_OP_SOURCE_LOGSTASH_ADDRESS.
    • Splunk HEC address:
      • The bind address that the Observability Pipelines Worker listens on to receive logs originally intended for the Splunk indexer. For example, 0.0.0.0:8088.
        Note: The /services/collector/event path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_SOURCE_SPLUNK_HEC_ADDRESS.
    • Splunk TCP address:
      • The Observability Pipelines Worker listens to this socket address to receive logs from the Splunk Forwarder. For example, 0.0.0.0:9997.
      • Stored in the environment variable DD_OP_SOURCE_SPLUNK_TCP_ADDRESS.
    • Sumo Logic address:
      • The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source. For example, 0.0.0.0:80.
        Note: The /receiver/v1/http/ path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_SOURCE_SUMO_LOGIC_ADDRESS.
    • rsyslog or syslog-ng address:
      • The Observability Pipelines Worker listens on this bind address to receive logs from the Syslog forwarder. For example, 0.0.0.0:9997.
      • Stored in the environment variable DD_OP_SOURCE_SYSLOG_ADDRESS.
  3. If you want to update destination environment variables, update the information for your log destination.

    Amazon S3

    • AWS access key ID of your S3 archive:

      • Stored in the environment variable: DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID
    • AWS secret access key of your S3 archive:

      • The AWS secret access key for the S3 archive bucket.
      • Stored in the environment variable DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_KEY.

    Google Cloud Storage

    There are no environment variables to configure.

    Azure Storage

    • Azure connection string to give the Worker access to your Azure Storage bucket.
      • Stored in the environment variable: DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING.

    There are no environment variables to configure.

    • Splunk HEC token:
      • The Splunk HEC token for the Splunk indexer.
      • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_TOKEN.
    • Base URL of the Splunk instance:
      • The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, https://hec.splunkcloud.com:8088.
        Note: The /services/collector/event path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL.
    • Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
      • The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:
        • <ENDPOINT> is your Sumo collection endpoint.
        • <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
      • Stored in the environment variable DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL.
    • The rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997.
      • The Observability Pipelines Worker sends logs to this address and port.
      • Stored as the environment variable: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.
    • Google Chronicle endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL.
    • Elasticsearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
    • Elasticsearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
    • Elasticsearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
    • OpenSearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_USERNAME.
    • OpenSearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_PASSWORD.
    • OpenSearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.
    • Amazon OpenSearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME.
    • Amazon OpenSearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD.
    • Amazon OpenSearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL.
    • New Relic account ID:
      • Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_ACCOUNT_ID.
    • New Relic license:
      • Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_LICENSE_KEY.
  4. Follow the instructions for your environment to update the Worker:
    1. Click Select API key to choose the Datadog API key you want to use.
    2. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier. A hypothetical filled-in example is shown after these steps.
      docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
          -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
          -e DD_SITE=<DATADOG_SITE> \
          -e <SOURCE_ENV_VARIABLE> \
          -e <DESTINATION_ENV_VARIABLE> \
          -p 8088:8088 \
          datadog/observability-pipelines-worker run
      
      Note: By default, the docker run command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option:
      -p 8282:8088 datadog/observability-pipelines-worker run
      
    3. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    4. Click Deploy Changes.
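    As a reference only, the following is a hypothetical filled-in version of the docker run command above for a pipeline with a Splunk TCP source and a Splunk HEC destination. The listen address and port mapping are illustrative; your command uses the environment variables that match your own source and destination.
      docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
          -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
          -e DD_SITE=<DATADOG_SITE> \
          -e DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9997 \
          -e DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN> \
          -p 9997:9997 \
          datadog/observability-pipelines-worker run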
    1. Download the Helm chart values file.
    2. Click Select API key to choose the Datadog API key you want to use.
    3. Update the Datadog Helm chart to the latest version:
      helm repo update
      
    4. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
      helm upgrade --install opw \
      -f values.yaml \
      --set datadog.apiKey=<DATADOG_API_KEY> \
      --set datadog.pipelineId=<PIPELINE_ID> \
      --set <SOURCE_ENV_VARIABLES> \
      --set <DESTINATION_ENV_VARIABLES> \
      --set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
      datadog/observability-pipelines-worker
      
      Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
      --set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
      
    5. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    6. Click Deploy Changes.
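    Optionally, you can confirm that the release upgraded cleanly. This is a minimal sketch, assuming the opw release name from the command above and the default namespace; it is not part of the official steps.
      # Show the status of the upgraded Helm release
      helm status opw
      # Confirm the Worker pods restarted and are Running
      kubectl get pods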
    1. Click Select API key to choose the Datadog API key you want to use.

    2. Run the one-step command provided in the UI to re-install the Worker.

      Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

    If you prefer not to use the one-line installation script, follow these step-by-step instructions:

    1. Run the following commands to update your local apt repo and install the latest Worker version:
      sudo apt-get update
      sudo apt-get install observability-pipelines-worker datadog-signing-keys
      
    2. Add your keys, site (for example datadoghq.com for US1), source, and destination environment variables to the Worker’s environment file:
      sudo cat <<EOF > /etc/default/observability-pipelines-worker
      DD_API_KEY=<DATADOG_API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<DATADOG_SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      
    3. Restart the worker:
      sudo systemctl restart observability-pipelines-worker
      
    4. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    5. Click Deploy Changes.
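    Optionally, after completing the steps above, you can verify that the Worker picked up the updated variables. This is a quick sanity check and not part of the official steps.
      # Confirm the environment file contains the updated variables
      cat /etc/default/observability-pipelines-worker
      # Confirm the Worker restarted cleanly with the new environment
      sudo systemctl status observability-pipelines-worker --no-pager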
    1. Click Select API key to choose the Datadog API key you want to use.

    2. Run the one-step command provided in the UI to re-install the Worker.

      Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.

    If you prefer not to use the one-line installation script, follow these step-by-step instructions:

    1. Update your packages and install the latest version of Worker:
      sudo yum makecache
      sudo yum install observability-pipelines-worker
      
    2. Add your keys, site (for example datadoghq.com for US1), and updated source and destination environment variables to the Worker’s environment file:
      sudo cat <<-EOF > /etc/default/observability-pipelines-worker
      DD_API_KEY=<API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      
    3. Restart the worker:
      sudo systemctl restart observability-pipelines-worker
      
    4. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    5. Click Deploy Changes.
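    If you only need to change one or two variables in /etc/default/observability-pipelines-worker (for example, after the install script has already written the file), the following is a hypothetical way to edit a single value in place and restart the Worker; the variable name and address shown are illustrative.
      # Replace the value of an existing source variable in place (example values)
      sudo sed -i 's|^DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=.*|DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9998|' /etc/default/observability-pipelines-worker
      # Restart the Worker so it reads the updated environment
      sudo systemctl restart observability-pipelines-worker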
    1. Select the expected log volume for the pipeline from the dropdown menu.
    2. Select the AWS region you want to use to install the Worker.
    3. Click Select API key to choose the Datadog API key you want to use.
    4. Click Launch CloudFormation Template to go to the AWS console, review the stack configuration, and then launch it. Make sure the CloudFormation parameters are set as expected.
    5. Select the VPC and subnet you want to use to install the Worker.
    6. Review and check the boxes for the necessary IAM permissions. Click Submit to create the stack. CloudFormation handles the installation from here; the Worker instances are launched, the necessary software is downloaded, and the Worker starts automatically.
    7. Delete the previous CloudFormation stack and its associated resources (an optional CLI sketch for this step follows these instructions).
    8. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    9. Click Deploy Changes.
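    If you prefer to delete the previous stack (step 7) with the AWS CLI instead of the console, the following is a minimal sketch; the stack name and region are placeholders for your own values.
      # Delete the previous Observability Pipelines Worker stack
      aws cloudformation delete-stack --stack-name <PREVIOUS_STACK_NAME> --region <AWS_REGION>
      # Optionally wait for the deletion to finish before moving on
      aws cloudformation wait stack-delete-complete --stack-name <PREVIOUS_STACK_NAME> --region <AWS_REGION>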