Update Existing Pipelines


Overview

For existing pipelines in Observability Pipelines, you can update and deploy changes to source settings, destination settings, and processors in the Observability Pipelines UI. However, if you want to update source and destination environment variables, you need to manually update the Worker with the new values.

Update an existing pipeline

  1. Navigate to Observability Pipelines.
  2. Select the pipeline you want to update.
  3. Click Edit Pipeline in the top right corner.
  4. Make changes to the pipeline.
    • If you are updating the source or destination settings shown in the tiles, or updating or adding processors, make the changes and then click Deploy Changes.
    • To update source or destination environment variables, click Go to Worker Installation Steps and see Update source or destination variables for instructions.

Update source or destination variables

On the Worker installation page:

  1. Select your platform in the Choose your installation platform dropdown menu.
  2. If you want to update source environment variables, update the information for your log source (a brief example follows this list).
    • Datadog Agent address:
      • The Observability Pipelines Worker listens to this socket address to receive logs from the Datadog Agent.
      • Stored in the environment variable DD_OP_SOURCE_DATADOG_AGENT_ADDRESS.
    • Fluent socket address and port:
      • The Observability Pipelines Worker listens on this address for incoming log messages.
      • Stored in the environment variable DD_OP_SOURCE_FLUENT_ADDRESS.
    • Google Pub/Sub:
      • There are no environment variables to configure for the Google Pub/Sub source.
    • HTTP/S endpoint URL:
      • The Observability Pipelines Worker collects log events from this endpoint. For example, https://127.0.0.8/logs.
      • Stored in the environment variable DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL.
    • If you are using basic authentication:
      • HTTP/S endpoint authentication username and password.
      • Stored in the environment variables DD_OP_SOURCE_HTTP_CLIENT_USERNAME and DD_OP_SOURCE_HTTP_CLIENT_PASSWORD.
    • If you are using bearer authentication:
      • HTTP/S endpoint bearer token.
      • Stored in the environment variable DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN.
    • HTTP/S server address:
      • The Observability Pipelines Worker listens to this socket address, such as 0.0.0.0:9997, for your HTTP client logs.
      • Stored in the environment variable DD_OP_SOURCE_HTTP_SERVER_ADDRESS.
    • Logstash address and port:
      • The Observability Pipelines Worker listens on this address, such as 0.0.0.0:9997, for incoming log messages.
      • Stored in the environment variable DD_OP_SOURCE_LOGSTASH_ADDRESS.
    • Splunk HEC address:
      • The bind address that the Observability Pipelines Worker listens on to receive logs originally intended for the Splunk indexer. For example, 0.0.0.0:8088.
        Note: The /services/collector/event path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_SOURCE_SPLUNK_HEC_ADDRESS.
    • Splunk TCP address:
      • The Observability Pipelines Worker listens to this socket address to receive logs from the Splunk Forwarder. For example, 0.0.0.0:9997.
      • Stored in the environment variable DD_OP_SOURCE_SPLUNK_TCP_ADDRESS.
    • Sumo Logic address:
      • The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source. For example, 0.0.0.0:80.
        Note: The /receiver/v1/http/ path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_SOURCE_SUMO_LOGIC_ADDRESS.
    • rsyslog or syslog-ng address:
      • The Observability Pipelines Worker listens on this bind address to receive logs from the Syslog forwarder. For example, 0.0.0.0:9997.
      • Stored in the environment variable DD_OP_SOURCE_SYSLOG_ADDRESS.
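
    How you pass these variables to the Worker depends on your installation platform (see step 4 below), but in each case it is the variable name paired with the new value. A minimal sketch, assuming the Datadog Agent source and a hypothetical listen address:

      # Hypothetical value; use the socket address your Datadog Agents actually send logs to.
      # As a line in /etc/default/observability-pipelines-worker (Linux installs):
      DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282

      # Or as a flag on the docker run command (Docker installs):
      #   -e DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282
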
  3. If you want to update destination environment variables, update the information for your log destination (a brief example follows this list).

    Amazon S3

    • AWS access key ID of your S3 archive:
      • The AWS access key ID for the S3 archive bucket.
      • Stored in the environment variable DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID.
    • AWS secret access key of your S3 archive:
      • The AWS secret access key for the S3 archive bucket.
      • Stored in the environment variable DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_KEY.

    Google Cloud Storage

    There are no environment variables to configure.

    Azure Storage

    • Azure connection string to give the Worker access to your Azure Storage bucket.
      • Stored in the environment variable: DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING.

    No environment variables are required.

    • Splunk HEC token:
      • The Splunk HEC token for the Splunk indexer.
      • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_TOKEN.
    • Base URL of the Splunk instance:
      • The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, https://hec.splunkcloud.com:8088.
        Note: The /services/collector/event path is automatically appended to the endpoint.
      • Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL.
    • Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
      • The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:
        • <ENDPOINT> is your Sumo collection endpoint.
        • <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
      • Stored in the environment variable DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL.
    • The rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997.
      • The Observability Pipelines Worker sends logs to this address and port.
      • Stored in the environment variable: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.
    • Google Chronicle endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL.
    • Elasticsearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
    • Elasticsearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
    • Elasticsearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
    • OpenSearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_USERNAME.
    • OpenSearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_PASSWORD.
    • OpenSearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.
    • Amazon OpenSearch authentication username:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME.
    • Amazon OpenSearch authentication password:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD.
    • Amazon OpenSearch endpoint URL:
      • Stored in the environment variable: DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL.
    • New Relic account ID:
      • Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_ACCOUNT_ID.
    • New Relic license key:
      • Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_LICENSE_KEY.
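
    Destination variables are passed to the Worker the same way as source variables in step 4. A minimal sketch, assuming a hypothetical Elasticsearch destination (every value below is a placeholder):

      # Hypothetical Elasticsearch destination values; replace with your own.
      # As lines in /etc/default/observability-pipelines-worker (Linux installs):
      DD_OP_DESTINATION_ELASTICSEARCH_USERNAME=<USERNAME>
      DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD=<PASSWORD>
      DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL=<ELASTICSEARCH_ENDPOINT_URL>
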
  4. Follow the instructions for your environment to update the Worker:
    1. Click Select API key to choose the Datadog API key you want to use.
    2. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier (a filled-in sketch follows these steps).
      docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
          -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
          -e DD_SITE=<DATADOG_SITE> \
          -e <SOURCE_ENV_VARIABLE> \
          -e <DESTINATION_ENV_VARIABLE> \
          -p 8088:8088 \
          datadog/observability-pipelines-worker run
      
      Note: By default, the docker run command exposes the same port that the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option:
      -p 8282:8088 datadog/observability-pipelines-worker run
      
    3. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    4. Click Deploy Changes.
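
    For illustration only, a fully substituted command might look like the following sketch, which assumes a Datadog Agent source and a Splunk HEC destination with placeholder values; the command shown in the UI for your pipeline is the authoritative version.

      # Sketch with placeholder values; copy the actual command from the UI.
      docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
          -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
          -e DD_SITE=datadoghq.com \
          -e DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282 \
          -e DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN> \
          -e DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=<SPLUNK_HEC_ENDPOINT_URL> \
          -p 8282:8282 \
          datadog/observability-pipelines-worker run
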
    1. Download the Helm chart values file.
    2. Click Select API key to choose the Datadog API key you want to use.
    3. Update the Datadog Helm chart to the latest version:
      helm repo update
      
    4. Run the command provided in the UI to install the Worker. The command is automatically populated with the environment variables you entered earlier.
      helm upgrade --install opw \
      -f values.yaml \
      --set datadog.apiKey=<DATADOG_API_KEY> \
      --set datadog.pipelineId=<PIPELINE_ID> \
      --set <SOURCE_ENV_VARIABLES> \
      --set <DESTINATION_ENV_VARIABLES> \
      --set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
      datadog/observability-pipelines-worker
      
      Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
      --set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
      
    5. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    6. Click Deploy Changes.
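
    After running the helm upgrade, you can optionally confirm that the release updated and that the Worker pods are running. The commands below assume the release name opw used in the command above.

      # Standard Helm and kubectl checks; "opw" is the release name from the upgrade command.
      helm status opw
      kubectl get pods   # look for Running observability-pipelines-worker pods
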
    1. Click Select API key to choose the Datadog API key you want to use.

    2. Run the one-step command provided in the UI to reinstall the Worker.

      Note: The environment variables that the Worker uses in /etc/default/observability-pipelines-worker are not updated when you run the installation script. If changes are needed, update the file manually and restart the Worker.

    If you do not want to use the one-line installation script, follow these step-by-step instructions:

    1. Run the following commands to update your local apt repo and install the latest Worker version:
      sudo apt-get update
      sudo apt-get install observability-pipelines-worker datadog-signing-keys
      
    2. Add your keys, your site (for example, datadoghq.com for US1), and your updated source and destination environment variables to the Worker's environment file (a filled-in sketch follows these steps):
      sudo cat <<EOF > /etc/default/observability-pipelines-worker
      DD_API_KEY=<DATADOG_API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<DATADOG_SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      
    3. Restart the Worker:
      sudo systemctl restart observability-pipelines-worker
      
    4. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    5. Click Deploy Changes.
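
    For reference, a filled-in environment file might look like the following sketch, which assumes a Datadog Agent source and a Splunk HEC destination; every value shown is a placeholder.

      # Sketch of /etc/default/observability-pipelines-worker with placeholder values.
      DD_API_KEY=<DATADOG_API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=datadoghq.com
      DD_OP_SOURCE_DATADOG_AGENT_ADDRESS=0.0.0.0:8282
      DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN>
      DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=<SPLUNK_HEC_ENDPOINT_URL>
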
    1. Click Select API key to choose the Datadog API key you want to use.

    2. Run the one-step command provided in the UI to reinstall the Worker.

      Note: The environment variables that the Worker uses in /etc/default/observability-pipelines-worker are not updated when you run the installation script. If changes are needed, update the file manually and restart the Worker.

    If you do not want to use the one-line installation script, follow these step-by-step instructions:

    1. Update your packages and install the latest version of the Worker:
      sudo yum makecache
      sudo yum install observability-pipelines-worker
      
    2. Add your keys, your site (for example, datadoghq.com for US1), and your updated source and destination environment variables to the Worker's environment file:
      sudo cat <<-EOF > /etc/default/observability-pipelines-worker
      DD_API_KEY=<API_KEY>
      DD_OP_PIPELINE_ID=<PIPELINE_ID>
      DD_SITE=<SITE>
      <SOURCE_ENV_VARIABLES>
      <DESTINATION_ENV_VARIABLES>
      EOF
      
    3. Restart the Worker:
      sudo systemctl restart observability-pipelines-worker
      
    4. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    5. Click Deploy Changes.
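
    If the Worker does not appear to pick up the new values, standard systemd commands can help confirm that it restarted cleanly with the updated environment file:

      # Check that the service restarted without errors
      sudo systemctl status observability-pipelines-worker
      # Follow the Worker logs to troubleshoot
      sudo journalctl -u observability-pipelines-worker -f
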
    1. Select your expected log volume from the dropdown menu.
    2. Select the AWS region you want to use to install the Worker.
    3. Click Select API key to choose the Datadog API key you want to use.
    4. Click Launch CloudFormation Template to go to the AWS console, review the stack configuration, and then launch it. Make sure the CloudFormation parameters are set correctly.
    5. Select the VPC and subnet you want to use to install the Worker.
    6. Review and check the boxes for the necessary IAM permissions. Click Submit to create the stack. CloudFormation handles the installation from this point; the Worker instances are launched, the necessary software is installed, and the Worker starts automatically.
    7. Delete the previous CloudFormation stack and its associated resources.
    8. Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
    9. Click Deploy Changes.