Observability Pipelines is not available on the US1-FED Datadog site.

Upgrading from OP Worker version 1.8 or earlier to version 2.0 or later breaks existing pipelines. If you are on OP Worker version 1.8 or earlier and want to keep using that version, do not upgrade. To use OP Worker 2.0 or later, you must migrate your OP Worker 1.8 or earlier pipelines to OP Worker 2.x.

Datadog recommends upgrading to OP Worker version 2.0 or later. Upgrading to a major OP Worker version and keeping it updated is the only way to get the latest OP Worker functionality, fixes, and security updates.

Overview

The Observability Pipelines Worker can collect, process, and route logs from any source to any destination. Using Datadog, you can build and manage all of your Observability Pipelines Worker deployments at scale.

This guide walks you through deploying the Worker in your common tools cluster and configuring it to send logs in a Datadog-rehydratable format to cloud storage for archiving.

Deployment Modes

Remote Configuration for Observability Pipelines is in private beta. Contact Datadog support or your Customer Success Manager to request access.

If you are enrolled in the Remote Configuration private beta, you can deploy configuration changes to your Workers from the Datadog UI, rather than updating your pipeline configuration in a text editor and rolling out the changes manually. Choose your deployment method when you create a pipeline and install your Workers.

See Updating deployment modes to change your deployment method after a pipeline has been deployed.

Assumptions

  • You are already using Datadog and want to use Observability Pipelines.
  • You have administrative access to the clusters where the Observability Pipelines Worker is going to be deployed, as well as to the workloads that are going to be aggregated.
  • You have a common tools cluster or security cluster for your environment to which all other clusters are connected.

Prerequisites

Before installing, make sure you have:

  • A valid Datadog API key.
  • An Observability Pipelines configuration ID (pipeline ID).

You can generate both of these in Observability Pipelines.

Provider-specific requirements

Ensure that your machine is configured to run Docker.

To run the Worker on your Kubernetes nodes, you need a minimum of two nodes with one CPU and 512MB RAM available. Datadog recommends creating a separate node pool for the Workers, which is also the recommended configuration for production deployments.
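
If you manage node groups with eksctl, a dedicated node pool for the Workers might look like the following sketch. The cluster name, node group name, and instance type are illustrative placeholders; size the group for your throughput.

    # Hypothetical example: a dedicated node group for Observability Pipelines Workers.
    # Cluster name, node group name, and instance type are placeholders.
    eksctl create nodegroup \
      --cluster <MY_EKS_CLUSTER> \
      --name op-worker-pool \
      --node-type c5.large \
      --nodes 2 \
      --nodes-min 2 \
      --nodes-max 4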

  • The EBS CSI driver is required. To see if it is installed, run the following command and look for ebs-csi-controller in the list:

    kubectl get pods -n kube-system
    
  • A StorageClass is required for the Workers to provision the correct EBS drives. To see if it is installed already, run the following command and look for io2 in the list:

    kubectl get storageclass
    

    If io2 is not present, download the StorageClass YAML and kubectl apply it. (A minimal sketch of such a manifest follows this list.)

  • The AWS Load Balancer controller is required. To see if it is installed, run the following command and look for aws-load-balancer-controller in the list:

    helm list -A
    
  • Datadog recommends using Amazon EKS >= 1.16.
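
If you need to create the io2 StorageClass yourself, a minimal sketch of such a manifest, assuming the AWS EBS CSI driver (ebs.csi.aws.com) is installed, could look like the following. The StorageClass YAML provided by Datadog may differ; prefer that file when available.

    # Illustrative only: a minimal io2 StorageClass applied with kubectl.
    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: io2
    provisioner: ebs.csi.aws.com
    parameters:
      type: io2
      iopsPerGB: "50"   # assumption: tune to your throughput needs
    volumeBindingMode: WaitForFirstConsumer
    EOF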

See Best Practices for OPW Aggregator Architecture for production-level requirements.

There are no provider-specific requirements for APT-based Linux.

There are no provider-specific requirements for RPM-based Linux.

To run the Worker in your AWS account, you need administrative access to that account and the following information:

  • The VPC ID your instances will run in.
  • The subnet IDs your instances will run in.
  • The AWS region your VPC is located in.

Set up Log Archives

When you install the Observability Pipelines Worker later on, the sample configuration provided includes a sink for sending logs to Amazon S3 under a Datadog-rehydratable format. To use this configuration, create an S3 bucket for your archives and set up an IAM policy that allows the Workers to write to the S3 bucket. Then, connect the S3 bucket to Datadog Log Archives.

See AWS Pricing for inter-region data transfer fees and how cloud storage costs may be impacted.

Create an S3 bucket and set up an IAM policy

  1. Navigate to Amazon S3. Create an S3 bucket to send your archives to. Do not make your bucket publicly readable.

  2. Create a policy with the following permissions. Make sure to update the bucket name to the name of the S3 bucket you created earlier.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DatadogUploadAndRehydrateLogArchives",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
        },
        {
          "Sid": "DatadogRehydrateLogArchivesListBucket",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
        }
      ]
    }
    
  3. Create an IAM user and attach the policy above to it. Create access credentials for the IAM user. Save these credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. (An aws CLI sketch of these steps follows this list.)
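
If you prefer to script these steps, the following aws CLI sketch covers the same flow. The bucket name, user name, and region are placeholders, and the policy above is assumed to be saved locally as policy.json.

    # Illustrative aws CLI sketch; bucket name, user name, and region are placeholders.
    # Omit --create-bucket-configuration if the bucket is in us-east-1.
    aws s3api create-bucket --bucket <MY_BUCKET_NAME> --region <AWS_REGION> \
      --create-bucket-configuration LocationConstraint=<AWS_REGION>
    aws s3api put-public-access-block --bucket <MY_BUCKET_NAME> \
      --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

    # Create the IAM user, attach the policy (saved as policy.json), and create access keys.
    aws iam create-user --user-name op-archives-writer
    aws iam put-user-policy --user-name op-archives-writer \
      --policy-name DatadogUploadAndRehydrateLogArchives --policy-document file://policy.json
    aws iam create-access-key --user-name op-archives-writer
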
  1. Navigate to Amazon S3. Create an S3 bucket to send your archives to. Do not make your bucket publicly readable.

  2. Create a policy with the following permissions. Make sure to update the bucket name to the name of the S3 bucket you created earlier.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DatadogUploadAndRehydrateLogArchives",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
        },
        {
          "Sid": "DatadogRehydrateLogArchivesListBucket",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
        }
      ]
    }
    
  3. Create a service account to use the policy you created above.
  1. Navigate to Amazon S3. Create an S3 bucket to send your archives to. Do not make your bucket publicly readable.

  2. Create a policy with the following permissions. Make sure to update the bucket name to the name of the S3 bucket you created earlier.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DatadogUploadAndRehydrateLogArchives",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
        },
        {
          "Sid": "DatadogRehydrateLogArchivesListBucket",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
        }
      ]
    }
    
  3. Create an IAM user and attach the above policy to it. Create access credentials for the IAM user. Save these credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  1. Navigate to Amazon S3. Create an S3 bucket to send your archives to. Do not make your bucket publicly readable.

  2. Create a policy with the following permissions. Make sure to update the bucket name to the name of the S3 bucket you created earlier.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DatadogUploadAndRehydrateLogArchives",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
        },
        {
          "Sid": "DatadogRehydrateLogArchivesListBucket",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
        }
      ]
    }
    
  3. Create an IAM user and attach the policy above to it. Create access credentials for the IAM user. Save these credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
  1. Navigate to Amazon S3. Create an S3 bucket to send your archives to. Do not make your bucket publicly readable.

  2. Create a policy with the following permissions. Make sure to update the bucket name to the name of the S3 bucket you created earlier.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DatadogUploadAndRehydrateLogArchives",
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject"],
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
        },
        {
          "Sid": "DatadogRehydrateLogArchivesListBucket",
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
        }
      ]
    }
    
  3. Attach the policy to the IAM Instance Profile that is created with Terraform, which you can find under the iam-role-name output. (A hedged CLI sketch follows.)
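
As a sketch, assuming the policy JSON is saved locally as policy.json, you could attach it to the role behind that instance profile with the aws CLI, reading the role name from the Terraform output:

    # Illustrative only: attach the inline policy to the role reported by the iam-role-name output.
    aws iam put-role-policy \
      --role-name "$(terraform output -raw iam-role-name)" \
      --policy-name DatadogUploadAndRehydrateLogArchives \
      --policy-document file://policy.json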

Connect the S3 bucket to Datadog Log Archives

You need to connect the S3 bucket you created earlier to Datadog Log Archives so that you can rehydrate the archives later on.

  1. Navigate to Datadog Log Forwarding.
  2. Click + New Archive.
  3. Enter a descriptive archive name.
  4. Add a query that filters out all logs going through log pipelines so that those logs do not go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming that no logs going through the pipeline have that tag added.
  5. Select AWS S3.
  6. Select the AWS Account that your bucket is in.
  7. Enter the name of the S3 bucket.
  8. Optionally, enter a path.
  9. Check the confirmation statement.
  10. Optionally, add tags and define the maximum scan size for rehydration. See Advanced settings for more information.
  11. Click Save.

See the Log Archives documentation for additional information.

Install the Observability Pipelines Worker

The Observability Pipelines Worker Docker image is published to Docker Hub here.

  1. Download the sample pipeline configuration file.

  2. Run the following command to start the Observability Pipelines Worker with Docker:

    docker run -i -e DD_API_KEY=<API_KEY> \
      -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
      -e DD_SITE=<SITE> \
      -e AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> \
      -e AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> \
      -e DD_ARCHIVES_BUCKET=<AWS_BUCKET_NAME> \
      -e DD_ARCHIVES_SERVICE_ACCOUNT=<BUCKET_AWS_REGION> \
      -p 8282:8282 \
      -v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
      datadog/observability-pipelines-worker run
    

    Replace these placeholders with the following information:

    • <API_KEY> with your Datadog API key.
    • <PIPELINE_ID> with your Observability Pipelines configuration ID.
    • <SITE> with your Datadog site.
    • <AWS_ACCESS_KEY_ID> and <AWS_SECRET_ACCESS_KEY> with the AWS credentials you created earlier.
    • <AWS_BUCKET_NAME> with the name of the S3 bucket storing the logs.
    • <BUCKET_AWS_REGION> with the AWS region of the target service.
    • ./pipeline.yaml must be the relative or absolute path to the configuration you downloaded in step 1.
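
Once the container is running, a quick way to confirm it started cleanly is to check the container status and tail its logs. The filter below is only a convenience; substitute your own container ID or name.

    # Check that the Worker container is up and inspect its startup logs.
    docker ps --filter ancestor=datadog/observability-pipelines-worker
    docker logs -f <CONTAINER_ID>
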
  1. Download the Helm chart values file for AWS EKS.

  2. In the Helm chart, replace these placeholders with the following information:

    • datadog.apiKey with your Datadog API key.
    • datadog.pipelineId with your Observability Pipelines configuration ID.
    • site with your Datadog site.
    • ${DD_ARCHIVES_SERVICE_ACCOUNT} in serviceAccount.name with the service account name.
    • ${DD_ARCHIVES_BUCKET} in pipelineConfig.sinks.datadog_archives with the name of the S3 bucket storing the logs.
    • ${DD_ARCHIVES_SERVICE_ACCOUNT} in pipelineConfig.sinks.datadog_archives with the AWS region of the target service.
  3. Install it in your cluster with the following commands:

    helm repo add datadog https://helm.datadoghq.com
    
    helm repo update
    
    helm upgrade --install \
        opw datadog/observability-pipelines-worker \
        -f aws_eks.yaml
    
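To confirm the release deployed, you can check its status and the Service it created. The release name opw matches the command above; the Service name is also used later when connecting the Datadog Agent.

    # Check the Helm release and the Service created for the Workers.
    helm status opw
    kubectl get svc opw-observability-pipelines-worker
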
  1. Run the following commands to set up APT to download through HTTPS:

    sudo apt-get update
    sudo apt-get install apt-transport-https curl gnupg
    
  2. Run the following commands to set up the Datadog deb repo on your system and create a Datadog archive keyring:

    sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-1' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
    sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
    sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
    
  3. Run the following commands to update your local apt repo and install the Worker:

    sudo apt-get update
    sudo apt-get install observability-pipelines-worker datadog-signing-keys
    
  4. Add your keys and your Datadog site to the Worker’s environment variables. Replace <AWS_BUCKET_NAME> with the name of the S3 bucket storing the logs and <BUCKET_AWS_REGION> with the AWS region of the target service.

    sudo tee /etc/default/observability-pipelines-worker > /dev/null <<-EOF
    AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
    AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
    DD_ARCHIVES_BUCKET=<AWS_BUCKET_NAME>
    DD_ARCHIVES_SERVICE_ACCOUNT=<BUCKET_AWS_REGION>
    EOF
    
  5. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host.

  6. Start the worker:

    sudo systemctl restart observability-pipelines-worker
    
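After restarting, you can verify the service is healthy and watch its logs with standard systemd tooling:

    # Confirm the Worker service is active and follow its logs.
    sudo systemctl status observability-pipelines-worker
    sudo journalctl -u observability-pipelines-worker -f
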
  1. Run the following commands to set up the Datadog rpm repo on your system:

    cat <<EOF > /etc/yum.repos.d/datadog-observability-pipelines-worker.repo
    [observability-pipelines-worker]
    name = Observability Pipelines Worker
    baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-1/\$basearch/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_4F09D16B.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_FD4BF915.public
           https://keys.datadoghq.com/DATADOG_RPM_KEY_E09422B3.public
    EOF
    

    Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0 instead of repo_gpgcheck=1 in the configuration above.

  2. Update your packages and install the Worker:

    sudo yum makecache
    sudo yum install observability-pipelines-worker
    
  3. Add your keys and your Datadog site to the Worker’s environment variables. Replace <AWS_BUCKET_NAME> with the name of the S3 bucket storing the logs and <BUCKET_AWS_REGION> with the AWS region of the target service.

    sudo tee /etc/default/observability-pipelines-worker > /dev/null <<-EOF
    AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID>
    AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY>
    DD_ARCHIVES_BUCKET=<AWS_BUCKET_NAME>
    DD_ARCHIVES_SERVICE_ACCOUNT=<BUCKET_AWS_REGION>
    EOF
    
  4. Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml on the host.

  5. Start the worker:

    sudo systemctl restart observability-pipelines-worker
    
  1. Download the sample configuration.
  2. Set up the Worker module in your existing Terraform using the sample configuration. In the configuration, update the values of vpc-id, subnet-ids, and region to match your AWS deployment. Also, update the values of datadog-api-key and pipeline-id to match your pipeline. (A short Terraform workflow sketch follows.)
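
After the module is configured, a typical Terraform workflow applies it and reads back the load balancer address from the lb-dns output referenced later in this guide. The plan file name is only an example.

    # Review and apply the Worker module, then read the NLB address from the module output.
    terraform init
    terraform plan -out op-worker.tfplan
    terraform apply op-worker.tfplan
    terraform output lb-dns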

Load balancing

A production-oriented setup is not included in the Docker instructions. Instead, refer to your company’s standards for load balancing in containerized environments. If you are testing on your local machine, configuring a load balancer is unnecessary.

Use the load balancers provided by your cloud provider. The load balancers adjust based on autoscaling events that the default Helm setup is configured for. The load balancers are internal-facing, so they are only accessible inside your network.

Use the load balancer URL given to you by Helm when you configure the Datadog Agent.

NLBs provisioned by the AWS Load Balancer Controller are used.

See Capacity Planning and Scaling for load balancer recommendations when scaling the Worker.

Cross-availability-zone load balancing

The provided Helm configuration tries to simplify load balancing, but you must take into consideration the potential price implications of cross-AZ traffic. Wherever possible, the samples try to avoid creating situations where multiple cross-AZ hops can happen.

The sample configurations do not enable the cross-zone load balancing feature available in this controller. To enable it, add the following annotation to the service block:

service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true

See AWS Load Balancer Controller for more details.

Given the single-machine nature of the installation, there is no built-in support for load balancing. Provision your own load balancers based on your company’s standard.

The Terraform module provisions an NLB to point at the instances. The DNS address is returned in the lb-dns output in Terraform.

Buffering

Observability Pipelines includes multiple buffering strategies that allow you to increase the resilience of your cluster to downstream faults. The provided sample configurations use disk buffers, the capacities of which are rated for approximately 10 minutes of data at 10Mbps/core for Observability Pipelines deployments. That is often enough time for transient issues to resolve themselves, or for incident responders to decide what needs to be done with the observability data.

By default, the Observability Pipelines Worker’s data directory is set to /var/lib/observability-pipelines-worker. Make sure that your host machine has a sufficient amount of storage capacity allocated to the container’s mountpoint.
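
For example, you might bind mount a dedicated disk at that path when starting the container and keep an eye on its free space. The host path below is an illustrative placeholder to add to the docker run command shown earlier.

    # Illustrative: add this flag to the docker run command to persist the buffer directory
    # on a dedicated host disk (the host path is a placeholder).
    -v /mnt/op-worker-data:/var/lib/observability-pipelines-worker

    # Check available space on the host disk backing the buffer.
    df -h /mnt/op-worker-data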

For AWS, Datadog recommends using the io2 EBS drive family. Alternatively, gp3 drives can also be used.

By default, the Observability Pipelines Worker’s data directory is set to /var/lib/observability-pipelines-worker. If you are using the sample configuration, ensure that at least 288GB of space is available at that location for buffering.

Where possible, it is recommended to have a separate SSD mounted at that location.

By default, a 288GB EBS drive is allocated to each instance, and the sample configuration above is set to use that for buffering.

Connect the Datadog Agent to the Observability Pipelines Worker

To send Datadog Agent logs to the Observability Pipelines Worker, update your agent configuration with the following:

observability_pipelines_worker:
  logs:
    enabled: true
    url: "http://<OPW_HOST>:8282"

OPW_HOST is the IP of the load balancer or machine you set up earlier. For single-host Docker-based installs, this is the IP address of the underlying host. For Kubernetes-based installs, you can retrieve it by running the following command and copying the EXTERNAL-IP:

kubectl get svc opw-observability-pipelines-worker

For Terraform installs, the lb-dns output provides the necessary value.
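
Before restarting the Agent with this setting, it can be useful to confirm basic network reachability from the Agent host to the Worker, for example with a simple TCP check (netcat shown here as one option):

    # Simple TCP reachability check from the Agent host to the Worker's listener.
    nc -vz <OPW_HOST> 8282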

At this point, your observability data should be going to the Worker and then sent along to your S3 archive.

Updating deployment modes

You can change your deployment method after you have deployed a pipeline. For example, you can switch from a manually managed pipeline to a Remote Configuration-enabled pipeline, and vice versa.

To switch from a Remote Configuration deployment to a manually managed deployment:

  1. Navigate to Observability Pipelines and select the pipeline.
  2. Click the settings (gear) icon to go to the pipeline's settings.
  3. In Deployment Mode, select manual to enable it.
  4. Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to false and restart the Worker. Workers that are not restarted with this flag continue to run with Remote Configuration enabled, which means they are not updated manually through a local configuration file.

To switch from a manually managed deployment to a Remote Configuration deployment:

  1. Navigate to Observability Pipelines and select the pipeline.
  2. Click the settings (gear) icon to go to the pipeline's settings.
  3. In Deployment Mode, select Remote Configuration to enable it.
  4. Set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to true and restart the Worker. Workers restarted with this flag poll for configurations deployed from the UI.
  5. Deploy a version from your version history so that the Workers receive the new configuration version: click the version, select Edit as Draft, and then click Deploy.
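
On a Linux (systemd) install like the APT or RPM setups above, one way to set the flag is through the Worker's environment file followed by a service restart. This is a sketch; set the value to true or false depending on the direction of the switch.

    # Sketch: set the Remote Configuration flag in the Worker's environment file, then restart.
    echo "DD_OP_REMOTE_CONFIGURATION_ENABLED=true" | sudo tee -a /etc/default/observability-pipelines-worker
    sudo systemctl restart observability-pipelines-worker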

Rehydrate your archives

See Rehydrating from Archives for instructions on how to rehydrate your archive in Datadog so that you can start analyzing and investigating those logs.

Further reading
