The Observability Pipelines Worker can collect, process, and route logs from any source to any destination. Using Datadog, you can build and manage all of your Observability Pipelines Worker deployments at scale.
There are several ways to get started with the Observability Pipelines Worker.
This document walks you through the quickstart installation steps and then provides resources for next steps. Use and operation of this software is governed by the End User License Agreement.
If you enroll in the Remote Configuration private beta, you can roll out configuration changes to your Workers remotely from the Datadog UI, instead of updating your pipeline configuration in a text editor and manually rolling out the changes. Choose your deployment method when you create a pipeline and install your Workers.
See Update Deployment Modes for how to change your deployment method after a pipeline has been deployed.
To install the Observability Pipelines Worker, you need the following:
To generate a new API key and pipeline:
Follow the below instructions to install the Worker and deploy a sample pipeline configuration that uses demo data.
The Observability Pipelines Worker Docker image is published to Docker Hub here.
Download the sample pipeline configuration file. This configuration emits demo data, parses and structures the data, and then sends it to the console and Datadog. See Configurations for more information about the source, transform, and sink used in the sample configuration.
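For orientation, a pipeline configuration with that shape looks roughly like the sketch below. This is not the downloaded file itself: the component names and options shown here are assumptions based on the Worker's Vector-style configuration format, so treat the sample file linked above as the source of truth.

sources:
  demo:
    type: demo_logs            # emits demo log events on an interval
    format: json
    interval: 60

transforms:
  parse:
    type: remap                # parses and structures each demo event
    inputs:
      - demo
    source: |
      . = parse_json!(string!(.message))

sinks:
  console:
    type: console              # prints processed events to stdout
    inputs:
      - parse
    encoding:
      codec: json
  datadog:
    type: datadog_logs         # forwards processed events to Datadog
    inputs:
      - parse
    default_api_key: "${DD_API_KEY}"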
Run the following command to start the Observability Pipelines Worker with Docker:
docker run -i -e DD_API_KEY=<API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<SITE> \
-p 8282:8282 \
-v ./pipeline.yaml:/etc/observability-pipelines-worker/pipeline.yaml:ro \
datadog/observability-pipelines-worker run
Replace <API_KEY> with your Datadog API key, <PIPELINE_ID> with your Observability Pipelines configuration ID, and <SITE> with your Datadog site (for example, datadoghq.com). Note: ./pipeline.yaml must be the relative or absolute path to the configuration you downloaded in step 1.
Download the Helm chart values file for AWS EKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and use your Datadog site (for example, datadoghq.com) for the site value; the fields to edit are sketched after the commands below. Then, install it in your cluster with the following commands:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install \
opw datadog/observability-pipelines-worker \
-f aws_eks.yaml
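For reference, the fields to edit in aws_eks.yaml (the same fields apply to the Azure AKS and Google GKE values files below) look roughly like the following sketch. The exact key layout may differ in the file you downloaded, so check it before installing:

datadog:
  apiKey: "<API_KEY>"           # your Datadog API key
  pipelineId: "<PIPELINE_ID>"   # your Observability Pipelines configuration ID
  site: "datadoghq.com"         # your Datadog site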
Download the Helm chart values file for Azure AKS. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and use your Datadog site (for example, datadoghq.com) for the site value. Then, install it in your cluster with the following commands:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install \
opw datadog/observability-pipelines-worker \
-f azure_aks.yaml
Download the Helm chart values file for Google GKE. See Configurations for more information about the source, transform, and sink used in the sample configuration.
In the Helm chart, replace the datadog.apiKey and datadog.pipelineId values to match your pipeline, and use your Datadog site (for example, datadoghq.com) for the site value. Then, install it in your cluster with the following commands:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install \
opw datadog/observability-pipelines-worker \
-f google_gke.yaml
Install the Worker with the one-line install script or manually.
Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with your Datadog site (for example, datadoghq.com).
DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker1.sh)"
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml
on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
Start the Worker:
sudo systemctl restart observability-pipelines-worker
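To confirm the Worker started and is processing the demo data, you can use standard systemd tooling; these commands are generic and not specific to the Worker package:

# Check that the service is active
sudo systemctl status observability-pipelines-worker

# Follow the Worker's logs (output from the sample configuration's console sink ends up in the journal)
sudo journalctl -u observability-pipelines-worker -f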
Run the following commands to set up APT to download through HTTPS:
sudo apt-get update
sudo apt-get install apt-transport-https curl gnupg
Run the following commands to set up the Datadog deb
repo on your system and create a Datadog archive keyring:
sudo sh -c "echo 'deb [signed-by=/usr/share/keyrings/datadog-archive-keyring.gpg] https://apt.datadoghq.com/ stable observability-pipelines-worker-1' > /etc/apt/sources.list.d/datadog-observability-pipelines-worker.list"
sudo touch /usr/share/keyrings/datadog-archive-keyring.gpg
sudo chmod a+r /usr/share/keyrings/datadog-archive-keyring.gpg
curl https://keys.datadoghq.com/DATADOG_APT_KEY_CURRENT.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_06462314.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_F14F620E.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
curl https://keys.datadoghq.com/DATADOG_APT_KEY_C0962C7D.public | sudo gpg --no-default-keyring --keyring /usr/share/keyrings/datadog-archive-keyring.gpg --import --batch
Run the following commands to update your local apt
repo and install the Worker:
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Add your keys and your Datadog site (for example, datadoghq.com) to the Worker's environment variables:
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
EOF
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml
on the host.
Start the Worker:
sudo systemctl restart observability-pipelines-worker
Install the Worker with the one-line install script or manually.
Run the one-line install command to install the Worker. Replace <DD_API_KEY> with your Datadog API key, <PIPELINES_ID> with your Observability Pipelines ID, and <SITE> with your Datadog site (for example, datadoghq.com).
DD_API_KEY=<DD_API_KEY> DD_OP_PIPELINE_ID=<PIPELINES_ID> DD_SITE=<SITE> bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_op_worker1.sh)"
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml
on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
Run the following command to start the Worker:
sudo systemctl restart observability-pipelines-worker
Run the following commands to set up the Datadog rpm
repo on your system:
cat <<EOF > /etc/yum.repos.d/datadog-observability-pipelines-worker.repo
[observability-pipelines-worker]
name = Observability Pipelines Worker
baseurl = https://yum.datadoghq.com/stable/observability-pipelines-worker-1/\$basearch/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://keys.datadoghq.com/DATADOG_RPM_KEY_CURRENT.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_4F09D16B.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_B01082D3.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_FD4BF915.public
https://keys.datadoghq.com/DATADOG_RPM_KEY_E09422B3.public
EOF
Note: If you are running RHEL 8.1 or CentOS 8.1, use repo_gpgcheck=0
instead of repo_gpgcheck=1
in the configuration above.
Update your packages and install the Worker:
sudo yum makecache
sudo yum install observability-pipelines-worker
Add your keys and your Datadog site (for example, datadoghq.com) to the Worker's environment variables:
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
EOF
Download the sample configuration file to /etc/observability-pipelines-worker/pipeline.yaml
on the host. See Configurations for more information about the source, transform, and sink used in the sample configuration.
Run the following command to start the Worker:
sudo systemctl restart observability-pipelines-worker
In the configuration, update vpc-id, subnet-ids, and region to match your AWS deployment. Also, update the values in datadog-api-key and pipeline-id to match your pipeline. See Configurations for more information about the source, transform, and sink used in the sample configuration.
See Working with Data for more information on transforming your data.
After deploying a pipeline, you can change its deployment method, for example, switch from a manually managed pipeline to a pipeline with Remote Configuration enabled, or the other way around.
To switch from a Remote Configuration deployment to a manually managed deployment: set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to false and restart the Worker. If you do not restart the Worker with this flag, the Worker keeps running with Remote Configuration enabled and is not updated manually through the local configuration file.
To switch from a manually managed deployment to a Remote Configuration deployment: set the DD_OP_REMOTE_CONFIGURATION_ENABLED flag to true and restart the Worker. The Worker must be restarted with this flag so that it polls for the configuration deployed from the UI.
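On a Linux host installed from the packages above, one way to apply this is to add the flag to the Worker's environment file and restart the service. The file path and service name below assume the package installs described earlier in this guide:

# Enable Remote Configuration for the Worker
echo "DD_OP_REMOTE_CONFIGURATION_ENABLED=true" | sudo tee -a /etc/default/observability-pipelines-worker

# Restart so the Worker polls for the configuration deployed from the UI
sudo systemctl restart observability-pipelines-worker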
The quickstart walked you through installing the Worker and deploying a sample pipeline configuration. For instructions on how to install the Worker to receive and route data from your Datadog Agents to Datadog, or to receive and route data from your Splunk HEC to Splunk and Datadog, select your specific use case:
For recommendations on deploying and scaling multiple Workers:
Additional helpful documentation, links, and articles: