For existing pipelines in Observability Pipelines, you can update and deploy changes for source settings, destination settings, and processors in the Observability Pipelines UI. But if you want to update source and destination environment variables, you need to manually update the Worker with the new values.
On the Worker installation page:
Source environment variables:
- Datadog Agent address: DD_OP_SOURCE_DATADOG_AGENT_ADDRESS
- Fluentd or Fluent Bit address: DD_OP_SOURCE_FLUENT_ADDRESS
- HTTP client endpoint URL, for example https://127.0.0.8/logs: DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL
- HTTP client basic authentication credentials: DD_OP_SOURCE_HTTP_CLIENT_USERNAME and DD_OP_SOURCE_HTTP_CLIENT_PASSWORD
- HTTP client bearer token: DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN
- Splunk HEC address, for example 0.0.0.0:8088. The /services/collector/event path is automatically appended to the endpoint: DD_OP_SOURCE_SPLUNK_HEC_ADDRESS
- Splunk TCP address, for example 0.0.0.0:9997: DD_OP_SOURCE_SPLUNK_TCP_ADDRESS
- Sumo Logic address, for example 0.0.0.0:80. The /receiver/v1/http/ path is automatically appended to the endpoint: DD_OP_SOURCE_SUMO_LOGIC_ADDRESS
- Syslog address, for example 0.0.0.0:9997: DD_OP_SOURCE_SYSLOG_ADDRESS
Destination environment variables:
- AWS access key ID of your S3 archive: DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_ACCESS_KEY_ID
- AWS secret access key of your S3 archive: DD_OP_DESTINATION_DATADOG_ARCHIVES_AWS_SECRET_KEY
- Azure Blob connection string of your archive: DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING
- Some destinations have no environment variables to configure.
- Splunk HEC token: DD_OP_DESTINATION_SPLUNK_HEC_TOKEN
- Splunk HEC endpoint URL, for example https://hec.splunkcloud.com:8088. The /services/collector/event path is automatically appended to the endpoint: DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL
- Sumo Logic HTTP collector URL, for example https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where <ENDPOINT> is your Sumo collection endpoint and <UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source: DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL
- Syslog endpoint URL, for example 127.0.0.1:9997: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL
- Google Chronicle endpoint URL: DD_OP_DESTINATION_GOOGLE_CHRONICLE_UNSTRUCTURED_ENDPOINT_URL
- Elasticsearch credentials and endpoint: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME, DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD, and DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL
- OpenSearch credentials and endpoint: DD_OP_DESTINATION_OPENSEARCH_USERNAME, DD_OP_DESTINATION_OPENSEARCH_PASSWORD, and DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL
- Amazon OpenSearch credentials and endpoint: DD_OP_DESTINATION_AMAZON_OPENSEARCH_USERNAME, DD_OP_DESTINATION_AMAZON_OPENSEARCH_PASSWORD, and DD_OP_DESTINATION_AMAZON_OPENSEARCH_ENDPOINT_URL
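As an illustration, on a Linux host these variables end up as plain KEY=value lines in the Worker's environment file (/etc/default/observability-pipelines-worker); the Splunk HEC values below are placeholders, not values from your pipeline:
DD_OP_SOURCE_SPLUNK_HEC_ADDRESS=0.0.0.0:8088
DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088
DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN>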
For Docker, reinstall the Worker by rerunning the docker run command with the updated environment variables:
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e <SOURCE_ENV_VARIABLE> \
-e <DESTINATION_ENV_VARIABLE> \
-p 8088:8088 \
datadog/observability-pipelines-worker run
Make sure the docker run command exposes the same port the Worker is listening on. If you want to map the Worker's container port to a different port on the Docker host, use the -p | --publish option:
-p 8282:8088 datadog/observability-pipelines-worker run
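For example, a pipeline with a Splunk HEC source and a Splunk HEC destination might be reinstalled with a command along these lines; the variable names come from the tables above, while the address, endpoint, and token values are placeholders to replace with your own:
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> \
-e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
-e DD_SITE=<DATADOG_SITE> \
-e DD_OP_SOURCE_SPLUNK_HEC_ADDRESS=0.0.0.0:8088 \
-e DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL=https://hec.splunkcloud.com:8088 \
-e DD_OP_DESTINATION_SPLUNK_HEC_TOKEN=<SPLUNK_HEC_TOKEN> \
-p 8088:8088 \
datadog/observability-pipelines-worker run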
For AWS EKS, update the Datadog Helm repo and upgrade the Worker release with the new values:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f aws_eks.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
Set <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
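For example, if the Worker listens on port 8088 and you do not need to remap it, the port values might be filled in as follows; the 8088 value is an assumption for illustration, and the remaining placeholders still come from your pipeline's installation page:
helm upgrade --install opw \
-f aws_eks.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8088 \
datadog/observability-pipelines-worker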
For Azure AKS, update the Datadog Helm repo and upgrade the Worker release with the new values:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f azure_aks.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
Set <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
For Google GKE, update the Datadog Helm repo and upgrade the Worker release with the new values:
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install opw \
-f google_gke.yaml \
--set datadog.apiKey=<DATADOG_API_KEY> \
--set datadog.pipelineId=<PIPELINE_ID> \
--set <SOURCE_ENV_VARIABLES> \
--set <DESTINATION_ENV_VARIABLES> \
--set service.ports[0].protocol=TCP,service.ports[0].port=<SERVICE_PORT>,service.ports[0].targetPort=<TARGET_PORT> \
datadog/observability-pipelines-worker
Set <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker's pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
--set service.ports[0].protocol=TCP,service.ports[0].port=8088,service.ports[0].targetPort=8282
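After the upgrade, you can optionally confirm that the Worker pods restarted and that the Service exposes the port and targetPort you configured; these are generic kubectl checks rather than part of the instructions above:
kubectl get pods
kubectl get services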
For APT-based Linux:
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to re-install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker
are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Update your local apt repo and install the latest Worker version:
sudo apt-get update
sudo apt-get install observability-pipelines-worker datadog-signing-keys
Add your Datadog API key, pipeline ID, site (for example, datadoghq.com for US1), and the updated source and destination environment variables to the Worker's environment file:
sudo cat <<EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<DATADOG_API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<DATADOG_SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
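To confirm the Worker restarted with the updated values, you can optionally check the service status; this check is not part of the install steps above:
sudo systemctl status observability-pipelines-worker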
For RPM-based Linux:
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to re-install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker
are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Update your packages and install the latest version of the Worker:
sudo yum makecache
sudo yum install observability-pipelines-worker
Add your Datadog API key, pipeline ID, site (for example, datadoghq.com for US1), and the updated source and destination environment variables to the Worker's environment file:
sudo cat <<-EOF > /etc/default/observability-pipelines-worker
DD_API_KEY=<API_KEY>
DD_OP_PIPELINE_ID=<PIPELINE_ID>
DD_SITE=<SITE>
<SOURCE_ENV_VARIABLES>
<DESTINATION_ENV_VARIABLES>
EOF
sudo systemctl restart observability-pipelines-worker
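Optionally, you can verify that the environment file contains the updated variables and follow the Worker's logs after the restart; these commands are illustrative checks rather than required steps:
sudo cat /etc/default/observability-pipelines-worker
sudo journalctl -u observability-pipelines-worker -f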