This document goes over bootstrapping the Observability Pipelines Worker and referencing files in Kubernetes.

Bootstrap the Observability Pipelines Worker within your infrastructure before you set up a pipeline. The bootstrap environment variables are separate from the pipeline environment variables.

Note: Modifying files under DD_OP_DATA_DIR/config while the Observability Pipelines Worker is running might have adverse effects.

The location of the related directories and files:
- /var/lib/observability-pipelines-worker: the data directory
- /etc/observability-pipelines-worker/bootstrap.yaml: the bootstrap file
- /etc/default/observability-pipelines-worker: the environment variables file

To set bootstrap options, do one of the following:
- Set the corresponding environment variables.
- Create a bootstrap.yaml file and start the Worker instance with --bootstrap-config /path/to/bootstrap.yaml.
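As an illustrative sketch, a bootstrap.yaml built from the options covered in this document might look like the following. All values are placeholders; replace them with your own.

```yaml
# Illustrative bootstrap.yaml -- every value here is a placeholder.
api_key: <YOUR_DATADOG_API_KEY>    # overridden by DD_API_KEY if set
pipeline_id: <YOUR_PIPELINE_ID>    # overridden by DD_OP_PIPELINE_ID if set
site: datadoghq.com                # overridden by DD_SITE if set
data_dir: /var/lib/observability-pipelines-worker
tags: []
threads: 2
```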
The following is a list of bootstrap options, their related pipeline environment variables, and which variables have a higher precedence (priority).
- api_key: Pipeline environment variable DD_API_KEY. DD_API_KEY takes precedence.
- pipeline_id: Pipeline environment variable DD_OP_PIPELINE_ID. DD_OP_PIPELINE_ID takes precedence.
- site: Pipeline environment variable DD_SITE. DD_SITE takes precedence. (Default: datadoghq.com.)
- data_dir: Pipeline environment variable DD_OP_DATA_DIR. DD_OP_DATA_DIR takes precedence. (Default: /var/lib/observability-pipelines-worker.) This is the file system directory that the Observability Pipelines Worker uses for local state.
- tags: []: Pipeline environment variable DD_OP_TAGS. DD_OP_TAGS takes precedence.
- threads: Pipeline environment variable DD_OP_THREADS. DD_OP_THREADS takes precedence.
- proxy: Pipeline environment variables DD_PROXY_HTTP, DD_PROXY_HTTPS, and DD_PROXY_NO_PROXY. Precedence, from highest to lowest: DD_PROXY_HTTP(S), then HTTP(S)_PROXY, then proxy. Note: The DD_PROXY_HTTP(S) and HTTP(S)_PROXY environment variables need to be already exported in your environment for the Worker to resolve them. They cannot be prepended to the Worker installation script.

If you are referencing files in Kubernetes for Google Cloud Storage authentication, TLS certificates for certain sources, or an enrichment table processor, you need to use volumeMounts[*].subPath
to mount files from a configMap or secret.
For example, if you have a secret defined as:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  credentials1.json: bXktc2VjcmV0LTE=
  credentials2.json: bXktc2VjcmV0LTI=
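The values under data in a Secret are base64-encoded. As a quick check, the placeholder values above decode as follows:

```shell
# Decode the base64 placeholder values from the Secret above
echo 'bXktc2VjcmV0LTE=' | base64 --decode   # prints: my-secret-1
echo 'bXktc2VjcmV0LTI=' | base64 --decode   # prints: my-secret-2
```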
Then you need to override extraVolumes and extraVolumeMounts in the values.yaml file to mount the secret files to Observability Pipelines Worker pods using subPath:
# extraVolumes -- Specify additional Volumes to use.
extraVolumes:
  - name: my-secret-volume
    secret:
      secretName: my-secret
# extraVolumeMounts -- Specify additional VolumeMounts to use.
extraVolumeMounts:
  - name: my-secret-volume
    mountPath: /var/lib/observability-pipelines-worker/config/credentials1.json
    subPath: credentials1.json
  - name: my-secret-volume
    mountPath: /var/lib/observability-pipelines-worker/config/credentials2.json
    subPath: credentials2.json
Note: If you override the datadog.dataDir parameter, you need to override the mountPath as well.
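For instance, if datadog.dataDir were overridden to a hypothetical directory such as /opt/opw-data, the corresponding mountPath values would need to change to match. A sketch, not a complete values.yaml:

```yaml
# Hypothetical override: data directory moved from the default to /opt/opw-data
datadog:
  dataDir: /opt/opw-data
extraVolumeMounts:
  - name: my-secret-volume
    mountPath: /opt/opw-data/config/credentials1.json   # must follow dataDir
    subPath: credentials1.json
```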