If you experience unexpected behavior with Datadog Observability Pipelines (OP), there are a few common issues you can investigate, and this guide may help you resolve them quickly. If you continue to have trouble, reach out to Datadog support for further assistance.
To view information about the Observability Pipelines Workers running for an active pipeline, add `@op_work.id:<worker_id>` to the search query to narrow the results to a specific Worker.
If you can access your Observability Pipelines Workers locally, use the `tap` command to see the raw data sent through your pipeline's source and processors.
The Observability Pipelines Worker API allows you to interact with the Worker's processes using the `tap` command. If you are using the Helm charts provided when you set up a pipeline, the API has already been enabled. Otherwise, make sure the environment variable `DD_OP_API_ENABLED` is set to `true` in `/etc/observability-pipelines-worker/bootstrap.yaml`. See Bootstrap options for more information. This sets up the API to listen on `localhost` and port `8686`, which is what the CLI for `tap` expects.
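As an illustration, the sketch below shows one way to pass the variable when the Worker runs in Docker. It is a minimal example, not the full install command: the API key, pipeline ID, site, and any source or destination variables your pipeline already uses are placeholders here.

```shell
# Start the Worker with the API enabled. With DD_OP_API_ENABLED=true, the API
# listens on localhost:8686 inside the container, which is where the tap CLI
# expects to find it.
docker run \
  -e DD_API_KEY=<DATADOG_API_KEY> \
  -e DD_OP_PIPELINE_ID=<PIPELINE_ID> \
  -e DD_SITE=<DATADOG_SITE> \
  -e DD_OP_API_ENABLED=true \
  datadog/observability-pipelines-worker run
```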
Use `top` to find the component ID

You need the source's or processor's component ID to `tap` into it. Use the `top` command to find the ID of the component you want to `tap` into:
observability-pipelines-worker top
Use `tap` to see your data

If you are on the same host as the Worker, run the following command to `tap` the output of the component:
observability-pipelines-worker tap <component_ID>
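If you want to keep a sample of the tapped events for later inspection, you can redirect the output to a file (a small sketch; it assumes `tap` prints the tapped events to standard output):

```shell
# Write tapped events to a file; stop with Ctrl+C once you have captured enough data.
observability-pipelines-worker tap <component_ID> > tapped-events.out
```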
If you are using a containerized environment, use the `docker exec` or `kubectl exec` command to get a shell into the container to run the above `tap` command.
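For example, in Kubernetes you might run `tap` through `kubectl exec`; the pod name, namespace, container name, and component ID below are placeholders for your own values, and the sketch assumes the Worker binary is on the container's path:

```shell
# Run tap directly inside the Worker pod.
kubectl exec -it <worker_pod_name> -n <namespace> -- observability-pipelines-worker tap <component_ID>

# Docker equivalent; <worker_container_name> is the name of the Worker container.
docker exec -it <worker_container_name> observability-pipelines-worker tap <component_ID>
```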
Observability Pipelines destinations batch events before sending them to the downstream integration. For example, the Amazon S3, Google Cloud Storage, and Azure Storage destinations have a batch timeout of 900 seconds. If the other batch parameters (maximum events and maximum bytes) have not been met within the 900-second timeout, the batch is flushed at 900 seconds. This means the destination component can take up to 15 minutes to send out a batch of events to the downstream integration.
These are the batch parameters for each destination:
Destination | Maximum Events | Maximum Bytes | Timeout (seconds) |
---|---|---|---|
Amazon OpenSearch | None | 10,000,000 | 1 |
Amazon S3 (Datadog Log Archives) | None | 100,000,000 | 900 |
Azure Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
Datadog Logs | 1,000 | 4,250,000 | 5 |
Elasticsearch | None | 10,000,000 | 1 |
Google Chronicle | None | 1,000,000 | 15 |
Google Cloud Storage (Datadog Log Archives) | None | 100,000,000 | 900 |
New Relic | 100 | 1,000,000 | 1 |
OpenSearch | None | 10,000,000 | 1 |
Splunk HTTP Event Collector (HEC) | None | 1,000,000 | 1 |
Sumo Logic Hosted Collector | None | 10,000,000 | 1 |
Note: The rsyslog and syslog-ng destinations do not batch events.
See event batching for more information.