Knative for Anthos is a flexible serverless development platform for hybrid and multicloud environments. It is Google's managed and fully supported Knative offering.
Use the Datadog Google Cloud Platform integration to collect metrics from Knative for Anthos.
If you haven’t already, set up the Google Cloud Platform integration.
If you are already authenticating your Knative for Anthos services using Workload Identity, then no further steps are needed.
If you have not enabled Workload Identity, you must migrate to use Workload Identity to start collecting Knative metrics. This involves binding a Kubernetes service account to a Google service account and configuring each service that you want to collect metrics from to use Workload Identity.
For detailed setup instructions, see Google Cloud Workload Identity.
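The Google documentation linked above is the authoritative setup path. As an illustration only, the sketch below uses the Kubernetes Python client to add the Workload Identity annotation that maps a Kubernetes service account to a Google service account; the namespace, service account names, and project are placeholders, and the Google service account must separately be granted `roles/iam.workloadIdentityUser` for the corresponding Kubernetes service account (for example, through the IAM console).

```python
# Illustrative sketch (not the official setup steps): annotate a Kubernetes
# service account so pods using it impersonate a Google service account
# through Workload Identity. All names below are placeholders.
from kubernetes import client, config

NAMESPACE = "default"                                             # assumption
KSA_NAME = "knative-metrics-ksa"                                  # assumption
GSA_EMAIL = "knative-metrics@my-project.iam.gserviceaccount.com"  # assumption

config.load_kube_config()  # authenticate with your local kubeconfig
core_v1 = client.CoreV1Api()

# The iam.gke.io/gcp-service-account annotation tells GKE which Google
# service account this Kubernetes service account should impersonate.
core_v1.patch_namespaced_service_account(
    name=KSA_NAME,
    namespace=NAMESPACE,
    body={"metadata": {"annotations": {"iam.gke.io/gcp-service-account": GSA_EMAIL}}},
)
```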
Knative for Anthos exposes service logs. Knative logs can be collected with Google Cloud Logging and sent to a Dataflow job through a Cloud Pub/Sub topic. If you haven’t already, set up logging with the Datadog Dataflow template.
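The Dataflow template guide walks through creating the required Pub/Sub resources. As a rough sketch of the moving parts only, the following snippet (project, topic, and subscription names are placeholders) creates a topic for exported logs and a pull subscription of the kind the Dataflow job consumes:

```python
# Illustrative sketch: create the Cloud Pub/Sub topic that exported Knative
# logs are published to, plus a pull subscription for the Dataflow job.
# Project, topic, and subscription names are placeholders.
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"                     # assumption
TOPIC_ID = "export-knative-logs-to-datadog"   # assumption
SUBSCRIPTION_ID = "knative-logs-to-dataflow"  # assumption

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

# Topic that the Cloud Logging sink publishes to.
publisher.create_topic(request={"name": topic_path})

# Subscription that the Datadog Dataflow template pulls from.
subscriber.create_subscription(
    request={"name": subscription_path, "topic": topic_path}
)
```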
Once logging is set up, export your Knative logs from Google Cloud Logging to the Pub/Sub topic (a programmatic sketch follows these steps):

1. Go to Knative for Anthos, click your desired service, and navigate to the Logs tab.
2. Click View in Logs Explorer to go to the Google Cloud Logging page.
3. Click Create Sink and name the sink accordingly.
4. Choose "Cloud Pub/Sub" as the destination and select the Pub/Sub topic that was created for that purpose. Note: The Pub/Sub topic can be located in a different project.
5. Click Create and wait for the confirmation message to show up.
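As an alternative to the console steps above, a sink can also be created programmatically. The sketch below uses the google-cloud-logging client; the sink name, destination topic, and log filter are placeholders, and the filter in particular should be replaced with the query that Logs Explorer shows for your services.

```python
# Illustrative sketch: create a Cloud Logging sink that routes Knative logs
# to the Pub/Sub topic consumed by the Datadog Dataflow job.
# Sink name, destination, and filter below are placeholders.
from google.cloud import logging as gcp_logging

SINK_NAME = "export-knative-logs-to-datadog"          # assumption
PUBSUB_DESTINATION = (
    "pubsub.googleapis.com/projects/my-project/topics/"
    "export-knative-logs-to-datadog"                  # assumption
)
LOG_FILTER = 'resource.type="k8s_container"'          # example filter only

client = gcp_logging.Client()
sink = client.sink(SINK_NAME, filter_=LOG_FILTER, destination=PUBSUB_DESTINATION)

if not sink.exists():
    sink.create()
    # As with a console-created sink, the sink's writer identity must be
    # granted publish access on the destination topic.
    print(f"Grant Pub/Sub Publisher on the topic to {sink.writer_identity}")
```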
| Metric | Type | Description |
| --- | --- | --- |
| gcp.knative.eventing.broker.event_count | count | Number of events received by a broker. |
| gcp.knative.eventing.trigger.event_count | count | Number of events received by a trigger. |
| gcp.knative.eventing.trigger.event_dispatch_latencies.avg | gauge | Average of time spent dispatching an event to a trigger subscriber. Shown as millisecond |
| gcp.knative.eventing.trigger.event_dispatch_latencies.p99 | gauge | 99th percentile of time spent dispatching an event to a trigger subscriber. Shown as millisecond |
| gcp.knative.eventing.trigger.event_dispatch_latencies.p95 | gauge | 95th percentile of time spent dispatching an event to a trigger subscriber. Shown as millisecond |
| gcp.knative.eventing.trigger.event_processing_latencies.avg | gauge | Average of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond |
| gcp.knative.eventing.trigger.event_processing_latencies.p99 | gauge | 99th percentile of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond |
| gcp.knative.eventing.trigger.event_processing_latencies.p95 | gauge | 95th percentile of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond |
| gcp.knative.serving.activator.request_count | count | The number of requests that are routed to the activator. Shown as request |
| gcp.knative.serving.activator.request_latencies.avg | gauge | Average of service request times in milliseconds for requests that go through the activator. Shown as millisecond |
| gcp.knative.serving.activator.request_latencies.p99 | gauge | 99th percentile of service request times in milliseconds for requests that go through the activator. Shown as millisecond |
| gcp.knative.serving.activator.request_latencies.p95 | gauge | 95th percentile of service request times in milliseconds for requests that go through the activator. Shown as millisecond |
| gcp.knative.serving.autoscaler.actual_pods | gauge | Number of pods that are allocated currently. |
| gcp.knative.serving.autoscaler.desired_pods | gauge | Number of pods autoscaler wants to allocate. |
| gcp.knative.serving.autoscaler.panic_mode | gauge | Set to 1 if autoscaler is in panic mode for the revision, otherwise 0. |
| gcp.knative.serving.autoscaler.panic_request_concurrency | gauge | Average requests concurrency observed per pod during the shorter panic autoscaling window. Shown as request |
| gcp.knative.serving.autoscaler.requested_pods | gauge | Number of pods autoscaler requested from Kubernetes. |
| gcp.knative.serving.autoscaler.stable_request_concurrency | gauge | Average requests concurrency observed per pod during the stable autoscaling window. Shown as request |
| gcp.knative.serving.autoscaler.target_concurrency_per_pod | gauge | The desired average requests concurrency per pod during the stable autoscaling window. Shown as request |
| gcp.knative.serving.revision.request_count | count | The number of requests reaching the revision. Shown as request |
| gcp.knative.serving.revision.request_latencies.avg | gauge | Average of service request times in milliseconds for requests reaching the revision. Shown as millisecond |
| gcp.knative.serving.revision.request_latencies.p99 | gauge | 99th percentile of service request times in milliseconds for requests reaching the revision. Shown as millisecond |
| gcp.knative.serving.revision.request_latencies.p95 | gauge | 95th percentile of service request times in milliseconds for requests reaching the revision. Shown as millisecond |
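Once these metrics are flowing, they can be queried like any other Datadog metric. As a minimal sketch, assuming the datadog-api-client Python package and DD_API_KEY / DD_APP_KEY environment variables, the following queries revision request counts over the last hour (the query scope is a placeholder):

```python
# Minimal sketch: query a Knative metric through the Datadog API.
# Assumes the datadog-api-client package and DD_API_KEY / DD_APP_KEY
# environment variables; the query scope ({*}) is a placeholder.
import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()  # reads DD_API_KEY / DD_APP_KEY from the environment
with ApiClient(configuration) as api_client:
    api = MetricsApi(api_client)
    response = api.query_metrics(
        _from=int(time.time()) - 3600,  # one hour ago
        to=int(time.time()),            # now
        query="sum:gcp.knative.serving.revision.request_count{*}.as_count()",
    )
    print(response)
```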
The Knative for Anthos integration does not include any events.
The Knative for Anthos integration does not include any service checks.
Need help? Contact Datadog support.
Additional helpful documentation, links, and articles: