Knative for Anthos

Overview

Knative for Anthos is a flexible serverless development platform for hybrid and multi-cloud environments. Knative for Anthos is Google's managed and fully supported Knative offering.

Use the Datadog Google Cloud Platform integration to collect metrics from Knative for Anthos.

Setup

Metric collection

Installation

If you haven't already, set up the Google Cloud Platform integration first.

If you are already authenticating your Knative for Anthos services with Workload Identity, no further steps are needed.

If you have not enabled Workload Identity, you must migrate to Workload Identity to start collecting Knative metrics. This involves binding a Kubernetes service account to a Google service account and configuring each service you want to collect metrics from to use Workload Identity.
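
As an illustration only, the binding step usually boils down to letting the Kubernetes service account impersonate the Google service account and annotating it with that mapping. The cluster, project, namespace, and account names below are placeholders, and this is a minimal sketch rather than the full migration procedure:

    # Enable Workload Identity on the cluster (placeholder names throughout).
    gcloud container clusters update CLUSTER_NAME \
        --workload-pool=PROJECT_ID.svc.id.goog

    # Allow the Kubernetes service account to impersonate the Google service account.
    gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
        --role="roles/iam.workloadIdentityUser" \
        --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

    # Annotate the Kubernetes service account with the Google service account it maps to.
    kubectl annotate serviceaccount KSA_NAME \
        --namespace=NAMESPACE \
        iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com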

For detailed setup instructions, see Google Cloud Workload Identity.

Log collection

Knative for Anthos exposes service logs. Knative logs can be collected with Google Cloud Logging and sent to a Dataflow job through a Cloud Pub/Sub topic. If you haven't already, set up logging with the Datadog Dataflow template.
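
As a reference point, the Pub/Sub resources that the Dataflow job reads from can be created up front. The topic and subscription names below are placeholders; the Dataflow job itself is launched from the Datadog Dataflow template, as described in the guide linked above:

    # Create the topic that receives exported logs and the subscription the
    # Dataflow job pulls from (placeholder names).
    gcloud pubsub topics create export-logs-to-datadog
    gcloud pubsub subscriptions create export-logs-to-datadog-sub \
        --topic=export-logs-to-datadog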

Once this is done, export your Google Cloud Run logs from Google Cloud Logging to Pub/Sub (a command-line equivalent is sketched after these steps):

  1. Go to Knative for Anthos, click the desired services, and navigate to the Logs tab.

  2. Click View in Logs Explorer to go to the Google Cloud Logging page.

  3. Click Create sink and give the sink a descriptive name.

  4. Choose "Cloud Pub/Sub" as the destination and select the Pub/Sub topic created for this purpose. Note: The Pub/Sub topic can be located in a different project.

  5. Click Create and wait for the confirmation message to appear.
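
The same sink can also be created from the command line. The sink name, project, topic, and log filter below are placeholders and assumptions; adjust the filter to match the services whose logs you want to export:

    # Route matching log entries to the Pub/Sub topic (placeholder names; the
    # resource.type filter is an assumption -- verify it against your log entries).
    gcloud logging sinks create knative-anthos-log-sink \
        pubsub.googleapis.com/projects/PROJECT_ID/topics/export-logs-to-datadog \
        --log-filter='resource.type="knative_revision"'

    # Grant the sink's writer identity (printed by the previous command) permission
    # to publish to the topic.
    gcloud pubsub topics add-iam-policy-binding export-logs-to-datadog \
        --member="SINK_WRITER_IDENTITY" \
        --role="roles/pubsub.publisher"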

Data Collected

Metrics

gcp.knative.eventing.broker.event_count (count): Number of events received by a broker.
gcp.knative.eventing.trigger.event_count (count): Number of events received by a trigger.
gcp.knative.eventing.trigger.event_dispatch_latencies.avg (gauge): Average of time spent dispatching an event to a trigger subscriber. Shown as millisecond.
gcp.knative.eventing.trigger.event_dispatch_latencies.p99 (gauge): 99th percentile of time spent dispatching an event to a trigger subscriber. Shown as millisecond.
gcp.knative.eventing.trigger.event_dispatch_latencies.p95 (gauge): 95th percentile of time spent dispatching an event to a trigger subscriber. Shown as millisecond.
gcp.knative.eventing.trigger.event_processing_latencies.avg (gauge): Average of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond.
gcp.knative.eventing.trigger.event_processing_latencies.p99 (gauge): 99th percentile of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond.
gcp.knative.eventing.trigger.event_processing_latencies.p95 (gauge): 95th percentile of time spent processing an event before it is dispatched to a trigger subscriber. Shown as millisecond.
gcp.knative.serving.activator.request_count (count): The number of requests that are routed to the activator. Shown as request.
gcp.knative.serving.activator.request_latencies.avg (gauge): Average of service request times in milliseconds for requests that go through the activator. Shown as millisecond.
gcp.knative.serving.activator.request_latencies.p99 (gauge): 99th percentile of service request times in milliseconds for requests that go through the activator. Shown as millisecond.
gcp.knative.serving.activator.request_latencies.p95 (gauge): 95th percentile of service request times in milliseconds for requests that go through the activator. Shown as millisecond.
gcp.knative.serving.autoscaler.actual_pods (gauge): Number of pods that are allocated currently.
gcp.knative.serving.autoscaler.desired_pods (gauge): Number of pods the autoscaler wants to allocate.
gcp.knative.serving.autoscaler.panic_mode (gauge): Set to 1 if the autoscaler is in panic mode for the revision, otherwise 0.
gcp.knative.serving.autoscaler.panic_request_concurrency (gauge): Average request concurrency observed per pod during the shorter panic autoscaling window. Shown as request.
gcp.knative.serving.autoscaler.requested_pods (gauge): Number of pods the autoscaler requested from Kubernetes.
gcp.knative.serving.autoscaler.stable_request_concurrency (gauge): Average request concurrency observed per pod during the stable autoscaling window. Shown as request.
gcp.knative.serving.autoscaler.target_concurrency_per_pod (gauge): The desired average request concurrency per pod during the stable autoscaling window. Shown as request.
gcp.knative.serving.revision.request_count (count): The number of requests reaching the revision. Shown as request.
gcp.knative.serving.revision.request_latencies.avg (gauge): Average of service request times in milliseconds for requests reaching the revision. Shown as millisecond.
gcp.knative.serving.revision.request_latencies.p99 (gauge): 99th percentile of service request times in milliseconds for requests reaching the revision. Shown as millisecond.
gcp.knative.serving.revision.request_latencies.p95 (gauge): 95th percentile of service request times in milliseconds for requests reaching the revision. Shown as millisecond.

Events

The Knative for Anthos integration does not include any events.

Service Checks

The Knative for Anthos integration does not include any service checks.

Troubleshooting

Need help? Contact Datadog support.

Further Reading

Additional helpful documentation, links, and articles:
