Overview
Google Cloud Vertex AI empowers machine learning developers, data scientists, and data engineers to take their projects from
ideation to deployment, quickly and cost-effectively. Train high-quality custom machine learning models with minimal
machine learning expertise and effort.
Setup
Installation
Metric collection
Google Cloud Vertex AI is included in the Google Cloud Platform integration package.
If you haven’t already, set up the Google Cloud Platform integration first to begin collecting out-of-the-box metrics.
Configuration
To collect Vertex AI labels as tags, enable the Cloud Asset Viewer role.
You can use service account impersonation and automatic project discovery to integrate Datadog with Google Cloud.
This method enables you to monitor all projects visible to a service account by assigning IAM roles
in the relevant projects. You can assign these roles to projects individually, or you can configure
Datadog to monitor groups of projects by assigning these roles at the organization or folder level.
Assigning roles in this way allows Datadog to automatically discover and monitor all projects in the
given scope, including any new projects that may be added to the group in the future.
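If you manage IAM with scripts rather than the console, a minimal sketch of granting the Cloud Asset Viewer role at the project level with the Cloud Resource Manager API (through the google-api-python-client library) could look like the following. The project ID and Datadog service account email are placeholders; the same binding can instead be applied at the folder or organization level, as described above.

```python
from googleapiclient import discovery  # pip install google-api-python-client

# Placeholders: substitute your own project ID and the Datadog service account email.
PROJECT_ID = "my-gcp-project"
DATADOG_SA = "datadog-integration@my-gcp-project.iam.gserviceaccount.com"

# Uses Application Default Credentials for authentication.
crm = discovery.build("cloudresourcemanager", "v1")

# Read the project's current IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Add a binding granting the Cloud Asset Viewer role to the Datadog service account.
policy.setdefault("bindings", []).append(
    {"role": "roles/cloudasset.viewer", "members": [f"serviceAccount:{DATADOG_SA}"]}
)

# Write the policy back; the etag returned with the policy guards against
# concurrent modifications.
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```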
Log collection
Google Cloud Vertex AI logs are collected with Google Cloud Logging and sent to a Dataflow job through a Cloud Pub/Sub topic. If you haven’t already, set up logging with the Datadog Dataflow template.
Once this is done, export your Google Cloud Vertex AI logs from Google Cloud Logging to the Pub/Sub topic:
- Go to the Google Cloud Logging page and filter Google Cloud Vertex AI logs.
- Click Create Sink and name the sink accordingly.
- Choose “Cloud Pub/Sub” as the destination and select the Pub/Sub topic that was created for that purpose. Note: The Pub/Sub topic can be located in a different project.
- Click Create and wait for the confirmation message to show up.
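If you prefer to create the sink programmatically instead of through the console, here is a minimal sketch using the google-cloud-logging client library. The project ID, sink name, Pub/Sub topic, and log filter are assumptions; use the same filter you would apply in the Logging page above.

```python
from google.cloud import logging  # pip install google-cloud-logging

# Placeholder: project that emits the Vertex AI logs.
client = logging.Client(project="my-gcp-project")

# Assumed filter: adjust it to match the Vertex AI logs you filtered in the Logging page.
log_filter = 'resource.type="aiplatform.googleapis.com/Endpoint"'

# Pub/Sub topic created for the Datadog Dataflow template (may live in a different project).
destination = "pubsub.googleapis.com/projects/my-gcp-project/topics/export-logs-to-datadog"

sink = client.sink("vertex-ai-logs-to-datadog", filter_=log_filter, destination=destination)

if not sink.exists():
    sink.create()
    print(f"Created sink {sink.name}")
else:
    print(f"Sink {sink.name} already exists")
```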
Data Collected
Metrics
gcp.aiplatform.prediction.online.cpu.utilization (gauge) | Fraction of CPU allocated by the deployed model replica and currently in use. May exceed 100% if the machine type has multiple CPUs. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as fraction |
gcp.aiplatform.prediction.online.memory.bytes_used (gauge) | Amount of memory allocated by the deployed model replica and currently in use. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as byte |
gcp.aiplatform.prediction.online.prediction_latencies.samplecount (count) | Online prediction latency of the public deployed model. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as microsecond |
gcp.aiplatform.prediction.online.prediction_latencies.avg (gauge) | Average online prediction latency of the deployed model. Shown as microsecond |
gcp.aiplatform.prediction.online.prediction_count (count) | Number of online predictions. Shown as prediction |
gcp.aiplatform.prediction.online.network.sent_bytes_count (count) | Number of bytes sent over the network by the deployed model replica. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as byte |
gcp.aiplatform.prediction.online.network.received_bytes_count (count) | Number of bytes received over the network by the deployed model replica. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as byte |
gcp.aiplatform.prediction.online.target_replicas (count) | Target number of active replicas needed for the deployed model. Sampled every 60 seconds. After sampling data is not visible for up to 120 seconds. Shown as worker |
gcp.aiplatform.prediction.online.replicas (count) | Number of active replicas used by the deployed model. Sampled every 60 seconds. After sampling data is not visible for up to 120 seconds. Shown as worker |
gcp.aiplatform.prediction.online.response_count (count) | Number of different online prediction response codes. Shown as response |
gcp.aiplatform.prediction.online.error_count (count) | Number of online prediction errors. Shown as error |
gcp.aiplatform.online_prediction_requests_per_base_model (count) | Online prediction requests per minute per project per base model. Shown as request |
gcp.aiplatform.prediction.online.accelerator.duty_cycle (gauge) | Average fraction of time over the past sample period during which the accelerator(s) were actively processing. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as fraction |
gcp.aiplatform.prediction.online.accelerator.memory.bytes_used (gauge) | Amount of accelerator memory allocated by the deployed model replica. Shown as byte |
gcp.aiplatform.prediction.online.private.prediction_latencies.avg (gauge) | Average online prediction latency of the private deployed model. Shown as microsecond |
gcp.aiplatform.prediction.online.private.prediction_latencies.samplecount (count) | Online prediction latency of the private deployed model. Sampled every 60 seconds. After sampling data is not visible for up to 360 seconds. Shown as microsecond |
gcp.aiplatform.prediction.online.private.response_count (count) | Online prediction response count of the private deployed model. Shown as response |
gcp.aiplatform.quota.online_prediction_requests_per_base_model.exceeded (count) | Number of attempts to exceed the limit on quota metric aiplatform.googleapis.com/online_prediction_requests_per_base_model. Shown as error |
gcp.aiplatform.quota.online_prediction_requests_per_base_model.limit (gauge) | Current limit on quota metric aiplatform.googleapis.com/online_prediction_requests_per_base_model. Shown as request |
gcp.aiplatform.quota.online_prediction_requests_per_base_model.usage (count) | Current usage on quota metric aiplatform.googleapis.com/online_prediction_requests_per_base_model. Shown as request |
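Once metrics start flowing, you can verify collection with a quick query. The sketch below uses the datadog-api-client Python package (v1 Metrics API) to query one of the metrics listed above over the last hour; it assumes the DD_API_KEY and DD_APP_KEY environment variables are set, and the query scope ({*}) is only an example.

```python
import time

from datadog_api_client import ApiClient, Configuration  # pip install datadog-api-client
from datadog_api_client.v1.api.metrics_api import MetricsApi

# Configuration reads DD_API_KEY, DD_APP_KEY (and optionally DD_SITE) from the environment.
configuration = Configuration()

with ApiClient(configuration) as api_client:
    metrics_api = MetricsApi(api_client)
    now = int(time.time())
    response = metrics_api.query_metrics(
        _from=now - 3600,  # one hour ago
        to=now,
        query="avg:gcp.aiplatform.prediction.online.cpu.utilization{*}",
    )
    print(response)
```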
Service Checks
Google Cloud Vertex AI does not include any service checks.
Events
Google Cloud Vertex AI does not include any events.
Troubleshooting
Need help? Contact Datadog support.
Further reading
Additional helpful links, articles, and documentation: