Google Cloud Run Services
Overview
Google Cloud Run is a fully managed serverless platform for deploying and scaling container-based applications in Google Cloud. Datadog provides metric and log collection for these services through the Google Cloud integration. This page describes how to instrument your application code running in Google Cloud Run. Only Google Cloud Run services are supported, not Google Cloud Run jobs.
Setup
The recommended way to instrument Google Cloud Run applications is to install a tracer in your application and run a sidecar container that collects the custom metrics and traces from it. The application is configured to write its logs to a volume shared with the sidecar, which then forwards them to Datadog.
Applications
Set up a Datadog tracing library, configure the application to send DogStatsD metrics to port 8125, and write correctly formatted logs to the shared volume. For custom metrics, use Distribution Metrics to correctly aggregate data from multiple Google Cloud Run instances.
Add the dd-trace-js library to your application.
app.js
// The tracer includes a DogStatsD client. The tracer is actually started with `NODE_OPTIONS`
// so that we can take advantage of startup tracing.
// The tracer will inject the current trace ID into logs with `DD_LOGS_INJECTION`.
// The tracer will send profiling information with `DD_PROFILING_ENABLED`.
const tracer = require('dd-trace').init();
const express = require("express");
const app = express();
const { createLogger, format, transports } = require('winston');

// We can use the DD_SERVERLESS_LOG_PATH environment variable if it is available.
// While this is not necessary, it keeps the log forwarding configuration centralized
// in the Cloud Run configuration.
const logFilename = process.env.DD_SERVERLESS_LOG_PATH?.replace("*.log", "app.log") || "/shared-logs/logs/app.log";
console.log(`writing logs to ${logFilename}`);

const logger = createLogger({
  level: 'info',
  exitOnError: false,
  format: format.json(),
  transports: [new transports.File({ filename: logFilename })],
});

app.get("/", (_, res) => {
  logger.info("Hello!");
  tracer.dogstatsd.distribution("our-sample-app.sample-metric", 1);
  res.status(200).json({ msg: "A traced endpoint with custom metrics" });
});

const port = process.env.PORT || 8080;
app.listen(port);
You can use npm install dd-trace to add the tracer to your package.
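For reference, a minimal package.json for this example might look like the following; the version ranges are illustrative rather than pinned recommendations.
package.json
{
  "name": "our-sample-app",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "dd-trace": "^5.0.0",
    "express": "^4.18.2",
    "winston": "^3.11.0"
  }
}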
Dockerfile
Your Dockerfile can look something like this. It creates a minimal application container with metrics, traces, logs, and profiling. Note that the image needs to be built for the x86_64 architecture (use the --platform linux/amd64 parameter for docker build).
FROM node:22-slim
WORKDIR /app
COPY app.js package.json package-lock.json ./
RUN npm ci --only=production
# Initialize the tracer
ENV NODE_OPTIONS="--require dd-trace/init"
EXPOSE 8080
CMD ["node", "app.js"]
Details
The dd-trace-js library provides support for Tracing, Metrics, and Profiling. Set the NODE_OPTIONS="--require dd-trace/init" environment variable in your Docker container to include the dd-trace/init module when the Node.js process starts.
Application logs need to be sent to a file that the sidecar container can access. The container setup is detailed below. Log and trace correlation is possible when logging is combined with the dd-trace-js library. The sidecar finds log files based on the DD_SERVERLESS_LOG_PATH environment variable, usually /shared-volume/logs/*.log, which forwards all files ending in .log in the /shared-volume/logs directory. The application container needs the DD_LOGS_INJECTION environment variable to be set because NODE_OPTIONS is used to actually start the tracer. If you do not use NODE_OPTIONS, call the dd-trace init method with the logInjection: true configuration parameter:
const tracer = require('dd-trace').init({
  logInjection: true,
});
Set DD_PROFILING_ENABLED to enable Profiling.
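If you configure the tracer in code rather than through environment variables, profiling can also be enabled in the init options. This is a minimal sketch assuming the dd-trace-js logInjection and profiling options, the programmatic equivalents of DD_LOGS_INJECTION and DD_PROFILING_ENABLED:
const tracer = require('dd-trace').init({
  logInjection: true, // equivalent to DD_LOGS_INJECTION=true
  profiling: true,    // equivalent to DD_PROFILING_ENABLED=true
});
Keep in mind that configuring the tracer this way instead of with NODE_OPTIONS gives up the startup tracing mentioned above.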
Containers
A sidecar container, gcr.io/datadoghq/serverless-init:latest, collects telemetry from your application container and sends it to Datadog. The sidecar container is configured with a health check for correct startup, a shared volume for log forwarding, and the environment variables documented below.
Environment Variables
| Variable | Container | Description |
|---|---|---|
| DD_SERVERLESS_LOG_PATH | Sidecar (and Application, see notes) | The path where the agent looks for logs. For example, /shared-volume/logs/*.log. Required. |
| DD_API_KEY | Sidecar | Datadog API key. Required. |
| DD_SITE | Sidecar | Datadog site. Required. |
| DD_LOGS_INJECTION | Sidecar and Application | When true, enrich all logs with trace data for supported loggers in Java, Node, .NET, and PHP. See additional docs for Python, Go, and Ruby. See also the details for your runtime above. |
| DD_SERVICE | Sidecar and Application | See Unified Service Tagging. |
| DD_VERSION | Sidecar | See Unified Service Tagging. |
| DD_ENV | Sidecar | See Unified Service Tagging. |
| DD_TAGS | Sidecar | See Unified Service Tagging. |
| DD_HEALTH_PORT | Sidecar | The port for sidecar health checks. For example, 9999. |
The DD_SERVERLESS_LOG_PATH environment variable is not required on the application container, but it can be set there and used to configure the application’s log filename. This avoids manually synchronizing the Cloud Run service’s log path with the application code that writes to it.
The DD_LOGS_ENABLED environment variable is not required.
You can also set DD_SOURCE on the sidecar to set the source tag on forwarded logs, which determines which Datadog log pipeline processes them.
- On the Cloud Run service page, select Edit & Deploy New Revision.
- Open the Volumes main tab and create a new volume for log forwarding.
  - Make an In-Memory volume called shared-logs.
  - You may set a size limit if necessary.
- Open the Containers main tab and click Add Container to add a new gcr.io/datadoghq/serverless-init:latest sidecar container.
  - Click Add health check to add a Startup check for the container.
    - Select the TCP probe type.
    - Choose any free port (9999, for example). You will need this port number for the DD_HEALTH_PORT variable.
  - Click the Variables & Secrets tab and add the required environment variables.
    - The DD_HEALTH_PORT variable should be the port for the TCP health check you configured.
    - The DD_SERVERLESS_LOG_PATH variable should be set to /shared-logs/logs/*.log, where /shared-logs is the volume mount point used in the next step.
    - See the table above for the other required and suggested environment variables.
  - Click the Volume Mounts tab and add the logs volume mount.
    - Mount it at the location that matches the prefix of DD_SERVERLESS_LOG_PATH, for example /shared-logs for a /shared-logs/logs/*.log log path.
- Edit the application container.
  - Click the Volume Mounts tab and add the logs volume mount.
    - Mount it to the same location that you did for the sidecar container, for example /shared-logs.
  - Click the Variables & Secrets tab and set the DD_SERVICE and DD_LOGS_INJECTION environment variables as you did for the sidecar.
  - Click the Settings tab and set the Container start up order to Depends on the sidecar container.
- Deploy the application.
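If you manage the service declaratively rather than through the console, the same configuration can be expressed as a Cloud Run service spec and applied with gcloud run services replace service.yaml. The following is a rough sketch based on the steps above; the service name, image path, region, and container names are placeholder assumptions, so verify the fields against the current Cloud Run documentation before using it.
service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: our-sample-app
spec:
  template:
    metadata:
      annotations:
        # Start the application container only after the sidecar passes its startup check.
        run.googleapis.com/container-dependencies: '{"app": ["datadog-sidecar"]}'
    spec:
      volumes:
        # In-memory volume shared between the application and the sidecar for log forwarding.
        - name: shared-logs
          emptyDir:
            medium: Memory
      containers:
        - name: app
          image: us-central1-docker.pkg.dev/my-project/my-repo/our-sample-app:latest
          ports:
            - containerPort: 8080
          env:
            - name: DD_SERVICE
              value: our-sample-app
            - name: DD_LOGS_INJECTION
              value: "true"
            - name: DD_SERVERLESS_LOG_PATH
              value: /shared-logs/logs/*.log
          volumeMounts:
            - name: shared-logs
              mountPath: /shared-logs
        - name: datadog-sidecar
          image: gcr.io/datadoghq/serverless-init:latest
          env:
            # Prefer referencing a Secret Manager secret for the API key in production.
            - name: DD_API_KEY
              value: <YOUR_DATADOG_API_KEY>
            - name: DD_SITE
              value: datadoghq.com
            - name: DD_SERVICE
              value: our-sample-app
            - name: DD_ENV
              value: prod
            - name: DD_VERSION
              value: "1.0.0"
            - name: DD_LOGS_INJECTION
              value: "true"
            - name: DD_SERVERLESS_LOG_PATH
              value: /shared-logs/logs/*.log
            - name: DD_HEALTH_PORT
              value: "9999"
          startupProbe:
            tcpSocket:
              port: 9999
          volumeMounts:
            - name: shared-logs
              mountPath: /shared-logs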
Add a service label
Add a service label that matches the DD_SERVICE value on the containers to the Google Cloud service. Access this through the service list by selecting the service and clicking the Labels button.
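A label can also be added from the command line; the service name, region, and label value below are placeholders:
gcloud run services update our-sample-app --region us-central1 --update-labels service=our-sample-app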
Further Reading
Additional helpful documentation, links, and articles: