Google Cloud Run is a fully managed serverless platform for deploying and scaling container-based applications. Datadog provides monitoring and log collection for Cloud Run through the Google Cloud integration.
This page describes instrumenting Cloud Run applications with the Datadog sidecar. To instrument with `serverless-init`, see Instrument Google Cloud Run with serverless-init.

In your main application, add the `dd-trace-js` library. See Tracing Node.js applications for instructions.
Set `ENV NODE_OPTIONS="--require dd-trace/init"` in your Dockerfile. This specifies that the `dd-trace/init` module is required when the Node.js process starts.
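Taken together, a minimal Node.js Dockerfile might look like the following sketch. The `node:20-slim` base image and `server.js` entrypoint are illustrative assumptions; it also assumes `dd-trace` is listed in your `package.json`:

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Load the tracer before any application code runs
ENV NODE_OPTIONS="--require dd-trace/init"
EXPOSE 8080
CMD ["node", "server.js"]
```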
The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler. See Enabling the Node.js Profiler to add the environment variables.
The tracing library also collects custom metrics. See the code examples.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see Node.js Log Collection. To set up trace log correlation, see Correlating Node.js Logs and Traces.
In your main application, add the `dd-trace-py` library. See Tracing Python Applications for instructions. You can also use Tutorial - Enabling Tracing for a Python Application and Datadog Agent in Containers.
The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler. See Enabling the Python Profiler to add the environment variables.
The tracing library also collects custom metrics. See the code examples.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see Python Log Collection. Python Logging Best Practices can also be helpful. To set up trace log correlation, see Correlating Python Logs and Traces.
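As a minimal illustration of what "write logs to the shared volume" means in practice, the stdlib-only sketch below writes JSON lines to a path the sidecar could tail. The directory name and log fields are assumptions for the example; the guides above add trace correlation via `ddtrace`:

```python
import json
import logging
import os

# The sidecar tails files matching DD_SERVERLESS_LOG_PATH, e.g. shared-volume/logs/*.log
log_dir = "shared-volume/logs"
os.makedirs(log_dir, exist_ok=True)

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line so the sidecar can parse it."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.FileHandler(os.path.join(log_dir, "app.log"))
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Hello from the main container")
```

Each call to `logger.info` appends one JSON object per line to `shared-volume/logs/app.log`, which matches the glob you pass in `DD_SERVERLESS_LOG_PATH`.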
In your main application, add the `dd-trace-java` library. Follow the instructions in Tracing Java Applications or use the following example Dockerfile to add and start the tracing library with automatic instrumentation:
FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
COPY target/cloudrun-java-1.jar cloudrun-java-1.jar
# Add the Datadog tracer
ADD 'https://dtdg.co/latest-java-tracer' dd-java-agent.jar
EXPOSE 8080
# Start the Datadog tracer with the javaagent argument
ENTRYPOINT [ "java", "-javaagent:dd-java-agent.jar", "-jar", "cloudrun-java-1.jar" ]
The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler. See Enabling the Java Profiler to add the environment variables.
To collect custom metrics, install the Java DogStatsD client.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see Java Log Collection. To set up trace log correlation, see Correlating Java Logs and Traces.
In your main application, add the `dd-trace-go` library. See Tracing Go Applications for instructions.
The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler. See Enabling the Go Profiler to add the environment variables.
The tracing library also collects custom metrics. See the code examples.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see Go Log Collection. To set up trace log correlation, see Correlating Go Logs and Traces.
In your main application, add the .NET tracing library. See Tracing .NET Applications for instructions.
Example Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy
WORKDIR /app
COPY ./bin/Release/net8.0/publish /app
ADD https://github.com/DataDog/dd-trace-dotnet/releases/download/v2.56.0/datadog-dotnet-apm_2.56.0_amd64.deb /opt/datadog/datadog-dotnet-apm_2.56.0_amd64.deb
RUN dpkg -i /opt/datadog/datadog-dotnet-apm_2.56.0_amd64.deb
RUN mkdir -p /shared-volume/logs/
ENV CORECLR_ENABLE_PROFILING=1
ENV CORECLR_PROFILER={846F5F1C-F9AE-4B07-969E-05C26BC060D8}
ENV CORECLR_PROFILER_PATH=/opt/datadog/Datadog.Trace.ClrProfiler.Native.so
ENV DD_DOTNET_TRACER_HOME=/opt/datadog/
ENV DD_TRACE_DEBUG=true
ENTRYPOINT ["dotnet", "dotnet.dll"]
The profiler is shipped within Datadog tracing libraries. If you are already using APM to collect traces for your application, you can skip installing the library and go directly to enabling the profiler. See Enabling the .NET Profiler to add the environment variables. The previous Dockerfile example also has the environment variables for the profiler.
The tracing library also collects custom metrics. See the code examples.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see C# Log Collection. To set up trace log correlation, see Correlating .NET Logs and Traces.
In your main application, add the `dd-trace-php` library. See Tracing PHP Applications for instructions.
The tracing library also collects custom metrics. See the code examples.
The Datadog sidecar collects logs through a shared volume. To forward logs from your main container to the sidecar, configure your application to write all logs to a location such as `shared-volume/logs/*.log` using the steps below. During the container step, add the environment variable `DD_SERVERLESS_LOG_PATH` and a shared volume mount to both the main and sidecar containers. If you deploy using YAML or Terraform, the environment variables, health check, and volume mount are already added.
To set up logging in your application, see PHP Log Collection. To set up trace log correlation, see Correlating PHP Logs and Traces.
In Cloud Run, select Edit & Deploy New Revision.
At the bottom of the page, select Add Container.
For Container image URL, select `gcr.io/datadoghq/serverless-init:latest`.
Go to Volume Mounts and set up a volume mount for logs. Ensure that the mount path matches your application's write location. For example, name the volume `shared-volume` and mount it at `/shared-volume`, matching the YAML and Terraform examples on this page.
Go to Settings and add a startup check.
Go to Variables & Secrets and add the following environment variables as name-value pairs:
- `DD_SERVICE`: A name for your service. For example, `gcr-sidecar-test`.
- `DD_ENV`: A name for your environment. For example, `dev`.
- `DD_SERVERLESS_LOG_PATH`: Your log path. For example, `/shared-volume/logs/*.log`.
- `DD_API_KEY`: Your Datadog API key.
- `DD_HEALTH_PORT`: The port you selected for the startup check in the previous step.

For a list of all environment variables, including additional tags, see Environment variables.
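If you script your deployments, the same variables can also be set with the `gcloud` CLI instead of the console. This is a sketch: the `--container` flag requires a gcloud version with sidecar support, the container name `serverless-init-1` matches the YAML example on this page, and the values shown are the examples above:

```shell
gcloud run services update <SERVICE_NAME> \
  --region <LOCATION> \
  --container serverless-init-1 \
  --set-env-vars "DD_SERVICE=gcr-sidecar-test,DD_ENV=dev,DD_SERVERLESS_LOG_PATH=/shared-volume/logs/*.log,DD_API_KEY=<API_KEY>,DD_HEALTH_PORT=12345"
```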
Tag your GCP entity with the `service` label to correlate your traces with your service. Add the same value as the `DD_SERVICE` environment variable that you set for the sidecar container to a `service` label on your Cloud Run service, inside the info panel of your service.
| Name | Value |
|---|---|
| `service` | The name of your service, matching the `DD_SERVICE` env var. |
For more information on how to add labels, see Google Cloud’s Configure labels for services documentation.
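The label can also be applied from the `gcloud` CLI, for example (a sketch; substitute your service name and region):

```shell
gcloud run services update <SERVICE_NAME> \
  --region <LOCATION> \
  --update-labels service=<SERVICE_NAME>
```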
To deploy your Cloud Run service with YAML service specification, use the following example configuration file.
Create a YAML file that contains the following:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: '<SERVICE_NAME>'
  labels:
    cloud.googleapis.com/location: '<LOCATION>'
    service: '<SERVICE_NAME>'
spec:
  template:
    metadata:
      labels:
        service: '<SERVICE_NAME>'
      annotations:
        autoscaling.knative.dev/maxScale: '100' # The maximum number of instances that can be created for this service. https://cloud.google.com/run/docs/reference/rest/v1/RevisionTemplate
        run.googleapis.com/container-dependencies: '{"run-sidecar-1":["serverless-init-1"]}' # Configure container start order for sidecar deployments https://cloud.google.com/run/docs/configuring/services/containers#container-ordering
        run.googleapis.com/startup-cpu-boost: 'true' # The startup CPU boost feature for revisions provides additional CPU during instance startup time and for 10 seconds after the instance has started. https://cloud.google.com/run/docs/configuring/services/cpu#startup-boost
    spec:
      containers:
        - env:
            - name: DD_SERVICE
              value: '<SERVICE_NAME>'
          image: '<CONTAINER_IMAGE>'
          name: run-sidecar-1
          ports:
            - containerPort: 8080
              name: http1
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
          startupProbe:
            failureThreshold: 1
            periodSeconds: 240
            tcpSocket:
              port: 8080
            timeoutSeconds: 240
          volumeMounts:
            - mountPath: /shared-volume
              name: shared-volume
        - env:
            - name: DD_SERVERLESS_LOG_PATH
              value: shared-volume/logs/*.log
            - name: DD_SITE
              value: '<DATADOG_SITE>'
            - name: DD_ENV
              value: serverless
            - name: DD_API_KEY
              value: '<API_KEY>'
            - name: DD_SERVICE
              value: '<SERVICE_NAME>'
            - name: DD_VERSION
              value: '<VERSION>'
            - name: DD_LOG_LEVEL
              value: debug
            - name: DD_LOGS_INJECTION
              value: 'true'
            - name: DD_HEALTH_PORT
              value: '12345'
          image: gcr.io/datadoghq/serverless-init:latest
          name: serverless-init-1
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi # Can be updated to a higher memory if needed
          startupProbe:
            failureThreshold: 3
            periodSeconds: 10
            tcpSocket:
              port: 12345
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /shared-volume
              name: shared-volume
      volumes:
        - emptyDir:
            medium: Memory
            sizeLimit: 512Mi
          name: shared-volume
  traffic: # make this revision and all future ones serve 100% of the traffic as soon as possible, overriding any established traffic split
    - latestRevision: true
      percent: 100
In this example, the environment variables, startup health check, and volume mount are already added. If you don’t want to enable logs, remove the shared volume. Ensure the container port for the main container is the same as the one exposed in your Dockerfile/service.
Supply placeholder values:
- `<SERVICE_NAME>`: A name for your service. For example, `gcr-sidecar-test`. See Unified Service Tagging.
- `<LOCATION>`: The region you are deploying your service in. For example, `us-central1`.
- `<DATADOG_SITE>`: Your Datadog site.
- `<API_KEY>`: Your Datadog API key.
- `<VERSION>`: The version number of your deployment. See Unified Service Tagging.
- `<CONTAINER_IMAGE>`: The image of the code you are deploying to Cloud Run. For example, `us-docker.pkg.dev/cloudrun/container/hello`.

Run:
gcloud run services replace <FILENAME>.yaml
To deploy your Cloud Run service with Terraform, use the following example configuration file. In this example, the environment variables, startup health check, and volume mount are already added. If you don’t want to enable logs, remove the shared volume. Ensure the container port for the main container is the same as the one exposed in your Dockerfile/service. If you do not want to allow public access, remove the IAM policy section.
provider "google" {
  project = "<PROJECT_ID>"
  region  = "<LOCATION>" # example: us-central1
}

resource "google_cloud_run_service" "terraform_with_sidecar" {
  name     = "<SERVICE_NAME>"
  location = "<LOCATION>"

  template {
    metadata {
      annotations = {
        # Correctly formatted container-dependencies annotation
        "run.googleapis.com/container-dependencies" = jsonencode({ main-app = ["sidecar-container"] })
      }
      labels = {
        service = "<SERVICE_NAME>"
      }
    }
    spec {
      # Define shared volume
      volumes {
        name = "shared-volume"
        empty_dir {
          medium = "Memory"
        }
      }
      # Main application container
      containers {
        name  = "main-app"
        image = "<CONTAINER_IMAGE>"
        # Expose a port for the main container
        ports {
          container_port = 8080
        }
        # Mount the shared volume
        volume_mounts {
          name       = "shared-volume"
          mount_path = "/shared-volume"
        }
        # Startup probe for TCP health check
        startup_probe {
          tcp_socket {
            port = 8080
          }
          initial_delay_seconds = 0  # Delay before the probe starts
          period_seconds        = 10 # Time between probes
          failure_threshold     = 3  # Number of failures before marking as unhealthy
          timeout_seconds       = 1  # Seconds before the probe times out
        }
        # Environment variables for the main container
        env {
          name  = "DD_SERVICE"
          value = "<SERVICE_NAME>"
        }
        # Resource limits for the main container
        resources {
          limits = {
            memory = "512Mi"
            cpu    = "1"
          }
        }
      }
      # Sidecar container
      containers {
        name  = "sidecar-container"
        image = "gcr.io/datadoghq/serverless-init:latest"
        # Mount the shared volume
        volume_mounts {
          name       = "shared-volume"
          mount_path = "/shared-volume"
        }
        # Startup probe for TCP health check
        startup_probe {
          tcp_socket {
            port = 12345
          }
          initial_delay_seconds = 0  # Delay before the probe starts
          period_seconds        = 10 # Time between probes
          failure_threshold     = 3  # Number of failures before marking as unhealthy
          timeout_seconds       = 1  # Seconds before the probe times out
        }
        # Environment variables for the sidecar container
        env {
          name  = "DD_SITE"
          value = "<DATADOG_SITE>"
        }
        env {
          name  = "DD_SERVERLESS_LOG_PATH"
          value = "shared-volume/logs/*.log"
        }
        env {
          name  = "DD_ENV"
          value = "serverless"
        }
        env {
          name  = "DD_API_KEY"
          value = "<API_KEY>"
        }
        env {
          name  = "DD_SERVICE"
          value = "<SERVICE_NAME>"
        }
        env {
          name  = "DD_VERSION"
          value = "<VERSION>"
        }
        env {
          name  = "DD_LOG_LEVEL"
          value = "debug"
        }
        env {
          name  = "DD_LOGS_INJECTION"
          value = "true"
        }
        env {
          name  = "DD_HEALTH_PORT"
          value = "12345"
        }
        # Resource limits for the sidecar
        resources {
          limits = {
            memory = "512Mi"
            cpu    = "1"
          }
        }
      }
    }
  }

  # Define traffic splitting
  traffic {
    percent         = 100
    latest_revision = true
  }
}

# IAM member to allow public access (optional, adjust as needed)
resource "google_cloud_run_service_iam_member" "invoker" {
  service  = google_cloud_run_service.terraform_with_sidecar.name
  location = google_cloud_run_service.terraform_with_sidecar.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
# IAM Member to allow public access (optional, adjust as needed)
resource "google_cloud_run_service_iam_member" "invoker" {
service = google_cloud_run_service.terraform_with_sidecar.name
location = google_cloud_run_service.terraform_with_sidecar.location
role = "roles/run.invoker"
member = "allUsers"
}
Supply placeholder values:
- `<PROJECT_ID>`: Your Google Cloud project ID.
- `<LOCATION>`: The region you are deploying your service in. For example, `us-central1`.
- `<SERVICE_NAME>`: A name for your service. For example, `gcr-sidecar-test`. See Unified Service Tagging.
- `<CONTAINER_IMAGE>`: The image of the code you are deploying to Cloud Run.
- `<DATADOG_SITE>`: Your Datadog site.
- `<API_KEY>`: Your Datadog API key.
- `<VERSION>`: The version number of your deployment. See Unified Service Tagging.

| Variable | Description |
|---|---|
| `DD_API_KEY` | Your Datadog API key. Required. |
| `DD_SITE` | Your Datadog site. Required. |
| `DD_LOGS_INJECTION` | When `true`, enriches all logs with trace data for supported loggers in Java, Node, .NET, and PHP. See additional docs for Python, Go, and Ruby. |
| `DD_SERVICE` | See Unified Service Tagging. |
| `DD_VERSION` | See Unified Service Tagging. |
| `DD_ENV` | See Unified Service Tagging. |
| `DD_SOURCE` | See Unified Service Tagging. |
| `DD_TAGS` | See Unified Service Tagging. |
Do not use the `DD_LOGS_ENABLED` environment variable. This variable is only used for the `serverless-init` install method.
The following example contains a single app with tracing, metrics, and logs set up.
const tracer = require('dd-trace').init({
  logInjection: true,
});
const express = require("express");
const app = express();
const { createLogger, format, transports } = require('winston');

// Write JSON logs to the shared volume so the sidecar can collect them
const logger = createLogger({
  level: 'info',
  exitOnError: false,
  format: format.json(),
  transports: [
    new transports.File({ filename: `/shared-volume/logs/app.log` }),
  ],
});

app.get("/", (_, res) => {
  logger.info("Welcome!");
  res.sendStatus(200);
});

app.get("/hello", (_, res) => {
  logger.info("Hello!");
  const metricPrefix = "nodejs-cloudrun";
  // Send three unique metrics, just so we're testing more than one single metric
  const metricsToSend = ["sample_metric_1", "sample_metric_2", "sample_metric_3"];
  metricsToSend.forEach((metric) => {
    for (let i = 0; i < 20; i++) {
      tracer.dogstatsd.distribution(`${metricPrefix}.${metric}`, 1);
    }
  });
  res.status(200).json({ msg: "Sending metrics to Datadog" });
});

const port = process.env.PORT || 8080;
app.listen(port);
import ddtrace
from flask import Flask, render_template, request
import logging
from datadog import initialize, statsd

ddtrace.patch(logging=True)

app = Flask(__name__)

options = {
    'statsd_host': '127.0.0.1',
    'statsd_port': 8125
}

FORMAT = ('%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] '
          '[dd.service=%(dd.service)s dd.env=%(dd.env)s dd.version=%(dd.version)s dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] '
          '- %(message)s')
# Write logs to the shared volume so the sidecar can collect them
logging.basicConfig(level=logging.DEBUG, filename='/shared-volume/logs/app.log', format=FORMAT)
logger = logging.getLogger(__name__)
logger.level = logging.INFO

ddlogs = []

@ddtrace.tracer.wrap(service="dd_gcp_log_forwader")
@app.route('/', methods=["GET"])
def index():
    log = request.args.get("log")
    if log is not None:
        with ddtrace.tracer.trace('sending_logs') as span:
            statsd.increment('dd.gcp.logs.sent')
            span.set_tag('logs', 'nina')
            logger.info(log)
            ddlogs.append(log)
    return render_template("home.html", logs=ddlogs)

if __name__ == '__main__':
    ddtrace.tracer.configure(port=8126)
    initialize(**options)
    app.run(debug=True)
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Datadog Test</title>
</head>
<body>
<h1>Welcome to Datadog!💜</h1>
<form action="">
<input type="text" name="log" placeholder="Enter Log">
<button>Add Log</button>
</form>
<h3>Logs Sent to Datadog:</h3>
<ul>
{% for log in logs%}
{% if log %}
<li>{{ log }}</li>
{% endif %}
{% endfor %}
</ul>
</body>
</html>
package com.example.springboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import com.timgroup.statsd.NonBlockingStatsDClientBuilder;
import com.timgroup.statsd.StatsDClient;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

@RestController
public class HelloController {
    private static final StatsDClient statsd = new NonBlockingStatsDClientBuilder().hostname("localhost").build();
    private static final Log logger = LogFactory.getLog(HelloController.class);

    @GetMapping("/")
    public String index() {
        statsd.incrementCounter("page.views");
        logger.info("Hello Cloud Run!");
        return "💜 Hello Cloud Run! 💜";
    }
}
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"path/filepath"

	"github.com/DataDog/datadog-go/v5/statsd"
	"github.com/DataDog/dd-trace-go/v2/ddtrace"
	"github.com/DataDog/dd-trace-go/v2/ddtrace/tracer"
)

const logDir = "/shared-volume/logs"

var logFile *os.File
var logCounter int
var dogstatsdClient *statsd.Client

func handler(w http.ResponseWriter, r *http.Request) {
	log.Println("Yay!! Main container works")
	span := tracer.StartSpan("maincontainer", tracer.ResourceName("/handler"))
	defer span.Finish()
	logCounter++
	writeLogsToFile(fmt.Sprintf("received request %d", logCounter), span.Context())
	dogstatsdClient.Incr("request.count", []string{"test-tag"}, 1)
}

func writeLogsToFile(log_msg string, context ddtrace.SpanContext) {
	span := tracer.StartSpan(
		"writeLogToFile",
		tracer.ResourceName("/writeLogsToFile"),
		tracer.ChildOf(context))
	defer span.Finish()
	_, err := logFile.WriteString(log_msg + "\n")
	if err != nil {
		log.Println("Error writing to log file:", err)
	}
}

func main() {
	log.Print("Main container started...")

	err := os.MkdirAll(logDir, 0755)
	if err != nil {
		panic(err)
	}
	logFilePath := filepath.Join(logDir, "maincontainer.log")
	log.Println("Saving logs in ", logFilePath)
	logFileLocal, err := os.OpenFile(logFilePath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
	if err != nil {
		panic(err)
	}
	defer logFileLocal.Close()
	logFile = logFileLocal

	dogstatsdClient, err = statsd.New("localhost:8125")
	if err != nil {
		panic(err)
	}
	defer dogstatsdClient.Close()

	tracer.Start()
	defer tracer.Stop()

	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Serilog;
using Serilog.Formatting.Json;
using Serilog.Formatting.Compact;
using Serilog.Sinks.File;
using StatsdClient;

namespace dotnet.Pages;

public class IndexModel : PageModel
{
    private static readonly DogStatsdService _dsd;

    static IndexModel()
    {
        var dogstatsdConfig = new StatsdConfig
        {
            StatsdServerName = "127.0.0.1",
            StatsdPort = 8125,
        };
        _dsd = new DogStatsdService();
        _dsd.Configure(dogstatsdConfig);

        // Write JSON logs to the shared volume so the sidecar can collect them
        Log.Logger = new LoggerConfiguration()
            .WriteTo.File(new RenderedCompactJsonFormatter(), "/shared-volume/logs/app.log")
            .CreateLogger();
    }

    public void OnGet()
    {
        _dsd.Increment("page.views");
        Log.Information("Hello Cloud Run!");
    }
}
<?php

require __DIR__ . '/vendor/autoload.php';

use DataDog\DogStatsd;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;

$statsd = new DogStatsd(
    array(
        'host' => '127.0.0.1',
        'port' => 8125,
    )
);

// Write JSON logs to the shared volume so the sidecar can collect them
$log = new Logger('datadog');
$formatter = new JsonFormatter();
$stream = new StreamHandler('/shared-volume/logs/app.log', Logger::DEBUG);
$stream->setFormatter($formatter);
$log->pushHandler($stream);

$log->info("Hello Datadog!");
echo '💜 Hello Datadog! 💜';

$log->info("sending a metric");
$statsd->increment('page.views', 1, array('environment' => 'dev'));