
Overview

The OpenTelemetry Demo is a microservices demo application developed by the community to demonstrate OpenTelemetry (OTel) instrumentation and its observability capabilities. It is an e-commerce web page composed of multiple microservices communicating with each other through HTTP and gRPC. All services are instrumented with OpenTelemetry and produce traces, metrics, and logs.

This page guides you through the steps required to deploy the OpenTelemetry Demo and send its data to Datadog.

Prerequisites

To complete this guide, ensure you have the following:

  1. A Datadog account (create one if you haven’t yet).
  2. Your Datadog API key (find or create one).
  3. 6 GB of free RAM for the application.

You can deploy the demo using Docker or Kubernetes (with Helm). Choose your preferred deployment method and make sure you have the necessary tools installed:

For Docker:

  • Docker
  • Docker Compose v2.0.0+
  • Make (optional)

For Kubernetes:

  • Kubernetes 1.24+
  • Helm 3.9+
  • An active Kubernetes cluster with kubectl configured to connect to it

Configuring and deploying the demo

Cloning the repository

Clone the opentelemetry-demo repository to your device:

git clone https://github.com/open-telemetry/opentelemetry-demo.git
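
The steps that follow assume you are working from the root of the cloned repository:

cd opentelemetry-demo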

Configuring the OpenTelemetry Collector

To send the demo’s telemetry data to Datadog, you need to add three components to the OpenTelemetry Collector configuration:

  • Resource Processor: an optional but recommended component, used to set the env tag for Datadog.
  • Datadog Connector: responsible for computing Datadog APM trace metrics.
  • Datadog Exporter: responsible for exporting traces, metrics, and logs to Datadog.

Complete the following steps to configure these three components for your chosen deployment method.

For Docker deployments:

  1. In the root folder of the demo repository, create a file called docker-compose.override.yml.

  2. Open the file you created, paste the following content, and set the Datadog site and API key environment variables:

    services: 
      otelcol:
        command: 
          - "--config=/etc/otelcol-config.yml"
          - "--config=/etc/otelcol-config-extras.yml"
          - "--feature-gates=exporter.datadogexporter.UseLogsAgentExporter"
        environment:
          - DD_SITE_PARAMETER=<Your API Site>
          - DD_API_KEY=<Your API Key>
    
  3. To configure the OpenTelemetry Collector, open src/otelcollector/otelcol-config-extras.yml and add the following to the file:

    exporters:
      datadog:
        traces:
          span_name_as_resource_name: true
          trace_buffer: 500
        hostname: "otelcol-docker"
        api:
          site: ${env:DD_SITE_PARAMETER}
          key: ${env:DD_API_KEY}
    
    processors:
      resource:
        attributes:
          - key: deployment.environment
            value: "otel"
            action: upsert
    
    connectors:
      datadog/connector:
        traces:
          span_name_as_resource_name: true
    
    service:
      pipelines:
        traces:
          processors: [resource, batch]
          exporters: [otlp, debug, spanmetrics, datadog, datadog/connector]
        metrics:
          receivers: [docker_stats, httpcheck/frontendproxy, otlp, prometheus, redis, spanmetrics, datadog/connector]
          processors: [resource, batch]
          exporters: [otlphttp/prometheus, debug, datadog]
        logs:
          processors: [resource, batch]
          exporters: [opensearch, debug, datadog]
    

    By default, the collector in the demo application merges the configuration from two files:

    • src/otelcollector/otelcol-config.yml: contains the default configuration for the collector.
    • src/otelcollector/otelcol-config-extras.yml: used to add extra configuration to the collector.

    When merging YAML values, objects are merged and arrays are replaced. That is why the pipelines above list more components than this file actually configures: the extras file does not replace the values configured in the main otelcol-config.yml file, as the sketch below illustrates.
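
    For example, here is a minimal sketch of this merge behavior (illustrative only, not taken from the demo's configuration files). If the base file contains:

    service:
      pipelines:
        traces:
          exporters: [otlp, debug]

    and the extras file contains:

    service:
      pipelines:
        traces:
          exporters: [otlp, debug, datadog]

    then the service and pipelines objects are merged, while the exporters array from the extras file replaces the base array. This is why the extras file must repeat otlp and debug even though they are already configured in the base file.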
For Kubernetes (Helm) deployments:

  1. Create a secret named dd-secrets to store the Datadog site and API key:

    kubectl create secret generic dd-secrets --from-literal="DD_SITE_PARAMETER=<Your API Site>" --from-literal="DD_API_KEY=<Your API Key>"
    
  2. Add the OpenTelemetry Helm chart repository so you can manage and deploy the OpenTelemetry Demo:

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    
  3. Create a file named my-values-file.yml with the following content:

    opentelemetry-collector:
      extraEnvsFrom:
        - secretRef:
            name: dd-secrets
      config:
        exporters:
          datadog:
            traces:
              span_name_as_resource_name: true
              trace_buffer: 500
            hostname: "otelcol-helm"
            api:
              site: ${DD_SITE_PARAMETER}
              key: ${DD_API_KEY}
    
        processors:
          resource:
            attributes:
              - key: deployment.environment
                value: "otel"
                action: upsert
    
        connectors:
          datadog/connector:
            traces:
              span_name_as_resource_name: true
    
        service:
          pipelines:
            traces:
              processors: [resource, batch]
              exporters: [otlp, debug, spanmetrics, datadog, datadog/connector]
            metrics:
              receivers: [httpcheck/frontendproxy, otlp, redis, spanmetrics, datadog/connector]
              processors: [resource, batch]
              exporters: [otlphttp/prometheus, debug, datadog]
            logs:
              processors: [resource, batch]
              exporters: [opensearch, debug, datadog]
    
    When merging YAML values, objects are merged and arrays are replaced. That is why the pipelines above list more components than this file actually configures: these values do not replace the collector configuration that the demo's Helm chart provides by default. To preview the merged result before installing, see the optional command after this list.
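
    If you want to inspect the collector configuration that Helm renders from these values before installing anything, you can render the chart locally (optional; this assumes the open-telemetry repository was added in step 2):

    helm template my-otel-demo open-telemetry/opentelemetry-demo --values my-values-file.yml

    The rendered manifests include the collector configuration, so you can confirm that the datadog exporter and datadog/connector appear in the pipelines.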

Running the demo

For Docker deployments, if you have make installed, you can use the following command to start the demo:

make start

If you don’t have make installed, you can use the docker compose command directly:

docker compose up --force-recreate --remove-orphans --detach
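
Once the containers are up, you can check their status with:

docker compose ps

All demo services should report a running (or healthy) state before you continue.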

To deploy the demo application on Kubernetes using Helm, run the following command:

helm install my-otel-demo open-telemetry/opentelemetry-demo --values my-values-file.yml
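
After the Helm release is installed, verify that the demo pods are starting up:

kubectl get pods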

You can access the Astronomy Shop web UI to explore the application and observe how the telemetry data is generated.

  1. If you are running the demo on a local Kubernetes cluster, port-forward the frontend proxy:

    kubectl port-forward svc/my-otel-demo-frontendproxy 8080:8080
    
  2. Go to http://localhost:8080.

Telemetry data correlation

The instrumentation steps used in each of the demo’s services can be found in the main OpenTelemetry documentation.

You can find the language in which each service is implemented, as well as its documentation, in the language feature reference table.

Exploring OpenTelemetry data in Datadog

When the OTel Demo is running, the built-in load generator simulates traffic in the application. After a couple of seconds you can see data arriving in Datadog.

Service Catalog

View all services that are part of the OTel Demo:

  1. Go to APM > Service Catalog.
View Service Catalog page with list of services from OpenTelemetry demo application
  2. Select Map to see how the services are connected. Change the Map layout to Cluster or Flow to view the map in different modes.
View Service Map Flow with all services connected
  3. Select the List view, then select a service to view a performance summary in the side panel.
View summary of performance and setup guidance from specific service

Trace Explorer

Explore traces received from the OTel Demo:

  1. From Performance > Setup Guidance, click View Traces to open the Trace Explorer, with the selected service applied as a filter.
Traces view with all indexed spans for checkout service
  2. Select an indexed span to view the full trace details for this transaction.
Trace view with all spans belonging to that specific transaction
  3. Navigate through the tabs to view additional details:
    • Infrastructure metrics for the services reporting host metrics.
    • Runtime metrics for the services that support them.
    • Log entries correlated with this trace.
    • Span links associated with this trace.

Trace Queries

Datadog allows you to filter and group the received OpenTelemetry data. For example, to find all transactions from a specific user, you can use Trace Queries.

The OTel Demo sends user.id as a span tag, so you can use it to filter all transactions triggered by a given user (an example query is shown after the following steps):

  1. From Info in the side panel, hover over the line with the user ID, click the cog icon, and select filter by @app.user.id:<user_id>.

  2. Remove any previous filters, leaving only @app.user.id applied to view all transactions containing spans with the specified user ID.

Trace query filtering all spans that contain a specific app.user.id
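
The resulting query looks roughly like the following sketch, where the user ID is a placeholder and the additional service filter (shown here as checkout) is optional and may be named differently in your account:

    @app.user.id:<user_id> service:checkout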

Error Tracking

The OpenTelemetry Demo includes flagd, a feature flag evaluation engine, which is used to simulate error scenarios.

  1. Open the src/flagd/demo.flagd.json file and set the defaultVariant to on for one of the cases; the OpenTelemetry Demo documentation lists the available cases, and a sketch of a flag definition follows these steps.
  2. After the demo starts producing errors, you can visualize and track down the affected services in Datadog.
Error tracking view showing error PaymentService Fail Feature Flag Enabled
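
The flag definitions in demo.flagd.json follow the flagd flag format, where each flag declares its variants and a defaultVariant. The following is a rough sketch only: the paymentServiceFailure flag name comes from the demo, but its description and variants may differ between demo versions. The relevant change is switching defaultVariant from off to on:

    {
      "flags": {
        "paymentServiceFailure": {
          "description": "Fail payment service charge requests",
          "state": "ENABLED",
          "variants": {
            "on": true,
            "off": false
          },
          "defaultVariant": "on"
        }
      }
    }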
