This page describes the EKS Fargate integration. For ECS Fargate, see the documentation for Datadog's ECS Fargate integration.
Amazon EKS on AWS Fargate is a managed Kubernetes service that automates certain aspects of deployment and maintenance for any standard Kubernetes environment. Kubernetes nodes are managed by AWS Fargate and abstracted away from the user.
Note: Cloud Network Monitoring (CNM) is not supported for EKS Fargate.
These steps cover the setup of the Datadog Agent v7.17+ in a container within Amazon EKS on AWS Fargate. See the Datadog-Amazon EKS integration documentation if you are not using AWS Fargate.
AWS Fargate pods are not physical pods, which means host-based system checks, such as CPU and memory, are not available. To collect data from your AWS Fargate pods, you must run the Agent as a sidecar of your application pod with custom RBAC, which enables these features:
Kubernetes metrics collection from the pod running your application containers and the Agent
If you do not specify through an AWS Fargate profile that your pods must run on Fargate, your pods may be scheduled on classic EC2 instances. If that is the case, see the Datadog-Amazon EKS integration setup to collect data from them. This works by running the Agent as an EC2-type workload. The Agent setup is the same as the standard Kubernetes Agent setup, and all options are available. To deploy the Agent on EC2 nodes, use the DaemonSet setup for the Datadog Agent.
You can run the Agent as a sidecar by using the Datadog Admission Controller (requires Cluster Agent v7.52+) or with manual sidecar configuration. With the Admission Controller, you can inject an Agent sidecar into every pod that has the label agent.datadoghq.com/sidecar:fargate.
With manual configuration, you must modify every workload manifest when adding or changing the Agent sidecar. Datadog recommends you use the Admission Controller.
Set up RBAC in the application namespace(s). See the AWS EKS Fargate RBAC section on this page.
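The RBAC referenced above is roughly of the following shape. This is an illustrative sketch only; the role name and exact rules below are assumptions, and the AWS EKS Fargate RBAC section on this page is the authoritative manifest:

```yaml
## Illustrative sketch -- see the AWS EKS Fargate RBAC section for the
## authoritative rules. Role names and rules here are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datadog-agent
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datadog-agent-fargate
rules:
  - apiGroups: [""]
    resources: ["nodes", "namespaces", "endpoints"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["nodes/metrics", "nodes/spec", "nodes/stats", "nodes/proxy", "nodes/pods", "nodes/healthz"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-agent-fargate
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-agent-fargate
subjects:
  - kind: ServiceAccount
    name: datadog-agent
    namespace: default
```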
Bind the RBAC above to your application pod by setting its service account name.
Create a Kubernetes secret datadog-secret containing your Datadog API key and Cluster Agent token in the Datadog installation and application namespaces:
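For example, the secret can be sketched as a manifest applied to each namespace (you can equally use `kubectl create secret generic`; the key names `api-key` and `token` are assumptions matching common Datadog defaults, and the namespace is a placeholder):

```yaml
## Sketch: create the same secret in the Datadog installation namespace
## and in each application namespace. Key names are assumed defaults.
apiVersion: v1
kind: Secret
metadata:
  name: datadog-secret   # This name cannot be changed
  namespace: <NAMESPACE>
type: Opaque
stringData:
  api-key: <YOUR_DATADOG_API_KEY>
  token: <32_CHARACTER_LONG_TOKEN>
```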
For more information about how these secrets are used, see the Cluster Agent Setup.
Note: You cannot change the name of the secret containing the Datadog API key and Cluster Agent token. It must be datadog-secret for the Agent in the sidecar to connect to Datadog.
Setup
Create a DatadogAgent custom resource in the datadog-agent.yaml with Admission Controller enabled:
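A minimal sketch of such a resource follows; field paths follow the DatadogAgent `v2alpha1` schema, and the secret key name is an assumption based on the `datadog-secret` created earlier:

```yaml
## Sketch of datadog-agent.yaml with Fargate sidecar injection enabled.
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    clusterName: <CLUSTER_NAME>
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key   # assumed key name in the secret
  features:
    admissionController:
      agentSidecarInjection:
        enabled: true
        provider: fargate
```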
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label agent.datadoghq.com/sidecar:fargate.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The sidecar is automatically configured using internal defaults, with additional settings to run in an EKS Fargate environment. The sidecar uses the image repository and tags set in datadog-agent.yaml. Communication between Cluster Agent and sidecars is enabled by default.
To further configure the Agent or its container resources, use the properties in your DatadogAgent resource. Use the spec.features.admissionController.agentSidecarInjection.profiles property to add environment variable definitions and resource settings. Use the spec.features.admissionController.agentSidecarInjection.selectors property to configure a custom selector to target workload pods instead of updating the workload to add agent.datadoghq.com/sidecar:fargate labels.
Create a DatadogAgent custom resource in your datadog-agent.yaml file that configures a sidecar profile and a custom pod selector.
Example
In the following example, a selector targets all pods with the label "app": redis. The sidecar profile configures a DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED environment variable and resource settings.
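This can be sketched as the following DatadogAgent resource; field paths follow the `v2alpha1` schema, and the resource request and limit values are illustrative:

```yaml
## Sketch: sidecar injection with a custom selector and a profile.
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  features:
    admissionController:
      agentSidecarInjection:
        enabled: true
        provider: fargate
        selectors:
          - objectSelector:
              matchLabels:
                app: redis
        profiles:
          - env:
              - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
                value: "true"
            resources:
              requests:       # illustrative values
                cpu: "400m"
                memory: "256Mi"
              limits:
                cpu: "800m"
                memory: "512Mi"
```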
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label app:redis.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The environment variables and resource settings from datadog-agent.yaml are automatically applied.
Set up RBAC in the application namespace(s). See the AWS EKS Fargate RBAC section on this page.
Bind the RBAC above to your application pod by setting its service account name.
Create a Kubernetes secret datadog-secret containing your Datadog API key and Cluster Agent token in the Datadog installation and application namespaces:
For more information about how these secrets are used, see the Cluster Agent Setup.
Note: You cannot change the name of the secret containing the Datadog API key and Cluster Agent token. It must be datadog-secret for the Agent in the sidecar to connect to Datadog.
Setup
Create a file, datadog-values.yaml, that contains:
Note: Use agents.enabled=false for a Fargate-only cluster. On a mixed cluster, set agents.enabled=true to create a DaemonSet for monitoring workloads on EC2 instances.
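A minimal sketch of such a values file follows; key paths follow the Datadog Helm chart, and the secret name assumes the `datadog-secret` created earlier:

```yaml
## Sketch of datadog-values.yaml with Fargate sidecar injection enabled.
datadog:
  clusterName: <CLUSTER_NAME>
  apiKeyExistingSecret: datadog-secret
agents:
  enabled: false   # Fargate-only cluster; set to true on a mixed cluster
clusterAgent:
  tokenExistingSecret: datadog-secret
  admissionController:
    agentSidecarInjection:
      enabled: true
      provider: fargate
```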
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label agent.datadoghq.com/sidecar:fargate.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The sidecar is automatically configured using internal defaults, with additional settings to run in an EKS Fargate environment. The sidecar uses the image repository and tags set in the Helm values. Communication between Cluster Agent and sidecars is enabled by default.
To further configure the Agent or its container resources, use the Helm property clusterAgent.admissionController.agentSidecarInjection.profiles to add environment variable definitions and resource settings. Use the clusterAgent.admissionController.agentSidecarInjection.selectors property to configure a custom selector to target workload pods instead of updating the workload to add agent.datadoghq.com/sidecar:fargate labels.
Create a Helm datadog-values.yaml file that configures a sidecar profile and a custom pod selector.
Example
In the following example, a selector targets all pods with the label "app": redis. The sidecar profile configures a DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED environment variable and resource settings.
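This can be sketched as the following Helm values; the resource request and limit values are illustrative:

```yaml
## Sketch: sidecar injection with a custom selector and a profile.
clusterAgent:
  admissionController:
    agentSidecarInjection:
      enabled: true
      provider: fargate
      selectors:
        - objectSelector:
            matchLabels:
              app: redis
      profiles:
        - env:
            - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
              value: "true"
          resources:
            requests:       # illustrative values
              cpu: "400m"
              memory: "256Mi"
            limits:
              cpu: "800m"
              memory: "512Mi"
agents:
  enabled: false   # Fargate-only cluster; set to true on a mixed cluster
```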
Note: Use agents.enabled=false for a Fargate-only cluster. On a mixed cluster, set agents.enabled=true to create a DaemonSet for monitoring workloads on EC2 instances.
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label app:redis.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The environment variables and resource settings from datadog-values.yaml are automatically applied.
To start collecting data from your Fargate pods, deploy the Datadog Agent v7.17+ as a sidecar of your application. The following manifest is the minimum configuration required to collect metrics from your application running in the pod. Note the addition of DD_EKS_FARGATE=true in the manifest that deploys your Datadog Agent sidecar.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  replicas: 1
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_CLUSTER_NAME
              value: "<CLUSTER_NAME>"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
Note: Add kube_cluster_name:<CLUSTER_NAME> to the list of DD_TAGS to ensure your metrics are tagged with your desired cluster name. You can append additional tags as space-separated <KEY>:<VALUE> pairs. For Agents 7.34+ and 6.34+, this is not required; instead, set the DD_CLUSTER_NAME environment variable.
Running the Cluster Agent or the Cluster Checks Runner
When using EKS Fargate, there are two possible scenarios depending on whether or not the EKS cluster is running mixed workloads (Fargate/non-Fargate).
If the EKS cluster runs Fargate and non-Fargate workloads, and you want to monitor the non-Fargate workload through Node Agent DaemonSet, add the Cluster Agent/Cluster Checks Runner to this deployment. For more information, see the Cluster Agent Setup.
The Cluster Agent token must be reachable from the Fargate tasks you want to monitor. If you are using the Helm chart or the Datadog Operator, the token is stored in a secret in the installation namespace and is not reachable from other namespaces by default.
You have two options for this to work properly:
Use a hardcoded token value (clusterAgent.token in Helm, credentials.token in the Datadog Operator); this is convenient but less secure.
Use a manually-created secret (clusterAgent.tokenExistingSecret in Helm; not available in the Datadog Operator) and replicate it in all namespaces where Fargate tasks need to be monitored; this is secure but requires extra operations.
Note: The token value requires a minimum of 32 characters.
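For instance, with Helm, the two options can be sketched as follows (the token value and secret name are placeholders):

```yaml
clusterAgent:
  ## Option 1: hardcoded token (convenient, less secure);
  ## must be at least 32 characters long
  token: <32_CHARACTER_LONG_TOKEN>
  ## Option 2: reference a secret that you manually create and
  ## replicate into each application namespace
  # tokenExistingSecret: datadog-secret
```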
If the EKS cluster runs only Fargate workloads, you need a standalone Cluster Agent deployment. As described above, choose one of the two options to make the token reachable.
In both cases, you must change the Datadog Agent sidecar manifest to allow communication with the Cluster Agent:
```yaml
env:
  - name: DD_CLUSTER_AGENT_ENABLED
    value: "true"
  - name: DD_CLUSTER_AGENT_AUTH_TOKEN
    value: <hardcoded token value> # Use valueFrom: if you're using a secret
  - name: DD_CLUSTER_AGENT_URL
    value: https://<CLUSTER_AGENT_SERVICE_NAME>.<CLUSTER_AGENT_SERVICE_NAMESPACE>.svc.cluster.local:5005
  - name: DD_ORCHESTRATOR_EXPLORER_ENABLED # Required to get Kubernetes resources view
    value: "true"
  - name: DD_CLUSTER_NAME
    value: <CLUSTER_NAME>
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
      annotations:
        ad.datadoghq.com/<CONTAINER_NAME>.check_names: '[<CHECK_NAME>]'
        ad.datadoghq.com/<CONTAINER_NAME>.init_configs: '[<INIT_CONFIG>]'
        ad.datadoghq.com/<CONTAINER_NAME>.instances: '[<INSTANCE_CONFIG>]'
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
Container metrics are not available in Fargate because the cgroups volume from the host can’t be mounted into the Agent. The Live Containers view reports 0 for CPU and Memory.
Expose container port 8125 on your Agent container to forward DogStatsD metrics from your application container to Datadog.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8125 for DogStatsD metric collection
          ports:
            - containerPort: 8125
              name: dogstatsdport
              protocol: UDP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
Monitor EKS Fargate logs by using Fluent Bit to route EKS logs to CloudWatch Logs and the Datadog Forwarder to route logs to Datadog.
To configure Fluent Bit to send logs to CloudWatch, create a Kubernetes ConfigMap that specifies CloudWatch Logs as its output. The ConfigMap specifies the log group, region, prefix string, and whether to automatically create the log group.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name awslogs-https
        log_stream_prefix awslogs-firelens-example
        auto_create_group true
```
Use the Datadog Forwarder to collect logs from CloudWatch and send them to Datadog.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      ## Putting the agent in the same namespace as the application for origin detection with cgroup v2
      shareProcessNamespace: true
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a side-car
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8126 for Trace collection
          ports:
            - containerPort: 8126
              name: traceport
              protocol: TCP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
```
For Agent 6.19+/7.19+, Process Collection is available. Enable shareProcessNamespace on your pod spec to collect all processes running on your Fargate pod. For example:
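The pod template fragment below is an illustrative sketch of that setting; the container names are placeholders, and the process-collection environment variable matches the one used elsewhere on this page:

```yaml
## Fragment of the Deployment pod template (sketch)
spec:
  serviceAccountName: datadog-agent
  shareProcessNamespace: true
  containers:
    - name: "<APPLICATION_NAME>"
      image: "<APPLICATION_IMAGE>"
    - image: datadog/agent
      name: datadog-agent
      env:
        - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
          value: "true"
```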
The eks_fargate check submits a heartbeat metric eks.fargate.pods.running that is tagged by pod_name and virtual_node so you can keep track of how many pods are running.
The Datadog Agent container is designed to run as the dd-agent user (UID: 100). If you override the default security context by setting, for example, runAsUser: 1000 in your pod spec, the container fails to start due to insufficient permissions. You may see errors such as:
With Datadog Cluster Agent v7.62+, you can override the security context for the Datadog Agent sidecar, which lets you maintain consistent security standards across your Kubernetes deployments. Whether you use the DatadogAgent custom resource or Helm values, you can ensure that the Agent container runs with the appropriate user, dd-agent (UID 100), as needed by your environment.
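As a sketch, with the DatadogAgent custom resource the override can be expressed through a sidecar injection profile. This assumes Cluster Agent v7.62+, and the exact field placement should be checked against your Operator version:

```yaml
## Sketch: force the Agent sidecar to run as the dd-agent user (UID 100).
spec:
  features:
    admissionController:
      agentSidecarInjection:
        enabled: true
        provider: fargate
        profiles:
          - securityContext:
              runAsUser: 100   # dd-agent user
```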
By following the examples, you can deploy the Agent sidecar in environments where the default Pod security context must be overridden.