Amazon EKS on AWS Fargate
Overview
This page describes the EKS Fargate integration. For ECS Fargate, see the documentation for Datadog's
ECS Fargate integration.
Amazon EKS on AWS Fargate is a managed Kubernetes service that automates certain aspects of deployment and maintenance for any standard Kubernetes environment. Kubernetes nodes are managed by AWS Fargate and abstracted away from the user.
Note: Network Performance Monitoring (NPM) is not supported for EKS Fargate.
Setup
These steps cover the setup of the Datadog Agent v7.17+ in a container within Amazon EKS on AWS Fargate. See the Datadog-Amazon EKS integration documentation if you are not using AWS Fargate.
AWS Fargate pods are not physical pods, which means host-based system checks (such as CPU and memory) are excluded. To collect data from your AWS Fargate pods, run the Agent as a sidecar of your application pod with custom RBAC, which enables these features:
- Kubernetes metrics collection from the pod running your application containers and the Agent
- Autodiscovery
- Configuration of custom Agent Checks to target containers in the same pod
- APM and DogStatsD for containers in the same pod
EC2 Node
If you don't specify through an AWS Fargate profile that your pods must run on Fargate, your pods can be scheduled on classic EC2 instances. If that is the case, see the Datadog-Amazon EKS integration setup to collect data from them. This works by running the Agent as an EC2-type workload. The Agent setup is the same as the Kubernetes Agent setup, and all options are available. To deploy the Agent on EC2 nodes, use the DaemonSet setup for the Datadog Agent.
Installation
To get the best observability coverage when monitoring workloads in AWS EKS Fargate, install the relevant Datadog integrations.
Also, set up integrations for any other AWS services you are running with EKS (for example, ELB).
Manual installation
To install, download the custom Agent image datadog/agent with version v7.17 or above.
If the Agent is running as a sidecar, it can communicate only with containers on the same pod. Run an Agent for every pod you wish to monitor.
Configuration
To collect data from your applications running in AWS EKS Fargate on a Fargate node, follow the setup steps below.
To see EKS Fargate containers in the Datadog Live Container View, enable shareProcessNamespace in your pod spec. See Process Collection.
AWS EKS Fargate RBAC
Use the following Agent RBAC when deploying the Agent as a sidecar in AWS EKS Fargate:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datadog-agent
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - namespaces
      - endpoints
    verbs:
      - get
      - list
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - nodes/spec
      - nodes/stats
      - nodes/proxy
      - nodes/pods
      - nodes/healthz
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: datadog-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: datadog-agent
subjects:
  - kind: ServiceAccount
    name: datadog-agent
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: datadog-agent
  namespace: default
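For the sidecar to use these permissions, the application pod must run under the service account defined above. A minimal sketch, assuming a hypothetical application named my-app in the same default namespace as the RBAC manifests:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical application pod name
  namespace: default
spec:
  serviceAccountName: datadog-agent   # binds the RBAC above to this pod
  containers:
    - name: my-app
      image: my-app:latest            # placeholder image
```

If you deploy the RBAC into a different application namespace, change the namespace fields accordingly.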
Running the Agent as a sidecar
You can run the Agent as a sidecar by using the Datadog Admission Controller (requires Cluster Agent v7.52+) or with manual sidecar configuration. With the Admission Controller, you can inject an Agent sidecar into every pod that has the label agent.datadoghq.com/sidecar:fargate.
With manual configuration, you must modify every workload manifest whenever you add or change the Agent sidecar. Datadog recommends using the Admission Controller.
Admission Controller using Datadog Operator
The setup below configures the Cluster Agent to communicate with the Agent sidecars, allowing access to features such as events collection, Kubernetes resources view, and cluster checks.
Prerequisites
Set up RBAC in the application namespace(s). See the AWS EKS Fargate RBAC section on this page.
Bind this RBAC to your application pod by setting its service account name.
Create a Kubernetes secret containing your Datadog API key and Cluster Agent token in the Datadog installation and application namespaces:
kubectl create secret generic datadog-secret -n datadog-agent \
--from-literal api-key=<YOUR_DATADOG_API_KEY> --from-literal token=<CLUSTER_AGENT_TOKEN>
kubectl create secret generic datadog-secret -n fargate \
--from-literal api-key=<YOUR_DATADOG_API_KEY> --from-literal token=<CLUSTER_AGENT_TOKEN>
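If you prefer declarative manifests over kubectl create secret, the same secret can be sketched as YAML. This is an equivalent form, not an additional step; the stringData keys api-key and token match the commands above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: datadog-secret
  namespace: datadog-agent   # repeat with namespace: fargate for the application namespace
type: Opaque
stringData:
  api-key: <YOUR_DATADOG_API_KEY>
  token: <CLUSTER_AGENT_TOKEN>
```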
For more information about how these secrets are used, see the Cluster Agent Setup.
Setup
Create a DatadogAgent custom resource in datadog-agent.yaml with the Admission Controller enabled:
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  global:
    clusterAgentTokenSecret:
      secretName: datadog-secret
      keyName: token
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
  features:
    admissionController:
      agentSidecarInjection:
        enabled: true
        provider: fargate
Then apply the new configuration:
kubectl apply -n datadog-agent -f datadog-agent.yaml
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label agent.datadoghq.com/sidecar:fargate.
The Admission Controller does not mutate pods that are already created.
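For illustration, the label goes on the pod template of your workload so that newly created pods are injected. A minimal sketch, assuming a hypothetical redis Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        agent.datadoghq.com/sidecar: fargate   # triggers Agent sidecar injection
    spec:
      containers:
        - name: redis
          image: redis:latest
```

Because existing pods are not mutated, roll the workload (for example, by updating the pod template) so that replacement pods are created and injected.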
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The sidecar is automatically configured using internal defaults, with additional settings to run in an EKS Fargate environment. The sidecar uses the image repository and tag set in datadog-agent.yaml. Communication between the Cluster Agent and sidecars is enabled by default.
containers:
  - args:
      - redis-server
    image: redis:latest
  # ...
  - env:
      - name: DD_API_KEY
        valueFrom:
          secretKeyRef:
            key: api-key
            name: datadog-secret
      - name: DD_CLUSTER_AGENT_AUTH_TOKEN
        valueFrom:
          secretKeyRef:
            key: token
            name: datadog-secret
      - name: DD_EKS_FARGATE
        value: "true"
      # ...
    image: gcr.io/datadoghq/agent:7.51.0
    imagePullPolicy: IfNotPresent
    name: datadog-agent-injected
    resources:
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 200m
        memory: 256Mi
Sidecar profiles and custom selectors
To further configure the Agent or its container resources, use the properties in your DatadogAgent resource. Use the spec.features.admissionController.agentSidecarInjection.profiles property to add environment variable definitions and resource settings. Use the spec.features.admissionController.agentSidecarInjection.selectors property to configure a custom selector that targets workload pods, instead of updating the workload to add the agent.datadoghq.com/sidecar:fargate label.
Create a DatadogAgent custom resource in a datadog-agent.yaml file that configures a sidecar profile and a custom pod selector.
Example
In the following example, a selector targets all pods with the label "app": redis. The sidecar profile configures a DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED environment variable and resource settings.
spec:
  features:
    admissionController:
      agentSidecarInjection:
        enabled: true
        provider: fargate
        selectors:
          - objectSelector:
              matchLabels:
                "app": redis
        profiles:
          - env:
              - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
                value: "true"
            resources:
              requests:
                cpu: "400m"
                memory: "256Mi"
              limits:
                cpu: "800m"
                memory: "512Mi"
Then apply the new configuration:
kubectl apply -n datadog-agent -f datadog-agent.yaml
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label app:redis.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The environment variables and resource settings from datadog-agent.yaml are automatically applied.
labels:
  app: redis
  eks.amazonaws.com/fargate-profile: fp-fargate
  pod-template-hash: 7b86c456c4
# ...
containers:
  - args:
      - redis-server
    image: redis:latest
  # ...
  - env:
      - name: DD_API_KEY
        valueFrom:
          secretKeyRef:
            key: api-key
            name: datadog-secret
      # ...
      - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
        value: "true"
      # ...
    image: gcr.io/datadoghq/agent:7.51.0
    imagePullPolicy: IfNotPresent
    name: datadog-agent-injected
    resources:
      limits:
        cpu: 800m
        memory: 512Mi
      requests:
        cpu: 400m
        memory: 256Mi
Admission Controller using Helm
This feature requires Cluster Agent v7.52.0+.
The setup below configures the Cluster Agent to communicate with the Agent sidecars, allowing access to features such as events collection, Kubernetes resources view, and cluster checks.
Prerequisites
Set up RBAC in the application namespace(s). See the AWS EKS Fargate RBAC section on this page.
Bind this RBAC to your application pod by setting its service account name.
Create a Kubernetes secret containing your Datadog API key and Cluster Agent token in the Datadog installation and application namespaces:
kubectl create secret generic datadog-secret -n datadog-agent \
--from-literal api-key=<YOUR_DATADOG_API_KEY> --from-literal token=<CLUSTER_AGENT_TOKEN>
kubectl create secret generic datadog-secret -n fargate \
--from-literal api-key=<YOUR_DATADOG_API_KEY> --from-literal token=<CLUSTER_AGENT_TOKEN>
For more information about how these secrets are used, see the Cluster Agent Setup.
Setup
Install the Datadog Agent with the Cluster Agent and Admission Controller enabled:
helm install datadog datadog/datadog -n datadog-agent \
--set datadog.clusterName=cluster-name \
--set agents.enabled=false \
--set datadog.apiKeyExistingSecret=datadog-secret \
--set clusterAgent.tokenExistingSecret=datadog-secret \
--set clusterAgent.admissionController.agentSidecarInjection.enabled=true \
--set clusterAgent.admissionController.agentSidecarInjection.provider=fargate
Note: Use agents.enabled=false for a Fargate-only cluster. On a mixed cluster, set agents.enabled=true to create a DaemonSet for monitoring workloads on EC2 instances.
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label agent.datadoghq.com/sidecar:fargate.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The sidecar is automatically configured using internal defaults, with additional settings to run in an EKS Fargate environment. The sidecar uses the image repository and tag set in the Helm values. Communication between the Cluster Agent and sidecars is enabled by default.
containers:
  - args:
      - redis-server
    image: redis:latest
  # ...
  - env:
      - name: DD_API_KEY
        valueFrom:
          secretKeyRef:
            key: api-key
            name: datadog-secret
      - name: DD_CLUSTER_AGENT_AUTH_TOKEN
        valueFrom:
          secretKeyRef:
            key: token
            name: datadog-secret
      - name: DD_EKS_FARGATE
        value: "true"
      # ...
    image: gcr.io/datadoghq/agent:7.51.0
    imagePullPolicy: IfNotPresent
    name: datadog-agent-injected
    resources:
      limits:
        cpu: 200m
        memory: 256Mi
      requests:
        cpu: 200m
        memory: 256Mi
Sidecar profiles and custom selectors
To further configure the Agent or its container resources, use the Helm property clusterAgent.admissionController.agentSidecarInjection.profiles to add environment variable definitions and resource settings. Use the clusterAgent.admissionController.agentSidecarInjection.selectors property to configure a custom selector that targets workload pods, instead of updating the workload to add the agent.datadoghq.com/sidecar:fargate label.
Create a Helm datadog-values.yaml file that configures a sidecar profile and a custom pod selector.
Example
In the following example, a selector targets all pods with the label "app": redis. The sidecar profile configures a DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED environment variable and resource settings.
clusterAgent:
  admissionController:
    agentSidecarInjection:
      selectors:
        - objectSelector:
            matchLabels:
              "app": redis
      profiles:
        - env:
            - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
              value: "true"
          resources:
            requests:
              cpu: "400m"
              memory: "256Mi"
            limits:
              cpu: "800m"
              memory: "512Mi"
Install the chart:
helm install datadog datadog/datadog -n datadog-agent \
--set datadog.clusterName=cluster-name \
--set agents.enabled=false \
--set datadog.apiKeyExistingSecret=datadog-secret \
--set clusterAgent.tokenExistingSecret=datadog-secret \
--set clusterAgent.admissionController.agentSidecarInjection.enabled=true \
--set clusterAgent.admissionController.agentSidecarInjection.provider=fargate \
-f datadog-values.yaml
Note: Use agents.enabled=false for a Fargate-only cluster. On a mixed cluster, set agents.enabled=true to create a DaemonSet for monitoring workloads on EC2 instances.
After the Cluster Agent reaches a running state and registers Admission Controller mutating webhooks, an Agent sidecar is automatically injected into any pod created with the label app:redis.
The Admission Controller does not mutate pods that are already created.
Example result
The following is a spec.containers snippet from a Redis deployment where the Admission Controller injected an Agent sidecar. The environment variables and resource settings from datadog-values.yaml are automatically applied.
labels:
  app: redis
  eks.amazonaws.com/fargate-profile: fp-fargate
  pod-template-hash: 7b86c456c4
# ...
containers:
  - args:
      - redis-server
    image: redis:latest
  # ...
  - env:
      - name: DD_API_KEY
        valueFrom:
          secretKeyRef:
            key: api-key
            name: datadog-secret
      # ...
      - name: DD_PROCESS_AGENT_PROCESS_COLLECTION_ENABLED
        value: "true"
      # ...
    image: gcr.io/datadoghq/agent:7.51.0
    imagePullPolicy: IfNotPresent
    name: datadog-agent-injected
    resources:
      limits:
        cpu: 800m
        memory: 512Mi
      requests:
        cpu: 400m
        memory: 256Mi
Manual
To start collecting data from your Fargate pods, deploy the Datadog Agent v7.17+ as a sidecar of your application. The following is the minimum configuration required to collect metrics from your application running in the pod. Note the addition of DD_EKS_FARGATE=true in the manifest that deploys your Datadog Agent sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  replicas: 1
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a sidecar
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_CLUSTER_NAME
              value: "<CLUSTER_NAME>"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don't forget to replace <YOUR_DATADOG_API_KEY> with the Datadog API key from your organization.
Note: Add kube_cluster_name:<CLUSTER_NAME> to the DD_TAGS list to ensure your metrics are tagged with your desired cluster. You can append additional tags as space-separated <KEY>:<VALUE> pairs. For Agents v7.34+ and v6.34+, this is not required; instead, set the DD_CLUSTER_NAME environment variable.
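For Agents older than v7.34/v6.34, the DD_TAGS approach described above can be sketched as an extra environment variable on the Agent sidecar (the tag values here are placeholders):

```yaml
env:
  - name: DD_TAGS
    value: "kube_cluster_name:<CLUSTER_NAME> team:<TEAM_NAME>"   # space-separated <KEY>:<VALUE> tags
```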
Running the Cluster Agent or the Cluster Checks Runner
Datadog recommends you run the Cluster Agent to access features such as events collection, Kubernetes resources view, and cluster checks.
When using EKS Fargate, there are two possible scenarios depending on whether or not the EKS cluster is running mixed workloads (Fargate/non-Fargate).
If the EKS cluster runs Fargate and non-Fargate workloads, and you want to monitor the non-Fargate workload through Node Agent DaemonSet, add the Cluster Agent/Cluster Checks Runner to this deployment. For more information, see the Cluster Agent Setup.
The Cluster Agent token must be reachable from the Fargate pods you want to monitor. If you are using the Helm chart or Datadog Operator, the token secret is created only in the Datadog installation namespace, so it is not reachable from other namespaces by default.
You have two options for this to work properly:
- Use a hardcoded token value (clusterAgent.token in Helm, credentials.token in the Datadog Operator); convenient, but less secure.
- Use a manually created secret (clusterAgent.tokenExistingSecret in Helm, not available in the Datadog Operator) and replicate it in all namespaces where Fargate pods need to be monitored; secure, but requires extra operations.
Note: The token value requires a minimum of 32 characters.
If the EKS cluster runs only Fargate workloads, you need a standalone Cluster Agent deployment. As described above, choose one of the two options to make the token reachable.
Use the following Helm values.yaml:
datadog:
  apiKey: <YOUR_DATADOG_API_KEY>
  clusterName: <CLUSTER_NAME>
agents:
  enabled: false
clusterAgent:
  enabled: true
  replicas: 2
  env:
    - name: DD_EKS_FARGATE
      value: "true"
In both cases, change the Datadog Agent sidecar manifest to allow communication with the Cluster Agent:
env:
  - name: DD_CLUSTER_AGENT_ENABLED
    value: "true"
  - name: DD_CLUSTER_AGENT_AUTH_TOKEN
    value: <hardcoded token value> # Use valueFrom: if you're using a secret
  - name: DD_CLUSTER_AGENT_URL
    value: https://<CLUSTER_AGENT_SERVICE_NAME>.<CLUSTER_AGENT_SERVICE_NAMESPACE>.svc.cluster.local:5005
  - name: DD_ORCHESTRATOR_EXPLORER_ENABLED # Required to get Kubernetes resources view
    value: "true"
  - name: DD_CLUSTER_NAME
    value: <CLUSTER_NAME>
For insights into your EKS cluster performance, enable a cluster check runner to collect metrics from the kube-state-metrics service.
Metrics collection
Integration metrics
Use Autodiscovery annotations on your application container to start collecting its metrics for the supported Agent integrations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
      annotations:
        ad.datadoghq.com/<CONTAINER_NAME>.check_names: '[<CHECK_NAME>]'
        ad.datadoghq.com/<CONTAINER_NAME>.init_configs: '[<INIT_CONFIG>]'
        ad.datadoghq.com/<CONTAINER_NAME>.instances: '[<INSTANCE_CONFIG>]'
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a sidecar
        - image: datadog/agent
          name: datadog-agent
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Notes:
- Don't forget to replace <YOUR_DATADOG_API_KEY> with the Datadog API key from your organization.
- Container metrics are not available in Fargate because the cgroups volume from the host can't be mounted into the Agent. The Live Containers view reports 0 for CPU and memory.
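As an illustration, for an application container named redis, the Autodiscovery annotation placeholders above could be filled in with Datadog's redisdb check. The host and port values are assumptions for a Redis server listening on its default port in the same pod (%%host%% is an Autodiscovery template variable):

```yaml
annotations:
  ad.datadoghq.com/redis.check_names: '["redisdb"]'
  ad.datadoghq.com/redis.init_configs: '[{}]'
  ad.datadoghq.com/redis.instances: '[{"host": "%%host%%", "port": "6379"}]'
```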
DogStatsD
Expose container port 8125 on your Agent container to forward DogStatsD metrics from your application container to Datadog.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a sidecar
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8125 for DogStatsD metric collection
          ports:
            - containerPort: 8125
              name: dogstatsdport
              protocol: UDP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don't forget to replace <YOUR_DATADOG_API_KEY> with the Datadog API key from your organization.
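Because containers in the same pod share a network namespace, your application can send DogStatsD metrics to the sidecar over localhost. One way to wire this up, sketched on the application container (DD_AGENT_HOST and DD_DOGSTATSD_PORT are the conventional variables read by Datadog client libraries; verify against your client's documentation):

```yaml
## Environment for the application container, not the Agent sidecar
env:
  - name: DD_AGENT_HOST
    value: "127.0.0.1"   # the Agent sidecar in the same pod
  - name: DD_DOGSTATSD_PORT
    value: "8125"
```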
Live containers
Datadog Agent v6.19+ supports live containers in the EKS Fargate integration. Live containers appear on the Containers page.
Live processes
Datadog Agent v6.19+ supports live processes in the EKS Fargate integration. Live processes appear on the Processes page. To enable live processes, enable shareProcessNamespace in the pod spec.
Kubernetes resources view
To collect Kubernetes resource views, you need a Cluster Agent setup.
Log collection
Collect logs from EKS on Fargate with Fluent Bit: monitor EKS Fargate logs by using Fluent Bit to route EKS logs to CloudWatch Logs, and the Datadog Forwarder to route those logs to Datadog.
To configure Fluent Bit to send logs to CloudWatch, create a Kubernetes ConfigMap that specifies CloudWatch Logs as its output. The ConfigMap specifies the log group, region, prefix string, and whether to automatically create the log group.
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name awslogs-https
        log_stream_prefix awslogs-firelens-example
        auto_create_group true
Use the Datadog Forwarder to collect logs from CloudWatch and send them to Datadog.
Traces collection
Expose container port 8126 on your Agent container to collect traces from your application container. Read more about how to set up tracing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "<APPLICATION_NAME>"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "<APPLICATION_NAME>"
  template:
    metadata:
      labels:
        app: "<APPLICATION_NAME>"
      name: "<POD_NAME>"
    spec:
      serviceAccountName: datadog-agent
      ## Putting the agent in the same namespace as the application for origin detection with cgroup v2
      shareProcessNamespace: true
      containers:
        - name: "<APPLICATION_NAME>"
          image: "<APPLICATION_IMAGE>"
        ## Running the Agent as a sidecar
        - image: datadog/agent
          name: datadog-agent
          ## Enabling port 8126 for trace collection
          ports:
            - containerPort: 8126
              name: traceport
              protocol: TCP
          env:
            - name: DD_API_KEY
              value: "<YOUR_DATADOG_API_KEY>"
            ## Set DD_SITE to "datadoghq.eu" to send your
            ## Agent data to the Datadog EU site
            - name: DD_SITE
              value: "datadoghq.com"
            - name: DD_EKS_FARGATE
              value: "true"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_KUBERNETES_KUBELET_NODENAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "200m"
Note: Don't forget to replace <YOUR_DATADOG_API_KEY> with the Datadog API key from your organization.
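Tracing libraries in the application container can likewise reach the trace Agent over localhost, since sidecar containers share the pod's network namespace. A sketch of the application container's environment (DD_AGENT_HOST and DD_TRACE_AGENT_PORT are the conventional variables read by Datadog tracing libraries; verify against your tracer's documentation):

```yaml
## Environment for the application container, not the Agent sidecar
env:
  - name: DD_AGENT_HOST
    value: "127.0.0.1"   # the trace Agent sidecar in the same pod
  - name: DD_TRACE_AGENT_PORT
    value: "8126"
```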
Events collection
To collect events from your AWS EKS Fargate API server, run a Datadog Cluster Agent within your EKS cluster and enable event collection for your Cluster Agent.
Optionally, deploy cluster check runners in addition to setting up the Datadog Cluster Agent to enable cluster checks.
Note: You can also collect events if you run the Datadog Cluster Agent in a pod in Fargate.
Process collection
For Agent v6.19+/v7.19+, Process Collection is available. Enable shareProcessNamespace in your pod spec to collect all processes running in your Fargate pod. For example:
apiVersion: v1
kind: Pod
metadata:
  name: <NAME>
spec:
  shareProcessNamespace: true
  ...
Note: CPU and memory metrics are not available.
Data Collected
Metrics
The eks_fargate check submits a heartbeat metric, eks.fargate.pods.running, tagged by pod_name and virtual_node, so you can keep track of how many pods are running.
Service Checks
eks_fargate does not include any service checks.
Events
eks_fargate does not include any events.
Troubleshooting
Need help? Contact Datadog support.
Further Reading
Additional helpful documentation, links, and articles: