AWS Fargate on EKS provides a fully managed experience for running Kubernetes workloads. Amazon Data Firehose can be used with EKS's Fluent Bit log router to collect logs in Datadog. This guide compares log forwarding through Amazon Data Firehose and CloudWatch Logs, and walks through a sample EKS Fargate application that sends logs to Datadog through Amazon Data Firehose.
Amazon Data Firehose and CloudWatch log forwarding
The following are key differences between using Amazon Data Firehose and CloudWatch log forwarding.
Metadata and tagging: Metadata, such as the Kubernetes namespace and container ID, is accessible as structured attributes when sending logs with Amazon Data Firehose (see the example event below this list).
AWS costs: Costs vary by use case, but Amazon Data Firehose ingestion is generally less expensive than comparable CloudWatch Logs ingestion.
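For illustration, a log event enriched by Fluent Bit's kubernetes filter typically carries attributes like the following. This is a trimmed sketch; the exact fields depend on your Fluent Bit version and filter settings.
{
  "log": "10.0.0.1 - - [27/Jan/2023:16:53:42 +0000] \"GET / HTTP/1.1\" 200 ...",
  "kubernetes": {
    "namespace_name": "fargate-namespace",
    "pod_name": "sample-app-6c8b449b8f-kq2qz",
    "container_name": "nginx",
    "docker_id": "<CONTAINER-ID>",
    "labels": {
      "app": "nginx"
    }
  }
}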
Prerequisites
An EKS cluster with a Fargate profile and a Fargate pod execution role. In this guide, the cluster is named fargate-cluster with a Fargate profile named fargate-profile applied to the namespace fargate-namespace. If you don't already have these resources, use Getting Started with Amazon EKS to create the cluster and Getting Started with AWS Fargate using Amazon EKS to create the Fargate profile and pod execution role.
An Amazon Data Firehose delivery stream configured to deliver logs to Datadog.
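If you are starting from scratch, a minimal sketch with eksctl looks like the following. It assumes the names used in this guide; substitute your own region.
eksctl create cluster --name fargate-cluster --region <REGION> --fargate
eksctl create fargateprofile --cluster fargate-cluster --name fargate-profile --namespace fargate-namespace --region <REGION>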
Setup
The following steps outline the process for sending logs from a sample application deployed on an EKS cluster through Fluent Bit and an Amazon Data Firehose delivery stream to Datadog. To maximize consistency with standard Kubernetes tags in Datadog, instructions are included to remap selected attributes to tag keys.
Configure Fluent Bit for Firehose on an EKS Fargate cluster
Create the aws-observability namespace.
kubectl create namespace aws-observability
Create the following Kubernetes ConfigMap for Fluent Bit as aws-logging-configmap.yaml. Substitute the name of your delivery stream.
Note: This configuration uses kinesis_firehose, the newer, higher-performance Kinesis Firehose plugin; the older Go-based plugin is named firehose.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
  flb_log_cw: 'true'
  output.conf: |
    [OUTPUT]
        Name kinesis_firehose
        Match kube.*
        region <REGION>
        delivery_stream <YOUR-DELIVERY-STREAM-NAME>
Use kubectl to apply the ConfigMap manifest.
kubectl apply -f aws-logging-configmap.yaml
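To confirm the ConfigMap was created, you can list it in the aws-observability namespace:
kubectl get configmap aws-logging -n aws-observability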
Create an IAM policy and attach it to the pod execution role so that the log router running on AWS Fargate can write to your Amazon Data Firehose delivery stream. You can use the example below, replacing the ARN in the Resource field with the ARN of your delivery stream and specifying your region and account ID.
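A minimal policy granting batch writes to the stream might look like the following. The placeholders <REGION>, <ACCOUNT-ID>, and <YOUR-DELIVERY-STREAM-NAME> are illustrative.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecordBatch"
      ],
      "Resource": [
        "arn:aws:firehose:<REGION>:<ACCOUNT-ID>:deliverystream/<YOUR-DELIVERY-STREAM-NAME>"
      ]
    }
  ]
}
You can then create and attach the policy with the AWS CLI. The policy and role names below are placeholders:
aws iam create-policy --policy-name FluentBitEKSFargateFirehose --policy-document file://policy.json
aws iam attach-role-policy --role-name <FARGATE-POD-EXECUTION-ROLE> --policy-arn arn:aws:iam::<ACCOUNT-ID>:policy/FluentBitEKSFargateFirehose
The verification steps below assume a sample nginx deployment named sample-app running in the fargate-namespace namespace. A minimal manifest for such a deployment, matching the pod names and labels shown in the expected output, might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: fargate-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Apply it with kubectl apply -f sample-app.yaml before continuing.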
Verify that sample-app pods are running in the namespace fargate-namespace.
kubectl get pods -n fargate-namespace
Expected output:
NAME READY STATUS RESTARTS AGE
sample-app-6c8b449b8f-kq2qz 1/1 Running 0 3m56s
sample-app-6c8b449b8f-nn2w7 1/1 Running 0 3m56s
sample-app-6c8b449b8f-wzsjj 1/1 Running 0 3m56s
Use kubectl describe pod to confirm that the Fargate logging feature is enabled.
kubectl describe pod <POD-NAME> -n fargate-namespace | grep Logging
Expected output:
Logging: LoggingEnabled
Normal LoggingEnabled 5m fargate-scheduler Successfully enabled logging for pod
Inspect deployment logs.
kubectl logs -l app=nginx -n fargate-namespace
Expected output:
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/01/27 16:53:42 [notice] 1#1: using the "epoll" event method
2023/01/27 16:53:42 [notice] 1#1: nginx/1.23.3
2023/01/27 16:53:42 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/01/27 16:53:42 [notice] 1#1: OS: Linux 4.14.294-220.533.amzn2.x86_64
2023/01/27 16:53:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:65535
2023/01/27 16:53:42 [notice] 1#1: start worker processes
...
Verify that the logs arrive in Datadog. In the Datadog Log Explorer, search for @aws.firehose.arn:"<ARN>", replacing <ARN> with your Amazon Data Firehose ARN, to filter for logs from Amazon Data Firehose.
Remap attributes for log correlation
Logs from this configuration require some attributes to be remapped to maximize consistency with standard Kubernetes tags in Datadog.
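As a starting point, the following remappings align the attributes added by the Fluent Bit kubernetes filter with standard Datadog Kubernetes tag keys. This is a suggested mapping; verify the attribute names against your own log events.
kubernetes.namespace_name → kube_namespace
kubernetes.pod_name → pod_name
kubernetes.container_name → kube_container_name
kubernetes.docker_id → container_id
You can implement these with attribute remapper processors in a Datadog log pipeline.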