- If you haven’t already, set up the Datadog Forwarder Lambda function.
- Use the AWS CLI to grant CloudWatch Logs permission to invoke your function.
  - Replace `<REGION>` with the region containing your Datadog Forwarder Lambda function.
  - Replace `<ACCOUNT_ID>` with your 12-digit AWS account ID (excluding dashes).
```shell
aws lambda add-permission \
    --region "<REGION>" \
    --function-name "forwarder-function" \
    --statement-id "forwarder-function" \
    --principal "logs.amazonaws.com" \
    --action "lambda:InvokeFunction" \
    --source-arn "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:*" \
    --source-account "<ACCOUNT_ID>"
```
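To confirm the permission was added, you can inspect the function's resource-based policy. This is an optional check; `forwarder-function` stands in for your actual Forwarder function name:

```shell
# Print the resource-based policy attached to the Forwarder function;
# the output should include the "forwarder-function" statement ID.
aws lambda get-policy \
    --region "<REGION>" \
    --function-name "forwarder-function"
```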
- Create an account-level subscription filter policy. In the example provided below, the filter pattern is empty, so all log events are streamed to the Datadog Forwarder Lambda function.
  - Replace `<FORWARDER_ARN>` with the ARN of the Datadog Forwarder Lambda function.
```shell
aws logs put-account-policy \
    --policy-name "ExamplePolicyLambda" \
    --policy-type "SUBSCRIPTION_FILTER_POLICY" \
    --policy-document '{"DestinationArn":"<FORWARDER_ARN>", "FilterPattern": "", "Distribution": "Random"}' \
    --scope "ALL"
```
Note: To exclude certain log groups from log forwarding, use the `--selection-criteria` option as outlined in the command reference.
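As a sketch, excluding two hypothetical log groups named `LogGroupToExclude1` and `LogGroupToExclude2` could look like the following; substitute your own log group names:

```shell
aws logs put-account-policy \
    --policy-name "ExamplePolicyLambda" \
    --policy-type "SUBSCRIPTION_FILTER_POLICY" \
    --policy-document '{"DestinationArn":"<FORWARDER_ARN>", "FilterPattern": "", "Distribution": "Random"}' \
    --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
    --scope "ALL"
```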
Create an S3 bucket and a role for Amazon Data Firehose
The following steps guide you through creating a bucket and IAM role. This role grants Amazon Data Firehose permission to put data into your Amazon S3 bucket in case of delivery failures.
- Use the AWS CLI to create an S3 bucket. Optionally, you can use an existing bucket.
  - Replace `<BUCKET_NAME>` with the name for your S3 bucket.
  - Replace `<REGION>` with the region for your S3 bucket.
```shell
aws s3api create-bucket \
    --bucket <BUCKET_NAME> \
    --create-bucket-configuration LocationConstraint=<REGION>
```
- Create a `TrustPolicyForFirehose.json` file with the following statement:
```json
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "firehose.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
```
- Create an IAM role, specifying the trust policy file:

```shell
aws iam create-role \
    --role-name FirehosetoS3Role \
    --assume-role-policy-document file://./TrustPolicyForFirehose.json
```

Note: The returned `Role.Arn` value is used in a later step.
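If you need the role ARN again later, it can be retrieved without re-creating the role; this query uses the standard `aws iam get-role` output shape:

```shell
# Fetch only the ARN of the role created above
aws iam get-role \
    --role-name FirehosetoS3Role \
    --query 'Role.Arn' \
    --output text
```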
- Create a `PermissionsForFirehose.json` file with the following statement:
  - Replace `<BUCKET_NAME>` with the name of your S3 bucket.
```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    }
  ]
}
```
- Associate the permissions policy with the role:

```shell
aws iam put-role-policy \
    --role-name FirehosetoS3Role \
    --policy-name Permissions-Policy-For-Firehose \
    --policy-document file://./PermissionsForFirehose.json
```
Create Amazon Data Firehose delivery stream
The following steps guide you through creating and configuring an Amazon Data Firehose delivery stream.
- Go to Amazon Data Firehose in the AWS console.
- Click Create Firehose stream.
- In the Source field, select the source of your logs:
  - Select **Amazon Kinesis Data Streams** if your logs are coming from a Kinesis data stream.
  - Select **Direct PUT** if your logs are coming directly from a CloudWatch log group.
- In the Destination field, select **Datadog**.
- If your Source is **Amazon Kinesis Data Streams**, select your Kinesis data stream under Source settings.
- Optionally, give the Firehose stream a descriptive name.
- In the Destination settings section, choose the Datadog logs HTTP endpoint URL that corresponds to your Datadog site.
- For Authentication, a valid Datadog API key is needed. You can either:
- Select Use API key and paste the key’s value in the API key field.
- Select Use AWS Secrets Manager and choose a secret containing your valid Datadog API key value in the Secret name dropdown.
- For Content encoding, select **GZIP**.
- Optionally, configure the Retry duration, the buffer settings, or add Parameters (which are attached as tags to your logs).
  Note: Datadog recommends setting the Buffer size to `2 MiB` if the logs are single-line messages.
- In the Backup settings section, select the S3 bucket for receiving any failed events that exceed the retry duration.
  Note: To ensure any logs that fail to be delivered by the delivery stream are still sent to Datadog, set the Datadog Forwarder Lambda function to forward logs from this S3 bucket.
- Click Create Firehose stream.
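The console steps above can also be scripted. The following is a minimal sketch using `aws firehose create-delivery-stream` with a Direct PUT source; the endpoint URL shown assumes the Datadog US1 site, and `<DD_API_KEY>`, `<FIREHOSE_ROLE_ARN>`, and `<BUCKET_NAME>` are placeholders you must fill in with your own values:

```shell
# Sketch only: create a Direct PUT Firehose stream that delivers to the
# Datadog HTTP endpoint, with GZIP encoding, a 2 MiB buffer, and failed
# events backed up to S3. Verify the endpoint URL for your Datadog site.
aws firehose create-delivery-stream \
    --delivery-stream-name "datadog-logs-stream" \
    --delivery-stream-type "DirectPut" \
    --http-endpoint-destination-configuration '{
      "EndpointConfiguration": {
        "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input",
        "Name": "Datadog",
        "AccessKey": "<DD_API_KEY>"
      },
      "RequestConfiguration": { "ContentEncoding": "GZIP" },
      "BufferingHints": { "SizeInMBs": 2, "IntervalInSeconds": 60 },
      "RetryOptions": { "DurationInSeconds": 60 },
      "RoleArn": "<FIREHOSE_ROLE_ARN>",
      "S3BackupMode": "FailedDataOnly",
      "S3Configuration": {
        "RoleArn": "<FIREHOSE_ROLE_ARN>",
        "BucketARN": "arn:aws:s3:::<BUCKET_NAME>"
      }
    }'
```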
Create role for CloudWatch Logs
The following steps guide you through creating an IAM role for CloudWatch Logs. This role grants CloudWatch Logs permission to put data into your Firehose delivery stream.
- Create a `./TrustPolicyForCWL.json` file with the following statement:
  - Replace `<ACCOUNT_ID>` with your 12-digit AWS account ID (excluding dashes).
  - Replace `<REGION>` with the region of your CloudWatch logs.
```json
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringLike": {
        "aws:SourceArn": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
      }
    }
  }
}
```
- Create an IAM role, specifying the trust policy file:

```shell
aws iam create-role \
    --role-name CWLtoKinesisFirehoseRole \
    --assume-role-policy-document file://./TrustPolicyForCWL.json
```

Note: The returned `Role.Arn` value is used in a later step.
- Create a `./PermissionsForCWL.json` file with the following statement:
  - Replace `<REGION>` with the region containing your Amazon Data Firehose delivery stream.
  - Replace `<ACCOUNT_ID>` with your 12-digit AWS account ID (excluding dashes).
  - Replace `<DELIVERY_STREAM_NAME>` with the name of your delivery stream.
```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["firehose:PutRecord"],
      "Resource": [
        "arn:aws:firehose:<REGION>:<ACCOUNT_ID>:deliverystream/<DELIVERY_STREAM_NAME>"
      ]
    }
  ]
}
```
- Associate the permissions policy with the role:

```shell
aws iam put-role-policy \
    --role-name CWLtoKinesisFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://./PermissionsForCWL.json
```
Create the CloudWatch Logs account-level subscription filter policy
Before completing this step, the Amazon Data Firehose delivery stream must be in the `Active` state.
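You can check the stream's status from the CLI before proceeding; `<DELIVERY_STREAM_NAME>` is the name of your delivery stream:

```shell
# Prints ACTIVE once the delivery stream is ready
aws firehose describe-delivery-stream \
    --delivery-stream-name "<DELIVERY_STREAM_NAME>" \
    --query 'DeliveryStreamDescription.DeliveryStreamStatus' \
    --output text
```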
- Create the CloudWatch Logs account-level subscription filter policy. The policy immediately starts the flow of real-time log data from the chosen log group to your Amazon Data Firehose delivery stream:
  - Replace `<POLICY_NAME>` with a name for the subscription filter policy.
  - Replace `<CLOUDWATCH_LOGS_ROLE>` with the ARN of the CloudWatch Logs role.
  - Replace `<DELIVERY_STREAM_ARN>` with the ARN of the Amazon Data Firehose delivery stream.
```shell
aws logs put-account-policy \
    --policy-name "<POLICY_NAME>" \
    --policy-type "SUBSCRIPTION_FILTER_POLICY" \
    --policy-document '{"RoleArn":"<CLOUDWATCH_LOGS_ROLE>", "DestinationArn":"<DELIVERY_STREAM_ARN>", "FilterPattern": "", "Distribution": "Random"}' \
    --scope "ALL"
```
Note: To exclude certain log groups from log forwarding, use the `--selection-criteria` option as outlined in the command reference.
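To confirm the policy is in place, you can list the account-level subscription filter policies:

```shell
# Lists account-level subscription filter policies, including
# the one created above
aws logs describe-account-policies \
    --policy-type "SUBSCRIPTION_FILTER_POLICY"
```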