The Datadog Forwarder is an AWS Lambda function that ships logs from AWS to Datadog.
If you are a Serverless customer using the Forwarder to send metrics, traces, and logs from AWS Lambda logs to Datadog, migrate to the Datadog Lambda Extension, which collects telemetry directly from the Lambda execution environment. The Forwarder is still available for use in Serverless Monitoring, but it will not be updated to support the latest features.
For more information about sending AWS services logs with the Datadog Forwarder, read the Send AWS Services Logs with the Datadog Lambda Function guide.
Datadog recommends using CloudFormation to automatically install the Forwarder. You can also complete the setup process using Terraform or manually. Once installed, you can subscribe the Forwarder to log sources such as S3 buckets or CloudWatch log groups by setting up triggers.
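For reference, stack creation from the hosted template can also be scripted with the AWS CLI. This is a sketch, not an official installation path: the stack name and parameter values (`<SECRET_ARN>`, `<SITE>`) are placeholders, and only the template URL is taken from this page. The script prints the command for review instead of executing it.

```shell
#!/bin/sh
# Sketch: create the Forwarder stack from the hosted CloudFormation template.
# The template URL is the public one; STACK_NAME and the parameter values
# (<SECRET_ARN>, <SITE>) are placeholders you must replace.
TEMPLATE_URL="https://datadog-cloudformation-template.s3.amazonaws.com/aws/forwarder/latest.yaml"
STACK_NAME="datadog-forwarder"
PARAMS="ParameterKey=DdApiKeySecretArn,ParameterValue=<SECRET_ARN> ParameterKey=DdSite,ParameterValue=<SITE>"
CMD="aws cloudformation create-stack --stack-name $STACK_NAME --template-url $TEMPLATE_URL --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND --parameters $PARAMS"
# Print the command so it can be reviewed before running it for real.
echo "$CMD"
```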
Fill in `DdApiKey` and select the appropriate `DdSite`. All other parameters are optional. Once stack creation completes, you can find the installed Lambda function among the stack's resources under the logical ID `Forwarder`.

If you previously enabled your AWS integration using the CloudFormation template from your AWS integration page in Datadog and chose to include the Forwarder, your account may already be provisioned with a Datadog Lambda Forwarder function. In that case, you only need to install the Forwarder in the additional AWS regions from which you want to export logs.
Note: The Datadog Lambda Forwarder function code block is expected to be empty, as the logic is implemented through a Lambda layer.
Install the Forwarder using the Terraform resource `aws_cloudformation_stack` as a wrapper on top of the provided CloudFormation template.

Datadog recommends creating separate Terraform configurations:

- Use the first one to store the Datadog API key in AWS Secrets Manager, and note the secret ARN from the output of `apply`.
- Then, create a configuration for the Forwarder and supply the secret ARN through the `DdApiKeySecretArn` parameter.

By separating the configurations of the API key and the Forwarder, you do not have to provide the Datadog API key when updating the Forwarder. To update or upgrade the Forwarder in the future, apply the Forwarder configuration again.
```hcl
variable "dd_api_key" {
  type        = string
  description = "Datadog API key"
}

resource "aws_secretsmanager_secret" "dd_api_key" {
  name        = "datadog_api_key"
  description = "Encrypted Datadog API Key"
}

resource "aws_secretsmanager_secret_version" "dd_api_key" {
  secret_id     = aws_secretsmanager_secret.dd_api_key.id
  secret_string = var.dd_api_key
}

output "dd_api_key" {
  value = aws_secretsmanager_secret.dd_api_key.arn
}
```
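Assuming the two-configuration layout described above (the directory names `dd-api-key` and `forwarder` are hypothetical), a typical workflow is: apply the API key configuration, read the secret ARN from its `dd_api_key` output, then paste that ARN into the Forwarder configuration's `DdApiKeySecretArn` parameter. A sketch that prints each command for review rather than running it:

```shell
#!/bin/sh
# Sketch: two-step Terraform workflow. Set RUN=1 to actually execute;
# by default each command is only printed, prefixed with "+".
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# 1. Apply the API key configuration (hypothetical directory "dd-api-key").
run terraform -chdir=dd-api-key apply
# 2. Read the secret ARN; "output -raw" prints the bare value of "dd_api_key".
run terraform -chdir=dd-api-key output -raw dd_api_key
# 3. Supply that ARN as DdApiKeySecretArn in the Forwarder configuration, then apply it.
run terraform -chdir=forwarder apply
```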
```hcl
# Use the Datadog Forwarder to ship logs from S3 and CloudWatch, as well as
# observability data from Lambda functions, to Datadog. For more information, see
# https://github.com/DataDog/datadog-serverless-functions/tree/master/aws/logs_monitoring
resource "aws_cloudformation_stack" "datadog_forwarder" {
  name         = "datadog-forwarder"
  capabilities = ["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND"]
  parameters = {
    DdApiKeySecretArn = "REPLACE ME WITH THE SECRETS ARN",
    DdSite            = "<SITE>",
    FunctionName      = "datadog-forwarder"
  }
  template_url = "https://datadog-cloudformation-template.s3.amazonaws.com/aws/forwarder/latest.yaml"
}
```
Note: Ensure that the `DdSite` parameter matches your Datadog site. Replace `<SITE>` in the sample configuration above with your Datadog site value (for example, `datadoghq.com`).
If you can't install the Forwarder using the provided CloudFormation template, you can install it manually by following the steps below. Feel free to open an issue or pull request to let us know if there is anything we can improve to make the template work for you.
1. Create a Lambda function using `aws-dd-forwarder-<VERSION>.zip` from the latest releases.
2. Save your Datadog API key in AWS Secrets Manager, set the environment variable `DD_API_KEY_SECRET_ARN` with the secret ARN on the Lambda function, and add the `secretsmanager:GetSecretValue` permission to the Lambda execution role.
3. To forward logs from S3 buckets, add the `s3:GetObject` permission to the Lambda execution role.
4. Set the environment variable `DD_ENHANCED_METRICS` to `false` on the Forwarder. This stops the Forwarder from generating enhanced metrics itself, but it still forwards custom metrics from other Lambda functions.
5. Set `DD_S3_BUCKET_NAME` to the bucket name, and grant the `s3:GetObject`, `s3:PutObject`, `s3:ListBucket`, and `s3:DeleteObject` permissions on this bucket to the Lambda execution role. This bucket is used to store the different tags caches (Lambda, S3, Step Function, and Log Group). It is also used to store unforwarded events in case of forwarding exceptions.
6. Set `DD_STORE_FAILED_EVENTS` to `true` to enable the Forwarder to also store event data in the S3 bucket. When an exception occurs while sending logs, metrics, or traces to the intake, the Forwarder stores the relevant data in the bucket. On a custom invocation, that is, on receiving an event with the `retry` keyword set to a non-empty string, the Forwarder retries sending the stored events and, when successful, clears them from the bucket. Such an invocation can be triggered manually:

   `aws lambda invoke --function-name <function-name> --payload '{"retry":"true"}' out`
To upgrade the Forwarder to a new version:

1. Take note of the current value of the `dd_forwarder_version` tag, such as `3.73.0`, in case you run into issues with the new version and need to roll back.
2. Update the Forwarder CloudFormation stack using the template URL `https://datadog-cloudformation-template.s3.amazonaws.com/aws/forwarder/latest.yaml`. You can also replace `latest` with a specific version, such as `3.73.0.yaml`, if needed. Make sure to review the changesets before applying the update.

If you encounter issues upgrading to the latest version, check the Troubleshooting section.
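The update step can be sketched with the AWS CLI as follows. The stack name is a placeholder, the template URL is the one quoted above, and `UsePreviousValue=true` keeps the parameter values already set on the stack. The command is printed for review so you can inspect the changesets first.

```shell
#!/bin/sh
# Sketch: update the existing Forwarder stack to the latest template.
# Pin a version by replacing "latest.yaml" with e.g. "3.73.0.yaml".
STACK_NAME="datadog-forwarder"
TEMPLATE_URL="https://datadog-cloudformation-template.s3.amazonaws.com/aws/forwarder/latest.yaml"
# UsePreviousValue=true reuses the parameter values already set on the stack.
CMD="aws cloudformation update-stack --stack-name $STACK_NAME --template-url $TEMPLATE_URL --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND --parameters ParameterKey=DdApiKeySecretArn,UsePreviousValue=true ParameterKey=DdSite,UsePreviousValue=true"
echo "$CMD"
```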
Starting with version 3.107.0, a new feature enables the Lambda function to store unforwarded events in case of exceptions at the intake point. If the feature is enabled using the `DD_STORE_FAILED_EVENTS` environment variable, failing events are stored under a defined directory in the same S3 bucket used to store the tags cache. The same bucket can be used to store logs from several Lambda functions under unique subdirectories.

Starting with version 3.106.0, the Lambda function adds a prefix to cache filenames stored in the S3 bucket configured in `DD_S3_BUCKET_NAME`. This allows the same bucket to be used to store cache files from several functions.

Additionally, starting with this version, the Forwarder attaches custom S3 bucket tags by default to all logs exported to S3. For example, if a service is configured to send logs to a destination S3 bucket, the Forwarder adds the bucket's tags to the logs while pulling and forwarding them.
Since version 3.99.0, the Lambda function requires Python 3.11. If upgrading an older Forwarder installation to 3.99.0 or above, ensure the AWS Lambda function is configured to use Python 3.11.

Since version 3.98.0, the Lambda function requires Python 3.10. If upgrading an older Forwarder installation to 3.98.0 or above, ensure the AWS Lambda function is configured to use Python 3.10.

Since version 3.74.0, the Lambda function requires Python 3.9. If upgrading an older Forwarder installation to 3.74.0 or above, ensure the AWS Lambda function is configured to use Python 3.9.

Since version 3.49.0, the Lambda function requires Python 3.8. If upgrading an older Forwarder installation to 3.49.0 or above, ensure the AWS Lambda function is configured to use Python 3.8.
Since version 3.0.0, the forwarder Lambda function is managed by CloudFormation. To upgrade an older forwarder installation to 3.0.0 and above, follow the steps below.
To safely delete the Forwarder and other AWS resources created by the Forwarder CloudFormation stack, follow the steps below.
Datadog recommends adjusting your Forwarder settings through CloudFormation rather than directly editing the Lambda function. Descriptions of the settings can be found in the `template.yaml` file and in the CloudFormation stack creation user interface when you launch the stack. Feel free to submit a pull request to make additional settings adjustable through the template.
Don’t forget to check if the issue has already been fixed in the recent releases.
Set the environment variable `DD_LOG_LEVEL` to `debug` on the Forwarder Lambda function to temporarily enable detailed logging (don't forget to remove it). The debugging logs show the exact event payload the Lambda function receives and the data (log, metric, or trace) payload that is sent to Datadog.
You can also add additional logging or code for deeper investigation. Find instructions for building the Forwarder code with local changes in the Contributing section.
Manually updating the `.zip` code of the Forwarder may cause conflicts with CloudFormation updates for Forwarder installations where the code is packaged in a Lambda layer (the default installation choice since version `3.33.0`) and cause invocation errors. In this case, updating the stack through CloudFormation to the latest available version twice in a row (first with `InstallAsLayer` set to `false`, and then to `true`) should solve the issue, as it removes any `.zip` remnants and installs the latest available layer.
If you still cannot figure out the issue, create a ticket for Datadog Support with a copy of the debugging logs.
If your logs contain an attribute that Datadog parses as a timestamp, you need to make sure that the timestamp is both current and in the correct format. See Log Date Remapper to learn about which attributes are parsed as timestamps and how to make sure that the timestamp is valid.
If you encounter the following error when creating S3 triggers, consider the fanout architecture proposed by AWS in this article:

An error occurred when creating the trigger: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
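The fanout approach replaces the direct S3-to-Lambda trigger with a single S3 notification to an SNS topic, which then fans out to any number of subscribers, avoiding the overlapping prefix/suffix restriction. A hedged sketch of the wiring (bucket name, account ID, and ARNs are all placeholders; commands are printed for review):

```shell
#!/bin/sh
# Sketch: S3 -> SNS -> Lambda fanout. All names and ARNs below are placeholders.
BUCKET="my-log-bucket"
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:s3-log-events"
FORWARDER_ARN="arn:aws:lambda:us-east-1:123456789012:function:datadog-forwarder"
# One notification rule on the bucket pointing at the topic...
CMD1="aws s3api put-bucket-notification-configuration --bucket $BUCKET --notification-configuration '{\"TopicConfigurations\":[{\"TopicArn\":\"$TOPIC_ARN\",\"Events\":[\"s3:ObjectCreated:*\"]}]}'"
# ...then any number of Lambda subscriptions on the topic.
CMD2="aws sns subscribe --topic-arn $TOPIC_ARN --protocol lambda --notification-endpoint $FORWARDER_ARN"
printf '%s\n%s\n' "$CMD1" "$CMD2"
```

The topic policy allowing S3 to publish, and the Lambda resource policy allowing SNS to invoke the Forwarder, are omitted from this sketch.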
We love pull requests. Here’s a quick guide.
If you would like to discuss a feature or bug fix before implementing, find us in the #serverless
channel of the Datadog Slack community.
Fork, clone, and create a branch:
git clone git@github.com:<your-username>/datadog-serverless-functions.git
git checkout -b <my-branch>
Make code changes.
Build with your local changes.
cd aws/logs_monitoring
./tools/build_bundle.sh <SEMANTIC_VERSION> # any unique version is fine
Update your testing Forwarder with the modified code and test:
# Upload in the AWS Lambda console if you don't have AWS CLI
aws lambda update-function-code \
--region <AWS_REGION> \
--function-name <FORWARDER_NAME> \
--zip-file fileb://.forwarder/aws-dd-forwarder-<SEMANTIC_VERSION>.zip
Run unit tests.
python -m unittest discover . # for code in Python
./trace_forwarder/scripts/run_tests.sh # for code in Go
Run the integration tests.
./tools/integration_tests/integration_tests.sh
# to update the snapshots if changes are expected
./tools/integration_tests/integration_tests.sh --update
If your changes affect the CloudFormation template, run the installation test against your own AWS account.
./tools/installation_test.sh
Push to your fork and submit a pull request.
If you need to ship logs to multiple Datadog organizations or other destinations, configure the `AdditionalTargetLambdaArns` CloudFormation parameter to let the Datadog Forwarder copy the incoming logs to the specified Lambda functions. These additional Lambda functions are called asynchronously with the exact same `event` the Datadog Forwarder receives.
You can run the Forwarder in a VPC private subnet and send data to Datadog over AWS PrivateLink. AWS PrivateLink can only be configured with Datadog sites hosted on AWS (for example: `datadoghq.com`, not `datadoghq.eu`).
1. Follow the instructions to add the `api`, `http-logs.intake`, and `trace.agent` endpoints to your VPC.
2. Install the Forwarder with `DdUseVPC` set to `true`.
3. Set `VPCSecurityGroupIds` and `VPCSubnetIds` based on your VPC settings.
4. Set `DdFetchLambdaTags` to `false`, because the AWS Resource Groups Tagging API doesn't support PrivateLink.

The `DdUsePrivateLink` option has been deprecated since v3.41.0. This option was previously used to instruct the Forwarder to use a special set of PrivateLink endpoints for data intake: `pvtlink.api.`, `api-pvtlink.logs.`, and `trace-pvtlink.agent.`. Since v3.41.0, the Forwarder can send data over PrivateLink to Datadog using the regular DNS names of the intake endpoints: `api.`, `http-intake.logs.`, and `trace.agent.`. Therefore, the `DdUsePrivateLink` option is no longer needed.
If you have an older deployment of the Forwarder with `DdUsePrivateLink` set to `true`, you may find mismatches between your configured PrivateLink endpoints and the ones documented by Datadog; this is expected. Although the older PrivateLink endpoints were removed from that documentation, they continue to function. When upgrading the Forwarder, no change is required: you can keep `DdUsePrivateLink` enabled and continue to use the older endpoints.

However, if you are interested in switching to the new endpoints, follow the updated instructions above to:

1. Add the regular intake endpoints (`api.`, `http-intake.logs.`, and `trace.agent.`) to your VPC.
2. Set `DdUseVPC` to `true`.
3. Set `DdUsePrivateLink` to `false`.

If you must deploy the Forwarder to a VPC without direct public internet access, and you cannot use AWS PrivateLink to connect to Datadog (for example, if your organization is hosted on the Datadog EU site: `datadoghq.eu`), then you can send data through a proxy.
When deploying the Forwarder to a VPC with a proxy:

- Set `DdUseVPC`, `VPCSecurityGroupIds`, and `VPCSubnetIds`.
- Ensure the `DdFetchLambdaTags` option is disabled, because AWS VPC does not yet offer an endpoint for the Resource Groups Tagging API.

If you are using HAProxy or NGINX:

- Set `DdApiUrl` to `http://<proxy_host>:3834` or `https://<proxy_host>:3834`.
- Set `DdTraceIntakeUrl` to `http://<proxy_host>:3835` or `https://<proxy_host>:3835`.
- Set `DdUrl` to `<proxy_host>` and `DdPort` to `3837`.

Otherwise, if you are using a Web Proxy:

- Set `DdHttpProxyURL` to your proxy endpoint, for example: `http://<proxy_host>:<port>`, or, if your proxy requires a username and password, `http://<username>:<password>@<proxy_host>:<port>`.
- Set `DdNoSsl` to `true` if connecting to the proxy using `http`.
- Set `DdSkipSslValidation` to `true` if connecting to the proxy using `https` with a self-signed certificate.

The Datadog Forwarder is signed by Datadog. To verify the integrity of the Forwarder, use the manual installation method. Create a Code Signing Configuration that includes Datadog's Signing Profile ARN (`arn:aws:signer:us-east-1:464622532012:/signing-profiles/DatadogLambdaSigningProfile/9vMI9ZAGLc`) and associate it with the Forwarder Lambda function before uploading the Forwarder ZIP file.
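As a sketch of that verification flow with the AWS CLI: create a Code Signing Configuration allowing Datadog's signing profile, then attach it to the function before uploading the ZIP. The function name and returned config ARN are placeholders; check the AWS documentation for whether your CLI version expects a profile or profile version ARN here. Commands are printed for review:

```shell
#!/bin/sh
# Sketch: enforce Datadog's code signature on the Forwarder function.
# PROFILE_ARN is Datadog's published signing profile; <FORWARDER_NAME> and
# <CSC_ARN> are placeholders.
PROFILE_ARN="arn:aws:signer:us-east-1:464622532012:/signing-profiles/DatadogLambdaSigningProfile/9vMI9ZAGLc"
CMD1="aws lambda create-code-signing-config --allowed-publishers SigningProfileVersionArns=$PROFILE_ARN --code-signing-policies UntrustedArtifactOnDeployment=Enforce"
# Attach the config returned by the call above to the Forwarder function.
CMD2="aws lambda put-function-code-signing-config --function-name <FORWARDER_NAME> --code-signing-config-arn <CSC_ARN>"
printf '%s\n%s\n' "$CMD1" "$CMD2"
```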
- `DdApiKey`: Your Datadog API key. To avoid providing the key in plaintext, use `DdApiKeySecretArn` instead.
- `DdApiKeySecretArn`: The ARN of the Secrets Manager secret that stores your Datadog API key.
- `DdSite`: The Datadog site to send data to.
- `FunctionName`: The name of the Forwarder Lambda function.
- `MemorySize`: The memory size of the Forwarder Lambda function.
- `Timeout`: The timeout of the Forwarder Lambda function.
- `ReservedConcurrency`: The reserved concurrency of the Forwarder Lambda function.
- `LogRetentionInDays`: The retention period for the Forwarder's own CloudWatch logs.
- `DdTags`: Add custom tags to forwarded logs, as a comma-separated string, for example: `env:prod,stack:classic`.
- `DdMultilineLogRegexPattern`: A regular expression used to detect the first line of multiline logs from S3, for example: `\d{2}\/\d{2}\/\d{4}` for multiline logs beginning with a pattern such as "11/10/2014".
- `DdUseTcp`: Send logs over TCP instead of HTTPS.
- `DdNoSsl`: Disable SSL when forwarding logs.
- `DdUrl`: The endpoint URL to forward logs to.
- `DdPort`: The endpoint port to forward logs to.
- `DdSkipSslValidation`: Send logs over HTTPS without validating the server's certificate.
- `DdUseCompression`: Enable or disable compression of the log payload.
- `DdCompressionLevel`: The compression level, from 0 (no compression) to 9 (best compression).
- `DdForwardLog`: Set to `false` to disable log forwarding while still forwarding other observability data.
- `RedactIp`: Replace text matching `\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}` with `xxx.xxx.xxx.xxx`.
- `RedactEmail`: Replace text matching `[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+` with `xxxxx@xxxxx.com`.
- `DdScrubbingRule`: Replace text matching the supplied regular expression with `xxxxx` (default) or with `DdScrubbingRuleReplacement` (if supplied). The log scrubbing rule is applied to the full JSON-formatted log, including any metadata that is automatically added by the Lambda function. Each instance of a pattern match is replaced until no more matches are found in each log. Using an inefficient regular expression, such as `.*`, may slow down the Lambda function.
- `DdScrubbingRuleReplacement`: The text that replaces `DdScrubbingRule` matches.
- `ExcludeAtMatch`: Exclude logs matching the supplied regular expression. If a log matches both `ExcludeAtMatch` and `IncludeAtMatch`, it is excluded.
- `IncludeAtMatch`: Only send logs matching the supplied regular expression and not excluded by `ExcludeAtMatch`.

Filtering rules are applied to the full JSON-formatted log, including any metadata that is automatically added by the Forwarder. However, transformations applied by log pipelines, which occur after logs are sent to Datadog, cannot be used to filter logs in the Forwarder. Using an inefficient regular expression, such as `.*`, may slow down the Forwarder.
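Scrubbing behavior can be previewed locally, since `RedactIp` amounts to a global regex substitution over the JSON-formatted log. A minimal sketch with `sed` (the sample log line is made up):

```shell
#!/bin/sh
# Sketch: preview what RedactIp does -- replace anything shaped like an IPv4
# address with xxx.xxx.xxx.xxx, globally, across the JSON-formatted log.
line='{"message": "client 10.0.12.7 connected, forwarded for 192.168.1.20"}'
scrubbed=$(printf '%s' "$line" | sed -E 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/xxx.xxx.xxx.xxx/g')
echo "$scrubbed"
# -> {"message": "client xxx.xxx.xxx.xxx connected, forwarded for xxx.xxx.xxx.xxx"}
```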
Some examples of regular expressions that can be used for log filtering:

- Exclude Lambda platform `START` and `END` logs with the pattern `"(START|END) RequestId:\s`. The preceding `"` is needed to match the start of the log message, which is in a JSON blob (`{"message": "START RequestId...."}`). Datadog recommends keeping the `REPORT` logs, as they are used to populate the invocations list in the serverless function views.
- Include only logs containing `errorMessage`.
- Include only logs containing a 4xx or 5xx HTTP response code with the pattern `\b[4|5][0-9][0-9]\b`.
- Include only CloudTrail logs whose `message` field contains a specific JSON key/value pair, such as `\"awsRegion\":\"us-east-1\"`. The `message` field of the log is encoded as escaped JSON, so `{"awsRegion": "us-east-1"}` is encoded as `{\"awsRegion\":\"us-east-1\"}`. Therefore, the pattern you provide must include `\` escape characters, like this: `\"awsRegion\":\"us-east-1\"`.

To test different patterns against your logs, turn on debug logs.
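Before setting `ExcludeAtMatch` or `IncludeAtMatch`, you can also dry-run a pattern against sample payloads on your own machine. A sketch using `grep -E` as a stand-in for the Forwarder's Python regex engine (results may differ for advanced constructs; the `\s` in the pattern above is replaced with a literal space here, and the sample lines are made up):

```shell
#!/bin/sh
# Sketch: dry-run an ExcludeAtMatch-style pattern against sample payloads.
# The leading '"' matches the start of the JSON-encoded message value.
pattern='"(START|END) RequestId: '
start_log='{"message": "START RequestId: 1234 Version: $LATEST"}'
report_log='{"message": "REPORT RequestId: 1234 Duration: 5 ms"}'
match() { printf '%s' "$1" | grep -qE "$pattern" && echo excluded || echo kept; }
r1=$(match "$start_log")   # START lines match -> excluded
r2=$(match "$report_log")  # REPORT lines do not match -> kept
echo "START line: $r1"
echo "REPORT line: $r2"
```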
- `DdFetchLambdaTags`: If enabled, the permission `tag:GetResources` is automatically added to the Lambda execution IAM role.
- `DdFetchLogGroupTags`: If enabled, the permission `logs:ListTagsLogGroup` is automatically added to the Lambda execution IAM role.
- `DdFetchStepFunctionsTags`: If enabled, the permission `tag:GetResources` is automatically added to the Lambda execution IAM role.
- `DdStepFunctionTraceEnabled`: Set to `true` to enable Step Functions tracing.
- `SourceZipUrl`: Override the default source location of the Forwarder ZIP file.
- `PermissionsBoundaryArn`: The ARN of a permissions boundary for the created IAM roles.
- `DdUsePrivateLink` (DEPRECATED)
- `DdHttpProxyURL`: Your web proxy endpoint. Make sure to also set `DdSkipSslValidation` to `true`.
- `DdNoProxy`: Sets the standard environment variable `NO_PROXY`. It is a comma-separated list of domain names that should be excluded from the web proxy.
- `VPCSecurityGroupIds`: The security group IDs to use when deploying the Forwarder into a VPC.
- `VPCSubnetIds`: The subnet IDs to use when deploying the Forwarder into a VPC.
- `AdditionalTargetLambdaArns`: Additional Lambda functions that are called asynchronously with the exact same `event` the Datadog Forwarder receives.
- `InstallAsLayer`: Whether to use the layer-based installation flow.
- `LayerARN`: The ARN of the layer containing the Forwarder code.
To deploy the CloudFormation Stack with the default options, you need the permissions below to save your Datadog API key as a secret, create an S3 bucket to store the Forwarder's code (ZIP file), and create Lambda functions (including execution roles and log groups).
IAM statements:
```json
{
  "Effect": "Allow",
  "Action": [
    "cloudformation:*",
    "secretsmanager:CreateSecret",
    "secretsmanager:TagResource",
    "s3:CreateBucket",
    "s3:GetObject",
    "s3:PutEncryptionConfiguration",
    "s3:PutBucketPublicAccessBlock",
    "iam:CreateRole",
    "iam:GetRole",
    "iam:PassRole",
    "iam:PutRolePolicy",
    "iam:AttachRolePolicy",
    "lambda:CreateFunction",
    "lambda:GetFunction",
    "lambda:GetFunctionConfiguration",
    "lambda:GetLayerVersion",
    "lambda:InvokeFunction",
    "lambda:PutFunctionConcurrency",
    "lambda:AddPermission",
    "lambda:TagResource",
    "logs:CreateLogGroup",
    "logs:DescribeLogGroups",
    "logs:PutRetentionPolicy"
  ],
  "Resource": "*"
}
```
The following capabilities are required when creating a CloudFormation stack: `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, and `CAPABILITY_AUTO_EXPAND`.

The CloudFormation Stack creates the following IAM roles:

IAM statements:
```json
[
  {
    "Effect": "Allow",
    "Action": [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ],
    "Resource": "*"
  },
  {
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::*",
    "Effect": "Allow"
  },
  {
    "Action": ["secretsmanager:GetSecretValue"],
    "Resource": "<ARN of DdApiKeySecret>",
    "Effect": "Allow"
  }
]
```
`ForwarderZipCopierRole`: The execution role for the ForwarderZipCopier Lambda function to download the Forwarder deployment ZIP file to an S3 bucket.

IAM statements:
```json
[
  {
    "Effect": "Allow",
    "Action": [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ],
    "Resource": "*"
  },
  {
    "Action": [
      "s3:ListBucket",
      "s3:PutObject",
      "s3:DeleteObject"
    ],
    "Resource": "<S3Bucket to Store the Forwarder Zip>",
    "Effect": "Allow"
  }
]
```
Additional helpful documentation, links, and articles: