This processor filters for logs that match the specified filter query and drops all logs that do not match. If a log is dropped by this processor, none of the processors below it receives that log. This processor can be used to filter out unneeded logs, such as debug or warning logs.
To set up the filter processor:
- Define a filter query. The query you specify filters for and passes on only the logs that match it, dropping all other logs.
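For example, a hypothetical filter query such as `@status:error` would pass on only the first of the following logs and drop the second:
{"status": "error", "service": "auth", "message": "..."}
{"status": "info", "service": "auth", "message": "..."}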
The remap processor can add, drop, or rename fields within your individual log data. Use this processor to enrich your logs with additional context, remove low-value fields to reduce volume, and standardize naming across important attributes. Select add field, drop field, or rename field in the dropdown menu to get started.
Add field
Use add field to append a new key-value field to your log.
To set up the add field processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Enter the field and value you want to add. To specify a nested field for your key, use the path notation `<OUTER_FIELD>.<INNER_FIELD>`. All values are stored as strings.
Note: If the field you want to add already exists, the Worker throws an error and the existing field remains unchanged.
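For example, suppose a hypothetical add field processor is configured with the key `team.owner` and the value `sre` (placeholder names used only for illustration). An incoming log such as:
{"message": "...", "service": "auth"}
is sent to the next step with the nested field added:
{"message": "...", "service": "auth", "team": {"owner": "sre"}}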
Drop field
Use drop field to drop a field from logs that match the filter you specify below. The processor can delete objects, so you can use it to drop nested keys.
To set up the drop field processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Enter the key of the field you want to drop. To specify a nested field for your key, use the path notation `<OUTER_FIELD>.<INNER_FIELD>`.
Note: If the specified key does not exist, the log is unaffected.
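For example, dropping a hypothetical nested key `user.email` (a placeholder name) turns a log such as:
{"message": "...", "user": {"email": "jane@example.com", "id": "123"}}
into:
{"message": "...", "user": {"id": "123"}}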
Rename field
Use rename field to rename a field within your log.
To set up the rename field processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Enter the name of the field you want to rename in the Source field. To specify a nested field for your key, use the path notation `<OUTER_FIELD>.<INNER_FIELD>`. Once renamed, your original field is deleted unless you enable the Preserve source tag checkbox described below.
Note: If the source key you specify doesn’t exist, a default `null` value is applied to your target.
- In the Target field, enter the name you want the source field to be renamed to. To specify a nested field for your key, use the path notation `<OUTER_FIELD>.<INNER_FIELD>`.
Note: If the target field you specify already exists, the Worker throws an error and does not overwrite the existing target field.
- Optionally, check the Preserve source tag box if you want to retain the original source field and duplicate the information from your source key to your specified target key. If this box is not checked, the source key is dropped after it is renamed.
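For example, renaming a hypothetical source field `svc` to the target field `service`, with Preserve source tag left unchecked, turns this log:
{"svc": "auth", "message": "..."}
into:
{"service": "auth", "message": "..."}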
Path notation example
For the following message structure, use `outer_key.a.double_inner_key` to refer to the key with the value `double_inner_value`.
{
    "outer_key": {
        "inner_key": "inner_value",
        "a": {
            "double_inner_key": "double_inner_value",
            "b": "b value"
        },
        "c": "c value"
    },
    "d": "d value"
}
This processor samples your logging traffic for a representative subset at the rate that you define, dropping the remaining logs. As an example, you can use this processor to sample 20% of logs from a noisy non-critical service.
The sampling only applies to logs that match your filter query and does not impact other logs. If a log is dropped at this processor, none of the processors below receives that log.
To set up the sample processor:
- Define a filter query. Only logs that match the specified filter query are sampled at the specified retention rate below. The sampled logs and the logs that do not match the filter query are sent to the next step in the pipeline.
- Set the retain field with your desired sampling rate expressed as a percentage. For example, entering `2` means 2% of logs are retained out of all the logs that match the filter query.
This processor parses logs using the grok parsing rules that are available for a set of sources. The rules are automatically applied to logs based on the log source. Therefore, logs must have a `source` field with the source name. If this field is not added when the log is sent to the Observability Pipelines Worker, you can use the Add field processor to add it.
If the `source` field of a log matches one of the grok parsing rule sets, the log’s `message` field is checked against those rules. If a rule matches, the resulting parsed data is added in the `message` field as a JSON object, overwriting the original `message`.
If there isn’t a `source` field on the log, or no rule matches the log `message`, then no changes are made to the log and it is sent to the next step in the pipeline.
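As an illustrative sketch, suppose a log arrives with a `source` that matches a rule set whose rules extract key-value pairs from the message (the source name, message format, and extracted fields below are placeholders):
{"source": "my_app", "message": "level=error code=500"}
If a rule matches, the parsed data overwrites the `message` field as a JSON object:
{"source": "my_app", "message": {"level": "error", "code": "500"}}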
To set up the grok parser, define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
To test log samples for out-of-the-box rules:
- Click the Preview Library Rules button.
- Search or select a source in the dropdown menu.
- Enter a log sample to test the parsing rules for that source.
To add a custom parsing rule:
- Click Add Custom Rule.
- If you want to clone a library rule, select Clone library rule and then the library source from the dropdown menu.
- If you want to create a custom rule, select Custom and then enter the `source`. The parsing rules are applied to logs with that `source`.
- Enter log samples to test the parsing rules.
- Enter the rules for parsing the logs. See Parsing for more information on writing parsing rules. Note: The `url`, `useragent`, and `csv` filters are not available.
- Click Advanced Settings if you want to add helper rules. See Using helper rules to factorize multiple parsing rules for more information.
- Click Add Rule.
The quota processor measures the logging traffic for logs that match the filter you specify. When the configured daily quota is met inside the 24-hour rolling window, the processor can either drop additional logs or send an alert using a Datadog monitor. You can configure the processor to track the total volume or the total number of events. The pipeline uses the name of the quota to identify the quota across multiple Remote Configuration deployments of the Worker.
As an example, you can configure this processor to drop new logs or trigger an alert without dropping logs after the processor has received 10 million events from a certain service in the last 24 hours.
To set up the quota processor:
- Enter a name for the quota processor.
- Define a filter query. Only logs that match the specified filter query are counted towards the daily limit.
- Logs that match the quota filter and are within the daily quota are sent to the next step in the pipeline.
- Logs that do not match the quota filter are sent to the next step of the pipeline.
- In the Unit for quota dropdown menu, select if you want to measure the quota by the number of Events or by the Volume in bytes.
- Set the daily quota limit and select the unit of magnitude for your desired quota.
- Check the Drop events checkbox if you want to drop all events when your quota is met. Leave it unchecked if you plan to set up a monitor that sends an alert when the quota is met.
- If logs that match the quota filter are received after the daily quota has been met and the Drop events option is selected, then those logs are dropped. In this case, only logs that did not match the filter query are sent to the next step in the pipeline.
- If logs that match the quota filter are received after the daily quota has been met and the Drop events option is not selected, then those logs and the logs that did not match the filter query are sent to the next step in the pipeline.
- Optional: Click Add Field if you want to set a quota on a specific service or region field.
a. Enter the field name you want to partition by. See the Partition example for more information.
i. Select the Ignore when missing option if you want the quota applied only to events that match the partition. See the Ignore when missing example for more information.
ii. Optional: Click Overrides if you want to set different quotas for the partitioned field.
- Click Download as CSV for an example of how to structure the CSV.
- Drag and drop your overrides CSV to upload it. You can also click Browse to select the file to upload it. See the Overrides example for more information.
b. Click Add Field if you want to add another partition.
Examples
Partition example
Use Partition by if you want to set a quota on a specific service or region. For example, if you want to set a quota for 10 events per day and group the events by the `service` field, enter `service` into the Partition by field.
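With that hypothetical configuration, each distinct `service` value gets its own 10-event quota, so logs such as the following are counted against separate quotas:
{"service": "a", "message": "..."}
{"service": "b", "message": "..."}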
Ignore when missing example
Select Ignore when missing if you want the quota applied only to events that match the partition. For example, if the Worker receives the following set of events:
{"service":"a", "source":"foo", "message": "..."}
{"service":"b", "source":"bar", "message": "..."}
{"service":"b", "message": "..."}
{"source":"redis", "message": "..."}
{"message": "..."}
and the Ignore when missing option is selected, then the Worker:
- creates a set for logs with `service:a` and `source:foo`
- creates a set for logs with `service:b` and `source:bar`
- ignores the last three events
The quota is applied to the two sets of logs and not to the last three events.
If the Ignore when missing option is not selected, the quota is applied to all five events.
Overrides example
If you are partitioning by `service` and have two services, `a` and `b`, you can use overrides to apply different quotas for them. For example, if you want `service:a` to have a quota limit of 5,000 bytes and `service:b` to have a limit of 50 events, the override rules look like this:
| Service | Type | Limit |
|---|---|---|
| a | Bytes | 5,000 |
| b | Events | 50 |
The reduce processor groups multiple log events into a single log, based on the fields specified and the merge strategies selected. Logs are grouped at 10-second intervals. After the interval has elapsed for the group, the reduced log for that group is sent to the next step in the pipeline.
To set up the reduce processor:
- Define a filter query. Only logs that match the specified filter query are processed. Reduced logs and logs that do not match the filter query are sent to the next step in the pipeline.
- In the Group By section, enter the field you want to group the logs by.
- Click Add Group by Field to add additional fields.
- In the Merge Strategy section:
- In On Field, enter the name of the field you want to merge the logs on.
- Select the merge strategy in the Apply dropdown menu. This is the strategy used to combine events. See the following Merge strategies section for descriptions of the available strategies.
- Click Add Merge Strategy to add additional strategies.
Merge strategies
These are the available merge strategies for combining log events.
| Name | Description |
|---|---|
| Array | Appends each value to an array. |
| Concat | Concatenates each string value, delimited with a space. |
| Concat newline | Concatenates each string value, delimited with a newline. |
| Concat raw | Concatenates each string value, without a delimiter. |
| Discard | Discards all values except the first value that was received. |
| Flat unique | Creates a flattened array of all unique values that were received. |
| Longest array | Keeps the longest array that was received. |
| Max | Keeps the maximum numeric value that was received. |
| Min | Keeps the minimum numeric value that was received. |
| Retain | Discards all values except the last value that was received. Works as a way to coalesce by not retaining `null`. |
| Shortest array | Keeps the shortest array that was received. |
| Sum | Sums all numeric values that were received. |
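For example, a hypothetical reduce processor that groups by `host` and applies the Sum strategy to a `bytes_sent` field (placeholder names) would combine these two logs received in the same 10-second interval:
{"host": "web-1", "bytes_sent": 100}
{"host": "web-1", "bytes_sent": 250}
into a single reduced log:
{"host": "web-1", "bytes_sent": 350}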
The deduplicate processor removes copies of data to reduce volume and noise. It caches 5,000 messages at a time and compares your incoming logs traffic against the cached messages. For example, this processor can be used to keep only unique warning logs in the case where multiple identical warning logs are sent in succession.
To set up the deduplicate processor:
- Define a filter query. Only logs that match the specified filter query are processed. Deduped logs and logs that do not match the filter query are sent to the next step in the pipeline.
- In the Type of deduplication dropdown menu, select whether you want to Match on or Ignore the fields specified below.
  - If Match is selected, then after a log passes through, future logs that have the same values for all of the fields you specify below are removed.
  - If Ignore is selected, then after a log passes through, future logs that have the same values for all of their fields, except the ones you specify below, are removed.
- Enter the fields you want to match on, or ignore. At least one field is required, and you can specify a maximum of three fields.
- Use the path notation `<OUTER_FIELD>.<INNER_FIELD>` to match subfields. See the Path notation example below.
- Click Add field to add additional fields you want to filter on.
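For example, a hypothetical deduplicate processor set to Match on the `message` field keeps the first of the following logs and removes the second, because both have the same value for the specified field:
{"message": "disk usage above 90%", "host": "web-1"}
{"message": "disk usage above 90%", "host": "web-2"}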
Path notation example
For the following message structure, use `outer_key.a.double_inner_key` to refer to the key with the value `double_inner_value`.
{
    "outer_key": {
        "inner_key": "inner_value",
        "a": {
            "double_inner_key": "double_inner_value",
            "b": "b value"
        },
        "c": "c value"
    },
    "d": "d value"
}
The Sensitive Data Scanner processor scans logs to detect and redact or hash sensitive information such as PII, PCI, and custom sensitive data. You can pick from our library of predefined rules, or input custom Regex rules to scan for sensitive data.
To set up the sensitive data scanner processor:
- Define a filter query. Only logs that match the specified filter query are scanned and processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Click Add Scanning Rule.
- Name your scanning rule.
- In the Select scanning rule type field, select whether you want to create a rule from the library or create a custom rule.
- If you are creating a rule from the library, select the library pattern you want to use.
- If you are creating a custom rule, enter the regex pattern to check against the data.
- In the Scan entire or part of event section, select if you want to scan the Entire Event, Specific Attributes, or Exclude Attributes in the dropdown menu.
  - If you selected Specific Attributes, click Add Field and enter the specific attributes you want to scan. You can add up to three fields. Use path notation (`outer_key.inner_key`) to access nested keys. For specified attributes with nested data, all nested data is scanned.
  - If you selected Exclude Attributes, click Add Field and enter the specific attributes you want to exclude from scanning. You can add up to three fields. Use path notation (`outer_key.inner_key`) to access nested keys. For specified attributes with nested data, all nested data is excluded.
- In the Define action on match section, select the action you want to take for the matched information. Redaction, partial redaction, and hashing are all irreversible actions.
- If you are redacting the information, specify the text to replace the matched data.
- If you are partially redacting the information, specify the number of characters you want to redact and whether to apply the partial redaction to the start or the end of your matched data.
- Note: If you select hashing, the UTF-8 bytes of the match are hashed with the 64-bit fingerprint of FarmHash.
- Optionally, add tags to all events that match the regex, so that you can filter, analyze, and alert on the events.
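For example, a hypothetical scanning rule that matches 16-digit card numbers, used with the redact action and the replacement text [REDACTED], turns this log:
{"message": "payment declined for card 4111111111111111"}
into:
{"message": "payment declined for card [REDACTED]"}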
This processor adds a field with the name of the host that sent the log. For example, `hostname: 613e197f3526`. Note: If the `hostname` already exists, the Worker throws an error and does not overwrite the existing `hostname`.
To set up this processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
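For example, a log entering this processor as:
{"message": "..."}
leaves with the host name added (the value shown is a placeholder):
{"message": "...", "hostname": "613e197f3526"}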
This processor converts the specified field into JSON objects.
To set up this processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Enter the name of the field you want to parse JSON on.
Note: The parsed JSON overwrites what was originally contained in the field.
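For example, parsing JSON on a hypothetical `payload` field (a placeholder name) that contains a JSON string:
{"payload": "{\"status\": \"ok\", \"latency_ms\": 12}"}
replaces the original string value with the parsed object:
{"payload": {"status": "ok", "latency_ms": 12}}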
Use this processor to enrich your logs with information from a reference table, which could be a local file or database.
To set up the enrichment table processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they do or do not match the filter query, are sent to the next step in the pipeline.
- Enter the source attribute of the log. The source attribute’s value is what you want to find in the reference table.
- Enter the target attribute. The target attribute’s value stores, as a JSON object, the information found in the reference table.
- Select the type of reference table you want to use, File or GeoIP.
- For the File type:
- Enter the file path.
- Enter the column name. The column name in the enrichment table is used for matching the source attribute value. See the Enrichment file example.
- For the GeoIP type, enter the GeoIP path.
Enrichment file example
For this example, `merchant_id` is used as the source attribute and `merchant_info` as the target attribute.
This is the example reference table that the enrichment processor uses:
| merch_id | merchant_name | city | state |
|---|---|---|---|
| 803 | Andy’s Ottomans | Boise | Idaho |
| 536 | Cindy’s Couches | Boulder | Colorado |
| 235 | Debra’s Benches | Las Vegas | Nevada |
`merch_id` is set as the column name the processor uses to find the source attribute’s value. Note: The source attribute’s value does not have to match the column name.
If the enrichment processor receives a log with `"merchant_id":"536"`:
- The processor looks for the value `536` in the reference table’s `merch_id` column.
- After it finds the value, it adds the entire row of information from the reference table to the `merchant_info` attribute as a JSON object:
merchant_info {
    "merchant_name": "Cindy's Couches",
    "city": "Boulder",
    "state": "Colorado"
}
Many types of logs are meant to be used for telemetry to track trends, such as KPIs, over long periods of time. Generating metrics from your logs is a cost-effective way to summarize log data from high-volume logs, such as CDN logs, VPC flow logs, firewall logs, and network logs. Use the generate metrics processor to generate either a count metric of logs that match a query or a distribution metric of a numeric value contained in the logs, such as a request duration.
Note: The metrics generated are custom metrics and billed accordingly. See Custom Metrics Billing for more information.
To set up the processor:
Click Manage Metrics to create new metrics or edit existing metrics. This opens a side panel.
- If you have not created any metrics yet, enter the metric parameters as described in the Add a metric section to create a metric.
- If you have already created metrics, click on the metric’s row in the overview table to edit or delete it. Use the search bar to find a specific metric by its name, and then select the metric to edit or delete it. Click Add Metric to add another metric.
Add a metric
- Enter a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline. Note: Since a single processor can generate multiple metrics, you can define a different filter query for each metric.
- Enter a name for the metric.
- In the Define parameters section, select the metric type (count, gauge, or distribution). See the Count metric example and Distribution metric example. Also see Metrics Types for more information.
- For gauge and distribution metric types, select a log field which has a numeric (or parseable numeric string) value that is used for the value of the generated metric.
- For the distribution metric type, the log field’s value can be an array of (parseable) numerics, which is used for the generated metric’s sample set.
- The Group by field determines how the metric values are grouped together. For example, if you have hundreds of hosts spread across four regions, grouping by region allows you to graph one line for every region. The fields listed in the Group by setting are set as tags on the configured metric.
- Click Add Metric.
Metrics Types
You can generate these types of metrics for your logs. See the Metrics Types and Distributions documentation for more details.
| Metric type | Description | Example |
|---|---|---|
| COUNT | Represents the total number of event occurrences in one time interval. This value can be reset to zero, but cannot be decreased. | You want to count the number of logs with `status:error`. |
| GAUGE | Represents a snapshot of events in one time interval. | You want to measure the latest CPU utilization per host for all logs in the production environment. |
| DISTRIBUTION | Represents the global statistical distribution of a set of values calculated across your entire distributed infrastructure in one time interval. | You want to measure the average time it takes for an API call to be made. |
Count metric example
For this `status:error` log example:
{"status": "error", "env": "prod", "host": "ip-172-25-222-111.ec2.internal"}
To create a count metric that counts the number of logs that contain `"status":"error"` and groups them by `env` and `host`, enter the following information:
| Input parameters | Value |
|---|---|
| Filter query | @status:error |
| Metric name | status_error_total |
| Metric type | Count |
| Group by | env, host |
Distribution metric example
For this example of an API response log:
{
    "timestamp": "2018-10-15T17:01:33Z",
    "method": "GET",
    "status": 200,
    "request_body": "{\"information\"}",
    "response_time_seconds": 10
}
To create a distribution metric that measures the average time it takes for an API call to be made, enter the following information:
| Input parameters | Value |
|---|---|
| Filter query | @method |
| Metric name | status_200_response |
| Metric type | Distribution |
| Select a log attribute | response_time_seconds |
| Group by | method |
Use this processor to add a field with the name and value of an environment variable to the log message.
To set up this processor:
- Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
- Enter the field name for the environment variable.
- Enter the environment variable name.
- Click Add Environment Variable if you want to add another environment variable.
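For example, configuring a hypothetical field name `region` with the environment variable `AWS_REGION` (placeholder names, with a placeholder value) adds the variable’s value to each processed log:
{"message": "...", "region": "us-east-1"}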
Blocked environment variables
Environment variables that match any of the following patterns are blocked from being added to log messages because the environment variable could contain sensitive data.
- `CONNECTIONSTRING` / `CONNECTION-STRING` / `CONNECTION_STRING`
- `AUTH`
- `CERT`
- `CLIENTID` / `CLIENT-ID` / `CLIENT_ID`
- `CREDENTIALS`
- `DATABASEURL` / `DATABASE-URL` / `DATABASE_URL`
- `DBURL` / `DB-URL` / `DB_URL`
- `KEY`
- `OAUTH`
- `PASSWORD`
- `PWD`
- `ROOT`
- `SECRET`
- `TOKEN`
- `USER`
The environment variable is matched to the pattern and not the literal word. For example, `PASSWORD` blocks environment variables like `USER_PASSWORD` and `PASSWORD_SECRET` from getting added to the log messages.