The Remap to OCSF processor is in Preview. Complete this form to request access.
Use this processor to remap logs to Open Cybersecurity Schema Framework (OCSF) events. OCSF event classes are specific to a log source and log type. You can add multiple mappings to one processor.

Note: Datadog recommends that the OCSF processor be the last processor in your pipeline, so that remapping is done after the logs have been processed by all other processors.
To set up this processor:
Click Manage mappings. This opens a modal:
- If you have already added mappings, click on a mapping in the list to edit or delete it. You can use the search bar to find a mapping by its name. Click Add Mapping if you want to add another mapping. Select Library Mapping or Custom Mapping and click Continue.
- If you have not added any mappings yet, select Library Mapping or Custom Mapping. Click Continue.
Add a mapping
- Select the log type in the dropdown menu.
- Define a filter query. Only logs that match the specified filter query are remapped. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
- Review the sample source log and the resulting OCSF output.
- Click Save Mapping.
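For example, if you are remapping AWS CloudTrail password-change events, the filter query might look like the following. The `source:cloudtrail` tag and `@eventName` attribute are assumptions about how your CloudTrail logs are tagged and structured; adjust them to match your own logs.

```
source:cloudtrail @eventName:ChangePassword
```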
Library mappings
These are the library mappings available:

| Log Source | Log Type | OCSF Category | Supported OCSF versions |
|---|---|---|---|
| AWS CloudTrail | Type: Management, EventName: ChangePassword | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | SetIamPolicy | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | CreateSink | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | UpdateSink | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | CreateBucket | Account Change (3001) | 1.3.0, 1.1.0 |
| GitHub | Create User | Account Change (3001) | 1.1.0 |
| Google Workspace Admin | addPrivilege | User Account Management (3005) | 1.1.0 |
| Okta | User session start | Authentication (3002) | 1.1.0 |
| Palo Alto Networks | Traffic | Network Activity (4001) | 1.1.0 |
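As an illustration of what a library mapping produces, take the Okta user session start mapping. A simplified Okta source log might look like this:

```json
{
  "eventType": "user.session.start",
  "severity": "INFO",
  "actor": { "alternateId": "jane.doe@example.com" }
}
```

The remapped OCSF Authentication (3002) event could then take roughly the following shape. The field values here are illustrative, not the exact processor output; see the OCSF Schema Browser for the authoritative class definition.

```json
{
  "class_uid": 3002,
  "category_uid": 3,
  "activity_id": 1,
  "severity_id": 1,
  "user": { "name": "jane.doe@example.com" },
  "metadata": { "version": "1.1.0" }
}
```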
Custom mappings

When you set up a custom mapping, if you try to close or exit the modal, you are prompted to export your mapping. Datadog recommends that you export your mapping to save what you have set up so far. The exported mapping is saved as a JSON file.

To set up a custom mapping:
- Optionally, add a name for the mapping. The default name is `Custom Authentication`.
- Define a filter query. Only logs that match the specified filter query are remapped. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
- Select the OCSF event category from the dropdown menu.
- Select the OCSF event class from the dropdown menu.
- Enter a log sample so that you can reference it when you add fields.
- Click Continue.
- Select any OCSF profiles that you want to add. See OCSF Schema Browser for more information.
- All required fields are shown. Enter the Source Log Fields and Fallback Values for them. If you want to manually add additional fields, click + Field. Click the trash can icon to delete a field. Note: Required fields cannot be deleted.
  - The fallback value is used for the OCSF field if the log doesn’t have the source log field.
  - You can add multiple fields for Source Log Fields. For example, Okta’s `user.session.start` logs have either the `eventType` or the `legacyEventType` field. You can map both fields to the same OCSF field. See the configuration sketch after these steps for an illustration.
- If you have your own OCSF mappings in JSON or saved a previous mapping that you want to use, click Import Configuration File. See Custom Mapping Configuration Format for more information.
- Click Continue.
- Some log source values must be mapped to OCSF values. For example, if a source log’s severity field is mapped to the OCSF `severity_id` field, the source values must be mapped to the OCSF `severity_id` values. See `severity_id` in Authentication [3002] for a list of OCSF values. An example of mapping severity values:

| Log source value | OCSF value |
|---|---|
| INFO | Informational |
| WARN | Medium |
| ERROR | High |
- All values that are required to be mapped to an OCSF value are listed. Click + Add Row if you want to map additional values.
- Click Save Mapping.
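To make the steps above concrete, the sketch below shows one way an exported custom mapping could be organized. This is a hypothetical shape for illustration only, not the actual Custom Mapping Configuration Format; the key names (`field_mappings`, `source_log_fields`, `fallback_value`, `value_mappings`) are assumptions. Note how `eventType` and `legacyEventType` are both listed as source fields for one OCSF field, and how source severity values are mapped to `severity_id` values.

```json
{
  "_comment": "Hypothetical shape for illustration; see Custom Mapping Configuration Format for the real schema",
  "name": "Custom Authentication",
  "filter_query": "source:okta",
  "ocsf_class": "Authentication (3002)",
  "field_mappings": [
    {
      "ocsf_field": "activity_name",
      "source_log_fields": ["eventType", "legacyEventType"],
      "fallback_value": "unknown"
    },
    {
      "ocsf_field": "severity_id",
      "source_log_fields": ["severity"],
      "value_mappings": { "INFO": "Informational", "WARN": "Medium", "ERROR": "High" }
    }
  ]
}
```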
Filter query syntax
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are sent to the next step of the pipeline. For the filter processor, logs that do not match the query are dropped.

For any attribute, tag, or `key:value` pair that is not a reserved attribute, your query must start with `@`. Conversely, to filter on reserved attributes, you do not need to prepend `@` to your filter query.

For example, to filter out and drop `status:info` logs, set your filter to `NOT (status:info)`. To filter out and drop `system-status:info`, set your filter to `NOT (@system-status:info)`.
Filter query examples:
- `NOT (status:debug)`: This filters for only logs that do not have the status `DEBUG`.
- `status:ok service:flask-web-app`: This filters for all logs with the status `OK` from your `flask-web-app` service.
  - This query can also be written as `status:ok AND service:flask-web-app`.
- `host:COMP-A9JNGYK OR host:COMP-J58KAS`: This filter query only matches logs from the labeled hosts.
- `@user.status:inactive`: This filters for logs with the status `inactive` nested under the `user` attribute.
Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog’s Log Search Syntax.