Remap to OCSF Processor

The Remap to OCSF processor is in Preview. Complete this form to request access.

Use this processor to remap logs to Open Cybersecurity Schema Framework (OCSF) events. OCSF schema event classes are set for a specific log source and type. You can add multiple mappings to one processor. Note: Datadog recommends that the OCSF processor be the last processor in your pipeline, so that remapping is done after the logs have been processed by all the other processors.

To set up this processor:

Click Manage mappings. This opens a modal:

  • If you have already added mappings, click on a mapping in the list to edit or delete it. You can use the search bar to find a mapping by its name. Click Add Mapping if you want to add another mapping. Select Library Mapping or Custom Mapping and click Continue.
  • If you have not added any mappings yet, select Library Mapping or Custom Mapping. Click Continue.

Add a mapping

  1. Select the log type in the dropdown menu.
  2. Define a filter query. Only logs that match the specified filter query are remapped. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
  3. Review the sample source log and the resulting OCSF output.
  4. Click Save Mapping.
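The per-mapping behavior described above, where only matching logs are remapped but every log continues down the pipeline, can be sketched as follows. This is a minimal illustration; the `matches` predicate and the OCSF output fields are hypothetical stand-ins, not Datadog APIs:

```python
# Minimal sketch of the Remap to OCSF processor's per-mapping behavior.
# `matches` is a hypothetical stand-in for the mapping's filter query.

def matches(log: dict) -> bool:
    # Hypothetical filter query: remap only Okta session-start logs.
    return log.get("source") == "okta" and log.get("evt_type") == "user.session.start"

def remap_to_ocsf(log: dict) -> dict:
    # Hypothetical remap: produce an OCSF Authentication (3002) event.
    return {"class_uid": 3002, "activity_name": "Logon", "raw": log}

def process(logs: list[dict]) -> list[dict]:
    # Every log is forwarded to the next step; only matching logs are remapped.
    return [remap_to_ocsf(log) if matches(log) else log for log in logs]

logs = [
    {"source": "okta", "evt_type": "user.session.start"},
    {"source": "nginx", "evt_type": "access"},
]
out = process(logs)
```

Note that the non-matching nginx log passes through unchanged, which is why Datadog recommends placing this processor last: earlier processors still see the original log shape.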

Library mappings

These are the library mappings available:

| Log Source | Log Type | OCSF Category | Supported OCSF versions |
|---|---|---|---|
| AWS CloudTrail | Type: Management, EventName: ChangePassword | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | SetIamPolicy | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | CreateSink | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | UpdateSink | Account Change (3001) | 1.3.0, 1.1.0 |
| Google Cloud Audit | CreateBucket | Account Change (3001) | 1.3.0, 1.1.0 |
| GitHub | Create User | Account Change (3001) | 1.1.0 |
| Google Workspace Admin | addPrivilege | User Account Management (3005) | 1.1.0 |
| Okta | User session start | Authentication (3002) | 1.1.0 |
| Palo Alto Networks | Traffic | Network Activity (4001) | 1.1.0 |

Custom mappings

When you set up a custom mapping, if you try to close or exit the modal, you are prompted to export your mapping. Datadog recommends that you export your mapping to save what you have set up so far. The exported mapping is saved as a JSON file.

To set up a custom mapping:

  1. Optionally, add a name for the mapping. The default name is Custom Authentication.
  2. Define a filter query. Only logs that match the specified filter query are remapped. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
  3. Select the OCSF event category from the dropdown menu.
  4. Select the OCSF event class from the dropdown menu.
  5. Enter a log sample so that you can reference it when you add fields.
  6. Click Continue.
  7. Select any OCSF profiles that you want to add. See OCSF Schema Browser for more information.
  8. All required fields are shown. Enter the required Source Log Fields and Fallback Values for them. If you want to manually add additional fields, click + Field. Click the trash can icon to delete a field. Note: Required fields cannot be deleted.
    • The fallback value is used for the OCSF field if the log doesn’t have the source log field.
    • You can add multiple fields for Source Log Fields. For example, Okta’s user.session.start logs have either the eventType or legacyEventType field. You can map both fields to the same OCSF field.
    • If you have your own OCSF mappings in JSON or saved a previous mapping that you want to use, click Import Configuration File.
  9. Click Continue.
  10. Some log source values must be mapped to OCSF values. For example, if a source log’s severity field is mapped to the OCSF severity_id field, the source log’s severity values must be mapped to the OCSF severity_id values. See severity_id in Authentication [3002] for a list of OCSF values. An example of mapping severity values:

    | Log source value | OCSF value |
    |---|---|
    | INFO | Informational |
    | WARN | Medium |
    | ERROR | High |
  11. All values that are required to be mapped to an OCSF value are listed. Click + Add Row if you want to map additional values.
  12. Click Save Mapping.

Filter query syntax

Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. For all processors except the filter processor, logs that do not match the query are still sent to the next step of the pipeline; for the filter processor, logs that do not match the query are dropped.

For any attribute, tag, or key:value pair that is not a reserved attribute, your query must start with @. Conversely, to filter reserved attributes, you do not need to prepend @ to your filter query.

For example, to filter out and drop status:info logs, your filter can be set as NOT (status:info). To filter out and drop system-status:info, your filter must be set as NOT (@system-status:info).

Filter query examples:

  • NOT (status:debug): This filters for only logs that do not have the status DEBUG.
  • status:ok service:flask-web-app: This filters for all logs with the status OK from your flask-web-app service.
    • This query can also be written as: status:ok AND service:flask-web-app.
  • host:COMP-A9JNGYK OR host:COMP-J58KAS: This filter query only matches logs from the labeled hosts.
  • @user.status:inactive: This filters for logs with the status inactive nested under the user attribute.

Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog’s Log Search Syntax.
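The @-prefix and case-sensitivity rules above can be illustrated with a toy evaluator for single key:value terms. This is a deliberately simplified sketch: real filter queries also support NOT, AND, OR, and the full Log Search Syntax, and the reserved-attribute set here is an assumption for the example:

```python
# Toy, case-sensitive evaluator for a single key:value filter term.
# RESERVED is an illustrative subset of reserved attributes.
RESERVED = {"status", "service", "host", "source", "message"}

def matches_term(term: str, log: dict) -> bool:
    """Keys prefixed with @ address (possibly nested) custom attributes;
    bare keys address reserved attributes only. Comparison is case sensitive."""
    key, _, expected = term.partition(":")
    if key.startswith("@"):
        # Walk nested attributes, e.g. @user.status -> log["user"]["status"]
        node = log
        for part in key[1:].split("."):
            if not isinstance(node, dict) or part not in node:
                return False
            node = node[part]
        return node == expected
    if key not in RESERVED:
        return False  # non-reserved attributes require the @ prefix
    return log.get(key) == expected

log = {"status": "ok", "user": {"status": "inactive"}}
```

With this sketch, `status:ok` matches the reserved attribute, `@user.status:inactive` matches the nested custom attribute, and `status:OK` does not match because comparison is case sensitive.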
