The Remap to OCSF processor is in Preview. Complete this form to request access.
Use this processor to remap logs to Open Cybersecurity Schema Framework (OCSF) events. Each OCSF event class is set for a specific log source and log type, and you can add multiple mappings to one processor. Note: Datadog recommends that the Remap to OCSF processor be the last processor in your pipeline, so that remapping is done after the logs have been processed by all other processors.
To set up this processor:
Click Manage mappings. This opens a modal:
- If you have already added mappings, click on a mapping in the list to edit or delete it. You can use the search bar to find a mapping by its name. Click Add Mapping if you want to add another mapping. Select Library Mapping or Custom Mapping and click Continue.
- If you have not added any mappings yet, select Library Mapping or Custom Mapping. Click Continue.
Add a mapping
- Select the log type in the dropdown menu.
- Define a filter query. Only logs that match the specified filter query are remapped. All logs, whether or not they match the filter query, are sent to the next step in the pipeline.
- Review the sample source log and the resulting OCSF output.
- Click Save Mapping.
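The filter behavior above can be sketched in a few lines of Python: logs that match the filter query are remapped, and every log, remapped or not, continues to the next step in the pipeline. This is an illustrative sketch of the described behavior, not Datadog's implementation; the `matches` and `remap_to_ocsf` helpers are hypothetical stand-ins for the filter query and mapping configured in the UI.

```python
def apply_ocsf_mapping(logs, matches, remap_to_ocsf):
    """Illustrative sketch of the processor's pass-through behavior.

    `matches` and `remap_to_ocsf` are hypothetical stand-ins for the
    filter query and the OCSF mapping configured in the UI.
    """
    out = []
    for log in logs:
        if matches(log):
            out.append(remap_to_ocsf(log))  # matching logs are remapped
        else:
            out.append(log)  # non-matching logs pass through unchanged
    return out

# Example: remap only logs whose "source" is "okta".
logs = [
    {"source": "okta", "eventType": "user.session.start"},
    {"source": "nginx", "status": "ok"},
]
result = apply_ocsf_mapping(
    logs,
    matches=lambda l: l.get("source") == "okta",
    remap_to_ocsf=lambda l: {"class_uid": 3002, "raw": l},
)
```

Note that both logs reach the output: only the matching one is remapped, while the other is forwarded unchanged.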
Library mappings
These are the library mappings available:
Log Source | Log Type | OCSF Category | Supported OCSF versions |
---|---|---|---|
AWS CloudTrail | Type: Management, EventName: ChangePassword | Account Change (3001) | 1.3.0, 1.1.0 |
Google Cloud Audit | SetIamPolicy | Account Change (3001) | 1.3.0, 1.1.0 |
Google Cloud Audit | CreateSink | Account Change (3001) | 1.3.0, 1.1.0 |
Google Cloud Audit | UpdateSink | Account Change (3001) | 1.3.0, 1.1.0 |
Google Cloud Audit | CreateBucket | Account Change (3001) | 1.3.0, 1.1.0 |
GitHub | Create User | Account Change (3001) | 1.1.0 |
Google Workspace Admin | addPrivilege | User Account Management (3005) | 1.1.0 |
Okta | User session start | Authentication (3002) | 1.1.0 |
Palo Alto Networks | Traffic | Network Activity (4001) | 1.1.0 |
Custom mappings
When you set up a custom mapping, if you try to close or exit the modal, you are prompted to export your mapping. Datadog recommends that you export your mapping to save what you have set up so far. The exported mapping is saved as a JSON file.
To set up a custom mapping:
- Optionally, add a name for the mapping. The default name is `Custom Authentication`.
- Define a filter query. Only logs that match the specified filter query are remapped. All logs, whether or not they match the filter query, are sent to the next step in the pipeline.
- Select the OCSF event category from the dropdown menu.
- Select the OCSF event class from the dropdown menu.
- Enter a log sample so that you can reference it when you add fields.
- Click Continue.
- Select any OCSF profiles that you want to add. See OCSF Schema Browser for more information.
- All required fields are shown. Enter the required Source Log Fields and Fallback Values for them. To manually add additional fields, click + Field. Click the trash can icon to delete a field. Note: Required fields cannot be deleted.
- The fallback value is used for the OCSF field if the log doesn’t have the source log field.
- You can add multiple fields for Source Log Fields. For example, Okta's `user.session.start` logs have either the `eventType` or `legacyEventType` field. You can map both fields to the same OCSF field.
- If you have your own OCSF mappings in JSON or saved a previous mapping that you want to use, click Import Configuration File. See Custom Mapping Configuration Format for more information.
- Click Continue.
- Some log source values must be mapped to OCSF values. For example, the values of a source log's severity field that is mapped to the OCSF `severity_id` field must be mapped to the OCSF `severity_id` values. See `severity_id` in Authentication [3002] for a list of OCSF values. An example of mapping severity values:

Log source value | OCSF value |
---|---|
INFO | Informational |
WARN | Medium |
ERROR | High |
- All values that are required to be mapped to an OCSF value are listed. Click + Add Row if you want to map additional values.
- Click Save Mapping.
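The field-mapping rules above (multiple source fields, fallback values, and value mapping) can be sketched in plain Python. This is an illustrative sketch of the described behavior, not Datadog's implementation; the structure of the `mapping` dictionary and its key names are hypothetical.

```python
# Hypothetical mapping config: each OCSF field lists candidate source
# fields (tried in order), a fallback value, and an optional value map.
mapping = {
    "activity_name": {
        "source_fields": ["eventType", "legacyEventType"],  # first present wins
        "fallback": "unknown",
    },
    "severity": {
        "source_fields": ["severity"],
        "fallback": "Informational",
        "value_map": {"INFO": "Informational", "WARN": "Medium", "ERROR": "High"},
    },
}

def remap(log, mapping):
    """Build an OCSF-style event from a source log using the rules above."""
    event = {}
    for ocsf_field, rule in mapping.items():
        # Take the first source field present in the log; otherwise the fallback.
        value = next((log[f] for f in rule["source_fields"] if f in log),
                     rule["fallback"])
        # Translate source values to OCSF values when a value map is defined.
        value = rule.get("value_map", {}).get(value, value)
        event[ocsf_field] = value
    return event

# A log with only the legacy field still maps, and WARN becomes Medium:
event = remap({"legacyEventType": "user.session.start", "severity": "WARN"}, mapping)
```

Because the source fields are tried in order, a log carrying only `legacyEventType` still populates the same OCSF field, and the value map translates `WARN` to `Medium` as in the severity table above.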
Filter query syntax
Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. And for all processors except the filter processor, logs that do not match the query are sent to the next step in the pipeline. For the filter processor, logs that do not match the query are dropped.
For any attribute, tag, or `key:value` pair that is not a reserved attribute, the query must start with `@`. Conversely, to filter on reserved attributes, you do not need to prepend `@` to the filter query.
For example, to filter out and drop `status:info` logs, your filter can be set as `NOT (status:info)`. To filter out and drop `system-status:info`, the filter must be `NOT (@system-status:info)`.
Filter query examples:
- `NOT (status:debug)`: This filters for only logs that do not have the status `DEBUG`.
- `status:ok service:flask-web-app`: This filters for all logs with the status `OK` from your `flask-web-app` service.
  - This query can also be written as: `status:ok AND service:flask-web-app`.
- `host:COMP-A9JNGYK OR host:COMP-J58KAS`: This filter query only matches logs from the labeled hosts.
- `@user.status:inactive`: This filters for logs with the status `inactive` nested under the `user` attribute.
Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog's Log Search Syntax.
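As an illustration of the syntax above, a toy matcher for flat `key:value` queries joined by `AND`/`OR` might look like the following. This is a deliberately minimal sketch, not Datadog's log search implementation: it ignores grouping, `NOT`, wildcards, and nested attributes, and only demonstrates implicit `AND`, explicit `AND`/`OR`, and case sensitivity.

```python
def matches(log, query):
    """Toy matcher for flat `key:value` queries joined by AND/OR.

    Case sensitive, like queries run in the Observability Pipelines Worker.
    """
    if " OR " in query:
        return any(matches(log, part) for part in query.split(" OR "))
    # Implicit AND: `status:ok service:flask-web-app` is equivalent to
    # `status:ok AND service:flask-web-app`.
    terms = [t for t in query.replace(" AND ", " ").split() if t]
    def term_ok(term):
        key, _, value = term.partition(":")
        return str(log.get(key)) == value
    return all(term_ok(t) for t in terms)

log = {"status": "ok", "service": "flask-web-app", "host": "COMP-A9JNGYK"}
```

For instance, `matches(log, "host:COMP-A9JNGYK OR host:COMP-J58KAS")` is true for the sample log above, while `matches(log, "status:OK")` is false because matching is case sensitive.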