
This processor splits nested arrays into distinct events so that you can query, filter, alert, and visualize the data within an array. The arrays must already be parsed. For example, the processor can process [item_1, item_2], but cannot process the string "[item_1, item_2]". The items in the array can be JSON objects, strings, integers, floats, or Booleans. All unmodified fields are added to each child event. For example, if you send the following event to the Observability Pipelines Worker:

{
    "host": "my-host",
    "env": "prod",
    "batched_items": [item_1, item_2]
}

Use the Split Array processor to send each item in batched_items as a separate event:

{
    "host": "my-host",
    "env": "prod",
    "batched_items": item_1
}
{
    "host": "my-host",
    "env": "prod",
    "batched_items": item_2
}

See the Split array example below for a more detailed walkthrough.
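The splitting behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not the Worker's actual implementation; the `split_array` helper is hypothetical.

```python
import copy

def split_array(event, field):
    """Yield one child event per item in event[field].

    Each child is a copy of the parent event with the array field
    replaced by a single item; all other fields are carried over
    unmodified. (Hypothetical helper for illustration only.)
    """
    for item in event[field]:
        child = copy.deepcopy(event)
        child[field] = item
        yield child

event = {
    "host": "my-host",
    "env": "prod",
    "batched_items": ["item_1", "item_2"],
}

# Two child events, one per item in batched_items.
children = list(split_array(event, "batched_items"))
```

Each element of `children` keeps `host` and `env` intact while `batched_items` holds a single item.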

To set up this processor:

Click Manage arrays to split to add an array to split or edit an existing array to split. This opens a side panel.

  • If you have not created any arrays yet, enter the array parameters as described in the Add a new array section below.
  • If you have already created arrays, use the search bar to find a specific array, and then click the array’s row in the table to edit or delete it. Click Add Array to Split to add a new array.
Add a new array
  1. Define a filter query. Only logs that match the specified filter query are processed. All logs, regardless of whether they match the filter query, are sent to the next step in the pipeline.
  2. Enter the path to the array field. Use the path notation <OUTER_FIELD>.<INNER_FIELD> to match subfields. See the Path notation example below.
  3. Click Save.
Split array example

This is an example event:

{
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": true,
        "myfield": {
            "timestamp": 14500000,
            "firstarray": ["one", 2]
        }
    },
    "secondarray": [
        {
            "some": "json",
            "Object": "works"
        },
        44
    ]
}

If the processor splits the arrays "message.myfield.firstarray" and "secondarray", it outputs child events that are identical to the parent event, except that "message.myfield.firstarray" and "secondarray" each contain a single item from their respective original arrays. Each child event is a unique combination of items from the two arrays, so four child events (2 items * 2 items = 4 combinations) are created in this example.

{
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": true,
        "myfield": {"timestamp": 14500000, "firstarray": "one"}
    },
    "secondarray": {
        "some": "json",
        "Object": "works"
    }
}
{
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": true,
        "myfield": {"timestamp": 14500000, "firstarray": "one"}
    },
    "secondarray": 44
}
{
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": true,
        "myfield": {"timestamp": 14500000, "firstarray": 2}
    },
    "secondarray": {
        "some": "json",
        "Object": "works"
    }
}
{
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": true,
        "myfield": {"timestamp": 14500000, "firstarray": 2}
    },
    "secondarray": 44
}
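The Cartesian-product behavior shown above can be sketched as follows. This is a minimal illustration under the assumption that splitting multiple arrays combines one item from each; the helper functions (`get_path`, `set_path`, `split_arrays`) are hypothetical names, not the Worker's API.

```python
import copy
import itertools

def get_path(event, path):
    """Follow a dotted path such as 'message.myfield.firstarray'."""
    value = event
    for key in path.split("."):
        value = value[key]
    return value

def set_path(event, path, value):
    """Set the field at a dotted path to a new value."""
    keys = path.split(".")
    target = event
    for key in keys[:-1]:
        target = target[key]
    target[keys[-1]] = value

def split_arrays(event, paths):
    """Yield one child event per combination of items across the arrays."""
    arrays = [get_path(event, path) for path in paths]
    for combo in itertools.product(*arrays):
        child = copy.deepcopy(event)
        for path, item in zip(paths, combo):
            set_path(child, path, item)
        yield child

parent = {
    "ddtags": ["tag1", "tag2"],
    "host": "my-host",
    "env": "prod",
    "message": {
        "isMessage": True,
        "myfield": {"timestamp": 14500000, "firstarray": ["one", 2]},
    },
    "secondarray": [{"some": "json", "Object": "works"}, 44],
}

# 2 items x 2 items = 4 child events
children = list(split_arrays(parent, ["message.myfield.firstarray", "secondarray"]))
```

`itertools.product` enumerates every pairing of items, which matches the four child events listed above.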
Path notation example

For the following message structure, use outer_key.a.double_inner_key to refer to the key with the value double_inner_value.

{
    "outer_key": {
        "inner_key": "inner_value",
        "a": {
            "double_inner_key": "double_inner_value",
            "b": "b value"
        },
        "c": "c value"
    },
    "d": "d value"
}
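Dotted-path resolution walks the nested objects one key at a time. A minimal sketch (the `lookup` helper is illustrative, not part of the product):

```python
def lookup(event, path):
    """Resolve a dotted path like 'outer_key.a.double_inner_key'
    by descending one key per path segment."""
    value = event
    for key in path.split("."):
        value = value[key]
    return value

msg = {
    "outer_key": {
        "inner_key": "inner_value",
        "a": {"double_inner_key": "double_inner_value", "b": "b value"},
        "c": "c value",
    },
    "d": "d value",
}

result = lookup(msg, "outer_key.a.double_inner_key")  # "double_inner_value"
```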

Filter query syntax

Each processor has a corresponding filter query in its fields. Processors only process logs that match their filter query. In all processors except the Filter processor, logs that do not match the query are sent to the next step of the pipeline. For the Filter processor, logs that do not match the query are dropped.

For any attribute, tag, or key:value pair that is not a reserved attribute, the query must start with @. Conversely, to filter on reserved attributes, you do not need to prepend @ to the filter query.

For example, to filter out logs with status:info, define your filter as NOT (status:info). To filter out system-status:info, the filter must be NOT (@system-status:info).

Filter query examples:

  • NOT (status:debug): Matches only logs that do not have the status DEBUG.
  • status:ok service:flask-web-app: Matches all logs with the status OK from your flask-web-app service.
    • This query can also be written as: status:ok AND service:flask-web-app.
  • host:COMP-A9JNGYK OR host:COMP-J58KAS: Matches only logs from the labeled hosts.
  • @user.status:inactive: Matches logs with the status inactive nested under the user attribute.

Queries run in the Observability Pipelines Worker are case sensitive. Learn more about writing filter queries in Datadog's Log Search Syntax.
