Amazon Security Lake Destination

This product is not supported for your selected Datadog site.

Use the Amazon Security Lake destination in Observability Pipelines to send logs to Amazon Security Lake.

Prerequisites

Before you set up the Amazon Security Lake destination, you must do the following:

  1. Follow the Getting Started with Amazon Security Lake guide to set up Amazon Security Lake, and make sure to:
    • Enable Amazon Security Lake for the AWS account.
    • Select the AWS regions where S3 buckets will be created for OCSF data.
  2. Follow Collecting data from custom sources in Security Lake to create a custom source in Amazon Security Lake.
    • When you configure a custom log source in Security Lake in the AWS console:
      • Enter a source name.
      • Select the OCSF event class for the log source and type.
      • Enter the account details for the AWS account that will write logs to Amazon Security Lake:
        • AWS account ID
        • External ID
    • Select Create and use a new service for service access.
    • Take note of the name of the bucket that is created because you need it when you set up the Amazon Security Lake destination later on.
      • To find the bucket name, navigate to Custom Sources. The bucket name is in the location for your custom source. For example, if the location is s3://aws-security-data-lake-us-east-2-qjh9pr8hy/ext/op-api-activity-test, the bucket name is aws-security-data-lake-us-east-2-qjh9pr8hy.
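Put another way, the bucket name is the host portion of the `s3://` location URI, everything between `s3://` and the first `/`. A minimal Python sketch of that extraction, using the example location above:

```python
from urllib.parse import urlparse

def bucket_name(s3_location: str) -> str:
    """Return the bucket name portion of an s3:// location URI."""
    parsed = urlparse(s3_location)
    assert parsed.scheme == "s3", "expected an s3:// URI"
    # For s3:// URIs, the netloc is the bucket name.
    return parsed.netloc

location = "s3://aws-security-data-lake-us-east-2-qjh9pr8hy/ext/op-api-activity-test"
print(bucket_name(location))  # aws-security-data-lake-us-east-2-qjh9pr8hy
```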

Setup

Set up the Amazon Security Lake destination and its environment variables when you set up a pipeline. The information below is configured in the pipeline UI.

Set up the destination

  1. Enter your S3 bucket name.
  2. Enter the AWS region.
  3. Enter the custom source name.
  4. Optionally, select an AWS authentication option.
    1. Enter the ARN of the IAM role you want to assume.
    2. Optionally, enter the assumed role session name and external ID.
  5. Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required.
    Note: All file paths are made relative to the configuration data directory, which is /var/lib/observability-pipelines-worker/config/ by default. See Advanced Configurations for more information. The file must be owned by the observability-pipelines-worker group and observability-pipelines-worker user, or at least readable by the group or user.
    • Server Certificate Path: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File in DER or PEM (X.509).
    • CA Certificate Path: The path to the certificate file that is your Certificate Authority (CA) Root File in DER or PEM (X.509).
    • Private Key Path: The path to the .key private key file that belongs to your Server Certificate Path in DER or PEM (PKCS#8) format.
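To illustrate the note above, a relative certificate path is resolved against the configuration data directory. This is a sketch of that path resolution only, assuming the default directory and that absolute paths are left unchanged:

```python
from pathlib import PurePosixPath

# Default configuration data directory (see the note above); it can be
# changed via Advanced Configurations.
CONFIG_DATA_DIR = PurePosixPath("/var/lib/observability-pipelines-worker/config/")

def resolve_config_path(path: str) -> str:
    """Resolve a certificate/key path relative to the config data directory.

    Assumption for illustration: absolute paths are used as-is.
    """
    p = PurePosixPath(path)
    return str(p if p.is_absolute() else CONFIG_DATA_DIR / p)

print(resolve_config_path("certs/server.pem"))
# /var/lib/observability-pipelines-worker/config/certs/server.pem
```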

Notes:

  • When you add the Amazon Security Lake destination, the OCSF processor is automatically added so that you can convert your logs to Parquet before they are sent to Amazon Security Lake. See the Remap to OCSF documentation for setup instructions.
  • Only logs formatted by the OCSF processor are converted to Parquet.

Set the environment variables

There are no environment variables to configure for the Amazon Security Lake destination.

How the destination works

AWS authentication

The Observability Pipelines Worker uses the standard AWS credential provider chain for authentication. See AWS SDKs and Tools standardized credential providers for more information.
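For example, one of the providers in the standard chain is the shared credentials file (checked alongside environment variables, web identity tokens, and instance metadata). A minimal entry, with placeholder values, looks like:

```ini
# ~/.aws/credentials -- one of the sources checked by the default chain
[default]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```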

Permissions

For Observability Pipelines to send logs to Amazon Security Lake, the following policy permissions are required:

  • s3:ListBucket
  • s3:PutObject
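A minimal IAM policy granting those permissions might look like the following sketch. The bucket name reuses the example from the prerequisites; replace it with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::aws-security-data-lake-us-east-2-qjh9pr8hy"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::aws-security-data-lake-us-east-2-qjh9pr8hy/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket itself, while `s3:PutObject` applies to objects within it, hence the two separate statements.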

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

  Max Events    Max Bytes      Timeout (seconds)
  None          256,000,000    300
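The flush rule in the table can be sketched as follows. This is an illustration of the thresholds only, not the Worker's actual implementation:

```python
import time

MAX_BYTES = 256_000_000   # flush once the batch reaches this size
TIMEOUT_S = 300           # flush after 300 seconds regardless of size
# Max Events is "None": there is no event-count threshold.

class Batcher:
    """Illustrative flush-on-threshold batching (hypothetical, for clarity)."""

    def __init__(self):
        self.events, self.bytes, self.started = [], 0, time.monotonic()

    def add(self, event: bytes):
        """Add an event; return the flushed batch if a threshold was hit, else None."""
        self.events.append(event)
        self.bytes += len(event)
        if self.bytes >= MAX_BYTES or time.monotonic() - self.started >= TIMEOUT_S:
            return self.flush()
        return None

    def flush(self):
        batch, self.events, self.bytes = self.events, [], 0
        self.started = time.monotonic()
        return batch
```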