Host Agent Log collection

Log collection requires the Datadog Agent v6.0+. Older versions of the Agent do not include the log collection interface. If you are not using the Agent already, follow the Agent installation instructions.

See Observability Pipelines if you want to send logs using another vendor’s collector or forwarder, or you want to preprocess your log data within your environment before shipping.

Activate log collection

Collecting logs is not enabled by default in the Datadog Agent. If you are running the Agent in a Kubernetes or Docker environment, see the dedicated Kubernetes Log Collection or Docker Log Collection documentation.

To enable log collection with an Agent running on your host, change logs_enabled: false to logs_enabled: true in the Agent’s main configuration file (datadog.yaml).

datadog.yaml

## @param logs_enabled - boolean - optional - default: false
## @env DD_LOGS_ENABLED - boolean - optional - default: false
## Enable Datadog Agent log collection by setting logs_enabled to true.
logs_enabled: false

## @param logs_config - custom object - optional
## Enter specific configurations for your Log collection.
## Uncomment this parameter and the one below to enable them.
## See https://docs.datadoghq.com/agent/logs/
logs_config:

  ## @param container_collect_all - boolean - optional - default: false
  ## @env DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL - boolean - optional - default: false
  ## Enable container log collection for all the containers (see ac_exclude to filter out containers)
  container_collect_all: false

  ## @param logs_dd_url - string - optional
  ## @env DD_LOGS_CONFIG_LOGS_DD_URL - string - optional
  ## Define the endpoint and port to hit when using a proxy for logs. Logs are forwarded over TCP,
  ## so the proxy must be able to handle TCP connections.
  logs_dd_url: <ENDPOINT>:<PORT>

  ## @param logs_no_ssl - boolean - optional - default: false
  ## @env DD_LOGS_CONFIG_LOGS_NO_SSL - boolean - optional - default: false
  ## Disable the SSL encryption. This parameter should only be used when logs are
  ## forwarded locally to a proxy. It is highly recommended to then handle the SSL encryption
  ## on the proxy side.
  logs_no_ssl: false

  ## @param processing_rules - list of custom objects - optional
  ## @env DD_LOGS_CONFIG_PROCESSING_RULES - list of custom objects - optional
  ## Global processing rules that are applied to all logs. The available rules are
  ## "exclude_at_match", "include_at_match" and "mask_sequences". More information in Datadog documentation:
  ## https://docs.datadoghq.com/agent/logs/advanced_log_collection/#global-processing-rules
  processing_rules:
    - type: <RULE_TYPE>
      name: <RULE_NAME>
      pattern: <RULE_PATTERN>
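  ##
  ## For example, a hypothetical global rule that masks credit-card-like sequences in all
  ## logs before they leave the host (the rule name, placeholder, and pattern below are
  ## illustrative only):
  ##
  ##   processing_rules:
  ##     - type: mask_sequences
  ##       name: mask_credit_cards
  ##       replace_placeholder: "[masked_credit_card]"
  ##       pattern: (?:\d[ -]?){13,16}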

  ## @param force_use_http - boolean - optional - default: false
  ## @env DD_LOGS_CONFIG_FORCE_USE_HTTP - boolean - optional - default: false
  ## By default, the Agent sends logs in HTTPS batches to port 443 if HTTPS connectivity can
  ## be established at Agent startup, and falls back to TCP otherwise. Set this parameter to `true` to
  ## always send logs with HTTPS (recommended).
  ## Warning: force_use_http selects the HTTP(S) protocol; it does not control encryption.
  ## To send logs over unencrypted HTTP (for example, to a local proxy), also set logs_no_ssl to `true`.
  force_use_http: true

  ## @param force_use_tcp - boolean - optional - default: false
  ## @env DD_LOGS_CONFIG_FORCE_USE_TCP - boolean - optional - default: false
  ## By default, logs are sent through HTTPS if possible. Set this parameter
  ## to `true` to always send logs via TCP. If `force_use_http` is set to `true`, this parameter
  ## is ignored.
  force_use_tcp: true

  ## @param use_compression - boolean - optional - default: true
  ## @env DD_LOGS_CONFIG_USE_COMPRESSION - boolean - optional - default: true
  ## This parameter is available when sending logs with HTTPS. If enabled, the Agent
  ## compresses logs before sending them.
  use_compression: true

  ## @param compression_level - integer - optional - default: 6
  ## @env DD_LOGS_CONFIG_COMPRESSION_LEVEL - integer - optional - default: 6
  ## The compression_level parameter accepts values from 0 (no compression)
  ## to 9 (maximum compression but higher resource usage). Only takes effect if
  ## `use_compression` is set to `true`.
  compression_level: 6

  ## @param batch_wait - integer - optional - default: 5
  ## @env DD_LOGS_CONFIG_BATCH_WAIT - integer - optional - default: 5
  ## The maximum time (in seconds) the Datadog Agent waits to fill each batch of logs before sending.
  batch_wait: 5

  ## @param open_files_limit - integer - optional - default: 500
  ## @env DD_LOGS_CONFIG_OPEN_FILES_LIMIT - integer - optional - default: 500
  ## The maximum number of files that can be tailed in parallel.
  ## Note: the default for macOS is 200. The default for
  ## all other systems is 500.
  open_files_limit: 500

  ## @param file_wildcard_selection_mode - string - optional - default: `by_name`
  ## @env DD_LOGS_CONFIG_FILE_WILDCARD_SELECTION_MODE - string - optional - default: `by_name`
  ## The strategy used to prioritize wildcard matches if they exceed the open file limit.
  ##
  ## Choices are `by_name` and `by_modification_time`.
  ##
  ## `by_name` means that each log source is considered and the matching files are ordered
  ## in reverse name order. While fewer than `logs_config.open_files_limit` files are
  ## being tailed, this process repeats, collecting from each configured source.
  ##
  ## `by_modification_time` takes all log sources and first adds any log sources that
  ## point to a specific file. Next, it finds matches for all wildcard sources.
  ## This resulting list is ordered by which files have been most recently modified
  ## and the top `logs_config.open_files_limit` most recently modified files are
  ## chosen for tailing.
  ##
  ## WARNING: `by_modification_time` is less performant than `by_name` and will trigger
  ## more disk I/O at the configured wildcard log paths.
  file_wildcard_selection_mode: by_name

  ## @param max_message_size_bytes - integer - optional - default: 256000
  ## @env DD_LOGS_CONFIG_MAX_MESSAGE_SIZE_BYTES - integer - optional - default: 256000
  ## The maximum size of a single log message, in bytes. Even if max_message_size_bytes is set
  ## above the documented API limit of 1MB, payloads larger than 1MB are dropped by the intake.
  ## See https://docs.datadoghq.com/api/latest/logs/
  max_message_size_bytes: 256000

  ## @param integrations_logs_files_max_size - integer - optional - default: 100
  ## @env DD_LOGS_CONFIG_INTEGRATIONS_LOGS_FILES_MAX_SIZE - integer - optional - default: 100
  ## The combined size in MB of all the integration logs files the Agent is allowed to write.
  integrations_logs_files_max_size: 100

Starting with Agent v6.19/v7.19, HTTPS is the default transport. For details on how to enforce HTTPS or TCP transport, refer to the Agent transport documentation.

To send logs with environment variables, configure the following:

  • DD_LOGS_ENABLED=true
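Environment variables are most often used with the containerized Agent. A minimal Docker Compose sketch, with an illustrative image tag (see the dedicated Docker Log Collection documentation for a complete setup):

services:
  datadog-agent:
    # Illustrative image tag; use the Agent image and version appropriate for your setup.
    image: gcr.io/datadoghq/agent:7
    environment:
      - DD_API_KEY=<DATADOG_API_KEY>
      - DD_LOGS_ENABLED=true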

After activating log collection, the Agent is ready to forward logs to Datadog. Next, configure where the Agent should collect logs from.

Custom log collection

Datadog Agent v6 can collect logs and forward them to Datadog from files, the network (TCP or UDP), journald, and Windows channels:

  1. In the conf.d/ directory at the root of your Agent’s configuration directory, create a new <CUSTOM_LOG_SOURCE>.d/ folder that is accessible by the Datadog user.
  2. Create a new conf.yaml file in this new folder.
  3. Add a custom log collection configuration group with the parameters below.
  4. Restart your Agent to apply the new configuration.
  5. Run the Agent’s status subcommand and look for <CUSTOM_LOG_SOURCE> under the Checks section.

If there are permission errors, see Permission issues tailing log files to troubleshoot.

Below are examples of custom log collection setup:

To gather logs from your <APP_NAME> application stored in <PATH_LOG_FILE>/<LOG_FILE_NAME>.log, create a <APP_NAME>.d/conf.yaml file at the root of your Agent’s configuration directory with the following content:

logs:
  - type: file
    path: "<PATH_LOG_FILE>/<LOG_FILE_NAME>.log"
    service: "<APP_NAME>"
    source: "<SOURCE>"

On Windows, use the path <DRIVE_LETTER>:\\<PATH_LOG_FILE>\\<LOG_FILE_NAME>.log, and verify that the user ddagentuser has read and write access to the log file.

Note: A log line needs to be terminated with a newline character, \n or \r\n, otherwise the Agent waits indefinitely and does not send the log line.
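If path contains a wildcard character, you can keep specific files out of collection with the exclude_paths parameter (available in Agent 6.18+). A minimal sketch, with illustrative paths:

logs:
  - type: file
    # Illustrative wildcard path; adjust to your application's log directory.
    path: /var/log/myapp/*.log
    exclude_paths:
      - /var/log/myapp/debug.log
    service: "<APP_NAME>"
    source: "<SOURCE>"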

To gather logs from your <APP_NAME> application that forwards its logs to TCP port 10518, create a <APP_NAME>.d/conf.yaml file at the root of your Agent’s configuration directory with the following content:

logs:
  - type: tcp
    port: 10518
    service: "<APP_NAME>"
    source: "<CUSTOM_SOURCE>"

If you are using Serilog, Serilog.Sinks.Network is an option for connecting with UDP.

In Agent version 7.31.0+, the TCP connection stays open indefinitely, even when idle.

Notes:

  • The Agent supports raw string, JSON, and Syslog formatted logs. If you are sending logs in batch, use line break characters to separate your logs.
  • A log line needs to be terminated with a newline character, \n or \r\n, otherwise the Agent waits indefinitely and does not send the log line.
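To listen on UDP instead of TCP, change the type; the rest of the configuration is unchanged. A sketch reusing the same port as the TCP example above:

logs:
  - type: udp
    port: 10518
    service: "<APP_NAME>"
    source: "<CUSTOM_SOURCE>"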

To gather logs from journald, create a journald.d/conf.yaml file at the root of your Agent’s configuration directory with the following content:

logs:
  - type: journald
    path: /var/log/journal/

Refer to the journald integration documentation for more details regarding the setup for containerized environments and units filtering.
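For example, to restrict collection to specific units, use include_units (a sketch; the unit names are illustrative):

logs:
  - type: journald
    path: /var/log/journal/
    include_units:
      # Illustrative unit names; list the units you want to collect from.
      - docker.service
      - sshd.service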

To send Windows events as logs to Datadog, add the channels to conf.d/win32_event_log.d/conf.yaml manually or use the Datadog Agent Manager.

To see your channel list, run the following command in PowerShell:

Get-WinEvent -ListLog *

To see the most active channels, run the following command in PowerShell:

Get-WinEvent -ListLog * | sort RecordCount -Descending

Then add the channels to your win32_event_log.d/conf.yaml configuration file:

logs:
  - type: windows_event
    channel_path: "<CHANNEL_1>"
    source: "<CHANNEL_1>"
    service: "<SERVICE>"
    sourcecategory: windowsevent

  - type: windows_event
    channel_path: "<CHANNEL_2>"
    source: "<CHANNEL_2>"
    service: "<SERVICE>"
    sourcecategory: windowsevent

Replace the <CHANNEL_X> placeholders with the names of the Windows channels you want to collect events from. Set the corresponding source parameter to the same channel name to benefit from the integration’s automatic processing pipeline setup.
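For example, a concrete entry for the built-in System channel (the service value is illustrative):

logs:
  - type: windows_event
    channel_path: System
    source: System
    # Illustrative service name.
    service: myservice
    sourcecategory: windowsevent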

Finally, restart the Agent.

List of all available parameters for log collection:

  • type (required): The type of log input source. Valid values are: tcp, udp, file, windows_event, docker, or journald.
  • port (required): If type is tcp or udp, set the port for listening to logs.
  • path (required): If type is file or journald, set the file path for gathering logs.
  • channel_path (required): If type is windows_event, list the Windows event channels for collecting logs.
  • service (required): The name of the service owning the log. If you instrumented your service with Datadog APM, this must be the same service name. Check the unified service tagging instructions when configuring service across multiple data types.
  • source (required): The attribute that defines which integration is sending the logs. If the logs do not come from an existing integration, this field may include a custom source name. However, it is recommended that you match this value to the namespace of any related custom metrics you are collecting, for example: myapp from myapp.request.count.
  • include_units (optional): If type is journald, the list of specific journald units to include.
  • exclude_paths (optional): If type is file and path contains a wildcard character, the list of matching files to exclude from log collection. Available in Agent version 6.18+.
  • exclude_units (optional): If type is journald, the list of specific journald units to exclude.
  • sourcecategory (optional): The attribute used to define the category a source attribute belongs to, for example: source: postgres, sourcecategory: database or source: apache, sourcecategory: http_web_access.
  • start_position (optional): See Start position for more information.
  • encoding (optional): If type is file, set the encoding for the Agent to read the file. Set it to utf-16-le for UTF-16 little-endian, utf-16-be for UTF-16 big-endian, or shift-jis for Shift JIS. Any other value makes the Agent read the file as UTF-8. utf-16-le and utf-16-be were added in Agent v6.23/v7.23, and shift-jis in Agent v6.34/v7.34.
  • tags (optional): A list of tags added to each log collected (learn more about tagging).
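For example, a file source that combines several of the optional parameters above (a sketch; the path, names, and tags are illustrative):

logs:
  - type: file
    # Illustrative path, names, and tags; replace with your own values.
    path: /var/log/myapp/app.log
    service: myapp
    source: myapp
    sourcecategory: custom
    encoding: utf-16-le
    tags:
      - env:dev
      - team:backend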

Start position

The start_position parameter is supported by file and journald tailer types. The start_position is always beginning when tailing a container.

Support:

  • File: Agent 6.19+/7.19+
  • Journald: Agent 6.38+/7.38+

If type is file:

  • Set the position for the Agent to start reading the file.
  • Valid values are beginning, end, forceBeginning, and forceEnd (default: end).
  • The beginning position does not support paths with wildcards.

If type is journald:

  • Set the position for the Agent to start reading the journal.
  • Valid values are beginning, end, forceBeginning, and forceEnd (default: end).

Precedence

For both file and journald tailer types, if an end or beginning position is specified, but an offset is stored, the offset takes precedence. Using forceBeginning or forceEnd forces the Agent to use the specified value even if there is a stored offset.
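For example, to force the Agent to re-read a file from the beginning even when an offset is stored, set start_position on the file source (a sketch reusing the earlier file example):

logs:
  - type: file
    path: "<PATH_LOG_FILE>/<LOG_FILE_NAME>.log"
    service: "<APP_NAME>"
    source: "<SOURCE>"
    # Re-read from the beginning even if a stored offset exists.
    start_position: forceBeginning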

Further Reading
