Observability Pipelines

Observability Pipelines allows you to collect and process logs within your own infrastructure, and then route them to downstream integrations.

Note: This endpoint is in Preview.

GET https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines

The host depends on your Datadog site: api.ap1.datadoghq.com, api.datadoghq.eu, api.ddog-gov.com, api.datadoghq.com, api.us3.datadoghq.com, or api.us5.datadoghq.com.

Overview

Retrieve a list of pipelines. This endpoint requires the observability_pipelines_read permission.

Arguments

Query Strings

Name

Type

Description

page[size]

integer

Size for a given page. The maximum allowed value is 100.

page[number]

integer

Specific page number to return.
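
For example, a paginated request might look like the following sketch; the page values are illustrative, and the host should be the API endpoint for your Datadog site.

# Example: list pipelines, 50 per page, second page (-g disables curl URL globbing so the brackets pass through)
curl -g -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines?page[size]=50&page[number]=2" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"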

Response

OK

Represents the response payload containing a list of pipelines and associated metadata.

Field

Type

Description

data [required]

[object]

The list of pipeline objects returned in the response.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
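
For illustration, an amazon_s3 entry in the destinations array might look like the following sketch; the bucket, region, prefix, and input IDs are placeholders.

{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": ["filter-processor"],
  "bucket": "my-log-archive-bucket",
  "region": "us-east-1",
  "storage_class": "STANDARD",
  "key_prefix": "observability-pipelines/"
}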

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

The AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that matched the quota filter and were sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
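
To make the structure concrete, a quota entry in the processors array might look like the following sketch; the quota name, filter queries, limits, and override values are placeholders.

{
  "id": "quota-processor",
  "type": "quota",
  "include": "*",
  "inputs": ["datadog-agent-source"],
  "name": "daily-ingest-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overrides": [
    {
      "fields": [ { "name": "service", "value": "payments" } ],
      "limit": { "enforce": "bytes", "limit": 50000000000 }
    }
  ]
}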

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
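
For example, a generate_datadog_metrics entry that counts matching logs per service might be sketched as follows; the metric name, queries, and IDs are placeholders.

{
  "id": "error-metrics-processor",
  "type": "generate_datadog_metrics",
  "include": "status:error",
  "inputs": ["filter-processor"],
  "metrics": [
    {
      "name": "logs.errors.count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}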

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
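
For example, a parse_grok entry with one parsing rule and one helper rule might be sketched as follows; the Grok patterns, rule names, and field names are placeholders.

{
  "id": "grok-processor",
  "type": "parse_grok",
  "include": "source:nginx",
  "inputs": ["datadog-agent-source"],
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        { "name": "parse_access", "rule": "%{ip:client_ip} %{_method} %{notSpace:url_path}" }
      ],
      "support_rules": [
        { "name": "_method", "rule": "%{word:http_method}" }
      ]
    }
  ]
}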

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
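
As an illustrative sketch, a sensitive_data_scanner entry with one custom-regex rule that partially redacts matches in a single field might look like the following; the regex, field names, tags, and redaction values are placeholders.

{
  "id": "sds-processor",
  "type": "sensitive_data_scanner",
  "include": "*",
  "inputs": ["filter-processor"],
  "rules": [
    {
      "name": "mask-credit-cards",
      "tags": ["sensitive:credit-card"],
      "pattern": {
        "type": "custom",
        "options": { "rule": "\\d{13,16}" }
      },
      "scope": {
        "target": "include",
        "options": { "fields": ["payment.card_number"] }
      },
      "on_match": {
        "action": "partial_redact",
        "options": { "characters": 12, "direction": "first" }
      },
      "keyword_options": {
        "keywords": ["card", "credit"],
        "proximity": 10
      }
    }
  ]
}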

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper
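
For example, an ocsf_mapper entry that applies a predefined library mapping might be sketched as follows; the queries and IDs are placeholders.

{
  "id": "ocsf-mapper-processor",
  "type": "ocsf_mapper",
  "include": "source:cloudtrail",
  "inputs": ["filter-processor"],
  "mappings": [
    {
      "include": "source:cloudtrail",
      "mapping": "CloudTrail Account Change"
    }
  ]
}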

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
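
A minimal add_env_vars entry might be sketched as follows; the environment variable name and target field are placeholders.

{
  "id": "add-env-vars-processor",
  "type": "add_env_vars",
  "include": "*",
  "inputs": ["filter-processor"],
  "variables": [
    { "field": "deployment.region", "name": "DD_REGION" }
  ]
}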

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as defined in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
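
As a sketch, a CSV-backed enrichment_table entry might look like the following; the file path, column names, and log fields are placeholders.

{
  "id": "enrichment-table-processor",
  "type": "enrichment_table",
  "include": "*",
  "inputs": ["filter-processor"],
  "target": "enriched.host_metadata",
  "file": {
    "path": "/etc/pipeline/hosts.csv",
    "encoding": { "type": "csv", "delimiter": ",", "includes_headers": true },
    "key": [
      { "column": "hostname", "comparison": "equals", "field": "host" }
    ],
    "schema": [
      { "column": "hostname", "type": "string" },
      { "column": "team", "type": "string" }
    ]
  }
}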

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
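
For example, a reduce entry that merges events by host and combines selected fields might be sketched as follows; the field paths and query are placeholders.

{
  "id": "reduce-processor",
  "type": "reduce",
  "include": "*",
  "inputs": ["filter-processor"],
  "group_by": ["host"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "error.count", "strategy": "sum" }
  ]
}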

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events before the threshold has been reached.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
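
A throttle entry that allows at most 1000 events per service in each 60-second window might be sketched as follows; the threshold, window, and query are placeholders.

{
  "id": "throttle-processor",
  "type": "throttle",
  "include": "*",
  "inputs": ["filter-processor"],
  "group_by": ["service"],
  "threshold": 1000,
  "window": 60.0
}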

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

meta

object

Metadata about the response.

totalCount

int64

The total number of pipelines.

{
  "data": [
    {
      "attributes": {
        "config": {
          "destinations": [
            {
              "id": "datadog-logs-destination",
              "inputs": [
                "filter-processor"
              ],
              "type": "datadog_logs"
            }
          ],
          "processors": [
            {
              "id": "filter-processor",
              "include": "service:my-service",
              "inputs": [
                "datadog-agent-source"
              ],
              "type": "filter"
            }
          ],
          "sources": [
            {
              "group_id": "consumer-group-0",
              "id": "kafka-source",
              "librdkafka_options": [
                {
                  "name": "fetch.message.max.bytes",
                  "value": "1048576"
                }
              ],
              "sasl": {
                "mechanism": "string"
              },
              "tls": {
                "ca_file": "string",
                "crt_file": "/path/to/cert.crt",
                "key_file": "string"
              },
              "topics": [
                "topic1",
                "topic2"
              ],
              "type": "kafka"
            }
          ]
        },
        "name": "Main Observability Pipeline"
      },
      "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
      "type": "pipelines"
    }
  ],
  "meta": {
    "totalCount": 42
  }
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code Example

# Curl command
curl -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"

Replace the host with the API endpoint for your Datadog site (for example, api.ap1.datadoghq.com, api.datadoghq.eu, api.ddog-gov.com, api.us3.datadoghq.com, or api.us5.datadoghq.com).

Note: This endpoint is in Preview.

POST https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines

The host depends on your Datadog site: api.ap1.datadoghq.com, api.datadoghq.eu, api.ddog-gov.com, api.datadoghq.com, api.us3.datadoghq.com, or api.us5.datadoghq.com.

Overview

Create a new pipeline. This endpoint requires the observability_pipelines_deploy permission.
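
A minimal creation request could be sketched as follows, mirroring the pipeline schema shown above; the host, component IDs, filter query, and pipeline name are placeholders.

# Example: create a pipeline with a Datadog Agent source, a filter processor, and a Datadog Logs destination
curl -X POST "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d '{
    "data": {
      "type": "pipelines",
      "attributes": {
        "name": "Main Observability Pipeline",
        "config": {
          "sources": [
            { "id": "datadog-agent-source", "type": "datadog_agent" }
          ],
          "processors": [
            { "id": "filter-processor", "type": "filter", "include": "service:my-service", "inputs": ["datadog-agent-source"] }
          ],
          "destinations": [
            { "id": "datadog-logs-destination", "type": "datadog_logs", "inputs": ["filter-processor"] }
          ]
        }
      }
    }
  }'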

Request

Body Data (required)

Field

Type

Description

data [required]

object

Contains the pipeline configuration.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
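
The following is a minimal, illustrative amazon_s3 destination entry for the destinations array. The component IDs, bucket name, region, key prefix, and role ARN are placeholders, not values from this reference.

{
  "id": "s3-archive-destination",
  "type": "amazon_s3",
  "inputs": [
    "filter-processor"
  ],
  "bucket": "example-log-archive-bucket",
  "region": "us-east-1",
  "key_prefix": "datadog-archive/",
  "storage_class": "STANDARD",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/example-archive-role"
  }
}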

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Chronicle.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
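
A minimal, illustrative amazon_opensearch destination entry using the aws authentication strategy; the component IDs, index name, region, and role ARN are placeholders.

{
  "id": "amazon-opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": [
    "filter-processor"
  ],
  "bulk_index": "observability-logs",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/example-opensearch-role"
  }
}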

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
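
A minimal, illustrative parse_json processor entry; the component IDs, filter query, and field name are placeholders.

{
  "id": "parse-json-processor",
  "type": "parse_json",
  "include": "service:my-service",
  "inputs": [
    "datadog-agent-source"
  ],
  "field": "message"
}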

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
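
A minimal, illustrative quota processor entry that enforces a byte limit tracked per service, with one override; the component IDs, filter queries, quota name, and limit values are placeholders.

{
  "id": "quota-processor",
  "type": "quota",
  "include": "service:my-service",
  "inputs": [
    "datadog-agent-source"
  ],
  "name": "daily-ingest-quota",
  "drop_events": true,
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": [
    "service"
  ],
  "overrides": [
    {
      "fields": [
        {
          "name": "service",
          "value": "payments"
        }
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 20000000000
      }
    }
  ]
}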

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
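
A minimal, illustrative add_fields processor entry; the component IDs, filter query, and field values are placeholders.

{
  "id": "add-fields-processor",
  "type": "add_fields",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    {
      "name": "env",
      "value": "production"
    }
  ]
}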

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
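
A minimal, illustrative rename_fields processor entry; the component IDs, filter query, and field names are placeholders.

{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    {
      "source": "hostname",
      "destination": "host",
      "preserve_source": false
    }
  ]
}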

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions, and can optionally be grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
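
A minimal, illustrative generate_datadog_metrics processor entry that counts error logs grouped by service using the increment_by_one strategy; the component IDs, filter queries, and metric name are placeholders.

{
  "id": "generate-metrics-processor",
  "type": "generate_datadog_metrics",
  "include": "*",
  "inputs": [
    "filter-processor"
  ],
  "metrics": [
    {
      "name": "logs.error.count",
      "include": "status:error",
      "metric_type": "count",
      "group_by": [
        "service"
      ],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ]
}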

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
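
A minimal, illustrative parse_grok processor entry with one parsing rule and one helper rule; the component IDs, filter query, and Grok patterns are placeholders and are not taken from this reference.

{
  "id": "parse-grok-processor",
  "type": "parse_grok",
  "include": "source:my-app",
  "inputs": [
    "datadog-agent-source"
  ],
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_login",
          "rule": "%{user} logged in from %{ip:network.client.ip}"
        }
      ],
      "support_rules": [
        {
          "name": "user",
          "rule": "%{word:user.name}"
        }
      ]
    }
  ]
}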

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log field names the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log field names the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
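
A minimal, illustrative sensitive_data_scanner processor entry with a single custom-regex rule that partially redacts matches in the message field; the component IDs, regex, keywords, tag, and character count are placeholders.

{
  "id": "sensitive-data-scanner-processor",
  "type": "sensitive_data_scanner",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rules": [
    {
      "name": "redact-card-numbers",
      "tags": [
        "sensitive_data:card_number"
      ],
      "keyword_options": {
        "keywords": [
          "card",
          "credit"
        ],
        "proximity": 10
      },
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "\\b\\d{13,16}\\b"
        }
      },
      "on_match": {
        "action": "partial_redact",
        "options": {
          "characters": 12,
          "direction": "first"
        }
      },
      "scope": {
        "target": "include",
        "options": {
          "fields": [
            "message"
          ]
        }
      }
    }
  ]
}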

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe
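
A minimal, illustrative dedupe processor entry; the component IDs, filter query, and field paths are placeholders.

{
  "id": "dedupe-processor",
  "type": "dedupe",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "mode": "match",
  "fields": [
    "message",
    "error.stack"
  ]
}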

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The column delimiter used in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The CSV column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the CSV column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of a column in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
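
A minimal, illustrative enrichment_table processor entry using a CSV file; the component IDs, file path, column names, and target path are placeholders.

{
  "id": "enrichment-table-processor",
  "type": "enrichment_table",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "target": "service_owner",
  "file": {
    "path": "/etc/enrichment/service-owners.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "service",
        "comparison": "equals",
        "field": "service"
      }
    ],
    "schema": [
      {
        "column": "service",
        "type": "string"
      },
      {
        "column": "owner",
        "type": "string"
      }
    ]
  }
}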

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
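
A minimal, illustrative reduce processor entry; the component IDs, filter query, grouping fields, and merge paths are placeholders.

{
  "id": "reduce-processor",
  "type": "reduce",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "group_by": [
    "host",
    "service"
  ],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    },
    {
      "path": "error_count",
      "strategy": "sum"
    }
  ]
}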

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
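
A minimal, illustrative throttle processor entry that allows up to 1000 events per 60-second window, grouped by service; all values are placeholders.

{
  "id": "throttle-processor",
  "type": "throttle",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "threshold": 1000,
  "window": 60,
  "group_by": [
    "service"
  ]
}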

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
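
A minimal, illustrative kafka source entry with SASL and TLS configured and one advanced client option; the group ID, topic names, and file paths are placeholders (fetch.message.max.bytes is a standard librdkafka setting, shown here only as an example).

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": [
    "app-logs"
  ],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "tls": {
    "ca_file": "/path/to/ca.crt",
    "crt_file": "/path/to/client.crt",
    "key_file": "/path/to/client.key"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ]
}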

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Pub/Sub.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}

Response

OK

Top-level schema representing a pipeline.

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud services.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

The AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
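
As a sketch of how the aws authentication strategy fits together, an amazon_opensearch destination object might look like the following. The component ID, input ID, index name, and role ARN are placeholders.

{
  "id": "amazon-opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["filter-processor"],
  "bulk_index": "logs-write",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/opensearch-writer",
    "session_name": "pipeline-session"
  }
}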

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The quota processor measures logging traffic for logs that match a specified filter. When the configured daily quota is reached, the processor can drop events or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and arrive after the quota has been reached are dropped; only logs that do not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
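
As a sketch, a quota processor that enforces a 10 GB daily limit per service and drops matching logs once the limit is reached might look like the following. The component IDs, query, and limit values are placeholders.

{
  "id": "quota-processor",
  "type": "quota",
  "include": "service:my-service",
  "inputs": ["filter-processor"],
  "name": "daily-intake-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"]
}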

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
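
As a sketch, a generate_datadog_metrics processor that counts error logs per service using the increment_by_one strategy might look like the following. The component IDs, metric name, and queries are placeholders.

{
  "id": "generate-metrics-processor",
  "type": "generate_datadog_metrics",
  "include": "*",
  "inputs": ["filter-processor"],
  "metrics": [
    {
      "name": "logs.error.count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": ["service"],
      "value": { "strategy": "increment_by_one" }
    }
  ]
}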

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
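
As a sketch, a sample processor that keeps roughly 1 in 10 matching debug logs might look like the following; percentage could be used instead of rate. The component IDs and query are placeholders.

{
  "id": "sample-processor",
  "type": "sample",
  "include": "status:debug",
  "inputs": ["filter-processor"],
  "rate": 10
}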

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
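
As a sketch, a parse_grok processor with a single match rule applied to the message field might look like the following. The component IDs, rule name, and Grok pattern are placeholders; support_rules is left empty here.

{
  "id": "parse-grok-processor",
  "type": "parse_grok",
  "include": "source:nginx",
  "inputs": ["filter-processor"],
  "rules": [
    {
      "source": "message",
      "match_rules": [
        { "name": "extract_status", "rule": "%{notSpace:client_ip} %{integer:http.status_code}" }
      ],
      "support_rules": []
    }
  ]
}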

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string used to replace the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to preserve when partially redacting the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of field names to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of field names to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
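
As a sketch, a sensitive_data_scanner processor with one custom-regex rule that fully redacts matches found in the message field might look like the following. The component IDs, rule name, regex, tag, and replacement string are placeholders.

{
  "id": "sds-processor",
  "type": "sensitive_data_scanner",
  "include": "*",
  "inputs": ["filter-processor"],
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": ["sensitive:api-key"],
      "pattern": { "type": "custom", "options": { "rule": "api_key=[A-Za-z0-9]{32}" } },
      "scope": { "target": "include", "options": { "fields": ["message"] } },
      "on_match": { "action": "redact", "options": { "replace": "[REDACTED]" } }
    }
  ]
}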

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
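
As a sketch, an add_env_vars processor that copies the value of an environment variable into a log field might look like the following. The component IDs, field path, and variable name are placeholders.

{
  "id": "add-env-vars-processor",
  "type": "add_env_vars",
  "include": "*",
  "inputs": ["filter-processor"],
  "variables": [
    { "field": "deployment.region", "name": "AWS_REGION" }
  ]
}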

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the first row of the CSV file contains column headers.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the CSV column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as it appears in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
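
As a sketch, an enrichment_table processor that uses a GeoIP database to enrich logs from a client IP field might look like the following. The component IDs, field paths, and database path are placeholders.

{
  "id": "enrichment-table-processor",
  "type": "enrichment_table",
  "include": "*",
  "inputs": ["filter-processor"],
  "target": "geo",
  "geoip": {
    "key_field": "network.client.ip",
    "locale": "en",
    "path": "/etc/geoip/GeoLite2-City.mmdb"
  }
}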

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
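
As a sketch, a reduce processor that merges logs sharing the same trace ID, concatenating messages and summing a duration field, might look like the following. The component IDs and field paths are placeholders.

{
  "id": "reduce-processor",
  "type": "reduce",
  "include": "*",
  "inputs": ["filter-processor"],
  "group_by": ["trace_id"],
  "merge_strategies": [
    { "path": "message", "strategy": "concat_newline" },
    { "path": "duration_ms", "strategy": "sum" }
  ]
}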

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
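
As a sketch, a throttle processor that allows at most 1000 events per service in each 60-second window might look like the following. The component IDs and values are placeholders.

{
  "id": "throttle-processor",
  "type": "throttle",
  "include": "*",
  "inputs": ["filter-processor"],
  "threshold": 1000,
  "window": 60.0,
  "group_by": ["service"]
}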

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server
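
As a sketch, an http_server source that accepts JSON-encoded logs over HTTP POST with plain authentication and TLS enabled might look like the following. The component ID and certificate paths are placeholders.

{
  "id": "http-server-source",
  "type": "http_server",
  "auth_strategy": "plain",
  "decoding": "json",
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key"
  }
}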

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud services.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.


Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.


Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.


Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.


Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Curl command
curl -X POST "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          { "id": "datadog-logs-destination", "inputs": ["filter-processor"], "type": "datadog_logs" }
        ],
        "processors": [
          { "id": "filter-processor", "include": "service:my-service", "inputs": ["datadog-agent-source"], "type": "filter" }
        ],
        "sources": [
          { "id": "datadog-agent-source", "type": "datadog_agent" }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
EOF
// Create a new pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	body := datadogV2.ObservabilityPipelineCreateRequest{
		Data: datadogV2.ObservabilityPipelineCreateRequestData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						datadogV2.ObservabilityPipelineConfigDestinationItem{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "datadog-logs-destination",
								Inputs: []string{
									"filter-processor",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							}},
					},
					Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
						datadogV2.ObservabilityPipelineConfigProcessorItem{
							ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
								Id:      "filter-processor",
								Include: "service:my-service",
								Inputs: []string{
									"datadog-agent-source",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
							}},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						datadogV2.ObservabilityPipelineConfigSourceItem{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							}},
					},
				},
				Name: "Main Observability Pipeline",
			},
			Type: "pipelines",
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.CreatePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.CreatePipeline(ctx, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.CreatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.CreatePipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Create a new pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineCreateRequest;
import com.datadog.api.client.v2.model.ObservabilityPipelineCreateRequestData;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.createPipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    ObservabilityPipelineCreateRequest body =
        new ObservabilityPipelineCreateRequest()
            .data(
                new ObservabilityPipelineCreateRequestData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("datadog-logs-destination")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "filter-processor"))
                                                    .type(
                                                        ObservabilityPipelineDatadogLogsDestinationType
                                                            .DATADOG_LOGS))))
                                    .processors(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorItem(
                                                new ObservabilityPipelineFilterProcessor()
                                                    .id("filter-processor")
                                                    .include("service:my-service")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "datadog-agent-source"))
                                                    .type(
                                                        ObservabilityPipelineFilterProcessorType
                                                            .FILTER))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(
                                                        ObservabilityPipelineDatadogAgentSourceType
                                                            .DATADOG_AGENT)))))
                            .name("Main Observability Pipeline"))
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.createPipeline(body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#createPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
"""
Create a new pipeline returns "OK" response
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi
from datadog_api_client.v2.model.observability_pipeline_config import ObservabilityPipelineConfig
from datadog_api_client.v2.model.observability_pipeline_create_request import ObservabilityPipelineCreateRequest
from datadog_api_client.v2.model.observability_pipeline_create_request_data import (
    ObservabilityPipelineCreateRequestData,
)
from datadog_api_client.v2.model.observability_pipeline_data_attributes import ObservabilityPipelineDataAttributes
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source import (
    ObservabilityPipelineDatadogAgentSource,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source_type import (
    ObservabilityPipelineDatadogAgentSourceType,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination import (
    ObservabilityPipelineDatadogLogsDestination,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_type import (
    ObservabilityPipelineDatadogLogsDestinationType,
)
from datadog_api_client.v2.model.observability_pipeline_filter_processor import ObservabilityPipelineFilterProcessor
from datadog_api_client.v2.model.observability_pipeline_filter_processor_type import (
    ObservabilityPipelineFilterProcessorType,
)

body = ObservabilityPipelineCreateRequest(
    data=ObservabilityPipelineCreateRequestData(
        attributes=ObservabilityPipelineDataAttributes(
            config=ObservabilityPipelineConfig(
                destinations=[
                    ObservabilityPipelineDatadogLogsDestination(
                        id="datadog-logs-destination",
                        inputs=[
                            "filter-processor",
                        ],
                        type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,
                    ),
                ],
                processors=[
                    ObservabilityPipelineFilterProcessor(
                        id="filter-processor",
                        include="service:my-service",
                        inputs=[
                            "datadog-agent-source",
                        ],
                        type=ObservabilityPipelineFilterProcessorType.FILTER,
                    ),
                ],
                sources=[
                    ObservabilityPipelineDatadogAgentSource(
                        id="datadog-agent-source",
                        type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,
                    ),
                ],
            ),
            name="Main Observability Pipeline",
        ),
        type="pipelines",
    ),
)

configuration = Configuration()
configuration.unstable_operations["create_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.create_pipeline(body=body)

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command, where <DD_SITE> is your Datadog site (for example, datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):

DD_SITE="<DD_SITE>" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Create a new pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.create_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

body = DatadogAPIClient::V2::ObservabilityPipelineCreateRequest.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineCreateRequestData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "datadog-logs-destination",
            inputs: [
              "filter-processor",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processors: [
          DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
            id: "filter-processor",
            include: "service:my-service",
            inputs: [
              "datadog-agent-source",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Main Observability Pipeline",
    }),
    type: "pipelines",
  }),
})
p api_instance.create_pipeline(body)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command, where <DD_SITE> is your Datadog site (for example, datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):

DD_SITE="<DD_SITE>" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Create a new pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineCreateRequest;
use datadog_api_client::datadogV2::model::ObservabilityPipelineCreateRequestData;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;

#[tokio::main]
async fn main() {
    let body =
        ObservabilityPipelineCreateRequest::new(
            ObservabilityPipelineCreateRequestData::new(
                ObservabilityPipelineDataAttributes::new(
                    ObservabilityPipelineConfig::new(
                        vec![
                            ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                                Box::new(
                                    ObservabilityPipelineDatadogLogsDestination::new(
                                        "datadog-logs-destination".to_string(),
                                        vec!["filter-processor".to_string()],
                                        ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                                Box::new(
                                    ObservabilityPipelineFilterProcessor::new(
                                        "filter-processor".to_string(),
                                        "service:my-service".to_string(),
                                        vec!["datadog-agent-source".to_string()],
                                        ObservabilityPipelineFilterProcessorType::FILTER,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                                Box::new(
                                    ObservabilityPipelineDatadogAgentSource::new(
                                        "datadog-agent-source".to_string(),
                                        ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                                    ),
                                ),
                            )
                        ],
                    ),
                    "Main Observability Pipeline".to_string(),
                ),
                "pipelines".to_string(),
            ),
        );
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.CreatePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.create_pipeline(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command, where <DD_SITE> is your Datadog site (for example, datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):

DD_SITE="<DD_SITE>" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Create a new pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.createPipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

const params: v2.ObservabilityPipelinesApiCreatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "datadog-logs-destination",
              inputs: ["filter-processor"],
              type: "datadog_logs",
            },
          ],
          processors: [
            {
              id: "filter-processor",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              type: "filter",
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Main Observability Pipeline",
      },
      type: "pipelines",
    },
  },
};

apiInstance
  .createPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command, where <DD_SITE> is your Datadog site (for example, datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):

DD_SITE="<DD_SITE>" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview.

GET https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Get a specific pipeline by its ID. This endpoint requires the observability_pipelines_read permission.
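
For example, a minimal sketch using the Python client, mirroring the create example above. It assumes the get_pipeline operation is exposed under unstable_operations in the same way as create_pipeline, and <PIPELINE_ID> is a placeholder for a real pipeline ID:

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

configuration = Configuration()
# Assumption: the get operation is gated behind unstable_operations like create_pipeline.
configuration.unstable_operations["get_pipeline"] = True

with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    # <PIPELINE_ID> is a placeholder for the ID of the pipeline to retrieve.
    response = api_instance.get_pipeline(pipeline_id="<PIPELINE_ID>")
    print(response)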

Arguments

Path Parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to retrieve.

Response

OK

Top-level schema representing a pipeline.


Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
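
To show how the destination fields above fit together, here is a hypothetical amazon_s3 destination entry written as a Python dict. All values (bucket, region, role ARN, and component IDs) are illustrative placeholders, not a verified configuration:

# Illustrative only: an amazon_s3 destination assembled from the fields documented above.
amazon_s3_destination = {
    "id": "s3-archive-destination",
    "type": "amazon_s3",
    "inputs": ["filter-processor"],            # IDs of upstream components
    "bucket": "my-log-archive-bucket",
    "region": "us-east-1",
    "storage_class": "STANDARD",
    "key_prefix": "observability-pipelines/",  # optional object key prefix
    "auth": {                                   # optional; omit to use the system's default credentials
        "assume_role": "arn:aws:iam::123456789012:role/log-archiver",
        "external_id": "example-external-id",
        "session_name": "obs-pipelines-session",
    },
}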

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that did not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for event count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string used to replace the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of field names the scope rule applies to.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the CSV column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is used for the lookup.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the CSV column.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events; the threshold is applied to each group independently.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
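
As an illustration of how the processor fields above combine, here is a hypothetical quota processor entry expressed as a Python dict. The quota name, limits, and override values are placeholders rather than a verified configuration:

# Illustrative only: a quota processor assembled from the fields documented above.
quota_processor = {
    "id": "quota-processor",
    "type": "quota",
    "include": "service:my-service",
    "inputs": ["datadog-agent-source"],
    "name": "daily-ingest-quota",
    "drop_events": True,                      # drop matching logs once the quota is met
    "limit": {"enforce": "bytes", "limit": 10000000000},
    "partition_fields": ["service"],          # track the quota per unique service value
    "overrides": [
        {   # a higher limit for one specific service
            "fields": [{"name": "service", "value": "checkout"}],
            "limit": {"enforce": "bytes", "limit": 50000000000},
        },
    ],
}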

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose
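
For illustration only, an amazon_data_firehose entry in config.sources might look like the following sketch; the id and role ARN are placeholder values, and auth can be omitted to fall back to the system's default AWS credentials:

{
  "id": "firehose-source",
  "auth": {
    "assume_role": "arn:aws:iam::123456789012:role/example-role"
  },
  "type": "amazon_data_firehose"
}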

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
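
A minimal sketch of a google_pubsub entry in config.sources, with placeholder project, subscription, and key file path:

{
  "id": "pubsub-source",
  "auth": {
    "credentials_file": "/path/to/service-account.json"
  },
  "decoding": "json",
  "project": "my-gcp-project",
  "subscription": "my-subscription",
  "type": "google_pubsub"
}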

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client
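
As an illustrative sketch, an http_client entry in config.sources could combine the required decoding format with the optional authentication and scrape settings; all values shown here are placeholders:

{
  "id": "http-client-source",
  "auth_strategy": "bearer",
  "decoding": "json",
  "scrape_interval_secs": 60,
  "scrape_timeout_secs": 10,
  "type": "http_client"
}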

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Forbidden

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Expand All

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (replace api.datadoghq.com with the API host for your Datadog site,
# for example api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu,
# api.ap1.datadoghq.com, or api.ddog-gov.com)
curl -X GET "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
"""
Get a specific pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

configuration = Configuration()
configuration.unstable_operations["get_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.get_pipeline(
        pipeline_id=PIPELINE_DATA_ID,
    )

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Get a specific pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.get_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]
p api_instance.get_pipeline(PIPELINE_DATA_ID)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Get a specific pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.GetPipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.GetPipeline(ctx, PipelineDataID)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.GetPipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.GetPipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Get a specific pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.getPipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    try {
      ObservabilityPipeline result = apiInstance.getPipeline(PIPELINE_DATA_ID);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#getPipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
// Get a specific pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.GetPipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.get_pipeline(pipeline_data_id.clone()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Get a specific pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.getPipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiGetPipelineRequest = {
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .getPipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command, setting DD_SITE to your Datadog site:

    
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview.

PUT https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
PUT https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Update a pipeline. This endpoint requires the observability_pipelines_deploy permission.

Arguments

Path parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to update.

Request

Body Data (required)

Expand All

Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3
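
For illustration, an amazon_s3 entry in config.destinations might look like the sketch below; the bucket, region, key prefix, and component IDs are placeholders:

{
  "id": "s3-archive-destination",
  "inputs": [
    "filter-processor"
  ],
  "bucket": "my-log-archive-bucket",
  "region": "us-east-1",
  "key_prefix": "logs/",
  "storage_class": "STANDARD",
  "type": "amazon_s3"
}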

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage
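
A hedged sketch of a google_cloud_storage entry in config.destinations; the bucket name, key file path, and metadata pair are placeholders:

{
  "id": "gcs-destination",
  "inputs": [
    "filter-processor"
  ],
  "acl": "project-private",
  "auth": {
    "credentials_file": "/path/to/service-account.json"
  },
  "bucket": "my-gcs-bucket",
  "metadata": [
    {
      "name": "team",
      "value": "platform"
    }
  ],
  "storage_class": "NEARLINE",
  "type": "google_cloud_storage"
}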

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
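
For example, a splunk_hec entry in config.destinations could look like the following sketch; the index and sourcetype are placeholders, and every optional field may be dropped:

{
  "id": "splunk-hec-destination",
  "inputs": [
    "filter-processor"
  ],
  "auto_extract_timestamp": true,
  "encoding": "json",
  "index": "main",
  "sourcetype": "observability_pipelines",
  "type": "splunk_hec"
}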

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel
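
An illustrative microsoft_sentinel entry in config.destinations; the GUIDs, DCR immutable ID, and table name below are placeholders, not real values:

{
  "id": "sentinel-destination",
  "inputs": [
    "filter-processor"
  ],
  "client_id": "00000000-0000-0000-0000-000000000000",
  "tenant_id": "00000000-0000-0000-0000-000000000000",
  "dcr_immutable_id": "dcr-00000000000000000000000000000000",
  "table": "Custom-ObservabilityPipelines",
  "type": "microsoft_sentinel"
}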

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
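
As a sketch only, an amazon_opensearch entry in config.destinations using the aws authentication strategy; the region, role ARN, and index are placeholders:

{
  "id": "amazon-opensearch-destination",
  "inputs": [
    "filter-processor"
  ],
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/example-role"
  },
  "bulk_index": "logs",
  "type": "amazon_opensearch"
}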

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json
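
For illustration, a parse_json entry in config.processors that expands an embedded JSON string; the field name, query, and component IDs are placeholders:

{
  "id": "parse-json-processor",
  "include": "service:my-service",
  "inputs": [
    "datadog-agent-source"
  ],
  "field": "message",
  "type": "parse_json"
}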

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that do not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
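
A hedged sketch of a quota entry in config.processors that enforces a per-service byte quota with one override; the limits, query, and field values are illustrative only:

{
  "id": "quota-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "name": "daily-intake-quota",
  "drop_events": true,
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": [
    "service"
  ],
  "overrides": [
    {
      "fields": [
        {
          "name": "service",
          "value": "my-service"
        }
      ],
      "limit": {
        "enforce": "bytes",
        "limit": 20000000000
      }
    }
  ],
  "type": "quota"
}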

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields
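
For example, an add_fields entry in config.processors that stamps a static tag onto matching logs; the field name, value, and component IDs are placeholders:

{
  "id": "add-fields-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    {
      "name": "env",
      "value": "prod"
    }
  ],
  "type": "add_fields"
}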

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
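
An illustrative rename_fields entry in config.processors; the source and destination field names are placeholders:

{
  "id": "rename-fields-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    {
      "source": "hostname",
      "destination": "host",
      "preserve_source": false
    }
  ],
  "type": "rename_fields"
}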

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
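
A minimal sketch of a parse_grok entry in config.processors; the rule name and Grok pattern are illustrative and not taken from this reference:

{
  "id": "parse-grok-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_status_line",
          "rule": "%{word:level} %{data:msg}"
        }
      ],
      "support_rules": []
    }
  ],
  "type": "parse_grok"
}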

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of fields to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
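
As a hedged illustration, a sensitive_data_scanner entry in config.processors with one custom-regex rule that fully redacts matches across all fields; the rule name, regex, replacement string, and tag are placeholders:

{
  "id": "sds-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rules": [
    {
      "name": "redact-api-keys",
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "api_key=[A-Za-z0-9]{32}"
        }
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      },
      "scope": {
        "target": "all"
      },
      "tags": [
        "sensitive:api-key"
      ]
    }
  ],
  "type": "sensitive_data_scanner"
}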

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the column in the enrichment table used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the column during the lookup.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as defined in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
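
For illustration, an enrichment_table entry in config.processors backed by a CSV file; the file path, column names, and target path are placeholders:

{
  "id": "enrichment-table-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "file": {
    "path": "/path/to/enrichment.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "host",
        "comparison": "equals",
        "field": "hostname"
      }
    ],
    "schema": [
      {
        "column": "host",
        "type": "string"
      },
      {
        "column": "owner",
        "type": "string"
      }
    ]
  },
  "target": "host_owner",
  "type": "enrichment_table"
}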

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events; the threshold is applied independently to each group.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
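
A minimal sketch of a throttle entry in config.processors that allows at most 1000 events per 60-second window per service; all values are illustrative:

{
  "id": "throttle-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "group_by": [
    "service"
  ],
  "threshold": 1000,
  "window": 60.0,
  "type": "throttle"
}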

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Response

OK

Top-level schema representing a pipeline.


Field

Type

Description

data [required]

object

Contains the pipeline’s ID, type, and configuration attributes.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage
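
For illustration, a minimal google_cloud_storage destination entry in the destinations array might look like the following; the component ID, input ID, bucket name, credentials path, and metadata are hypothetical values:

{
  "id": "gcs-archive-destination",
  "type": "google_cloud_storage",
  "inputs": [
    "filter-processor"
  ],
  "bucket": "my-gcs-bucket",
  "acl": "project-private",
  "storage_class": "STANDARD",
  "auth": {
    "credentials_file": "/etc/gcp/service-account.json"
  },
  "metadata": [
    {
      "name": "team",
      "value": "observability"
    }
  ]
}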

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
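
For illustration, a splunk_hec destination entry might look like the following; the component ID, input ID, and index name are hypothetical:

{
  "id": "splunk-hec-destination",
  "type": "splunk_hec",
  "inputs": [
    "filter-processor"
  ],
  "encoding": "json",
  "index": "main",
  "auto_extract_timestamp": true
}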

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

The AWS region (used with the aws strategy).

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
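
For illustration, an amazon_opensearch destination entry using the aws authentication strategy might look like the following; the component ID, input ID, index, region, and role ARN are hypothetical:

{
  "id": "amazon-opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": [
    "filter-processor"
  ],
  "bulk_index": "logs-index",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/example-role"
  }
}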

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that do not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement in bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
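
For illustration, a minimal quota processor entry for the processors array might look like the following; the component IDs, filter query, quota name, and 10 GB limit are hypothetical values:

{
  "id": "quota-processor",
  "type": "quota",
  "include": "service:my-service",
  "inputs": [
    "datadog-agent-source"
  ],
  "name": "daily-ingest-quota",
  "drop_events": true,
  "limit": {
    "enforce": "bytes",
    "limit": 10000000000
  },
  "partition_fields": [
    "service"
  ],
  "overflow_action": "drop"
}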

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that is added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field received from the source should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
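
For illustration, a generate_datadog_metrics processor that counts error logs per service might look like the following; the component IDs, queries, and metric name are hypothetical:

{
  "id": "generate-metrics-processor",
  "type": "generate_datadog_metrics",
  "include": "*",
  "inputs": [
    "filter-processor"
  ],
  "metrics": [
    {
      "name": "logs.error.count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": [
        "service"
      ],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ]
}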

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
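
For illustration, a sample processor that keeps roughly 1 in 10 matching events might look like the following; the component IDs and query are hypothetical:

{
  "id": "sample-processor",
  "type": "sample",
  "include": "service:my-service",
  "inputs": [
    "datadog-agent-source"
  ],
  "rate": 10
}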

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
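
For illustration, a parse_grok processor entry might look like the following; the component IDs, query, rule name, and Grok pattern are hypothetical and only sketch the shape of the payload:

{
  "id": "parse-grok-processor",
  "type": "parse_grok",
  "include": "source:nginx",
  "inputs": [
    "datadog-agent-source"
  ],
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_access",
          "rule": "access %{ip:client.ip} %{word:http.method} %{integer:http.status_code}"
        }
      ],
      "support_rules": []
    }
  ]
}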

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The replacement string used in place of the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact from the matched value.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log fields to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

The list of log fields to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
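
For illustration, a sensitive_data_scanner processor with a single custom-regex rule that fully redacts matches might look like the following; the component IDs, rule name, regex, tags, and replacement string are hypothetical:

{
  "id": "sds-processor",
  "type": "sensitive_data_scanner",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": [
        "sensitive:api-key"
      ],
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "api_key=[A-Za-z0-9]{32}"
        }
      },
      "scope": {
        "target": "all"
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      }
    }
  ]
}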

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
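
For illustration, an add_env_vars processor entry might look like the following; the component IDs, target field, and environment variable name are hypothetical:

{
  "id": "add-env-vars-processor",
  "type": "add_env_vars",
  "include": "*",
  "inputs": [
    "filter-processor"
  ],
  "variables": [
    {
      "field": "deployment.region",
      "name": "DEPLOYMENT_REGION"
    }
  ]
}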

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe
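
For illustration, a dedupe processor entry might look like the following; the component IDs, query, and field paths are hypothetical:

{
  "id": "dedupe-processor",
  "type": "dedupe",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    "host",
    "message"
  ],
  "mode": "match"
}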

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter character used to separate values in the CSV file.

includes_headers [required]

boolean

Whether the CSV file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the enrichment table column to match against.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The name of the log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column as it appears in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
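
For illustration, an enrichment_table processor backed by a CSV file might look like the following; the component IDs, file path, column names, and target field are hypothetical:

{
  "id": "enrichment-table-processor",
  "type": "enrichment_table",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "target": "enriched",
  "file": {
    "path": "/etc/pipelines/service_owners.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "service_name",
        "comparison": "equals",
        "field": "service"
      }
    ],
    "schema": [
      {
        "column": "service_name",
        "type": "string"
      },
      {
        "column": "owner",
        "type": "string"
      }
    ]
  }
}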

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
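
For illustration, a reduce processor entry might look like the following; the component IDs, group-by fields, and merge strategies are hypothetical:

{
  "id": "reduce-processor",
  "type": "reduce",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "group_by": [
    "host",
    "service"
  ],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    },
    {
      "path": "count",
      "strategy": "sum"
    }
  ]
}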

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

An optional list of fields used to group events; the threshold is applied independently to each group.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
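
For illustration, a throttle processor that allows 1000 events per service every 60 seconds might look like the following; the component IDs, query, and numeric values are hypothetical:

{
  "id": "throttle-processor",
  "type": "throttle",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "threshold": 1000,
  "window": 60,
  "group_by": [
    "service"
  ]
}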

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
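
For illustration, a kafka source entry in the sources array might look like the following; the component ID, consumer group, topic name, and librdkafka option are hypothetical:

{
  "id": "kafka-source",
  "type": "kafka",
  "group_id": "observability-pipelines-consumer",
  "topics": [
    "app-logs"
  ],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ]
}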

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
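
As a rough sketch (not an official sample), a google_pubsub entry in the pipeline's sources array could look like the following; the IDs, project, subscription, and credentials path are placeholders:

{
  "id": "pubsub-source",
  "type": "google_pubsub",
  "project": "my-gcp-project",
  "subscription": "my-log-subscription",
  "decoding": "json",
  "auth": {
    "credentials_file": "/path/to/service-account.json"
  }
}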

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client
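
For illustration only, an http_client source that scrapes a JSON endpoint once a minute might be configured as in this sketch; the ID, intervals, and authentication strategy are placeholders:

{
  "id": "http-client-source",
  "type": "http_client",
  "decoding": "json",
  "auth_strategy": "bearer",
  "scrape_interval_secs": 60,
  "scrape_timeout_secs": 10
}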

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

id [required]

string

Unique identifier for the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "group_id": "consumer-group-0",
            "id": "kafka-source",
            "librdkafka_options": [
              {
                "name": "fetch.message.max.bytes",
                "value": "1048576"
              }
            ],
            "sasl": {
              "mechanism": "string"
            },
            "tls": {
              "ca_file": "string",
              "crt_file": "/path/to/cert.crt",
              "key_file": "string"
            },
            "topics": [
              "topic1",
              "topic2"
            ],
            "type": "kafka"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (replace api.datadoghq.com with the API host for your Datadog site, for example api.us3.datadoghq.com or api.datadoghq.eu)
curl -X PUT "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "updated-datadog-logs-destination-id",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Updated Pipeline Name"
    },
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "type": "pipelines"
  }
}
EOF
// Update a pipeline returns "OK" response

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	body := datadogV2.ObservabilityPipeline{
		Data: datadogV2.ObservabilityPipelineData{
			Attributes: datadogV2.ObservabilityPipelineDataAttributes{
				Config: datadogV2.ObservabilityPipelineConfig{
					Destinations: []datadogV2.ObservabilityPipelineConfigDestinationItem{
						datadogV2.ObservabilityPipelineConfigDestinationItem{
							ObservabilityPipelineDatadogLogsDestination: &datadogV2.ObservabilityPipelineDatadogLogsDestination{
								Id: "updated-datadog-logs-destination-id",
								Inputs: []string{
									"filter-processor",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGLOGSDESTINATIONTYPE_DATADOG_LOGS,
							}},
					},
					Processors: []datadogV2.ObservabilityPipelineConfigProcessorItem{
						datadogV2.ObservabilityPipelineConfigProcessorItem{
							ObservabilityPipelineFilterProcessor: &datadogV2.ObservabilityPipelineFilterProcessor{
								Id:      "filter-processor",
								Include: "service:my-service",
								Inputs: []string{
									"datadog-agent-source",
								},
								Type: datadogV2.OBSERVABILITYPIPELINEFILTERPROCESSORTYPE_FILTER,
							}},
					},
					Sources: []datadogV2.ObservabilityPipelineConfigSourceItem{
						datadogV2.ObservabilityPipelineConfigSourceItem{
							ObservabilityPipelineDatadogAgentSource: &datadogV2.ObservabilityPipelineDatadogAgentSource{
								Id:   "datadog-agent-source",
								Type: datadogV2.OBSERVABILITYPIPELINEDATADOGAGENTSOURCETYPE_DATADOG_AGENT,
							}},
					},
				},
				Name: "Updated Pipeline Name",
			},
			Id:   PipelineDataID,
			Type: "pipelines",
		},
	}
	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.UpdatePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	resp, r, err := api.UpdatePipeline(ctx, PipelineDataID, body)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.UpdatePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	responseContent, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Fprintf(os.Stdout, "Response from `ObservabilityPipelinesApi.UpdatePipeline`:\n%s\n", responseContent)
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Update a pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;
import com.datadog.api.client.v2.model.ObservabilityPipeline;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfig;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigDestinationItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigProcessorItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineConfigSourceItem;
import com.datadog.api.client.v2.model.ObservabilityPipelineData;
import com.datadog.api.client.v2.model.ObservabilityPipelineDataAttributes;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSource;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogAgentSourceType;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestination;
import com.datadog.api.client.v2.model.ObservabilityPipelineDatadogLogsDestinationType;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessor;
import com.datadog.api.client.v2.model.ObservabilityPipelineFilterProcessorType;
import java.util.Collections;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.updatePipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    ObservabilityPipeline body =
        new ObservabilityPipeline()
            .data(
                new ObservabilityPipelineData()
                    .attributes(
                        new ObservabilityPipelineDataAttributes()
                            .config(
                                new ObservabilityPipelineConfig()
                                    .destinations(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigDestinationItem(
                                                new ObservabilityPipelineDatadogLogsDestination()
                                                    .id("updated-datadog-logs-destination-id")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "filter-processor"))
                                                    .type(
                                                        ObservabilityPipelineDatadogLogsDestinationType
                                                            .DATADOG_LOGS))))
                                    .processors(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigProcessorItem(
                                                new ObservabilityPipelineFilterProcessor()
                                                    .id("filter-processor")
                                                    .include("service:my-service")
                                                    .inputs(
                                                        Collections.singletonList(
                                                            "datadog-agent-source"))
                                                    .type(
                                                        ObservabilityPipelineFilterProcessorType
                                                            .FILTER))))
                                    .sources(
                                        Collections.singletonList(
                                            new ObservabilityPipelineConfigSourceItem(
                                                new ObservabilityPipelineDatadogAgentSource()
                                                    .id("datadog-agent-source")
                                                    .type(
                                                        ObservabilityPipelineDatadogAgentSourceType
                                                            .DATADOG_AGENT)))))
                            .name("Updated Pipeline Name"))
                    .id(PIPELINE_DATA_ID)
                    .type("pipelines"));

    try {
      ObservabilityPipeline result = apiInstance.updatePipeline(PIPELINE_DATA_ID, body);
      System.out.println(result);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#updatePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
"""
Update a pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi
from datadog_api_client.v2.model.observability_pipeline import ObservabilityPipeline
from datadog_api_client.v2.model.observability_pipeline_config import ObservabilityPipelineConfig
from datadog_api_client.v2.model.observability_pipeline_data import ObservabilityPipelineData
from datadog_api_client.v2.model.observability_pipeline_data_attributes import ObservabilityPipelineDataAttributes
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source import (
    ObservabilityPipelineDatadogAgentSource,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_agent_source_type import (
    ObservabilityPipelineDatadogAgentSourceType,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination import (
    ObservabilityPipelineDatadogLogsDestination,
)
from datadog_api_client.v2.model.observability_pipeline_datadog_logs_destination_type import (
    ObservabilityPipelineDatadogLogsDestinationType,
)
from datadog_api_client.v2.model.observability_pipeline_filter_processor import ObservabilityPipelineFilterProcessor
from datadog_api_client.v2.model.observability_pipeline_filter_processor_type import (
    ObservabilityPipelineFilterProcessorType,
)

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

body = ObservabilityPipeline(
    data=ObservabilityPipelineData(
        attributes=ObservabilityPipelineDataAttributes(
            config=ObservabilityPipelineConfig(
                destinations=[
                    ObservabilityPipelineDatadogLogsDestination(
                        id="updated-datadog-logs-destination-id",
                        inputs=[
                            "filter-processor",
                        ],
                        type=ObservabilityPipelineDatadogLogsDestinationType.DATADOG_LOGS,
                    ),
                ],
                processors=[
                    ObservabilityPipelineFilterProcessor(
                        id="filter-processor",
                        include="service:my-service",
                        inputs=[
                            "datadog-agent-source",
                        ],
                        type=ObservabilityPipelineFilterProcessorType.FILTER,
                    ),
                ],
                sources=[
                    ObservabilityPipelineDatadogAgentSource(
                        id="datadog-agent-source",
                        type=ObservabilityPipelineDatadogAgentSourceType.DATADOG_AGENT,
                    ),
                ],
            ),
            name="Updated Pipeline Name",
        ),
        id=PIPELINE_DATA_ID,
        type="pipelines",
    ),
)

configuration = Configuration()
configuration.unstable_operations["update_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    response = api_instance.update_pipeline(pipeline_id=PIPELINE_DATA_ID, body=body)

    print(response)

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Update a pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.update_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]

body = DatadogAPIClient::V2::ObservabilityPipeline.new({
  data: DatadogAPIClient::V2::ObservabilityPipelineData.new({
    attributes: DatadogAPIClient::V2::ObservabilityPipelineDataAttributes.new({
      config: DatadogAPIClient::V2::ObservabilityPipelineConfig.new({
        destinations: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestination.new({
            id: "updated-datadog-logs-destination-id",
            inputs: [
              "filter-processor",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
          }),
        ],
        processors: [
          DatadogAPIClient::V2::ObservabilityPipelineFilterProcessor.new({
            id: "filter-processor",
            include: "service:my-service",
            inputs: [
              "datadog-agent-source",
            ],
            type: DatadogAPIClient::V2::ObservabilityPipelineFilterProcessorType::FILTER,
          }),
        ],
        sources: [
          DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSource.new({
            id: "datadog-agent-source",
            type: DatadogAPIClient::V2::ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
          }),
        ],
      }),
      name: "Updated Pipeline Name",
    }),
    id: PIPELINE_DATA_ID,
    type: "pipelines",
  }),
})
p api_instance.update_pipeline(PIPELINE_DATA_ID, body)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Update a pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;
use datadog_api_client::datadogV2::model::ObservabilityPipeline;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfig;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigDestinationItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigProcessorItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineConfigSourceItem;
use datadog_api_client::datadogV2::model::ObservabilityPipelineData;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDataAttributes;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSource;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogAgentSourceType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestination;
use datadog_api_client::datadogV2::model::ObservabilityPipelineDatadogLogsDestinationType;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessor;
use datadog_api_client::datadogV2::model::ObservabilityPipelineFilterProcessorType;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let body =
        ObservabilityPipeline::new(
            ObservabilityPipelineData::new(
                ObservabilityPipelineDataAttributes::new(
                    ObservabilityPipelineConfig::new(
                        vec![
                            ObservabilityPipelineConfigDestinationItem::ObservabilityPipelineDatadogLogsDestination(
                                Box::new(
                                    ObservabilityPipelineDatadogLogsDestination::new(
                                        "updated-datadog-logs-destination-id".to_string(),
                                        vec!["filter-processor".to_string()],
                                        ObservabilityPipelineDatadogLogsDestinationType::DATADOG_LOGS,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigProcessorItem::ObservabilityPipelineFilterProcessor(
                                Box::new(
                                    ObservabilityPipelineFilterProcessor::new(
                                        "filter-processor".to_string(),
                                        "service:my-service".to_string(),
                                        vec!["datadog-agent-source".to_string()],
                                        ObservabilityPipelineFilterProcessorType::FILTER,
                                    ),
                                ),
                            )
                        ],
                        vec![
                            ObservabilityPipelineConfigSourceItem::ObservabilityPipelineDatadogAgentSource(
                                Box::new(
                                    ObservabilityPipelineDatadogAgentSource::new(
                                        "datadog-agent-source".to_string(),
                                        ObservabilityPipelineDatadogAgentSourceType::DATADOG_AGENT,
                                    ),
                                ),
                            )
                        ],
                    ),
                    "Updated Pipeline Name".to_string(),
                ),
                pipeline_data_id.clone(),
                "pipelines".to_string(),
            ),
        );
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.UpdatePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.update_pipeline(pipeline_data_id.clone(), body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Update a pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.updatePipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiUpdatePipelineRequest = {
  body: {
    data: {
      attributes: {
        config: {
          destinations: [
            {
              id: "updated-datadog-logs-destination-id",
              inputs: ["filter-processor"],
              type: "datadog_logs",
            },
          ],
          processors: [
            {
              id: "filter-processor",
              include: "service:my-service",
              inputs: ["datadog-agent-source"],
              type: "filter",
            },
          ],
          sources: [
            {
              id: "datadog-agent-source",
              type: "datadog_agent",
            },
          ],
        },
        name: "Updated Pipeline Name",
      },
      id: PIPELINE_DATA_ID,
      type: "pipelines",
    },
  },
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .updatePipeline(params)
  .then((data: v2.ObservabilityPipeline) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview.

DELETE https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}
DELETE https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/{pipeline_id}

Overview

Delete a pipeline. This endpoint requires the observability_pipelines_delete permission.

Arguments

Path parameters

Name

Type

Description

pipeline_id [required]

string

The ID of the pipeline to delete.

Response

OK

Forbidden

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Found

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Conflict

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Path parameters
export pipeline_id="CHANGE_ME"
# Curl command (replace api.datadoghq.com with the API host for your Datadog site, for example api.us3.datadoghq.com or api.datadoghq.eu)
curl -X DELETE "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/${pipeline_id}" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
"""
Delete a pipeline returns "OK" response
"""

from os import environ
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.observability_pipelines_api import ObservabilityPipelinesApi

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = environ["PIPELINE_DATA_ID"]

configuration = Configuration()
configuration.unstable_operations["delete_pipeline"] = True
with ApiClient(configuration) as api_client:
    api_instance = ObservabilityPipelinesApi(api_client)
    api_instance.delete_pipeline(
        pipeline_id=PIPELINE_DATA_ID,
    )

Instructions

First install the library and its dependencies, then save the example to example.py and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" python3 "example.py"
# Delete a pipeline returns "OK" response

require "datadog_api_client"
DatadogAPIClient.configure do |config|
  config.unstable_operations["v2.delete_pipeline".to_sym] = true
end
api_instance = DatadogAPIClient::V2::ObservabilityPipelinesAPI.new

# there is a valid "pipeline" in the system
PIPELINE_DATA_ID = ENV["PIPELINE_DATA_ID"]
api_instance.delete_pipeline(PIPELINE_DATA_ID)

Instructions

First install the library and its dependencies, then save the example to example.rb and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
// Delete a pipeline returns "OK" response

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
	"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
)

func main() {
	// there is a valid "pipeline" in the system
	PipelineDataID := os.Getenv("PIPELINE_DATA_ID")

	ctx := datadog.NewDefaultContext(context.Background())
	configuration := datadog.NewConfiguration()
	configuration.SetUnstableOperationEnabled("v2.DeletePipeline", true)
	apiClient := datadog.NewAPIClient(configuration)
	api := datadogV2.NewObservabilityPipelinesApi(apiClient)
	r, err := api.DeletePipeline(ctx, PipelineDataID)

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `ObservabilityPipelinesApi.DeletePipeline`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}
}

Instructions

First install the library and its dependencies, then save the example to main.go and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" go run "main.go"
// Delete a pipeline returns "OK" response

import com.datadog.api.client.ApiClient;
import com.datadog.api.client.ApiException;
import com.datadog.api.client.v2.api.ObservabilityPipelinesApi;

public class Example {
  public static void main(String[] args) {
    ApiClient defaultClient = ApiClient.getDefaultApiClient();
    defaultClient.setUnstableOperationEnabled("v2.deletePipeline", true);
    ObservabilityPipelinesApi apiInstance = new ObservabilityPipelinesApi(defaultClient);

    // there is a valid "pipeline" in the system
    String PIPELINE_DATA_ID = System.getenv("PIPELINE_DATA_ID");

    try {
      apiInstance.deletePipeline(PIPELINE_DATA_ID);
    } catch (ApiException e) {
      System.err.println("Exception when calling ObservabilityPipelinesApi#deletePipeline");
      System.err.println("Status code: " + e.getCode());
      System.err.println("Reason: " + e.getResponseBody());
      System.err.println("Response headers: " + e.getResponseHeaders());
      e.printStackTrace();
    }
  }
}

Instructions

First install the library and its dependencies, then save the example to Example.java and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" java "Example.java"
// Delete a pipeline returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV2::api_observability_pipelines::ObservabilityPipelinesAPI;

#[tokio::main]
async fn main() {
    // there is a valid "pipeline" in the system
    let pipeline_data_id = std::env::var("PIPELINE_DATA_ID").unwrap();
    let mut configuration = datadog::Configuration::new();
    configuration.set_unstable_operation_enabled("v2.DeletePipeline", true);
    let api = ObservabilityPipelinesAPI::with_config(configuration);
    let resp = api.delete_pipeline(pipeline_data_id.clone()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}

Instructions

First install the library and its dependencies, then save the example to src/main.rs and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" cargo run
/**
 * Delete a pipeline returns "OK" response
 */

import { client, v2 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
configuration.unstableOperations["v2.deletePipeline"] = true;
const apiInstance = new v2.ObservabilityPipelinesApi(configuration);

// there is a valid "pipeline" in the system
const PIPELINE_DATA_ID = process.env.PIPELINE_DATA_ID as string;

const params: v2.ObservabilityPipelinesApiDeletePipelineRequest = {
  pipelineId: PIPELINE_DATA_ID,
};

apiInstance
  .deletePipeline(params)
  .then((data: any) => {
    console.log(
      "API called successfully. Returned data: " + JSON.stringify(data)
    );
  })
  .catch((error: any) => console.error(error));

Instructions

First install the library and its dependencies, then save the example to example.ts and run the following command, setting DD_SITE to your Datadog site:

DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" tsc "example.ts"

Note: This endpoint is in Preview.

POST https://api.ap1.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate
POST https://api.datadoghq.eu/api/v2/remote_config/products/obs_pipelines/pipelines/validate
POST https://api.ddog-gov.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate
POST https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate
POST https://api.us3.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate
POST https://api.us5.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate

Overview

Validates a pipeline configuration without creating or updating any resources. Returns a list of validation errors, if any. This endpoint requires the observability_pipelines_read permission.
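
For orientation, the request body mirrors the pipeline payload used when creating or updating a pipeline. A minimal sketch with placeholder component IDs (optional fields omitted) could look like this:

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": ["filter-processor"],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": ["datadog-agent-source"],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Pipeline to validate"
    },
    "type": "pipelines"
  }
}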

Request

Body Data (required)

Field

Type

Description

data [required]

object

Contains the pipeline configuration.

attributes [required]

object

Defines the pipeline’s name and its components (sources, processors, and destinations).

config [required]

object

Specifies the pipeline's configuration, including its sources, processors, and destinations.

destinations [required]

[ <oneOf>]

A list of destination components where processed logs are sent.

Option 1

object

The datadog_logs destination forwards logs to Datadog Log Management.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be datadog_logs. Allowed enum values: datadog_logs

default: datadog_logs

Option 2

object

The amazon_s3 destination sends your logs in Datadog-rehydratable format to an Amazon S3 bucket for archiving.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

bucket [required]

string

S3 bucket name.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys.

region [required]

string

AWS region of the S3 bucket.

storage_class [required]

enum

S3 storage class. Allowed enum values: STANDARD,REDUCED_REDUNDANCY,INTELLIGENT_TIERING,STANDARD_IA,EXPRESS_ONEZONE,ONEZONE_IA,GLACIER,GLACIER_IR,DEEP_ARCHIVE

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 3

object

The google_cloud_storage destination stores logs in a Google Cloud Storage (GCS) bucket. It requires a bucket name, GCP authentication, and metadata fields.

acl [required]

enum

Access control list setting for objects written to the bucket. Allowed enum values: private,project-private,public-read,authenticated-read,bucket-owner-read,bucket-owner-full-control

auth [required]

object

GCP credentials used to authenticate with Google Cloud Storage.

credentials_file [required]

string

Path to the GCP service account key file.

bucket [required]

string

Name of the GCS bucket.

id [required]

string

Unique identifier for the destination component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

key_prefix

string

Optional prefix for object keys within the GCS bucket.

metadata [required]

[object]

Custom metadata key-value pairs added to each object.

name [required]

string

The metadata key.

value [required]

string

The metadata value.

storage_class [required]

enum

Storage class used for objects stored in GCS. Allowed enum values: STANDARD,NEARLINE,COLDLINE,ARCHIVE

type [required]

enum

The destination type. Always google_cloud_storage. Allowed enum values: google_cloud_storage

default: google_cloud_storage
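
As an informal sketch, a google_cloud_storage destination entry could look like the following; the bucket, prefix, metadata, and credentials path are placeholders:

{
  "id": "gcs-archive-destination",
  "type": "google_cloud_storage",
  "inputs": ["filter-processor"],
  "bucket": "my-log-archive-bucket",
  "acl": "project-private",
  "storage_class": "NEARLINE",
  "key_prefix": "observability-pipelines/",
  "metadata": [
    { "name": "team", "value": "platform" }
  ],
  "auth": {
    "credentials_file": "/path/to/service-account.json"
  }
}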

Option 4

object

The splunk_hec destination forwards logs to Splunk using the HTTP Event Collector (HEC).

auto_extract_timestamp

boolean

If true, Splunk tries to extract timestamps from incoming log events. If false, Splunk assigns the time the event was received.

encoding

enum

Encoding format for log events. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

index

string

Optional name of the Splunk index where logs are written.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

sourcetype

string

The Splunk sourcetype to assign to log events.

type [required]

enum

The destination type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec
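
For illustration, a splunk_hec destination entry might look like this sketch; the index and sourcetype values are placeholders:

{
  "id": "splunk-hec-destination",
  "type": "splunk_hec",
  "inputs": ["filter-processor"],
  "encoding": "json",
  "index": "main",
  "sourcetype": "observability_pipelines",
  "auto_extract_timestamp": true
}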

Option 5

object

The sumo_logic destination forwards logs to Sumo Logic.

encoding

enum

The output encoding format. Allowed enum values: json,raw_message,logfmt

header_custom_fields

[object]

A list of custom headers to include in the request to Sumo Logic.

name [required]

string

The header field name.

value [required]

string

The header field value.

header_host_name

string

Optional override for the host name header.

header_source_category

string

Optional override for the source category header.

header_source_name

string

Optional override for the source name header.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 6

object

The elasticsearch destination writes logs to an Elasticsearch cluster.

api_version

enum

The Elasticsearch API version to use. Set to auto to auto-detect. Allowed enum values: auto,v6,v7,v8

bulk_index

string

The index to write logs to in Elasticsearch.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be elasticsearch. Allowed enum values: elasticsearch

default: elasticsearch

Option 7

object

The rsyslog destination forwards logs to an external rsyslog server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 8

object

The syslog_ng destination forwards logs to an external syslog-ng server over TCP or UDP using the syslog protocol.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

keepalive

int64

Optional socket keepalive duration in milliseconds.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The destination type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 9

object

The azure_storage destination forwards logs to an Azure Blob Storage container.

blob_prefix

string

Optional prefix for blobs written to the container.

container_name [required]

string

The name of the Azure Blob Storage container to store logs in.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be azure_storage. Allowed enum values: azure_storage

default: azure_storage

Option 10

object

The microsoft_sentinel destination forwards logs to Microsoft Sentinel.

client_id [required]

string

Azure AD client ID used for authentication.

dcr_immutable_id [required]

string

The immutable ID of the Data Collection Rule (DCR).

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

table [required]

string

The name of the Log Analytics table where logs are sent.

tenant_id [required]

string

Azure AD tenant ID.

type [required]

enum

The destination type. The value should always be microsoft_sentinel. Allowed enum values: microsoft_sentinel

default: microsoft_sentinel

Option 11

object

The google_chronicle destination sends logs to Google Chronicle.

auth [required]

object

GCP credentials used to authenticate with Google Cloud.

credentials_file [required]

string

Path to the GCP service account key file.

customer_id [required]

string

The Google Chronicle customer ID.

encoding

enum

The encoding format for the logs sent to Chronicle. Allowed enum values: json,raw_message

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

log_type

string

The log type metadata associated with the Chronicle destination.

type [required]

enum

The destination type. The value should always be google_chronicle. Allowed enum values: google_chronicle

default: google_chronicle

Option 12

object

The new_relic destination sends logs to the New Relic platform.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The New Relic region. Allowed enum values: us,eu

type [required]

enum

The destination type. The value should always be new_relic. Allowed enum values: new_relic

default: new_relic

Option 13

object

The sentinel_one destination sends logs to SentinelOne.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

region [required]

enum

The SentinelOne region to send logs to. Allowed enum values: us,eu,ca,data_set_us

type [required]

enum

The destination type. The value should always be sentinel_one. Allowed enum values: sentinel_one

default: sentinel_one

Option 14

object

The opensearch destination writes logs to an OpenSearch cluster.

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be opensearch. Allowed enum values: opensearch

default: opensearch

Option 15

object

The amazon_opensearch destination writes logs to Amazon OpenSearch.

auth [required]

object

Authentication settings for the Amazon OpenSearch destination. The strategy field determines whether basic or AWS-based authentication is used.

assume_role

string

The ARN of the role to assume (used with aws strategy).

aws_region

string

The AWS region.

external_id

string

External ID for the assumed role (used with aws strategy).

session_name

string

Session name for the assumed role (used with aws strategy).

strategy [required]

enum

The authentication strategy to use. Allowed enum values: basic,aws

bulk_index

string

The index to write logs to.

id [required]

string

The unique identifier for this component.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The destination type. The value should always be amazon_opensearch. Allowed enum values: amazon_opensearch

default: amazon_opensearch
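
As a rough sketch, an amazon_opensearch destination using the aws authentication strategy could be configured as follows; the role ARN, region, and index are placeholders:

{
  "id": "amazon-opensearch-destination",
  "type": "amazon_opensearch",
  "inputs": ["filter-processor"],
  "bulk_index": "pipeline-logs",
  "auth": {
    "strategy": "aws",
    "aws_region": "us-east-1",
    "assume_role": "arn:aws:iam::123456789012:role/opensearch-writer",
    "external_id": "my-external-id",
    "session_name": "pipeline-session"
  }
}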

processors

[ <oneOf>]

A list of processors that transform or enrich log data.

Option 1

object

The filter processor allows conditional processing of logs based on a Datadog search query. Logs that match the include query are passed through; others are discarded.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs should pass through the filter. Logs that match this query continue to downstream components; others are dropped.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be filter. Allowed enum values: filter

default: filter

Option 2

object

The parse_json processor extracts JSON from a specified field and flattens it into the event. This is useful when logs contain embedded JSON as a string.

field [required]

string

The name of the log field that contains a JSON string.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be parse_json. Allowed enum values: parse_json

default: parse_json

Option 3

object

The Quota Processor measures logging traffic for logs that match a specified filter. When the configured daily quota is met, the processor can drop or alert.

drop_events [required]

boolean

If set to true, logs that match the quota filter and are sent after the quota has been met are dropped; only logs that do not match the filter query continue through the pipeline.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

ignore_when_missing_partitions

boolean

If true, the processor skips quota checks when partition fields are missing from the logs.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

name [required]

string

Name of the quota.

overflow_action

enum

The action to take when the quota is exceeded. Options:

  • drop: Drop the event.

  • no_action: Let the event pass through.

  • overflow_routing: Route to an overflow destination.

    Allowed enum values: drop,no_action,overflow_routing

overrides

[object]

A list of alternate quota rules that apply to specific sets of events, identified by matching field values. Each override can define a custom limit.

fields [required]

[object]

A list of field matchers used to apply a specific override. If an event matches all listed key-value pairs, the corresponding override limit is enforced.

name [required]

string

The field name.

value [required]

string

The field value.

limit [required]

object

The maximum amount of data or number of events allowed before the quota is enforced. Can be specified in bytes or events.

enforce [required]

enum

Unit for quota enforcement: bytes for data size or events for count. Allowed enum values: bytes,events

limit [required]

int64

The limit for quota enforcement.

partition_fields

[string]

A list of fields used to segment log traffic for quota enforcement. Quotas are tracked independently by unique combinations of these field values.

type [required]

enum

The processor type. The value should always be quota. Allowed enum values: quota

default: quota
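
Putting these fields together, a quota processor that enforces a per-service daily byte limit, drops matching logs once the quota is met, and raises the limit for one service might look like the following sketch; the query, limits, and override values are placeholders:

{
  "id": "quota-processor",
  "type": "quota",
  "include": "*",
  "inputs": ["datadog-agent-source"],
  "name": "daily-ingest-quota",
  "drop_events": true,
  "limit": { "enforce": "bytes", "limit": 10000000000 },
  "partition_fields": ["service"],
  "overrides": [
    {
      "fields": [{ "name": "service", "value": "checkout" }],
      "limit": { "enforce": "bytes", "limit": 20000000000 }
    }
  ]
}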

Option 4

object

The add_fields processor adds static key-value fields to logs.

fields [required]

[object]

A list of static fields (key-value pairs) that are added to each log event processed by this component.

name [required]

string

The field name.

value [required]

string

The field value.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be add_fields. Allowed enum values: add_fields

default: add_fields

Option 5

object

The remove_fields processor deletes specified fields from logs.

fields [required]

[string]

A list of field names to be removed from each log event.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be remove_fields. Allowed enum values: remove_fields

default: remove_fields

Option 6

object

The rename_fields processor changes field names.

fields [required]

[object]

A list of rename rules specifying which fields to rename in the event, what to rename them to, and whether to preserve the original fields.

destination [required]

string

The field name to assign the renamed value to.

preserve_source [required]

boolean

Indicates whether the original field, as received from the source, should be kept (true) or removed (false) after renaming.

source [required]

string

The original field name in the log event that should be renamed.

id [required]

string

A unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

type [required]

enum

The processor type. The value should always be rename_fields. Allowed enum values: rename_fields

default: rename_fields
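
For illustration, a rename_fields processor that renames hostname to host and drops the original field could look like this sketch; the field names and query are placeholders:

{
  "id": "rename-fields-processor",
  "type": "rename_fields",
  "include": "*",
  "inputs": ["filter-processor"],
  "fields": [
    {
      "source": "hostname",
      "destination": "host",
      "preserve_source": false
    }
  ]
}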

Option 7

object

The generate_datadog_metrics processor creates custom metrics from logs and sends them to Datadog. Metrics can be counters, gauges, or distributions and optionally grouped by log fields.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

metrics [required]

[object]

Configuration for generating individual metrics.

group_by

[string]

Optional fields used to group the metric series.

include [required]

string

Datadog filter query to match logs for metric generation.

metric_type [required]

enum

Type of metric to create. Allowed enum values: count,gauge,distribution

name [required]

string

Name of the custom metric to be created.

value [required]

 <oneOf>

Specifies how the value of the generated metric is computed.

Option 1

object

Strategy that increments a generated metric by one for each matching event.

strategy [required]

enum

Increments the metric by 1 for each matching event. Allowed enum values: increment_by_one

Option 2

object

Strategy that increments a generated metric based on the value of a log field.

field [required]

string

Name of the log field containing the numeric value to increment the metric by.

strategy [required]

enum

Uses a numeric field in the log event as the metric increment. Allowed enum values: increment_by_field

type [required]

enum

The processor type. Always generate_datadog_metrics. Allowed enum values: generate_datadog_metrics

default: generate_datadog_metrics
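
A hedged example of a generate_datadog_metrics processor that counts matching error logs, grouped by an assumed service field; the metric name, queries, and IDs are placeholders:

{
  "id": "generate-metrics-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "metrics": [
    {
      "name": "logs.error_count",
      "metric_type": "count",
      "include": "status:error",
      "group_by": [
        "service"
      ],
      "value": {
        "strategy": "increment_by_one"
      }
    }
  ],
  "type": "generate_datadog_metrics"
}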

Option 8

object

The sample processor allows probabilistic sampling of logs at a fixed rate.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

percentage

double

The percentage of logs to sample.

rate

int64

Number of events to sample (1 in N).

type [required]

enum

The processor type. The value should always be sample. Allowed enum values: sample

default: sample
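
For reference, a sample processor sketch that keeps roughly 1 in 10 events; the ID and inputs are placeholders, and the schema above also documents an optional percentage field for expressing the sampling rate:

{
  "id": "sample-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rate": 10,
  "type": "sample"
}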

Option 9

object

The parse_grok processor extracts structured fields from unstructured log messages using Grok patterns.

disable_library_rules

boolean

If set to true, disables the default Grok rules provided by Datadog.

id [required]

string

A unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

The list of Grok parsing rules. If multiple matching rules are provided, they are evaluated in order. The first successful match is applied.

match_rules [required]

[object]

A list of Grok parsing rules that define how to extract fields from the source field. Each rule must contain a name and a valid Grok pattern.

name [required]

string

The name of the rule.

rule [required]

string

The definition of the Grok rule.

source [required]

string

The name of the field in the log event to apply the Grok rules to.

support_rules [required]

[object]

A list of Grok helper rules that can be referenced by the parsing rules.

name [required]

string

The name of the Grok helper rule.

rule [required]

string

The definition of the Grok helper rule.

type [required]

enum

The processor type. The value should always be parse_grok. Allowed enum values: parse_grok

default: parse_grok
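
A minimal parse_grok sketch with a single, assumed match rule applied to the message field; the rule pattern is illustrative and not taken from Datadog's rule library:

{
  "id": "parse-grok-processor",
  "include": "source:nginx",
  "inputs": [
    "datadog-agent-source"
  ],
  "disable_library_rules": false,
  "rules": [
    {
      "source": "message",
      "match_rules": [
        {
          "name": "parse_level",
          "rule": "%{word:level} %{data:msg}"
        }
      ],
      "support_rules": []
    }
  ],
  "type": "parse_grok"
}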

Option 10

object

The sensitive_data_scanner processor detects and optionally redacts sensitive data in log events.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this component.

rules [required]

[object]

A list of rules for identifying and acting on sensitive data patterns.

keyword_options

object

Configuration for keywords used to reinforce sensitive data pattern detection.

keywords [required]

[string]

A list of keywords to match near the sensitive pattern.

proximity [required]

int64

Maximum number of tokens between a keyword and a sensitive value match.

name [required]

string

A name identifying the rule.

on_match [required]

 <oneOf>

Defines what action to take when sensitive data is matched.

Option 1

object

Configuration for completely redacting matched sensitive data.

action [required]

enum

Action type that completely replaces the matched sensitive data with a fixed replacement string to remove all visibility. Allowed enum values: redact

options [required]

object

Configuration for fully redacting sensitive data.

replace [required]

string

The string that replaces the matched sensitive data.

Option 2

object

Configuration for hashing matched sensitive values.

action [required]

enum

Action type that replaces the matched sensitive data with a hashed representation, preserving structure while securing content. Allowed enum values: hash

options

object

Options for the hash action.

Option 3

object

Configuration for partially redacting matched sensitive data.

action [required]

enum

Action type that redacts part of the sensitive data while preserving a configurable number of characters, typically used for masking purposes (e.g., show last 4 digits of a credit card). Allowed enum values: partial_redact

options [required]

object

Controls how partial redaction is applied, including character count and direction.

characters [required]

int64

The number of characters to redact.

direction [required]

enum

Indicates whether to redact characters from the first or last part of the matched value. Allowed enum values: first,last

pattern [required]

 <oneOf>

Pattern detection configuration for identifying sensitive data using either a custom regex or a library reference.

Option 1

object

Defines a custom regex-based pattern for identifying sensitive data in logs.

options [required]

object

Options for defining a custom regex pattern.

rule [required]

string

A regular expression used to detect sensitive values. Must be a valid regex.

type [required]

enum

Indicates a custom regular expression is used for matching. Allowed enum values: custom

Option 2

object

Specifies a pattern from Datadog’s sensitive data detection library to match known sensitive data types.

options [required]

object

Options for selecting a predefined library pattern and enabling keyword support.

id [required]

string

Identifier for a predefined pattern from the sensitive data scanner pattern library.

use_recommended_keywords

boolean

Whether to augment the pattern with recommended keywords (optional).

type [required]

enum

Indicates that a predefined library pattern is used. Allowed enum values: library

scope [required]

 <oneOf>

Determines which parts of the log the pattern-matching rule should be applied to.

Option 1

object

Includes only specific fields for sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field names to which the scope rule applies.

target [required]

enum

Applies the rule only to included fields. Allowed enum values: include

Option 2

object

Excludes specific fields from sensitive data scanning.

options [required]

object

Fields to which the scope rule applies.

fields [required]

[string]

A list of log field names to which the scope rule applies.

target [required]

enum

Excludes specific fields from processing. Allowed enum values: exclude

Option 3

object

Applies scanning across all available fields.

target [required]

enum

Applies the rule to all fields. Allowed enum values: all

tags [required]

[string]

Tags assigned to this rule for filtering and classification.

type [required]

enum

The processor type. The value should always be sensitive_data_scanner. Allowed enum values: sensitive_data_scanner

default: sensitive_data_scanner
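
Putting the rule, pattern, scope, and action pieces together, a hypothetical sensitive_data_scanner processor with one custom-regex rule that redacts matches across all fields might look like this; the regex, tags, and IDs are placeholders:

{
  "id": "sds-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "rules": [
    {
      "name": "redact-api-keys",
      "tags": [
        "sensitive:api_key"
      ],
      "pattern": {
        "type": "custom",
        "options": {
          "rule": "api_key=[A-Za-z0-9]{32}"
        }
      },
      "scope": {
        "target": "all"
      },
      "on_match": {
        "action": "redact",
        "options": {
          "replace": "[REDACTED]"
        }
      }
    }
  ],
  "type": "sensitive_data_scanner"
}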

Option 11

object

The ocsf_mapper processor transforms logs into the OCSF schema using a predefined mapping configuration.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mappings [required]

[object]

A list of mapping rules to convert events to the OCSF format.

include [required]

string

A Datadog search query used to select the logs that this mapping should apply to.

mapping [required]

 <oneOf>

Defines a single mapping rule for transforming logs into the OCSF schema.

Option 1

enum

Predefined library mappings for common log formats. Allowed enum values: CloudTrail Account Change,GCP Cloud Audit CreateBucket,GCP Cloud Audit CreateSink,GCP Cloud Audit SetIamPolicy,GCP Cloud Audit UpdateSink,Github Audit Log API Activity,Google Workspace Admin Audit addPrivilege,Microsoft 365 Defender Incident,Microsoft 365 Defender UserLoggedIn,Okta System Log Authentication,Palo Alto Networks Firewall Traffic

type [required]

enum

The processor type. The value should always be ocsf_mapper. Allowed enum values: ocsf_mapper

default: ocsf_mapper
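
An illustrative ocsf_mapper fragment that applies one of the library mappings listed above to an assumed CloudTrail log stream; the query and ID are placeholders:

{
  "id": "ocsf-mapper-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "mappings": [
    {
      "include": "source:cloudtrail",
      "mapping": "CloudTrail Account Change"
    }
  ],
  "type": "ocsf_mapper"
}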

Option 12

object

The add_env_vars processor adds environment variable values to log events.

id [required]

string

The unique identifier for this component. Used to reference this processor in the pipeline.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

type [required]

enum

The processor type. The value should always be add_env_vars. Allowed enum values: add_env_vars

default: add_env_vars

variables [required]

[object]

A list of environment variable mappings to apply to log fields.

field [required]

string

The target field in the log event.

name [required]

string

The name of the environment variable to read.
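
A sketch of an add_env_vars processor that copies an assumed AWS_REGION environment variable into a log field; the field and variable names are placeholders:

{
  "id": "add-env-vars-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "variables": [
    {
      "field": "deployment.region",
      "name": "AWS_REGION"
    }
  ],
  "type": "add_env_vars"
}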

Option 13

object

The dedupe processor removes duplicate fields in log events.

fields [required]

[string]

A list of log field paths to check for duplicates.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

mode [required]

enum

The deduplication mode to apply to the fields. Allowed enum values: match,ignore

type [required]

enum

The processor type. The value should always be dedupe. Allowed enum values: dedupe

default: dedupe
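
For example, a dedupe processor configured to match on two assumed field paths could be written as follows; IDs and field names are placeholders:

{
  "id": "dedupe-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "fields": [
    "host",
    "message"
  ],
  "mode": "match",
  "type": "dedupe"
}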

Option 14

object

The enrichment_table processor enriches logs using a static CSV file or GeoIP database.

file

object

Defines a static enrichment table loaded from a CSV file.

encoding [required]

object

File encoding format.

delimiter [required]

string

The delimiter used to separate fields in the file.

includes_headers [required]

boolean

Indicates whether the file includes a header row.

type [required]

enum

Specifies the encoding format (e.g., CSV) used for enrichment tables. Allowed enum values: csv

key [required]

[object]

Key fields used to look up enrichment values.

column [required]

string

The name of the CSV column used for the lookup.

comparison [required]

enum

Defines how to compare key fields for enrichment table lookups. Allowed enum values: equals

field [required]

string

The log field whose value is compared against the column.

path [required]

string

Path to the CSV file.

schema [required]

[object]

Schema defining column names and their types.

column [required]

string

The name of the column in the CSV file.

type [required]

enum

Declares allowed data types for enrichment table columns. Allowed enum values: string,boolean,integer,float,date,timestamp

geoip

object

Uses a GeoIP database to enrich logs based on an IP field.

key_field [required]

string

Path to the IP field in the log.

locale [required]

string

Locale used to resolve geographical names.

path [required]

string

Path to the GeoIP database file.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

target [required]

string

Path where enrichment results should be stored in the log.

type [required]

enum

The processor type. The value should always be enrichment_table. Allowed enum values: enrichment_table

default: enrichment_table
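
Combining the CSV-related fields above, a hypothetical enrichment_table processor backed by a local CSV file might look like this; the file path, column names, and lookup field are placeholders, and a geoip block could be used instead of file:

{
  "id": "enrichment-table-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "file": {
    "path": "/etc/enrichment/services.csv",
    "encoding": {
      "type": "csv",
      "delimiter": ",",
      "includes_headers": true
    },
    "key": [
      {
        "column": "service_name",
        "comparison": "equals",
        "field": "service"
      }
    ],
    "schema": [
      {
        "column": "service_name",
        "type": "string"
      },
      {
        "column": "owner",
        "type": "string"
      }
    ]
  },
  "target": "service_metadata",
  "type": "enrichment_table"
}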

Option 15

object

The reduce processor aggregates and merges logs based on matching keys and merge strategies.

group_by [required]

[string]

A list of fields used to group log events for merging.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

merge_strategies [required]

[object]

List of merge strategies defining how values from grouped events should be combined.

path [required]

string

The field path in the log event.

strategy [required]

enum

The merge strategy to apply. Allowed enum values: discard,retain,sum,max,min,array,concat,concat_newline,concat_raw,shortest_array,longest_array,flat_unique

type [required]

enum

The processor type. The value should always be reduce. Allowed enum values: reduce

default: reduce
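
As an illustration, a reduce processor that groups logs by an assumed request_id field and concatenates their messages might be configured like this; all names are placeholders:

{
  "id": "reduce-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "group_by": [
    "request_id"
  ],
  "merge_strategies": [
    {
      "path": "message",
      "strategy": "concat_newline"
    }
  ],
  "type": "reduce"
}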

Option 16

object

The throttle processor limits the number of events that pass through over a given time window.

group_by

[string]

Optional list of fields used to group events before the threshold is applied.

id [required]

string

The unique identifier for this processor.

include [required]

string

A Datadog search query used to determine which logs this processor targets.

inputs [required]

[string]

A list of component IDs whose output is used as the input for this processor.

threshold [required]

int64

The number of events allowed in a given time window. Events sent after the threshold has been reached are dropped.

type [required]

enum

The processor type. The value should always be throttle. Allowed enum values: throttle

default: throttle

window [required]

double

The time window in seconds over which the threshold applies.
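
For instance, a throttle processor allowing at most 1000 events per 60-second window per assumed service value could look like this; the numbers and IDs are placeholders:

{
  "id": "throttle-processor",
  "include": "*",
  "inputs": [
    "datadog-agent-source"
  ],
  "group_by": [
    "service"
  ],
  "threshold": 1000,
  "window": 60.0,
  "type": "throttle"
}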

sources [required]

[ <oneOf>]

A list of configured data sources for the pipeline.

Option 1

object

The kafka source ingests data from Apache Kafka topics.

group_id [required]

string

Consumer group ID used by the Kafka client.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

librdkafka_options

[object]

Optional list of advanced Kafka client configuration options, defined as key-value pairs.

name [required]

string

The name of the librdkafka configuration option to set.

value [required]

string

The value assigned to the specified librdkafka configuration option.

sasl

object

Specifies the SASL mechanism for authenticating with a Kafka cluster.

mechanism

enum

SASL mechanism used for Kafka authentication. Allowed enum values: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

topics [required]

[string]

A list of Kafka topic names to subscribe to. The source ingests messages from each topic specified.

type [required]

enum

The source type. The value should always be kafka. Allowed enum values: kafka

default: kafka
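
Bringing the Kafka-specific fields together, a hypothetical kafka source sketch might look like the following; the topics, certificate paths, and the librdkafka option are placeholders, with fetch.message.max.bytes shown only as an example of a key-value option:

{
  "id": "kafka-source",
  "group_id": "observability-pipelines",
  "topics": [
    "app-logs"
  ],
  "sasl": {
    "mechanism": "SCRAM-SHA-256"
  },
  "tls": {
    "crt_file": "/etc/certs/client.crt",
    "key_file": "/etc/certs/client.key",
    "ca_file": "/etc/certs/ca.crt"
  },
  "librdkafka_options": [
    {
      "name": "fetch.message.max.bytes",
      "value": "1048576"
    }
  ],
  "type": "kafka"
}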

Option 2

object

The datadog_agent source collects logs from the Datadog Agent.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be datadog_agent. Allowed enum values: datadog_agent

default: datadog_agent

Option 3

object

The splunk_tcp source receives logs from a Splunk Universal Forwarder over TCP. TLS is supported for secure transmission.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_tcp. Allowed enum values: splunk_tcp

default: splunk_tcp

Option 4

object

The splunk_hec source implements the Splunk HTTP Event Collector (HEC) API.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always splunk_hec. Allowed enum values: splunk_hec

default: splunk_hec

Option 5

object

The amazon_s3 source ingests logs from an Amazon S3 bucket. It supports AWS authentication and TLS encryption.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

region [required]

string

AWS region where the S3 bucket resides.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. Always amazon_s3. Allowed enum values: amazon_s3

default: amazon_s3

Option 6

object

The fluentd source ingests logs from a Fluentd-compatible service.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluentd. Allowed enum values: fluentd

default: fluentd

Option 7

object

The fluent_bit source ingests logs from Fluent Bit.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (for example, as the input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be fluent_bit. Allowed enum values: fluent_bit

default: fluent_bit

Option 8

object

The http_server source collects logs over HTTP POST from external services.

auth_strategy [required]

enum

HTTP authentication method. Allowed enum values: none,plain

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

Unique ID for the HTTP server source.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_server. Allowed enum values: http_server

default: http_server
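
A minimal, assumed http_server source fragment using the enums documented above; the ID and certificate paths are placeholders:

{
  "id": "http-server-source",
  "auth_strategy": "plain",
  "decoding": "json",
  "tls": {
    "crt_file": "/etc/certs/server.crt",
    "key_file": "/etc/certs/server.key"
  },
  "type": "http_server"
}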

Option 9

object

The sumo_logic source receives logs from Sumo Logic collectors.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

type [required]

enum

The source type. The value should always be sumo_logic. Allowed enum values: sumo_logic

default: sumo_logic

Option 10

object

The rsyslog source listens for logs over TCP or UDP from an rsyslog server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be rsyslog. Allowed enum values: rsyslog

default: rsyslog

Option 11

object

The syslog_ng source listens for logs over TCP or UDP from a syslog-ng server using the syslog protocol.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

mode [required]

enum

Protocol used by the syslog source to receive messages. Allowed enum values: tcp,udp

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be syslog_ng. Allowed enum values: syslog_ng

default: syslog_ng

Option 12

object

The amazon_data_firehose source ingests logs from AWS Data Firehose.

auth

object

AWS authentication credentials used for accessing AWS services such as S3. If omitted, the system’s default credentials are used (for example, the IAM role and environment variables).

assume_role

string

The Amazon Resource Name (ARN) of the role to assume.

external_id

string

A unique identifier for cross-account role assumption.

session_name

string

A session identifier used for logging and tracing the assumed role session.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be amazon_data_firehose. Allowed enum values: amazon_data_firehose

default: amazon_data_firehose

Option 13

object

The google_pubsub source ingests logs from a Google Cloud Pub/Sub subscription.

auth [required]

object

GCP credentials used to authenticate with Google Cloud services such as Pub/Sub.

credentials_file [required]

string

Path to the GCP service account key file.

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

project [required]

string

The GCP project ID that owns the Pub/Sub subscription.

subscription [required]

string

The Pub/Sub subscription name from which messages are consumed.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be google_pubsub. Allowed enum values: google_pubsub

default: google_pubsub
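
As a sketch, a google_pubsub source reading JSON messages from an assumed subscription might be configured like this; the project, subscription, and credentials path are placeholders:

{
  "id": "google-pubsub-source",
  "auth": {
    "credentials_file": "/var/secrets/gcp/credentials.json"
  },
  "project": "my-gcp-project",
  "subscription": "op-logs-subscription",
  "decoding": "json",
  "type": "google_pubsub"
}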

Option 14

object

The http_client source scrapes logs from HTTP endpoints at regular intervals.

auth_strategy

enum

Optional authentication strategy for HTTP requests. Allowed enum values: basic,bearer

decoding [required]

enum

The decoding format used to interpret incoming logs. Allowed enum values: bytes,gelf,json,syslog

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

scrape_interval_secs

int64

The interval (in seconds) between HTTP scrape requests.

scrape_timeout_secs

int64

The timeout (in seconds) for each scrape request.

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be http_client. Allowed enum values: http_client

default: http_client

Option 15

object

The logstash source ingests logs from a Logstash forwarder.

id [required]

string

The unique identifier for this component. Used to reference this component in other parts of the pipeline (e.g., as input to downstream components).

tls

object

Configuration for enabling TLS encryption between the pipeline component and external services.

ca_file

string

Path to the Certificate Authority (CA) file used to validate the server’s TLS certificate.

crt_file [required]

string

Path to the TLS client certificate file used to authenticate the pipeline component with upstream or downstream services.

key_file

string

Path to the private key file associated with the TLS client certificate. Used for mutual TLS authentication.

type [required]

enum

The source type. The value should always be logstash. Allowed enum values: logstash

default: logstash

name [required]

string

Name of the pipeline.

type [required]

string

The resource type identifier. For pipeline resources, this should always be set to pipelines.

default: pipelines

{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}

Response

OK

Response containing validation errors.

Field

Type

Description

errors

[object]

The ValidationResponse errors.

meta [required]

object

Describes additional metadata for validation errors, including field names and error messages.

field

string

The field name that caused the error.

id

string

The ID of the component in which the error occurred.

message [required]

string

The detailed error message.

title [required]

string

A short, human-readable summary of the error.

{
  "errors": [
    {
      "meta": {
        "field": "region",
        "id": "datadog-agent-source",
        "message": "Field 'region' is required"
      },
      "title": "Field 'region' is required"
    }
  ]
}

Bad Request

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Not Authorized

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Too many requests

API error response.

Field

Type

Description

errors [required]

[string]

A list of errors.

{
  "errors": [
    "Bad Request"
  ]
}

Code example

# Curl command
# Replace the base URL with the endpoint for your Datadog site
# (for example, api.us3.datadoghq.com, api.us5.datadoghq.com, api.ap1.datadoghq.com, api.datadoghq.eu, or api.ddog-gov.com).
curl -X POST "https://api.datadoghq.com/api/v2/remote_config/products/obs_pipelines/pipelines/validate" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d @- << EOF
{
  "data": {
    "attributes": {
      "config": {
        "destinations": [
          {
            "id": "datadog-logs-destination",
            "inputs": [
              "filter-processor"
            ],
            "type": "datadog_logs"
          }
        ],
        "processors": [
          {
            "id": "filter-processor",
            "include": "service:my-service",
            "inputs": [
              "datadog-agent-source"
            ],
            "type": "filter"
          }
        ],
        "sources": [
          {
            "id": "datadog-agent-source",
            "type": "datadog_agent"
          }
        ]
      },
      "name": "Main Observability Pipeline"
    },
    "type": "pipelines"
  }
}
EOF
