Observability Pipelines is not available on the US1-FED Datadog site.
A sink is a destination for events. Each sink's design and transmission method is determined by the downstream service with which it interacts. For example, the socket sink streams individual events, while the aws_s3 sink buffers and flushes data.
Supports AMQP version 0.9.1
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
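For example, a minimal sketch that enables end-to-end acknowledgements for one sink (the sink name my_sink is a placeholder):

sinks:
  my_sink:
    # Sink-level setting; overrides any global acknowledgements configuration.
    acknowledgements:
      enabled: true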
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric. That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
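As an illustrative sketch (the field names are hypothetical), the CSV options above might be combined as follows:

encoding:
  codec: csv
  csv:
    # Fields are written in the order listed; a missing field becomes an empty string.
    fields:
      - timestamp
      - host
      - message
    delimiter: ','
    # Quote only when a field contains a quote, delimiter, or record terminator.
    quote_style: necessary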
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
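For instance, a brief sketch opting into the full behavior for the JSON codec:

encoding:
  codec: json
  # Expose all values of multi-value tags as arrays instead of only the last value.
  metric_tag_values: full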
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
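A short sketch combining these transformations (the field name internal_debug is a placeholder):

encoding:
  codec: json
  # Drop this field from the encoded event.
  except_fields:
    - internal_debug
  # Write timestamps as RFC 3339 strings rather than Unix timestamps.
  timestamp_format: rfc3339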
A templated field.
The exchange to publish messages to.
Configure the AMQP message properties.
AMQP message properties.
AMQP properties configuration.
Content-Encoding for the AMQP messages.
Content-Type for the AMQP messages.
A templated field.
Template used to generate a routing key which corresponds to a queue binding.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, a sink that writes events to a file can let you specify which file each event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is replaced by the key's value when the template is rendered into a string.
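As a hedged illustration, a routing key templated on an event field (severity is a hypothetical field name):

# For an event with severity set to error, this renders to logs.error.
routing_key: 'logs.{{severity}}'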
URI for the AMQP server.
The URI has the format of amqp://<user>:<password>@<host>:<port>/<vhost>?timeout=<seconds>.
The default vhost can be specified by using a value of %2f.
To connect over TLS, the amqps scheme can be specified instead, for example amqps://.... Additional TLS settings, such as client certificate verification, can be configured under the tls section.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
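For example, a sketch of the outbound TLS settings described above, assuming the conventional option names for certificate and hostname verification (all paths are placeholders):

tls:
  enabled: true
  ca_file: /etc/ssl/certs/private-ca.crt
  crt_file: /etc/ssl/certs/client.crt
  # Required here because crt_file is not a PKCS#12 archive.
  key_file: /etc/ssl/private/client.key
  verify_certificate: true
  verify_hostname: true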
acknowledgements:
enabled: null
encoding: ''
exchange: string
properties: ''
routing_key: ''
type: amqp
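A hedged, filled-in version of the skeleton above (the host, credentials, exchange, and the connection_string option name are assumptions for illustration):

sinks:
  rabbitmq_out:
    type: amqp
    connection_string: 'amqp://user:pass@rabbitmq.example.com:5672/%2f?timeout=10'
    exchange: vector-logs
    routing_key: 'logs.{{severity}}'
    encoding:
      codec: json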
Configuration for the appsignal sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The URI for the AppSignal API to send data to.
A valid app-level AppSignal Push API key.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
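A hedged sketch of overriding a few of these request settings (the values are illustrative, not recommendations):

request:
  adaptive_concurrency:
    # Scale back more aggressively when latency increases.
    decrease_ratio: 0.8
  # Allow at most 100 requests per one-second window.
  rate_limit_duration_secs: 1
  rate_limit_num: 100
  # Cap the wait between retries at ten minutes.
  retry_max_duration_secs: 600
  timeout_secs: 60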
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: gzip
encoding: {}
endpoint: 'https://appsignal-endpoint.net'
push_api_key: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: appsignal
Configuration for the aws_cloudwatch_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Dynamically create a log group if it does not already exist.
This ignores create_missing_stream directly after creating the group and creates the first stream.
Dynamically create a log stream if it does not already exist.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric. That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The group name of the target CloudWatch Logs stream.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
The stream name of the target CloudWatch Logs stream.
There can only be one writer to a log stream at a time. If multiple instances are writing to
the same log group, the stream name must include an identifier that is guaranteed to be
unique per instance.
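For instance, a hedged sketch that keeps stream names unique per instance by templating an event field (this assumes stream_name accepts template syntax and that host is distinct for each writer):

group_name: /my/application/logs
stream_name: 'pipeline-{{ host }}'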
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
acknowledgements:
enabled: null
assume_role: string
auth:
imds:
connect_timeout_seconds: 1
max_attempts: 4
read_timeout_seconds: 1
load_timeout_secs: null
region: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
create_missing_group: true
create_missing_stream: true
encoding: ''
group_name: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
headers: {}
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
stream_name: string
tls: ''
type: aws_cloudwatch_logs
Configuration for the aws_cloudwatch_metrics sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The default namespace to use for metrics that do not have one.
Metrics with the same name can only be differentiated by their namespace, and not all
metrics have their own namespace.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
acknowledgements:
enabled: null
assume_role: string
auth:
imds:
connect_timeout_seconds: 1
max_attempts: 4
read_timeout_seconds: 1
load_timeout_secs: null
region: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
default_namespace: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: aws_cloudwatch_metrics
Configuration for the aws_kinesis_firehose sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric. That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Whether or not to retry successful requests containing partial failures.
The stream name of the target Kinesis Firehose delivery stream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Custom endpoint for use with AWS-compatible services.
batch:
max_bytes: null
max_events: null
timeout_secs: null
type: aws_kinesis_firehose
Configuration for the aws_kinesis_streams sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The log field used as the Kinesis record's partition key value.
If not specified, a unique partition key is generated for each Kinesis record.
A wrapper around OwnedValuePath that allows it to be used in Vector config.
This requires a valid path to be used. If you want to allow optional paths, use [optional_path::OptionalValuePath].
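As a brief sketch, partitioning Kinesis records by an event field (user_id is a hypothetical field):

# Events with the same user_id value share a partition key,
# and therefore land on the same shard.
partition_key_field: user_id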
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes must be disabled; otherwise, this option is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric. That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Whether or not to retry successful requests containing partial failures.
The stream name of the target Kinesis stream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
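As a sketch, these options nest under a tls table. The option names shown here (alpn_protocols, ca_file, crt_file, key_pass, verify_certificate, verify_hostname) are assumed to follow the same naming convention as the key_file option referenced above, and all paths and values are illustrative:

tls:
  alpn_protocols:
    - h2
    - http/1.1
  ca_file: /etc/ssl/certs/extra-ca.pem   # additional CA certificate (DER or PEM)
  crt_file: /etc/ssl/private/client.pem  # identifies this end of the connection
  key_file: /etc/ssl/private/client.key  # required because crt_file is not PKCS#12
  key_pass: ${KEY_PASSPHRASE}            # only needed if key_file is encrypted
  verify_certificate: true               # keep enabled unless you accept the risk
  verify_hostname: true                  # outgoing connections only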
Custom endpoint for use with AWS-compatible services.
batch:
max_bytes: null
max_events: null
timeout_secs: null
partition_key_field: ''
type: aws_kinesis_streams
Configuration for the aws_s3 sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The S3 bucket name.
This must not include a leading s3:// or a trailing /.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Some cloud storage API clients and browsers handle decompression transparently, so
depending on how they are accessed, files may not always appear to be compressed.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Whether or not to append a UUID v4 token to the end of the object key.
The UUID is appended to the timestamp portion of the object key, such that if the object key generated is date=2022-07-18/1658176486, setting this field to true results in an object key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where object keys must be unique.
The filename extension to use in the object key.
This overrides setting the extension based on the configured compression.
The timestamp format for the time component of the object key.
By default, object keys are appended with a timestamp that reflects when the objects are sent to S3, such that the resulting object key is functionally equivalent to joining the key prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a key_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filename_time_format being set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the key prefix.
A prefix to apply to all object keys.
Prefixes are useful for partitioning objects, such as by creating an object key that stores objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
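Putting the object-key options above together, a sketch that reproduces the documented example key; the .log extension is an illustrative override (by default the extension follows the configured compression):

key_prefix: date=%F/        # renders as date=2022-07-18/
filename_time_format: '%s'  # renders as 1658176486 (seconds since the Unix epoch)
filename_append_uuid: true  # appends -30f6652c-71da-4f9f-800d-a1189c47c547
filename_extension: log     # illustrative override
# Resulting object key:
# date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.log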
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Canned ACL to apply to the created objects.
For more information, see Canned ACL.
S3 Canned ACLs.
For more information, see Canned ACL.
Bucket/object are private.
The bucket/object owner is granted the FULL_CONTROL permission, and no one else has access.
This is the default.
Bucket/object can be read publicly.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ permission.
Bucket/object can be read and written publicly.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AllUsers grantee group is granted the READ and WRITE permissions.
This is generally not recommended.
Bucket/object are private, and readable by EC2.
The bucket/object owner is granted the FULL_CONTROL permission, and the AWS EC2 service is granted the READ permission for the purpose of reading Amazon Machine Image (AMI) bundles from the given bucket.
Bucket/object can be read by authenticated users.
The bucket/object owner is granted the FULL_CONTROL permission, and anyone in the AuthenticatedUsers grantee group is granted the READ permission.
Object is private, except to the bucket owner.
The object owner is granted the FULL_CONTROL permission, and the bucket owner is granted the READ permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
bucket-owner-full-control
Object is semi-private.
Both the object owner and bucket owner are granted the FULL_CONTROL permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when specified for a bucket.
Bucket can have logs written.
The LogDelivery grantee group is granted WRITE and READ_ACP permissions.
Only relevant when specified for a bucket: this canned ACL is otherwise ignored when specified for an object.
For more information about logs, see Amazon S3 Server Access Logging.
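A sketch of selecting one of the canned ACLs above, assuming the sink exposes them under an acl option with the usual kebab-case S3 names:

acl: bucket-owner-full-control
# Other values: private, public-read, public-read-write, aws-exec-read,
# authenticated-read, bucket-owner-read, log-delivery-write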
Overrides what content encoding has been applied to the object.
Directly comparable to the Content-Encoding HTTP header.
If not specified, the compression scheme used dictates this value.
Overrides the MIME type of the object.
Directly comparable to the Content-Type HTTP header.
If not specified, the compression scheme used dictates this value.
When compression is set to none, the value text/x-log is used.
Grants READ, READ_ACP, and WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata, as well as read and modify the ACL on the created objects.
Grants READ permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata.
Grants READ_ACP permissions on the created objects to the named grantee.
This allows the grantee to read the ACL on the created objects.
Grants WRITE_ACP permissions on the created objects to the named grantee.
This allows the grantee to modify the ACL on the created objects.
AWS S3 Server-Side Encryption algorithms.
The Server-side Encryption algorithm used when storing these objects.
AWS S3 Server-Side Encryption algorithms.
More information on each algorithm can be found in the AWS documentation.
Each object is encrypted with AES-256 using a unique key.
This corresponds to the SSE-S3 option.
Each object is encrypted with AES-256 using keys managed by AWS KMS.
Depending on whether or not a KMS key ID is specified, this corresponds either to the SSE-KMS option (keys generated/managed by KMS) or the SSE-C option (keys generated by the customer, managed by KMS).
Specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that is used for the created objects.
Only applies when server_side_encryption is configured to use KMS.
If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
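A sketch of the KMS variant described above; the ssekms_key_id option name and the key ID are illustrative assumptions:

server_side_encryption: aws:kms                      # SSE-KMS; use AES256 for SSE-S3
ssekms_key_id: 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d  # illustrative CMK ID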
Infrequently Accessed (single Availability Zone).
Glacier Flexible Retrieval.
The tag-set for the object.
Custom endpoint for use with AWS-compatible services.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
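A sketch of a CSV encoding block tying these options together; the field names are illustrative, and quote_style is assumed to take snake_case forms of the styles described above:

encoding:
  codec: csv
  csv:
    fields:            # required; also fixes the column order
      - timestamp
      - host
      - message
    delimiter: ','
    quote_style: necessary  # always | necessary | non_numeric | never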
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
acknowledgements:
enabled: null
auth:
imds:
connect_timeout_seconds: 1
max_attempts: 4
read_timeout_seconds: 1
load_timeout_secs: null
region: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
bucket: string
compression: gzip
filename_append_uuid: true
filename_extension: string
filename_time_format: '%s'
key_prefix: date=%F
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: aws_s3
Configuration for the aws_sqs sink.
The URL of the Amazon SQS queue to which messages are sent.
Custom endpoint for use with AWS-compatible services.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for interacting with AWS services.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The message deduplication ID value to allow AWS to identify duplicate messages.
This value is a template which should result in a unique string for each event. See the AWS documentation for more about how AWS does message deduplication.
The tag that specifies that a message belongs to a specific message group.
Can be applied only to FIFO queues.
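A sketch for a FIFO queue, combining the two options above; the queue URL and the transaction_id event field are illustrative:

queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo
message_group_id: my-group
message_deduplication_id: '{{ transaction_id }}'  # template rendered per event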
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
queue_url: string
type: aws_sqs
Configuration for the axiom sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The Axiom dataset to write to.
The Axiom organization ID.
Only required when using personal tokens.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
URI of the Axiom endpoint to send data to.
Only required if not using Axiom Cloud.
acknowledgements:
enabled: null
compression: none
dataset: string
org_id: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
headers: {}
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
token: string
url: string
type: axiom
Configuration for the azure_blob sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Whether or not to append a UUID v4 token to the end of the blob key.
The UUID is appended to the timestamp portion of the blob key, such that if the blob key generated is date=2022-07-18/1658176486, setting this field to true results in a blob key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where blob keys must be unique.
A prefix to apply to all blob keys.
Prefixes are useful for partitioning objects, such as by creating a blob key that stores blobs under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
The timestamp format for the time component of the blob key.
By default, blob keys are appended with a timestamp that reflects when the blobs are sent to Azure Blob Storage, such that the resulting blob key is functionally equivalent to joining the blob prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a blob_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022 20:34:44 GMT+0000, with the filename_time_format being set to %s, which renders timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the blob prefix.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
The Azure Blob Storage Account connection string.
Authentication with access key is the only supported authentication method.
Either storage_account, or this field, must be specified.
Wrapper for sensitive strings containing credentials.
The Azure Blob Storage Account container name.
The Azure Blob Storage Endpoint URL.
This is used to override the default blob storage endpoint URL in cases where you are using credentials read from the environment/managed identities or access tokens without using an explicit connection_string (which already explicitly supports overriding the blob endpoint URL).
This may only be used with storage_account and is ignored when used with connection_string.
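A sketch of the two mutually exclusive authentication shapes described above; the account name, container, and key are illustrative:

# Option 1: access-key authentication via a connection string
connection_string: DefaultEndpointsProtocol=https;AccountName=mylogs;AccountKey=${AZURE_ACCESS_KEY};EndpointSuffix=core.windows.net
container_name: logs
# Option 2: storage account, with credentials read from the environment or managed identity
# storage_account: mylogs
# endpoint: https://mylogs.blob.core.windows.net  # optional override; ignored with connection_string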
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The Azure Blob Storage Account name.
Attempts to load credentials for the account in the following ways, in order:
Either connection_string, or this field, must be specified.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
blob_append_uuid: boolean
blob_prefix: blob/%F/
blob_time_format: string
compression: gzip
connection_string: ''
container_name: string
endpoint: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
storage_account: string
type: azure_blob
Configuration for the azure_monitor_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The Resource ID of the Azure resource the data should be associated with.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The record type of the data that is being submitted.
Can only contain letters, numbers, and underscores (_), and may not exceed 100 characters.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Use this option to customize the log field used as TimeGenerated
in Azure.
The setting of log_schema.timestamp_key
, usually timestamp
, is used here by default.
This field should be used in rare cases where TimeGenerated
should point to a specific log
field. For example, use this field to set the log field source_timestamp
as holding the
value that should be used as TimeGenerated
on the Azure side.
An optional path that deserializes an empty string to None
.
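Following the example above, a sketch of the override (assuming the event carries a source_timestamp field):

time_generated_key: source_timestamp  # TimeGenerated in Azure is taken from this field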
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
azure_resource_id: string
batch:
max_bytes: null
max_events: null
timeout_secs: null
customer_id: string
encoding: {}
host: ods.opinsights.azure.com
log_type: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
shared_key: string
time_generated_key: ''
tls: ''
type: azure_monitor_logs
Configuration for the blackhole sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The interval between reporting a summary of activity.
Set to 0 to disable reporting.
The number of events, per second, that the sink is allowed to consume.
By default, there is no limit.
acknowledgements:
  enabled: null
print_interval_secs: 1
rate: integer
type: blackhole
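As a concrete example, a blackhole sink that prints an activity summary every 10 seconds and consumes at most 1000 events per second (component names are illustrative):

sinks:
  throughput_test:
    type: blackhole
    inputs: [app_logs]        # hypothetical upstream component
    print_interval_secs: 10   # summary every 10 seconds; 0 disables reporting
    rate: 1000                # allow at most 1000 events per second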
Configuration for the clickhouse
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication password.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
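For example, sketches of both strategies as they might appear under a sink's auth option; the strategy/user/password/token key names follow Vector's HTTP auth schema, and all credentials are placeholders:

auth:
  strategy: basic
  user: vector_user               # placeholder
  password: "${HTTP_PASSWORD}"    # placeholder; sent base64-encoded, so use HTTPS
# or:
auth:
  strategy: bearer
  token: "${HTTP_BEARER_TOKEN}"   # placeholder; passed as-is in the Authorization header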
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
A templated field.
The database that contains the table that data is inserted into.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be
customized on a per-event basis. For example, you may have a sink that writes events to a file and want to
specify which file an event should go to by using an event field as part of the
input to the filename used.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to
refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string
is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the
template is rendered into a string.
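For example, assuming incoming events carry an application field, the clickhouse sink's table option could be templated per event (names are illustrative):

database: default
table: "logs_{{ application }}"   # an event with application = "web" is written to logs_web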
Sets date_time_input_format
to best_effort
, allowing ClickHouse to properly parse RFC3339/ISO 8601.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The URI component of a request.
The endpoint of the ClickHouse server.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Sets input_format_skip_unknown_fields
, allowing ClickHouse to discard fields not present in the table schema.
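As an illustration of the request options above, a hedged sketch of tuning the clickhouse sink for a service that ramps up slowly after restarts; the values are illustrative, not recommendations:

request:
  adaptive_concurrency:
    initial_concurrency: 8   # start near the service's observed average concurrency limit
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 500        # at most 500 requests per 1-second window
  timeout_secs: 60           # keep at or above the service's internal timeout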
A templated field.
The table that data is inserted into.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: gzip
database: ''
date_time_best_effort: boolean
encoding: {}
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
skip_unknown_fields: boolean
table: string
tls: ''
type: clickhouse
Configuration for the console
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes
needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array
, Object
, and Regex
are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
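Putting these options together, a sketch of a CSV encoding block; the nesting of the serializer options under encoding.csv is an assumption based on Vector's encoding schema, and the field names are illustrative:

encoding:
  codec: csv
  csv:
    fields: [timestamp, host, message]   # output column order; missing fields become empty strings
    quote_style: necessary               # quote only when a field contains a quote, delimiter, or record terminator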
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single
, only the last non-bare value of tags are displayed with the
metric. When set to full
, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single
, only the last non-bare value of tags are displayed with the
metric. When set to full
, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
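For example, a console sink that writes each event as one JSON object per line; newline-delimited framing is the usual choice for stdout (the framing.method key name is assumed from Vector's framing schema):

sinks:
  debug_out:
    type: console
    inputs: [app_logs]      # hypothetical upstream component
    target: stdout
    encoding:
      codec: json
    framing:
      method: newline_delimited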
acknowledgements:
  enabled: null
target: stdout
type: console
Configuration for the databend
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication password.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
The database that contains the table that data is inserted into.
Configures how events are encoded into raw bytes.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes
needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array
, Object
, and Regex
are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single
, only the last non-bare value of tags are displayed with the
metric. When set to full
, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The URI component of a request.
The endpoint of the Databend server.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The table that data is inserted into.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
  enabled: null
auth: ''
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: none
database: default
encoding:
  codec: json
endpoint: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
table: string
tls: ''
type: databend
Configuration for the datadog_archives
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
S3-specific configuration options.
S3-specific configuration options.
Configuration of the authentication strategy for interacting with AWS services.
Canned ACL to apply to the created objects.
For more information, see Canned ACL.
S3 Canned ACLs.
For more information, see Canned ACL.
Bucket/object are private.
The bucket/object owner is granted the FULL_CONTROL
permission, and no one else has
access.
This is the default.
Bucket/object can be read publicly.
The bucket/object owner is granted the FULL_CONTROL
permission, and anyone in the
AllUsers
grantee group is granted the READ
permission.
Bucket/object can be read and written publicly.
The bucket/object owner is granted the FULL_CONTROL
permission, and anyone in the
AllUsers
grantee group is granted the READ
and WRITE
permissions.
This is generally not recommended.
Bucket/object are private, and readable by EC2.
The bucket/object owner is granted the FULL_CONTROL
permission, and the AWS EC2 service is
granted the READ
permission for the purpose of reading Amazon Machine Image (AMI) bundles
from the given bucket.
Bucket/object can be read by authenticated users.
The bucket/object owner is granted the FULL_CONTROL
permission, and anyone in the
AuthenticatedUsers
grantee group is granted the READ
permission.
Object is private, except to the bucket owner.
The object owner is granted the FULL_CONTROL
permission, and the bucket owner is granted the READ
permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when
specified for a bucket.
bucket-owner-full-control
bucket-owner-full-control
Object is semi-private.
Both the object owner and bucket owner are granted the FULL_CONTROL
permission.
Only relevant when specified for an object: this canned ACL is otherwise ignored when
specified for a bucket.
Bucket can have logs written.
The LogDelivery
grantee group is granted WRITE
and READ_ACP
permissions.
Only relevant when specified for a bucket: this canned ACL is otherwise ignored when
specified for an object.
For more information about logs, see Amazon S3 Server Access Logging.
Grants READ
, READ_ACP
, and WRITE_ACP
permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata, as well as read and
modify the ACL on the created objects.
Grants READ
permissions on the created objects to the named grantee.
This allows the grantee to read the created objects and their metadata.
Grants READ_ACP
permissions on the created objects to the named grantee.
This allows the grantee to read the ACL on the created objects.
Grants WRITE_ACP
permissions on the created objects to the named grantee.
This allows the grantee to modify the ACL on the created objects.
AWS S3 Server-Side Encryption algorithms.
The Server-side Encryption algorithm used when storing these objects.
AWS S3 Server-Side Encryption algorithms.
More information on each algorithm can be found in the AWS documentation.
Each object is encrypted with AES-256 using a unique key.
This corresponds to the SSE-S3
option.
Each object is encrypted with AES-256 using keys managed by AWS KMS.
Depending on whether or not a KMS key ID is specified, this corresponds either to the
SSE-KMS
option (keys generated/managed by KMS) or the SSE-C
option (keys generated by
the customer, managed by KMS).
Specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed
customer master key (CMK) that is used for the created objects.
Only applies when server_side_encryption
is configured to use KMS.
If not specified, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
Infrequently Accessed (single Availability zone).
Glacier Flexible Retrieval.
The tag-set for the object.
Custom endpoint for use with AWS-compatible services.
default: null
The AWS region of the target service.
default: null
ABS-specific configuration options.
ABS-specific configuration options.
The Azure Blob Storage Account connection string.
Authentication with access key is the only supported authentication method.
The name of the bucket to store the archives in.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
GCS-specific configuration options.
GCS-specific configuration options.
Bucket/object can be read by authenticated users.
The bucket/object owner is granted the OWNER
permission, and any authenticated Google
account holder is granted the READER
permission.
bucket-owner-full-control
bucket-owner-full-control
Object is semi-private.
Both the object owner and bucket owner are granted the OWNER
permission.
Only relevant when specified for an object: this predefined ACL is otherwise ignored when
specified for a bucket.
Object is private, except to the bucket owner.
The object owner is granted the OWNER
permission, and the bucket owner is granted the
READER
permission.
Only relevant when specified for an object: this predefined ACL is otherwise ignored when
specified for a bucket.
Bucket/object are private.
The bucket/object owner is granted the OWNER
permission, and no one else has
access.
Bucket/object are private within the project.
Project owners and project editors are granted the OWNER
permission, and anyone who is
part of the project team is granted the READER
permission.
This is the default.
Bucket/object can be read publicly.
The bucket/object owner is granted the OWNER
permission, and all other users, whether
authenticated or anonymous, are granted the READER
permission.
The set of metadata key:value
pairs for the created objects.
For more information, see Custom metadata.
Standard storage.
This is the default.
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is named, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must define it with an API key or service account
credentials JSON file.
Wrapper for sensitive strings containing credentials
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is named, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must define it with an API key or service account
credentials JSON file.
Skip all authentication handling. For use with integration tests only.
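A sketch of the two explicit authentication choices for the GCS destination; the paths and keys are placeholders, and omitting both falls back to GOOGLE_APPLICATION_CREDENTIALS or the instance service account as described above:

gcp_cloud_storage:
  credentials_path: /etc/vector/gcp-service-account.json   # placeholder path
# or, alternatively:
# gcp_cloud_storage:
#   api_key: "${GCP_API_KEY}"                              # placeholder key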
A prefix to apply to all object keys.
Prefixes are useful for partitioning objects, such as by creating an object key that
stores objects under a particular directory. If using a prefix for this purpose, it must end
in /
to act as a directory path. A trailing /
is not automatically added.
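For example, a prefix that stores objects under a directory (note the trailing slash, which is not added automatically):

key_prefix: "vector-archives/"   # objects are keyed as vector-archives/<object>, acting as a directory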
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The name of the object storage service to use.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
  enabled: null
aws_s3: ''
azure_blob: ''
bucket: string
encoding: {}
gcp_cloud_storage: ''
key_prefix: string
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
service: string
tls: ''
type: datadog_archives
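A hedged end-to-end sketch of this sink writing archives to S3; bucket and component names are illustrative, and the empty aws_s3 block stands in for the S3-specific options (ACL, encryption, storage class) documented above:

sinks:
  archives:
    type: datadog_archives
    inputs: [app_logs]            # hypothetical upstream component
    service: aws_s3               # which object storage backend to use
    bucket: my-log-archives       # placeholder bucket name
    key_prefix: "vector-archives/"
    aws_s3: {}                    # S3-specific options go here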
Configuration for the datadog_events
sink.
DEPRECATED: The Datadog region to send events to.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The default Datadog API key to use in authentication of HTTP requests.
If an event has a Datadog API key set explicitly in its metadata, it takes
precedence over this setting.
The endpoint to send observability data to.
The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
If set, overrides the site
option.
The Datadog site to send observability data to.
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file
for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_events
Configuration for the datadog_logs
sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
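For instance, compression can be given as a bare algorithm name or, assuming the object form implied by the algorithm-plus-level description above, with an explicit level:

compression: gzip
# or, with an explicit level:
compression:
  algorithm: gzip
  level: best   # one of the allowed enum values listed above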
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
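A sketch of these transformations on the sink's encoding option; the except_fields and timestamp_format key names are assumptions based on Vector's encoding schema, and the field names are illustrative:

encoding:
  except_fields: [password, internal_debug]   # drop these fields before serialization
  timestamp_format: rfc3339                   # or unix for Unix timestamps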
DEPRECATED: The Datadog region to send logs to.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
The maximum number of requests allowed within the rate_limit_duration_secs
time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The default Datadog API key to use in authentication of HTTP requests.
If an event has a Datadog API key set explicitly in its metadata, it takes
precedence over this setting.
The endpoint to send observability data to.
The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
If set, overrides the site
option.
The Datadog site to send observability data to.
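For example, a datadog_logs sink pointed at the EU site; the default_api_key option name is assumed from the description above, and the key itself is a placeholder. If endpoint were also set, it would override site:

sinks:
  dd_logs:
    type: datadog_logs
    inputs: [app_logs]                     # hypothetical upstream component
    default_api_key: "${DATADOG_API_KEY}"  # placeholder
    site: datadoghq.eu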
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file
for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
batch:
  max_bytes: null
  max_events: null
  timeout_secs: null
compression: ''
encoding: {}
region: ''
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 1
    rtt_deviation_scale: 2.5
  headers: {}
  rate_limit_duration_secs: 1
  rate_limit_num: 9223372036854776000
  retry_attempts: 9223372036854776000
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 3600
  timeout_secs: 60
type: datadog_logs
Configuration for the datadog_metrics
sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Sets the default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is
present, it is used as a prefix to the metric name, and separated with a period (.).
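For example, with the configuration below, a metric named http_requests_total that has no namespace would be reported as myapp.http_requests_total, while metrics that already carry a namespace are left unchanged:

sinks:
  dd_metrics:
    type: datadog_metrics
    inputs: [app_metrics]                  # hypothetical upstream component
    default_api_key: "${DATADOG_API_KEY}"  # placeholder
    default_namespace: myapp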
DEPRECATED: The Datadog region to send metrics to.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The default Datadog API key to use in authentication of HTTP requests.
If an event has a Datadog API key set explicitly in its metadata, it takes
precedence over this setting.
The endpoint to send observability data to.
The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
If set, overrides the site option.
The Datadog site to send observability data to.
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for
more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
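As an illustrative sketch of the TLS options above (the file paths and passphrase variable are hypothetical placeholders, not defaults):
tls:
  verify_certificate: true
  verify_hostname: true
  ca_file: /etc/ssl/certs/internal-ca.pem
  crt_file: /etc/ssl/certs/vector.pem
  key_file: /etc/ssl/private/vector.key
  key_pass: ${TLS_KEY_PASS}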
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_namespace: string
region: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
type: datadog_metrics
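As a sketch of tuning the request middleware described above, any of the defaults can be overridden per sink; the values here are illustrative only, not recommendations:
request:
  rate_limit_duration_secs: 1
  rate_limit_num: 100
  retry_attempts: 10
  timeout_secs: 30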
Configuration for the datadog_traces
sink.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
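As a hedged sketch, assuming this sink accepts both the shorthand string form and the algorithm/level object form used elsewhere in Vector-style configs:
compression: gzip
# or, with an explicit level:
compression:
  algorithm: gzip
  level: best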
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The default Datadog API key to use in authentication of HTTP requests.
If an event has a Datadog API key set explicitly in its metadata, it takes
precedence over this setting.
The endpoint to send observability data to.
The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
If set, overrides the site option.
The Datadog site to send observability data to.
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for
more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
type: datadog_traces
Configuration for the elasticsearch
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The API version of Elasticsearch.
Auto-detect the API version.
If the cluster state version endpoint isn't reachable, a warning is logged to
stdout, and the version is assumed to be V6 if the suppress_type_name option is set to
true. Otherwise, the version is assumed to be V8. In the future, the sink will instead
return an error during configuration parsing, since a wrongly assumed version could lead to
incorrect API calls.
Use the Elasticsearch 6.x API.
Use the Elasticsearch 7.x API.
Use the Elasticsearch 8.x API.
Elasticsearch Authentication strategies.
Elasticsearch Authentication strategies.
HTTP Basic Authentication.
Basic authentication password.
HTTP Basic Authentication.
Basic authentication username.
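A minimal sketch of HTTP Basic Authentication for this sink, assuming strategy, user, and password are the corresponding field names (credentials are placeholders):
auth:
  strategy: basic
  user: vector
  password: ${ELASTICSEARCH_PASSWORD}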
Amazon OpenSearch Service-specific authentication.
Amazon OpenSearch Service-specific authentication.
Configuration of the region/endpoint to use when interacting with an AWS service.
Configuration of the region/endpoint to use when interacting with an AWS service.
Custom endpoint for use with AWS-compatible services.
default: null
The AWS region of the target service.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Elasticsearch bulk mode configuration.
Action to use when making requests to the Elasticsearch Bulk API.
Only index and create actions are supported.
default: index
A templated field.
The name of the index to write events to.
default: vector-%Y.%m.%d
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Elasticsearch data stream mode configuration.
Elasticsearch data stream mode configuration.
Automatically routes events by deriving the data stream name using specific event fields.
The format of the data stream name is <type>-<dataset>-<namespace>, where each value comes
from the data_stream configuration field of the same name.
If enabled, the values of the data_stream.type, data_stream.dataset, and
data_stream.namespace event fields are used if they are present. Otherwise, the values
set in this configuration are used.
A templated field.
The data stream dataset used to construct the data stream at index time.
A templated field.
The data stream namespace used to construct the data stream at index time.
Automatically adds and syncs the data_stream.*
event fields if they are missing from the event.
This ensures that fields match the name of the data stream that is receiving events.
A templated field.
The data stream type used to construct the data stream at index time.
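Putting the data stream options above together, a sketch might look like the following; the type, dataset, and namespace values are illustrative, and auto_routing and sync_fields are assumed to be the field names for the routing and syncing behaviors described above:
mode: data_stream
data_stream:
  type: logs
  dataset: generic
  namespace: default
  auto_routing: true
  sync_fields: true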
Options for determining the health of an endpoint.
Options for determining the health of an endpoint.
retry_initial_backoff_secs
Initial delay between attempts to reactivate endpoints once they become unhealthy.
Maximum delay between attempts to reactivate endpoints once they become unhealthy.
The doc_type
for your index data.
This is only relevant for Elasticsearch <= 6.X. If you are using >= 7.0 you do not need to
set this option since Elasticsearch has removed it.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The Elasticsearch endpoint to send logs to.
DEPRECATED: The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
A list of Elasticsearch endpoints to send logs to.
The endpoint must contain an HTTP scheme, and may specify a
hostname or IP address and port.
The name of the event key that should map to Elasticsearch’s _id
field.
By default, the _id
field is not set, which allows Elasticsearch to set this
automatically. Setting your own Elasticsearch IDs can hinder performance.
A wrapper around OwnedValuePath
that allows it to be used in Vector config.
This requires a valid path to be used. If you want to allow optional paths,
use optional_path::OptionalValuePath.
Configuration for the metric_to_log
transform.
Configuration for the metric_to_log
transform.
Name of the tag in the metric to use for the source host.
If present, the value of the tag is set on the generated log event in the host field,
where the field key uses the global host_key option.
The namespace to use for logs. This overrides the global setting.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments as
described by the native_json codec.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
The name of the time zone to apply to timestamp conversions that do not contain an explicit
time zone.
This overrides the global timezone option. The time zone name may be
any name in the TZ database, or local to indicate system local time.
Timezone reference.
This can refer to any valid timezone as defined in the TZ database, or "local" which refers to the system local timezone.
A named timezone.
Must be a valid name in the TZ database.
Elasticsearch Indexing mode.
Ingests documents in bulk, using the bulk API index action.
Ingests documents in bulk, using the bulk API create action.
Elasticsearch Data Streams only support the create action.
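For contrast, a sketch of bulk mode using the bulk options documented above (the index template mirrors the documented default):
mode: bulk
bulk:
  action: index
  index: vector-%Y.%m.%d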
The name of the pipeline to apply.
Custom parameters to add to the query string for each HTTP request sent to Elasticsearch.
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
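For illustration (a sketch using the defaults shown elsewhere on this page), with retry_initial_backoff_secs set to 1 and retry_max_duration_secs set to 3600, the waits between successive retries grow roughly as 1, 1, 2, 3, 5, 8, 13, ... seconds, and the wait is capped so that it never exceeds 3600 seconds between attempts.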
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
Whether or not to retry successful requests containing partial failures.
To avoid duplicates in Elasticsearch, use the id_key option.
Whether or not to send the type
field to Elasticsearch.
DEPRECATED: The type
field was deprecated in Elasticsearch 7.x and removed in Elasticsearch 8.x.
If enabled, the doc_type
option is ignored.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
api_version: auto
auth: ''
aws: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
bulk:
action: index
index: vector-%Y.%m.%d
compression: none
data_stream: ''
distribution: ''
doc_type: _doc
encoding: {}
endpoint: string
endpoints: []
id_key: ''
metrics: ''
mode: bulk
pipeline: string
query: object
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
headers: {}
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
request_retry_partial: boolean
suppress_type_name: boolean
tls: ''
type: elasticsearch
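As a minimal end-to-end sketch in a full Vector-style pipeline config (the sink name, input name, and endpoint are hypothetical):
sinks:
  my_elasticsearch:
    type: elasticsearch
    inputs:
      - my_source
    endpoints:
      - https://elasticsearch.example.com:9200
    mode: bulk
    bulk:
      action: index
      index: vector-%Y.%m.%d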
Configuration for the gcp_chronicle_unstructured
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The unique identifier (UUID) corresponding to the Chronicle instance.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The endpoint to send data to.
The type of log entries in a request.
This must be one of the supported log types; otherwise,
Chronicle rejects the entry with an error.
Google Chronicle regions.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is provided, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must provide an API key or service account
credentials JSON file.
Wrapper for sensitive strings containing credentials
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is provided, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must provide an API key or service account
credentials JSON file.
Skip all authentication handling. For use with integration tests only.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
customer_id: string
encoding: ''
endpoint: string
log_type: string
region: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
type: gcp_chronicle_unstructured
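A hedged end-to-end sketch; the sink name, input name, customer ID, and credentials path are placeholders, the log_type and region values are illustrative, and credentials_path is assumed to be the field backing the service account option described above:
sinks:
  chronicle:
    type: gcp_chronicle_unstructured
    inputs:
      - my_source
    customer_id: c1a2b3c4-0000-0000-0000-000000000000
    log_type: WINDOWS_DNS
    region: us
    credentials_path: /etc/vector/chronicle-credentials.json
    encoding:
      codec: text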
Configuration for the gcp_cloud_storage
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The Predefined ACL to apply to created objects.
For more information, see Predefined ACLs.
Bucket/object can be read by authenticated users.
The bucket/object owner is granted the OWNER permission, and any authenticated Google
account holder is granted the READER permission.
bucket-owner-full-control
Object is semi-private.
Both the object owner and bucket owner are granted the OWNER permission.
Only relevant when specified for an object: this predefined ACL is otherwise ignored when
specified for a bucket.
Object is private, except to the bucket owner.
The object owner is granted the OWNER permission, and the bucket owner is granted the
READER permission.
Only relevant when specified for an object: this predefined ACL is otherwise ignored when
specified for a bucket.
Bucket/object are private.
The bucket/object owner is granted the OWNER permission, and no one else has access.
Bucket/object are private within the project.
Project owners and project editors are granted the OWNER permission, and anyone who is
part of the project team is granted the READER permission.
This is the default.
Bucket/object can be read publicly.
The bucket/object owner is granted the OWNER permission, and all other users, whether
authenticated or anonymous, are granted the READER permission.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Whether or not to append a UUID v4 token to the end of the object key.
The UUID is appended to the timestamp portion of the object key, such that if the object key
generated is date=2022-07-18/1658176486, setting this field to true results
in an object key that looks like date=2022-07-18/1658176486-30f6652c-71da-4f9f-800d-a1189c47c547.
This ensures there are no name collisions, and can be useful in high-volume workloads where
object keys must be unique.
The filename extension to use in the object key.
If not specified, the extension is determined by the compression scheme used.
The timestamp format for the time component of the object key.
By default, object keys are appended with a timestamp that reflects when the objects are
sent to GCS, such that the resulting object key is functionally equivalent to joining the key
prefix with the formatted timestamp, such as date=2022-07-18/1658176486.
This would represent a key_prefix set to date=%F/ and the timestamp of Mon Jul 18 2022
20:34:44 GMT+0000, with filename_time_format set to %s, which renders
timestamps in seconds since the Unix epoch.
Supports the common strftime specifiers found in most languages.
When set to an empty string, no timestamp is appended to the key prefix.
A prefix to apply to all object keys.
Prefixes are useful for partitioning objects, such as by creating an object key that
stores objects under a particular directory. If using a prefix for this purpose, it must end
in / in order to act as a directory path. A trailing / is not automatically added.
The set of metadata key:value
pairs for the created objects.
For more information, see the custom metadata documentation.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The storage class for created objects.
For more information, see the storage classes documentation.
Standard storage.
This is the default.
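As a sketch, both options can be set explicitly, assuming the identifiers shown in the enumerations above are the accepted values:
acl: project-private
storage_class: STANDARD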
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order
that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
Event data is not delimited at all.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is delimited by a single ASCII (7-bit) character.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
Event data is delimited by a newline (LF) character.
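A brief sketch of the framing variants above, assuming method and character_delimited are the corresponding field names:
framing:
  method: newline_delimited
# or, delimiting with a specific ASCII character:
framing:
  method: character_delimited
  character_delimited:
    delimiter: ','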
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is provided, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must provide an API key or service account
credentials JSON file.
Wrapper for sensitive strings containing credentials
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS
environment variable is checked for a filename. If no
filename is provided, an attempt is made to fetch an instance service account for the compute instance the program is
running on. If this is not on a GCE instance, then you must provide an API key or service account
credentials JSON file.
Skip all authentication handling. For use with integration tests only.
acknowledgements:
enabled: null
acl: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
bucket: string
compression: none
filename_append_uuid: true
filename_extension: string
filename_time_format: '%s'
key_prefix: string
metadata: object
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
storage_class: ''
tls: ''
type: gcp_cloud_storage
Configuration for the gcp_pubsub
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the
metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
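A minimal sketch combining these options, assuming the Vector-style except_fields and timestamp_format keys under encoding; the excluded field name is hypothetical:
encoding:
  codec: json
  except_fields:
    - _internal_id
  timestamp_format: rfc3339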
The endpoint to which to publish events.
The scheme (http or https) must be specified. No path should be included, since the paths defined by the GCP Pub/Sub API are used. The trailing slash (/) must not be included.
The project name to which to publish events.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency, rate limits, and timeouts.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
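As a sketch, a request block that overrides a few of these middleware settings, using the option names shown above with illustrative values:
request:
  adaptive_concurrency:
    initial_concurrency: 4
  rate_limit_duration_secs: 1
  rate_limit_num: 1000
  retry_attempts: 10
  retry_initial_backoff_secs: 1
  retry_max_duration_secs: 300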
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
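A hedged sketch of a tls block using these options, assuming Vector-style key names (ca_file, crt_file, key_file, key_pass, verify_certificate, verify_hostname); the paths are placeholders:
tls:
  ca_file: /etc/ssl/certs/ca.pem
  crt_file: /etc/ssl/private/client.pem
  key_file: /etc/ssl/private/client-key.pem
  key_pass: ${TLS_KEY_PASS}
  verify_certificate: true
  verify_hostname: true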
The topic within the project to which to publish events.
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Wrapper for sensitive strings containing credentials.
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Skip all authentication handling. For use with integration tests only.
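As a sketch, the two authentication options described above could be configured as follows, assuming the Vector-style api_key and credentials_path key names; set only one of them:
# Option 1: authenticate with an API key (assumed key name).
api_key: ${GCP_API_KEY}
# Option 2: authenticate with a service account credentials JSON file (assumed key name).
# credentials_path: /etc/vector/gcp-credentials.json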
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
encoding: ''
endpoint: 'https://pubsub.googleapis.com'
project: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
topic: string
type: gcp_pubsub
Configuration for the gcp_stackdriver_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The log ID to which to publish logs.
This is a name you create to identify this log stream.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency, rate limits, and timeouts.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
A monitored resource.
The monitored resource to associate the logs with.
The field of the log event from which to take the outgoing log’s severity field.
The named field is removed from the log event if present, and must be either an integer between 0 and 800 or a string containing one of the severity level names (case is ignored) or a common prefix such as err.
If no severity key is specified, the severity of outgoing records is set to 0 (DEFAULT).
See the GCP Stackdriver Logging LogSeverity description for more details on the value of the severity field.
A wrapper around OwnedValuePath that allows it to be used in Vector config.
This requires a valid path to be used. If you want to allow optional paths, use [optional_path::OptionalValuePath].
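For illustration, a sketch of mapping a log field onto the outgoing severity using the severity_key option shown in the example below; the level field name is hypothetical:
# With this setting, an event such as {"message": "...", "level": "err"} is sent
# with severity ERROR, and the level field is removed from the event.
severity_key: level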
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Wrapper for sensitive strings containing credentials.
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Skip all authentication handling. For use with integration tests only.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
encoding: {}
log_id: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
resource: ''
severity_key: ''
tls: ''
type: gcp_stackdriver_logs
Configuration for the gcp_stackdriver_metrics sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The default namespace to use for metrics that do not have one.
Metrics with the same name can only be differentiated by their namespace, and not all
metrics have their own namespace.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency, rate limits, and timeouts.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
A monitored resource.
The monitored resource to associate the metrics with.
The monitored resource type.
For example, the type of a Compute Engine VM instance is gce_instance.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
An API key.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Wrapper for sensitive strings containing credentials.
Path to a service account credentials JSON file.
Either an API key or a path to a service account credentials JSON file can be specified.
If both are unset, the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked for a filename. If no filename is named, an attempt is made to fetch an instance service account for the compute instance the program is running on. If this is not on a GCE instance, then you must define it with an API key or service account credentials JSON file.
Skip all authentication handling. For use with integration tests only.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_namespace: namespace
project_id: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
resource: ''
tls: ''
type: gcp_stackdriver_metrics
Configuration for the honeycomb sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The API key that is used to authenticate against Honeycomb.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The dataset to which logs are sent.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency, rate limits, and timeouts.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
acknowledgements:
enabled: null
api_key: string
batch:
max_bytes: null
max_events: null
timeout_secs: null
dataset: string
encoding: {}
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
type: honeycomb
Configuration for the http sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
The basic authentication password.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
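A sketch of both strategies, assuming the Vector-style auth.strategy key; configure only one, and pair it with an HTTPS endpoint as noted above:
# Basic authentication (username and password are base64-encoded together).
auth:
  strategy: basic
  user: vector
  password: ${HTTP_PASSWORD}
# Bearer authentication (the token is passed as-is).
# auth:
#   strategy: bearer
#   token: ${HTTP_TOKEN}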
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
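For example, the shorthand form selects an algorithm at its default level, while an object form (assumed here to take algorithm and level keys matching the enum values above) pins a specific level:
compression: gzip
# Or, with an explicit level (the object form is an assumption):
# compression:
#   algorithm: gzip
#   level: best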
DEPRECATED: A list of custom headers to add to each request.
HTTP method.
The HTTP method to use when making the request.
A string to prefix the payload with.
This option is ignored if the encoding is not character delimited JSON.
If specified, the payload_suffix must also be specified, and together they must produce a valid JSON object.
A string to suffix the payload with.
This option is ignored if the encoding is not character delimited JSON.
If specified, the payload_prefix must also be specified, and together they must produce a valid JSON object.
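A sketch of wrapping comma-delimited JSON events in a single JSON envelope using these options together with character-delimited framing (documented further below); the data envelope key is illustrative:
encoding:
  codec: json
framing:
  method: character_delimited
  character_delimited:
    delimiter: ','
payload_prefix: '{"data": ['
payload_suffix: ']}'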
Outbound HTTP request settings.
Additional HTTP headers to add to every HTTP request.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
The maximum number of requests allowed within the rate_limit_duration_secs time window.
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
The maximum amount of time to wait between retries.
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
The full URI to make HTTP requests to.
This should include the protocol and host, but can also include the port, path, and any other valid part of a URI.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
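As a sketch, selecting one of these framing methods, assuming the Vector-style framing.method key:
framing:
  method: newline_delimited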
acknowledgements:
enabled: null
auth: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
headers: object
method: post
payload_prefix: string
payload_suffix: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
headers: {}
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
uri: string
type: http
Configuration for the humio_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported, and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer,
then quotes will be used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The base URL of the Humio instance.
The scheme (http or https) must be specified. No path should be included, since the paths defined by the Splunk API are used.
The type of events sent to this sink. Humio uses this as the name of the parser to use to ingest the data.
If unset, Humio defaults it to none.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you might have a sink that writes events to a file, and you want to specify which file an event should go to by using an event field as part of the input to the filename used.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
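For instance, a sketch of a templated value for the event_type option described above, using the {{key}} syntax; the application field name is hypothetical:
event_type: 'json-{{ application }}'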
Optional name of the repository to ingest into.
In public-facing APIs, this must (if present) be equal to the repository used to create the ingest token used for authentication.
In private cluster setups, Humio can be configured to allow these to be different.
For more information, see Humio’s Format of Data.
Event fields to be added to Humio’s extra fields.
Can be used to tag events by specifying fields starting with #.
For more information, see Humio’s Format of Data.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency, rate limits, and timeouts.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
default: 0.4
The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
default: 1
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The source of events sent to this sink.
Typically the filename the logs originated from. Maps to @source in Humio.
Overrides the name of the log field used to retrieve the nanosecond-enabled timestamp to send to Humio.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
The Humio ingestion token.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
encoding: ''
endpoint: 'https://cloud.humio.com'
event_type: ''
host_key: host
index: ''
indexed_fields: []
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
source: ''
timestamp_key: timestamp
timestamp_nanos_key: '@timestamp.nanos'
tls: ''
token: string
type: humio_logs
Configuration for the humio_metrics sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
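As a sketch (assuming the sink accepts either form), compression can be given as a bare algorithm name or as an object that also selects a level:
compression: gzip
compression:
  algorithm: zstd
  level: best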
The base URL of the Humio instance.
The scheme (http or https) must be specified. No path should be included, since the paths defined by the Splunk API are used.
The type of events sent to this sink. Humio uses this as the name of the parser to use to ingest the data.
If unset, Humio defaults it to none.
A templated field.
Either a fixed string or a templated string can be specified here, using the template syntax described earlier on this page.
Optional name of the repository to ingest into.
In public-facing APIs, this must (if present) be equal to the repository used to create the ingest token used for authentication.
In private cluster setups, Humio can be configured to allow these to be different.
For more information, see Humio’s Format of Data.
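For example (the repository name is hypothetical, and in public-facing APIs it must match the repository of the ingest token):
index: production-logs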
A templated field.
Either a fixed string or a templated string can be specified here, using the template syntax described earlier on this page.
Event fields to be added to Humio’s extra fields.
Can be used to tag events by specifying fields starting with #.
For more information, see Humio’s Format of Data.
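A sketch with hypothetical field names (the leading # marks a Humio tag):
indexed_fields:
  - '#env'
  - region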
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
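If you do need to tune these parameters, a minimal sketch (the values are illustrative, not recommendations):
request:
  adaptive_concurrency:
    decrease_ratio: 0.9
    ewma_alpha: 0.4
    initial_concurrency: 10   # start near the service's average concurrency limit
    rtt_deviation_scale: 2.5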
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The source of events sent to this sink.
Typically the filename the metrics originated from. Maps to @source
in Humio.
A templated field.
Either a fixed string or a templated string can be specified here, using the template syntax described earlier on this page.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
The Humio ingestion token.
Name of the tag in the metric to use for the source host.
If present, the value of the tag is set on the generated log event in the host
field,
where the field key uses the global host_key
option.
The namespace to use for logs. This overrides the global setting.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments as described by the native_json codec.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
The name of the time zone to apply to timestamp conversions that do not contain an explicit
time zone.
This overrides the global timezone
option. The time zone name may be
any name in the TZ database or local
to indicate system local time.
Timezone reference.
This can refer to any valid timezone as defined in the TZ database, or "local" which refers to the system local timezone.
A named timezone.
Must be a valid name in the TZ database.
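For example:
timezone: America/New_York
or, to use the system's local time zone:
timezone: local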
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
endpoint: 'https://cloud.humio.com'
event_type: ''
host_key: host
index: ''
indexed_fields: []
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
source: ''
tls: ''
token: string
type: humio_metrics
Configuration for the influxdb_logs
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The endpoint to send data to.
This should be a full HTTP URI, including the scheme, host, and port.
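For example, for a local InfluxDB v1.x instance listening on its default port (the address is illustrative):
endpoint: 'http://localhost:8086'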
Use this option to customize the key containing the hostname.
The setting of log_schema.host_key
, usually host
, is used here by default.
An optional path that deserializes an empty string to None.
The name of the InfluxDB measurement that is written to.
Use this option to customize the key containing the message.
The setting of log_schema.message_key
, usually message
, is used here by default.
An optional path that deserializes an empty string to None.
The namespace of the measurement name to use.
DEPRECATED: When specified, the measurement name is <namespace>.vector.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
Use this option to customize the key containing the source_type.
The setting of log_schema.source_type_key
, usually source_type
, is used here by default.
An optional path that deserializes an empty string to None.
The list of names of log fields that should be added as tags to each measurement.
By default, Vector adds metric_type as well as the configured log_schema.host_key and log_schema.source_type_key options.
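A sketch with hypothetical field names:
tags:
  - region
  - availability_zone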
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
Configuration settings for InfluxDB v0.x/v1.x.
Configuration settings for InfluxDB v2.x.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
encoding: {}
endpoint: string
host_key: ''
measurement: string
message_key: ''
namespace: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
source_type_key: ''
tags: []
tls: ''
type: influxdb_logs
Configuration for the influxdb_metrics
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Sets the default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with a period (.).
The endpoint to send data to.
This should be a full HTTP URI, including the scheme, host, and port.
The list of quantiles to calculate when sending distribution metrics.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
A map of additional tags, in the key/value pair format, to add to each measurement.
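A sketch with hypothetical tag names and values:
tags:
  region: us-west-1
  team: checkout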
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
Configuration settings for InfluxDB v0.x/v1.x.
Configuration settings for InfluxDB v2.x.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_namespace: string
endpoint: string
quantiles:
- 0.5
- 0.75
- 0.9
- 0.95
- 0.99
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tags: object
tls: ''
type: influxdb_metrics
Configuration for the kafka
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
A comma-separated list of Kafka bootstrap servers.
These are the servers in a Kafka cluster that a client should use to bootstrap its
connection to the cluster, allowing discovery of all the other hosts in the cluster.
Must be in the form of host:port, and comma-separated.
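For example (broker hostnames are hypothetical):
bootstrap_servers: 'kafka-1.internal:9092,kafka-2.internal:9092,kafka-3.internal:9092'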
Supported compression types for Kafka.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, this setting is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array
, Object
, and Regex
are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The log field name to use for the Kafka headers.
If omitted, no headers are written.
A wrapper around OwnedTargetPath that allows it to be used in Vector config, with the path prefix defaulting to PathPrefix::Event.
The log field name or tag key to use for the topic key.
If the field does not exist in the log or in the tags, a blank value is used. If
unspecified, the key is not sent.
Kafka uses a hash of the key to choose the partition or uses round-robin if the record has
no key.
A wrapper around OwnedTargetPath that allows it to be used in Vector config, with the path prefix defaulting to PathPrefix::Event.
A map of advanced options to pass directly to the underlying librdkafka
client.
For more information on configuration options, see Configuration properties.
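A sketch passing two librdkafka configuration properties through (the values are illustrative and given as strings):
librdkafka_options:
  'client.id': 'vector'
  'queue.buffering.max.ms': '500'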
Local message timeout, in milliseconds.
Default timeout, in milliseconds, for network requests.
A templated field.
The Kafka topic name to write events to.
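Since the topic is a templated field, it can be fixed or derived from the event (application_id is a hypothetical event field):
topic: 'logs-{{application_id}}'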
Configuration for SASL authentication when interacting with Kafka.
Configuration for SASL authentication when interacting with Kafka.
Enables SASL authentication.
Only PLAIN- and SCRAM-based mechanisms are supported when configuring SASL authentication using sasl.*. For other mechanisms, librdkafka_options.* must be used directly to configure other librdkafka-specific values.
Taking sasl.kerberos.* as an example, where * is service.name, principal, kinit.cmd, and so on, the corresponding keys become librdkafka_options.sasl.kerberos.service.name, librdkafka_options.sasl.kerberos.principal, and so on.
See the librdkafka documentation for details.
SASL authentication is not supported on Windows.
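A sketch of SASL settings for a SCRAM-based mechanism (the credentials are placeholders):
sasl:
  enabled: true
  mechanism: SCRAM-SHA-512
  username: vector
  password: '${KAFKA_SASL_PASSWORD}'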
The SASL mechanism to use.
Wrapper for sensitive strings containing credentials.
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file
for
more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
bootstrap_servers: string
compression: none
encoding: ''
headers_key: ''
key_field: ''
librdkafka_options: {}
message_timeout_ms: 300000
socket_timeout_ms: 60000
topic: string
type: kafka
Configuration for the logdna
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The default app that is set for events that do not contain a file or app field.
The default environment that is set for events that do not contain an env field.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The HTTP endpoint to send logs to.
Both IP address and hostname are accepted formats.
A templated field.
The hostname that is attached to each batch of events.
The IP address that is attached to each batch of events.
The MAC address that is attached to each batch of events.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The tags that are attached to each batch of events.
acknowledgements:
enabled: null
api_key: string
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_app: vector
default_env: production
encoding: {}
endpoint: 'https://logs.mezmo.com/'
hostname: string
ip: string
mac: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tags: array
type: logdna
Configuration for the loki
sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication password.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
Compression configuration.
Basic compression.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Loki-specific compression.
Snappy compression.
This implies sending push requests as Protocol Buffers.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, this setting is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array
, Object
, and Regex
are not supported and the
output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message
field of a log event.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message
field of a log event. For metrics, it uses an
encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap
transform) and removing the message field while doing additional parsing on it, as this
could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The base URL of the Loki instance.
The path
value is appended to this.
A set of labels that are attached to each batch of events.
Both keys and values are templateable, which enables you to attach dynamic labels to events.
Valid label keys include * and prefixes ending with *, to allow for the expansion of objects into multiple labels. See Label expansion for more information.
Note: If the set of labels has high cardinality, this can cause drastic performance issues
with Loki. To prevent this from happening, reduce the number of unique label keys and
values.
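A sketch combining a fixed label, a templated label, and wildcard expansion (the field names are hypothetical):
labels:
  source: vector
  app: '{{application}}'
  'pod_labels_*': '{{kubernetes.pod_labels}}'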
Out-of-order event behavior.
Some sources may generate events with timestamps that aren't in chronological order. Even though the
sink sorts the events before sending them to Loki, there is a chance that another event could come in
that is out of order with the latest events sent to Loki. Prior to Loki 2.4.0, this
was not supported and would result in an error during the push request.
If you're using Loki 2.4.0 or newer, Accept is the preferred action, which lets Loki handle any necessary sorting/reordering. If you're using an earlier version, you must use Drop or RewriteTimestamp, depending on which option makes the most sense for your use case.
Rewrite the timestamp of the event to the timestamp of the latest event seen by the sink.
Accept the event.
The event is not dropped and is sent without modification.
Requires Loki 2.4.0 or newer.
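For example, on Loki 2.4.0 or newer:
out_of_order_action: accept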
The path to use in the URL of the Loki instance.
Whether or not to delete fields from the event when they are used as labels.
Whether or not to remove the timestamp from the event payload.
The timestamp is still sent as event metadata for Loki to use for indexing.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5
Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num option.
default: 1
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 9223372036854776000
The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000
retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1
The maximum amount of time to wait between retries.
default: 3600
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60
The tenant ID to specify in requests to Loki.
When running Loki locally, a tenant ID is not required.
A templated field.
Either a fixed string or a templated string can be specified here, using the template syntax described earlier on this page.
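For example, the tenant can be fixed or derived from an event field (tenant is a hypothetical field):
tenant_id: '{{tenant}}'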
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file
must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file
is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false
unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
auth: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: snappy
encoding: ''
endpoint: string
labels: object
out_of_order_action: drop
path: /loki/api/v1/push
remove_label_fields: boolean
remove_timestamp: true
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tenant_id: ''
tls: ''
type: loki
Configuration for the mezmo
(formerly logdna
) sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
The default app that is set for events that do not contain a file or app field.
The default environment that is set for events that do not contain an env field.
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The HTTP endpoint to send logs to.
Both IP address and hostname are accepted formats.
A templated field.
The hostname that is attached to each batch of events.
The IP address that is attached to each batch of events.
The MAC address that is attached to each batch of events.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The tags that are attached to each batch of events.
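As a brief sketch before the defaults below, the request middleware might be tuned like this (the numbers are illustrative, not recommendations):

request:
  rate_limit_duration_secs: 1 # time window for the rate limit
  rate_limit_num: 100 # at most 100 requests per window
  retry_attempts: 5 # fail faster than the near-infinite default
  retry_initial_backoff_secs: 1 # later backoffs follow the Fibonacci sequence: 1, 1, 2, 3, 5, ...
  retry_max_duration_secs: 300
  timeout_secs: 60 # keep at or above the downstream service's own timeout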
acknowledgements:
enabled: null
api_key: string
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_app: vector
default_env: production
encoding: {}
endpoint: 'https://logs.mezmo.com/'
hostname: string
ip: string
mac: string
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tags: array
type: mezmo
Configuration for the nats sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configuration of the authentication strategy when interacting with NATS.
Configuration of the authentication strategy when interacting with NATS.
Username/password authentication.
Username/password authentication.
Username and password configuration.
Credentials file authentication. (JWT-based)
Credentials file configuration.
Path to credentials file.
Credentials file authentication. (JWT-based)
User.
Conceptually, this is equivalent to a public key.
Seed.
Conceptually, this is equivalent to a private key.
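As a hedged sketch of the two strategies above (the strategy field and its nesting are assumed from Vector's NATS auth schema; values are placeholders):

auth:
  strategy: user_password
  user_password:
    user: my-user
    password: '${NATS_PASSWORD}'
# Or, JWT-based credentials-file authentication:
# auth:
#   strategy: credentials_file
#   credentials_file:
#     path: /etc/vector/nats.creds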
A NATS name assigned to the NATS connection.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
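Putting these encoding options together, a minimal sketch (except_fields and timestamp_format are assumed option names matching the descriptions above; the excluded field is illustrative):

encoding:
  codec: json # one of the codecs described above
  except_fields:
    - _internal # illustrative field to exclude from the encoded event
  timestamp_format: rfc3339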
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The NATS subject to publish messages to.
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
The NATS URL to connect to.
The URL must take the form of nats://server:port. If the port is not specified, it defaults to 4222.
acknowledgements:
enabled: null
auth: ''
connection_name: vector
encoding: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
subject: string
tls: ''
url: string
type: nats
Configuration for the new_relic sink.
The New Relic account ID.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
A valid New Relic license key.
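A hedged sketch of the credential options (the environment-variable indirection is illustrative, and logs is assumed to be an accepted value for the api option shown in the defaults below):

account_id: '${NEW_RELIC_ACCOUNT_ID}'
license_key: '${NEW_RELIC_LICENSE_KEY}'
api: logs
compression: gzip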
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60account_id: string
acknowledgements:
enabled: null
api: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: gzip
encoding: {}
license_key: string
region: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
type: new_relic
Configuration for the papertrail sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The URI component of a request.
The TCP endpoint to send logs to.
TCP keepalive settings for socket-based components.
TCP keepalive settings for socket-based components.
The time to wait before starting to send TCP keepalive probes on an idle connection.
A templated field.
The value to use as the process in Papertrail.
Configures the send buffer size using the SO_SNDBUF option on the socket.
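For instance, a minimal sketch of these options (the endpoint, port, and service field are illustrative):

endpoint: logs.papertrailapp.com:12345 # illustrative Papertrail log destination
process: '{{ service }}' # templated; assumes a service field on the event
send_buffer_bytes: 65536
encoding:
  codec: logfmt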
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
encoding: ''
endpoint: string
keepalive: ''
process: vector
send_buffer_bytes: integer
tls: ''
type: papertrail
Configuration for the prometheus_exporter sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The address to expose for scraping.
The metrics are exposed at the typical Prometheus exporter path, /metrics.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an
HTTP header without any additional encryption beyond what is provided by the transport itself.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication password.
Basic authentication.
The username and password are concatenated and encoded via base64.
The basic authentication username.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
The bearer token value (OAuth2, JWT, etc.) is passed as-is.
The bearer authentication token.
Default buckets to use for aggregating distribution metrics into histograms.
The default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with an underscore (_).
It should follow the Prometheus naming conventions.
distributions_as_summaries
Whether or not to render distributions as an aggregated histogram or aggregated summary.
While distributions are supported as a lossless way to represent a set of samples for a metric, Prometheus clients (the application being scraped, which is this sink) must aggregate locally into either an aggregated histogram or aggregated summary.
The interval, in seconds, on which metrics are flushed.
On the flush interval, if a metric has not been seen since the last flush interval, it is
considered expired and is removed.
Be sure to configure this value higher than your client’s scrape interval.
Quantiles to use for aggregating distribution metrics into a summary.
Suppresses timestamps on the Prometheus output.
This can sometimes be useful when the source of metrics leads to their timestamps being too
far in the past for Prometheus to allow them, such as when aggregating metrics over long
time periods, or when replaying old metrics from a disk buffer.
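As a rough sketch of how these options combine (the values are illustrative, not recommendations):

address: '0.0.0.0:9598'
default_namespace: service # metrics are exposed as service_<name>
flush_period_secs: 90 # set higher than the Prometheus scrape interval
buckets:
  - 0.01
  - 0.1
  - 0.5
  - 1
  - 5
suppress_timestamp: false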
Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
address: '0.0.0.0:9598'
auth: ''
buckets:
- 0.005
- 0.01
- 0.025
- 0.05
- 0.1
- 0.25
- 0.5
- 1
- 2.5
- 5
- 10
default_namespace: string
distributions_as_summaries: boolean
flush_period_secs: 60
quantiles:
- 0.5
- 0.75
- 0.9
- 0.95
- 0.99
suppress_timestamp: boolean
tls: ''
type: prometheus_exporter
Configuration for the prometheus_remote_write sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Authentication strategies.
Authentication strategies.
HTTP Basic Authentication.
Basic authentication password.
HTTP Basic Authentication.
Basic authentication username.
Bearer authentication.
A bearer token (OAuth2, JWT, etc.) is passed as-is.
Bearer authentication.
A bearer token (OAuth2, JWT, etc.) is passed as-is.
The bearer token to send.
Amazon Prometheus Service-specific authentication.
Amazon Prometheus Service-specific authentication.
Configuration of the region/endpoint to use when interacting with an AWS service.
Configuration of the region/endpoint to use when interacting with an AWS service.
Custom endpoint for use with AWS-compatible services.
default: null
The AWS region of the target service.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Default buckets to use for aggregating distribution metrics into histograms.
Supported compression types for Prometheus Remote Write.
The default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, and separated with an underscore (_).
It should follow the Prometheus naming conventions.
The endpoint to send data to.
The endpoint should include the scheme and the path to write to.
Quantiles to use for aggregating distribution metrics into a summary.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The tenant ID to send.
If set, a header named X-Scope-OrgID is added to outgoing requests with the value of this setting.
This may be used by Cortex or other remote services to identify the tenant making the request.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be
customized on a per-event basis. For example, you have a sink that writes events to a file and you want to
specify which file an event should go to by using an event field as part of the
input to the filename used.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
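For example, a sketch of the tenant_id option in both forms (the tenant tag is hypothetical):

# Fixed tenant for all requests:
tenant_id: team-a
# Or templated per event, assuming each event carries a tenant tag:
# tenant_id: '{{ tenant }}'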
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
enabled: null
auth: ''
aws: ''
batch:
max_bytes: null
max_events: null
timeout_secs: null
buckets:
- 0.005
- 0.01
- 0.025
- 0.05
- 0.1
- 0.25
- 0.5
- 1
- 2.5
- 5
- 10
compression: snappy
default_namespace: string
endpoint: string
quantiles:
- 0.5
- 0.75
- 0.9
- 0.95
- 0.99
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tenant_id: ''
tls: ''
type: prometheus_remote_write
Configuration for the pulsar sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
Authentication configuration.
Authentication configuration.
Basic authentication name/username.
This can be used either for basic authentication (username/password) or JWT authentication.
When used for JWT, the value should be token.
OAuth2-specific authentication configuration.
OAuth2-specific authentication configuration.
The credentials URL.
A data URL is also supported.
Basic authentication password/token.
This can be used either for basic authentication (username/password) or JWT authentication.
When used for JWT, the value should be the signed JWT, in the compact representation.
Wrapper for sensitive strings containing credentials.
The maximum size of a batch before it is flushed.
default: null
The maximum amount of events in a batch before it is flushed.
Note this is an unsigned 32-bit integer, which is a smaller capacity than many of the other sink batch settings.
default: null
Supported compression types for Pulsar.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The endpoint to which the Pulsar client should connect.
The endpoint should specify the pulsar protocol and port.
The log field name or tags key to use for the partition key.
If the field does not exist in the log event or metric tags, a blank value will be used.
If omitted, the key is not sent.
Pulsar uses a hash of the key to choose the topic-partition or uses round-robin if the record has no key.
An optional path that deserializes an empty string to None.
The name of the producer. If not specified, the default name assigned by Pulsar is used.
The log field name to use for the Pulsar properties key.
If omitted, no properties will be written.
An optional path that deserializes an empty string to None.
A templated field.
The Pulsar topic name to write events to.
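A brief sketch combining these options (the endpoint, field names, and producer name are illustrative):

endpoint: 'pulsar://localhost:6650' # pulsar protocol and port
topic: 'logs-{{ application }}' # templated; assumes an application field on the event
partition_key_field: message_key # illustrative field holding the partition key
producer_name: vector-producer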
acknowledgements:
enabled: null
auth: ''
batch:
max_bytes: null
max_events: null
compression: none
encoding: ''
endpoint: string
partition_key_field: ''
producer_name: string
properties_key: ''
topic: string
type: pulsar
Configuration for the redis sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global
acknowledgements
configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Redis data type to store messages in.
The Redis list type.
This resembles a deque, where messages can be popped and pushed from either end.
This is the default.
The Redis channel type.
Redis channels function in a pub/sub fashion, allowing many-to-many broadcasting and receiving.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise, it is ignored.
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of each tag is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as a RFC 3339 timestamp.
The URL of the Redis endpoint to connect to.
The URL must take the form of protocol://server:port/db, where the protocol can be either redis or rediss for connections secured via TLS.
A templated field.
The Redis key to publish messages to.
The method to use for pushing messages into a list.
Use the rpush method.
This pushes messages onto the tail of the list. This is the default.
Use the lpush method.
This pushes messages onto the head of the list.
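A minimal sketch of these options together (mirroring the data_type and list_option defaults shown below), pushing messages onto the head of a list:
data_type: list
list_option:
  method: lpush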
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
data_type: list
encoding: ''
endpoint: string
key: string
list_option: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
type: redis
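For reference, a hedged example of a complete redis sink definition; the sink name, input ID, endpoint, and key below are placeholders, not values from this page:
sinks:
  my_redis_sink:
    type: redis
    inputs:
      - my_source_id
    endpoint: redis://127.0.0.1:6379/0
    key: vector-logs
    data_type: list
    list_option:
      method: rpush
    encoding:
      codec: json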
Configuration for the sematext_logs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Transformations to prepare an event for serialization.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The endpoint to send data to.
Setting this option overrides the region option.
The Sematext region to send data to.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The token that is used to write to Sematext.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
encoding: {}
endpoint: string
region: us
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
token: string
type: sematext_logs
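A hedged example for this sink; the sink name, input ID, and token reference are placeholders:
sinks:
  my_sematext_logs:
    type: sematext_logs
    inputs:
      - my_source_id
    region: us
    token: ${SEMATEXT_TOKEN}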
Configuration for the sematext_metrics sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Sets the default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is
present, it is used as a prefix to the metric name, and separated with a period (.).
The endpoint to send data to.
Setting this option overrides the region option.
The Sematext region to send data to.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The token that is used to write to Sematext.
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_namespace: string
endpoint: string
region: us
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
token: string
type: sematext_metrics
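And a corresponding sketch for the metrics variant; the namespace and token are placeholder values:
sinks:
  my_sematext_metrics:
    type: sematext_metrics
    inputs:
      - my_metrics_source
    region: us
    default_namespace: service
    token: ${SEMATEXT_TOKEN}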
Configuration for the socket sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
acknowledgements:
enabled: null
type: socket
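Only the acknowledgement options are documented above. As a hedged sketch, a usable socket sink also needs connection details; the address, mode, and encoding below are assumptions about the sink's typical shape rather than options listed on this page:
sinks:
  my_socket_sink:
    type: socket
    inputs:
      - my_source_id
    address: 127.0.0.1:9000
    mode: tcp
    encoding:
      codec: json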
Configuration for the splunk_hec_logs sink.
Splunk HEC acknowledgement configuration.
indexer_acknowledgements_enabled
The maximum number of pending acknowledgements from events sent to the Splunk HEC collector.
Once reached, the sink begins applying backpressure.
The amount of time to wait between queries to the Splunk HEC indexer acknowledgement endpoint.
The maximum number of times an acknowledgement ID is queried for its status.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
Passes the auto_extract_timestamp option to Splunk.
This option is only relevant to Splunk v8.x and above, and is only applied when endpoint_target is set to event.
Setting this to true causes Splunk to extract the timestamp from the message text rather than use the timestamp embedded in the event. The timestamp must be in the format yyyy-mm-dd hh:mm:ss.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Default Splunk HEC token.
If an event has a token set in its secrets (splunk_hec_token), it prevails over the one set here.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
The base URL of the Splunk instance.
The scheme (http or https) must be specified. No path should be included since the paths defined by the Splunk API are used.
Splunk HEC endpoint configuration.
Events are sent to the raw endpoint.
When the raw endpoint is used, configured event metadata is sent as query parameters on the request, except for the timestamp field.
The name of the index to send events to.
If not specified, the default index defined within Splunk is used.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
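For instance, a hedged one-line sketch of a templated index, where service is a hypothetical event field:
index: "{{ service }}-logs"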
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The source of events sent to this sink.
This is typically the filename the logs originated from.
If unset, the Splunk collector sets it.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
The sourcetype of events sent to this sink.
If unset, Splunk defaults to httpevent.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
Overrides the name of the log field used to retrieve the timestamp to send to Splunk HEC.
When set to “”, a timestamp is not set in the events sent to Splunk HEC.
By default, the global log_schema.timestamp_key option is used.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
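A hedged sketch of a TLS block built from the options above; every path is a placeholder:
tls:
  ca_file: /path/to/ca.pem
  crt_file: /path/to/client.pem
  key_file: /path/to/client.key
  verify_certificate: true
  verify_hostname: true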
acknowledgements:
indexer_acknowledgements_enabled: true
max_pending_acks: 1000000
query_interval: 10
retry_limit: 30
auto_extract_timestamp: boolean
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
default_token: string
encoding: ''
endpoint: string
endpoint_target: event
host_key: host
index: ''
indexed_fields: []
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
source: ''
sourcetype: ''
timestamp_key: timestamp
tls: ''
type: splunk_hec_logs
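For reference, a hedged example of a splunk_hec_logs sink; the endpoint, token reference, and index template are placeholders:
sinks:
  my_splunk_logs:
    type: splunk_hec_logs
    inputs:
      - my_source_id
    endpoint: https://splunk.example.com:8088
    default_token: ${SPLUNK_HEC_TOKEN}
    index: "{{ service }}"
    compression: gzip
    encoding:
      codec: json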
Configuration of the splunk_hec_metrics sink.
Splunk HEC acknowledgement configuration.
indexer_acknowledgements_enabled
The maximum number of pending acknowledgements from events sent to the Splunk HEC collector.
Once reached, the sink begins applying backpressure.
The amount of time to wait between queries to the Splunk HEC indexer acknowledgement endpoint.
The maximum number of times an acknowledgement ID is queried for its status.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
Sets the default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is
present, it is used as a prefix to the metric name, and separated with a period (.).
Default Splunk HEC token.
If an event has a token set in its metadata, it prevails over the one set here.
The base URL of the Splunk instance.
The scheme (http or https) must be specified. No path should be included since the paths defined by the Splunk API are used.
The name of the index to send events to.
If not specified, the default index defined within Splunk is used.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60The source of events sent to this sink.
This is typically the filename the logs originated from.
If unset, the Splunk collector sets it.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
The sourcetype of events sent to this sink.
If unset, Splunk defaults to httpevent
.
A templated field.
In many cases, components can be configured so that part of the component's functionality can be customized on a per-event basis. For example, you may have a sink that writes events to a file, and you want to specify which file an event goes to by using an event field as part of the filename.
By using Template, users can specify either fixed strings or templated strings. Templated strings use a common syntax to refer to fields in an event that is used as the input data when rendering the template. An example of a fixed string is my-file.log. An example of a template string is my-file-{{key}}.log, where {{key}} is the key's value when the template is rendered into a string.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
acknowledgements:
indexer_acknowledgements_enabled: true
max_pending_acks: 1000000
query_interval: 10
retry_limit: 30
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: none
default_namespace: string
default_token: string
endpoint: string
host_key: host
index: ''
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
source: ''
sourcetype: ''
tls: ''
type: splunk_hec_metrics
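And a corresponding hedged sketch for the metrics sink; all values are placeholders:
sinks:
  my_splunk_metrics:
    type: splunk_hec_metrics
    inputs:
      - my_metrics_source
    endpoint: https://splunk.example.com:8088
    default_token: ${SPLUNK_HEC_TOKEN}
    default_namespace: service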
Configuration for the statsd sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Sets the default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is
present, it is used as a prefix to the metric name, and separated with a period (.).
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
default_namespace: string
type: statsd
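Only batching, namespacing, and acknowledgements are documented above. As a hedged sketch, a working statsd sink also needs a destination; the address and mode below are assumptions about the sink's typical shape rather than options listed on this page:
sinks:
  my_statsd_sink:
    type: statsd
    inputs:
      - my_metrics_source
    address: 127.0.0.1:8125
    mode: udp
    default_namespace: service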
Configuration for the vector sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The downstream Vector address to which to connect.
Both IP address and hostname are accepted formats.
The address must include a port.
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are
serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Whether or not to compress requests.
If set to true, requests are compressed with gzip.
Middleware settings for outbound requests.
Various settings can be configured, such as concurrency and rate limits, timeouts, etc.
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or
unstable performance and sink behavior. Proceed with caution.
default: {"decrease_ratio":0.9,"ewma_alpha":0.4,"initial_concurrency":1,"rtt_deviation_scale":2.5}The fraction of the current value to set the new concurrency limit when decreasing the limit.
Valid values are greater than 0
and less than 1
. Smaller values cause the algorithm to scale back rapidly
when latency increases.
Note that the new limit is rounded down after applying this ratio.
default: 0.9The weighting of new measurements compared to older measurements.
Valid values are greater than 0
and less than 1
.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with
the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has
unusually high response variability.
default: 0.4The initial concurrency limit to use. If not specified, the initial limit will be 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a
long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the
adaptive_concurrency_limit
metric.
default: 1Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0
, and we expect reasonable values to range from 1.0
to 3.0
.
When calculating the past RTT average, we also compute a secondary “deviation” value that indicates how variable
those values are. We use that deviation when comparing the past RTT average to the current measurements, so we
can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to
an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
default: 2.5Configuration for outbound request concurrency.
A fixed concurrency of 1.
Only one request can be outstanding at any given time.
A fixed amount of concurrency will be allowed.
The time window used for the rate_limit_num
option.
default: 1The maximum number of requests allowed within the rate_limit_duration_secs
time window.
default: 9223372036854776000The maximum number of retries to make for failed requests.
The default, for all intents and purposes, represents an infinite number of retries.
default: 9223372036854776000retry_initial_backoff_secs
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the fibonacci sequence is used to select future backoffs.
default: 1The maximum amount of time to wait between retries.
default: 3600The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could
create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60Configures the TLS options for incoming/outgoing connections.
Configures the TLS options for incoming/outgoing connections.
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as
an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
Enables certificate verification.
If enabled, certificates must not be expired and must be issued by a trusted
issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the
certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and
so on until the verification process reaches a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by
the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
Version of the configuration.
Marker type for version two of the configuration for the vector sink.
Marker value for version two.
acknowledgements:
enabled: null
address: string
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: boolean
request:
adaptive_concurrency:
decrease_ratio: 0.9
ewma_alpha: 0.4
initial_concurrency: 1
rtt_deviation_scale: 2.5
rate_limit_duration_secs: 1
rate_limit_num: 9223372036854776000
retry_attempts: 9223372036854776000
retry_initial_backoff_secs: 1
retry_max_duration_secs: 3600
timeout_secs: 60
tls: ''
version: ''
type: vector
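A hedged example of forwarding to a downstream Vector instance; the address is a placeholder:
sinks:
  my_vector_sink:
    type: vector
    inputs:
      - my_source_id
    address: vector.example.com:6000
    compression: true
    version: '2'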
Configuration for the webhdfs sink.
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports
end-to-end acknowledgements as well, waits for events to be acknowledged by the sink
before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
default: null
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
default: null
The maximum size of a batch before it is flushed.
default: null
The maximum age of a batch before it is flushed.
default: null
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Compression algorithm and compression level.
Compression level.
Allowed enum values: none,fast,best,default,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21
An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients.
The endpoint is the HDFS cluster's RESTful web HTTP API endpoint.
For more information, see the HDFS Architecture documentation.
A prefix to apply to all keys.
Prefixes are useful for partitioning objects, such as by creating a blob key that
stores blobs under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path. A trailing / is not automatically added.
The final file path is in the format of {root}/{prefix}{suffix}.
The root path for WebHDFS.
Must be a valid directory.
The final file path is in the format of {root}/{prefix}{suffix}.
Configures how events are encoded into raw bytes.
Apache Avro-specific encoder options.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
The CSV Serializer Options.
Set the capacity (in bytes) of the internal buffer used in the CSV writer.
This defaults to a reasonable setting.
The field delimiter to use when writing CSV.
Enable double quote escapes.
This is enabled by default, but it may be disabled. When disabled, quotes in
field data are escaped instead of doubled.
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character
like \ (instead of escaping quotes by doubling them).
To use this, double_quotes needs to be disabled as well; otherwise it is ignored.
Configures the fields that will be encoded, as well as the order in which they
appear in the output.
If a field is not present in the event, the output will be an empty string.
Values of type Array, Object, and Regex are not supported and the output will be an empty string.
The quote character to use when writing CSV.
The quoting style to use when writing CSV data.
This puts quotes around every field. Always.
This puts quotes around fields only when necessary.
They are necessary when fields contain a quote, delimiter or record terminator.
Quotes are also necessary when writing an empty record
(which is indistinguishable from a record with one empty field).
This puts quotes around all fields that are non-numeric.
That is, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren’t strictly necessary.
This never writes quotes, even if it would produce invalid CSV data.
Encodes an event as a CSV message.
This codec must be configured with fields to encode.
Encodes an event as a GELF message.
Encodes an event as JSON.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config
option. Tags with multiple values show the last assigned value, and null values
are ignored.
All tags are exposed as arrays of either string or null values.
Encodes an event as JSON.
Encodes an event as a logfmt message.
No encoding.
This encoding uses the message field of a log event.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Plain text encoding.
This encoding uses the message field of a log event. For metrics, it uses an encoding that resembles the Prometheus export format.
Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event.
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored.
All tags are exposed as arrays of either string or null values.
List of fields that are excluded from the encoded event.
List of fields that are included in the encoded event.
Format used for timestamp fields.
The format in which a timestamp should be represented.
Represent the timestamp as a Unix timestamp.
Represent the timestamp as an RFC 3339 timestamp.
Event data is not delimited at all.
Event data is delimited by a single ASCII (7-bit) character.
Options for the character delimited encoder.
The ASCII (7-bit) character that delimits byte sequences.
Event data is prefixed with its length in bytes.
The prefix is a 32-bit unsigned integer, little endian.
Event data is delimited by a newline (LF) character.
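For example, a hedged sketch selecting newline-delimited framing (assuming the framing option takes a method key, consistent with the variants above):
framing:
  method: newline_delimited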
acknowledgements:
enabled: null
batch:
max_bytes: null
max_events: null
timeout_secs: null
compression: gzip
endpoint: string
prefix: string
root: string
type: webhdfs
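Finally, a hedged example for this sink; the endpoint, root, and prefix are placeholders:
sinks:
  my_webhdfs_sink:
    type: webhdfs
    inputs:
      - my_source_id
    endpoint: http://namenode.example.com:9870
    root: /logs/vector
    prefix: app-
    encoding:
      codec: json
    compression: gzip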