For existing pipelines in Observability Pipelines, you can update and deploy changes to the source settings, destination settings, and processors in the Observability Pipelines UI. However, if you want to update source and destination environment variables, you need to manually update the Worker with the new values.
Click Edit Pipeline in the top right corner.
Make changes to the pipeline.
If you are updating the source or destination settings shown in the tiles, or updating and adding processors, make the changes and then click Deploy Changes.
To update source or destination environment variables, click Go to Worker Installation Steps and see Update source or destination variables for instructions.
The Observability Pipelines Worker listens to this socket address to receive logs from Amazon Data Firehose.
The address is stored in the environment variable AWS_DATA_FIREHOSE_ADDRESS.
Amazon S3 SQS URL
The URL of the SQS queue to which the S3 bucket sends the notification events.
Stored as the environment variable: DD_OP_SOURCE_AWS_S3_SQS_URL
AWS_CONFIG_FILE path
The path to the AWS configuration file local to this node.
Stored as the environment variable: AWS_CONFIG_FILE.
AWS_PROFILE name
The name of the profile to use within these files.
Stored as the environment variable: AWS_PROFILE.
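As an illustration only, the two variables could point at a config file and named profile like the following; the file path, profile name, and region here are hypothetical:
# Hypothetical AWS config file with a named profile for the Worker
cat > /opt/aws/config <<'EOF'
[profile op-worker]
region = us-east-1
EOF
# Point the Worker node at that file and profile
export AWS_CONFIG_FILE=/opt/aws/config
export AWS_PROFILE=op-worker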
Datadog Agent address:
The Observability Pipelines Worker listens to this socket address to receive logs from the Datadog Agent.
Stored in the environment variable DD_OP_SOURCE_DATADOG_AGENT_ADDRESS.
Fluent socket address and port:
The Observability Pipelines Worker listens on this address for incoming log messages.
Stored in the environment variable DD_OP_SOURCE_FLUENT_ADDRESS.
There are no environment variables for the Google Pub/Sub source.
HTTP/s endpoint URL:
The Observability Pipelines Worker collects log events from this endpoint. For example, https://127.0.0.8/logs.
Stored as the environment variable: DD_OP_SOURCE_HTTP_CLIENT_ENDPOINT_URL.
If you are using basic authentication:
HTTP/S endpoint authentication username and password.
Stored as the environment variables: DD_OP_SOURCE_HTTP_CLIENT_USERNAME and DD_OP_SOURCE_HTTP_CLIENT_PASSWORD.
If you are using bearer authentication:
HTTP/S endpoint bearer token.
Stored as the environment variable: DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN.
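For illustration, setting one authentication mode or the other could look like the following; the values are placeholders:
# Basic authentication: set both the username and password variables
export DD_OP_SOURCE_HTTP_CLIENT_USERNAME=<USERNAME>
export DD_OP_SOURCE_HTTP_CLIENT_PASSWORD=<PASSWORD>
# Bearer authentication: set only the token variable
export DD_OP_SOURCE_HTTP_CLIENT_BEARER_TOKEN=<TOKEN>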
HTTP/S server address:
The Observability Pipelines Worker listens to this socket address, such as 0.0.0.0:9997, for your HTTP client logs.
Stored in the environment variable: DD_OP_SOURCE_HTTP_SERVER_ADDRESS.
The host and port of the Kafka bootstrap servers.
The bootstrap server that the client uses to connect to the Kafka cluster and discover all the other hosts in the cluster. The host and port must be entered in the format of host:port, such as 10.14.22.123:9092. If there is more than one server, use commas to separate them.
Stored as the environment variable: DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS.
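For example, a two-broker value could be set as follows; both broker addresses are hypothetical:
# Comma-separated host:port pairs, with no spaces between entries
export DD_OP_SOURCE_KAFKA_BOOTSTRAP_SERVERS=10.14.22.123:9092,10.14.22.124:9092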
If you enabled SASL:
Kafka SASL username
Stored as the environment variable: DD_OP_SOURCE_KAFKA_SASL_USERNAME.
Kafka SASL password
Stored as the environment variable: DD_OP_SOURCE_KAFKA_SASL_PASSWORD.
Logstash address and port:
The Observability Pipelines Worker listens on this address, such as 0.0.0.0:9997, for incoming log messages.
Stored in the environment variable DD_OP_SOURCE_LOGSTASH_ADDRESS.
Splunk HEC address:
The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Splunk indexer. For example, 0.0.0.0:8088. Note: The /services/collector/event path is automatically appended to the endpoint.
Stored in the environment variable DD_OP_SOURCE_SPLUNK_HEC_ADDRESS.
Splunk TCP address:
The Observability Pipelines Worker listens to this socket address to receive logs from the Splunk Forwarder. For example, 0.0.0.0:9997.
Stored in the environment variable DD_OP_SOURCE_SPLUNK_TCP_ADDRESS.
Sumo Logic address:
The bind address that your Observability Pipelines Worker listens on to receive logs originally intended for the Sumo Logic HTTP Source. For example, 0.0.0.0:80. Note: The /receiver/v1/http/ path is automatically appended to the endpoint.
Stored in the environment variable DD_OP_SOURCE_SUMO_LOGIC_ADDRESS.
rsyslog or syslog-ng address:
The Observability Pipelines Worker listens on this bind address to receive logs from the Syslog forwarder. For example, 0.0.0.0:9997.
Stored in the environment variable DD_OP_SOURCE_SYSLOG_ADDRESS.
If you want to update destination environment variables, update the information for your logs destination.
Azure connection string to give the Worker access to your Azure Storage bucket.
Stored in the environment variable: DD_OP_DESTINATION_DATADOG_ARCHIVES_AZURE_BLOB_CONNECTION_STRING.
Elasticsearch authentication username:
Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_USERNAME.
Elasticsearch authentication password:
Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_PASSWORD.
Elasticsearch endpoint URL:
Stored in the environment variable: DD_OP_DESTINATION_ELASTICSEARCH_ENDPOINT_URL.
Data collection endpoint (DCE)
Stored as the environment variable: DD_OP_DESTINATION_MICROSOFT_SENTINEL_DCE_URI
Client secret
Stored as the environment variable: DD_OP_DESTINATION_MICROSOFT_SENTINEL_CLIENT_SECRET
New Relic account ID:
Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_ACCOUNT_ID.
New Relic license:
Stored in the environment variable: DD_OP_DESTINATION_NEW_RELIC_LICENSE_KEY.
OpenSearch authentication username:
Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_USERNAME.
OpenSearch authentication password:
Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_PASSWORD.
OpenSearch endpoint URL:
Stored in the environment variable: DD_OP_DESTINATION_OPENSEARCH_ENDPOINT_URL.
SentinelOne write access token:
Stored as the environment variable: DD_OP_DESTINATION_SENTINEL_ONE_TOKEN
Splunk HEC token:
The Splunk HEC token for the Splunk indexer.
Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_TOKEN.
Base URL of the Splunk instance:
The Splunk HTTP Event Collector endpoint your Observability Pipelines Worker sends processed logs to. For example, https://hec.splunkcloud.com:8088. Note: The /services/collector/event path is automatically appended to the endpoint.
Stored in the environment variable DD_OP_DESTINATION_SPLUNK_HEC_ENDPOINT_URL.
Unique URL generated for the HTTP Logs and Metrics Source to receive log data.
The Sumo Logic HTTP Source endpoint. The Observability Pipelines Worker sends processed logs to this endpoint. For example, https://<ENDPOINT>.collection.sumologic.com/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>, where:
<ENDPOINT> is your Sumo collection endpoint.
<UNIQUE_HTTP_COLLECTOR_CODE> is the string that follows the last forward slash (/) in the upload URL for the HTTP source.
Stored in the environment variable DD_OP_DESTINATION_SUMO_LOGIC_HTTP_COLLECTOR_URL.
The rsyslog or syslog-ng endpoint URL. For example, 127.0.0.1:9997.
The Observability Pipelines Worker sends logs to this address and port.
Stored as the environment variable: DD_OP_DESTINATION_SYSLOG_ENDPOINT_URL.
Follow the instructions for your environment to update the Worker:
Note: By default, the docker run command exposes the same port the Worker is listening on. If you want to map the Worker’s container port to a different port on the Docker host, use the -p | --publish option:
-p 8282:8088 datadog/observability-pipelines-worker run
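Put together, a complete command with the remapped port could look like the following sketch, assuming the same DD_API_KEY, DD_OP_PIPELINE_ID, and DD_SITE variables used by the original install command; the bracketed values are placeholders and 8088 stands in for the port the Worker listens on inside the container:
# Example only: map Docker host port 8282 to the Worker's container port 8088
docker run -i -e DD_API_KEY=<DATADOG_API_KEY> -e DD_OP_PIPELINE_ID=<PIPELINE_ID> -e DD_SITE=<DATADOG_SITE> -p 8282:8088 datadog/observability-pipelines-worker run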
Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
Note: By default, the Kubernetes Service maps incoming port <SERVICE_PORT> to the port the Worker is listening on (<TARGET_PORT>). If you want to map the Worker’s pod port to a different incoming port of the Kubernetes Service, use the following service.ports[0].port and service.ports[0].targetPort values:
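One way to apply that mapping is with Helm set flags; this is a minimal sketch, assuming the Worker was installed from the Datadog Helm repository's observability-pipelines-worker chart with a release named opw and an existing values.yaml:
# Example values: map incoming Service port 8282 to the Worker pod's target port 8088
helm upgrade opw datadog/observability-pipelines-worker \
  -f values.yaml \
  --set "service.ports[0].port=8282" \
  --set "service.ports[0].targetPort=8088"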
Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
Click Deploy Changes.
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to re-install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
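For example, a manual update could look like the following; the variable and value are illustrative, and the systemd service name is assumed to match the package name:
# Change the variable in place, then restart the Worker so it picks up the new value
sudo sed -i 's/^DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=.*/DD_OP_SOURCE_SPLUNK_TCP_ADDRESS=0.0.0.0:9997/' /etc/default/observability-pipelines-worker
sudo systemctl restart observability-pipelines-worker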
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Run the following commands to update your local apt repo and install the latest Worker version:
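A minimal sketch of those commands, assuming the package is named observability-pipelines-worker and the Datadog apt repository is already configured on the host:
# Refresh the apt package index and install the newest Worker build
sudo apt-get update
sudo apt-get install observability-pipelines-worker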
Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
Click Deploy Changes.
Click Select API key to choose the Datadog API key you want to use.
Run the one-step command provided in the UI to re-install the Worker.
Note: The environment variables used by the Worker in /etc/default/observability-pipelines-worker are not updated on subsequent runs of the install script. If changes are needed, update the file manually and restart the Worker.
If you prefer not to use the one-line installation script, follow these step-by-step instructions:
Update your packages and install the latest version of the Worker:
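A minimal sketch of that step on an RPM-based host, assuming the package is named observability-pipelines-worker and the Datadog yum repository is already configured:
# Refresh the yum metadata cache and install the newest Worker build
sudo yum makecache -y
sudo yum install -y observability-pipelines-worker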
Click Navigate Back to go back to the Observability Pipelines edit pipeline page.
Click Deploy Changes.
Select the expected log volume for the pipeline from the dropdown.
Select the AWS region you want to use to install the Worker.
Click Select API key to choose the Datadog API key you want to use.
Click Launch CloudFormation Template to navigate to the AWS Console to review the stack configuration and then launch it. Make sure the CloudFormation parameters are set as expected.
Select the VPC and subnet that you want to use to install the Worker.
Review and check the necessary permissions checkboxes for IAM. Click Submit to create the stack. CloudFormation handles the installation at this point; the Worker instances are launched, the necessary software is downloaded, and the Worker starts automatically.
Delete the previous CloudFormation stack and resources associated with it.
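You can do this in the AWS Console or with the AWS CLI; the stack name below is a placeholder for whatever you named the previous stack:
# Delete the old stack; CloudFormation removes the resources it created for it
aws cloudformation delete-stack --stack-name <PREVIOUS_WORKER_STACK_NAME>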
Click Navigate Back to go back to the Observability Pipelines edit pipeline page.