Overview
As your infrastructure and applications grow, so do your log volume and the complexity of your data. A large volume of logs introduces noise and makes logs difficult to analyze and troubleshoot. Use Observability Pipelines' processors to decide which logs are valuable and which are noisy and uninteresting before sending your logs to their destinations. You can use the following processors in the Observability Pipelines Worker to manage your logs:
Filter: Add a query to send only a subset of logs based on your conditions.
Sample: Define a sampling rate to send only a subset of your logs.
Quota: Enforce daily limits on either the volume of log data or the number of log events.
Dedupe: Drop duplicate copies of your logs, for example, duplicates created by retries after network issues.
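Conceptually, each processor takes a stream of logs and emits a reduced subset. The sketch below illustrates the four behaviors in plain Python; the function names, log shapes, and parameters are illustrative assumptions, not the Worker's actual configuration or API:

```python
import random

def filter_logs(logs, predicate):
    # Filter: keep only logs matching a condition
    # (stands in for the query you would write in the UI).
    return [log for log in logs if predicate(log)]

def sample_logs(logs, rate, seed=0):
    # Sample: keep roughly `rate` fraction of logs at random.
    rng = random.Random(seed)
    return [log for log in logs if rng.random() < rate]

def quota_logs(logs, max_events):
    # Quota: enforce a cap on the number of events;
    # logs beyond the limit are dropped.
    return logs[:max_events]

def dedupe_logs(logs, fields):
    # Dedupe: drop a log when the selected fields match
    # an earlier log (e.g. a duplicate caused by a retry).
    seen, kept = set(), []
    for log in logs:
        key = tuple(log.get(f) for f in fields)
        if key not in seen:
            seen.add(key)
            kept.append(log)
    return kept

logs = [
    {"status": "error", "message": "timeout"},
    {"status": "info",  "message": "ok"},
    {"status": "error", "message": "timeout"},  # retry duplicate
]
errors = filter_logs(logs, lambda l: l["status"] == "error")
unique = dedupe_logs(errors, ["status", "message"])
```

In this toy run, `filter_logs` keeps the two error logs and `dedupe_logs` collapses the retry duplicate, leaving a single event. In the real product, each processor is configured in the pipeline rather than written as code.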