Object with all Index configurations for a given organization.
indexes ([object]): Array of log index configurations. Each index has the following fields.
    daily_limit (int64): The number of log events you can send in this index per day before you are rate-limited.
    daily_limit_reset (object): Object containing options to override the default daily limit reset time.
        reset_time (string): String in HH:00 format representing the time of day the daily limit should be reset. The hours must be between 00 and 23 (inclusive).
        reset_utc_offset (string): String in (-|+)HH:00 format representing the UTC offset to apply to the given reset time. The hours must be between -12 and +14 (inclusive).
    daily_limit_warning_threshold_percentage (double): A percentage threshold of the daily quota at which a Datadog warning event is generated. For example, with a daily_limit of 300,000,000 and a threshold of 70.0, the warning event is generated once 210,000,000 logs have been indexed that day.
    exclusion_filters ([object]): An array of exclusion objects. Logs are tested against the query of each filter, following the order of the array. Only the first matching active exclusion applies; any others are ignored (see the sketch after this field list).
        filter (object): An exclusion filter is defined by a query, a sampling rule, and an active/inactive toggle.
            query (string): The default query is *, meaning all logs flowing into the index would be excluded. Scope the exclusion filter down to a subset of logs with a log query.
            sample_rate [required] (double): Sample rate to apply to logs going through this exclusion filter; a value of 1.0 excludes all logs matching the query.
        is_enabled (boolean): Whether or not the exclusion filter is active.
        name [required] (string): Name of the index exclusion filter.
    filter [required] (object): Filter for logs.
        query (string): The filter query.
    is_rate_limited (boolean): Whether the index is rate limited, meaning more logs than the daily limit have been sent. The rate limit is reset every day at 2:00 p.m. UTC.
    name [required] (string): The name of the index.
    num_flex_logs_retention_days (int64): The total number of days logs are stored in Standard and Flex Tier before being deleted from the index. If Standard Tier is enabled on this index, logs are first retained in Standard Tier for the number of days specified through num_retention_days, and then stored in Flex Tier until the number of days specified in num_flex_logs_retention_days is reached. For example, with num_retention_days=15 and num_flex_logs_retention_days=360, logs spend days 1 through 15 in Standard Tier and days 16 through 360 in Flex Tier. The available values depend on the retention plans specified in your organization's contract or subscriptions.
    num_retention_days (int64): The number of days logs are stored in Standard Tier before aging into Flex Tier or being deleted from the index. The available values depend on the retention plans specified in your organization's contract or subscriptions.
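Two behaviors above are worth a short, illustrative sketch: the first-match semantics of exclusion_filters, and the way reset_utc_offset shifts reset_time. This is not client-library code; matches() is a hypothetical stand-in for Datadog's log query evaluation.

# Illustrative sketch only; matches() is a hypothetical stand-in for
# Datadog's log query evaluation, not a datadog_api_client function.
import random


def matches(query: str, log: dict) -> bool:
    # Hypothetical evaluator: "*" matches every log.
    return query == "*" or query in log.get("message", "")


def is_excluded(log: dict, exclusion_filters: list) -> bool:
    """Logs are tested against filters in array order; only the first
    matching active exclusion applies, the rest are ignored."""
    for exclusion in exclusion_filters:
        if not exclusion.get("is_enabled"):
            continue  # inactive filters never match
        f = exclusion["filter"]
        if matches(f.get("query", "*"), log):
            # sample_rate 1.0 excludes every matching log; 0.25 excludes 25%.
            return random.random() < f["sample_rate"]
    return False


def reset_hour_utc(reset_time: str, reset_utc_offset: str) -> int:
    # reset_time "14:00" with reset_utc_offset "+02:00" means the daily
    # quota resets at 14:00 UTC+2, that is, 12:00 UTC.
    hour = int(reset_time[:2])
    offset = int(reset_utc_offset[:3])  # "+02" -> 2, "-05" -> -5
    return (hour - offset) % 24


assert reset_hour_utc("14:00", "+02:00") == 12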
"""
Get all indexes returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v1.api.logs_indexes_apiimportLogsIndexesApiconfiguration=Configuration()withApiClient(configuration)asapi_client:api_instance=LogsIndexesApi(api_client)response=api_instance.list_log_indexes()print(response)
# Get all indexes returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V1::LogsIndexesAPI.new
p api_instance.list_log_indexes()
// Get all indexes returns "OK" response
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/DataDog/datadog-api-client-go/v2/api/datadog"
    "github.com/DataDog/datadog-api-client-go/v2/api/datadogV1"
)

func main() {
    ctx := datadog.NewDefaultContext(context.Background())
    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)
    api := datadogV1.NewLogsIndexesApi(apiClient)
    resp, r, err := api.ListLogIndexes(ctx)

    if err != nil {
        fmt.Fprintf(os.Stderr, "Error when calling `LogsIndexesApi.ListLogIndexes`: %v\n", err)
        fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
    }

    responseContent, _ := json.MarshalIndent(resp, "", "  ")
    fmt.Fprintf(os.Stdout, "Response from `LogsIndexesApi.ListLogIndexes`:\n%s\n", responseContent)
}
// Get all indexes returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v1.api.LogsIndexesApi;importcom.datadog.api.client.v1.model.LogsIndexListResponse;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();LogsIndexesApiapiInstance=newLogsIndexesApi(defaultClient);try{LogsIndexListResponseresult=apiInstance.listLogIndexes();System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling LogsIndexesApi#listLogIndexes");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Get all indexes returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV1::api_logs_indexes::LogsIndexesAPI;

#[tokio::main]
async fn main() {
    let configuration = datadog::Configuration::new();
    let api = LogsIndexesAPI::with_config(configuration);
    let resp = api.list_log_indexes().await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<DD_API_KEY>"DD_APP_KEY="<DD_APP_KEY>"cargo run
/**
* Get all indexes returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

apiInstance
  .listLogIndexes()
  .then((data: v1.LogsIndexListResponse) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
"""
Get an index returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v1.api.logs_indexes_apiimportLogsIndexesApiconfiguration=Configuration()withApiClient(configuration)asapi_client:api_instance=LogsIndexesApi(api_client)response=api_instance.get_logs_index(name="name",)print(response)
# Get an index returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V1::LogsIndexesAPI.new
p api_instance.get_logs_index("name")
// Get an index returns "OK" response
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/DataDog/datadog-api-client-go/v2/api/datadog"
    "github.com/DataDog/datadog-api-client-go/v2/api/datadogV1"
)

func main() {
    ctx := datadog.NewDefaultContext(context.Background())
    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)
    api := datadogV1.NewLogsIndexesApi(apiClient)
    resp, r, err := api.GetLogsIndex(ctx, "name")

    if err != nil {
        fmt.Fprintf(os.Stderr, "Error when calling `LogsIndexesApi.GetLogsIndex`: %v\n", err)
        fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
    }

    responseContent, _ := json.MarshalIndent(resp, "", "  ")
    fmt.Fprintf(os.Stdout, "Response from `LogsIndexesApi.GetLogsIndex`:\n%s\n", responseContent)
}
// Get an index returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v1.api.LogsIndexesApi;importcom.datadog.api.client.v1.model.LogsIndex;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();LogsIndexesApiapiInstance=newLogsIndexesApi(defaultClient);try{LogsIndexresult=apiInstance.getLogsIndex("name");System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling LogsIndexesApi#getLogsIndex");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Get an index returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV1::api_logs_indexes::LogsIndexesAPI;

#[tokio::main]
async fn main() {
    let configuration = datadog::Configuration::new();
    let api = LogsIndexesAPI::with_config(configuration);
    let resp = api.get_logs_index("name".to_string()).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Get an index returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

const params: v1.LogsIndexesApiGetLogsIndexRequest = {
  name: "name",
};

apiInstance
  .getLogsIndex(params)
  .then((data: v1.LogsIndex) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
"""
Create an index returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v1.api.logs_indexes_apiimportLogsIndexesApifromdatadog_api_client.v1.model.logs_daily_limit_resetimportLogsDailyLimitResetfromdatadog_api_client.v1.model.logs_exclusionimportLogsExclusionfromdatadog_api_client.v1.model.logs_exclusion_filterimportLogsExclusionFilterfromdatadog_api_client.v1.model.logs_filterimportLogsFilterfromdatadog_api_client.v1.model.logs_indeximportLogsIndexbody=LogsIndex(daily_limit=300000000,daily_limit_reset=LogsDailyLimitReset(reset_time="14:00",reset_utc_offset="+02:00",),daily_limit_warning_threshold_percentage=70.0,exclusion_filters=[LogsExclusion(filter=LogsExclusionFilter(query="*",sample_rate=1.0,),name="payment",),],filter=LogsFilter(query="source:python",),name="main",num_flex_logs_retention_days=360,num_retention_days=15,)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=LogsIndexesApi(api_client)response=api_instance.create_logs_index(body=body)print(response)
# Create an index returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V1::LogsIndexesAPI.new

body = DatadogAPIClient::V1::LogsIndex.new({
  daily_limit: 300000000,
  daily_limit_reset: DatadogAPIClient::V1::LogsDailyLimitReset.new({
    reset_time: "14:00",
    reset_utc_offset: "+02:00",
  }),
  daily_limit_warning_threshold_percentage: 70,
  exclusion_filters: [
    DatadogAPIClient::V1::LogsExclusion.new({
      filter: DatadogAPIClient::V1::LogsExclusionFilter.new({
        query: "*",
        sample_rate: 1.0,
      }),
      name: "payment",
    }),
  ],
  filter: DatadogAPIClient::V1::LogsFilter.new({
    query: "source:python",
  }),
  name: "main",
  num_flex_logs_retention_days: 360,
  num_retention_days: 15,
})
p api_instance.create_logs_index(body)
// Create an index returns "OK" response
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/DataDog/datadog-api-client-go/v2/api/datadog"
    "github.com/DataDog/datadog-api-client-go/v2/api/datadogV1"
)

func main() {
    body := datadogV1.LogsIndex{
        DailyLimit: datadog.PtrInt64(300000000),
        DailyLimitReset: &datadogV1.LogsDailyLimitReset{
            ResetTime:      datadog.PtrString("14:00"),
            ResetUtcOffset: datadog.PtrString("+02:00"),
        },
        DailyLimitWarningThresholdPercentage: datadog.PtrFloat64(70),
        ExclusionFilters: []datadogV1.LogsExclusion{
            {
                Filter: &datadogV1.LogsExclusionFilter{
                    Query:      datadog.PtrString("*"),
                    SampleRate: 1.0,
                },
                Name: "payment",
            },
        },
        Filter: datadogV1.LogsFilter{
            Query: datadog.PtrString("source:python"),
        },
        Name:                     "main",
        NumFlexLogsRetentionDays: datadog.PtrInt64(360),
        NumRetentionDays:         datadog.PtrInt64(15),
    }
    ctx := datadog.NewDefaultContext(context.Background())
    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)
    api := datadogV1.NewLogsIndexesApi(apiClient)
    resp, r, err := api.CreateLogsIndex(ctx, body)

    if err != nil {
        fmt.Fprintf(os.Stderr, "Error when calling `LogsIndexesApi.CreateLogsIndex`: %v\n", err)
        fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
    }

    responseContent, _ := json.MarshalIndent(resp, "", "  ")
    fmt.Fprintf(os.Stdout, "Response from `LogsIndexesApi.CreateLogsIndex`:\n%s\n", responseContent)
}
// Create an index returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v1.api.LogsIndexesApi;importcom.datadog.api.client.v1.model.LogsDailyLimitReset;importcom.datadog.api.client.v1.model.LogsExclusion;importcom.datadog.api.client.v1.model.LogsExclusionFilter;importcom.datadog.api.client.v1.model.LogsFilter;importcom.datadog.api.client.v1.model.LogsIndex;importjava.util.Collections;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();LogsIndexesApiapiInstance=newLogsIndexesApi(defaultClient);LogsIndexbody=newLogsIndex().dailyLimit(300000000L).dailyLimitReset(newLogsDailyLimitReset().resetTime("14:00").resetUtcOffset("+02:00")).dailyLimitWarningThresholdPercentage(70.0).exclusionFilters(Collections.singletonList(newLogsExclusion().filter(newLogsExclusionFilter().query("*").sampleRate(1.0)).name("payment"))).filter(newLogsFilter().query("source:python")).name("main").numFlexLogsRetentionDays(360L).numRetentionDays(15L);try{LogsIndexresult=apiInstance.createLogsIndex(body);System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling LogsIndexesApi#createLogsIndex");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Create an index returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV1::api_logs_indexes::LogsIndexesAPI;
use datadog_api_client::datadogV1::model::LogsDailyLimitReset;
use datadog_api_client::datadogV1::model::LogsExclusion;
use datadog_api_client::datadogV1::model::LogsExclusionFilter;
use datadog_api_client::datadogV1::model::LogsFilter;
use datadog_api_client::datadogV1::model::LogsIndex;

#[tokio::main]
async fn main() {
    let body = LogsIndex::new(
        LogsFilter::new().query("source:python".to_string()),
        "main".to_string(),
    )
    .daily_limit(300000000)
    .daily_limit_reset(
        LogsDailyLimitReset::new()
            .reset_time("14:00".to_string())
            .reset_utc_offset("+02:00".to_string()),
    )
    .daily_limit_warning_threshold_percentage(70.0 as f64)
    .exclusion_filters(vec![LogsExclusion::new("payment".to_string())
        .filter(LogsExclusionFilter::new(1.0).query("*".to_string()))])
    .num_flex_logs_retention_days(360)
    .num_retention_days(15);
    let configuration = datadog::Configuration::new();
    let api = LogsIndexesAPI::with_config(configuration);
    let resp = api.create_logs_index(body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Create an index returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

const params: v1.LogsIndexesApiCreateLogsIndexRequest = {
  body: {
    dailyLimit: 300000000,
    dailyLimitReset: {
      resetTime: "14:00",
      resetUtcOffset: "+02:00",
    },
    dailyLimitWarningThresholdPercentage: 70,
    exclusionFilters: [
      {
        filter: {
          query: "*",
          sampleRate: 1.0,
        },
        name: "payment",
      },
    ],
    filter: {
      query: "source:python",
    },
    name: "main",
    numFlexLogsRetentionDays: 360,
    numRetentionDays: 15,
  },
};

apiInstance
  .createLogsIndex(params)
  .then((data: v1.LogsIndex) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
Request body fields for updating an index:

    daily_limit (int64): The number of log events you can send in this index per day before you are rate-limited.
    daily_limit_reset (object): Object containing options to override the default daily limit reset time.
        reset_time (string): String in HH:00 format representing the time of day the daily limit should be reset. The hours must be between 00 and 23 (inclusive).
        reset_utc_offset (string): String in (-|+)HH:00 format representing the UTC offset to apply to the given reset time. The hours must be between -12 and +14 (inclusive).
    daily_limit_warning_threshold_percentage (double): A percentage threshold of the daily quota at which a Datadog warning event is generated.
    disable_daily_limit (boolean): If true, sets the daily_limit value to null and the index is not limited on a daily basis (any daily_limit value specified in the request is ignored). If false or omitted, the index's current daily_limit is maintained. See the sketch after this field list.
    exclusion_filters ([object]): An array of exclusion objects. Logs are tested against the query of each filter, following the order of the array. Only the first matching active exclusion applies; any others are ignored.
        filter (object): An exclusion filter is defined by a query, a sampling rule, and an active/inactive toggle.
            query (string): The default query is *, meaning all logs flowing into the index would be excluded. Scope the exclusion filter down to a subset of logs with a log query.
            sample_rate [required] (double): Sample rate to apply to logs going through this exclusion filter; a value of 1.0 excludes all logs matching the query.
        is_enabled (boolean): Whether or not the exclusion filter is active.
        name [required] (string): Name of the index exclusion filter.
    filter [required] (object): Filter for logs.
        query (string): The filter query.
    num_flex_logs_retention_days (int64): The total number of days logs are stored in Standard and Flex Tier before being deleted from the index. If Standard Tier is enabled on this index, logs are first retained in Standard Tier for the number of days specified through num_retention_days, and then stored in Flex Tier until the number of days specified in num_flex_logs_retention_days is reached. The available values depend on the retention plans specified in your organization's contract or subscriptions. Note: Changing this value affects all logs already in this index. It may also affect billing.
    num_retention_days (int64): The number of days logs are stored in Standard Tier before aging into Flex Tier or being deleted from the index. The available values depend on the retention plans specified in your organization's contract or subscriptions. Note: Changing this value affects all logs already in this index. It may also affect billing.
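As a focused variant of the full update examples below, here is a minimal Python sketch that removes an index's daily limit. The index name "main" is a hypothetical placeholder; the calls are the same datadog_api_client v1 surface used throughout this page.

# Minimal sketch: remove the daily limit from a hypothetical index "main".
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.logs_indexes_api import LogsIndexesApi
from datadog_api_client.v1.model.logs_filter import LogsFilter
from datadog_api_client.v1.model.logs_index_update_request import LogsIndexUpdateRequest

# filter is required on the update request; daily_limit is omitted because
# disable_daily_limit=True makes the API ignore any value sent for it.
body = LogsIndexUpdateRequest(
    filter=LogsFilter(query="source:python"),
    disable_daily_limit=True,
)

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api_instance = LogsIndexesApi(api_client)
    print(api_instance.update_logs_index(name="main", body=body))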
"""
Update an index returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v1.api.logs_indexes_apiimportLogsIndexesApifromdatadog_api_client.v1.model.logs_daily_limit_resetimportLogsDailyLimitResetfromdatadog_api_client.v1.model.logs_exclusionimportLogsExclusionfromdatadog_api_client.v1.model.logs_exclusion_filterimportLogsExclusionFilterfromdatadog_api_client.v1.model.logs_filterimportLogsFilterfromdatadog_api_client.v1.model.logs_index_update_requestimportLogsIndexUpdateRequestbody=LogsIndexUpdateRequest(daily_limit=300000000,daily_limit_reset=LogsDailyLimitReset(reset_time="14:00",reset_utc_offset="+02:00",),daily_limit_warning_threshold_percentage=70.0,disable_daily_limit=False,exclusion_filters=[LogsExclusion(filter=LogsExclusionFilter(query="*",sample_rate=1.0,),name="payment",),],filter=LogsFilter(query="source:python",),num_flex_logs_retention_days=360,num_retention_days=15,)configuration=Configuration()withApiClient(configuration)asapi_client:api_instance=LogsIndexesApi(api_client)response=api_instance.update_logs_index(name="name",body=body)print(response)
# Update an index returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V1::LogsIndexesAPI.new

body = DatadogAPIClient::V1::LogsIndexUpdateRequest.new({
  daily_limit: 300000000,
  daily_limit_reset: DatadogAPIClient::V1::LogsDailyLimitReset.new({
    reset_time: "14:00",
    reset_utc_offset: "+02:00",
  }),
  daily_limit_warning_threshold_percentage: 70,
  disable_daily_limit: false,
  exclusion_filters: [
    DatadogAPIClient::V1::LogsExclusion.new({
      filter: DatadogAPIClient::V1::LogsExclusionFilter.new({
        query: "*",
        sample_rate: 1.0,
      }),
      name: "payment",
    }),
  ],
  filter: DatadogAPIClient::V1::LogsFilter.new({
    query: "source:python",
  }),
  num_flex_logs_retention_days: 360,
  num_retention_days: 15,
})
p api_instance.update_logs_index("name", body)
// Update an index returns "OK" response
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/DataDog/datadog-api-client-go/v2/api/datadog"
    "github.com/DataDog/datadog-api-client-go/v2/api/datadogV1"
)

func main() {
    body := datadogV1.LogsIndexUpdateRequest{
        DailyLimit: datadog.PtrInt64(300000000),
        DailyLimitReset: &datadogV1.LogsDailyLimitReset{
            ResetTime:      datadog.PtrString("14:00"),
            ResetUtcOffset: datadog.PtrString("+02:00"),
        },
        DailyLimitWarningThresholdPercentage: datadog.PtrFloat64(70),
        DisableDailyLimit:                    datadog.PtrBool(false),
        ExclusionFilters: []datadogV1.LogsExclusion{
            {
                Filter: &datadogV1.LogsExclusionFilter{
                    Query:      datadog.PtrString("*"),
                    SampleRate: 1.0,
                },
                Name: "payment",
            },
        },
        Filter: datadogV1.LogsFilter{
            Query: datadog.PtrString("source:python"),
        },
        NumFlexLogsRetentionDays: datadog.PtrInt64(360),
        NumRetentionDays:         datadog.PtrInt64(15),
    }
    ctx := datadog.NewDefaultContext(context.Background())
    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)
    api := datadogV1.NewLogsIndexesApi(apiClient)
    resp, r, err := api.UpdateLogsIndex(ctx, "name", body)

    if err != nil {
        fmt.Fprintf(os.Stderr, "Error when calling `LogsIndexesApi.UpdateLogsIndex`: %v\n", err)
        fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
    }

    responseContent, _ := json.MarshalIndent(resp, "", "  ")
    fmt.Fprintf(os.Stdout, "Response from `LogsIndexesApi.UpdateLogsIndex`:\n%s\n", responseContent)
}
// Update an index returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v1.api.LogsIndexesApi;importcom.datadog.api.client.v1.model.LogsDailyLimitReset;importcom.datadog.api.client.v1.model.LogsExclusion;importcom.datadog.api.client.v1.model.LogsExclusionFilter;importcom.datadog.api.client.v1.model.LogsFilter;importcom.datadog.api.client.v1.model.LogsIndex;importcom.datadog.api.client.v1.model.LogsIndexUpdateRequest;importjava.util.Collections;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();LogsIndexesApiapiInstance=newLogsIndexesApi(defaultClient);LogsIndexUpdateRequestbody=newLogsIndexUpdateRequest().dailyLimit(300000000L).dailyLimitReset(newLogsDailyLimitReset().resetTime("14:00").resetUtcOffset("+02:00")).dailyLimitWarningThresholdPercentage(70.0).disableDailyLimit(false).exclusionFilters(Collections.singletonList(newLogsExclusion().filter(newLogsExclusionFilter().query("*").sampleRate(1.0)).name("payment"))).filter(newLogsFilter().query("source:python")).numFlexLogsRetentionDays(360L).numRetentionDays(15L);try{LogsIndexresult=apiInstance.updateLogsIndex("name",body);System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling LogsIndexesApi#updateLogsIndex");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Update an index returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV1::api_logs_indexes::LogsIndexesAPI;
use datadog_api_client::datadogV1::model::LogsDailyLimitReset;
use datadog_api_client::datadogV1::model::LogsExclusion;
use datadog_api_client::datadogV1::model::LogsExclusionFilter;
use datadog_api_client::datadogV1::model::LogsFilter;
use datadog_api_client::datadogV1::model::LogsIndexUpdateRequest;

#[tokio::main]
async fn main() {
    let body = LogsIndexUpdateRequest::new(LogsFilter::new().query("source:python".to_string()))
        .daily_limit(300000000)
        .daily_limit_reset(
            LogsDailyLimitReset::new()
                .reset_time("14:00".to_string())
                .reset_utc_offset("+02:00".to_string()),
        )
        .daily_limit_warning_threshold_percentage(70.0 as f64)
        .disable_daily_limit(false)
        .exclusion_filters(vec![LogsExclusion::new("payment".to_string())
            .filter(LogsExclusionFilter::new(1.0).query("*".to_string()))])
        .num_flex_logs_retention_days(360)
        .num_retention_days(15);
    let configuration = datadog::Configuration::new();
    let api = LogsIndexesAPI::with_config(configuration);
    let resp = api.update_logs_index("name".to_string(), body).await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Update an index returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

const params: v1.LogsIndexesApiUpdateLogsIndexRequest = {
  body: {
    dailyLimit: 300000000,
    dailyLimitReset: {
      resetTime: "14:00",
      resetUtcOffset: "+02:00",
    },
    dailyLimitWarningThresholdPercentage: 70,
    disableDailyLimit: false,
    exclusionFilters: [
      {
        filter: {
          query: "*",
          sampleRate: 1.0,
        },
        name: "payment",
      },
    ],
    filter: {
      query: "source:python",
    },
    numFlexLogsRetentionDays: 360,
    numRetentionDays: 15,
  },
  name: "name",
};

apiInstance
  .updateLogsIndex(params)
  .then((data: v1.LogsIndex) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
Object containing the ordered list of log index names.
index_names [required] ([string]): Array of strings identifying the indexes of your organization by name. Logs are tested against the query filter of each index, one by one, following the order of the array, and are stored in the first matching index (see the sketch below).
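A hypothetical Python sketch of the routing rule just described; matches() again stands in for Datadog's log query evaluation and is not a client-library function.

# Hypothetical illustration of index routing: each log is tested against the
# indexes' filter queries in index_names order and stored in the first match.
from typing import Optional


def matches(query: str, log: dict) -> bool:
    # Stand-in for Datadog's log query evaluation; "*" matches everything.
    return query == "*" or query in log.get("message", "")


def route(log: dict, index_names: list, indexes_by_name: dict) -> Optional[str]:
    for name in index_names:
        index = indexes_by_name[name]
        if matches(index["filter"]["query"], log):
            return name  # stored here; later indexes never see this log
    return None  # no index matched


indexes = {
    "payments": {"filter": {"query": "source:payments"}},
    "main": {"filter": {"query": "*"}},
}
# With order ["payments", "main"], payments logs land in "payments";
# everything else falls through to the catch-all "main".
assert route({"message": "source:payments checkout"}, ["payments", "main"], indexes) == "payments"
assert route({"message": "source:web GET /"}, ["payments", "main"], indexes) == "main"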
"""
Get indexes order returns "OK" response
"""fromdatadog_api_clientimportApiClient,Configurationfromdatadog_api_client.v1.api.logs_indexes_apiimportLogsIndexesApiconfiguration=Configuration()withApiClient(configuration)asapi_client:api_instance=LogsIndexesApi(api_client)response=api_instance.get_logs_index_order()print(response)
# Get indexes order returns "OK" response

require "datadog_api_client"

api_instance = DatadogAPIClient::V1::LogsIndexesAPI.new
p api_instance.get_logs_index_order()
// Get indexes order returns "OK" response
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/DataDog/datadog-api-client-go/v2/api/datadog"
    "github.com/DataDog/datadog-api-client-go/v2/api/datadogV1"
)

func main() {
    ctx := datadog.NewDefaultContext(context.Background())
    configuration := datadog.NewConfiguration()
    apiClient := datadog.NewAPIClient(configuration)
    api := datadogV1.NewLogsIndexesApi(apiClient)
    resp, r, err := api.GetLogsIndexOrder(ctx)

    if err != nil {
        fmt.Fprintf(os.Stderr, "Error when calling `LogsIndexesApi.GetLogsIndexOrder`: %v\n", err)
        fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
    }

    responseContent, _ := json.MarshalIndent(resp, "", "  ")
    fmt.Fprintf(os.Stdout, "Response from `LogsIndexesApi.GetLogsIndexOrder`:\n%s\n", responseContent)
}
// Get indexes order returns "OK" responseimportcom.datadog.api.client.ApiClient;importcom.datadog.api.client.ApiException;importcom.datadog.api.client.v1.api.LogsIndexesApi;importcom.datadog.api.client.v1.model.LogsIndexesOrder;publicclassExample{publicstaticvoidmain(String[]args){ApiClientdefaultClient=ApiClient.getDefaultApiClient();LogsIndexesApiapiInstance=newLogsIndexesApi(defaultClient);try{LogsIndexesOrderresult=apiInstance.getLogsIndexOrder();System.out.println(result);}catch(ApiExceptione){System.err.println("Exception when calling LogsIndexesApi#getLogsIndexOrder");System.err.println("Status code: "+e.getCode());System.err.println("Reason: "+e.getResponseBody());System.err.println("Response headers: "+e.getResponseHeaders());e.printStackTrace();}}}
// Get indexes order returns "OK" response
use datadog_api_client::datadog;
use datadog_api_client::datadogV1::api_logs_indexes::LogsIndexesAPI;

#[tokio::main]
async fn main() {
    let configuration = datadog::Configuration::new();
    let api = LogsIndexesAPI::with_config(configuration);
    let resp = api.get_logs_index_order().await;
    if let Ok(value) = resp {
        println!("{:#?}", value);
    } else {
        println!("{:#?}", resp.unwrap_err());
    }
}
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Get indexes order returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

apiInstance
  .getLogsIndexOrder()
  .then((data: v1.LogsIndexesOrder) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));
This endpoint updates the index order of your organization.
It returns the index order object passed in the request body when the request is successful.
Request
Body Data (required)
Object containing the new ordered list of index names.

index_names [required] ([string]): Array of strings identifying the indexes of your organization by name. Logs are tested against the query filter of each index, one by one, following the order of the array, and are stored in the first matching index.
Object containing the ordered list of log index names.
index_names [required] ([string]): Array of strings identifying the indexes of your organization by name. Logs are tested against the query filter of each index, one by one, following the order of the array, and are stored in the first matching index.
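A minimal Python sketch of this call, mirroring the earlier Python examples; update_logs_index_order and LogsIndexesOrder come from the same datadog_api_client v1 surface used throughout this page.

"""
Update indexes order, a minimal sketch mirroring the examples above.
"""

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.logs_indexes_api import LogsIndexesApi
from datadog_api_client.v1.model.logs_indexes_order import LogsIndexesOrder

body = LogsIndexesOrder(
    index_names=["main", "payments", "web"],
)

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api_instance = LogsIndexesApi(api_client)
    response = api_instance.update_logs_index_order(body=body)

    print(response)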
DD_SITE="datadoghq.comus3.datadoghq.comus5.datadoghq.comdatadoghq.euap1.datadoghq.comddog-gov.com"DD_API_KEY="<API-KEY>"DD_APP_KEY="<APP-KEY>"cargo run
/**
* Update indexes order returns "OK" response
 */

import { client, v1 } from "@datadog/datadog-api-client";

const configuration = client.createConfiguration();
const apiInstance = new v1.LogsIndexesApi(configuration);

const params: v1.LogsIndexesApiUpdateLogsIndexOrderRequest = {
  body: {
    indexNames: ["main", "payments", "web"],
  },
};

apiInstance
  .updateLogsIndexOrder(params)
  .then((data: v1.LogsIndexesOrder) => {
    console.log("API called successfully. Returned data: " + JSON.stringify(data));
  })
  .catch((error: any) => console.error(error));