Archives forward all the logs ingested to a cloud storage system.
See the Archives Page for a list of the archives currently configured in Datadog.
GET https://api.datadoghq.com/api/v2/logs/config/archives (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Get the list of configured logs archives with their definitions.
This endpoint requires the logs_read_archives
permission.
OK
The available archives.
Field
Type
Description
data
[object]
A list of archives.
attributes
object
The attributes associated with the archive.
destination [required]
object <oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
state
enum
The state of the archive.
Allowed enum values: UNKNOWN,WORKING,FAILING,WORKING_AUTH_LEGACY
id
string
The archive ID.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": [
{
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
],
"state": "WORKING"
},
"id": "a2zcMylnM4OCHpYusxIi3g",
"type": "archives"
}
]
}
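The list response above can be processed with plain Ruby to surface archives that need attention. This is a minimal sketch against a hand-built sample shaped like the JSON example (the archive names and IDs here are hypothetical); it flags any archive whose state is not WORKING, since FAILING or WORKING_AUTH_LEGACY usually signals a broken destination integration.

```ruby
require "json"

# Hypothetical response shaped like the example above.
response = <<~JSON
  {
    "data": [
      {
        "attributes": { "name": "Nginx Archive", "state": "WORKING" },
        "id": "a2zcMylnM4OCHpYusxIi3g",
        "type": "archives"
      },
      {
        "attributes": { "name": "Backup Archive", "state": "FAILING" },
        "id": "b3adNzkoP5PDIqZvtyJj4h",
        "type": "archives"
      }
    ]
  }
JSON

archives = JSON.parse(response).fetch("data", [])

# Flag any archive whose state is not WORKING.
unhealthy = archives.reject { |a| a.dig("attributes", "state") == "WORKING" }
unhealthy.each do |a|
  puts "#{a.dig("attributes", "name")} (#{a["id"]}): #{a.dig("attributes", "state")}"
end
```

The same traversal applies to the objects returned by the Ruby client's `list_logs_archives`, though the client exposes them as typed models rather than raw hashes.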
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Get all archives returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
p api_instance.list_logs_archives()
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
POST https://api.datadoghq.com/api/v2/logs/config/archives (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Create an archive in your organization.
This endpoint requires the logs_write_archives
permission.
The definition of the new archive.
Field
Type
Description
data
object
The definition of an archive.
attributes
object
The attributes associated with the archive.
destination [required]
<oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": {
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
]
},
"type": "archives"
}
}
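The request example above uses an Azure destination (Option 1). For an S3 destination (Option 3), the body follows the same shape with the fields from the schema above. This is a plain-hash sketch with hypothetical bucket, account, and role values; it checks the [required] destination fields before the body would be sent.

```ruby
require "json"

# Hypothetical values; substitute your own bucket, account ID, and role name.
body = {
  "data" => {
    "attributes" => {
      "destination" => {
        "type" => "s3",                        # Option 3: S3 destination
        "bucket" => "my-archive-bucket",
        "path" => "/nginx-logs",
        "integration" => {
          "account_id" => "123456789012",
          "role_name" => "DatadogLogsArchiveRole"
        },
        "storage_class" => "STANDARD_IA"
      },
      "name" => "Nginx Archive",
      "query" => "source:nginx",
      "include_tags" => true
    },
    "type" => "archives"
  }
}

# Verify the [required] fields of the S3 destination from the schema above.
dest = body.dig("data", "attributes", "destination")
missing = ["bucket", "integration", "type"].reject { |k| dest.key?(k) }
raise "missing destination fields: #{missing}" unless missing.empty?

puts JSON.pretty_generate(body)
```

With the typed Ruby client, the equivalent would use the S3 destination model instead of the Azure one shown in the example below, but the attribute names match this hash.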
OK
The logs archive.
Field
Type
Description
data
object
The definition of an archive.
attributes
object
The attributes associated with the archive.
destination [required]
object <oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
state
enum
The state of the archive.
Allowed enum values: UNKNOWN,WORKING,FAILING,WORKING_AUTH_LEGACY
id
string
The archive ID.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": {
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
],
"state": "WORKING"
},
"id": "a2zcMylnM4OCHpYusxIi3g",
"type": "archives"
}
}
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Create an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
body = DatadogAPIClient::V2::LogsArchiveCreateRequest.new({
data: DatadogAPIClient::V2::LogsArchiveCreateRequestDefinition.new({
attributes: DatadogAPIClient::V2::LogsArchiveCreateRequestAttributes.new({
destination: DatadogAPIClient::V2::LogsArchiveDestinationAzure.new({
container: "container-name",
integration: DatadogAPIClient::V2::LogsArchiveIntegrationAzure.new({
client_id: "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
tenant_id: "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
}),
storage_account: "account-name",
type: DatadogAPIClient::V2::LogsArchiveDestinationAzureType::AZURE,
}),
include_tags: false,
name: "Nginx Archive",
query: "source:nginx",
rehydration_max_scan_size_in_gb: 100,
rehydration_tags: [
"team:intake",
"team:app",
],
}),
type: "archives",
}),
})
p api_instance.create_logs_archive(body)
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
GET https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id} (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Get a specific archive from your organization.
This endpoint requires the logs_read_archives
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
OK
The logs archive.
Field
Type
Description
data
object
The definition of an archive.
attributes
object
The attributes associated with the archive.
destination [required]
object <oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
state
enum
The state of the archive.
Allowed enum values: UNKNOWN,WORKING,FAILING,WORKING_AUTH_LEGACY
id
string
The archive ID.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": {
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
],
"state": "WORKING"
},
"id": "a2zcMylnM4OCHpYusxIi3g",
"type": "archives"
}
}
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Get an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
p api_instance.get_logs_archive("archive_id")
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
PUT https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id} (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Update a given archive configuration.
Note: This method replaces your current archive configuration with the new configuration sent in the request.
This endpoint requires the logs_write_archives
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
New definition of the archive.
Field
Type
Description
data
object
The definition of an archive.
attributes
object
The attributes associated with the archive.
destination [required]
<oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": {
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
]
},
"type": "archives"
}
}
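Because PUT replaces the whole configuration, a partial body would drop any attribute it omits. A safer pattern is read-modify-write: start from the archive's current definition and change only the field you need. This plain-hash sketch uses hypothetical current attributes to show the shape of the resulting payload.

```ruby
# Hypothetical current archive attributes, as fetched from the GET endpoint.
current = {
  "name" => "Nginx Archive",
  "query" => "source:nginx",
  "include_tags" => false,
  "rehydration_tags" => ["team:intake", "team:app"],
  "destination" => {
    "type" => "azure",
    "container" => "container-name",
    "storage_account" => "account-name",
    "integration" => { "client_id" => "client-id", "tenant_id" => "tenant-id" }
  }
}

# Read-modify-write: widen the filter while preserving every other attribute.
updated = current.merge("query" => "source:(nginx OR apache)")

payload = { "data" => { "attributes" => updated, "type" => "archives" } }
```

Sending only `{"query" => ...}` instead of the merged hash would reset the other attributes, since the endpoint does not patch in place.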
OK
The logs archive.
Field
Type
Description
data
object
The definition of an archive.
attributes
object
The attributes associated with the archive.
destination [required]
object <oneOf>
An archive's destination.
Option 1
object
The Azure archive destination.
container [required]
string
The container where the archive will be stored.
integration [required]
object
The Azure archive's integration destination.
client_id [required]
string
A client ID.
tenant_id [required]
string
A tenant ID.
path
string
The archive path.
region
string
The region where the archive will be stored.
storage_account [required]
string
The associated storage account.
type [required]
enum
Type of the Azure archive destination.
Allowed enum values: azure
default: azure
Option 2
object
The GCS archive destination.
bucket [required]
string
The bucket where the archive will be stored.
integration [required]
object
The GCS archive's integration destination.
client_email [required]
string
A client email.
project_id
string
A project ID.
path
string
The archive path.
type [required]
enum
Type of the GCS archive destination.
Allowed enum values: gcs
default: gcs
Option 3
object
The S3 archive destination.
bucket [required]
string
The bucket where the archive will be stored.
encryption
object
The S3 encryption settings.
key
string
An Amazon Resource Name (ARN) used to identify an AWS KMS key.
type [required]
enum
Type of S3 encryption for a destination.
Allowed enum values: NO_OVERRIDE,SSE_S3,SSE_KMS
integration [required]
object
The S3 Archive's integration destination.
account_id [required]
string
The account ID for the integration.
role_name [required]
string
The role name for the integration.
path
string
The archive path.
storage_class
enum
The storage class where the archive will be stored.
Allowed enum values: STANDARD,STANDARD_IA,ONEZONE_IA,INTELLIGENT_TIERING,GLACIER_IR
default: STANDARD
type [required]
enum
Type of the S3 archive destination.
Allowed enum values: s3
default: s3
include_tags
boolean
To store tags in the archive, set this to true. If set to false, tags are discarded when logs are sent to the archive.
name [required]
string
The archive name.
query [required]
string
The archive query/filter. Logs matching this query are included in the archive.
rehydration_max_scan_size_in_gb
int64
Maximum scan size for rehydration from this archive.
rehydration_tags
[string]
An array of tags to add to rehydrated logs from an archive.
state
enum
The state of the archive.
Allowed enum values: UNKNOWN,WORKING,FAILING,WORKING_AUTH_LEGACY
id
string
The archive ID.
type [required]
string
The type of the resource. The value should always be archives.
default: archives
{
"data": {
"attributes": {
"destination": {
"container": "container-name",
"integration": {
"client_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
"tenant_id": "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa"
},
"path": "string",
"region": "string",
"storage_account": "account-name",
"type": "azure"
},
"include_tags": false,
"name": "Nginx Archive",
"query": "source:nginx",
"rehydration_max_scan_size_in_gb": 100,
"rehydration_tags": [
"team:intake",
"team:app"
],
"state": "WORKING"
},
"id": "a2zcMylnM4OCHpYusxIi3g",
"type": "archives"
}
}
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Update an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
body = DatadogAPIClient::V2::LogsArchiveCreateRequest.new({
data: DatadogAPIClient::V2::LogsArchiveCreateRequestDefinition.new({
attributes: DatadogAPIClient::V2::LogsArchiveCreateRequestAttributes.new({
destination: DatadogAPIClient::V2::LogsArchiveDestinationAzure.new({
container: "container-name",
integration: DatadogAPIClient::V2::LogsArchiveIntegrationAzure.new({
client_id: "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
tenant_id: "aaaaaaaa-1a1a-1a1a-1a1a-aaaaaaaaaaaa",
}),
storage_account: "account-name",
type: DatadogAPIClient::V2::LogsArchiveDestinationAzureType::AZURE,
}),
include_tags: false,
name: "Nginx Archive",
query: "source:nginx",
rehydration_max_scan_size_in_gb: 100,
rehydration_tags: [
"team:intake",
"team:app",
],
}),
type: "archives",
}),
})
p api_instance.update_logs_archive("archive_id", body)
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
DELETE https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id} (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Delete a given archive from your organization.
This endpoint requires the logs_write_archives
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
OK
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Delete an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
api_instance.delete_logs_archive("archive_id")
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
GET https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id}/readers (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Returns all read roles a given archive is restricted to.
This endpoint requires the logs_read_config
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
OK
Response containing information about multiple roles.
Field
Type
Description
data
[object]
Array of returned roles.
attributes
object
Attributes of the role.
created_at
date-time
Creation time of the role.
modified_at
date-time
Time of last role modification.
name
string
The name of the role. The name is neither unique nor a stable identifier of the role.
user_count
int64
Number of users with that role.
id
string
The unique identifier of the role.
relationships
object
Relationships of the role object returned by the API.
permissions
object
Relationship to multiple permissions objects.
data
[object]
Relationships to permission objects.
id
string
ID of the permission.
type
enum
Permissions resource type.
Allowed enum values: permissions
default: permissions
type [required]
enum
Roles type.
Allowed enum values: roles
default: roles
meta
object
Object describing meta attributes of response.
page
object
Pagination object.
total_count
int64
Total count.
total_filtered_count
int64
Total count of elements matched by the filter.
{
"data": [
{
"attributes": {
"created_at": "2019-09-19T10:00:00.000Z",
"modified_at": "2019-09-19T10:00:00.000Z",
"name": "string",
"user_count": "integer"
},
"id": "string",
"relationships": {
"permissions": {
"data": [
{
"id": "string",
"type": "permissions"
}
]
}
},
"type": "roles"
}
],
"meta": {
"page": {
"total_count": "integer",
"total_filtered_count": "integer"
}
}
}
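The roles response above nests each role's name and user count under `attributes`. This is a minimal plain-Ruby sketch that extracts them from a hand-built sample shaped like the JSON example (the role name and user count here are hypothetical; the example itself uses `"integer"`/`"string"` placeholders).

```ruby
require "json"

# Hypothetical response shaped like the example above.
response = <<~JSON
  {
    "data": [
      { "attributes": { "name": "Datadog Read Only Role", "user_count": 42 },
        "id": "3653d3c6-0c75-11ea-ad28-fb5701eabc7d",
        "type": "roles" }
    ],
    "meta": { "page": { "total_count": 1, "total_filtered_count": 1 } }
  }
JSON

roles = JSON.parse(response).fetch("data", [])

# Print each read role granted on the archive, with its user count.
roles.each do |role|
  puts "#{role.dig("attributes", "name")} (#{role["id"]}): " \
       "#{role.dig("attributes", "user_count")} users"
end
```

Note that the role name is neither unique nor stable (per the schema above), so keying any automation on `id` rather than `name` is the safer choice.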
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# List read roles for an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
p api_instance.list_archive_read_roles("archive_id")
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
POST https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id}/readers (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Adds a read role to an archive. (Roles API)
This endpoint requires the logs_write_archives
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
Field
Type
Description
data
object
Relationship to role object.
id
string
The unique identifier of the role.
type
enum
Roles type.
Allowed enum values: roles
default: roles
{
"data": {
"id": "3653d3c6-0c75-11ea-ad28-fb5701eabc7d",
"type": "roles"
}
}
OK
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Grant role to an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
body = DatadogAPIClient::V2::RelationshipToRole.new({
data: DatadogAPIClient::V2::RelationshipToRoleData.new({
id: "3653d3c6-0c75-11ea-ad28-fb5701eabc7d",
type: DatadogAPIClient::V2::RolesType::ROLES,
}),
})
api_instance.add_read_role_to_archive("archive_id", body)
First install the library and its dependencies, then save the example to example.rb and run the following command (set DD_SITE to your Datadog site: datadoghq.com, us3.datadoghq.com, us5.datadoghq.com, datadoghq.eu, ap1.datadoghq.com, or ddog-gov.com):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
DELETE https://api.datadoghq.com/api/v2/logs/config/archives/{archive_id}/readers (replace the host with your Datadog site's API endpoint: api.us3.datadoghq.com, api.us5.datadoghq.com, api.datadoghq.eu, api.ap1.datadoghq.com, or api.ddog-gov.com)
Removes a role from an archive. (Roles API)
This endpoint requires the logs_write_archives
permission.
Name
Type
Description
archive_id [required]
string
The ID of the archive.
Field
Type
Description
data
object
Relationship to role object.
id
string
The unique identifier of the role.
type
enum
Roles type.
Allowed enum values: roles
default: roles
{
"data": {
"id": "3653d3c6-0c75-11ea-ad28-fb5701eabc7d",
"type": "roles"
}
}
OK
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Not found
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Revoke role from an archive returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
body = DatadogAPIClient::V2::RelationshipToRole.new({
data: DatadogAPIClient::V2::RelationshipToRoleData.new({
id: "3653d3c6-0c75-11ea-ad28-fb5701eabc7d",
type: DatadogAPIClient::V2::RolesType::ROLES,
}),
})
api_instance.remove_role_from_archive("archive_id", body)
First install the library and its dependencies, then save the example to example.rb
and run the following command (set DD_SITE to your Datadog site, for example datadoghq.eu):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
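The revoke call targets the same resource with the DELETE method and the same role payload. A minimal Net::HTTP sketch that builds (but does not send) the request, with illustrative placeholder IDs:

```ruby
require "json"
require "net/http"
require "uri"

uri = URI("https://api.datadoghq.com/api/v2/logs/config/archives/archive_id/readers")

# Unlike a typical DELETE, this endpoint takes a body naming the role to
# revoke; Net::HTTP::Delete accepts one.
req = Net::HTTP::Delete.new(uri)
req["Content-Type"] = "application/json"
req["DD-API-KEY"] = "<API-KEY>"
req["DD-APPLICATION-KEY"] = "<APP-KEY>"
req.body = JSON.generate({
  data: { id: "3653d3c6-0c75-11ea-ad28-fb5701eabc7d", type: "roles" }
})

# To actually send it:
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```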
GET https://api.ap1.datadoghq.com/api/v2/logs/config/archive-order
https://api.datadoghq.eu/api/v2/logs/config/archive-order
https://api.ddog-gov.com/api/v2/logs/config/archive-order
https://api.datadoghq.com/api/v2/logs/config/archive-order
https://api.us3.datadoghq.com/api/v2/logs/config/archive-order
https://api.us5.datadoghq.com/api/v2/logs/config/archive-order
Get the current order of your archives.
This endpoint takes no JSON arguments.
This endpoint requires the logs_read_config
permission.
OK
An ordered list of archive IDs.
Field
Type
Description
data
object
The definition of an archive order.
attributes [required]
object
The attributes associated with the archive order.
archive_ids [required]
[string]
An ordered array of <ARCHIVE_ID>
strings; the order of archive IDs in the array
defines the overall archive order for Datadog.
type [required]
enum
Type of the archive order definition.
Allowed enum values: archive_order
default: archive_order
{
"data": {
"attributes": {
"archive_ids": [
"a2zcMylnM4OCHpYusxIi1g",
"a2zcMylnM4OCHpYusxIi2g",
"a2zcMylnM4OCHpYusxIi3g"
]
},
"type": "archive_order"
}
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Get archive order returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
p api_instance.get_logs_archive_order()
First install the library and its dependencies, then save the example to example.rb
and run the following command (set DD_SITE to your Datadog site, for example datadoghq.eu):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
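The response is JSON:API-shaped, so the ordered IDs sit under data.attributes.archive_ids. A minimal parsing sketch using the sample response above:

```ruby
require "json"

# Sample response body, as documented for GET /api/v2/logs/config/archive-order.
response_body = <<~JSON
  {
    "data": {
      "attributes": {
        "archive_ids": [
          "a2zcMylnM4OCHpYusxIi1g",
          "a2zcMylnM4OCHpYusxIi2g",
          "a2zcMylnM4OCHpYusxIi3g"
        ]
      },
      "type": "archive_order"
    }
  }
JSON

# dig walks the nested hash and returns nil (rather than raising) if a
# key is missing along the way.
order = JSON.parse(response_body).dig("data", "attributes", "archive_ids")
puts order.first
```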
PUT https://api.ap1.datadoghq.com/api/v2/logs/config/archive-order
https://api.datadoghq.eu/api/v2/logs/config/archive-order
https://api.ddog-gov.com/api/v2/logs/config/archive-order
https://api.datadoghq.com/api/v2/logs/config/archive-order
https://api.us3.datadoghq.com/api/v2/logs/config/archive-order
https://api.us5.datadoghq.com/api/v2/logs/config/archive-order
Update the order of your archives. Since logs are processed sequentially, reordering an archive may change the structure and content of the data processed by other archives.
Note: Using the PUT
method updates your archive’s order by replacing the current order
with the new one.
This endpoint requires the logs_write_archives
permission.
An object containing the new ordered list of archive IDs.
Field
Type
Description
data
object
The definition of an archive order.
attributes [required]
object
The attributes associated with the archive order.
archive_ids [required]
[string]
An ordered array of <ARCHIVE_ID>
strings; the order of archive IDs in the array
defines the overall archive order for Datadog.
type [required]
enum
Type of the archive order definition.
Allowed enum values: archive_order
default: archive_order
{
"data": {
"attributes": {
"archive_ids": [
"a2zcMylnM4OCHpYusxIi1g",
"a2zcMylnM4OCHpYusxIi2g",
"a2zcMylnM4OCHpYusxIi3g"
]
},
"type": "archive_order"
}
}
OK
An ordered list of archive IDs.
Field
Type
Description
data
object
The definition of an archive order.
attributes [required]
object
The attributes associated with the archive order.
archive_ids [required]
[string]
An ordered array of <ARCHIVE_ID>
strings; the order of archive IDs in the array
defines the overall archive order for Datadog.
type [required]
enum
Type of the archive order definition.
Allowed enum values: archive_order
default: archive_order
{
"data": {
"attributes": {
"archive_ids": [
"a2zcMylnM4OCHpYusxIi1g",
"a2zcMylnM4OCHpYusxIi2g",
"a2zcMylnM4OCHpYusxIi3g"
]
},
"type": "archive_order"
}
}
Bad Request
API error response.
{
"errors": [
"Bad Request"
]
}
Forbidden
API error response.
{
"errors": [
"Bad Request"
]
}
Unprocessable Entity
API error response.
{
"errors": [
"Bad Request"
]
}
Too many requests
API error response.
{
"errors": [
"Bad Request"
]
}
# Update archive order returns "OK" response
require "datadog_api_client"
api_instance = DatadogAPIClient::V2::LogsArchivesAPI.new
body = DatadogAPIClient::V2::LogsArchiveOrder.new({
data: DatadogAPIClient::V2::LogsArchiveOrderDefinition.new({
attributes: DatadogAPIClient::V2::LogsArchiveOrderAttributes.new({
archive_ids: [
"a2zcMylnM4OCHpYusxIi1g",
"a2zcMylnM4OCHpYusxIi2g",
"a2zcMylnM4OCHpYusxIi3g",
],
}),
type: DatadogAPIClient::V2::LogsArchiveOrderDefinitionType::ARCHIVE_ORDER,
}),
})
p api_instance.update_logs_archive_order(body)
First install the library and its dependencies, then save the example to example.rb
and run the following command (set DD_SITE to your Datadog site, for example datadoghq.eu):
DD_SITE="datadoghq.com" DD_API_KEY="<API-KEY>" DD_APP_KEY="<APP-KEY>" ruby "example.rb"
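Because PUT replaces the whole order, a typical update fetches the current list, moves one ID, and sends the complete array back. A sketch of the reordering step alone (the helper name is illustrative; it works on plain arrays of IDs, not client objects):

```ruby
# Move one archive ID to the front of the order. The returned array is
# what a subsequent PUT would submit as the complete new order.
def promote_archive(archive_ids, id)
  raise ArgumentError, "unknown archive id" unless archive_ids.include?(id)
  [id] + (archive_ids - [id])
end

current = [
  "a2zcMylnM4OCHpYusxIi1g",
  "a2zcMylnM4OCHpYusxIi2g",
  "a2zcMylnM4OCHpYusxIi3g"
]
new_order = promote_archive(current, "a2zcMylnM4OCHpYusxIi3g")
```

Promoting an archive this way matters because logs are processed sequentially: the earlier an archive appears, the earlier it matches logs.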