",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
Datadog Kubernetes Autoscaling automates the scaling of your Kubernetes environments based on utilization metrics. This feature enables you to make changes to your Kubernetes environments from within Datadog.

Datadog Kubernetes Autoscaling provides cluster scaling observability as well as workload scaling recommendations and automation. Datadog uses real-time and historical utilization metrics to make recommendations. With data from Cloud Cost Management, Datadog can also make recommendations based on cost.

Automated workload scaling is powered by a DatadogPodAutoscaler custom resource that defines scaling behavior at the per-workload level. Each cluster can have a maximum of 100 workloads optimized with Datadog Kubernetes Autoscaling.
To set up Datadog Kubernetes Autoscaling, you need:

- The kubectl CLI, for updating the Datadog Agent
- The org_management, api_keys_write, and orchestration_workload_scaling_write permissions
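If the Datadog Helm repository is not yet configured on the machine where you run Helm, add it first. This is the standard Datadog Helm setup step rather than something specific to Autoscaling, and is shown here only for convenience:

helm repo add datadog https://helm.datadoghq.com
helm repo update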
If you deploy the Agent with the Datadog Operator, first ensure that the Operator itself is up to date:

helm upgrade datadog-operator datadog/datadog-operator

Then add the following to your datadog-agent.yaml configuration file:

spec:
  features:
    orchestratorExplorer:
      customResources:
        - datadoghq.com/v1alpha1/datadogpodautoscalers
    autoscaling:
      workload:
        enabled: true
    eventCollection:
      unbundleEvents: true
  override:
    clusterAgent:
      image:
        tag: 7.58.1
    nodeAgent:
      image:
        tag: 7.58.1 # or 7.58.1-jmx
    clusterChecksRunner:
      image:
        tag: 7.58.1 # or 7.58.1-jmx
Ensure that the Admission Controller is enabled in your datadog-agent.yaml:

...
spec:
  features:
    admissionController:
      enabled: true
...
Apply the updated datadog-agent.yaml configuration:

kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
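After the Agent rolls out, one way to confirm that the DatadogPodAutoscaler custom resource is available is with standard kubectl commands. The CRD name below is inferred from the datadoghq.com/v1alpha1/datadogpodautoscalers resource enabled in the configuration above:

kubectl get crd datadogpodautoscalers.datadoghq.com
kubectl get datadogpodautoscalers --all-namespaces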
If you deploy the Agent with Helm instead, add the following to your datadog-values.yaml configuration file:

datadog:
  orchestratorExplorer:
    customResources:
      - datadoghq.com/v1alpha1/datadogpodautoscalers
  autoscaling:
    workload:
      enabled: true
  kubernetesEvents:
    unbundleEvents: true
clusterAgent:
  image:
    tag: 7.58.1
agents:
  image:
    tag: 7.58.1 # or 7.58.1-jmx
clusterChecksRunner:
  image:
    tag: 7.58.1 # or 7.58.1-jmx
Ensure that the Admission Controller is enabled in your datadog-values.yaml:

...
clusterAgent:
  image:
    tag: 7.58.1
  admissionController:
    enabled: true
...
Update your local Helm chart repositories:

helm repo update

Then redeploy the Datadog Agent with your updated datadog-values.yaml:

helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
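As a quick sanity check after the upgrade (the release name and namespace placeholders are whatever you used for your Datadog installation), you can confirm the release status and that the Cluster Agent pods have restarted on the new version:

helm status <RELEASE_NAME>
kubectl get pods -n <NAMESPACE> | grep cluster-agent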
Datadog Kubernetes Autoscaling can work with Cloud Cost Management to make workload scaling recommendations based on cost data… (kubernetes-beta@datadoghq.com). See the Cloud Cost setup instructions for AWS, Azure, or Google Cloud.

If you do not enable Cloud Cost Management, all workload recommendations and autoscaling decisions are still valid and functional.
In Datadog, navigate to Containers > Kubernetes Explorer and select the Autoscaling tab. Use the Cluster Scaling view to see a list of your clusters, sortable by total idle CPU or total idle memory. If you enabled Cloud Cost Management, you can also see cost information and a trailing 30-day cost breakdown.
Click Optimize cluster to open a detailed view of the selected cluster, including a table of this cluster’s workloads.
You can also use the Workload Scaling view to see a filterable list of all workloads across all clusters.
Select a workload and click Optimize to see its Scaling Recommendations. You can inspect the metrics backing the recommendation for each container within the deployment.
You can deploy scaling recommendations:

- Automatically, with Datadog Kubernetes Autoscaling: select Enable Autoscaling to automatically apply your recommendations.
- Manually, with kubectl patch: select Apply to see a generated kubectl patch command (an illustrative example is sketched after this list).
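The exact patch command is generated for you in the UI. Purely as an illustration of the manual path, with a hypothetical deployment name, namespace, container name, and CPU request value, such a command could look like the following strategic merge patch:

kubectl patch deployment <your-deployment> -n <namespace> --patch '{"spec":{"template":{"spec":{"containers":[{"name":"<main-container-name>","resources":{"requests":{"cpu":"500m"}}}]}}}}'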
You can also deploy a DatadogPodAutoscaler custom resource to enable autoscaling for a workload. This custom resource targets a deployment. For example:
apiVersion: datadoghq.com/v1alpha1
kind: DatadogPodAutoscaler
metadata:
  name: <name> # usually the same as your deployment object name
spec:
  constraints:
    # Adjust constraints as safeguards
    maxReplicas: 50
    minReplicas: 1
  owner: Local
  policy: All
  # Values: All, None
  # All - Allows automated recommendations to be applied. Default.
  # None - Computes recommendations without applying them (dry run).
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <your Deployment name>
  targets:
    # Currently, the recommendation is to use a single target: CPU utilization of the main container of the pod.
    - type: ContainerResource
      containerResource:
        container: <main-container-name>
        name: cpu
        value:
          type: Utilization
          utilization: 75
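Assuming the manifest above is saved to a file (the datadog-pod-autoscaler.yaml name here is only an example), it can be applied and inspected with standard kubectl commands; the plural resource name matches the datadogpodautoscalers custom resource enabled earlier:

kubectl apply -n <namespace> -f datadog-pod-autoscaler.yaml
kubectl get datadogpodautoscalers -n <namespace>
kubectl describe datadogpodautoscalers <name> -n <namespace>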