",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
Datadog Kubernetes Autoscaling continuously monitors your Kubernetes resources to provide immediate scaling recommendations and multidimensional autoscaling of your Kubernetes workloads. You can deploy autoscaling through the Datadog web interface, or with a DatadogPodAutoscaler custom resource.
Datadog uses real-time and historical utilization metrics and event signals from your existing Datadog Agents to make recommendations. You can then examine these recommendations and choose to deploy them.
By default, Datadog Kubernetes Autoscaling uses estimated CPU and memory cost values to show savings opportunities and impact estimates. You can also use Kubernetes Autoscaling alongside Cloud Cost Management to get reporting based on your exact instance type costs.
Automated workload scaling is powered by a DatadogPodAutoscaler custom resource that defines scaling behavior on a per-workload level. The Datadog Cluster Agent acts as the controller for this custom resource.
Each cluster can have a maximum of 1000 workloads optimized with Datadog Kubernetes Autoscaling.
You need the kubectl CLI for updating the Datadog Agent.

Update the Datadog Operator:

helm upgrade datadog-operator datadog/datadog-operator

Add the following to your datadog-agent.yaml configuration file:

spec:
  features:
    orchestratorExplorer:
      customResources:
        - datadoghq.com/v1alpha1/datadogpodautoscalers
    autoscaling:
      workload:
        enabled: true
    eventCollection:
      unbundleEvents: true
  override:
    clusterAgent:
      image:
        tag: 7.58.1
    nodeAgent:
      image:
        tag: 7.58.1 # or 7.58.1-jmx
    clusterChecksRunner:
      image:
        tag: 7.58.1 # or 7.58.1-jmx
Ensure the Admission Controller is enabled in your datadog-agent.yaml:

...
spec:
  features:
    admissionController:
      enabled: true
...
Apply the updated datadog-agent.yaml configuration:

kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
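As an optional check (these commands are assumptions, not part of the documented setup steps), you can confirm that the DatadogPodAutoscaler custom resource definition is served and that the Cluster Agent rolled out with the new configuration:

# CRD name assumed from the datadoghq.com/v1alpha1/datadogpodautoscalers resource above
kubectl get crd datadogpodautoscalers.datadoghq.com

# Deployment name assumes the Operator's default naming for a DatadogAgent resource named "datadog"
kubectl -n $DD_NAMESPACE rollout status deployment/datadog-cluster-agent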
Add the following to your datadog-values.yaml configuration file:

datadog:
  orchestratorExplorer:
    customResources:
      - datadoghq.com/v1alpha1/datadogpodautoscalers
  autoscaling:
    workload:
      enabled: true
  kubernetesEvents:
    unbundleEvents: true
clusterAgent:
  image:
    tag: 7.58.1
agents:
  image:
    tag: 7.58.1 # or 7.58.1-jmx
clusterChecksRunner:
  image:
    tag: 7.58.1 # or 7.58.1-jmx
Ensure the Admission Controller is enabled in your datadog-values.yaml:

...
clusterAgent:
  image:
    tag: 7.58.1
  admissionController:
    enabled: true
...
Update your Helm repositories:

helm repo update

Redeploy the Datadog Agent with your updated datadog-values.yaml:

helm upgrade -f datadog-values.yaml <RELEASE_NAME> datadog/datadog
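As an optional check (assumed commands, not part of the documented steps), you can confirm that the release was updated and that the Cluster Agent is running the pinned image tag:

# Show the current revision and status of the release
helm status <RELEASE_NAME>

# Deployment name assumes the chart's default naming
kubectl rollout status deployment/<RELEASE_NAME>-datadog-cluster-agent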
By default, Datadog Kubernetes Autoscaling shows idle cost and savings estimates using fixed CPU and memory cost values. These fixed cost values are subject to refinement over time.
When Cloud Cost Management is enabled within an org, Datadog Kubernetes Autoscaling shows idle cost and savings estimates based on the exact billed cost of the underlying monitored instances.
See Cloud Cost setup instructions for AWS, Azure, or Google Cloud.
Cost data enhances Kubernetes Autoscaling, but it is not required. All of Datadog’s workload recommendations and autoscaling decisions are valid and functional without cost data.
The Autoscaling Summary page provides a starting point for platform teams to understand the total Kubernetes resource savings opportunities across an organization, and to filter down to key clusters and namespaces. The Cluster Scaling view provides per-cluster information about total idle CPU, total idle memory, and costs. Click a cluster for detailed information and a table of the cluster's workloads. If you are an individual application or service owner, you can also filter by your team or service name directly from the Workload Scaling list view.
Click Optimize on any workload to see its scaling recommendation.
After you identify a workload to optimize, Datadog recommends inspecting its Scaling Recommendation. You can also click Configure Recommendation to add constraints or adjust target utilization levels.
When you are ready to proceed with enabling Autoscaling for a workload, you have two options for deployment:
- Click Enable Autoscaling. (Requires the Workload Scaling Write permission.) Datadog automatically installs and configures autoscaling for this workload on your behalf.
- Deploy a DatadogPodAutoscaler custom resource. Use your existing deploy process to target and configure Autoscaling for your workload. Click Export Recommendation to see a suggested manifest configuration; a hedged sketch of such a manifest follows this list.
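For illustration, a minimal DatadogPodAutoscaler manifest might look like the sketch below. The field names follow the v1alpha1 schema referenced in the Agent configuration above, but the workload names and numeric values are assumptions; use the manifest produced by Export Recommendation as the source of truth.

apiVersion: datadoghq.com/v1alpha1
kind: DatadogPodAutoscaler
metadata:
  name: example-autoscaler          # hypothetical name
  namespace: example-namespace      # hypothetical namespace
spec:
  targetRef:                        # the workload to autoscale
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment        # hypothetical Deployment
  owner: Local                      # resource is managed by your own deploy process
  constraints:
    minReplicas: 1                  # assumed bounds; adjust to your workload
    maxReplicas: 10
  targets:
    - type: PodResource
      podResource:
        name: cpu
        value:
          type: Utilization
          utilization: 80           # assumed target of 80% CPU utilization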
As an alternative to Autoscaling, you can also deploy Datadog's scaling recommendations manually. When you configure resources for your Kubernetes deployments, use the values suggested in the scaling recommendations. You can also click Export Recommendation to see a generated kubectl patch command.
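For illustration only, a patch applying recommended requests to a single container could look like the following; the namespace, workload, container name, and resource values are hypothetical, and the command generated by Export Recommendation is authoritative.

# Hypothetical strategic merge patch setting recommended CPU and memory requests
kubectl -n example-namespace patch deployment example-deployment --patch '
spec:
  template:
    spec:
      containers:
        - name: example-container
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
'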