Red Hat OpenShift is an open source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment.
This README describes the configuration necessary to enable collection of OpenShift-specific metrics in the Agent. The data described here is collected by the `kubernetes_apiserver` check. You must configure the check to collect the `openshift.*` metrics.
To install the Agent, see the Agent installation instructions for Kubernetes. The default configuration targets OpenShift 3.7.0+ and OpenShift 4.0+, as it relies on features and endpoints introduced in these versions.
Alternatively, the Datadog Operator can be used to install and manage the Datadog Agent. The Datadog Operator can be installed using OpenShift’s OperatorHub.
If you are deploying the Datadog Agent using any of the methods linked in the installation instructions above, you must include Security Context Constraints (SCCs) for the Agent to collect data. Follow the instructions below as they relate to your deployment.
The SCC can be applied directly within your Datadog Agent's `values.yaml`. Add the following block underneath the `agents:` section in the file:
```yaml
...
agents:
  ...
  podSecurity:
    securityContextConstraints:
      create: true
...
```
You can apply this when you initially deploy the Agent, or you can execute a `helm upgrade` after making this change to apply the SCC.
Depending on your needs and the security constraints of your cluster, three deployment scenarios are supported:
| Security Context Constraints | Restricted | Host network | Custom |
| --- | --- | --- | --- |
| Kubernetes layer monitoring | Supported | Supported | Supported |
| Kubernetes-based Autodiscovery | Supported | Supported | Supported |
| DogStatsD intake | Not supported | Supported | Supported |
| APM trace intake | Not supported | Supported | Supported |
| Logs network intake | Not supported | Supported | Supported |
| Host network metrics | Not supported | Supported | Supported |
| Docker layer monitoring | Not supported | Not supported | Supported |
| Container logs collection | Not supported | Not supported | Supported |
| Live Container monitoring | Not supported | Not supported | Supported |
| Live Process monitoring | Not supported | Not supported | Supported |
For instructions on how to install the Datadog Operator and the `DatadogAgent` resource in OpenShift, see the OpenShift installation guide.
If the Operator has been deployed with Operator Lifecycle Manager (OLM), the necessary default SCCs present in OpenShift are automatically associated with the `datadog-agent-scc` ServiceAccount. The Agent can then be deployed with the `DatadogAgent` CustomResourceDefinition, referencing this service account on the Node Agent and Cluster Agent pods.
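As an illustration, a minimal `DatadogAgent` manifest referencing that service account on both components might look like the sketch below. The namespace, the secret name, and the exact `v2alpha1` field paths are assumptions about the Operator's API, not taken from this page; check them against your Operator version:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
  namespace: datadog          # assumed namespace
spec:
  global:
    credentials:
      apiSecret:
        secretName: datadog-secret   # assumed secret holding the API key
        keyName: api-key
  override:
    nodeAgent:
      serviceAccountName: datadog-agent-scc
    clusterAgent:
      serviceAccountName: datadog-agent-scc
```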
See Kubernetes Log Collection for further information.
This mode does not require granting special permissions to the `datadog-agent` DaemonSet, other than the RBAC permissions needed to access the kubelet and the API server. You can get started with this kubelet-only template.
The recommended ingestion method for DogStatsD, APM, and logs is to bind the Datadog Agent to a host port. This way, the target IP is constant and easily discoverable by your applications. The default restricted OpenShift SCC does not allow binding to the host port. You can set the Agent to listen on its own IP, but you then need to handle the discovery of that IP from your application.
The Agent supports a sidecar run mode, which enables running the Agent in your application's pod for easier discoverability.
Add the `allowHostPorts` permission to the pod with the standard `hostnetwork` or `hostaccess` SCC, or by creating your own. In this case, add the relevant port bindings in your pod specs:
```yaml
ports:
  - containerPort: 8125
    name: dogstatsdport
    protocol: UDP
  - containerPort: 8126
    name: traceport
    protocol: TCP
```
If SELinux is in permissive mode or disabled, enable the `hostaccess` SCC to benefit from all features.
If SELinux is in enforcing mode, it is recommended to grant the `spc_t` type to the `datadog-agent` pod. To deploy the Agent, you can use the following `datadog-agent` SCC, applied after creating the `datadog-agent` service account. It grants the following permissions:

- `allowHostPorts: true`: Binds the DogStatsD / APM / logs intakes to the node's IP.
- `allowHostPID: true`: Enables Origin Detection for DogStatsD metrics submitted over Unix socket.
- `volumes: hostPath`: Accesses the Docker socket and the host's `proc` and `cgroup` folders, for metric collection.
- `SELinux type: spc_t`: Accesses the Docker socket and all processes' `proc` and `cgroup` folders, for metric collection. See Introducing a Super Privileged Container Concept for more details.

Add `system:serviceaccount::` to the `users` section. Set `allowHostNetwork: true` in the `scc.yaml` manifest, as well as `hostNetwork: true` in the Agent configuration, to get host tags and aliases. Access to metadata servers from the pod network is otherwise restricted.

**Note**: The Docker socket is owned by the root group, so you may need to elevate the Agent's privileges to pull in Docker metrics. To run the Agent process as a root user, you can configure your SCC with the following:
```yaml
runAsUser:
  type: RunAsAny
```
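Putting the permissions above together, a custom SCC for the Agent might look like the following sketch. The exact field set, the namespace, and the service account entry in `users` are assumptions, so compare this against the `scc.yaml` manifest shipped with your Agent version before applying it:

```yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: datadog-agent
users:
  - system:serviceaccount:datadog:datadog-agent  # assumed namespace and name
allowHostPorts: true    # bind DogStatsD / APM / logs intakes to the node IP
allowHostPID: true      # Origin Detection over Unix socket
allowHostNetwork: true  # host tags and aliases via metadata servers
runAsUser:
  type: RunAsAny        # Agent may need root to read the Docker socket
seLinuxContext:
  type: RunAsAny        # allows granting the spc_t type when SELinux enforces
volumes:
  - hostPath            # Docker socket, host proc and cgroup folders
  - configMap
  - secret
  - emptyDir
```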
| Metric | Description |
| --- | --- |
| `openshift.appliedclusterquota.cpu.limit` (gauge) | Hard limit for CPU by cluster resource quota and namespace. Shown as cpu |
| `openshift.appliedclusterquota.cpu.remaining` (gauge) | Remaining available CPU by cluster resource quota and namespace. Shown as cpu |
| `openshift.appliedclusterquota.cpu.used` (gauge) | Observed CPU usage by cluster resource quota and namespace. Shown as cpu |
| `openshift.appliedclusterquota.memory.limit` (gauge) | Hard limit for memory by cluster resource quota and namespace. Shown as byte |
| `openshift.appliedclusterquota.memory.remaining` (gauge) | Remaining available memory by cluster resource quota and namespace. Shown as byte |
| `openshift.appliedclusterquota.memory.used` (gauge) | Observed memory usage by cluster resource quota and namespace. Shown as byte |
| `openshift.appliedclusterquota.persistentvolumeclaims.limit` (gauge) | Hard limit for persistent volume claims by cluster resource quota and namespace |
| `openshift.appliedclusterquota.persistentvolumeclaims.remaining` (gauge) | Remaining available persistent volume claims by cluster resource quota and namespace |
| `openshift.appliedclusterquota.persistentvolumeclaims.used` (gauge) | Observed persistent volume claims usage by cluster resource quota and namespace |
| `openshift.appliedclusterquota.pods.limit` (gauge) | Hard limit for pods by cluster resource quota and namespace |
| `openshift.appliedclusterquota.pods.remaining` (gauge) | Remaining available pods by cluster resource quota and namespace |
| `openshift.appliedclusterquota.pods.used` (gauge) | Observed pods usage by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.limit` (gauge) | Hard limit for services by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.loadbalancers.limit` (gauge) | Hard limit for service load balancers by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.loadbalancers.remaining` (gauge) | Remaining available service load balancers by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.loadbalancers.used` (gauge) | Observed service load balancers usage by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.nodeports.limit` (gauge) | Hard limit for service node ports by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.nodeports.remaining` (gauge) | Remaining available service node ports by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.nodeports.used` (gauge) | Observed service node ports usage by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.remaining` (gauge) | Remaining available services by cluster resource quota and namespace |
| `openshift.appliedclusterquota.services.used` (gauge) | Observed services usage by cluster resource quota and namespace |
| `openshift.clusterquota.cpu.limit` (gauge) | Hard limit for CPU by cluster resource quota for all namespaces. Shown as cpu |
| `openshift.clusterquota.cpu.remaining` (gauge) | Remaining available CPU by cluster resource quota for all namespaces. Shown as cpu |
| `openshift.clusterquota.cpu.requests.used` (gauge) | Observed CPU usage by cluster resource quota for requests |
| `openshift.clusterquota.cpu.used` (gauge) | Observed CPU usage by cluster resource quota for all namespaces. Shown as cpu |
| `openshift.clusterquota.memory.limit` (gauge) | Hard limit for memory by cluster resource quota for all namespaces. Shown as byte |
| `openshift.clusterquota.memory.remaining` (gauge) | Remaining available memory by cluster resource quota for all namespaces. Shown as byte |
| `openshift.clusterquota.memory.used` (gauge) | Observed memory usage by cluster resource quota for all namespaces. Shown as byte |
| `openshift.clusterquota.persistentvolumeclaims.limit` (gauge) | Hard limit for persistent volume claims by cluster resource quota for all namespaces |
| `openshift.clusterquota.persistentvolumeclaims.remaining` (gauge) | Remaining available persistent volume claims by cluster resource quota for all namespaces |
| `openshift.clusterquota.persistentvolumeclaims.used` (gauge) | Observed persistent volume claims usage by cluster resource quota for all namespaces |
| `openshift.clusterquota.pods.limit` (gauge) | Hard limit for pods by cluster resource quota for all namespaces |
| `openshift.clusterquota.pods.remaining` (gauge) | Remaining available pods by cluster resource quota for all namespaces |
| `openshift.clusterquota.pods.used` (gauge) | Observed pods usage by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.limit` (gauge) | Hard limit for services by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.loadbalancers.limit` (gauge) | Hard limit for service load balancers by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.loadbalancers.remaining` (gauge) | Remaining available service load balancers by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.loadbalancers.used` (gauge) | Observed service load balancers usage by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.nodeports.limit` (gauge) | Hard limit for service node ports by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.nodeports.remaining` (gauge) | Remaining available service node ports by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.nodeports.used` (gauge) | Observed service node ports usage by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.remaining` (gauge) | Remaining available services by cluster resource quota for all namespaces |
| `openshift.clusterquota.services.used` (gauge) | Observed services usage by cluster resource quota for all namespaces |
The OpenShift check does not include any events.
The OpenShift check does not include any service checks.
Need help? Contact Datadog support.