Data Jobs Monitoring gives visibility into the performance and reliability of Apache Spark applications on Kubernetes.
Follow these steps to enable Data Jobs Monitoring for Spark on Kubernetes.
If you have already installed the Datadog Agent on your Kubernetes cluster, ensure that you have enabled the Datadog Admission Controller. You can then go to the next step, Inject Spark instrumentation.
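If you are not sure whether the Admission Controller is enabled, one quick check (a sketch, assuming the webhook follows Datadog's usual naming, which can vary by installation) is to look for Datadog's mutating webhook:
# List mutating webhooks and look for a Datadog entry
kubectl get mutatingwebhookconfigurations | grep -i datadog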
You can install the Datadog Agent using the Datadog Operator or Helm.
To install with the Datadog Operator, you need Helm and the kubectl CLI.
Install the Datadog Operator by running the following commands:
helm repo add datadog https://helm.datadoghq.com
helm install my-datadog-operator datadog/datadog-operator
Create a Kubernetes Secret to store your Datadog API key and application key.
kubectl create secret generic datadog-secret --from-literal api-key=<DATADOG_API_KEY> --from-literal app-key=<DATADOG_APP_KEY>
Replace <DATADOG_API_KEY> with your Datadog API key and <DATADOG_APP_KEY> with your Datadog application key.
Create a file, datadog-agent.yaml, that contains the following configuration:
kind: DatadogAgent
apiVersion: datadoghq.com/v2alpha1
metadata:
  name: datadog
spec:
  features:
    apm:
      enabled: true
      hostPortConfig:
        enabled: true
        hostPort: 8126
    admissionController:
      enabled: true
      mutateUnlabelled: false
  global:
    tags:
      - 'data_workload_monitoring_trial:true'
    site: <DATADOG_SITE>
    credentials:
      apiSecret:
        secretName: datadog-secret
        keyName: api-key
      appSecret:
        secretName: datadog-secret
        keyName: app-key
  override:
    nodeAgent:
      env:
        - name: DD_DJM_CONFIG_ENABLED
          value: "true"
Replace <DATADOG_SITE> with your Datadog site (for example, datadoghq.com).
Deploy the Datadog Agent with the above configuration file:
kubectl apply -f /path/to/your/datadog-agent.yaml
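To confirm the rollout, you can query the DatadogAgent resource created above and check for running Agent pods (a quick sanity check; pod names vary by cluster):
# The resource name matches metadata.name in datadog-agent.yaml
kubectl get datadogagent datadog
# Look for the node Agent and Cluster Agent pods
kubectl get pods | grep datadog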
Alternatively, to install with Helm, first create a Kubernetes Secret to store your Datadog API key and application key.
kubectl create secret generic datadog-secret --from-literal api-key=<DATADOG_API_KEY> --from-literal app-key=<DATADOG_APP_KEY>
Replace <DATADOG_API_KEY> with your Datadog API key and <DATADOG_APP_KEY> with your Datadog application key.
Create a file, datadog-values.yaml, that contains the following configuration:
datadog:
  apiKeyExistingSecret: datadog-secret
  appKeyExistingSecret: datadog-secret
  site: <DATADOG_SITE>
  apm:
    portEnabled: true
    port: 8126
  tags:
    - 'data_workload_monitoring_trial:true'
  env:
    - name: DD_DJM_CONFIG_ENABLED
      value: "true"
clusterAgent:
  admissionController:
    enabled: true
    mutateUnlabelled: false
Replace <DATADOG_SITE> with your Datadog site (for example, datadoghq.com).
Run the following command:
helm install <RELEASE_NAME> \
-f datadog-values.yaml \
--set targetSystem=<TARGET_SYSTEM> \
datadog/datadog
Replace <RELEASE_NAME> with your release name (for example, datadog-agent) and <TARGET_SYSTEM> with the name of your OS (for example, linux or windows).
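To confirm the rollout, you can check the release status and the Agent pods (pod names vary by cluster):
# <RELEASE_NAME> is the release name used above
helm status <RELEASE_NAME>
kubectl get pods | grep datadog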
When you run your Spark job, use the following configurations:
- spark.kubernetes.driver.label.admission.datadoghq.com/enabled (Required): true
- spark.kubernetes.driver.annotation.admission.datadoghq.com/java-lib.version (Required): latest
- spark.driver.extraJavaOptions, with the following Java options:
  - -Ddd.integration.spark.enabled (Required): true
  - -Ddd.integrations.enabled (Required): false
  - -Ddd.service (Optional): your service name
  - -Ddd.env (Optional): your environment, such as prod or dev
  - -Ddd.version (Optional): your version
  - -Ddd.tags (Optional): other tags, in the format <KEY_1>:<VALUE_1>,<KEY_2>:<VALUE_2>
  - -Ddd.trace.experimental.long-running.enabled (Optional): true, to view jobs while they are still running

For example, with spark-submit:
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master k8s://<CLUSTER_ENDPOINT> \
--conf spark.kubernetes.container.image=895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-6.10.0:latest \
--deploy-mode cluster \
--conf spark.kubernetes.namespace=<NAMESPACE> \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=<SERVICE_ACCOUNT> \
--conf spark.kubernetes.driver.label.admission.datadoghq.com/enabled=true \
--conf spark.kubernetes.driver.annotation.admission.datadoghq.com/java-lib.version=latest \
--conf spark.driver.extraJavaOptions="-Ddd.integration.spark.enabled=true -Ddd.integrations.enabled=false -Ddd.service=<JOB_NAME> -Ddd.env=<ENV> -Ddd.version=<VERSION> -Ddd.tags=<KEY_1>:<VALUE_1>,<KEY_2>:<VALUE_2> -Ddd.trace.experimental.long-running.enabled=true" \
local:///usr/lib/spark/examples/jars/spark-examples.jar 20
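After submitting, you can confirm that the Admission Controller picked up the driver pod by filtering on the label set above (using the namespace from your submission):
# The driver pod carries the admission label passed via spark-submit
kubectl get pods -n <NAMESPACE> -l admission.datadoghq.com/enabled=true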
Or, when submitting a job run to Amazon EMR on EKS:
aws emr-containers start-job-run \
--virtual-cluster-id <EMR_CLUSTER_ID> \
--name myjob \
--execution-role-arn <EXECUTION_ROLE_ARN> \
--release-label emr-6.10.0-latest \
--job-driver '{
  "sparkSubmitJobDriver": {
    "entryPoint": "s3://BUCKET/spark-examples.jar",
    "sparkSubmitParameters": "--class <MAIN_CLASS> --conf spark.kubernetes.driver.label.admission.datadoghq.com/enabled=true --conf spark.kubernetes.driver.annotation.admission.datadoghq.com/java-lib.version=latest --conf spark.driver.extraJavaOptions=\"-Ddd.integration.spark.enabled=true -Ddd.integrations.enabled=false -Ddd.service=<JOB_NAME> -Ddd.env=<ENV> -Ddd.version=<VERSION> -Ddd.tags=<KEY_1>:<VALUE_1>,<KEY_2>:<VALUE_2> -Ddd.trace.experimental.long-running.enabled=true\""
  }
}'
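You can then follow the job run with the AWS CLI, using the job run ID returned by start-job-run:
# Check the status of the submitted job run
aws emr-containers describe-job-run \
--virtual-cluster-id <EMR_CLUSTER_ID> \
--id <JOB_RUN_ID>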
In Datadog, view the Data Jobs Monitoring page to see a list of all your data processing jobs.
You can set tags on Spark spans at runtime. These tags are applied only to spans that start after the tag is added.
// Add a tag to all subsequent Spark computations
sparkContext.setLocalProperty("spark.datadog.tags.key", "value")
spark.read.parquet(...)
To remove a runtime tag:
// Remove the tag from all subsequent Spark computations
sparkContext.setLocalProperty("spark.datadog.tags.key", null)