Submitting pySpark Jobs to YARN

cluster mode:

spark-submit \
--conf spark.dynamicAllocation.enabled=false \
--name pool_liquidity_info \
--master yarn \
--deploy-mode cluster \
--queue prod \
--driver-memory 20G \
--num-executors 20 \
--executor-memory 15G \
--executor-cores 2 \
--archives hdfs://ns1/user/hadoop/mypy3spark_env/py3spark.tar.gz#py3spark \
--conf "spark.pyspark.python=./py3spark/py3spark/bin/python" \
--conf "spark.pyspark.driver.python=./py3spark/py3spark/bin/python" \
/home/hadoop/test/python/test.py

client mode: client mode requires a matching Python 3 environment on the node from which the job is submitted.

spark-submit \
--conf spark.dynamicAllocation.enabled=false \
--name v3_dw_stat_position_ranking_days \
--master yarn \
--deploy-mode client \
--queue prod \
--driver-memory 4G \
--num-executors 50 \
--executor-memory 5G \
--executor-cores 2 \
--jars /home/hadoop/wangzhen/trino-jdbc-388.jar,/home/hadoop/bigdata/spark/jars/iceberg-spark-runtime-0.13.2.jar \
--packages org.apache.iceberg:iceberg-spark-runtime:0.13.2 \
--archives hdfs://ns1/user/hadoop/mypy3spark_env/py3spark.tar.gz#py3spark \
--conf "spark.pyspark.python=./py3spark/py3spark/bin/python" \
--conf "spark.pyspark.driver.python=/home/hadoop/bigdata/anaconda3/envs/py3spark/bin/python" \
/home/hadoop/test/python/test.py
--archives points to the virtual-environment archive. The usual workflow is to install all dependencies into a virtual environment, compress it, and upload the resulting archive to HDFS.
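The packaging workflow can be sketched as follows. This is a minimal example assuming conda and the conda-pack tool are available; the environment name `py3spark` and the HDFS path match the commands above, while the installed packages are just placeholders for whatever your job actually needs.

```shell
# Create and populate the virtual environment (package list is illustrative).
conda create -y -n py3spark python=3.7
conda activate py3spark
pip install pyspark numpy pandas

# Bundle the whole environment into a relocatable tarball.
conda pack -n py3spark -o py3spark.tar.gz

# Upload it to HDFS so --archives can reference it.
hdfs dfs -mkdir -p hdfs://ns1/user/hadoop/mypy3spark_env
hdfs dfs -put -f py3spark.tar.gz hdfs://ns1/user/hadoop/mypy3spark_env/
```

The `#py3spark` suffix in the `--archives` value is the alias under which YARN unpacks the archive in each container's working directory, which is why the interpreter paths below start with `./py3spark/`.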
--conf "spark.pyspark.python" and --conf "spark.pyspark.driver.python" point to the temporary path where the environment archive is unpacked at runtime; YARN manages that path, so you normally don't need to worry about it. (In client mode, as shown above, spark.pyspark.driver.python must instead point to a local Python interpreter on the submitting node.)
/home/hadoop/test/python/test.py — this final argument is the pySpark script file to run.
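A minimal sketch of what such a test.py might contain (the app name is taken from the cluster-mode example above; the sample data and column names are purely illustrative):

```python
# Minimal pySpark script suitable for submission via spark-submit as above.
# No master is set here: spark-submit supplies --master yarn at launch time.
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("pool_liquidity_info").getOrCreate()

    # Illustrative workload: build a small DataFrame and print it.
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
    df.show()

    spark.stop()
```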


Original: https://www.cnblogs.com/30go/p/16217262.html
Author: 硅谷工具人
Title: pySpark提交提交任务到Yarn
