

Spark on YARN: execution process and log analysis

Submit command

${SPARK_HOME}/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 4g \
    --executor-memory 1g \
    --executor-cores 4 \
    --queue default \
    ${SPARK_HOME}/examples/jars/spark-examples*.jar \
    10

Execution process

  1. The client runs spark-submit to submit the application, registering with the ResourceManager and requesting resources.
  2. On receiving the request, the ResourceManager picks a NodeManager in the cluster and allocates the first container for the application, launching the application master inside it. The application master hosts the driver and starts running it (which in effect means parsing the user's program).
  3. driver: (1) The driver runs the application's main method. (2) Inside main, the SparkContext object is constructed. This object is extremely important: it is the entry point of every Spark program. Internally, SparkContext also builds two objects, DAGScheduler and TaskScheduler. (3) The program applies a long chain of transformations to RDDs, and finally an action triggers the actual execution. At that point a DAG (directed acyclic graph) is generated from the RDD lineage in the code; the direction of the graph follows the order of the RDD operations. The finished DAG is handed to the DAGScheduler. (4) The DAGScheduler splits the DAG into stages at wide (shuffle) dependencies. Each stage contains many tasks that can run in parallel; these tasks are packaged into a TaskSet, and the TaskSets are sent one by one to the TaskScheduler. (5) The TaskScheduler receives the TaskSets and runs their tasks following the stage dependencies: for each TaskSet it iterates over the tasks and submits each one to an executor. The driver only decomposes the job; the real execution happens inside YARN containers. (See the code sketch right after this list.)
  4. The application master registers with the ResourceManager, so the job's progress can be watched through the RM. Meanwhile the AM requests resources for the individual tasks and keeps monitoring them until the job finishes.
  5. Once the AM has obtained resources (containers), it talks to the NMs and has each NM launch a CoarseGrainedExecutorBackend inside the allocated container. On startup, each CoarseGrainedExecutorBackend registers with the SparkContext in the AM and asks for tasks.
  6. The SparkContext in the AM assigns tasks to the CoarseGrainedExecutorBackends. While executing a task, each CoarseGrainedExecutorBackend reports the task's progress and status back to the AM, so the AM always knows how every task is doing and can retry a failed task, or kill tasks when cluster resources are tight.
  7. When the job has finished, the AM asks the RM to deregister it and shuts itself down.
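
To make steps (3) and (4) in item 3 concrete, here is a minimal sketch of my own (not from the original post; the input path and app name are hypothetical). The reduceByKey introduces a wide dependency, so the DAGScheduler cuts this job into two stages:

    import org.apache.spark.{SparkConf, SparkContext}

    object StageDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("StageDemo"))
        val counts = sc.textFile("hdfs:///tmp/input.txt") // hypothetical input path
          .flatMap(_.split("\\s+"))                       // narrow dependency
          .map(word => (word, 1))                         // narrow dependency
          .reduceByKey(_ + _)                             // wide dependency -> stage boundary
        counts.collect()                                  // action: DAG -> stages -> TaskSets -> tasks
        sc.stop()
      }
    }

Everything up to the map side of the shuffle pipelines into one stage; the post-shuffle aggregation plus collect forms a second stage, and each stage's parallel tasks reach the executors packaged as one TaskSet.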

Execution log

22/11/19 17:42:18 WARN util.Utils: Your hostname, macdeMacBook-Pro-3.local resolves to a loopback address: 127.0.0.1; using 10.10.9.250 instead (on interface en0)
22/11/19 17:42:18 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/11/19 17:42:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/19 17:42:19 INFO client.RMProxy: Connecting to ResourceManager at sh01/172.16.99.214:8010
22/11/19 17:42:19 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
22/11/19 17:42:19 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
22/11/19 17:42:19 INFO yarn.Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead
22/11/19 17:42:19 INFO yarn.Client: Setting up container launch context for our AM
22/11/19 17:42:19 INFO yarn.Client: Setting up the launch environment for our AM container
22/11/19 17:42:19 INFO yarn.Client: Preparing resources for our AM container
22/11/19 17:42:20 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/11/19 17:42:23 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c/__spark_libs__2899998199838240455.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/__spark_libs__2899998199838240455.zip
22/11/19 17:45:52 INFO yarn.Client: Uploading resource file:/usr/local/spark/examples/jars/spark-examples_2.11-2.4.8.jar -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/spark-examples_2.11-2.4.8.jar
22/11/19 17:45:54 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c/__spark_conf__8349177025085739013.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205/__spark_conf__.zip
22/11/19 17:45:56 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 17:45:56 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 17:45:56 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 17:45:56 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 17:45:56 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mac); groups with view permissions: Set(); users with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 17:45:57 INFO yarn.Client: Submitting application application_1666603193487_2205 to ResourceManager
22/11/19 17:45:57 INFO impl.YarnClientImpl: Submitted application application_1666603193487_2205
22/11/19 17:45:58 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:45:58 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1668851157430
     final status: UNDEFINED
     tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
     user: mac
22/11/19 17:45:59 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:00 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:01 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:02 INFO yarn.Client: Application report for application_1666603193487_2205 (state: ACCEPTED)
22/11/19 17:46:03 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:03 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: sh02
     ApplicationMaster RPC port: 46195
     queue: default
     start time: 1668851157430
     final status: UNDEFINED
     tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
     user: mac 
22/11/19 17:46:04 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:05 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:06 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:07 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:08 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:09 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:10 INFO yarn.Client: Application report for application_1666603193487_2205 (state: RUNNING)
22/11/19 17:46:11 INFO yarn.Client: Application report for application_1666603193487_2205 (state: FINISHED)
22/11/19 17:46:11 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: sh02
     ApplicationMaster RPC port: 46195
     queue: default
     start time: 1668851157430
     final status: SUCCEEDED
     tracking URL: http://sh01:8012/proxy/application_1666603193487_2205/
     user: mac
22/11/19 17:46:12 INFO yarn.Client: Deleted staging directory hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205
22/11/19 17:46:12 INFO util.ShutdownHookManager: Shutdown hook called
22/11/19 17:46:12 INFO util.ShutdownHookManager: Deleting directory /private/var/folders/pc/mj2v_vln4x14q6jylbtnmvx40000gn/T/spark-b39d7673-82ac-471c-8f8a-f667b8b081f2
22/11/19 17:46:12 INFO util.ShutdownHookManager: Deleting directory /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-b423d166-c45e-429a-b25a-3efde9c1145c

Lines 4-6: connect to the ResourceManager and request a new application on the cluster of 2 NodeManagers, verifying that the application does not ask for more than the cluster's maximum memory capability, which here is 8192 MB per container.

Lines 8-14: allocate a 4505 MB container for the application master. What does "including 409 MB overhead" mean? As mentioned above, the AM contains the driver, and we requested 4 GB (4096 MB) for the driver at submit time: 4505 - 4096 = 409, so the RM granted a bit more memory than we asked for. The extra comes from Spark's default memory overhead on YARN, max(10% of the requested memory, 384 MB): 10% of 4096 MB is 409 MB, which matches exactly (a small sketch of the calculation follows this paragraph). Right after that, the launch context and environment for the AM container are set up and the resources are prepared: the local Spark library dependencies (about 244 MB when I checked), the application jar, and the Spark configuration files are each zipped and uploaded to HDFS under

hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2205

and this directory is deleted once the application finishes.
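
A tiny sketch of that sizing rule (my own illustration; the formula mirrors Spark's documented default of max(0.10 × requested memory, 384 MB)):

    // Container size = requested memory + memory overhead, where the
    // overhead on YARN defaults to max(0.10 * requested, 384 MB).
    val MIN_OVERHEAD_MB = 384L
    def containerSizeMB(requestedMB: Long): Long =
      requestedMB + math.max((requestedMB * 0.10).toLong, MIN_OVERHEAD_MB)

    containerSizeMB(4096) // = 4505, matching "4505 MB memory including 409 MB overhead"
    containerSizeMB(1024) // = 1408, what a 1 GB executor's container comes to

The yarn-client log at the end of this post shows the same rule bottoming out at the floor: there the AM only needs its default 512 MB, so its container is 512 + 384 = 896 MB ("896 MB memory including 384 MB overhead").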

Lines 15-19: security checks (ACL setup).

Lines 20-21: the application is submitted to the RM. Note the application name, application_1666603193487_2205, which matches the staging directory name used for the HDFS uploads above. My guess is that this is the application the AM was just prepared for.

Lines 22-36: the AM is requesting containers (resources) from the RM for the tasks, which is why the state stays ACCEPTED. Why do I say that? Because ACCEPTED sometimes lasts a very long time when other jobs are already running on the cluster and no spare resources are left, so my inference is that during this phase the AM is acquiring resources for the tasks. (The repeated "Application report" lines are simply the client polling YARN; see the sketch below.)
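
As a hedged sketch of that polling (standard Hadoop YarnClient API; the application id is the one from this log), you can query the same state yourself:

    import org.apache.hadoop.yarn.api.records.ApplicationId
    import org.apache.hadoop.yarn.client.api.YarnClient
    import org.apache.hadoop.yarn.conf.YarnConfiguration

    val yarn = YarnClient.createYarnClient()
    yarn.init(new YarnConfiguration()) // picks up yarn-site.xml from the client config
    yarn.start()
    // cluster timestamp and sequence number parsed out of "application_1666603193487_2205"
    val appId = ApplicationId.newInstance(1666603193487L, 2205)
    val state = yarn.getApplicationReport(appId).getYarnApplicationState
    println(state) // ACCEPTED -> RUNNING -> FINISHED, as in the log above
    yarn.stop()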

Lines 37-55: the tasks start executing. Line 41 shows that the AM container was placed on the sh02 machine. (sh01: RM; sh02: NM; sh03: NM)

Lines 56-69: execution is finished; the staging files on HDFS are deleted, then the local temp directories are deleted, and the directory names match the ones created earlier.


What exactly do "client" and "driver" refer to?

  • Client: the machine where the spark-submit command is executed.
  • Driver: the user's submitted program, once it is up and running.

Where is the driver, then? First imagine a few servers:

Server  Role
sh01    resourceManager
sh02    nodeManager
sh03    nodeManager
sh04    has the big-data cluster's configuration

Now submit the job from sh04:

  • yarn-cluster mode: exactly the scenario described above. The driver is not on the client but inside the AM on sh02. While the job runs, the traffic between the task containers and the AM (i.e., the driver inside the AM), and between the AM and the RM, has nothing to do with the client; the client merely receives whatever comes back on stdout. Even if the client disappears, the job keeps running.
  • yarn-client mode: the driver lives on the client, so without the client the job cannot run. (For the rest, the log is at the very end; analyze it yourself.)

In real-world development, sh01, sh02, and sh03 are usually the big-data cluster itself, while sh04 is most likely just an edge machine used to submit jobs.
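
If you ever need to confirm which mode your code ended up in, here is a small sketch (the conf key is standard Spark; the app name is made up). Run inside the driver, it reads its own deploy mode:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("DeployModeCheck")) // hypothetical app
    // "cluster": this line runs inside the AM container on a NodeManager (e.g. sh02)
    // "client" : this line runs in the spark-submit JVM on the client (e.g. sh04)
    println(sc.getConf.get("spark.submit.deployMode", "client"))
    sc.stop()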


What is the relationship between a YARN container and an executor?

On a YARN cluster, both the executors and the application master must run inside "containers". A container here is not the Docker kind: it stands for a slice of storage and compute resources on a physical machine, supervised by the NM and scheduled by the RM, and YARN allocates resources in units of containers. Executors and the application master are processes; they can only run once such resources have been allocated to them. (A sizing sketch follows.)
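
As an illustrative sketch (standard Spark conf keys; the values mirror the submit command above, and spark.executor.instances is my addition), the executor flags translate one-for-one into per-container resource requests:

    import org.apache.spark.SparkConf

    // Each executor below is requested from YARN as one container holding
    // 4 vcores and 1024 MB of heap plus max(0.10 * 1024, 384) = 384 MB of
    // overhead, i.e. a 1408 MB container.
    val conf = new SparkConf()
      .set("spark.executor.memory", "1g")   // same as --executor-memory 1g
      .set("spark.executor.cores", "4")     // same as --executor-cores 4
      .set("spark.executor.instances", "2") // how many executor containers to request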


yarn-client log

${SPARK_HOME}/bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode client \
    --driver-memory 4g \
    --executor-memory 1g \
    --executor-cores 4 \
    --queue default \
    ${SPARK_HOME}/examples/jars/spark-examples*.jar \
    10

22/11/19 18:33:36 WARN util.Utils: Your hostname, macdeMacBook-Pro-3.local resolves to a loopback address: 127.0.0.1; using 10.10.9.250 instead (on interface en0)
22/11/19 18:33:36 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
22/11/19 18:33:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/19 18:33:36 INFO spark.SparkContext: Running Spark version 2.4.8
22/11/19 18:33:36 INFO spark.SparkContext: Submitted application: Spark Pi
22/11/19 18:33:36 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 18:33:36 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 18:33:36 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 18:33:36 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 18:33:36 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mac); groups with view permissions: Set(); users with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 18:33:37 INFO util.Utils: Successfully started service 'sparkDriver' on port 53336.
22/11/19 18:33:37 INFO spark.SparkEnv: Registering MapOutputTracker
22/11/19 18:33:37 INFO spark.SparkEnv: Registering BlockManagerMaster
22/11/19 18:33:37 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/11/19 18:33:37 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/11/19 18:33:37 INFO storage.DiskBlockManager: Created local directory at /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/blockmgr-ea23e012-50a5-4ad2-a2c0-cf40ea020a9e
22/11/19 18:33:37 INFO memory.MemoryStore: MemoryStore started with capacity 2004.6 MB
22/11/19 18:33:37 INFO spark.SparkEnv: Registering OutputCommitCoordinator
22/11/19 18:33:37 INFO util.log: Logging initialized @2435ms to org.spark_project.jetty.util.log.Slf4jLog
22/11/19 18:33:37 INFO server.Server: jetty-9.4.z-SNAPSHOT; built: unknown; git: unknown; jvm 1.8.0_333-b02
22/11/19 18:33:37 INFO server.Server: Started @2564ms
22/11/19 18:33:37 INFO server.AbstractConnector: Started ServerConnector@62b3df3a{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
22/11/19 18:33:37 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@169da7f2{/jobs,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@757f675c{/jobs/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2617f816{/jobs/job,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5d10455d{/jobs/job/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@535b8c24{/stages,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a951911{/stages/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55b62629{/stages/stage,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6759f091{/stages/stage/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33a053d{/stages/pool,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@14a54ef6{/stages/pool/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20921b9b{/storage,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@867ba60{/storage/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5ba745bc{/storage/rdd,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@654b72c0{/storage/rdd/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55b5e331{/environment,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6034e75d{/environment/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15fc442{/executors,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3f3c7bdb{/executors/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@456abb66{/executors/threadDump,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2a3a299{/executors/threadDump/json,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7da10b5b{/static,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1da6ee17{/,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@78d39a69{/api,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15f193b8{/jobs/job/kill,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2516fc68{/stages/stage/kill,null,AVAILABLE,@Spark}
22/11/19 18:33:37 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.9.250:4040
22/11/19 18:33:37 INFO spark.SparkContext: Added JAR file:/usr/local/spark/examples/jars/spark-examples_2.11-2.4.8.jar at spark://10.10.9.250:53336/jars/spark-examples_2.11-2.4.8.jar with timestamp 1668854017716
22/11/19 18:33:38 INFO client.RMProxy: Connecting to ResourceManager at sh01/172.16.99.214:8010
22/11/19 18:33:38 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
22/11/19 18:33:38 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
22/11/19 18:33:38 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
22/11/19 18:33:38 INFO yarn.Client: Setting up container launch context for our AM
22/11/19 18:33:38 INFO yarn.Client: Setting up the launch environment for our AM container
22/11/19 18:33:38 INFO yarn.Client: Preparing resources for our AM container
22/11/19 18:33:39 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/11/19 18:33:42 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25/__spark_libs__7614795133133378512.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2206/__spark_libs__7614795133133378512.zip
22/11/19 18:37:46 INFO yarn.Client: Uploading resource file:/usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25/__spark_conf__885526568489264491.zip -> hdfs://sh01:9000/user/mac/.sparkStaging/application_1666603193487_2206/__spark_conf__.zip
22/11/19 18:37:48 INFO spark.SecurityManager: Changing view acls to: mac
22/11/19 18:37:48 INFO spark.SecurityManager: Changing modify acls to: mac
22/11/19 18:37:48 INFO spark.SecurityManager: Changing view acls groups to:
22/11/19 18:37:48 INFO spark.SecurityManager: Changing modify acls groups to:
22/11/19 18:37:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mac); groups with view permissions: Set(); users with modify permissions: Set(mac); groups with modify permissions: Set()
22/11/19 18:37:49 INFO yarn.Client: Submitting application application_1666603193487_2206 to ResourceManager
22/11/19 18:37:50 INFO impl.YarnClientImpl: Submitted application application_1666603193487_2206
22/11/19 18:37:50 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1666603193487_2206 and attemptId None
22/11/19 18:37:51 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:51 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1668854270205
     final status: UNDEFINED
     tracking URL: http://sh01:8012/proxy/application_1666603193487_2206/
     user: mac
22/11/19 18:37:52 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:53 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:54 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:55 INFO yarn.Client: Application report for application_1666603193487_2206 (state: ACCEPTED)
22/11/19 18:37:55 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sh01, PROXY_URI_BASES -> http://sh01:8012/proxy/application_1666603193487_2206), /proxy/application_1666603193487_2206
22/11/19 18:37:55 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
22/11/19 18:37:56 INFO yarn.Client: Application report for application_1666603193487_2206 (state: RUNNING)
22/11/19 18:37:56 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.16.99.116
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1668854270205
     final status: UNDEFINED
     tracking URL: http://sh01:8012/proxy/application_1666603193487_2206/
     user: mac
22/11/19 18:37:56 INFO cluster.YarnClientSchedulerBackend: Application application_1666603193487_2206 has started running.
22/11/19 18:37:56 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54084.
22/11/19 18:37:56 INFO netty.NettyBlockTransferService: Server created on 10.10.9.250:54084
22/11/19 18:37:56 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/11/19 18:37:56 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.10.9.250:54084 with 2004.6 MB RAM, BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.10.9.250, 54084, None)
22/11/19 18:37:56 INFO ui.JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
22/11/19 18:37:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@238291d4{/metrics/json,null,AVAILABLE,@Spark}
22/11/19 18:37:56 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
22/11/19 18:37:57 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 10 output partitions
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Parents of final stage: List()
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Missing parents: List()
22/11/19 18:37:57 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
22/11/19 18:37:57 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.0 KB, free 2004.6 MB)
22/11/19 18:37:58 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1358.0 B, free 2004.6 MB)
22/11/19 18:37:58 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.9.250:54084 (size: 1358.0 B, free: 2004.6 MB)
22/11/19 18:37:58 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1184
22/11/19 18:37:58 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
22/11/19 18:37:58 INFO cluster.YarnScheduler: Adding task set 0.0 with 10 tasks
22/11/19 18:37:59 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.16.99.116:48068) with ID 2
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, sh02, executor 2, partition 0, PROCESS_LOCAL, 7741 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, sh02, executor 2, partition 1, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, sh02, executor 2, partition 2, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:37:59 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, sh02, executor 2, partition 3, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:00 INFO storage.BlockManagerMasterEndpoint: Registering block manager sh02:44398 with 366.3 MB RAM, BlockManagerId(2, sh02, 44398, None)
22/11/19 18:38:02 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on sh02:44398 (size: 1358.0 B, free: 366.3 MB)
22/11/19 18:38:02 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.16.97.106:57790) with ID 1
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, sh03, executor 1, partition 4, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, sh03, executor 1, partition 5, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, sh03, executor 1, partition 6, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, sh03, executor 1, partition 7, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, sh02, executor 2, partition 8, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, sh02, executor 2, partition 9, PROCESS_LOCAL, 7743 bytes)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2609 ms on sh02 (executor 2) (1/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 2608 ms on sh02 (executor 2) (2/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2622 ms on sh02 (executor 2) (3/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2645 ms on sh02 (executor 2) (4/10)
22/11/19 18:38:02 INFO storage.BlockManagerMasterEndpoint: Registering block manager sh03:45892 with 366.3 MB RAM, BlockManagerId(1, sh03, 45892, None)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 378 ms on sh02 (executor 2) (5/10)
22/11/19 18:38:02 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 407 ms on sh02 (executor 2) (6/10)
22/11/19 18:38:04 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on sh03:45892 (size: 1358.0 B, free: 366.3 MB)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 2762 ms on sh03 (executor 1) (7/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 2787 ms on sh03 (executor 1) (8/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 2794 ms on sh03 (executor 1) (9/10)
22/11/19 18:38:05 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 2800 ms on sh03 (executor 1) (10/10)
22/11/19 18:38:05 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
22/11/19 18:38:05 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 8.174 s
22/11/19 18:38:05 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 8.233929 s
Pi is roughly 3.1405671405671405
22/11/19 18:38:05 INFO server.AbstractConnector: Stopped Spark@62b3df3a{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
22/11/19 18:38:05 INFO ui.SparkUI: Stopped Spark web UI at http://10.10.9.250:4040
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
22/11/19 18:38:05 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
22/11/19 18:38:05 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
22/11/19 18:38:05 INFO cluster.YarnClientSchedulerBackend: Stopped
22/11/19 18:38:05 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/11/19 18:38:05 INFO memory.MemoryStore: MemoryStore cleared
22/11/19 18:38:05 INFO storage.BlockManager: BlockManager stopped
22/11/19 18:38:05 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
22/11/19 18:38:05 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/11/19 18:38:05 INFO spark.SparkContext: Successfully stopped SparkContext
22/11/19 18:38:05 INFO util.ShutdownHookManager: Shutdown hook called
22/11/19 18:38:05 INFO util.ShutdownHookManager: Deleting directory /private/var/folders/pc/mj2v_vln4x14q6jylbtnmvx40000gn/T/spark-5ece9ef1-aff6-451e-bf36-b637d4afb74d
22/11/19 18:38:05 INFO util.ShutdownHookManager: Deleting directory /usr/local/spark-2.4.8-bin-hadoop2.7/tmp/spark-7ecf7a1c-87e6-4f76-8e50-cd1682762c25

Reposted from https://blog.csdn.net/yy_diego/article/details/127953198; copyright belongs to the original author, 骑着蜗牛向前跑.
