

SparkSubmit Process That Cannot Be Force-Killed, Plus a Flink-Related Error


Table of Contents

  • SparkSubmit process that cannot be force-killed
      • 0. Preface
      • 1. Main text
      • 2. A note on an issue when using Flink with Kafka

0. Preface

  • OS: Linux (CentOS 7.5)
  • Spark version: Spark 3.0.0
  • Scala version: Scala 2.12.1
  • Flink version: Flink 1.13.1

The "SparkSubmit process cannot be force-killed" situation in this article arose after running MLlib programs in the Spark-Shell environment.

1. 正文

注意:SparkSubmit进程无法强制kill掉,即使是

  1. kill -9

多次不成功!

[Image: assets/01.png]
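
As a minimal sketch of the failing attempt (assuming the SparkSubmit PID is 2116, consistent with the /proc dump further down), the process survives a direct SIGKILL:

  jps | grep SparkSubmit        # locate the SparkSubmit PID, e.g. 2116
  kill -9 2116                  # send SIGKILL
  ps -p 2116 -o pid,stat,cmd    # still listed; STAT shows Z (zombie)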

  • Run the forced kill from a new session window

Try opening a new session window and running the forced kill there; the SparkSubmit process still cannot be killed.

  • Check whether the parent of the SparkSubmit PID exists; if it does, kill the parent process directly.

[Image: assets/02.png]
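
The exact command in the figure is not recoverable here, but a ps-based lookup along these lines is one way to make the check (again assuming PID 2116):

  ps -ef | grep 2116       # scan the process table for 2116 and anything referencing it
  ps -o ppid= -p 2116      # print only the parent PID of process 2116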

This method could not find the corresponding parent process, so switch to the alternative below: check which process is the parent of this SparkSubmit process, using the command shown in the figure:

[Image: assets/03.png]

The full output is shown in the code block below:

  [whybigdata@bd01 ~]$ cat /proc/2116/status
  Name:   java
  State:  Z (zombie)
  Tgid:   2116
  Ngid:   0
  Pid:    2116
  PPid:   2105
  TracerPid:  0
  Uid:    1000  1000  1000  1000
  Gid:    1000  1000  1000  1000
  FDSize: 0
  Groups: 10 1000
  Threads:    1
  SigQ:   2/7804
  SigPnd: 0000000000000000
  ShdPnd: 0000000000000100
  SigBlk: 0000000000000000
  SigIgn: 0000000000000000
  SigCgt: 2000000181005ccf
  CapInh: 0000000000000000
  CapPrm: 0000000000000000
  CapEff: 0000000000000000
  CapBnd: 0000001fffffffff
  CapAmb: 0000000000000000
  Seccomp: 0
  Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
  Cpus_allowed_list: 0-127
  Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
  Mems_allowed_list: 0
  voluntary_ctxt_switches: 6
  nonvoluntary_ctxt_switches: 5
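
Only two fields in that dump matter for the diagnosis, and they can be pulled out directly:

  grep -E '^(State|PPid):' /proc/2116/status
  # State:  Z (zombie)
  # PPid:   2105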
  • Explanation:
  • Pid: 2116 --> the PID of this process
  • PPid: 2105 --> the PID of its parent process
  • State: Z (zombie) --> the process has already terminated and is only a zombie entry in the process table; signals, including SIGKILL, have no effect on it, which is why kill -9 keeps failing. The entry disappears only when its parent reaps it, so we go after the parent instead.
  • After obtaining the parent PID of the SparkSubmit process, first force-kill the parent, then check again whether the process is gone; the commands are shown below:

  kill -9 2105                 # 2105 is the PPid taken from the status output
  ps -ef | grep SparkSubmit    # verify that the zombie has been reaped

[Image: assets/04.png]

We can see that the SparkSubmit process has now been killed successfully. Switching back to the old session window, the processes look as shown in the figure below:

[Image: assets/05.png]

2. A note on an issue when using Flink with Kafka

The Flink job reads data from Kafka, groups it, and computes per-group maxima over a window of configured size, consuming the data from the producer's Input topic and writing the results to the Output topic.
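
To exercise such a pipeline end to end, the stock Kafka console clients can stand in for the producer and the result reader (the broker address bd01:9092 is an assumption; older Kafka releases use --broker-list on the producer instead of --bootstrap-server):

  # feed test records into the Input topic
  kafka-console-producer.sh --bootstrap-server bd01:9092 --topic Input
  # watch the windowed maxima the job writes to the Output topic
  kafka-console-consumer.sh --bootstrap-server bd01:9092 --topic Output --from-beginning

Submitting the job, however, failed with the exception below: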

  Caused by: java.lang.ClassCastException: cannot assign instance of org.apache.commons.collections.map.LinkedMap to field org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit of type org.apache.commons.collections.map.LinkedMap in instance of org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
          at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
          at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
          at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293)
          at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
          at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
          at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
          at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
          at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
          at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
          at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
          at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
          at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
          at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
          at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615)
          at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600)
          at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587)
          at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:541)
          at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:322)
          ... 7 more

The cause of the error: the Kafka library is incompatible with Flink's inverted (child-first) class loading. Edit

  conf/flink-conf.yaml

under the Flink installation directory, then restart Flink:

  classloader.resolve-order: parent-first

Note that before executing

  bin/flink run --class <class_reference> your.jar

in Flink, the dependencies that the jar needs must be placed into the lib directory under the Flink installation.
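
Putting the two fixes together, a sketch of the full sequence (the connector jar name matches Flink 1.13.1 built for Scala 2.12; FLINK_HOME and the main class name are assumptions):

  # copy the Kafka connector jar the job depends on into Flink's lib directory
  cp flink-connector-kafka_2.12-1.13.1.jar $FLINK_HOME/lib/
  # switch class loading to parent-first
  echo "classloader.resolve-order: parent-first" >> $FLINK_HOME/conf/flink-conf.yaml
  # restart the Flink cluster and resubmit the job
  $FLINK_HOME/bin/stop-cluster.sh && $FLINK_HOME/bin/start-cluster.sh
  $FLINK_HOME/bin/flink run --class com.example.KafkaWindowMax your.jar   # hypothetical main class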

End of article!!!

Tags: flink scala spark

Reposted from: https://blog.csdn.net/m0_52735414/article/details/130306311
Copyright belongs to the original author, WHYBIGDATA. If there is any infringement, please contact us and we will remove it.
