

High Memory Usage in the Flink JobManager

Problem Description

When I started a simple Flink job locally, it failed with a heap OutOfMemoryError.
Without thinking much about it, I bumped the JVM heap settings to -Xms1000m -Xmx16000m, and the job started normally.
But after attaching jvisualvm to the JVM process, I saw that the heap had grown to a staggering 4-5 GB.
(Figure 1: jvisualvm view of the Flink JobManager JVM showing the large heap)

Troubleshooting

Only recently did I find the time to dig into why it needed so much memory.

First, remove the -Xms1000m -Xmx16000m heap settings from the JVM options and see where the program fails.

  Exception in thread "main" com.yyb.flink.core.exception.StreamBasicException: Context submit error
      at com.yyb.flink.core.context.AbstractContextProxy.submit(AbstractContextProxy.java:72)
      at com.yyb.flink.core.context.AbstractContextProxy.submit(AbstractContextProxy.java:101)
      at com.yyb.flink.app.table.dim.dataGen.JoinWithDataGenTable.main(JoinWithDataGenTable.java:39)
  Caused by: org.apache.flink.util.FlinkException: Failed to execute job 'JoinWithDataGenTable'.
      at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1969)
      at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1847)
      at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
      at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1833)
      at com.yyb.flink.core.context.AbstractContextProxy.IfPresentSinkExecute(AbstractContextProxy.java:94)
      at com.yyb.flink.core.context.AbstractContextProxy.submit(AbstractContextProxy.java:69)
      ... 2 more
  Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
      at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316)
      at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
      at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
      at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
      at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
      at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
      at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
      at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
      at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
  Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
      at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
      at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
      at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
      at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
      at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)
  Caused by: java.util.concurrent.CompletionException: java.lang.OutOfMemoryError: Java heap space
      at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
      at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
      at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
      ... 7 more
  Caused by: java.lang.OutOfMemoryError: Java heap space
      at java.util.ArrayDeque.allocateElements(ArrayDeque.java:147)
      at java.util.ArrayDeque.<init>(ArrayDeque.java:203)
      at org.apache.flink.runtime.executiongraph.failover.flip1.FailureRateRestartBackoffTimeStrategy.<init>(FailureRateRestartBackoffTimeStrategy.java:59)
      at org.apache.flink.runtime.executiongraph.failover.flip1.FailureRateRestartBackoffTimeStrategy$FailureRateRestartBackoffTimeStrategyFactory.create(FailureRateRestartBackoffTimeStrategy.java:153)
      at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:97)
      at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110)
      at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:340)
      at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:317)
      at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:107)
      at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
      at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory$$Lambda$1246/1142234774.get(Unknown Source)
      at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
      at org.apache.flink.util.function.FunctionUtils$$Lambda$1247/405573242.get(Unknown Source)
      at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
      ... 7 more

We then located the code at the bottom of the stack trace:
FailureRateRestartBackoffTimeStrategy.class

  FailureRateRestartBackoffTimeStrategy(
          Clock clock, int maxFailuresPerInterval, long failuresIntervalMS, long backoffTimeMS) {
      checkArgument(
              maxFailuresPerInterval > 0,
              "Maximum number of restart attempts per time unit must be greater than 0.");
      checkArgument(failuresIntervalMS > 0, "Failures interval must be greater than 0 ms.");
      checkArgument(backoffTimeMS >= 0, "Backoff time must be at least 0 ms.");

      this.failuresIntervalMS = failuresIntervalMS;
      this.backoffTimeMS = backoffTimeMS;
      this.maxFailuresPerInterval = maxFailuresPerInterval;
      this.failureTimestamps = new ArrayDeque<>(maxFailuresPerInterval); // here
      this.strategyString = generateStrategyString();
      this.clock = checkNotNull(clock);
  }

ArrayDeque.class

  public ArrayDeque(int numElements) {
      allocateElements(numElements);
  }

  private void allocateElements(int numElements) {
      elements = new Object[calculateSize(numElements)]; // here
  }

From this we can see that if numElements (i.e. maxFailuresPerInterval) is large, the constructor eagerly allocates an Object array of that size (rounded up to a power of two by calculateSize), which by itself can trigger a heap OutOfMemoryError.
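The effect can be reproduced outside Flink. The sketch below copies JDK 8's ArrayDeque.calculateSize rounding logic (the class name DequeSizeDemo is ours) to show what capacity ArrayDeque actually reserves for a given numElements:

```java
// Reproduces JDK 8's java.util.ArrayDeque.calculateSize: the requested
// capacity is rounded up to the next power of two before the backing
// Object[] is allocated.
public class DequeSizeDemo {
    static final int MIN_INITIAL_CAPACITY = 8;

    static int calculateSize(int numElements) {
        int initialCapacity = MIN_INITIAL_CAPACITY;
        if (numElements >= initialCapacity) {
            initialCapacity = numElements;
            // Smear the highest set bit downwards, then add one,
            // yielding the next power of two >= numElements.
            initialCapacity |= (initialCapacity >>> 1);
            initialCapacity |= (initialCapacity >>> 2);
            initialCapacity |= (initialCapacity >>> 4);
            initialCapacity |= (initialCapacity >>> 8);
            initialCapacity |= (initialCapacity >>> 16);
            initialCapacity++;
            if (initialCapacity < 0)    // overflow: too many elements
                initialCapacity >>>= 1; // back off to 2^30
        }
        return initialCapacity;
    }

    public static void main(String[] args) {
        // maxFailuresPerInterval = Integer.MAX_VALUE -> 2^30 reference
        // slots allocated up front, before a single failure has happened:
        // roughly 4 GB with compressed oops, 8 GB with 8-byte references.
        System.out.println(calculateSize(Integer.MAX_VALUE)); // 1073741824
        System.out.println(calculateSize(10_000));            // 16384
        System.out.println(calculateSize(3));                 // 8
    }
}
```

A 2^30-slot reference array matches the 4-5 GB heap observed in jvisualvm almost exactly.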
Then I remembered that we had once set the failure count in Flink's FailureRateRestartStrategyConfiguration to Integer.MAX_VALUE, and everything clicked.
Why such an enormous restart count? At the time, downloads of S3 files would occasionally time out, so we set the failure-rate restart strategy to allow Integer.MAX_VALUE failures, never expecting it to inflate the JobManager's memory footprint like this.
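A bounded strategy can also be set in configuration instead of code. A sketch using the classic flink-conf.yaml keys (key names taken from older Flink releases; verify against the documentation for your version):

```yaml
restart-strategy: failure-rate
# This value is pre-allocated as an ArrayDeque in the JobMaster, so keep it small.
restart-strategy.failure-rate.max-failures-per-interval: 3
restart-strategy.failure-rate.failure-rate-interval: 5 min
restart-strategy.failure-rate.delay: 10 s
```

Raising the interval, rather than the failure count, is usually the better way to tolerate flaky external systems such as S3.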

Results

With the FailureRateRestartStrategyConfiguration failure count set to 3:
(Figure 2: jvisualvm view of the JobManager JVM heap)
With the failure count set to 10000:
(Figure 3: jvisualvm view of the JobManager JVM heap)

Tags: flink jvm java

Reposted from: https://blog.csdn.net/u010374412/article/details/128130690
Copyright belongs to the original author, 姜上清风. In case of infringement, please contact us for removal.
