When tuning Spark, it generally works well to keep each task's input + shuffle read volume at roughly 300-500 MB.
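One practical way to apply this rule of thumb is to back out the partition count from the stage's total data volume. The sketch below is a minimal illustration, not Spark API code; the function name and the 400 MB midpoint target are assumptions for the example:

```python
import math

# Hypothetical helper: given a stage's total bytes (input + shuffle read,
# as shown in the Spark UI), compute how many partitions are needed so
# each task processes roughly `target_per_task_bytes`.
def target_partitions(total_bytes: int, target_per_task_bytes: int = 400 * 1024**2) -> int:
    return max(1, math.ceil(total_bytes / target_per_task_bytes))

# Example: a stage reading 100 GiB total, aiming for ~400 MiB per task.
print(target_partitions(100 * 1024**3))  # 256
```

The resulting number can then be used to set `spark.sql.shuffle.partitions` or passed to `repartition()` for the stage in question.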
The Spark UI is documented here: https://spark.apache.org/docs/3.0.1/web-ui.html
The relevant paragraph reads:
- Input: Bytes read from storage in this stage
- Output: Bytes written in storage in this stage
- Shuffle read: Total shuffle bytes and records read, includes both data read locally and data read from remote executors
- Shuffle write: Bytes and records written to disk in order to be read by a shuffle in a future stage
Reposted from: https://blog.csdn.net/JH_Zhai/article/details/134144515
Copyright belongs to the original author, TaiKuLaHa. If there is any infringement, please contact us for removal.