

[Big Data Monitoring] Grafana, Spark, HDFS, YARN, HBase Metrics Monitoring: Detailed Installation and Deployment Guide

Grafana


Overview

Grafana is an open-source data visualization tool that makes it easy to turn data into charts for monitoring and statistics.

Download the package

  wget https://dl.grafana.com/enterprise/release/grafana-enterprise-9.1.6.linux-amd64.tar.gz

Installation

Extract

  tar -xvzf grafana-enterprise-9.1.6.linux-amd64.tar.gz
  mv grafana-9.1.6 /data/apps/
  cd /data/apps/grafana-9.1.6/conf
  cp sample.ini grafana.ini

Edit the configuration file

  vim grafana.ini

  [paths]
  # Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
  ;data = /var/lib/grafana
  data = /data/apps/grafana-9.1.6/data
  logs = /data/apps/grafana-9.1.6/logs
  plugins = /data/apps/grafana-9.1.6/plugins

  [log]
  mode = file
  level = warn

Create a service user

  groupadd -g 9100 monitor
  useradd -g 9100 -u 9100 -s /sbin/nologin -M monitor
  cd /data/apps/grafana-9.1.6
  mkdir data && mkdir logs && mkdir plugins
  cd /data/apps
  chown -R monitor:monitor grafana-9.1.6

Create a systemd service

  vim /usr/lib/systemd/system/grafana.service

  [Unit]
  Description=grafana service
  After=network.target

  [Service]
  User=monitor
  Group=monitor
  KillMode=control-group
  Restart=on-failure
  RestartSec=60
  ExecStart=/data/apps/grafana-9.1.6/bin/grafana-server -config /data/apps/grafana-9.1.6/conf/grafana.ini -pidfile /data/apps/grafana-9.1.6/grafana.pid -homepath /data/apps/grafana-9.1.6

  [Install]
  WantedBy=multi-user.target

Start Grafana

  systemctl daemon-reload
  systemctl restart grafana.service
  systemctl enable grafana.service
  systemctl status grafana
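Grafana listens on port 3000 by default. The exporters configured below expose Prometheus metrics, so a Prometheus data source is typically added next, either in the Grafana UI or provisioned from a file. The sketch below shows a minimal provisioning file; the file path and the Prometheus URL are assumptions for this layout, not part of the original setup:

```yaml
# Assumed location: /data/apps/grafana-9.1.6/conf/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090   # adjust to your Prometheus address
    isDefault: true
```

Grafana reads provisioning files at startup, so restart the service after adding one.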

Spark application monitoring with graphite_exporter

Configure the mapping file

  cd /usr/local/graphite_exporter
  vim graphite_exporter_mapping

  mappings:
  - match: '*.*.executor.filesystem.*.*'
    name: spark_app_filesystem_usage
    labels:
      application: $1
      executor_id: $2
      fs_type: $3
      qty: $4
  - match: '*.*.jvm.*.*'
    name: spark_app_jvm_memory_usage
    labels:
      application: $1
      executor_id: $2
      mem_type: $3
      qty: $4
  - match: '*.*.executor.jvmGCTime.count'
    name: spark_app_jvm_gcTime_count
    labels:
      application: $1
      executor_id: $2
  - match: '*.*.jvm.pools.*.*'
    name: spark_app_jvm_memory_pools
    labels:
      application: $1
      executor_id: $2
      mem_type: $3
      qty: $4
  - match: '*.*.executor.threadpool.*'
    name: spark_app_executor_tasks
    labels:
      application: $1
      executor_id: $2
      qty: $3
  - match: '*.*.BlockManager.*.*'
    name: spark_app_block_manager
    labels:
      application: $1
      executor_id: $2
      type: $3
      qty: $4
  - match: '*.*.DAGScheduler.*.*'
    name: spark_app_dag_scheduler
    labels:
      application: $1
      executor_id: $2
      type: $3
      qty: $4
  - match: '*.*.CodeGenerator.*.*'
    name: spark_app_code_generator
    labels:
      application: $1
      executor_id: $2
      type: $3
      qty: $4
  - match: '*.*.HiveExternalCatalog.*.*'
    name: spark_app_hive_external_catalog
    labels:
      application: $1
      executor_id: $2
      type: $3
      qty: $4
  - match: '*.*.*.StreamingMetrics.*.*'
    name: spark_app_streaming_metrics
    labels:
      application: $1
      executor_id: $2
      app_name: $3
      type: $4
      qty: $5
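To see what these rules do: each `*` in a `match` pattern captures one dot-separated segment of the incoming Graphite metric path, and `$N` in the labels refers to the N-th capture. The Python sketch below mimics this matching purely for illustration (it is not graphite_exporter's actual implementation, and the sample metric path is invented):

```python
import fnmatch

# One mapping rule from the file above: each '*' captures one dotted segment.
rule = {
    "match": "*.*.jvm.*.*",
    "name": "spark_app_jvm_memory_usage",
    "labels": {"application": "$1", "executor_id": "$2", "mem_type": "$3", "qty": "$4"},
}

def apply_rule(path, rule):
    """Return (metric_name, labels) if the path matches the rule's glob, else None."""
    parts = path.split(".")
    pattern_parts = rule["match"].split(".")
    if len(parts) != len(pattern_parts):
        return None
    if not all(fnmatch.fnmatch(p, pat) for p, pat in zip(parts, pattern_parts)):
        return None
    # $N refers to the N-th '*' capture, counting from 1.
    captures = [p for p, pat in zip(parts, pattern_parts) if pat == "*"]
    labels = {k: captures[int(v[1:]) - 1] for k, v in rule["labels"].items()}
    return rule["name"], labels

# A hypothetical metric path emitted by executor 2 of a Spark application:
print(apply_rule("application_1677000000_0001.2.jvm.heap.used", rule))
```

The path `application_1677000000_0001.2.jvm.heap.used` thus becomes the Prometheus series `spark_app_jvm_memory_usage{application="application_1677000000_0001", executor_id="2", mem_type="heap", qty="used"}`.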

Edit Spark's metrics.properties so that Spark pushes its metrics to graphite_exporter:

  *.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
  *.sink.graphite.host=10.253.128.31
  *.sink.graphite.port=9108
  *.sink.graphite.period=10
  *.sink.graphite.unit=seconds
  #*.sink.graphite.prefix=<optional_value>
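With this configuration, Spark's GraphiteSink pushes each metric once every 10 seconds to the configured host and port as a line of Graphite's plaintext protocol: `<path> <value> <unix_timestamp>`. The sketch below shows what one such line looks like; the metric path and value are invented for illustration:

```python
import time

def graphite_line(path, value, ts=None):
    """Format one metric in Graphite's plaintext protocol: 'path value timestamp'."""
    ts = int(ts if ts is not None else time.time())
    return f"{path} {value} {ts}"

# A hypothetical JVM heap metric from executor 2 of a Spark application:
line = graphite_line("application_1677000000_0001.2.jvm.heap.used", 104857600, ts=1700000000)
print(line)  # application_1677000000_0001.2.jvm.heap.used 104857600 1700000000
```

graphite_exporter receives these lines, applies the mapping rules above, and re-exposes the result on its Prometheus /metrics endpoint.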

HDFS monitoring

namenode.yaml

  ---
  startDelaySeconds: 0
  hostPort: localhost:1234   # replace localhost with the host IP if needed; 1234 is the JMX port to expose
  #jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1234/jmxrmi
  ssl: false
  lowercaseOutputName: false
  lowercaseOutputLabelNames: false

datanode.yaml

  ---
  startDelaySeconds: 0
  hostPort: localhost:1244   # replace localhost with the host IP if needed; 1244 is the JMX port (pick any unused port)
  #jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1244/jmxrmi
  ssl: false
  lowercaseOutputName: false
  lowercaseOutputLabelNames: false

Configure hadoop-env.sh

  export HADOOP_NAMENODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1234 -javaagent:/jmx_prometheus_javaagent-0.8.jar=9211:/namenode.yaml"
  export HADOOP_DATANODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1244 -javaagent:/jmx_prometheus_javaagent-0.8.jar=9212:/datanode.yaml"

YARN monitoring

yarn.yaml

  ---
  startDelaySeconds: 0
  hostPort: localhost:2111   # replace localhost with the host IP if needed; 2111 is the JMX port to expose
  #jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:2111/jmxrmi
  ssl: false
  lowercaseOutputName: false
  lowercaseOutputLabelNames: false

Configure yarn-env.sh

  export YARN_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=2111 -javaagent:/jmx_prometheus_javaagent-0.8.jar=9323:/yarn.yaml"

HBase monitoring

master.yaml

  ---
  startDelaySeconds: 0
  hostPort: IP:1254   # replace IP with the HMaster host's IP (or localhost); 1254 is the JMX port to expose
  #jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1254/jmxrmi
  ssl: false
  lowercaseOutputName: false
  lowercaseOutputLabelNames: false

regionserver.yaml

  ---
  startDelaySeconds: 0
  hostPort: IP:1255   # replace IP with the RegionServer host's IP (or localhost); 1255 is the JMX port to expose
  #jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:1255/jmxrmi
  ssl: false
  lowercaseOutputName: false
  lowercaseOutputLabelNames: false

Configure hbase-env.sh

  #======================================= prometheus jmx export start ===================================
  HBASE_M_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1254 -javaagent:/jmx_prometheus_javaagent-0.8.jar=9523:/master.yaml"
  HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1255 -javaagent:/jmx_prometheus_javaagent-0.8.jar=9522:/regionserver.yaml"
  #======================================= prometheus jmx export end ===================================
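For Prometheus to collect all of the above, each exporter endpoint must be registered as a scrape target. The fragment below is a minimal prometheus.yml sketch assuming every component runs on one host; replace localhost with the real addresses in a multi-node cluster (the job names are placeholders):

```yaml
scrape_configs:
  - job_name: spark      # graphite_exporter's Prometheus endpoint for Spark metrics
    static_configs: [{ targets: ["localhost:9108"] }]
  - job_name: hdfs       # jmx_exporter agents: 9211 namenode, 9212 datanode
    static_configs: [{ targets: ["localhost:9211", "localhost:9212"] }]
  - job_name: yarn       # jmx_exporter agent on the ResourceManager
    static_configs: [{ targets: ["localhost:9323"] }]
  - job_name: hbase      # jmx_exporter agents: 9523 HMaster, 9522 RegionServer
    static_configs: [{ targets: ["localhost:9523", "localhost:9522"] }]
```

Once these targets are up, point the Grafana Prometheus data source at this Prometheus instance and build dashboards from the `spark_app_*` and JMX metric names.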


Tags: big data, spark, hbase

This article is reposted from: https://blog.csdn.net/u013412066/article/details/129332983
Copyright belongs to the original author, 笑起来贼好看. If there is any infringement, please contact us for removal.
