

Installing HBase (Pseudo-Distributed Mode)

1. Install JDK 8

(1) Choose a JDK version

JDK version compatibility matrix: https://hbase.apache.org/book.html#java

JDK download (Huawei mirror): https://repo.huaweicloud.com/java/jdk/8u172-b11/

(2) Download and extract the JDK
  cd /usr/local
  wget https://repo.huaweicloud.com/java/jdk/8u172-b11/jdk-8u172-linux-x64.tar.gz
  tar zxf jdk-8u172-linux-x64.tar.gz
(3) Configure environment variables
  # vim /etc/profile
  export JAVA_HOME=/usr/local/jdk1.8.0_172
  export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
  export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
  # source /etc/profile
  # check the installed version
  java -version

References

https://blog.csdn.net/codedz/article/details/124044974

https://www.cnblogs.com/aerfazhe/p/15545946.html


2. Install Hadoop

(1) Add a hadoop user and configure passwordless SSH login
  useradd hadoop
  # passwd hadoop
  ssh-keygen -t rsa
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 0600 ~/.ssh/authorized_keys
(2) Download hadoop-3.4.0.tar.gz
  cd /usr/local
  wget https://dlcdn.apache.org/hadoop/core/stable/hadoop-3.4.0.tar.gz
  tar zxf hadoop-3.4.0.tar.gz
(3) Configure environment variables
  # vim /etc/profile
  export HADOOP_HOME=/usr/local/hadoop-3.4.0
  export HADOOP_MAPRED_HOME=$HADOOP_HOME
  export HADOOP_COMMON_HOME=$HADOOP_HOME
  export HADOOP_HDFS_HOME=$HADOOP_HOME
  export YARN_HOME=$HADOOP_HOME
  export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
  export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
  export HADOOP_INSTALL=$HADOOP_HOME
  # source /etc/profile
(4) Edit the configuration files
  cd /usr/local/hadoop-3.4.0/etc/hadoop/

  vim hadoop-env.sh

  export JAVA_HOME=/usr/local/jdk1.8.0_172

  vim core-site.xml

  <configuration>
    <property>
      <!-- fs.defaultFS replaces the deprecated name fs.default.name -->
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
    </property>
  </configuration>

  vim hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <!-- dfs.namenode.name.dir replaces the deprecated dfs.name.dir -->
      <name>dfs.namenode.name.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
    </property>
    <property>
      <!-- dfs.datanode.data.dir replaces the deprecated dfs.data.dir -->
      <name>dfs.datanode.data.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
    </property>
  </configuration>

  vim yarn-site.xml

  <configuration>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  </configuration>

  vim mapred-site.xml

  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
  </configuration>
(5) Change file ownership and switch to the hadoop user
  chown -R hadoop:hadoop hadoop-3.4.0
  su hadoop
(6) Format the Hadoop DFS
  hdfs namenode -format
(7) Start DFS
  cd $HADOOP_HOME
  ./sbin/start-dfs.sh
  # check the running processes
  jps

71681 NameNode
71834 DataNode
72068 SecondaryNameNode

(8) Start YARN and verify
  ./sbin/start-yarn.sh
  # check the running processes
  jps

71681 NameNode
71834 DataNode
72068 SecondaryNameNode
72289 ResourceManager
72396 NodeManager

(9) Access the web UIs in a browser
  http://localhost:9870
  http://localhost:8088
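
If no browser is available (for example on a headless server), the same check can be done over HTTP. A minimal Go sketch that probes both web UIs and reports their status codes:

  package main

  import (
      "fmt"
      "net/http"
  )

  func main() {
      // The NameNode and ResourceManager web UIs from the step above.
      urls := []string{
          "http://localhost:9870", // HDFS NameNode UI
          "http://localhost:8088", // YARN ResourceManager UI
      }
      for _, u := range urls {
          resp, err := http.Get(u)
          if err != nil {
              fmt.Printf("%s: unreachable (%v)\n", u, err)
              continue
          }
          resp.Body.Close()
          fmt.Printf("%s: HTTP %d\n", u, resp.StatusCode)
      }
  }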
(10) HDFS startup warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform
  This warning is harmless and can be left as-is. To silence it, add an environment variable pointing at Hadoop's bundled native libraries:
  # vim /etc/profile
  export JAVA_LIBRARY_PATH=/usr/local/hadoop-3.4.0/lib/native
  # source /etc/profile

References

https://blog.csdn.net/weixin_45678985/article/details/120497297

3. Install HBase

(1) Download hbase-2.5.8-hadoop3-bin.tar.gz
  cd /usr/local
  wget https://dlcdn.apache.org/hbase/2.5.8/hbase-2.5.8-hadoop3-bin.tar.gz
  tar zxf hbase-2.5.8-hadoop3-bin.tar.gz
(2) Configure environment variables
  # vim /etc/profile
  export HBASE_HOME=/usr/local/hbase-2.5.8-hadoop3
  export PATH=$PATH:$HBASE_HOME/bin
  # source /etc/profile
(3) Edit the configuration files
  cd /usr/local/hbase-2.5.8-hadoop3/conf

  vim hbase-env.sh

  export JAVA_HOME=/usr/local/jdk1.8.0_172
  export HBASE_MANAGES_ZK=true
  export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true

  vim hbase-site.xml

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/zookeeper</value>
  </property>
(4) Create the ZooKeeper data directory
  mkdir -p /home/zookeeper
  chown -R hadoop:hadoop /home/zookeeper
(5) Change file ownership and switch to the hadoop user
  chown -R hadoop:hadoop hbase-2.5.8-hadoop3
  su hadoop
(6) Start HBase
  cd $HBASE_HOME
  ./bin/start-hbase.sh
  # check the running processes
  jps

71681 NameNode
71834 DataNode
72068 SecondaryNameNode
72289 ResourceManager
72396 NodeManager
74089 HQuorumPeer
74186 HMaster
74271 HRegionServer

(7) Access the HBase web UI in a browser
  http://localhost:16010
(8) Verify from the command line with hbase shell
  ./bin/hbase shell
  > list
(9) Stop HBase
  ./bin/stop-hbase.sh
  # if any processes hang around, kill them manually:
  # kill -9 ${HRegionServer-PID} ${HMaster-PID} ${HQuorumPeer-PID}
(10) hbase shell error: ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
  Stop HBase, modify the configuration file as follows, and then restart:
  vim conf/hbase-env.sh
  export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true

References

https://hadoopdoc.com/hbase/hbase-install
https://cn.linux-console.net/?p=21626
https://blog.csdn.net/qq_45811072/article/details/121693142

4. Testing external connections with the gohbase client

(1) First, use hbase shell to create an emp table with a single column family a.
  ./bin/hbase shell
  # create the table
  > create 'emp', 'a'
  # list tables
  > list
  # describe the table structure
  > describe 'emp'
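
For reference, the same table can also be created programmatically through gohbase's admin client rather than hbase shell. A minimal sketch, assuming ZooKeeper is reachable at the placeholder address xxx.xxx.xxx.xxx:2181:

  package main

  import (
      "context"
      "log"

      "github.com/tsuna/gohbase"
      "github.com/tsuna/gohbase/hrpc"
  )

  func main() {
      // The admin client locates the HMaster via ZooKeeper;
      // xxx.xxx.xxx.xxx:2181 is a placeholder for the ZooKeeper quorum.
      adminClient := gohbase.NewAdminClient("xxx.xxx.xxx.xxx:2181")

      // Equivalent of the shell command: create 'emp', 'a'
      crt := hrpc.NewCreateTable(context.Background(), []byte("emp"), []string{"a"})
      if err := adminClient.CreateTable(crt); err != nil {
          log.Fatalln("CreateTable error:", err)
      }
      log.Println("table 'emp' created")
  }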
(2) Then use the gohbase client to insert a row of test data into HBase
  put 'emp','2','a:age','20'

This shell command inserts one row into the emp table: rowkey='2', column family='a', column='age', value='20'. The equivalent gohbase code:
  package main

  import (
      "context"
      "log"

      "github.com/tsuna/gohbase"
      "github.com/tsuna/gohbase/hrpc"
  )

  func main() {
      // xxx.xxx.xxx.xxx:2181 is the ZooKeeper quorum address.
      client := gohbase.NewClient("xxx.xxx.xxx.xxx:2181")
      defer client.Close()

      // Values maps ColumnFamily -> Qualifiers -> Values.
      // Note: the value must be the bytes of the string "20",
      // i.e. []byte("20"), not the single byte 20.
      values := map[string]map[string][]byte{"a": {"age": []byte("20")}}

      putRequest, err := hrpc.NewPutStr(context.Background(), "emp", "2", values)
      if err != nil {
          log.Fatalln("hrpc.NewPutStr error:", err)
      }

      rsp, err := client.Put(putRequest)
      if err != nil {
          log.Fatalln("client.Put error:", err)
      }
      log.Println("put response:", rsp)
  }
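
To confirm the write landed, the row can be read back with a Get. A minimal self-contained sketch under the same assumptions (placeholder ZooKeeper address, table and row from above):

  package main

  import (
      "context"
      "log"

      "github.com/tsuna/gohbase"
      "github.com/tsuna/gohbase/hrpc"
  )

  func main() {
      client := gohbase.NewClient("xxx.xxx.xxx.xxx:2181")
      defer client.Close()

      // Fetch the row written above: table 'emp', rowkey '2'.
      getRequest, err := hrpc.NewGetStr(context.Background(), "emp", "2")
      if err != nil {
          log.Fatalln("hrpc.NewGetStr error:", err)
      }
      rsp, err := client.Get(getRequest)
      if err != nil {
          log.Fatalln("client.Get error:", err)
      }
      // Expect a single cell: family "a", qualifier "age", value "20".
      for _, cell := range rsp.Cells {
          log.Printf("%s:%s = %s", cell.Family, cell.Qualifier, cell.Value)
      }
  }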
(3) Connection error: failed to dial RegionServer: dial tcp [::1]:16020: connectex: No connection could be made because the target machine actively refused it.

time="2024-06-04T10:51:31+08:00" level=info msg="added new region client" client="RegionClient{Addr: localhost:16020}"
time="2024-06-04T10:51:33+08:00" level=error msg="error occured, closing region client" client="RegionClient{Addr: localhost:16020}" err="failed to dial RegionServer: dial tcp [::1]:16020: connectex: No connection could be made because the target machine actively refused it."
time="2024-06-04T10:51:33+08:00" level=info msg="removed region client" client="RegionClient{Addr: localhost:16020}"

  The region server address that the client dials is returned by ZooKeeper, and by default ZooKeeper hands back the region server's hostname. Because no hostname was configured, the address returned was the local address localhost:16020, which an external client cannot reach. Furthermore, netstat -nltp shows that port 16020 is listening only on the 127.0.0.1 interface. Together these two issues make port 16020 unreachable from outside. (A Go connectivity check appears after the client-side fix below.)
  On the server:
  • set the hostname to node01;
  • find the host's externally reachable IP and edit the hosts file to map the hostname to that IP (do not map it to 127.0.0.1; use an address reachable from outside);
  • edit the regionservers configuration file, replacing the original localhost with the hostname.
  # set the hostname to node01
  hostnamectl set-hostname node01
  # map the hostname to the host's external IP
  ip addr
  vim /etc/hosts
  xxx.xxx.xxx.xxx node01
  # update the regionservers configuration file
  vim conf/regionservers
  node01
  On the client, likewise map the HBase server's hostname to its IP in the hosts file:
  xxx.xxx.xxx.xxx node01
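
Once both hosts files are in place, a quick way to confirm that the client can both locate the region server via ZooKeeper and actually dial it is to scan the hbase:meta catalog table. A minimal sketch, again with the placeholder address:

  package main

  import (
      "context"
      "io"
      "log"

      "github.com/tsuna/gohbase"
      "github.com/tsuna/gohbase/hrpc"
  )

  func main() {
      client := gohbase.NewClient("xxx.xxx.xxx.xxx:2181")
      defer client.Close()

      // Scanning hbase:meta forces the client to resolve the region
      // server address from ZooKeeper and open a connection to it,
      // so it fails fast if the hostname still resolves to localhost.
      scanRequest, err := hrpc.NewScanStr(context.Background(), "hbase:meta")
      if err != nil {
          log.Fatalln("hrpc.NewScanStr error:", err)
      }
      scanner := client.Scan(scanRequest)
      for {
          res, err := scanner.Next()
          if err == io.EOF {
              break
          }
          if err != nil {
              log.Fatalln("scanner.Next error:", err)
          }
          for _, cell := range res.Cells {
              log.Printf("meta row: %s", cell.Row)
          }
      }
  }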

References

https://www.cnblogs.com/shanheyongmu/p/15657255.html
https://segmentfault.com/a/1190000019857725
