

[Big Data] Hadoop Calls for Hive (with a One-Click Hive Deployment Script)


I. Preparation

1. Download the Hive package

Hive download page
This article deploys apache-hive-3.1.2-bin.tar.gz; pick whichever version suits your needs.
Hive documentation page

2. Download the MySQL RPM packages

  1. 01_mysql-community-common-5.7.29-1.el7.x86_64.rpm
  2. 02_mysql-community-libs-5.7.29-1.el7.x86_64.rpm
  3. 03_mysql-community-libs-compat-5.7.29-1.el7.x86_64.rpm
  4. 04_mysql-community-client-5.7.29-1.el7.x86_64.rpm
  5. 05_mysql-community-server-5.7.29-1.el7.x86_64.rpm
  6. mysql-connector-java-5.1.48.jar

3. Download the Tez engine package for Hive

  1. tez-0.10.1-SNAPSHOT.tar.gz

4. Put the shell script under /usr/bin

Grant the script execute permission:

  cd /usr/bin
  chmod +x hive-install.sh

5. Start the Hadoop cluster

Run the command:

  one 10
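
Before moving on, it is worth confirming that the cluster is actually up. A minimal sanity check (a sketch; it assumes the standard Hadoop daemons, and the exact process list depends on your cluster layout):

  # list the Java daemons on this node; on the master you would expect NameNode and ResourceManager
  jps
  # make sure HDFS has left safe mode before running the Hive scripts
  hdfs dfsadmin -safemode get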


To get the installation packages, follow the WeChat public account 纯码农 (purecodefarmer) and send "hive" to receive the download link.
For one-click Hadoop cluster deployment, see "[Big Data] Building a Hadoop Cluster (with a one-click deployment script)".
Note: put the downloaded packages into the /opt/software/hive directory on the Hadoop cluster's main host.

II. Setting up Hive (script walkthrough)

1. Install MySQL

Before installing the MySQL packages, remove the mysql-libs/mariadb packages that ship with the OS.
Check whether any are installed with:

  rpm -qa | grep -i -E 'mysql|mariadb'


1) Uninstall the existing MySQL

Script content:

  echo "------- removing the existing mysql -------"
  systemctl stop mysqld
  service mysql stop 2>/dev/null
  service mysqld stop 2>/dev/null
  rpm -qa | grep -i mysql | xargs -n1 rpm -e --nodeps 2>/dev/null
  rpm -qa | grep -i mariadb | xargs -n1 rpm -e --nodeps 2>/dev/null
  rm -rf /var/lib/mysql
  rm -rf /usr/lib64/mysql
  rm -rf /etc/my.cnf
  rm -rf /usr/my.cnf
  rm -rf /var/log/mysqld.log

Run the script:

  sh hive-install.sh 1

2) Install the MySQL packages

Script content:

  cd /opt/software/hive
  echo "------- installing mysql dependencies -------"
  rpm -ivh 01_mysql-community-common-5.7.29-1.el7.x86_64.rpm
  rpm -ivh 02_mysql-community-libs-5.7.29-1.el7.x86_64.rpm
  rpm -ivh 03_mysql-community-libs-compat-5.7.29-1.el7.x86_64.rpm
  echo "------- installing mysql-client -------"
  rpm -ivh 04_mysql-community-client-5.7.29-1.el7.x86_64.rpm
  echo "------- installing mysql-server -------"
  rpm -ivh 05_mysql-community-server-5.7.29-1.el7.x86_64.rpm
  echo "------- starting mysql -------"
  systemctl start mysqld
  pass=`cat /var/log/mysqld.log | grep password`
  pass1=`echo ${pass#*localhost: }`
  echo "------- mysql initial password -------" $pass1
  echo "------- logging in to mysql -------"
  mysql -uroot -p$pass1

Run the script:

  sh hive-install.sh 1

3) Change MySQL's default password

  # set a complex password first
  mysql> set password=password("Jimmy!1@2#3");
  # relax the password policy
  mysql> set global validate_password_length=4;
  mysql> set global validate_password_policy=0;
  # then set a simple password
  mysql> set password=password("1qazxsw2");
  # switch to the mysql database and allow root to log in from any host
  mysql> use mysql;
  mysql> update user set host="%" where user="root";
  # flush privileges and quit
  mysql> flush privileges;
  mysql> quit;
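
To confirm the password and host changes took effect, a quick check from the shell (a sketch, using the simple password set above) could be:

  # should list the root accounts, including one with host "%"
  mysql -uroot -p1qazxsw2 -e "select user, host from mysql.user;"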

2. Install Hive

After the MySQL packages are installed, install the Hive distribution.

1) Install Hive and configure environment variables

Script content:

  echo "------- extracting the Hive package -------"
  tar -zxvf /opt/software/hive/apache-hive-3.1.2-bin.tar.gz -C /opt/module/
  echo "------- renaming the extracted directory -------"
  mv /opt/module/apache-hive-3.1.2-bin/ /opt/module/hive312
  echo "------- configuring Hive environment variables -------"
  echo "" >> /etc/profile.d/my_env.sh
  echo "#Hive environment" >> /etc/profile.d/my_env.sh
  echo "export HIVE_HOME=/opt/module/hive312" >> /etc/profile.d/my_env.sh
  echo "export PATH=\$PATH:\${HIVE_HOME}/bin" >> /etc/profile.d/my_env.sh
  echo "------- resolving the logging jar conflict -------"
  mv /opt/module/hive312/lib/log4j-slf4j-impl-2.10.0.jar /opt/module/hive312/lib/log4j-slf4j-impl-2.10.0.bak
  echo "------- copying the MySQL driver into Hive's lib directory -------"
  cp /opt/software/hive/mysql-connector-java-5.1.48.jar /opt/module/hive312/lib

Run the script:

  sh hive-install.sh 2

Note: after configuring the environment variables, reload them (or open a new shell session); otherwise they will not take effect.
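
For example, to reload them in the current shell (a sketch, assuming the variables were written to /etc/profile.d/my_env.sh as in the script above):

  source /etc/profile.d/my_env.sh
  # quick check that the variable is visible
  echo $HIVE_HOME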

2) Point the Metastore at MySQL

Script content:

  echo "------- configuring the Hive metastore -------"
  cd $HIVE_HOME/conf/
  rm -rf hive-site.xml
  sudo touch hive-site.xml
  echo '<?xml version="1.0"?>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<configuration>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>javax.jdo.option.ConnectionURL</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>jdbc:mysql://hdp101:3306/metastore?useSSL=false</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>javax.jdo.option.ConnectionDriverName</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>com.mysql.jdbc.Driver</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>javax.jdo.option.ConnectionUserName</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>root</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>javax.jdo.option.ConnectionPassword</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>1qazxsw2</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.metastore.warehouse.dir</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>/user/hive/warehouse</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.metastore.schema.verification</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>false</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.metastore.uris</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>thrift://hdp101:9083</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.server2.thrift.port</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>10000</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.server2.thrift.bind.host</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>hdp101</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.metastore.event.db.notification.api.auth</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>false</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  # switch Hive's execution engine to tez
  echo "------- setting Hive's execution engine -------"
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.execution.engine</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>tez</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<name>hive.tez.container.size</name>' >> $HIVE_HOME/conf/hive-site.xml
  echo '<value>1024</value>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
  echo '</configuration>' >> $HIVE_HOME/conf/hive-site.xml

Run the script:

  sh hive-install.sh 3
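
Since the file is assembled line by line, a quick well-formedness check is worthwhile (a sketch; xmllint comes from libxml2 and may need to be installed separately):

  # prints nothing if hive-site.xml is valid XML
  xmllint --noout $HIVE_HOME/conf/hive-site.xml
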
3. Install and configure Tez

Note: start the Hadoop cluster before installing Tez.
Script content:

  echo "------- installing tez -------"
  mkdir /opt/module/tez
  tar -zxvf /opt/software/hive/tez-0.10.1-SNAPSHOT.tar.gz -C /opt/module/tez
  echo "------- uploading the tez tarball to HDFS -------"
  hadoop fs -mkdir /tez
  hadoop fs -put /opt/software/hive/tez-0.10.1-SNAPSHOT.tar.gz /tez
  echo "------- writing tez-site.xml -------"
  cd $HADOOP_HOME/etc/hadoop
  rm -rf tez-site.xml
  sudo touch tez-site.xml
  echo '<?xml version="1.0" encoding="UTF-8"?>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<configuration>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.lib.uris</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>${fs.defaultFS}/tez/tez-0.10.1-SNAPSHOT.tar.gz</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.use.cluster.hadoop-libs</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>true</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.am.resource.memory.mb</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>1024</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.am.resource.cpu.vcores</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>1</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.container.max.java.heap.fraction</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>0.4</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.task.resource.memory.mb</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>1024</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<name>tez.task.resource.cpu.vcores</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '<value>1</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo '</configuration>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
  echo "------- updating Hadoop environment variables -------"
  echo "export TEZ_CONF_DIR=\$HADOOP_HOME/etc/hadoop" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
  echo "export TEZ_JARS=/opt/module/tez" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
  echo "export HADOOP_CLASSPATH=\$HADOOP_CLASSPATH:\${TEZ_CONF_DIR}:\${TEZ_JARS}/*:\${TEZ_JARS}/lib/*" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
  echo "------- resolving the jar conflict: removing tez's slf4j binding -------"
  rm /opt/module/tez/lib/slf4j-log4j12-1.7.10.jar

Run the script:

  sh hive-install.sh 4
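
Because tez.lib.uris points at the tarball in HDFS, it is worth confirming the upload succeeded before running any queries (a sketch):

  hadoop fs -ls /tez
  # expected output includes /tez/tez-0.10.1-SNAPSHOT.tar.gz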


4. Start and stop Hive

1) Create Hive's metastore database

  # log in to MySQL
  mysql -uroot -p1qazxsw2
  # create the metastore database; the name "metastore" matches hive-site.xml
  mysql> create database metastore;
  # quit mysql
  mysql> quit;

2) Initialize the metastore schema

  cd /opt/module/hive312/bin/
  schematool -initSchema -dbType mysql -verbose
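
If the initialization succeeds, the metastore database is populated with Hive's schema tables; a quick spot check (a sketch, using the password set earlier):

  mysql -uroot -p1qazxsw2 -e "use metastore; show tables;" | head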


3) Start Hive
Script content (check_process is the helper function defined in the full script in Part III):

  echo "------- starting Hive -------"
  HIVE_LOG_DIR=$HIVE_HOME/logs
  mkdir -p $HIVE_LOG_DIR
  metapid=$(check_process HiveMetastore 9083)
  cmd="nohup hive --service metastore >$HIVE_LOG_DIR/metastore.log 2>&1 &"
  cmd=$cmd" sleep 4; hdfs dfsadmin -safemode wait >/dev/null 2>&1"
  [ -z "$metapid" ] && eval $cmd || echo "------- Metastore is already running -------"
  server2pid=$(check_process HiveServer2 10000)
  cmd="nohup hive --service hiveserver2 >$HIVE_LOG_DIR/hiveServer2.log 2>&1 &"
  [ -z "$server2pid" ] && eval $cmd || echo "------- HiveServer2 is already running -------"

Run the script:

  sh hive-install.sh 5
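
Once HiveServer2 is up, connecting through Beeline is a good end-to-end check (a sketch; the host and port come from hive-site.xml above, and the user name is only an example, so use one your Hadoop proxy-user settings allow):

  beeline -u jdbc:hive2://hdp101:10000 -n root
  # inside beeline, "show databases;" should at least list "default"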

4) Stop Hive
Script content:

  echo "------- stopping Hive -------"
  metapid=$(check_process HiveMetastore 9083)
  [ "$metapid" ] && kill $metapid || echo "------- Metastore is already stopped -------"
  server2pid=$(check_process HiveServer2 10000)
  [ "$server2pid" ] && kill $server2pid || echo "------- HiveServer2 is already stopped -------"

Run the script:

  sh hive-install.sh 6

5) Restart Hive
Run the script:

  sh hive-install.sh 7

6) Change Hive's log directory

  cd /opt/module/hive312/conf
  mv hive-log4j2.properties.template hive-log4j2.properties
  vim hive-log4j2.properties
  # change the following line, then restart Hive
  property.hive.log.dir = /opt/module/hive312/logs
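
After restarting, the logs should land in the new directory (a sketch; "hive.log" is the default log file name in hive-log4j2.properties and may differ if you changed it):

  tail -n 20 /opt/module/hive312/logs/hive.log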

III. The one-click Hive deployment script

Script content:

  #!/bin/bash

  # uninstall any existing MySQL / MariaDB
  uninstall_mysql(){
    echo "------- removing the existing mysql -------"
    systemctl stop mysqld
    service mysql stop 2>/dev/null
    service mysqld stop 2>/dev/null
    rpm -qa | grep -i mysql | xargs -n1 rpm -e --nodeps 2>/dev/null
    rpm -qa | grep -i mariadb | xargs -n1 rpm -e --nodeps 2>/dev/null
    rm -rf /var/lib/mysql
    rm -rf /usr/lib64/mysql
    rm -rf /etc/my.cnf
    rm -rf /usr/my.cnf
    rm -rf /var/log/mysqld.log
  }

  # install the MySQL packages
  install_mysql(){
    cd /opt/software/hive
    echo "------- installing mysql dependencies -------"
    rpm -ivh 01_mysql-community-common-5.7.29-1.el7.x86_64.rpm
    rpm -ivh 02_mysql-community-libs-5.7.29-1.el7.x86_64.rpm
    rpm -ivh 03_mysql-community-libs-compat-5.7.29-1.el7.x86_64.rpm
    echo "------- installing mysql-client -------"
    rpm -ivh 04_mysql-community-client-5.7.29-1.el7.x86_64.rpm
    echo "------- installing mysql-server -------"
    rpm -ivh 05_mysql-community-server-5.7.29-1.el7.x86_64.rpm
    echo "------- starting mysql -------"
    systemctl start mysqld
    password=`cat /var/log/mysqld.log | grep password`
    password1=`echo ${password#*localhost: }`
    echo "------- mysql initial password -------" $password1
    echo "------- logging in to mysql -------"
    mysql -uroot -p$password1
  }

  # install Hive
  install_hive(){
    echo "------- extracting the Hive package -------"
    tar -zxvf /opt/software/hive/apache-hive-3.1.2-bin.tar.gz -C /opt/module/
    echo "------- renaming the extracted directory -------"
    mv /opt/module/apache-hive-3.1.2-bin/ /opt/module/hive312
    echo "------- configuring Hive environment variables -------"
    echo "" >> /etc/profile.d/my_env.sh
    echo "#Hive environment" >> /etc/profile.d/my_env.sh
    echo "export HIVE_HOME=/opt/module/hive312" >> /etc/profile.d/my_env.sh
    echo "export PATH=\$PATH:\${HIVE_HOME}/bin" >> /etc/profile.d/my_env.sh
    echo "------- resolving the logging jar conflict -------"
    mv /opt/module/hive312/lib/log4j-slf4j-impl-2.10.0.jar /opt/module/hive312/lib/log4j-slf4j-impl-2.10.0.bak
    echo "------- copying the MySQL driver into Hive's lib directory -------"
    cp /opt/software/hive/mysql-connector-java-5.1.48.jar /opt/module/hive312/lib
  }

  # write hive-site.xml (metastore configuration)
  hive_config(){
    echo "------- configuring the Hive metastore -------"
    HIVE_HOME=/opt/module/hive312
    cd $HIVE_HOME/conf/
    rm -rf hive-site.xml
    sudo touch hive-site.xml
    echo '<?xml version="1.0"?>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<configuration>' >> $HIVE_HOME/conf/hive-site.xml
    # the Hive metastore database is named "metastore"
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>javax.jdo.option.ConnectionURL</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>jdbc:mysql://hdp101:3306/metastore?useSSL=false</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>javax.jdo.option.ConnectionDriverName</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>com.mysql.jdbc.Driver</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>javax.jdo.option.ConnectionUserName</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>root</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>javax.jdo.option.ConnectionPassword</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>1qazxsw2</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.metastore.warehouse.dir</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>/user/hive/warehouse</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.metastore.schema.verification</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>false</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.metastore.uris</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>thrift://hdp101:9083</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.server2.thrift.port</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>10000</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.server2.thrift.bind.host</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>hdp101</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.metastore.event.db.notification.api.auth</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>false</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    # switch Hive's execution engine to tez
    echo "------- setting Hive's execution engine -------"
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.execution.engine</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>tez</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<name>hive.tez.container.size</name>' >> $HIVE_HOME/conf/hive-site.xml
    echo '<value>1024</value>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</property>' >> $HIVE_HOME/conf/hive-site.xml
    echo '</configuration>' >> $HIVE_HOME/conf/hive-site.xml
  }

  # install and configure Tez
  install_tez(){
    echo "------- installing tez -------"
    mkdir /opt/module/tez
    tar -zxvf /opt/software/hive/tez-0.10.1-SNAPSHOT.tar.gz -C /opt/module/tez
    echo "------- uploading the tez tarball to HDFS -------"
    hadoop fs -mkdir /tez
    hadoop fs -put /opt/software/hive/tez-0.10.1-SNAPSHOT.tar.gz /tez
    echo "------- writing tez-site.xml -------"
    cd $HADOOP_HOME/etc/hadoop
    rm -rf tez-site.xml
    sudo touch tez-site.xml
    echo '<?xml version="1.0" encoding="UTF-8"?>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<configuration>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.lib.uris</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>${fs.defaultFS}/tez/tez-0.10.1-SNAPSHOT.tar.gz</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.use.cluster.hadoop-libs</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>true</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.am.resource.memory.mb</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>1024</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.am.resource.cpu.vcores</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>1</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.container.max.java.heap.fraction</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>0.4</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.task.resource.memory.mb</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>1024</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<name>tez.task.resource.cpu.vcores</name>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '<value>1</value>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</property>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo '</configuration>' >> $HADOOP_HOME/etc/hadoop/tez-site.xml
    echo "------- updating Hadoop environment variables -------"
    echo "export TEZ_CONF_DIR=\$HADOOP_HOME/etc/hadoop" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    echo "export TEZ_JARS=/opt/module/tez" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    echo "export HADOOP_CLASSPATH=\$HADOOP_CLASSPATH:\${TEZ_CONF_DIR}:\${TEZ_JARS}/*:\${TEZ_JARS}/lib/*" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    echo "------- resolving the jar conflict: removing tez's slf4j binding -------"
    rm -rf /opt/module/tez/lib/slf4j-log4j12-1.7.10.jar
  }

  # check whether a process is running; $1 = process name, $2 = port
  function check_process(){
    pid=$(ps -ef 2>/dev/null | grep -v grep | grep -i $1 | awk '{print $2}')
    ppid=$(netstat -nltp 2>/dev/null | grep $2 | awk '{print $7}' | cut -d '/' -f 1)
    echo $pid
    [[ "$pid" =~ "$ppid" ]] && [ "$ppid" ] && return 0 || return 1
  }

  # start Hive (metastore + HiveServer2)
  hive_start(){
    echo "------- starting Hive -------"
    HIVE_LOG_DIR=$HIVE_HOME/logs
    mkdir -p $HIVE_LOG_DIR
    metapid=$(check_process HiveMetastore 9083)
    cmd="nohup hive --service metastore >$HIVE_LOG_DIR/metastore.log 2>&1 &"
    cmd=$cmd" sleep 4; hdfs dfsadmin -safemode wait >/dev/null 2>&1"
    [ -z "$metapid" ] && eval $cmd || echo "------- Metastore is already running -------"
    server2pid=$(check_process HiveServer2 10000)
    cmd="nohup hive --service hiveserver2 >$HIVE_LOG_DIR/hiveServer2.log 2>&1 &"
    [ -z "$server2pid" ] && eval $cmd || echo "------- HiveServer2 is already running -------"
  }

  # stop Hive
  hive_stop(){
    echo "------- stopping Hive -------"
    metapid=$(check_process HiveMetastore 9083)
    [ "$metapid" ] && kill $metapid || echo "------- Metastore is already stopped -------"
    server2pid=$(check_process HiveServer2 10000)
    [ "$server2pid" ] && kill $server2pid || echo "------- HiveServer2 is already stopped -------"
  }

  # run the step the user selected
  custom_option(){
    case $1 in
    1)
      uninstall_mysql
      install_mysql
      ;;
    2)
      install_hive
      ;;
    3)
      hive_config
      ;;
    4)
      install_tez
      ;;
    5)
      hive_start
      ;;
    6)
      hive_stop
      ;;
    7)
      hive_stop
      sleep 2
      hive_start
      ;;
    *)
      echo "please option 1~7"
    esac
  }

  # $1 selects which step to run
  custom_option $1

Run the script:

  # the argument selects the step to run:
  #   1: install MySQL
  #   2: install Hive
  #   3: configure Hive
  #   4: install the Tez engine
  #   5: start Hive
  #   6: stop Hive
  #   7: restart Hive
  sh hive-install.sh [1~7]
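
On a fresh machine the steps are typically run in order, with the manual MySQL password change and metastore initialization in between (a sketch of the full sequence):

  sh hive-install.sh 1        # remove old MySQL, install MySQL, print the initial password
  # change the MySQL password and root host manually (section II.1.3)
  sh hive-install.sh 2        # install Hive
  source /etc/profile.d/my_env.sh
  sh hive-install.sh 3        # write hive-site.xml
  sh hive-install.sh 4        # install and configure Tez (Hadoop must be running)
  # create the metastore database and run schematool (section II.4)
  sh hive-install.sh 5        # start the metastore and HiveServer2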

Source: https://blog.csdn.net/m0_37172770/article/details/127127573 (original author: 纯码农).
