Hive Environment Installation

0. Hive Environment Installation
Windows
0. Unzip
1. Configure the environment variables
HIVE_HOME D:\1.tools\apache-hive-3.1.3
path %HIVE_HOME%\bin
Higher Hive versions cannot run on Windows, so the bin directory must be replaced:
download http://archive.apache.org/dist/hive/hive-1.0.0/ and replace the original bin with it.
Download and copy a mysql-connector-java-8.0.x.jar into the $HIVE_HOME/lib directory.
hive-default.xml.template -----> hive-site.xml
hive-env.sh.template -----> hive-env.sh
hive-exec-log4j.properties.template -----> hive-exec-log4j2.properties
hive-log4j.properties.template -----> hive-log4j2.properties
Download hive.cmd and place it in bin (see References). A command sketch of these steps follows.
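A sketch of the template renames from a Windows command prompt (the install path is taken from above):

cd /d D:\1.tools\apache-hive-3.1.3\conf
copy hive-default.xml.template hive-site.xml
copy hive-env.sh.template hive-env.sh
copy hive-exec-log4j.properties.template hive-exec-log4j2.properties
copy hive-log4j.properties.template hive-log4j2.properties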

hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
# (this must be the Hadoop installation directory, not the Hive directory)
HADOOP_HOME=D:\1.tools\hadoop-x.y.z

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=D:\1.tools\apache-hive-3.1.3\conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=D:\1.tools\apache-hive-3.1.3\lib
hive-site.xml

<property>
    <name>hive.metastore.db.type</name>
    <value>mysql</value>
    <description>
    Expects one of [derby, oracle, mysql, mssql, postgres].
    Type of database used by the metastore. Information schema &amp; JDBCStorageHandler depend on it.
    </description>
</property>

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>D:/1.tools/apache-hive-3.1.3/my_hive/scratch_dir</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>D:/1.tools/apache-hive-3.1.3/my_hive/resources_dir/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>D:/1.tools/apache-hive-3.1.3/my_hive/querylog_dir</value>
    <description>Location of Hive run time structured log file</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>D:/1.tools/apache-hive-3.1.3/my_hive/operation_logs_dir</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?serverTimezone=UTC&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>
    JDBC connect string for a JDBC metastore.
    </description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
</property>
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
    schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
</property>
<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
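Because hive.metastore.warehouse.dir and hive.exec.scratchdir above point at HDFS, those directories generally need to exist and be writable before running jobs (a sketch, to run once HDFS is up; the 733 permission follows the scratchdir description above):

hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /tmp/hive
hadoop fs -chmod g+w /user/hive/warehouse
hadoop fs -chmod -R 733 /tmp/hive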

Set hive.server2.thrift.http.port to 10001 and hive.server2.thrift.port to 10005.
In hive-site.xml, search for hive.server2.active.passive.ha.enable and change its value to true.
Copy hive-common-3.1.3.jar and hive-jdbc-3.1.3.jar from the Hive installation directory into \share\hadoop\common\lib under the Hadoop installation directory.

2. Grant privileges to the hive user
Open mysql:
use mysql;
select host,user from user;
create user 'hive'@'%' identified by '123456';
create user 'hive'@'localhost' identified by '123456';
grant usage on *.* to 'hive'@'%' with grant option;
grant usage on *.* to 'hive'@'localhost' with grant option;
grant select,insert,update,delete,create,drop on *.* to 'hive'@'%' with grant option;
grant select,insert,update,delete,create,drop on *.* to 'hive'@'localhost' with grant option;
flush privileges;

3. Initialize Hive
In the Hive bin directory:
hive.cmd --service schematool -dbType mysql -initSchema --verbose

4. Start the Hive query services
hive --service metastore      // serves metadata connections (JDBC data source)
hive --service hiveserver2    // serves client connections; the commonly used one

1) Direct connection: connect straight to the metastore database in MySQL.
2) Connection via services: Hive has two services, metastore and hiveserver2;
Hive reaches the metadata stored in MySQL through the metastore service.
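To confirm that the initialization in step 3 succeeded, the metastore tables can be inspected directly in MySQL (a sketch; the database name hive comes from the JDBC URL above, the credentials from step 2, and VERSION is the metastore's schema-version table):

mysql -u hive -p123456 -e "use hive; show tables; select * from VERSION;"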
5. Connecting to Hive from DBeaver
Edit the driver and add the lib folder under the Hive directory to its libraries; the connection will then succeed.
Start metastore and hiveserver2 before connecting.
Reference: https://zhuanlan.zhihu.com/p/565711345
If the connection fails, change the host from localhost to the IP, here 10.2.154.166.
If "Connection refused: connect" appears, check that the port configured below matches the one in the JDBC URL; here it is 10005.
The port is determined by hive.server2.thrift.port in hive-site.xml;
if unconfigured, the defaults are port 10000 and the bind host.

<property>
    <name>hive.server2.thrift.port</name>
    <value>10005</value>
</property>
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>localhost</value>
</property>
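With these values, the JDBC URL entered in DBeaver would look like the following (the /default database name is an assumption; default is Hive's built-in database):

jdbc:hive2://10.2.154.166:10005/default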

6. Connecting with Beeline on Windows

beeline -u jdbc:hive2://10.2.154.166:10005 -n hive

!connect jdbc:hive2://10.2.154.166:10005
hive 123456    (the username and password, entered at the prompts)
show tables;
!exit
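The same check can also be run non-interactively (a sketch; host, port, and credentials are taken from the session above):

beeline -u jdbc:hive2://10.2.154.166:10005 -n hive -p 123456 -e "show tables;"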
The port is determined by hive.server2.thrift.port in hive-site.xml;
the default is 10000.
Troubleshooting reference: https://blog.csdn.net/wxplol/article/details/116141911
Connection reference: https://www.cnblogs.com/alichengxuyuan/p/12576899.html
7. Error: xxx is not allowed to impersonate anonymous
Add the following configuration to Hadoop's core-site.xml and restart HDFS,
where "xxx" is the user connecting through Beeline; replace "xxx" with your own username.

<property>
    <name>hadoop.proxyuser.xxx.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.xxx.groups</name>
    <value>*</value>
</property>

"*" means any user, user group, or host may operate on Hadoop through the super-proxy "xxx".
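A sketch of the HDFS restart on Windows (assuming Hadoop's sbin directory is on the PATH):

stop-dfs.cmd
start-dfs.cmd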
Reference: https://www.cnblogs.com/duaner92/p/14368889.html
This completes the Hadoop and Hive environment setup on Windows.
Reference: https://blog.csdn.net/xieedeni/article/details/120346162
"hive is not recognized as an internal command" (download hive.cmd): https://www.jianshu.com/p/c1fda44fa292
Error executing SQL query "select "DB_ID" from "DBS"":
see https://blog.csdn.net/quiet_girl/article/details/75209070
Creating MySQL users and granting privileges: https://blog.csdn.net/weixin_35094917/article/details/113899057


Mac
1. Install Hadoop first, then install Hive:
brew install hive
Move the tools.jar package from /lib/tools.jar into Hive's lib directory.
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/sudo
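The warehouse path configured in hive-site.xml below should also exist and be group-writable (a sketch; /data/hive/warehouse matches the hive.metastore.warehouse.dir value used later in this section):

hdfs dfs -mkdir -p /data/hive/warehouse
hdfs dfs -chmod g+w /data/hive/warehouse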
2. Grant privileges to the hive user
In mysql, enter:
create user 'hive' identified by 'hive';
grant all on *.* to 'hive'@'localhost' identified by 'hive';
flush privileges;
select host,user,authentication_string from mysql.user;
Log in as the hive user:
mysql -u hive -p
create database hive;
show databases;
3. Configure the environment variables
export HIVE_HOME=/usr/local/Cellar/hive/3.1.2
export PATH=$HIVE_HOME/bin:$PATH
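After adding the exports, reload the shell profile so they take effect (assuming the exports live in ~/.bash_profile):

source ~/.bash_profile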

Add the following to hive-site.xml (if the file does not exist, create it under libexec/conf):

<property>
    <name>hive.metastore.local</name>
    <value>true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>000000</value>
</property>

<!-- Directory where Hive stores the execution plans for the different map/reduce stages,
as well as intermediate output. The default is /tmp/<user.name>/hive; in practice it is usually
separated by group, with each group creating its own tmp directory for storage. -->

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive</value>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/tmp/hive</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/data/hive/warehouse</value>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/tmp/hive</value>
</property>

4. Initialize the Hive database
Initialization creates a large number of tables in the database.
Commands:
cd /usr/local/Cellar/hive/3.1.3/libexec/bin
schematool -initSchema -dbType mysql
Test: enter the hive command on the command line:
hive
5. Start the Hive query services
hive --service metastore 2>&1 &      // serves metadata connections (JDBC data source)
hive --service hiveserver2 2>&1 &    // serves client connections; the commonly used one
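To verify that both services came up, their default listening ports can be checked (a sketch assuming the defaults: 9083 for metastore, 10000 for hiveserver2):

lsof -i :9083 -i :10000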
1) Direct connection: connect straight to the metastore database in MySQL.
2) Connection via services: Hive has two services, metastore and hiveserver2;
Hive reaches the metadata stored in MySQL through the metastore service.
Connecting to Hive from DBeaver:
edit the driver and add the lib folder under the Hive directory to its libraries; the connection will then succeed.
Start metastore and hiveserver2 before connecting.
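Since this hive-site.xml does not set a thrift port, the JDBC URL uses the default (an assumed example; default is Hive's built-in database):

jdbc:hive2://localhost:10000/default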
Reference: https://zhuanlan.zhihu.com/p/565711345
At this point, the Hadoop and Hive environment setup on Mac is complete.
If it cannot be accessed, see Reference 1.
Reference 1: https://blog.csdn.net/kevin_Luan/article/details/24177717
Troubleshooting
Error: org.apache.hadoop.hive.metastore.HiveMetaException: Failed to load driver
Driver download: see the mysql-connector-java.jar download tutorial.
After importing the mysql-connector-java-8.0.21.jar package, reinitialize the Hive database.
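A sketch of placing the driver jar (the libexec/lib path is an assumption based on the Homebrew layout used above; adjust to the installed version):

cp mysql-connector-java-8.0.21.jar /usr/local/Cellar/hive/3.1.3/libexec/lib/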
Reference: https://www.cnblogs.com/lanboy/p/15899679.html

Hadoop practice project: https://github.com/QingYang12/hadooptest


Reposted from: https://blog.csdn.net/qq_15123471/article/details/140667929
Copyright belongs to the original author 清扬_br. In case of infringement, please contact us for removal.
