## Introduction
Hive is a data warehouse tool built on top of Hadoop. It maps structured data files to database tables and provides an SQL-like query language (HiveQL).
Under the hood, Hive translates SQL into MapReduce jobs for execution, with HDFS providing the underlying storage. Put simply, Hive can be understood as a tool that turns SQL into MapReduce jobs; going a step further, you can think of Hive as a MapReduce client.
## Official Site
https://hive.apache.org/
## Chinese Reference
https://www.docs4dev.com/docs/zh/apache-hive/3.1.1/reference/LanguageManual_DML.html
## Hive Installation Modes
- Embedded mode: metadata is stored in an embedded Derby database, so no separate Metastore service is needed; both the database and the Metastore service are embedded in the main Hive server process. This is the default. It is simple to configure, but only one client can connect at a time, which makes it suitable for experiments but not for production.
- Local mode: metadata is stored in an external database (currently MySQL and PostgreSQL are supported). Local mode does not require a standalone metastore service either; the metastore runs in the same process as Hive, so starting a Hive service implicitly starts a metastore inside it.
- Remote mode: the metastore service is started separately, and each client is configured to connect to it in its configuration file. The metastore service and Hive run in different processes. Remote mode is the recommended configuration for the Hive metastore in production; it also lets other software that depends on Hive access it through the metastore.
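For remote mode, the two key pieces are a separately started metastore service and the `hive.metastore.uris` property on each client. A minimal sketch of the wiring follows; the host is an assumption taken from the MySQL address used later in this article, and 9083 is the metastore's default port:

```shell
# Hypothetical metastore address; adjust to your environment.
metastore_uri="thrift://192.168.202.131:9083"

# On the metastore host you would start the service with:
#   hive --service metastore &

# Each client then needs this property in its hive-site.xml:
cat <<EOF
<property>
  <name>hive.metastore.uris</name>
  <value>${metastore_uri}</value>
</property>
EOF
```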
## Installation
## Extract the tarball
cd /opt/software
tar -zxvf apache-hive-3.1.1-bin.tar.gz -C /opt/module/
## Edit the profile
vi /etc/profile
## Append the following lines
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin
## Reload the profile
source /etc/profile
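After `source /etc/profile`, a quick sanity check confirms the variables took effect (the paths are the same ones exported above):

```shell
# Re-create the exports from /etc/profile (same paths as above):
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export PATH=$PATH:$HIVE_HOME/bin

# Verify that $HIVE_HOME/bin is now on PATH:
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) hive_on_path=yes ;;
  *)                    hive_on_path=no  ;;
esac
echo "hive on PATH: $hive_on_path"
```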
## Set Hive's environment variables in hive-config.sh
cd /opt/module/apache-hive-3.1.1-bin/bin/ && vi hive-config.sh
export JAVA_HOME=/opt/module/jdk1.8.0_11
export HIVE_HOME=/opt/module/apache-hive-3.1.1-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.1-bin/conf
## Copy Hive's configuration file template
cd /opt/module/apache-hive-3.1.1-bin/conf/
cp hive-default.xml.template hive-site.xml
- Edit the Hive configuration file, locating and modifying the following properties in place; the key changes are the MySQL connection settings:
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.202.131:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
<description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>
Enforce metastore schema version consistency.
True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/module/apache-hive-3.1.1-bin/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>system:java.io.tmpdir</name>
<value>/opt/module/apache-hive-3.1.1-bin/iotmp</value>
<description/>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/module/apache-hive-3.1.1-bin/tmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/opt/module/apache-hive-3.1.1-bin/tmp/${system:user.name}/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
<name>hive.metastore.db.type</name>
<value>mysql</value>
<description>
Expects one of [derby, oracle, mysql, mssql, postgres].
Type of database used by the metastore. Information schema & JDBCStorageHandler depend on it.
</description>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
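One easy mistake in `hive-site.xml` is leaving a raw `&` in the JDBC URL: XML requires it to be written as `&amp;`, and an unescaped one makes the whole file unparseable. A small sketch of a well-formedness check, assuming `python3` is available on the box:

```shell
# Validate an XML file; prints "well-formed" or fails with a parse error.
check_xml() {
  python3 - "$1" <<'PY'
import sys
import xml.etree.ElementTree as ET

ET.parse(sys.argv[1])
print("well-formed")
PY
}

# Demonstrate on a minimal fragment with a properly escaped '&':
tmp=$(mktemp)
printf '%s\n' '<property><name>u</name><value>a=1&amp;b=2</value></property>' > "$tmp"
result=$(check_xml "$tmp")
echo "$result"
rm -f "$tmp"
```

Running `check_xml hive-site.xml` after editing catches escaping errors before Hive ever starts.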
Upload the MySQL driver JAR into /opt/module/apache-hive-3.1.1-bin/lib/. The driver is distributed as mysql-connector-java-8.0.15.zip; unzip it and take the JAR from inside.
- Make sure a database named hive exists in MySQL
- Initialize the metastore schema
schematool -dbType mysql -initSchema
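If `schematool` succeeds, the `hive` database in MySQL is populated with metastore tables such as `DBS` and `TBLS`. The check below is shown as a command string (it needs a live MySQL with the credentials configured above, so it is not executed here):

```shell
# Hypothetical credentials/host, matching the hive-site.xml above.
verify_cmd='mysql -uroot -proot123 -h 192.168.202.131 -e "SHOW TABLES IN hive;"'
echo "Run manually to verify: $verify_cmd"
```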
- Make sure Hadoop is running
- Start Hive
# Start the Hive CLI
hive
## Verify it started successfully
show databases;
## Common Hive Commands
## Start hive
[linux01 hive]$ bin/hive
## Show databases
hive> show databases;
## Use the default database
hive> use default;
## Show the tables in the default database
hive> show tables;
## Create a student table, declaring '\t' as the field delimiter
hive> create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
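As a follow-up to the `create table` above, here is a small sketch of loading tab-separated data into `student`. The file path and rows are made up for illustration; the `load data` and `select` statements are shown as comments because they run at the hive prompt, not in the shell:

```shell
# Write two tab-separated rows matching the (id int, name string) schema:
printf '1\tzhangsan\n2\tlisi\n' > /tmp/student.txt

# From the hive> prompt you would then run:
#   load data local inpath '/tmp/student.txt' into table student;
#   select * from student;

wc -l < /tmp/student.txt
```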
Copyright belongs to the original author 叱咤少帅 (少帅). If there is any infringement, please contact us for removal.