24. Flink Table API & SQL Catalogs (view operations via the Java API) - 3

Flink series articles

1. Flink deployment, concept introduction, source/transformation/sink usage examples, the four cornerstones with examples, and links to the whole series

13. Flink Table API & SQL: basic concepts, common API introduction and getting-started examples
14. Flink Table API & SQL data types: built-in data types and their properties
15. Flink Table API & SQL streaming concepts: a detailed introduction to dynamic tables, time attribute configuration (how update results are handled), temporal tables, joins on streams, determinism on streams, and query configuration
16. Flink Table API & SQL connecting to external systems: connectors and formats for reading and writing external systems, with a FileSystem example (1)
16. Flink Table API & SQL connecting to external systems: connectors and formats for reading and writing external systems, with an Elasticsearch example (2)
16. Flink Table API & SQL connecting to external systems: connectors and formats for reading and writing external systems, with an Apache Kafka example (3)
16. Flink Table API & SQL connecting to external systems: connectors and formats for reading and writing external systems, with a JDBC example (4)

16. Flink Table API & SQL connecting to external systems: connectors and formats for reading and writing external systems, with an Apache Hive example (6)

20. Flink SQL Client: try Flink SQL without writing any code and submit SQL jobs directly to the cluster

22. Flink Table API & SQL: DDL for creating tables
24. Flink Table API & SQL Catalogs (introduction, types, DDL via the Java API and SQL, catalog operations via the Java API and SQL) - 1
24. Flink Table API & SQL Catalogs (database and table operations via the Java API) - 2
24. Flink Table API & SQL Catalogs (view operations via the Java API) - 3

26. Flink SQL: overview and getting-started examples
27. Flink SQL SELECT (select, where, distinct, order by, limit, set operations and deduplication): introduction and detailed examples (1)
27. Flink SQL SELECT (SQL Hints and Joins): introduction and detailed examples (2)
27. Flink SQL SELECT (window functions): introduction and detailed examples (3)
27. Flink SQL SELECT (window aggregation): introduction and detailed examples (4)
27. Flink SQL SELECT (Group Aggregation, Over Aggregation and Window Join): introduction and detailed examples (5)
27. Flink SQL SELECT (Top-N, Window Top-N and Window Deduplication): introduction and detailed examples (6)
27. Flink SQL SELECT (Pattern Recognition): introduction and detailed examples (7)

29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB statements, UPDATE, DELETE (1)
29. Flink SQL: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB statements, UPDATE, DELETE (2)
30. Flink SQL Client (configuration file usage introduced through Kafka and filesystem examples: tables, views, etc.)
32. Flink Table API & SQL: implementing user-defined Sources & Sinks with detailed examples
41. Flink Hive dialect: introduction and detailed examples
42. Flink Table API & SQL: Hive Catalog
43. Flink Hive reads and writes with detailed verification examples
44. Flink modules: introduction and usage examples, plus detailed examples of using Hive built-in and user-defined functions in Flink SQL (some claims found online appear to be wrong)




This article briefly introduces operating views through the Java API and provides three examples: a SQL implementation and two Java API implementations.
It assumes working Flink, Hive and Hadoop clusters.
The Java API examples were built and run against Flink 1.13.5; unless otherwise noted, the SQL examples use Flink 1.17.

5. Catalog API

3. View operations

1) Official example

// create view
catalog.createTable(new ObjectPath("mydb", "myview"), new CatalogViewImpl(...), false);

// drop view
catalog.dropTable(new ObjectPath("mydb", "myview"), false);

// alter view
catalog.alterTable(new ObjectPath("mydb", "mytable"), new CatalogViewImpl(...), false);

// rename view
catalog.renameTable(new ObjectPath("mydb", "myview"), "my_new_view", false);

// get view
catalog.getTable(new ObjectPath("mydb", "myview"));

// check if a view exists or not
catalog.tableExists(new ObjectPath("mydb", "mytable"));

// list views in a database
catalog.listViews("mydb");
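
As a minimal, self-contained sketch (assuming Flink 1.13.x; the catalog, database, view names and the single-column schema here are illustrative, not taken from the examples below), the calls above can be exercised against the built-in GenericInMemoryCatalog instead of Hive. It builds the view definition with CatalogView.of plus a ResolvedSchema, the non-deprecated route also used later in this article.

import java.util.Collections;
import java.util.HashMap;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.ResolvedCatalogView;
import org.apache.flink.table.catalog.ResolvedSchema;

public class CatalogViewApiSketch {
    public static void main(String[] args) throws Exception {
        GenericInMemoryCatalog catalog = new GenericInMemoryCatalog("my_catalog");
        catalog.createDatabase("mydb", new CatalogDatabaseImpl(new HashMap<>(), "demo db"), true);

        // a one-column view definition: schema plus original/expanded query
        ResolvedSchema resolvedSchema = new ResolvedSchema(
                Collections.singletonList(Column.physical("id", DataTypes.INT())),
                Collections.emptyList(), null);
        CatalogView view = CatalogView.of(
                Schema.newBuilder().fromResolvedSchema(resolvedSchema).build(),
                "demo view",
                "select id from mytable",
                "select mytable.id from mydb.mytable",
                Collections.emptyMap());

        ObjectPath viewPath = new ObjectPath("mydb", "myview");
        catalog.createTable(viewPath, new ResolvedCatalogView(view, resolvedSchema), false); // create view
        System.out.println(catalog.listViews("mydb"));      // list views in a database
        System.out.println(catalog.tableExists(viewPath));   // check if the view exists
        catalog.renameTable(viewPath, "my_new_view", false); // rename view
        catalog.dropTable(new ObjectPath("mydb", "my_new_view"), false); // drop view
    }
}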

2) Example: creating a Hive view with SQL

1. Maven dependencies
<properties>
    <encoding>UTF-8</encoding>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.version>2.12</scala.version>
    <flink.version>1.13.6</flink.version>
</properties>
<dependencies>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-clients_2.11</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-scala_2.11</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-java</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-scala_2.11</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-java_2.11</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-scala-bridge_2.11</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-java-bridge_2.11</artifactId><version>${flink.version}</version></dependency>
    <!-- Blink planner, the default since 1.11+ -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner-blink_2.11</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-common</artifactId><version>${flink.version}</version></dependency>
    <!-- Flink connectors -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-kafka_2.12</artifactId><version>${flink.version}</version><!-- <scope>provided</scope> --></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-sql-connector-kafka_2.12</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-jdbc_2.12</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-csv</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-json</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-hive_2.12</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.hive</groupId><artifactId>hive-metastore</artifactId><version>2.1.0</version></dependency>
    <dependency><groupId>org.apache.hive</groupId><artifactId>hive-exec</artifactId><version>3.1.2</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-shaded-hadoop-2-uber</artifactId><version>2.7.5-10.0</version><!-- <scope>provided</scope> --></dependency>
    <dependency><groupId>mysql</groupId><artifactId>mysql-connector-java</artifactId><version>5.1.38</version><scope>provided</scope><!-- <version>8.0.20</version> --></dependency>
    <!-- logging -->
    <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-log4j12</artifactId><version>1.7.7</version><scope>runtime</scope></dependency>
    <dependency><groupId>log4j</groupId><artifactId>log4j</artifactId><version>1.2.17</version><scope>runtime</scope></dependency>
    <dependency><groupId>com.alibaba</groupId><artifactId>fastjson</artifactId><version>1.2.44</version></dependency>
    <dependency><groupId>org.projectlombok</groupId><artifactId>lombok</artifactId><version>1.18.2</version><!-- <scope>provided</scope> --></dependency>
</dependencies>
<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <plugins>
        <!-- compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <!-- <encoding>${project.build.sourceEncoding}</encoding> -->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <!-- shade plugin (bundles all dependencies) -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals><goal>shade</goal></goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <!-- zip -d learn_spark.jar META-INF/*.RSA META-INF/*.DSA META-INF/*.SF -->
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <!-- set the jar entry class (optional) -->
                                <mainClass>org.table_sql.TestHiveViewBySQLDemo</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
2. Code
package org.table_sql;

import java.util.HashMap;
import java.util.List;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;

/**
 * @author alanchan
 *
 */
public class TestHiveViewBySQLDemo {
    public static final String tableName = "viewtest";
    public static final String hive_create_table_sql = "CREATE  TABLE  " + tableName + " (\n"
            + "  id INT,\n"
            + "  name STRING,\n"
            + "  age INT"
            + ") "
            + "TBLPROPERTIES (\n"
            + "  'sink.partition-commit.delay'='5 s',\n"
            + "  'sink.partition-commit.trigger'='partition-time',\n"
            + "  'sink.partition-commit.policy.kind'='metastore,success-file'"
            + ")";

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        String moduleName = "myhive";
        String hiveVersion = "3.1.2";
        tenv.loadModule(moduleName, new HiveModule(hiveVersion));

        String name = "alan_hive";
        String defaultDatabase = "default";
        String databaseName = "viewtest_db";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        HiveCatalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir);
        tenv.registerCatalog(name, hiveCatalog);
        tenv.useCatalog(name);
        tenv.listDatabases();

        hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap(), hiveConfDir) {}, true);
//        tenv.executeSql("create database " + databaseName);
        tenv.useDatabase(databaseName);

        // create the first view, viewName_byTable
        String selectSQL = "select * from " + tableName;
        String viewName_byTable = "test_view_table_V";
        String createViewSQL = "create view " + viewName_byTable + " as " + selectSQL;

        tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tenv.executeSql(hive_create_table_sql);
//        tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
        String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
        tenv.executeSql(insertSQL);

        tenv.executeSql(createViewSQL);
        tenv.listViews();

        CatalogView catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byTable));

        List<Row> results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byTable).collect());
        for (Row row : results) {
            System.out.println("test_view_table_V: " + row.toString());
        }

        // create the second view
        String viewName_byView = "test_view_view_V";
        tenv.executeSql("create view " + viewName_byView + " (v2_id,v2_name,v2_age) comment 'test_view_view_V comment' as select * from " + viewName_byTable);
        catalogView = (CatalogView) hiveCatalog.getTable(new ObjectPath(databaseName, viewName_byView));

        results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName_byView).collect());
        System.out.println("test_view_view_V comment : " + catalogView.getComment());
        for (Row row : results) {
            System.out.println("test_view_view_V : " + row.toString());
        }

        tenv.executeSql("drop database " + databaseName + " cascade");
    }
}
3. Run results

This assumes the Flink cluster is up and usable. Package the project into a jar with Maven.

[alanchan@server2 bin]$ flink run  /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.2-SNAPSHOT.jar

Hive Session ID = ed6d5c9b-e00f-4881-840d-24c72aba6db7
Hive Session ID = 14445dc8-1f08-4f0f-bb45-aba8c6f52174
Job has been submitted with JobID bff7b59367bd5de6e778b442c4cc4404
Hive Session ID = 4c16f4fc-4c10-4353-b322-e6633e3ebe3d
Hive Session ID = 57949f09-bdcb-497f-a85c-ed9766fc4ce3
2023-10-13 02:42:24,891 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process :0
Job has been submitted with JobID 80e48bb76e3d580412fdcdc434a8a979
test_view_table_V: +I[1, alan, 18]
Hive Session ID = a73d5b93-2129-4159-ad5e-0814df77e987
Hive Session ID = e4ae1a79-4d5e-4835-81de-ebc2041eedf9
2023-10-13 02:42:33,648 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process :1
Job has been submitted with JobID c228d9ce3bdce91dc68bff75d14db1e5
test_view_view_V comment : test_view_view_V comment
test_view_view_V : +I[1, alan, 18]
Hive Session ID = e4a38393-d760-4bd3-8d8b-864cbe0daba7

3) Example: creating a Hive view with the Java API

Creating a view through the API is comparatively cumbersome, and some of the methods involved have been deprecated across version upgrades.
Creating a view through TableSchema and CatalogViewImpl is deprecated; the currently recommended way is to create it through CatalogView and ResolvedSchema.
Also note the difference between the following two parameters:
String originalQuery: the original SQL as written.
String expandedQuery: the query with every table qualified by its database name, and possibly the catalog name as well.

For example, with default as the current database and the query select * from test1:
originalQuery = "select name,value from test1" is sufficient, while
expandedQuery = "select test1.name, test1.value from default.test1".

Altering, renaming and dropping views are comparatively straightforward; a short sketch is shown below rather than a full walkthrough.
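
As a rough illustration only (the helper method and the new view definition below are hypothetical and not part of the original code; the database and view names reuse those from the API example that follows), view maintenance through the same Catalog API could look like this:

import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.ResolvedCatalogView;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class ViewMaintenanceSketch {
    // Hypothetical helper: alter, rename and finally drop a view through the Catalog API.
    // hiveCatalog and newDefinition are assumed to be built the same way as in createView2 below.
    static void alterRenameDrop(HiveCatalog hiveCatalog, ResolvedCatalogView newDefinition) throws Exception {
        ObjectPath viewPath = new ObjectPath("viewtest_db", "test_view_table_V");

        // alter view: replace the stored view definition (schema, query, comment) with a new one
        hiveCatalog.alterTable(viewPath, newDefinition, false);

        // rename view
        hiveCatalog.renameTable(viewPath, "test_view_table_V_renamed", false);

        // drop view
        hiveCatalog.dropTable(new ObjectPath("viewtest_db", "test_view_table_V_renamed"), false);
    }
}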

1. Maven dependencies

The dependencies are the same as in the previous example; only the mainClass in the shade plugin changes to this example's class, so the pom is not repeated here.

2. Code
package org.table_sql;

import static org.apache.flink.util.Preconditions.checkNotNull;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.SqlDialect;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.CatalogView;
import org.apache.flink.table.catalog.CatalogViewImpl;
import org.apache.flink.table.catalog.Column;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.ResolvedCatalogView;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
import org.apache.flink.table.catalog.exceptions.TableAlreadyExistException;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.table.module.hive.HiveModule;
import org.apache.flink.types.Row;
import org.apache.flink.util.CollectionUtil;

/**
 * @author alanchan
 *
 */
public class TestHiveViewByAPIDemo {
    public static final String tableName = "viewtest";
    public static final String hive_create_table_sql = "CREATE  TABLE  " + tableName + " (\n"
            + "  id INT,\n"
            + "  name STRING,\n"
            + "  age INT"
            + ") "
            + "TBLPROPERTIES (\n"
            + "  'sink.partition-commit.delay'='5 s',\n"
            + "  'sink.partition-commit.trigger'='partition-time',\n"
            + "  'sink.partition-commit.policy.kind'='metastore,success-file'"
            + ")";

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        String moduleName = "myhive";
        String hiveVersion = "3.1.2";
        tenv.loadModule(moduleName, new HiveModule(hiveVersion));

        String catalogName = "alan_hive";
        String defaultDatabase = "default";
        String databaseName = "viewtest_db";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        HiveCatalog hiveCatalog = new HiveCatalog(catalogName, defaultDatabase, hiveConfDir);
        tenv.registerCatalog(catalogName, hiveCatalog);
        tenv.useCatalog(catalogName);
        tenv.listDatabases();

        hiveCatalog.createDatabase(databaseName, new CatalogDatabaseImpl(new HashMap(), hiveConfDir) {}, true);
//        tenv.executeSql("create database " + databaseName);
        tenv.useDatabase(databaseName);

        tenv.getConfig().setSqlDialect(SqlDialect.HIVE);
        tenv.executeSql(hive_create_table_sql);
        String insertSQL = "insert into " + tableName + " values (1,'alan',18)";
        String insertSQL2 = "insert into " + tableName + " values (2,'alan2',19)";
        String insertSQL3 = "insert into " + tableName + " values (3,'alan3',20)";
        tenv.executeSql(insertSQL);
        tenv.executeSql(insertSQL2);
        tenv.executeSql(insertSQL3);

        tenv.getConfig().setSqlDialect(SqlDialect.DEFAULT);
        String viewName1 = "test_view_table_V";
        String viewName2 = "test_view_table_V2";

        ObjectPath path1 = new ObjectPath(databaseName, viewName1);
        // ObjectPath.fromString("viewtest_db.test_view_table_V2")
        ObjectPath path2 = new ObjectPath(databaseName, viewName2);

        String originalQuery = "SELECT id, name, age FROM " + tableName + " WHERE id >=1 ";
//        String originalQuery = String.format("select * from %s", tableName + " WHERE id >=1 ");
        System.out.println("originalQuery:" + originalQuery);
        String expandedQuery = "SELECT  id, name, age FROM " + databaseName + "." + tableName + "  WHERE id >=1 ";
//        String expandedQuery = String.format("select * from %s.%s", catalogName, path1.getFullName());
        System.out.println("expandedQuery:" + expandedQuery);
        String comment = "this is a comment";

        // create the view, approach 1 (TableSchema and CatalogViewImpl), deprecated
        createView1(originalQuery, expandedQuery, comment, hiveCatalog, path1);

        // query the view
        List<Row> results = CollectionUtil.iteratorToList(tenv.executeSql("select * from " + viewName1).collect());
        for (Row row : results) {
            System.out.println("test_view_table_V: " + row.toString());
        }

        // create the view, approach 2 (Schema and ResolvedSchema)
        createView2(originalQuery, expandedQuery, comment, hiveCatalog, path2);

        List<Row> results2 = CollectionUtil.iteratorToList(tenv.executeSql("select * from viewtest_db.test_view_table_V2").collect());
        for (Row row : results2) {
            System.out.println("test_view_table_V2: " + row.toString());
        }

        tenv.executeSql("drop database " + databaseName + " cascade");
    }

    static void createView1(String originalQuery, String expandedQuery, String comment, HiveCatalog hiveCatalog, ObjectPath path) throws Exception {
        TableSchema viewSchema = new TableSchema(new String[] { "id", "name", "age" }, new TypeInformation[] { Types.INT, Types.STRING, Types.INT });
        CatalogBaseTable viewTable = new CatalogViewImpl(
                originalQuery,
                expandedQuery,
                viewSchema, new HashMap(),
                comment);
        hiveCatalog.createTable(path, viewTable, false);
    }

    static void createView2(String originalQuery, String expandedQuery, String comment, HiveCatalog hiveCatalog, ObjectPath path) throws Exception {
        ResolvedSchema resolvedSchema = new ResolvedSchema(
                Arrays.asList(
                        Column.physical("id", DataTypes.INT()),
                        Column.physical("name", DataTypes.STRING()),
                        Column.physical("age", DataTypes.INT())),
                Collections.emptyList(), null);
        CatalogView origin = CatalogView.of(
                Schema.newBuilder().fromResolvedSchema(resolvedSchema).build(),
                comment,
//                String.format("select * from tt"),
//                String.format("select * from %s.%s", TEST_CATALOG_NAME, path1.getFullName()),
                originalQuery,
                expandedQuery, Collections.emptyMap());
        CatalogView view = new ResolvedCatalogView(origin, resolvedSchema);
        // ObjectPath.fromString("viewtest_db.test_view_table_V2")
        hiveCatalog.createTable(path, view, false);
    }
}
3. Run results
[alanchan@server2 bin]$ flink run  /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.3-SNAPSHOT.jar

Hive Session ID = ab4d159a-b2d3-489e-988f-eebdc43d9517
Hive Session ID = 391de19c-5d5a-4a83-a88c-c43cca71fc63
Job has been submitted with JobID a880510032165523f3f2a559c5ab4ec9
Hive Session ID = cb063c31-eaf2-44e3-8fc0-9e8d2a6a3a5d
Job has been submitted with JobID cb05286c404b561306f8eb3969c3456a
Hive Session ID = 8132b36e-c9e2-41a2-8f42-3fe842e0991f
Job has been submitted with JobID 264aef7da1b17598bda159d946827dea
Hive Session ID = 7657be14-8188-4362-84a9-4c84c596021b
2023-10-16 07:21:19,073 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process :3
Job has been submitted with JobID 05c2bb7265b0430cb12e00237f18444b
test_view_table_V: +I[1, alan, 18]
test_view_table_V: +I[2, alan2, 19]
test_view_table_V: +I[3, alan3, 20]
Hive Session ID = 7bb01c0d-03c9-413a-9040-c89676cec3b9
2023-10-16 07:21:27,512 INFO  org.apache.hadoop.mapred.FileInputFormat                     [] - Total input files to process :3
Job has been submitted with JobID 79130d1fe56d88a784980d16e7f1cfb4
test_view_table_V2: +I[1, alan, 18]
test_view_table_V2: +I[2, alan2, 19]
test_view_table_V2: +I[3, alan3, 20]
Hive Session ID = 6d44ea95-f733-4c56-8da4-e2687a4bf945

This article briefly introduced operating views through the Java API, with three examples: a SQL implementation and two Java API implementations.


Reprinted from: https://blog.csdn.net/chenwewi520feng/article/details/133862678
Copyright belongs to the original author, 一瓢一瓢的饮 alanchan. In case of infringement, please contact us for removal.
