
Flink CDC: Full and Incremental Capture of SQL Server Data

This article walks through how Flink-CDC captures a SQL Server source in both full (snapshot) and incremental modes. If you're preparing to adapt a SQL Server source, I hope it serves as a useful reference.

1. Installing SQL Server and Enabling the Transaction Log

If you don't have a SQL Server environment but want to learn this material, your only option is to roll up your sleeves and install your own SQL Server with docker. If you already have an environment, just check whether the SQL Server Agent service (sqlagent.enabled) and the CDC feature are enabled.
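On an existing instance, a quick way to check both prerequisites (a sketch; run it from any SQL client) is:

-- Is CDC already enabled on the target database?
SELECT name, is_cdc_enabled FROM sys.databases;
-- Is the SQL Server Agent service running? (the CDC capture/cleanup jobs depend on it)
SELECT servicename, status_desc FROM sys.dm_server_services;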

1.1 Pull the image with docker

The Flink-CDC docs on Github say the supported SQL Server versions are 2012, 2014, 2016, 2017, and 2019, but I wanted the newest image anyway (as it turns out, 2022-latest and latest are identical, since their image IDs match, and the later tests all ran without problems), so I pulled the image with this command:

docker pull mcr.microsoft.com/mssql/server:latest
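To verify the tag claim yourself (a sketch), pull both tags and compare the IMAGE ID column:

docker pull mcr.microsoft.com/mssql/server:2022-latest
docker images mcr.microsoft.com/mssql/server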
1.2 Run SQL Server and enable the Agent

This is the standard startup mode; the only thing to take care of is the password (the policy is fairly strict, so it's easiest to grab an online random-password generator).

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=${your_password}' \
  -p 1433:1433 --name sqlserver \
  -d mcr.microsoft.com/mssql/server:latest
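Once the container is up, a quick connectivity test (a sketch, using the sqlcmd tool bundled in the image) looks like:

docker exec sqlserver /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "${your_password}" -Q "SELECT @@VERSION;"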

Now enable the Agent (sqlagent.enabled). After changing the setting, SQL Server must be restarted; since ours runs in docker, a simple docker restart sqlserver does it.

[root@hdp-01 ~]# docker exec -it --user root sqlserver bash
root@0274812d0c10:/# /opt/mssql/bin/mssql-conf set sqlagent.enabled true
SQL Server needs to be restarted in order to apply this setting. Please run
'systemctl restart mssql-server.service'.
root@0274812d0c10:/# exit
exit
[root@hdp-01 ~]# docker restart sqlserver
sqlserver
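After the restart, you can confirm the setting stuck by reading the config file that mssql-conf writes inside the container (a sketch; expect enabled = true under the [sqlagent] section):

docker exec sqlserver cat /var/opt/mssql/mssql.conf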
1.3 Enable the CDC feature

Run the following commands in order; if you see is_cdc_enabled = 1, CDC has been enabled on the current database.

root@0274812d0c10:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "${your_password}"
1> create database test;
2> go
1> use test;
2> go
Changed database context to 'test'.
1> EXEC sys.sp_cdc_enable_db;
2> go
1> SELECT is_cdc_enabled FROM sys.databases WHERE name = 'test';
2> go
is_cdc_enabled
--------------
             1

(1 rows affected)
1> CREATE TABLE t_info (id int, order_date date, purchaser int, quantity int, product_id int, PRIMARY KEY ([id]))
2> go
1>
2>
3> EXEC sys.sp_cdc_enable_table
4> @source_schema = 'dbo',
5> @source_name   = 't_info',
6> @role_name     = 'cdc_role';
7> go
Update mask evaluation will be disabled in net_changes_function because the CLR configuration option is disabled.
Job 'cdc.zeus_capture' started successfully.
Job 'cdc.zeus_cleanup' started successfully.
1> select * from t_info;
2> go
id          order_date       purchaser   quantity    product_id 
----------- ---------------- ----------- ----------- -----------

(0 rows affected)
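To see exactly which tables and columns are now being captured, SQL Server ships a helper procedure (a sketch, run in the same sqlcmd session):

1> EXEC sys.sp_cdc_help_change_data_capture;
2> go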
1.4 Verify that CDC is enabled

Connect to SQL Server with a client and check whether tables with TABLE_SCHEMA = cdc appear in INFORMATION_SCHEMA.TABLES of the test database. If they do, SQL Server is installed and CDC is enabled:
1> use test;
2> go
Changed database context to 'test'.
1> select * from INFORMATION_SCHEMA.TABLES;
2> go
TABLE_CATALOG    TABLE_SCHEMA    TABLE_NAME           TABLE_TYPE
test                dbo          user_info             BASE TABLE
test                dbo          systranschemas       BASE TABLE
test                cdc          change_tables         BASE TABLE
test                cdc          ddl_history           BASE TABLE
test                cdc          lsn_time_mapping     BASE TABLE
test                cdc          captured_columns     BASE TABLE
test                cdc          index_columns         BASE TABLE
test                dbo          orders               BASE TABLE
test                cdc          dbo_orders_CT         BASE TABLE
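As a final sanity check you can read the accumulated changes straight out of the change table through the generated table-valued function (a sketch; the function name derives from the capture instance dbo_t_info created above):

1> DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_t_info');
2> DECLARE @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();
3> SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_t_info(@from_lsn, @to_lsn, 'all');
4> go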

2. Implementation

2.1 The Flink-CDC SQL Server capture program

Add the dependency:

<dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-sqlserver-cdc</artifactId>
    <version>3.0.0</version>
</dependency>

Write the main function:

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // set the global parallelism
    env.setParallelism(1);
    // use ProcessingTime semantics (no automatic watermarks)
    env.getConfig().setAutoWatermarkInterval(0);
    // start a checkpoint every 60 s
    env.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE);
    // minimum pause between checkpoints
    env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
    // checkpoint timeout
    env.getCheckpointConfig().setCheckpointTimeout(60000);
    // allow only one checkpoint at a time
    // env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
    // retain checkpoint data after the job is cancelled
    // env.getCheckpointConfig().setExternalizedCheckpointCleanup(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

    SourceFunction<String> sqlServerSource = SqlServerSource.<String>builder()
            .hostname("localhost")
            .port(1433)
            .username("SA")
            .password("")
            .database("test")
            .tableList("dbo.t_info")
            .startupOptions(StartupOptions.initial())
            .debeziumProperties(getDebeziumProperties())
            .deserializer(new CustomerDeserializationSchemaSqlserver())
            .build();

    DataStreamSource<String> dataStreamSource = env.addSource(sqlServerSource, "_transaction_log_source");
    dataStreamSource.print().setParallelism(1);
    env.execute("sqlserver-cdc-test");
}

public static Properties getDebeziumProperties() {
    Properties properties = new Properties();
    properties.put("converters", "sqlserverDebeziumConverter");
    properties.put("sqlserverDebeziumConverter.type", "SqlserverDebeziumConverter");
    properties.put("sqlserverDebeziumConverter.database.type", "sqlserver");
    // custom formats, optional
    properties.put("sqlserverDebeziumConverter.format.datetime", "yyyy-MM-dd HH:mm:ss");
    properties.put("sqlserverDebeziumConverter.format.date", "yyyy-MM-dd");
    properties.put("sqlserverDebeziumConverter.format.time", "HH:mm:ss");
    return properties;
}
2.2 Custom SQL Server deserialization format

Flink-CDC is built on debezium, and the change events (CRUD) it captures from SQL Server arrive in the following format:

# snapshot read (initialization)
Struct{after=Struct{id=1,order_date=2024-01-30,purchaser=1,quantity=100,product_id=1},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706574924473,snapshot=true,db=zeus,schema=dbo,table=orders,commit_lsn=0000002b:00002280:0003},op=r,ts_ms=1706603724432}

# insert
Struct{after=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603786187,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002480:0002,commit_lsn=0000002b:00002480:0003,event_serial_no=1},op=c,ts_ms=1706603788461}

# update
Struct{before=Struct{id=12,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},after=Struct{id=12,order_date=2024-01-11,purchaser=8,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603845603,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:00002500:0002,commit_lsn=0000002b:00002500:0003,event_serial_no=2},op=u,ts_ms=1706603850134}

# delete
Struct{before=Struct{id=11,order_date=2024-01-11,purchaser=6,quantity=233,product_id=63},source=Struct{version=1.9.7.Final,connector=sqlserver,name=sqlserver_transaction_log_source,ts_ms=1706603973023,db=zeus,schema=dbo,table=orders,change_lsn=0000002b:000025e8:0002,commit_lsn=0000002b:000025e8:0005,event_serial_no=1},op=d,ts_ms=1706603973859}

You can therefore define your own deserialization format and emit every event in one consistent shape. Below is the format I use, for reference:

import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.JSONObject;
import com.alibaba.fastjson2.JSONWriter;
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.HashMap;
import java.util.Map;

public class CustomerDeserializationSchemaSqlserver implements DebeziumDeserializationSchema<String> {

    private static final long serialVersionUID = -1L;

    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) {
        Map<String, Object> resultMap = new HashMap<>();

        // parse database/table from the record topic
        String topic = sourceRecord.topic();
        String[] split = topic.split("[.]");
        String database = split[1];
        String table = split[2];
        resultMap.put("db", database);
        resultMap.put("tableName", table);

        // operation type
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);
        // the payload itself
        Struct struct = (Struct) sourceRecord.value();
        Struct after = struct.getStruct("after");
        Struct before = struct.getStruct("before");
        String op = operation.name();
        resultMap.put("op", op);

        // insert, update, or snapshot read
        if (op.equals(Envelope.Operation.CREATE.name()) || op.equals(Envelope.Operation.READ.name())
                || op.equals(Envelope.Operation.UPDATE.name())) {
            JSONObject afterJson = new JSONObject();
            if (after != null) {
                Schema schema = after.schema();
                for (Field field : schema.fields()) {
                    afterJson.put(field.name(), after.get(field.name()));
                }
                resultMap.put("after", afterJson);
            }
        }

        if (op.equals(Envelope.Operation.DELETE.name())) {
            JSONObject beforeJson = new JSONObject();
            if (before != null) {
                Schema schema = before.schema();
                for (Field field : schema.fields()) {
                    beforeJson.put(field.name(), before.get(field.name()));
                }
                resultMap.put("before", beforeJson);
            }
        }

        collector.collect(JSON.toJSONString(resultMap, JSONWriter.Feature.FieldBased, JSONWriter.Feature.LargeObject));
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
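With this deserializer, the insert event shown above comes out as a single flat JSON string, roughly like this (illustrative, with values taken from the insert example; the exact db value depends on how Debezium composes the record topic):

{"db":"dbo","tableName":"orders","op":"CREATE","after":{"id":12,"order_date":"2024-01-11","purchaser":6,"quantity":233,"product_id":63}}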
2.3 Custom date-format converter

debezium renders dates as 5-digit numbers (days since the epoch) and datetimes as 13-digit numbers (epoch milliseconds), so we need to convert SQL Server's date types back into standard date or time strings. SQL Server's date types are mainly the following:

Column type       Snapshot type (JDBC type)               CDC type (JDBC type)
DATE              java.sql.Date (91)                      java.sql.Date (91)
TIME              java.sql.Timestamp (92)                 java.sql.Time (92)
DATETIME          java.sql.Timestamp (93)                 java.sql.Timestamp (93)
DATETIME2         java.sql.Timestamp (93)                 java.sql.Timestamp (93)
DATETIMEOFFSET    microsoft.sql.DateTimeOffset (-155)     microsoft.sql.DateTimeOffset (-155)
SMALLDATETIME     java.sql.Timestamp (93)                 java.sql.Timestamp (93)

import io.debezium.spi.converter.CustomConverter;
import io.debezium.spi.converter.RelationalColumn;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.connect.data.SchemaBuilder;

import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Properties;

@Slf4j
public class SqlserverDebeziumConverter implements CustomConverter<SchemaBuilder, RelationalColumn> {

    private static final String DATE_FORMAT = "yyyy-MM-dd";
    private static final String TIME_FORMAT = "HH:mm:ss";
    private static final String DATETIME_FORMAT = "yyyy-MM-dd HH:mm:ss";

    private DateTimeFormatter dateFormatter;
    private DateTimeFormatter timeFormatter;
    private DateTimeFormatter datetimeFormatter;
    private SchemaBuilder schemaBuilder;
    private String databaseType;
    private String schemaNamePrefix;

    @Override
    public void configure(Properties properties) {
        // required parameter: database.type; only sqlserver is supported
        this.databaseType = properties.getProperty("database.type");
        // throw if it is missing or set to anything other than sqlserver
        if (this.databaseType == null || !this.databaseType.equals("sqlserver")) {
            throw new IllegalArgumentException("database.type must be set to 'sqlserver'");
        }
        // optional parameters: format.date, format.time, format.datetime
        String dateFormat = properties.getProperty("format.date", DATE_FORMAT);
        String timeFormat = properties.getProperty("format.time", TIME_FORMAT);
        String datetimeFormat = properties.getProperty("format.datetime", DATETIME_FORMAT);
        // default schema name prefix: this class's name plus the database type
        String className = this.getClass().getName();
        // honour schema.name.prefix if it is set
        this.schemaNamePrefix = properties.getProperty("schema.name.prefix", className + "." + this.databaseType);
        // initialize the formatters
        dateFormatter = DateTimeFormatter.ofPattern(dateFormat);
        timeFormatter = DateTimeFormatter.ofPattern(timeFormat);
        datetimeFormatter = DateTimeFormatter.ofPattern(datetimeFormat);
    }

    // converter for sqlserver column types
    public void registerSqlserverConverter(String columnType, ConverterRegistration<SchemaBuilder> converterRegistration) {
        String schemaName = this.schemaNamePrefix + "." + columnType.toLowerCase();
        schemaBuilder = SchemaBuilder.string().name(schemaName);
        switch (columnType) {
            case "DATE":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Date) {
                        return dateFormatter.format(((java.sql.Date) value).toLocalDate());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "TIME":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Time) {
                        return timeFormatter.format(((java.sql.Time) value).toLocalTime());
                    } else if (value instanceof java.sql.Timestamp) {
                        return timeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime().toLocalTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            case "DATETIME":
            case "DATETIME2":
            case "SMALLDATETIME":
            case "DATETIMEOFFSET":
                converterRegistration.register(schemaBuilder, value -> {
                    if (value == null) {
                        return null;
                    } else if (value instanceof java.sql.Timestamp) {
                        return datetimeFormatter.format(((java.sql.Timestamp) value).toLocalDateTime());
                    } else if (value instanceof microsoft.sql.DateTimeOffset) {
                        microsoft.sql.DateTimeOffset dateTimeOffset = (microsoft.sql.DateTimeOffset) value;
                        return datetimeFormatter.format(
                                dateTimeOffset.getOffsetDateTime().withOffsetSameInstant(ZoneOffset.UTC).toLocalDateTime());
                    } else {
                        return this.failConvert(value, schemaName);
                    }
                });
                break;
            default:
                schemaBuilder = null;
                break;
        }
    }

    @Override
    public void converterFor(RelationalColumn relationalColumn, ConverterRegistration<SchemaBuilder> converterRegistration) {
        // the column's type name, upper-cased
        String columnType = relationalColumn.typeName().toUpperCase();
        // dispatch to the matching converter for the database type
        if (this.databaseType.equals("sqlserver")) {
            this.registerSqlserverConverter(columnType, converterRegistration);
        } else {
            log.warn("Unsupported database type: {}", this.databaseType);
            schemaBuilder = null;
        }
    }

    private String getClassName(Object value) {
        if (value == null) {
            return null;
        }
        return value.getClass().getName();
    }

    // log the failure and fall back to the value's string form
    private String failConvert(Object value, String type) {
        String valueClass = this.getClassName(value);
        String valueString = valueClass == null ? null : value.toString();
        log.warn("Cannot convert value of class {} for schema {}; emitting {}", valueClass, type, valueString);
        return valueString;
    }
}
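One wiring detail worth spelling out: Debezium finds this class through the properties set in getDebeziumProperties() earlier. The converters value names a converter instance, <name>.type must resolve to the converter class on the classpath (so use the fully qualified name if the class lives in a package), and the remaining <name>.* entries are handed to configure() with the prefix stripped, which is why the code above reads database.type and format.* without any prefix. A sketch (com.example is a placeholder for your own package):

Properties properties = new Properties();
properties.put("converters", "sqlserverDebeziumConverter");
// Debezium instantiates the converter from this class name:
properties.put("sqlserverDebeziumConverter.type", "com.example.SqlserverDebeziumConverter");
// arrives in configure() as "format.date" (prefix stripped):
properties.put("sqlserverDebeziumConverter.format.date", "yyyy-MM-dd");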

3. Summary

Flink-CDC already wraps this kind of incremental capture from traditional databases very well, and the official docs provide detailed walkthroughs. Still, if you want to learn a skill in depth, I think it's worth working through it end to end yourself: it accelerates your own growth, and when problems come up you can attack them from more angles. I hope this article helps a little.


Reprinted from: https://blog.csdn.net/weixin_43914798/article/details/135999194
Copyright belongs to the original author, 码猿小站. Contact us for removal in case of infringement.
