

Canal + Kafka Configuration and Deployment Notes

Canal deployment

Canal on GitHub: https://github.com/alibaba/canal

1. Enable binlog in MySQL

Locate the my.cnf file and edit it:

vim /usr/my.cnf

If you don't know where my.cnf lives, find it with:

locate my.cnf

Add the following to my.cnf:

[mysqld]
# enable binlog
log-bin=mysql-bin
# binlog format:
# 1. STATEMENT: statement-based; the binlog stays small, but some statements and
#    functions can cause inconsistency or errors during replication;
# 2. MIXED: mixed mode; chooses STATEMENT or ROW per statement;
# 3. ROW: row-based; records the complete change of each row. Safe, but the binlog
#    grows much larger than in the other two modes.
binlog-format=ROW
# Give this MySQL instance a server id; it must be unique within the network.
# This setting is mandatory here, and some installations do not have it yet.
server_id=1
# FULL: binlog records the complete image of every row; MINIMAL: only the affected columns
binlog_row_image=FULL

... other unrelated settings omitted ... Note: the character encoding settings must match.

After enabling binlog, restart the MySQL service:

# restart the MySQL service
service mysqld restart
# check the MySQL service status
service mysqld status

In the MySQL CLI, check that binlog is enabled with the following commands:

mysql> show variables like 'log_bin';
mysql> show variables like 'binlog_format';
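
If binlog took effect, the output should look roughly like this (illustrative; the exact layout varies by MySQL version):

mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+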

2. Create the canal user

In the MySQL CLI, create the canal user, which canal will use for its MySQL slave privileges. Skip this step if the user already exists.

# create user canal
CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;

This creates a MySQL user named canal with the password canal.
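
Before wiring canal up, it can be worth confirming that the new account actually works. Below is a minimal JDBC sketch (my own addition, not part of the original setup) that logs in as canal and prints its grants; it assumes MySQL Connector/J on the classpath and the address used in the rest of these notes:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CanalUserCheck {
    public static void main(String[] args) throws Exception {
        // Point this at the MySQL instance canal will replicate from.
        String url = "jdbc:mysql://127.0.0.1:3306?useSSL=false";
        try (Connection conn = DriverManager.getConnection(url, "canal", "canal");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW GRANTS FOR CURRENT_USER")) {
            while (rs.next()) {
                // Expect SELECT, REPLICATION SLAVE, REPLICATION CLIENT in the output.
                System.out.println(rs.getString(1));
            }
        }
    }
}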

3. Install the Canal Admin web UI

  • Canal Admin needs its own database, so first create a user for it:
# create user canal_admin
CREATE USER canal_admin IDENTIFIED BY 'canal_admin_pwd';
GRANT ALL PRIVILEGES ON *.* TO 'canal_admin'@'%';
FLUSH PRIVILEGES;
  • Upload canal.admin-1.1.5.tar.gz to a directory of your choice and extract it:
tar -zxvf canal.admin-1.1.5.tar.gz -C admin

Go into the admin/conf directory and edit the configuration file:

$ vim application.yml


The other parameters can be left alone; change server.port if needed.

  • Initialize the canal_manager database:
mysql -uroot -p -h<database ip>
# import the initialization SQL
source conf/canal_manager.sql
  • Go into the bin directory and start the canal admin service:
./startup.sh

Once started, the UI is available at http://IP:8089/. The default admin account is admin/123456.

4. Configure Canal Admin

  • Create a cluster


  • Configure the cluster


  • Load the main configuration template


PS: loading the template sometimes fails, so here is a copy of it:

#################################################
#########        common argument        #########
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
# canal.user = canal
# canal.passwd = E3619321C1A937C46A0D8BD1DAC39F93B27D4458

# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

canal.zkServers =
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024
## memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

## detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire , default 360 hour(15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########         destinations          #########
#################################################
canal.destinations =
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

#################################################
#########        MQ Properties          #########
#################################################
# aliyun ak/sk , support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid =

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

#################################################
#########            Kafka              #########
#################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = "../conf/kerberos/krb5.conf"
kafka.kerberos.jaas.file = "../conf/kerberos/jaas.conf"

#################################################
#########           RocketMQ            #########
#################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

#################################################
#########           RabbitMQ            #########
#################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.deliveryMode =

#################################################
#########            Pulsar             #########
#################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
  • Modify the configuration:
# IP of the admin service
canal.admin.manager = 127.0.0.1:8089
# ZooKeeper servers, comma-separated for a cluster; can be omitted if you are not running a canal cluster
canal.zkServers = 127.0.0.1:2181
# delivery mode: tcp, kafka, rocketMQ, rabbitMQ
canal.serverMode = kafka
# Kafka brokers, comma-separated for a cluster
kafka.bootstrap.servers = 127.0.0.1:9092
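
Optionally, before pointing canal at Kafka, you can sanity-check that the brokers in kafka.bootstrap.servers are reachable. Here is a minimal sketch using the standard Kafka producer client (my own addition; the topic name canal-smoke-test is arbitrary, and the kafka-clients dependency is assumed):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BrokerSmokeTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get() blocks until the broker acks, and throws if it is unreachable
            producer.send(new ProducerRecord<>("canal-smoke-test", "ping")).get();
            System.out.println("broker reachable");
        }
    }
}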

Click Save when you are done editing.

5. Install the Canal server

  • Upload canal.deployer-1.1.5.tar.gz to a directory of your choice and extract it:
tar -zxvf canal.deployer-1.1.5.tar.gz -C canal

Go into the canal/conf directory and edit the configuration file:

vim canal_local.properties


  • Go into the bin directory and start the server:
./startup.sh local

Then check the admin UI to confirm the server registered.


PS: to run multiple servers, canal.zkServers (from the end of step 4) must be filled in, and each additional server needs its own values for canal.register.ip and canal.admin.register.name.

6. Configure a monitoring instance

  • Create a new instance


PS: loading the template sometimes fails, so here is a copy of it:

#################################################
## mysql serverId , v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# position info
canal.instance.master.address=127.0.0.1:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#################################################
  • Modify the configuration:
# slaveId identifies canal as a replication slave; with a standalone canal.deployer it must be
# unique, though it apparently isn't required when configuring through the admin UI
canal.instance.mysql.slaveId=1234
# database address
canal.instance.master.address=127.0.0.1:3306
# username/password
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
# table regex: which tables to watch. This setting matters. Here it watches the stu table
# in the testcanal database, and the resulting topic is testcanal_stu.
canal.instance.filter.regex=testcanal\\.stu
# default MQ topic; for a fixed topic, set only this and skip the dynamic setting below
canal.mq.topic=example
# set dynamicTopic if you want per-table topics
canal.mq.dynamicTopic=.*\\..*
# canal.mq.partition selects which Kafka partition the data is written to; with low volume,
# everything goes to partition 0
canal.mq.partition=0
# With high volume, configure the partition count and hash tables into partitions.
# The official docs warn that hot tables can make single partitions too large.
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=.*\\..*
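
Conceptually, partitionHash routing hashes the configured key (e.g. the primary key) and takes it modulo canal.mq.partitionsNum, so all changes to the same row land in the same partition and stay ordered. A rough illustration (my own sketch, not Canal's actual code):

public class PartitionRouting {
    // Every event carrying the same key maps to the same partition,
    // preserving per-row ordering across partitions.
    static int route(String pkValue, int partitionsNum) {
        return Math.abs(pkValue.hashCode() % partitionsNum);
    }

    public static void main(String[] args) {
        System.out.println(route("42", 3)); // all changes for pk=42 go to one partition
    }
}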

Enter an instance name, select the cluster it belongs to, and save.

  • Click Start (saving also starts the instance automatically)


  • Check the logs for errors. When you see find start position successfully and cost : 60ms , the next step is binlog dump, the instance has started successfully.
  • Note: topic conflicts can occur in Kafka. If a topic for this table was created before, topic creation may fail, because . and _ collide in topic names (testcanal_stu vs. testcanal.stu). A quick way to check for such a collision is sketched below.
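
Here is a minimal sketch (my own addition, assuming the kafka-clients dependency) that lists existing topics with the standard Kafka AdminClient and flags a dot-variant of the topic this note uses:

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicConflictCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Set<String> topics = admin.listTopics().names().get();
            String wanted = "testcanal_stu";
            String colliding = wanted.replace('_', '.'); // "testcanal.stu"
            if (topics.contains(colliding)) {
                // Kafka treats '.' and '_' as colliding in metric names
                System.out.println("Conflict: " + colliding + " already exists");
            }
        }
    }
}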

7. A quick test

import java.util.Optional;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = {"testcanal_stu"})
    public void onMessage3(ConsumerRecord<?, ?> consumerRecord) {
        Optional<?> optional = Optional.ofNullable(consumerRecord.value());
        if (optional.isPresent()) {
            Object msg = optional.get();
            logger.info("message:{}", msg);
        }
    }
}
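
This listener assumes the spring-kafka dependency and spring.kafka.bootstrap-servers pointing at the brokers configured in step 4; the topic name matches the one produced by the instance above.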

Inserting a row into the database produces:

message:{"data":[{"id":"1","name":"xu","age":"21","addr":"chaoyang"}],"database":"canaltest","es":1677839127000,"id":3,"isDdl":false,"mysqlType":{"id":"int","name":"varchar(255)","age":"int","addr":"varchar(255)"},"old":null,"pkNames":["id"],"sql":"","sqlType":{"id":4,"name":12,"age":4,"addr":12},"table":"stu","ts":1677839127949,"type":"INSERT"}

Updating a row produces:

message:{"data":[{"id":"2","name":"fff","age":"33","addr":"haidian"}],"database":"canaltest","es":1677840058000,"id":5,"isDdl":false,"mysqlType":{"id":"int","name":"varchar(255)","age":"int","addr":"varchar(255)"},"old":[{"name":"aaa","age":"22"}],"pkNames":["id"],"sql":"","sqlType":{"id":4,"name":12,"age":4,"addr":12},"table":"stu","ts":1677840059090,"type":"UPDATE"}

Deleting a row produces:

message:{"data":[{"id":"3","name":"popo","age":"25","addr":"fengtai"}],"database":"canaltest","es":1677840415000,"id":7,"isDdl":false,"mysqlType":{"id":"int","name":"varchar(255)","age":"int","addr":"varchar(255)"},"old":null,"pkNames":["id"],"sql":"","sqlType":{"id":4,"name":12,"age":4,"addr":12},"table":"stu","ts":1677840415339,"type":"DELETE"}

This article was reposted from: https://blog.csdn.net/xu963981912/article/details/131090988
Copyright belongs to the original author 奋笔疾书xrp; contact us for removal in case of infringement.
