Replacing ZooKeeper with clickhouse-keeper in ClickHouse

Background: ClickHouse distributed tables use ZooKeeper as their metadata store, and every client read or write against a distributed table also hits ZooKeeper. ZooKeeper is essentially a small log-based file system, and under heavy read/write traffic it can drop into read-only mode.
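Strictly speaking, it is the Replicated* table engines behind the distributed tables that register their metadata (replicas, parts, the replication log) in the coordination service. A minimal sketch of such a table, assuming a test database exists and the {shard}/{replica} macros are configured:

# Each replica of this table keeps its state under /clickhouse/tables/... in ZooKeeper (or Keeper)
clickhouse-client --query "
CREATE TABLE test.events_local (
    event_date Date,
    event_id   UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_local', '{replica}')
ORDER BY event_id"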

To solve this problem, the ClickHouse team developed clickhouse-keeper as a replacement. It was first introduced in version 21.8, was feature-complete in 21.12, and as of 22.5 no longer depends on system libraries.

According to the official documentation, as of version 22.5 its write performance is on par with ZooKeeper and its read performance is better.

Symptoms:

1. ClickHouse error log

It shows that the socket connection to ZooKeeper (xxx.xxx.xxx.xxx:2181) cannot be established:

2022.04.01 17:11:01.452465 [ 428517 ] {} <Error> void Coordination::ZooKeeper::sendThread(): Code: 210, e.displayText() = DB::NetException: I/O error: 23: Can't create epoll queue, while writing to socket (20.20.20.34:2181), Stack trace (when copying this message, always include the lines below):
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8f9b87a in /usr/lib/debug/.build-id/b1/6d23354750e4d6ff9887c2b4f856f045d62da0.debug
2. DB::WriteBufferFromPocoSocket::nextImpl() @ 0x100764a0 in /usr/lib/debug/.build-id/b1/6d23354750e4d6ff9887c2b4f856f045d62da0.debug

2. ZooKeeper log on the corresponding node

It shows that ZooKeeper entered read-only mode (r-o mode):

2022-04-01 07:21:14,189 [myid:3] - INFO  [PurgeTask:FileTxnSnapLog@124] - zookeeper.snapshot.trust.empty : false
2022-04-01 07:21:14,191 [myid:3] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@145] - Purge task completed.
2022-04-01 17:07:55,961 [myid:3] - INFO  [SessionTracker:ZooKeeperServer@628] - Expiring session 0x31056da7a8a0000, timeout of 30000ms exceeded
2022-04-01 17:07:55,962 [myid:3] - INFO  [RequestThrottler:QuorumZooKeeperServer@163] - Submitting global closeSession request for session 0x31056da7a8a0000
2022-04-01 17:10:23,523 [myid:3] - WARN  [NIOWorkerThread-75:ZooKeeperServer@1411] - Connection request from old client /20.20.20.46:62879; will be dropped if server is in r-o mode
2022-04-01 17:10:23,534 [myid:3] - INFO  [CommitProcessor:3:LeaderSessionTracker@104] - Committing global session 0x31056da7a8a0001
2022-04-01 17:11:01,453 [myid:3] - WARN  [NIOWorkerThread-20:NIOServerCnxn@371] - Unexpected exception
EndOfStreamException: Unable to read additional data from client, it probably closed the socket: address = /20.20.20.46:62879, session = 0x31056da7a8a0001
   at org.apache.zookeeper.server.NIOServerCnxn.handleFailedRead(NIOServerCnxn.java:170)
   at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:333)
   at org.apache.zookeeper.server.NIOServerCnxnFactory$IOWorkRequest.doWork(NIOServerCnxnFactory.java:508)
   at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:154)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)

Steps to replace ZooKeeper with clickhouse-keeper:

1- Prepare the clickhouse-keeper configuration file (config.xml)
2- Back up the clickhouse-server configuration and data, as well as the ZooKeeper metadata
3- Download clickhouse-keeper-converter (it is bundled with the clickhouse binary)

4- Migrate the existing ZooKeeper metadata to clickhouse-keeper

a. Stop all ZooKeeper nodes
b. Find the ZooKeeper leader node
c. Restart the leader and stop it again (this step makes the leader write a fresh snapshot)
d. Run clickhouse-keeper-converter to generate a Keeper snapshot file
e. Start Keeper so that it loads the snapshot from the previous step

5- Restart clickhouse-server


1: Prepare the clickhouse-keeper configuration file

Keeper's configuration on each ClickHouse node lives in config.xml.

1.1- Set the listen address so the node can be reached from outside

<listen_host>0.0.0.0</listen_host>

1.2- In the zookeeper section of config.xml, configure the clickhouse-keeper addresses, plus Keeper's own properties: ports, storage paths, and so on.

a. Check that the ports are not already in use

Assume Keeper's client port is 9181 and the inter-server (Raft) communication port is 9444:

netstat -anp | grep 9181 
netstat -anp | grep 9444
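To check both ports on every node in one pass, a small sketch (assumes passwordless ssh to the hosts named in this article):

# 9181 = Keeper client port, 9444 = Raft inter-server port
for h in clickhouse-node01 clickhouse-node02 clickhouse-node03; do
    echo "== $h =="
    ssh "$h" 'netstat -anp | grep -E ":(9181|9444) " || echo "ports free"'
done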

b. Configure the clickhouse-keeper addresses; this block is identical on every node

<zookeeper>
    <node>
        <host>clickhouse-node01</host>
        <port>9181</port>
    </node>
    <node>
        <host>clickhouse-node02</host>
        <port>9181</port>
    </node>
    <node>
        <host>clickhouse-node03</host>
        <port>9181</port>
    </node>
</zookeeper>

c. Set clickhouse-keeper's server_id and the Raft inter-server communication port 9444

The server_id of each node must be unique and must not clash with any other node; the server_id in keeper_server has to match the id given to this node in the raft_configuration section of the Raft cluster.

For example:

On clickhouse-node01: <server_id>1</server_id>

On clickhouse-node02: <server_id>2</server_id>

<keeper_server>
    <tcp_port>9181</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>warning</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>clickhouse-node01</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>clickhouse-node02</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>3</id>
            <hostname>clickhouse-node03</hostname>
            <port>9444</port>
        </server>
    </raft_configuration>
</keeper_server>
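Because only server_id differs from node to node, the per-node file can be stamped out of a single template. A sketch under assumptions: keeper.xml.tmpl holds the keeper_server block above wrapped in a <clickhouse> root element, with __ID__ in place of the server_id value, and root ssh access to the hosts (dropping the file into config.d/ is an alternative to editing config.xml directly; ClickHouse merges both):

# Render and ship one keeper config per node, each with its unique server_id
id=1
for h in clickhouse-node01 clickhouse-node02 clickhouse-node03; do
    sed "s/__ID__/$id/" keeper.xml.tmpl |
        ssh "$h" 'cat > /etc/clickhouse-server/config.d/keeper.xml'
    id=$((id + 1))
done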

2: Back up the clickhouse-server configuration and data, as well as the ZooKeeper metadata (so you can roll back if the upgrade fails)

a. The ClickHouse data path is set in config.xml:

<path>/data/1/clickhouse</path>

b. The ZooKeeper data paths are set in zoo.cfg:

  dataDir=/data/1/zookeeper/data    (snapshot data)
  dataLogDir=/data/1/zookeeper/logs   (transaction logs)
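Before upgrading, a minimal backup sketch based on the paths above (the destination directory is an assumption):

# Snapshot configs and data so the upgrade can be rolled back
backup_dir=/data/backup/ck-keeper-$(date +%Y%m%d)
mkdir -p "$backup_dir"
tar czf "$backup_dir/clickhouse-config.tar.gz" /etc/clickhouse-server
tar czf "$backup_dir/clickhouse-data.tar.gz"   /data/1/clickhouse
tar czf "$backup_dir/zookeeper-data.tar.gz"    /data/1/zookeeper/data /data/1/zookeeper/logs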

3: Upgrade clickhouse (the packages include clickhouse-server, clickhouse-common, clickhouse-keeper, and clickhouse-keeper-converter)

  Taking the uninstall-then-reinstall upgrade approach as an example:
## Uninstall
yum remove -y clickhouse-client.noarch clickhouse-common-static.x86_64 clickhouse-common-static-dbg.x86_64 clickhouse-server.noarch

## Download and install
yum install -y clickhouse-server-22.8.4.7-1.x86_64 clickhouse-client-22.8.4.7-1.x86_64 clickhouse-common-static-22.8.4.7-1.x86_64  clickhouse-common-static-dbg-22.8.4.7-1.x86_64
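Afterwards, confirm what actually got installed and that the converter is available:

rpm -qa | grep clickhouse                  # installed packages and versions
clickhouse-server --version                # should report 22.8.4.7
command -v clickhouse-keeper-converter     # shipped with the clickhouse binary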

4: Migrate the ZooKeeper metadata to clickhouse-keeper

a- Stop all ZooKeeper nodes.

   On every ZooKeeper node, run:
 /usr/local/zookeeper/bin/zkServer.sh stop

b- Start and stop the ZooKeeper leader one more time, to force it to write out a consistent snapshot.

   To find the leader, check each node's status (note that zkServer.sh status only answers while the process is running, so identify the leader before the final shutdown); the leader reports "Mode: leader":
/usr/local/zookeeper/bin/zkServer.sh status
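To check every node in one pass, a small sketch (assuming ZooKeeper runs on the hosts named earlier and passwordless ssh is set up):

# Exactly one node should print "Mode: leader"
for h in clickhouse-node01 clickhouse-node02 clickhouse-node03; do
    echo -n "$h: "
    ssh "$h" '/usr/local/zookeeper/bin/zkServer.sh status 2>&1 | grep Mode'
done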


On the leader node, start and then stop ZooKeeper:

/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh stop

c- Run clickhouse-keeper-converter to generate the Keeper snapshot

clickhouse-keeper-converter --zookeeper-logs-dir /data/1/zookeeper/logs/version-2 --zookeeper-snapshots-dir /data/1/zookeeper/data/version-2 --output-dir /var/lib/clickhouse/coordination/snapshots
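The converter writes its output into Keeper's snapshot directory. Check the result and make sure the files end up owned by the clickhouse user (the ownership fix below assumes a standard package install that runs the server as clickhouse):

ls -l /var/lib/clickhouse/coordination/snapshots    # expect a snapshot_* file
chown -R clickhouse:clickhouse /var/lib/clickhouse/coordination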

d- Start clickhouse-keeper standalone

If clickhouse-server and Keeper are installed on the same node, this step can be skipped: when clickhouse-server finds a keeper_server section in its config, it starts the embedded Keeper itself.

sudo -su clickhouse
clickhouse-keeper --config  /etc/clickhouse-server/config.xml
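Once Keeper is up on all nodes, check that a Raft quorum has formed; mntr is one of the four-letter commands Keeper answers on its client port:

# Exactly one node should report leader, the rest follower
for h in clickhouse-node01 clickhouse-node02 clickhouse-node03; do
    echo -n "$h: "
    echo mntr | nc "$h" 9181 | grep zk_server_state
done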

5: Restart clickhouse-server

/usr/bin/clickhouse stop
/usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml --daemon

6: Verify that clickhouse-keeper is running properly

 echo ruok | nc localhost 9181; echo

Expected output: imok

7: Verify that clickhouse is working properly

Connect with the client, create a distributed table, and check that normal reads and writes (CRUD) against it succeed. A first sanity check is to list the configured clusters:

 select * from system.clusters;
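For the CRUD check, a minimal smoke test, assuming a cluster named mycluster is defined in remote_servers and the {shard}/{replica} macros are set (adjust the names to your setup):

# Create a replicated table on all nodes, write one row, read it back, clean up
clickhouse-client --query "CREATE TABLE default.keeper_smoke ON CLUSTER mycluster
    (id UInt64)
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/keeper_smoke', '{replica}')
    ORDER BY id"
clickhouse-client --query "INSERT INTO default.keeper_smoke VALUES (1)"
clickhouse-client --query "SELECT * FROM default.keeper_smoke"
clickhouse-client --query "DROP TABLE default.keeper_smoke ON CLUSTER mycluster"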


You may hit certificate problems when restarting clickhouse.

1- Error log

<Error> CertificateReloader: Cannot obtain modification time for certificate file /etc/clickhouse-server/server.crt, skipping update. errno: 2, strerror: No such file or directory

Fix: run the following on every clickhouse-server node

openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt
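The server reads these files as the clickhouse user, so fix ownership and permissions afterwards:

chown clickhouse:clickhouse /etc/clickhouse-server/server.key /etc/clickhouse-server/server.crt
chmod 600 /etc/clickhouse-server/server.key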

2- Error log

Error opening Diffie-Hellman parameters file /etc/clickhouse-server/dhparam.pem

Fix: run the following on every clickhouse-server node

openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096

Q: Since clickhouse already supports multiple ZooKeeper clusters, can a clickhouse instance talk to a ZooKeeper cluster and a Keeper cluster at the same time?

A: No. The official documentation states this explicitly.


References:

ClickHouse Keeper | ClickHouse Docs
Configuring ClickHouse Keeper (clickhouse-keeper) | ClickHouse Docs


This article is reposted from: https://blog.csdn.net/zhang5324496/article/details/127869781
Copyright belongs to the original author, 龟速扣代码. If there is any infringement, please contact us for removal.
