

Linux Kafka 3.7.0 KRaft + SASL Authentication: A Detailed Cluster Installation and Deployment Guide

1. Cluster Planning

In the conventional mode, metadata is kept in ZooKeeper and a controller is elected dynamically at runtime; that controller then manages the Kafka cluster. In KRaft mode (production-ready since Kafka 3.3), Kafka no longer depends on a ZooKeeper cluster: three controller nodes take ZooKeeper's place, metadata is stored on the controllers, and the controllers manage the cluster directly.

This brings several benefits:

  • Kafka no longer depends on an external framework and can run entirely on its own.
  • When managing the cluster, the controller no longer has to read state from ZooKeeper first, so cluster performance improves.
  • Without ZooKeeper, scaling the cluster is no longer bounded by ZooKeeper's read/write capacity.
  • Controllers are fixed by configuration rather than elected dynamically, so the controller nodes' hardware can be strengthened deliberately, instead of being unable to plan for high load landing on a random controller node.
Cluster planning:

kafka1 (192.172.21.120): kafka
kafka2 (192.172.21.121): kafka
kafka3 (192.172.21.122): kafka
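Optionally, you can map these hostnames to the IPs on every node. This is not required for this guide (all addresses below are raw IPs), but it is convenient for day-to-day operations; the hostnames come from the planning table above:

# /etc/hosts on every node
192.172.21.120 kafka1
192.172.21.121 kafka2
192.172.21.122 kafka3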

2. Cluster Deployment

1. Download the Kafka binary package

https://kafka.apache.org/downloads

2. Extract the archive

tar -zxvf /data/kafka_2.13-3.7.0.tgz
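For example, extracting into /data/kafka, the install root that the later commands in this guide assume:

mkdir -p /data/kafka
tar -zxvf /data/kafka_2.13-3.7.0.tgz -C /data/kafka/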

3. Edit the configuration file (using node kafka1, 192.172.21.120, as the example)

cd /data/kafka/kafka_2.13-3.7.0/config/kraft
vi server.properties

Note: in KRaft mode, the configuration file is in the kraft subdirectory under the config directory.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This configuration file is intended for use in KRaft mode, where
# Apache ZooKeeper is not present.
#

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode.
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@192.172.21.120:19093,2@192.172.21.121:19093,3@192.172.21.122:19093

############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 19092.
# FORMAT:
#   listeners = listener_name://host_name:port
# EXAMPLE:
#   listeners = PLAINTEXT://your.host.name:19092
listeners=SASL_PLAINTEXT://192.172.21.120:19092,CONTROLLER://192.172.21.120:19093

# Name of listener used for communication between brokers.
inter.broker.listener.name=SASL_PLAINTEXT

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=SASL_PLAINTEXT://192.172.21.120:19092

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping is set in `listener.security.protocol.map`, the default is to use the PLAINTEXT protocol.
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols; the default is for them to be the same. See the config documentation for more details.
# Note: the CONTROLLER mapping must be changed to SASL_PLAINTEXT.
listener.security.protocol.map=CONTROLLER:SASL_PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# Deny access when no ACL is found (authorization is mandatory)
allow.everyone.if.no.acl.found=false

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka/datas

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#   1. Durability: Unflushed data may be lost if you are not using replication.
#   2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#   3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# SASL mechanism: PLAIN is the simplest; its drawback is that users cannot be added dynamically
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
sasl.mechanism=PLAIN

# Disable automatic topic creation
auto.create.topics.enable=false

# Set the superuser
super.users=User:admin

# Authorizer introduced in 3.2.0; see https://cwiki.apache.org/confluence/display/KAFKA/KIP-801%3A+Implement+an+Authorizer+that+stores+metadata+in+__cluster_metadata
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

# SASL mechanism used for controller-to-controller authentication
sasl.mechanism.controller.protocol=PLAIN

4. Edit the configuration files on the other nodes

Edit server.properties on 192.172.21.121 and 192.172.21.122, changing three settings:

1. node.id: must not be repeated; it is unique across the cluster and must match the corresponding entry in controller.quorum.voters.
2. advertised.listeners: change the address to match each host.
3. listeners: change the address to match each host's IP.

On 192.172.21.121:

# Node ID
node.id=2
# Ports bound on this server
listeners=SASL_PLAINTEXT://192.172.21.121:19092,CONTROLLER://192.172.21.121:19093
# Listener name, host name, and port the broker advertises to clients.
# If not set, the value of "listeners" is used.
advertised.listeners=SASL_PLAINTEXT://192.172.21.121:19092

On 192.172.21.122:

# Node ID
node.id=3
# Ports bound on this server
listeners=SASL_PLAINTEXT://192.172.21.122:19092,CONTROLLER://192.172.21.122:19093
# Listener name, host name, and port the broker advertises to clients.
# If not set, the value of "listeners" is used.
advertised.listeners=SASL_PLAINTEXT://192.172.21.122:19092
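Rather than editing each file by hand, the three per-node values can be derived from node 1's server.properties. A minimal sketch, assuming passwordless ssh/scp and an identical install path on every host (controller.quorum.voters must stay the same everywhere, which is why only the listeners lines are rewritten):

# Derive and push the configs for nodes 2 and 3 from node 1's file
CFG=/data/kafka/kafka_2.13-3.7.0/config/kraft/server.properties
id=2
for ip in 192.172.21.121 192.172.21.122; do
    sed -e "s/^node.id=.*/node.id=$id/" \
        -e "/^listeners=/s/192.172.21.120/$ip/g" \
        -e "/^advertised.listeners=/s/192.172.21.120/$ip/" \
        "$CFG" > /tmp/server.properties.$id
    scp /tmp/server.properties.$id $ip:$CFG
    id=$((id+1))
done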

5. Create the KRaft username/password (JAAS) authentication file

Create the JAAS file (saved here as /data/kafka/config/kafka_server_jaas.conf, the path referenced by the startup script below):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="password"
    user_admin="password"
    user_test="test";
};

  • username/password are the credentials this broker itself uses when authenticating.
  • user_admin="password" defines a user named admin with the password "password". At least one such entry is required, and it must match the username and password above.
  • user_test="test" defines a second user, named test, with the password test.
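Clients authenticate as one of these users. For example, a client properties file for the test user would look like this (a hypothetical jaas-test.properties; the admin equivalent is created in the test section below):

# Hypothetical client properties for the second user defined above
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN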

6. Initialize the cluster data directories

1. First, generate a unique ID for the storage directories:

bin/kafka-storage.sh random-uuid
Output ID: Mu_PwVjLQGGYBcE_EjCfmA

2. Format the Kafka storage directory with that ID (this must be run on every node):

bin/kafka-storage.sh format -t Mu_PwVjLQGGYBcE_EjCfmA -c /data/kafka/kafka_2.13-3.7.0/config/kraft/server.properties
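The same format command has to run on the other two nodes as well. A sketch, assuming ssh access and that the per-node configs from step 4 are already in place:

for ip in 192.172.21.121 192.172.21.122; do
    ssh $ip "/data/kafka/kafka_2.13-3.7.0/bin/kafka-storage.sh format -t Mu_PwVjLQGGYBcE_EjCfmA -c /data/kafka/kafka_2.13-3.7.0/config/kraft/server.properties"
done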

7. Start the cluster

1. Create a SASL-enabled startup script for the Kafka service

cp kafka-server-start.sh kafka-server-start-sasl.sh

#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
    echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
    exit 1
fi

base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    # Point java.security.auth.login.config at the kafka_server_jaas.conf created above
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/data/kafka/config/kafka_server_jaas.conf"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
    -daemon)
        EXTRA_ARGS="-daemon "$EXTRA_ARGS
        shift
        ;;
    *)
        ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

In a multi-instance layout (e.g. kafka_2.13-3.6.0-1, kafka_2.13-3.6.0-2, and kafka_2.13-3.6.0-3 sharing a global config directory), the modified section would instead be:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/data/kafka-cluster/global_config/kafka_server_jaas.conf"
fi
2. Start Kafka on each node in turn:

kafka-server-start-sasl.sh -daemon /data/kafka/kafka_2.13-3.7.0/config/kraft/server.properties
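To confirm each node came up, you can check the listener ports and tail the server log; the paths assume the install root used above:

# Broker (19092) and controller (19093) ports should both be listening
ss -ltnp | grep -E '19092|19093'
# Watch for errors during startup
tail -f /data/kafka/kafka_2.13-3.7.0/logs/server.log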

8. Test the cluster with the command-line tools

1. First, create a client authentication file:

vim /data/kafka/config/jaas.properties

Configure one of the users created above:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="password";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

Note: from here on, every command must carry --command-config /data/kafka/config/jaas.properties (or --producer.config/--consumer.config for the console clients) to authenticate.

2. Create the topic create-for-test (run from the bin directory):

bin/kafka-topics.sh --bootstrap-server 192.172.21.120:19092 --create --topic create-for-test --partitions 1 --replication-factor 1 --command-config /data/kafka/config/jaas.properties

3. List the topics; only create-for-test should appear:

bin/kafka-topics.sh --bootstrap-server 192.172.21.120:19092 --list --command-config /data/kafka/config/jaas.properties

4. To test consumption, first create kafka_client_jaas.conf (saved here as /data/kafka/config/kafka_client_jaas.conf, the path used in the modified scripts below):

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="password";
};

5. Modify the kafka-console-producer.sh and kafka-console-consumer.sh launch scripts (both must be changed):

#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M"
fi
# Add -Djava.security.auth.login.config=/data/kafka/config/kafka_client_jaas.conf
exec $(dirname $0)/kafka-run-class.sh -Djava.security.auth.login.config=/data/kafka/config/kafka_client_jaas.conf kafka.tools.ConsoleProducer "$@"

In kafka-console-consumer.sh, make the same change; the class launched there is kafka.tools.ConsoleConsumer.

6. Open a console producer and leave it running, so that consumption can be observed:

./kafka-console-producer.sh --bootstrap-server 192.172.21.120:19092 --topic create-for-test --producer.config /data/kafka/config/jaas.properties

7. Consume the data; when messages typed into the producer show up here, the test has passed:

./kafka-console-consumer.sh --bootstrap-server 192.172.21.120:19092 --topic create-for-test --from-beginning --consumer.config /data/kafka/config/jaas.properties

8. Delete the test topic:

bin/kafka-topics.sh --bootstrap-server 192.172.21.120:19092 --delete --topic create-for-test --command-config /data/kafka/config/jaas.properties
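One closing note on authorization: because allow.everyone.if.no.acl.found=false, the second user test defined in kafka_server_jaas.conf cannot produce or consume anything until the admin superuser grants it ACLs. A sketch (the topic and group names here are only examples):

# Allow user "test" to read and write the test topic
bin/kafka-acls.sh --bootstrap-server 192.172.21.120:19092 \
    --command-config /data/kafka/config/jaas.properties \
    --add --allow-principal User:test \
    --operation Read --operation Write --topic create-for-test
# Consumers additionally need Read on their consumer group
bin/kafka-acls.sh --bootstrap-server 192.172.21.120:19092 \
    --command-config /data/kafka/config/jaas.properties \
    --add --allow-principal User:test \
    --operation Read --group test-group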

If you do not need SASL authentication, see: https://www.cnblogs.com/fanqisoft/p/18027195

If anything is unclear, feel free to contact the author.

Tags: linux kafka ops

Reposted from: https://blog.csdn.net/qq_41118173/article/details/140183152
Copyright belongs to the original author, 即将雄起的运维玩家. In case of infringement, please contact us for removal.
