Deploying ZooKeeper and Kafka on k8s


I. Overview

  • Apache ZooKeeper is a centralized service for maintaining configuration information, naming, and providing distributed synchronization and group services. The project develops and maintains an open-source server that enables highly reliable distributed coordination; you can also think of it as a distributed database with an unusual, tree-shaped structure. Official docs: https://zookeeper.apache.org/doc/r3.8.0/


  • Kafka, originally developed at LinkedIn, is a distributed, partitioned, replicated message system that relies on ZooKeeper for coordination. Official docs: https://kafka.apache.org/documentation/


II. Deploying ZooKeeper on k8s

1) Add the Helm repo
Chart address:
https://artifacthub.io/packages/helm/zookeeper/zookeeper

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm pull bitnami/zookeeper
  tar -xf zookeeper-10.2.1.tgz

2) Modify the configuration
Edit zookeeper/values.yaml

  image:
    registry: myharbor.com
    repository: bigdata/zookeeper
    tag: 3.8.0-debian-11-r36
  ...
  replicaCount: 3
  ...
  service:
    type: NodePort
    nodePorts:
      # The default NodePort range is 30000-32767
      client: "32181"
      tls: "32182"
  ...
  persistence:
    storageClass: "zookeeper-local-storage"
    size: "10Gi"
    # The directories must be created on the hosts in advance
    local:
      - name: zookeeper-0
        host: "local-168-182-110"
        path: "/opt/bigdata/servers/zookeeper/data/data1"
      - name: zookeeper-1
        host: "local-168-182-111"
        path: "/opt/bigdata/servers/zookeeper/data/data1"
      - name: zookeeper-2
        host: "local-168-182-112"
        path: "/opt/bigdata/servers/zookeeper/data/data1"
  ...
  # Enable Prometheus to access the ZooKeeper metrics endpoint
  metrics:
    enabled: true
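As the comment in values.yaml notes, the data directories must already exist on each host for the local PVs to bind. A minimal preparation step, run on each of the three nodes listed above:

  # Run on local-168-182-110, local-168-182-111 and local-168-182-112
  mkdir -p /opt/bigdata/servers/zookeeper/data/data1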

Add zookeeper/templates/pv.yaml

  {{- range .Values.persistence.local }}
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: {{ .name }}
    labels:
      name: {{ .name }}
  spec:
    storageClassName: {{ $.Values.persistence.storageClass }}
    capacity:
      storage: {{ $.Values.persistence.size }}
    accessModes:
      - ReadWriteOnce
    local:
      path: {{ .path }}
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - {{ .host }}
  ---
  {{- end }}

Add zookeeper/templates/storage-class.yaml

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: {{ .Values.persistence.storageClass }}
  provisioner: kubernetes.io/no-provisioner
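With the values overrides and the two extra templates in place, it can be worth rendering the chart locally before installing; a quick sanity check, assuming you run it next to the unpacked chart directory:

  # Render the manifests and confirm the PVs, StorageClass and image overrides appear
  helm template zookeeper ./zookeeper | grep -E 'kind:|image:|storageClassName:'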

3) Install

  # Prepare the image first
  docker pull docker.io/bitnami/zookeeper:3.8.0-debian-11-r36
  docker tag docker.io/bitnami/zookeeper:3.8.0-debian-11-r36 myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
  docker push myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
  # Install
  helm install zookeeper ./zookeeper -n zookeeper --create-namespace


Check pod status

  kubectl get pods,svc -n zookeeper -o wide
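Since the chart now ships its own local PVs, it is also worth confirming that each pod's PVC bound to one of them:

  # PVs are cluster-scoped; each zookeeper PVC should show STATUS=Bound
  kubectl get pv
  kubectl get pvc -n zookeeper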

4) Test and verify

  # Check each server's role (leader/follower)
  kubectl exec -it zookeeper-0 -n zookeeper -- zkServer.sh status
  kubectl exec -it zookeeper-1 -n zookeeper -- zkServer.sh status
  kubectl exec -it zookeeper-2 -n zookeeper -- zkServer.sh status
  # Open a shell in a zookeeper pod
  kubectl exec -it zookeeper-0 -n zookeeper -- bash
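Inside the pod you can exercise the ensemble with zkCli.sh, which ships in the Bitnami image; a minimal smoke test (the /test znode is just an example name):

  # Inside the zookeeper-0 pod
  zkCli.sh -server localhost:2181
  # At the zk prompt:
  #   create /test "hello"
  #   get /test
  #   delete /test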


5) Prometheus monitoring
Prometheus:https://prometheus.k8s.local/targets?search=zookeeper

You can inspect the scraped data with a command

  kubectl get --raw http://10.244.0.52:9141/metrics
  kubectl get --raw http://10.244.1.101:9141/metrics
  kubectl get --raw http://10.244.2.137:9141/metrics
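The pod IPs above are specific to this environment; look up your own before substituting them into the /metrics URLs:

  # Print each pod's name and IP
  kubectl get pods -n zookeeper -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'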

Grafana: https://grafana.k8s.local/
Account: admin; retrieve the password with the command below

  kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana cluster-monitoring dashboard, ID 10465.
Official dashboards: https://grafana.com/grafana/dashboards/
6) Uninstall

  helm uninstall zookeeper -n zookeeper
  kubectl delete pod -n zookeeper `kubectl get pod -n zookeeper|awk 'NR>1{print $1}'` --force
  kubectl patch ns zookeeper -p '{"metadata":{"finalizers":null}}'
  kubectl delete ns zookeeper --force

III. Deploying Kafka on k8s

1) Add the Helm repo
Chart address: https://artifacthub.io/packages/helm/bitnami/kafka

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm pull bitnami/kafka
  tar -xf kafka-18.4.2.tgz

2) Modify the configuration
Edit kafka/values.yaml

  image:
    registry: myharbor.com
    repository: bigdata/kafka
    tag: 3.2.1-debian-11-r16
  ...
  replicaCount: 3
  ...
  service:
    type: NodePort
    nodePorts:
      client: "30092"
      external: "30094"
  ...
  externalAccess:
    enabled: true
    service:
      type: NodePort
      nodePorts:
        - 30001
        - 30002
        - 30003
      useHostIPs: true
  ...
  persistence:
    storageClass: "kafka-local-storage"
    size: "10Gi"
    # The directories must be created on the hosts in advance
    local:
      - name: kafka-0
        host: "local-168-182-110"
        path: "/opt/bigdata/servers/kafka/data/data1"
      - name: kafka-1
        host: "local-168-182-111"
        path: "/opt/bigdata/servers/kafka/data/data1"
      - name: kafka-2
        host: "local-168-182-112"
        path: "/opt/bigdata/servers/kafka/data/data1"
  ...
  metrics:
    kafka:
      enabled: true
      image:
        registry: myharbor.com
        repository: bigdata/kafka-exporter
        tag: 1.6.0-debian-11-r8
    jmx:
      enabled: true
      image:
        registry: myharbor.com
        repository: bigdata/jmx-exporter
        tag: 0.17.1-debian-11-r1
      annotations:
        prometheus.io/path: "/metrics"
  ...
  zookeeper:
    enabled: false
  ...
  externalZookeeper:
    servers:
      - zookeeper-0.zookeeper-headless.zookeeper
      - zookeeper-1.zookeeper-headless.zookeeper
      - zookeeper-2.zookeeper-headless.zookeeper
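The externalZookeeper entries assume the zookeeper-headless service from Part II is resolvable from the kafka namespace; a quick sanity check before installing:

  # The headless service should list one endpoint per ZooKeeper pod
  kubectl get svc,endpoints -n zookeeper zookeeper-headless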

Add kafka/templates/pv.yaml

  {{- range .Values.persistence.local }}
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: {{ .name }}
    labels:
      name: {{ .name }}
  spec:
    storageClassName: {{ $.Values.persistence.storageClass }}
    capacity:
      storage: {{ $.Values.persistence.size }}
    accessModes:
      - ReadWriteOnce
    local:
      path: {{ .path }}
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - {{ .host }}
  ---
  {{- end }}

Add kafka/templates/storage-class.yaml

  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: {{ .Values.persistence.storageClass }}
  provisioner: kubernetes.io/no-provisioner
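As with ZooKeeper, create the data directories on each host beforehand so the local PVs can bind:

  # Run on local-168-182-110, local-168-182-111 and local-168-182-112
  mkdir -p /opt/bigdata/servers/kafka/data/data1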

3) Install

  # Prepare the images first
  docker pull docker.io/bitnami/kafka:3.2.1-debian-11-r16
  docker tag docker.io/bitnami/kafka:3.2.1-debian-11-r16 myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
  docker push myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
  # kafka-exporter
  docker pull docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8
  docker tag docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8 myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
  docker push myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
  # JMX exporter
  docker pull docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1
  docker tag docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1 myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
  docker push myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
  # Install
  helm install kafka ./kafka -n kafka --create-namespace

Check pod status

  kubectl get pods,svc -n kafka -o wide
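Because the brokers register themselves in the external ZooKeeper, you can also confirm that all three came up by listing the broker ids from a ZooKeeper pod deployed in Part II; a small sketch:

  # Should print the registered broker ids, e.g. [0, 1, 2]
  kubectl exec -it zookeeper-0 -n zookeeper -- zkCli.sh -server localhost:2181 ls /brokers/ids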

4) Test and verify

  # Log into a kafka pod
  kubectl exec -it kafka-0 -n kafka -- bash

1. Create a topic (one replica, one partition)

  --create: create a topic
  --topic: name of the topic to create
  --bootstrap-server: Kafka connection address
  --config: topic-level configuration overrides; see the Topic-level configuration section of the docs for the parameter list
  --partitions: number of partitions for the topic (default: 1)
  --replication-factor: replication factor for each partition (default: 1)

  kafka-topics.sh --create --topic test001 --bootstrap-server kafka.kafka:9092 --partitions 1 --replication-factor 1
  # Describe the topic
  kafka-topics.sh --describe --bootstrap-server kafka.kafka:9092 --topic test001

2. List topics

  kafka-topics.sh --list --bootstrap-server kafka.kafka:9092

3. Producer/consumer test
[Producer]

  kafka-console-producer.sh --broker-list kafka.kafka:9092 --topic test001
  {"id":"1","name":"n1","age":"20"}
  {"id":"2","name":"n2","age":"21"}
  {"id":"3","name":"n3","age":"22"}

[Consumer]

  # Consume from the beginning
  kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --from-beginning
  # Consume from a given offset of one partition; only partition 0 is specified here, so repeat the command (or loop) for the other partitions as needed
  kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --partition 0 --offset 100 --group test001
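Since externalAccess is enabled with NodePorts 30001-30003 and useHostIPs, the same test can also be run from outside the cluster; a sketch assuming the Kafka CLI tools are installed on a machine that can reach node local-168-182-110 (substitute your own node address):

  # Produce through a broker's external NodePort
  kafka-console-producer.sh --broker-list local-168-182-110:30001 --topic test001
  # Consume the same topic from outside the cluster
  kafka-console-consumer.sh --bootstrap-server local-168-182-110:30001 --topic test001 --from-beginning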

4. Check consumer lag

  kafka-consumer-groups.sh --bootstrap-server kafka.kafka:9092 --describe --group test001

5. Delete the topic

  kafka-topics.sh --delete --topic test001 --bootstrap-server kafka.kafka:9092

5) Prometheus monitoring
Prometheus:https://prometheus.k8s.local/targets?search=kafka
You can inspect the scraped data with a command

  kubectl get --raw http://10.244.2.165:9308/metrics
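If the pod network is not directly reachable, port-forwarding works too; a sketch assuming the chart exposes the exporter through a service named kafka-metrics (check the actual name with kubectl get svc -n kafka):

  # Forward the kafka-exporter port locally, then scrape it
  kubectl port-forward -n kafka svc/kafka-metrics 9308:9308 &
  curl -s http://127.0.0.1:9308/metrics | head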

Grafana: https://grafana.k8s.local/
Account: admin; retrieve the password with the command below

  kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana cluster-monitoring dashboard, ID 11962.
Official dashboards: https://grafana.com/grafana/dashboards/

6) Uninstall

  helm uninstall kafka -n kafka
  kubectl delete pod -n kafka `kubectl get pod -n kafka|awk 'NR>1{print $1}'` --force
  kubectl patch ns kafka -p '{"metadata":{"finalizers":null}}'
  kubectl delete ns kafka --force

This completes the ZooKeeper + Kafka on k8s environment deployment.


Reposted from: https://blog.csdn.net/qq_39578545/article/details/127034447
Copyright belongs to the original author 果子哥丶; please contact us for removal in case of infringement.
