

Deploying a single-node Kafka + ZooKeeper stack on a Kubernetes cluster

Background:

Note: this single-node ZooKeeper + Kafka deployment on Kubernetes is built from three Kubernetes objects: a Deployment, a Service, and a PVC for storage.

1. Deploying the ZooKeeper service:

Note: the image used here is dockerhub.jiang.com/jiang-public/zookeeper:3.5.9.

1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9

2. The Deployment controller YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-kultz
  namespace: sit
  labels:
    app: zookeeper-kultz
    name: zookeeper
    version: v3.5.9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-kultz
      name: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper-kultz
        name: zookeeper
        version: v3.5.9
    spec:
      volumes:
        - name: zookeeper-pvc
          persistentVolumeClaim:
            claimName: zookeeper-pvc
      containers:
        - name: zookeeper
          image: 'dockerhub.jiang.com/jiang-public/zookeeper:3.5.9'
          ports:
            - containerPort: 2181
              protocol: TCP
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: zookeeper-pvc
              mountPath: /bitnami/zookeeper/data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```

Note: two points deserve attention here:
1. The env setting ALLOW_ANONYMOUS_LOGIN='yes'.
2. The pod-level securityContext must set runAsUser: 0 and fsGroup: 0; otherwise the container fails with:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied

and the ZooKeeper service never starts.

The PVC is mounted at /bitnami/zookeeper/data, which is the data directory configured in zoo.cfg.
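For context, the zoo.cfg inside the Bitnami container ends up with settings along these lines (an illustrative excerpt; the file is rendered by the image's startup scripts, so the exact path and contents may differ):

```properties
# Illustrative excerpt of /opt/bitnami/zookeeper/conf/zoo.cfg (path is an assumption)
dataDir=/bitnami/zookeeper/data
clientPort=2181
```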

3. The PVC YAML (note that the apiVersion for a PersistentVolumeClaim is the core v1 API):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-san
  volumeMode: Filesystem
```

Note: the underlying volume is provisioned by a StorageClass (hpe-san here), which is outside the scope of this article.

4. The ZooKeeper Service YAML:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: sit
  labels:
    name: zookeeper
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    name: zookeeper
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
```

5. Starting the ZooKeeper service:

Create the PVC first:

```shell
kubectl apply -f zookeeper-pvc.yaml
```

Then create the Deployment:

```shell
kubectl apply -f zookeeper-deploy.yaml
```

Finally create the ZooKeeper Service:

```shell
kubectl apply -f zookeeper-svc.yaml
```
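Before moving on to Kafka, it is worth confirming that everything came up (a cluster-dependent sketch; the resource names match the manifests above):

```shell
# The PVC should be Bound, the pod Running, and the Service should have an endpoint
kubectl -n sit get pvc zookeeper-pvc
kubectl -n sit get pods -l app=zookeeper-kultz
kubectl -n sit get endpoints zookeeper

# Ask ZooKeeper itself; zkServer.sh is on the PATH in the Bitnami image (assumption)
kubectl -n sit exec deploy/zookeeper-kultz -- zkServer.sh status
```

`zkServer.sh status` should report `Mode: standalone` for this single-node setup.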

2. Deploying the Kafka service:

Note: the image used here is dockerhub.jiang.com/jiang-public/kafka:3.2.1.

1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/kafka:3.2.1

2. The Deployment controller YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-jbhpb
  namespace: sit
  labels:
    app: kafka-jbhpb
    name: kafka
    version: v3.2.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-jbhpb
      name: kafka
  template:
    metadata:
      labels:
        app: kafka-jbhpb
        name: kafka
        version: v3.2.1
    spec:
      volumes:
        - name: kafka-pvc
          persistentVolumeClaim:
            claimName: kafka-pvc
      containers:
        - name: kafka
          image: 'dockerhub.jiang.com/jiang-public/kafka:3.2.1'
          ports:
            - containerPort: 9092
              protocol: TCP
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper.sit:2181'
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: kafka-pvc
              mountPath: /bitnami/kafka/data/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```

Note: two points deserve attention here:
1. The env settings:
KAFKA_ZOOKEEPER_CONNECT='zookeeper.sit:2181' (the ZooKeeper Service name plus its namespace)
ALLOW_PLAINTEXT_LISTENER='yes'

2. The pod-level securityContext must set runAsUser: 0 and fsGroup: 0, or mounting the volume fails with:
mkdir: cannot create directory '/bitnami/kafka/data': Permission denied

and the Kafka service never starts.

The PVC is mounted at /bitnami/kafka/data, which is the data directory configured in server.properties.
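These env vars feed into the server.properties generated inside the container; the relevant keys look roughly like this (an illustrative excerpt under the assumptions above, written out by the image at startup):

```properties
# Illustrative excerpt of server.properties inside the Bitnami Kafka container
log.dirs=/bitnami/kafka/data
zookeeper.connect=zookeeper.sit:2181
listeners=PLAINTEXT://:9092
```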

3. The PVC YAML (again using the core v1 apiVersion):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hpe-san
  volumeMode: Filesystem
```

4. The Kafka Service YAML:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: sit
  labels:
    name: kafka
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    name: kafka
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
```

5. Starting the Kafka service:

Create the PVC first:

```shell
kubectl apply -f kafka-pvc.yaml
```

Then create the Deployment:

```shell
kubectl apply -f kafka-deploy.yaml
```

Finally create the Kafka Service:

```shell
kubectl apply -f kafka-svc.yaml
```
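A quick sanity check after applying the three objects (a cluster-dependent sketch; names match the manifests above):

```shell
kubectl -n sit get pvc kafka-pvc
kubectl -n sit get pods -l app=kafka-jbhpb
# The broker logs a "started (kafka.server.KafkaServer)" line once it is up
kubectl -n sit logs deploy/kafka-jbhpb --tail=20
```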

3. Testing Kafka:

Run the following inside the Kafka container.

Create a topic:

```shell
root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1
Created topic my-topic.
```

List topics:

```shell
root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --list
my-topic
```

If the topic shows up in the list, the deployment is working.
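Beyond listing topics, a produce/consume round trip exercises the broker end to end (run from the same bin directory inside the container; these console scripts ship with Kafka 3.2.1):

```shell
# Send one message, then read it back from the beginning of the topic
echo "hello kafka" | ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic \
  --from-beginning --max-messages 1
```

The consumer should print `hello kafka` and then exit after the one message.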


Reposted from: https://blog.csdn.net/jiang0615csdn/article/details/141953534
Copyright belongs to the original author, jiang0615csdn. In case of infringement, please contact us for removal.
