Deploying a single-node Kafka + ZooKeeper setup on a Kubernetes cluster

Background:

Note: this single-node ZooKeeper + Kafka deployment on a Kubernetes cluster is built from three Kubernetes objects: a Deployment, a Service, and a PersistentVolumeClaim (PVC) for storage.

1. Deploy the ZooKeeper service:

Note: the image used here is dockerhub.jiang.com/jiang-public/zookeeper:3.5.9

  1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/zookeeper:3.5.9

  2. Deployment controller YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-kultz
  namespace: sit
  labels:
    app: zookeeper-kultz
    name: zookeeper
    version: v3.5.9
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-kultz
      name: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper-kultz
        name: zookeeper
        version: v3.5.9
    spec:
      volumes:
        - name: zookeeper-pvc
          persistentVolumeClaim:
            claimName: zookeeper-pvc
      containers:
        - name: zookeeper
          image: 'dockerhub.jiang.com/jiang-public/zookeeper:3.5.9'
          ports:
            - containerPort: 2181
              protocol: TCP
          env:
            - name: ALLOW_ANONYMOUS_LOGIN
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: zookeeper-pvc
              mountPath: /bitnami/zookeeper/data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: there are two things to watch out for here:
1. The env setting ALLOW_ANONYMOUS_LOGIN='yes', which the Bitnami image requires in order to accept unauthenticated client connections.
2. The pod-level securityContext must set runAsUser: 0 and fsGroup: 0; otherwise the container fails with:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied

and the ZooKeeper service will not start.

The PVC is mounted at /bitnami/zookeeper/data, which is the data directory configured in zoo.cfg.
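The mount path above corresponds to ZooKeeper's dataDir setting. In the Bitnami image, zoo.cfg contains lines along these lines (the file location /opt/bitnami/zookeeper/conf/zoo.cfg follows the Bitnami default layout):

```properties
# /opt/bitnami/zookeeper/conf/zoo.cfg (Bitnami default layout)
dataDir=/bitnami/zookeeper/data
clientPort=2181
```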

  3. PVC storage YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

Note: the volume is provisioned dynamically through a StorageClass, so the underlying PV setup is not covered here.
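For reference, a StorageClass like hpe-san is typically defined along the following lines. This sketch is illustrative only: the provisioner name below assumes the HPE CSI driver, and the actual hpe-san StorageClass already exists in the cluster.

```yaml
# Illustrative sketch only -- the real hpe-san StorageClass is pre-provisioned.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-san
provisioner: csi.hpe.com   # assumption: HPE CSI driver; substitute your own provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```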

4. ZooKeeper Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: sit
  labels:
    name: zookeeper
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    name: zookeeper
  type: ClusterIP
  sessionAffinity: None

5. Launch the ZooKeeper service:

First create the PVC:

# kubectl apply -f zookeeper-pvc.yaml

Then create the Deployment:

# kubectl apply -f zookeeper-deploy.yaml

Finally create the ZooKeeper Service:

# kubectl apply -f zookeeper-svc.yaml
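After applying the three manifests, a quick sanity check looks like the following. The commands assume the resource names used above, and that zkServer.sh is on the PATH inside the Bitnami container (its default layout):

```
# Check that the PVC is bound and the pod is running
kubectl -n sit get pvc zookeeper-pvc
kubectl -n sit get pods -l name=zookeeper

# Ask ZooKeeper for its status; it should report standalone mode
kubectl -n sit exec deploy/zookeeper-kultz -- zkServer.sh status
```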

2. Deploy the Kafka service:

Note: the image used here is dockerhub.jiang.com/jiang-public/kafka:3.2.1

  1. Image download address:

registry.cn-hangzhou.aliyuncs.com/images-speed-up/kafka:3.2.1

  2. Deployment controller YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-jbhpb
  namespace: sit
  labels:
    app: kafka-jbhpb
    name: kafka
    version: v3.2.1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-jbhpb
      name: kafka
  template:
    metadata:
      labels:
        app: kafka-jbhpb
        name: kafka
        version: v3.2.1
    spec:
      volumes:
        - name: kafka-pvc
          persistentVolumeClaim:
            claimName: kafka-pvc
      containers:
        - name: kafka
          image: 'dockerhub.jiang.com/jiang-public/kafka:3.2.1'
          ports:
            - containerPort: 9092
              protocol: TCP
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper.sit:2181'
            - name: ALLOW_PLAINTEXT_LISTENER
              value: 'yes'
          resources:
            limits:
              cpu: '1'
              memory: 2Gi
            requests:
              cpu: 800m
              memory: 2Gi
          volumeMounts:
            - name: kafka-pvc
              mountPath: /bitnami/kafka/data/
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: default
      serviceAccount: default
      securityContext:
        runAsUser: 0
        fsGroup: 0
      imagePullSecrets:
        - name: user-1-registrysecret
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: Recreate
  minReadySeconds: 10
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

Note: there are two things to watch out for here:
1. The env settings:
KAFKA_ZOOKEEPER_CONNECT='zookeeper.sit:2181'
ALLOW_PLAINTEXT_LISTENER=yes

Here zookeeper.sit is the in-cluster DNS name of the ZooKeeper Service created earlier (the form is <service-name>.<namespace>), and ALLOW_PLAINTEXT_LISTENER=yes lets the Bitnami image accept unencrypted client connections.

2. The pod-level securityContext must set runAsUser: 0 and fsGroup: 0; otherwise mounting the volume fails with:
mkdir: cannot create directory '/bitnami/kafka/data': Permission denied

and the Kafka service will not start.

The PVC is mounted at /bitnami/kafka/data, which is the log directory configured in server.properties.
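The mount path above corresponds to Kafka's log.dirs setting. In the Bitnami image, server.properties contains lines along these lines (the file location /opt/bitnami/kafka/config/server.properties follows the Bitnami default layout):

```properties
# /opt/bitnami/kafka/config/server.properties (Bitnami default layout)
log.dirs=/bitnami/kafka/data
zookeeper.connect=zookeeper.sit:2181
```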

  3. PVC storage YAML:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-pvc
  namespace: sit
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hpe-san
  volumeMode: Filesystem

4. Kafka Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: sit
  labels:
    name: kafka
    system/appName: nginxdemo0516
spec:
  ports:
    - name: tcp-port-0
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    name: kafka
  type: ClusterIP
  sessionAffinity: None

5. Launch the Kafka service:

First create the PVC:

# kubectl apply -f kafka-pvc.yaml

Then create the Deployment:

# kubectl apply -f kafka-deploy.yaml

Finally create the Kafka Service:

# kubectl apply -f kafka-svc.yaml

3. Test Kafka:

Run the following inside the kafka container:

Create a topic:

root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1
Created topic my-topic.

List topics:

root@kafka-jbhpb-78bb6df4dc-xhmp6:/opt/bitnami/kafka/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --list
my-topic

This confirms that Kafka is working.
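As a further smoke test, you can produce and consume a message on the new topic with the console tools shipped in the same bin directory:

```
# Produce one message to my-topic
echo "hello" | ./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# Consume it back from the beginning of the topic
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic \
  --from-beginning --max-messages 1
```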


Reprinted from: https://blog.csdn.net/jiang0615csdn/article/details/141953534
Copyright belongs to the original author, jiang0615csdn. In case of infringement, please contact us for removal.
