

Setting up a k8s Cluster on Ubuntu 22.04: This One Article Is All You Need!

k8s Installation

Preface

This guide deploys kubernetes v1.25.3. Since v1.24, Dockershim has been removed from the Kubernetes project, so our **container runtime** (the software responsible for running containers) is no longer Docker. This article uses containerd as the **container runtime**.
All software packages and configuration files used in this article are in the network drive linked at the end.

1. Getting Started

| OS           | CPU | RAM | IP              | NIC | Hostname   |
| ------------ | --- | --- | --------------- | --- | ---------- |
| Ubuntu 22.04 | 2   | 4G  | 192.168.247.100 | NAT | k8s-master |
| Ubuntu 22.04 | 2   | 4G  | 192.168.247.101 | NAT | k8s-slave1 |
| Ubuntu 22.04 | 2   | 4G  | 192.168.247.102 | NAT | k8s-slave2 |

Minimum requirements: at least 2 CPU cores and at least 2 GB of RAM.

2. Environment Configuration (all nodes)

Set the hostnames

# master node
hostnamectl set-hostname k8s-master
# slave1 node
hostnamectl set-hostname k8s-slave1
# slave2 node
hostnamectl set-hostname k8s-slave2

Configure hosts mappings

cat >> /etc/hosts << EOF
192.168.247.100 k8s-master
192.168.247.101 k8s-slave1
192.168.247.102 k8s-slave2
EOF
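
As a quick optional check, each hostname should now resolve and, with all three machines online, answer pings:

# Verify the mappings: each name should resolve to the right IP
for h in k8s-master k8s-slave1 k8s-slave2; do
  ping -c 1 "$h"
done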

Disable the firewall

# Ubuntu 22.04 ships ufw rather than firewalld (firewalld is the RHEL-family equivalent)
systemctl stop ufw
systemctl disable ufw

Disable SELinux (only if present)

# Ubuntu uses AppArmor rather than SELinux, so on a stock Ubuntu install this step
# is a no-op; it only applies if SELinux is installed (as on RHEL-family systems)
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap (kubelet requires swap to be disabled to work correctly)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
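
To confirm swap is really off and will stay off after a reboot, swapon should print nothing and free should show 0B of swap:

# Verify that swap is disabled
swapon --show
free -h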

Forward IPv4 and let iptables see bridged traffic

# Forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep br_netfilter # verify the br_netfilter module is loaded
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
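
Optionally, confirm that the modules are loaded and the three sysctl values took effect (each should print 1):

# Verify kernel modules and sysctl values
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward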

Configure time synchronization

# Run date to check whether the system time looks wrong
date
# Change the timezone
sudo timedatectl set-timezone Asia/Shanghai
# Install the ntp service
apt install ntp
# Start the service
systemctl start ntp
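
To verify, timedatectl should show the Asia/Shanghai timezone, and ntpq (installed with the ntp package) should list reachable upstream peers:

# Check the timezone and NTP peer status
timedatectl
ntpq -p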

3. Install containerd (all nodes)

3.1 Install containerd

Download the containerd package

First go to https://github.com/, search for containerd, open the project's Releases page, and scroll down to the tar package for the version you want (here 1.6.9).

After downloading, copy the tarball to all three servers.

$ tar Cvzxf /usr/local containerd-1.6.9-linux-amd64.tar.gz

# Run containerd via systemd
$ vi /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
# Reload the configuration and start containerd
systemctl daemon-reload
systemctl enable --now containerd
# Verify
ctr version
# Generate the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
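
Before moving on, it is worth a quick check that the service is healthy and the CRI plugin loaded:

# containerd should be active (running), and the cri plugin should report STATUS ok
systemctl status containerd --no-pager
ctr plugins ls | grep cri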

3.2 Install runc

# runc downloads: https://github.com/opencontainers/runc/releases
# Install
install -m 755 runc.amd64 /usr/local/sbin/runc
# Verify
runc -v

3.3 Install the CNI Plugins

# CNI plugin downloads: https://github.com/containernetworking/plugins/releases
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

3.4 Configure a Registry Mirror

# Reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# Set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p

# Replace https://xxxx.mirror.aliyuncs.com below with your Alibaba Cloud image-registry mirror address (find it in the Alibaba Cloud Container Registry console)
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://xxxx.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF
  
systemctl daemon-reload && systemctl restart containerd
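
To test the mirror, you can pull a small image with ctr; the --hosts-dir flag makes ctr honor the certs.d configuration above (a sketch, assuming containerd 1.6+ and a valid mirror address):

# Optional: pull busybox through the configured mirror
ctr images pull --hosts-dir "/etc/containerd/certs.d" docker.io/library/busybox:latest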

4. The cgroup Driver (all nodes)

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources for Pods and containers, such as setting requests and limits for CPU and memory.
To do so, the kubelet and the container runtime each use a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.

# Change SystemdCgroup = false to SystemdCgroup = true
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml
# Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml

systemctl daemon-reload 
systemctl restart containerd
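
A quick grep confirms that both edits landed:

# Expect SystemdCgroup = true and the registry.aliyuncs.com pause:3.8 sandbox image
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml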

5. Install crictl (all nodes)

Kubernetes manages containers with crictl rather than ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.

# Download: https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-amd64.tar.gz

# Point crictl at the containerd runtime.
tar -vzxf crictl-v1.25.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl restart containerd
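
Typical crictl usage once containerd is back up; the subcommands mirror the familiar docker ones:

crictl info     # runtime status and configuration, as JSON
crictl images   # images visible to the CRI runtime
crictl ps -a    # all CRI containers, running or exited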

6. Deploying the Cluster with kubeadm

6.1 Switch to the Alibaba Cloud k8s Package Mirror (all nodes)

# Without an extra source, apt reports "Unable to locate package XXX"; the official repo is slow from China, so we use the Alibaba Cloud mirror
# Import the repository signing key first, otherwise apt-get update fails with a NO_PUBKEY error
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"  | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

6.2 Install kubeadm, kubelet, and kubectl (all nodes)

sudo apt install -y kubelet=1.25.3-00 kubeadm=1.25.3-00 kubectl=1.25.3-00
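
It is also a good idea to pin these packages so a routine apt upgrade cannot move the cluster to a new version unexpectedly:

# Hold the versions until you deliberately upgrade
sudo apt-mark hold kubelet kubeadm kubectl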

6.3 kubeadm Initialization (master node)

#Check the kubeadm version; here GitVersion is "v1.25.3"
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

# Generate the default configuration file
$ kubeadm config print init-defaults > kubeadm.yaml
$ vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.247.100  # change to the host IP (the master node's IP)
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master   # set to this host's name (left as "master" here, so the node registers as "master")
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # change to the Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: 1.25.3  # match whatever version kubeadm reports
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   ## set the pod network CIDR (must match the flannel Network below)
scheduler: {}

###添加内容:配置kubelet的CGroup为systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

After editing, pull the images and initialize

# Pull the images
$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers  --kubernetes-version=v1.25.3
# Initialize
$ kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
# Run on the master node
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Run on slave1
[root@k8s-slave1 ~]#  kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef         --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
# Run on slave2
[root@k8s-slave2 ~]#  kubeadm join 192.168.247.100:6443 --token abcdef.0123456789abcdef         --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
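
If you add a node later and the bootstrap token has expired (the default TTL is 24h), generate a fresh join command on the master:

# Prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command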

Check that the nodes joined successfully

# Run on the master node
[root@k8s-master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   3m25s   v1.25.4
k8s-slave1   NotReady   <none>          128s    v1.25.4
k8s-slave2   NotReady   <none>          118s    v1.25.4

6.4 Deploy the Pod Network (master node)

After the steps above, the slave nodes have joined successfully, but every node's STATUS is NotReady: a network plugin still needs to be installed. Common choices include flannel and calico; here we use flannel as the example.

# Create the flannel.yaml configuration file
# The upstream manifest lives at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
vim flannel.yaml

Paste in the following content

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Note: if you run kubectl apply -f flannel.yaml at this point, the pulls will fail because the images are hosted outside China.

We therefore need to edit the images in the upstream manifest, replacing the image addresses of install-cni-plugin, install-cni, and kube-flannel. The modified configuration file is shown below.

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

After making the changes, run kubectl apply -f flannel.yaml and wait a moment.

root@k8s-master:~# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-5bj7j            1/1     Running   1 (13m ago)   11h
kube-flannel   kube-flannel-ds-xvfkd            1/1     Running   1 (13m ago)   11h
kube-flannel   kube-flannel-ds-z9tcm            1/1     Running   1 (10m ago)   11h
kube-system    coredns-c676cc86f-72ntm          1/1     Running   1 (10m ago)   11h
kube-system    coredns-c676cc86f-r876l          1/1     Running   1 (10m ago)   11h
kube-system    etcd-master                      1/1     Running   1 (10m ago)   11h
kube-system    kube-apiserver-master            1/1     Running   1 (10m ago)   11h
kube-system    kube-controller-manager-master   1/1     Running   1 (10m ago)   11h
kube-system    kube-proxy-b92z7                 1/1     Running   1 (10m ago)   11h
kube-system    kube-proxy-jjsws                 1/1     Running   1 (13m ago)   11h
kube-system    kube-proxy-pqw6n                 1/1     Running   1 (13m ago)   11h
kube-system    kube-scheduler-master            1/1     Running   1 (10m ago)   11h
root@k8s-master:~# kubectl get node -A
NAME         STATUS   ROLES           AGE   VERSION
k8s-slave1   Ready    <none>          13h   v1.25.3
k8s-slave2   Ready    <none>          13h   v1.25.3
master       Ready    control-plane   13h   v1.25.3
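
As a final smoke test, you can run a throwaway nginx Deployment and check that its pod gets an address from the 10.244.0.0/16 pod network and becomes Ready (the names here are just examples):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx
# Clean up afterwards
kubectl delete deployment,svc nginx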

At this point the k8s cluster is up and running. Along the way I hit quite a few pitfalls, mostly caused by machine environments, image registry addresses, and version mismatches; I hope this article saves you the trouble!
Link: https://pan.baidu.com/s/1Jgp8B1FhAyNew-y_fE8beg?pwd=6iot
Extraction code: 6iot


Reprinted from: https://blog.csdn.net/m0_43445928/article/details/130524917
Copyright belongs to the original author 青山见我应如是丶. If there is any infringement, please contact us for removal.
