

Installing k8s 1.25 from Scratch [the latest k8s version as of 2022-09-04]


Related links:
VMware: https://www.vmware.com/
containerd: https://containerd.io/
Kubernetes: https://kubernetes.io/
Alibaba Cloud mirrors: https://developer.aliyun.com/mirror/
Calico: https://www.tigera.io/project-calico/
Containerd [a lightweight container management tool]: https://blog.csdn.net/qq_41822345/article/details/126677121
Installing a highly available k8s cluster: https://mp.weixin.qq.com/s/lqasax-2-t4QpzgcCOqF-A

I. Preparation

1. Install VMware and the CentOS virtual machines

Assume VMware is already installed, with three CentOS virtual machines running in it.
VM installation walkthrough: https://blog.csdn.net/qq_41822345/article/details/105567852

2. Initialize the CentOS environment on the VMs

Next, initialize the system environment.
Linux environment initialization [the docker installation part can be skipped]: https://blog.csdn.net/qq_41822345/article/details/118096213

3. Install the container runtime containerd

Install containerd as the container runtime [docker is dropped]; every host needs a container runtime installed.
Containerd安装:https://blog.csdn.net/qq_41822345/article/details/126677121
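One containerd setting frequently trips up kubeadm on CentOS: kubeadm 1.22+ defaults kubelet to the systemd cgroup driver, so containerd's runc runtime should use systemd too. A minimal check-and-fix sketch, assuming containerd was set up as in the linked guide with a default-generated /etc/containerd/config.toml:

# Check whether the runc runtime uses the systemd cgroup driver
$ grep SystemdCgroup /etc/containerd/config.toml
# If it prints "SystemdCgroup = false", flip it and restart containerd
$ sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ systemctl restart containerd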

II. Install kubelet, kubeadm and kubectl

kubelet and kubeadm must be installed on every host.

Kubernetes: https://kubernetes.io/
Alibaba Cloud mirrors: https://developer.aliyun.com/mirror/

Installing from the official Kubernetes repo usually fails [the URL cannot be reached]:

failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try.
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to packages.cloud.google.com:443; Connection refused"

1. Install from the Alibaba Cloud mirror

# Latest Alibaba Cloud mirror [2022-09-01]
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ setenforce 0
# Note: because upstream does not expose a sync mechanism, the repo's GPG index check may fail; skip it with --nogpgcheck
$ yum install -y --nogpgcheck kubelet kubeadm kubectl
$ systemctl enable kubelet && systemctl start kubelet

# Verify the installation
$ kubelet --version
Kubernetes v1.25.0
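The repo installs whatever version is newest at the time. To pin a specific version (for example, to match this article exactly), yum supports versioned package names; a sketch, assuming the mirror carries the 1.25.0 packages:

# List the versions available in the repo
$ yum list --showduplicates kubeadm
# Install a pinned version on every host
$ yum install -y --nogpgcheck kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0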

2. Create the cluster with kubeadm

First apply some required configuration [on all machines].

# Set a hostname for each machine
$ hostnamectl set-hostname k8s201
$ hostnamectl set-hostname k8s202
$ hostnamectl set-hostname k8s203

$ hostname
k8s201
$ hostname
k8s202
$ hostname
k8s203

# Add host entries so the machines can reach each other by name
$ cat >> /etc/hosts <<EOF
192.168.168.201 k8s201
192.168.168.202 k8s202
192.168.168.203 k8s203 
EOF

# Disable the firewall and SELinux
$ systemctl stop firewalld
$ systemctl disable firewalld
$ setenforce 0   # temporary
$ vim /etc/sysconfig/selinux   # permanent: change to SELINUX=disabled

# Disable swap on all nodes
$ swapoff -a   # temporary
$ vim /etc/fstab   # permanent: comment out the swap entry, e.g.
# /dev/mapper/cl-swap swap swap defaults 0 0

# Enable IP forwarding and make bridged traffic visible to iptables; create the file first
$ vim /etc/sysctl.d/k8s.conf
# Contents:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

# Apply the file (if the bridge keys report "No such file or directory", load br_netfilter first; see below)
$ sysctl -p /etc/sysctl.d/k8s.conf

# Prerequisites for running kube-proxy in IPVS mode [very important]
# Install the ipset and ipvsadm tools
$ yum install ipset ipvsadm -y
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable and run it
$ chmod 755 /etc/sysconfig/modules/ipvs.modules
$ bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
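Note: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so the modprobe above fails there. A variant for newer kernels (check with uname -r):

# Kernels >= 4.19
$ modprobe -- nf_conntrack
$ lsmod | grep -e ip_vs -e nf_conntrack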

# Load the br_netfilter module [it is not loaded by default, so load it manually]
$ modprobe br_netfilter
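modprobe only lasts until the next reboot. To make the module load persistently, systemd's modules-load mechanism can be used; a minimal sketch (the file name k8s.conf is arbitrary):

$ cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
# With the module loaded, re-apply the sysctl file so the bridge keys take effect
$ sysctl -p /etc/sysctl.d/k8s.conf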

3. Initialize the master node

Running kubeadm init first performs a series of preflight checks to make sure the machine is ready to run Kubernetes; these checks print warnings and exit on errors. kubeadm init then downloads and installs the cluster control-plane components, which can take several minutes.

$ kubeadm init --help

$ kubeadm init --kubernetes-version=1.25.0 \
--apiserver-advertise-address=192.168.168.201 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# Notes:
# - kubernetes-version must be the value reported by kubelet --version
# - apiserver-advertise-address must be the master machine's IP
# If anything fails, troubleshoot with the commands below [the usual cause is that the images cannot be pulled from abroad; pull domestic mirror images instead]
$ systemctl status kubelet -l
$ systemctl status containerd -l

# List the images kubeadm needs
$ kubeadm config images list
# Check which of the required images are present, and which are missing
$ ctr -n k8s.io images ls | grep <image-name>
# For example, pause:3.6 may be missing
### k8s.gcr.io is only reachable via an external connection, so the base pause container often cannot be pulled.
### With docker, pull it from a proxy registry and fix the name with docker tag
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
# But this cluster's CRI is containerd, so export the re-tagged image from docker and import it with ctr
$ docker save k8s.gcr.io/pause -o pause.tar
$ ctr -n k8s.io images import pause.tar
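If docker is not installed on the node (this guide deliberately dropped it), the same workaround can be done with ctr alone; a sketch, assuming the aliyuncs mirror carries the image:

$ ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
$ ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6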

# After fixing the error, the cluster must be torn down before kubeadm init can be run again
$ kubeadm reset
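To avoid pulling images during init at all, kubeadm can pre-pull everything from the Alibaba Cloud repository beforehand; this is a standard kubeadm subcommand:

$ kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.25.0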

# On success:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# The temporary token generated on this line [expires after 24h] is what joins the worker nodes
kubeadm join 192.168.168.201:6443 --token pq5otc.ker47p9nails0xsf \
        --discovery-token-ca-cert-hash sha256:7a256694edafdbd21b52ca729b0b7ebc142c7fe8435657a6115b95019d2a3178
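Since the token above expires after 24 hours, a fresh join command can be generated on the master at any time:

$ kubeadm token create --print-join-command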

To configure kubectl, run the following commands:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify
$ kubectl get pods --all-namespaces

4. Install a Pod network add-on

Only one Pod network can be installed per cluster; Calico is used here.

Related link: https://www.tigera.io/project-calico

$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
# Change the pod CIDR in calico.yaml to the subnet passed to kubeadm init via --pod-network-cidr
$ sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml
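# Optional sanity check [assumption: this manifest version sets CALICO_IPV4POOL_CIDR explicitly;
# in some versions the variable is commented out and the pod CIDR is auto-detected at runtime]
$ grep -n '10.244.0.0' calico.yaml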
$ kubectl apply -f calico.yaml
# Verify
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58dbc876ff-bhmdv   1/1     Running   0          2m39s
kube-system   calico-node-9qppz                          1/1     Running   0          2m39s
kube-system   coredns-c676cc86f-f4vwx                    1/1     Running   0          54m
kube-system   coredns-c676cc86f-htq2v                    1/1     Running   0          54m
kube-system   etcd-k8s201                                1/1     Running   1          54m
kube-system   kube-apiserver-k8s201                      1/1     Running   0          54m
kube-system   kube-controller-manager-k8s201             1/1     Running   1          54m
kube-system   kube-proxy-pn4k7                           1/1     Running   0          54m
kube-system   kube-scheduler-k8s201                      1/1     Running   1          54m

5. Join the worker nodes

Confirm that CoreDNS is working by checking that its Pods show Running in the output of kubectl get pods --all-namespaces. Only once the CoreDNS Pods are up and running should the worker nodes be joined.

$ kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
k8s201   Ready    control-plane   54m   v1.25.0

# Run the join command that was printed when the master initialized successfully
$ kubeadm join 192.168.168.201:6443 --token pq5otc.ker47p9nails0xsf \
        --discovery-token-ca-cert-hash sha256:7a256694edafdbd21b52ca729b0b7ebc142c7fe8435657a6115b95019d2a3178

# Verify
$ kubectl get node
NAME     STATUS     ROLES           AGE   VERSION
k8s201   Ready      control-plane   57m   v1.25.0
k8s202   NotReady   <none>          70s   v1.25.0

# Join one more worker node
$ kubeadm join 192.168.168.201:6443 --token pq5otc.ker47p9nails0xsf \
        --discovery-token-ca-cert-hash sha256:7a256694edafdbd21b52ca729b0b7ebc142c7fe8435657a6115b95019d2a3178

# Verify
$ kubectl get po --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58dbc876ff-bhmdv   1/1     Running   0          17m
kube-system   calico-node-9qppz                          1/1     Running   0          17m
kube-system   calico-node-j28p5                          1/1     Running   0          11m
kube-system   calico-node-jbwn7                          1/1     Running   0          12m
kube-system   coredns-c676cc86f-f4vwx                    1/1     Running   0          69m
kube-system   coredns-c676cc86f-htq2v                    1/1     Running   0          69m
kube-system   etcd-k8s201                                1/1     Running   1          69m
kube-system   kube-apiserver-k8s201                      1/1     Running   0          69m
kube-system   kube-controller-manager-k8s201             1/1     Running   1          69m
kube-system   kube-proxy-8czn5                           1/1     Running   0          12m
kube-system   kube-proxy-pn4k7                           1/1     Running   0          69m
kube-system   kube-proxy-vcgwv                           1/1     Running   0          11m
kube-system   kube-scheduler-k8s201                      1/1     Running   1          69m
$ kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
k8s201   Ready    control-plane   69m   v1.25.0
k8s202   Ready    <none>          12m   v1.25.0
k8s203   Ready    <none>          11m   v1.25.0
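The worker nodes show <none> under ROLES because kubeadm sets no role label on them. If a role name is wanted in that column, it is just a node label; a purely cosmetic sketch:

$ kubectl label node k8s202 node-role.kubernetes.io/worker=
$ kubectl label node k8s203 node-role.kubernetes.io/worker=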

Done.

For a highly available k8s cluster, see: https://mp.weixin.qq.com/s/lqasax-2-t4QpzgcCOqF-A

6. Verify the installation

Use k8s to launch a Deployment resource.

$ vim deploy-nginx.yaml 
$ cat deploy-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3   # tell the Deployment to run 3 Pods matching this template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
$ kubectl apply -f deploy-nginx.yaml 
deployment.apps/nginx-deployment created
$ kubectl get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fb96c846b-6xxw8   1/1     Running   0          6s
nginx-deployment-7fb96c846b-r2s9z   1/1     Running   0          6s
nginx-deployment-7fb96c846b-tsmhh   1/1     Running   0          6s
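To also confirm that service networking works end to end, the Deployment can be exposed and curled from a node. A minimal sketch using a NodePort Service; the port is assigned dynamically, so <nodeport> below is a placeholder to be read from the kubectl get svc output:

$ kubectl expose deployment nginx-deployment --port=80 --type=NodePort
$ kubectl get svc nginx-deployment   # note the assigned port in the PORT(S) column, e.g. 80:3xxxx/TCP
$ curl http://192.168.168.201:<nodeport>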

Reposted from: https://blog.csdn.net/qq_41822345/article/details/126679925. Copyright belongs to the original author, 进击的程序猿~.
