

Deploying K8S on the Kylin Operating System

Deploying a K8S cluster on the domestically developed Galaxy Kylin (Kylin) operating system

1. K8S Cluster Node Preparation

1.1 Host Operating System

| No. | OS and Version | Notes |
| --- | --- | --- |
| 1 | Kylin-Server-10-SP2-x86 | |

1.2 Host Hardware Configuration

| CPU | Memory | Disk | Role | Hostname |
| --- | --- | --- | --- | --- |
| 4C | 8G | 100GB | master | slave1 |
| 4C | 8G | 100GB | worker | slave2 |

1.3 Host Configuration

1.3.1 Hostname Configuration

This deployment uses two hosts for the kubernetes cluster: one serves as the master node, named slave1, and the other as the worker node, named slave2.

slave1 node:

~~~shell
# hostnamectl set-hostname slave1
~~~

slave2 node:

~~~shell
# hostnamectl set-hostname slave2
~~~
1.3.2 Host IP Address Configuration

The slave1 node IP address is 192.168.71.102/24:

~~~shell
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.71.102"
PREFIX="24"
GATEWAY="192.168.71.2"
DNS1="8.8.8.8"
~~~
The slave2 node IP address is 192.168.71.103/24:

~~~shell
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.71.103"
PREFIX="24"
GATEWAY="192.168.71.2"
DNS1="8.8.8.8"
~~~
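After saving the file, the interface needs to be restarted (for example with `nmcli connection up ens33`, or a reboot) for the address to take effect. As an illustration, the fields required for static addressing can also be checked mechanically; the sketch below is a hypothetical helper run against a scratch copy, not the real file under /etc/sysconfig/network-scripts/:

```shell
#!/bin/sh
# Sanity-check that an ifcfg file defines every field needed for
# static addressing (scratch copy used here for illustration).
tmp=$(mktemp)
printf 'BOOTPROTO="none"\nIPADDR="192.168.71.103"\nPREFIX="24"\nGATEWAY="192.168.71.2"\nDNS1="8.8.8.8"\n' > "$tmp"
for key in BOOTPROTO IPADDR PREFIX GATEWAY DNS1; do
  grep -q "^$key=" "$tmp" && echo "$key: present" || echo "$key: MISSING"
done
rm -f "$tmp"
```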

1.3.3 Hostname and IP Address Resolution

All cluster hosts require this configuration.

~~~shell
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.71.102 slave1
192.168.71.103 slave2
~~~
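Resolution can be confirmed on a live host with `getent hosts slave1`. As a further illustration, the mappings can be read back from the file itself; the sketch below runs against a scratch copy rather than the real /etc/hosts:

```shell
#!/bin/sh
# Print the hostname -> IP mappings for the slave nodes from a hosts
# file (scratch copy; the real file is /etc/hosts).
tmp=$(mktemp)
printf '192.168.71.102 slave1\n192.168.71.103 slave2\n' > "$tmp"
awk '$2 ~ /^slave/ { print $2, "->", $1 }' "$tmp"
rm -f "$tmp"
```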

1.3.4 Firewall Configuration

This must be done on all hosts.

Disable and stop the existing firewalld service:

~~~shell
# systemctl disable firewalld
# systemctl stop firewalld
# firewall-cmd --state
not running
~~~

1.3.5 SELinux Configuration

This must be done on all hosts. The SELinux change takes effect only after the operating system is rebooted.

~~~shell
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
~~~
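Until the reboot, enforcement can also be switched off for the running system with `setenforce 0`. The substitution itself can be rehearsed on a scratch copy before touching the real /etc/selinux/config; a minimal sketch:

```shell
#!/bin/sh
# Dry-run the SELinux edit on a scratch copy of the config file.
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' "$tmp"
grep '^SELINUX=' "$tmp"    # prints SELINUX=disabled
rm -f "$tmp"
```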

1.3.6 Time Synchronization

This must be done on all hosts. On a minimal installation, the ntpdate package needs to be installed first.

~~~shell
# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
~~~

1.3.7 Kernel Forwarding and Bridge Filtering

This must be done on all hosts.

Add the bridge-filtering and kernel-forwarding configuration file:

~~~shell
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
~~~

Load the br_netfilter module:

~~~shell
# modprobe br_netfilter
~~~

Check that it is loaded:

~~~shell
# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
~~~
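Writing the file alone does not change the running kernel; the values are normally applied with `sysctl --system` (or at the next boot). As a small illustration, the fragment can be checked for well-formed `key = value` lines before loading; this is a hypothetical helper run against a scratch copy, not the real /etc/sysctl.d/k8s.conf:

```shell
#!/bin/sh
# Verify that every line of the k8s.conf sysctl fragment has the
# form "key = value" (scratch copy used for illustration).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
awk -F' = ' 'NF != 2 { bad = 1 } END { print (bad ? "MALFORMED" : "OK") }' "$tmp"
rm -f "$tmp"
```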

1.3.8 Install ipset and ipvsadm

This must be done on all hosts.

Install ipset and ipvsadm:

~~~shell
# yum -y install ipset ipvsadm
~~~

Configure how the ipvs modules are loaded by adding the modules that need to be loaded:

~~~shell
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
~~~

Make the script executable, run it, and check that the modules are loaded:

~~~shell
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
~~~
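The script simply runs one modprobe per module, so the set of modules it will try to load can be read back from the file itself before running it; a small sketch against a scratch copy:

```shell
#!/bin/sh
# List the kernel modules that ipvs.modules will load
# (scratch copy; the real file is /etc/sysconfig/modules/ipvs.modules).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
awk '/^modprobe/ { print $NF }' "$tmp"    # one module name per line
rm -f "$tmp"
```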

1.3.9 Disable the SWAP Partition

The permanent change requires a reboot; without rebooting, swap can be disabled temporarily with `swapoff -a`.

Temporary:

~~~shell
# swapoff -a
~~~

Permanent (requires a reboot): comment out the swap entry in /etc/fstab by adding a `#` at the start of the line:

~~~shell
# cat /etc/fstab
......
# /dev/mapper/centos-swap swap swap defaults 0 0
~~~
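The manual fstab edit can also be expressed as a sed command; a sketch against a scratch copy of fstab (the device names are illustrative):

```shell
#!/bin/sh
# Comment out the swap entry in a scratch copy of /etc/fstab,
# mirroring the manual edit described above.
tmp=$(mktemp)
printf '/dev/mapper/centos-root / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$tmp"
sed -i '/ swap /s/^/#/' "$tmp"
grep swap "$tmp"    # the swap line now starts with '#'
rm -f "$tmp"
```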

2. Docker Preparation

2.1 Docker Installation Package

Docker is installed from binary packages.

2.2 Docker Installation

Download link: https://pan.baidu.com/s/1Tm4sdpM2eInUKhrf9aZYrQ?pwd=v63r
Extraction code: v63r

~~~shell
# tar -zxf docker-19.03.10.tgz
# chmod +x docker/*
# cp docker/* /usr/bin/
~~~

2.3 Start the Docker Service

~~~shell
# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
~~~

~~~shell
# systemctl enable --now docker
~~~

2.4 Change the cgroup Driver

/etc/docker/daemon.json does not exist by default and must be created.

Add the following content to /etc/docker/daemon.json:

~~~shell
# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker
~~~
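A syntax error in the hand-written daemon.json will prevent dockerd from starting, so it is worth validating the JSON before the restart; a sketch using python3's stdlib parser purely as a checker:

```shell
#!/bin/sh
# Validate daemon.json syntax before restarting docker
# (python3's json.tool is used here only as a JSON checker).
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | python3 -m json.tool > /dev/null \
  && echo "daemon.json: valid JSON" \
  || echo "daemon.json: INVALID"
```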

3. Kubernetes 1.23.6 Cluster Deployment

3.2 Kubernetes YUM Repository Preparation

3.2.1 Google YUM Repository

~~~shell
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
~~~

3.2.2 Alibaba Cloud YUM Repository

~~~shell
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
~~~

3.3 Cluster Software Installation

Install on all nodes.

Default (latest version) installation:

~~~shell
# yum -y install kubeadm kubelet kubectl
~~~

List the available versions:

~~~shell
# yum list kubeadm.x86_64 --showduplicates | sort -r
# yum list kubelet.x86_64 --showduplicates | sort -r
# yum list kubectl.x86_64 --showduplicates | sort -r
~~~

Install the specified version:

~~~shell
# yum -y install kubeadm-1.23.6-0 kubelet-1.23.6-0 kubectl-1.23.6-0
~~~

3.4 Configure kubelet

To keep the cgroup driver used by kubelet consistent with the one used by docker, it is recommended to modify the following file:

~~~shell
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
~~~

Set kubelet to start on boot. Since no configuration file has been generated yet, it does not need to be started now; it starts automatically after cluster initialization:

~~~shell
# systemctl enable kubelet
~~~

3.5 Cluster Image Preparation

A VPN can be used to download the images, or they can be pulled from a domestic mirror:

~~~shell
# kubeadm config images list --kubernetes-version=v1.23.6
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.6
~~~
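Each image name printed by `kubeadm config images list` can be rewritten to the Aliyun mirror and pulled from there. A minimal sketch of that rewrite (the two image names below are illustrative; on a real host the full list from kubeadm would be fed in):

```shell
#!/bin/sh
# Rewrite upstream image references to the Aliyun mirror and emit the
# corresponding docker pull commands (echoed rather than executed).
mirror="registry.cn-hangzhou.aliyuncs.com/google_containers"
for img in k8s.gcr.io/kube-apiserver:v1.23.6 k8s.gcr.io/kube-proxy:v1.23.6; do
  name=${img##*/}                  # strip the registry and path prefix
  echo "docker pull $mirror/$name"
done
```

After pulling, the images can be retagged back to their upstream names with `docker tag` so that kubeadm finds them locally.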

3.6 Cluster Initialization

Initialize the K8S 1.23.6 cluster:

~~~shell
[root@k8s-master01 ~]# kubeadm init --kubernetes-version=v1.23.6 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.71.102
~~~
Output of the initialization process:

~~~shell
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.71.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.006785 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8x4o2u.hslo8xzwwlrncr8s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.71.102:6443 --token 8x4o2u.hslo8xzwwlrncr8s \
        --discovery-token-ca-cert-hash sha256:7323a8b0658fc33d89e627f078f6eb16ac94394f9a91b3335dd3ce73a3f313a0
~~~

3.7 Preparing the kubectl Admin Configuration File

~~~shell
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# ls /root/.kube/
config
~~~

3.8 Cluster Network Preparation

Download link: https://pan.baidu.com/s/14r7g_4E0kKNygZnWKd5U2Q?pwd=hslo
Extraction code: hslo

The linked archive contains the pod network plugin manifest; apply it on the master node with `kubectl apply -f` before joining worker nodes.

3.9 Adding Cluster Worker Nodes

Join method for a K8S 1.23 cluster:

~~~shell
[root@k8s-worker0X ~]# kubeadm join 192.168.71.102:6443 --token 8x4o2u.hslo8xzwwlrncr8s \
        --discovery-token-ca-cert-hash sha256:7323a8b0658fc33d89e627f078f6eb16ac94394f9a91b3335dd3ce73a3f313a0
~~~

If the bootstrap token has expired (the default lifetime is 24 hours), a fresh join command can be printed on the master with `kubeadm token create --print-join-command`.

Reposted from: https://blog.csdn.net/weixin_46544841/article/details/140533212
Copyright belongs to the original author, 努力提升的羊. In case of infringement, please contact us for removal.
