

Quick offline k8s setup (with images and rpm packages)

Use kubeadm to quickly set up a k8s cluster.

Version list (the installation below uses these versions as the example; check the official sites for compatibility with other versions):

  • docker 20.10.6
  • k8s v1.21.0
  • calico v3.26.0

Resources (network-drive link): netdisk

1. Prepare the machines

  • Provision three machines that can reach each other over the internal network (an optional /etc/hosts sketch follows this list)
  • Do not use localhost as a hostname, and hostnames must not contain underscores, dots, or uppercase letters (this step can also be done later)
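In an offline environment it can help to map all three hostnames on every node so nothing depends on DNS; a minimal sketch with assumed internal IPs (10.170.11.8 matches the master address used in the kubeadm init step below; the node addresses are hypothetical, substitute your own):

  # run on every node; replace the IPs with your real internal addresses
  echo "10.170.11.8  master" >> /etc/hosts
  echo "10.170.11.9  node1"  >> /etc/hosts
  echo "10.170.11.10 node2"  >> /etc/hosts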

2. Install the prerequisites (run on all nodes)

2.1 Base environment

  # Disable the firewall; on cloud servers, open the required ports in the security-group rules instead
  systemctl stop firewalld
  systemctl disable firewalld

  # Set the hostname (one per machine)
  hostnamectl set-hostname master
  hostnamectl set-hostname node1
  hostnamectl set-hostname node2
  # Check the result
  hostnamectl status

  # Add a hosts entry for the local hostname
  echo "127.0.0.1 $(hostname)" >> /etc/hosts

  # Disable SELinux
  sed -i 's/enforcing/disabled/' /etc/selinux/config
  setenforce 0

  # Disable swap
  swapoff -a
  sed -ri 's/.*swap.*/#&/' /etc/fstab

  # Pass bridged IPv4 traffic to iptables by editing /etc/sysctl.conf
  # If the keys already exist, modify them in place:
  sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
  sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
  sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
  sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf

  # If the keys do not exist yet, append them:
  echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
  echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
  echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
  echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf

  # Apply the settings
  sysctl -p
  # Verify the settings
  sysctl -a | grep call
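Note (not in the original post): the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so if sysctl -p reports unknown keys, load and persist the module first; a minimal sketch for a systemd-based CentOS 7 host:

  # Load the bridge netfilter module now
  modprobe br_netfilter
  # Load it automatically at boot
  echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
  # Confirm the module is present
  lsmod | grep br_netfilter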
2.2 Docker environment
2.2.1 Download docker-20.10.6-ce.tgz (download URL: official site); choose the CentOS 7 x86_64 build.
2.2.2 Upload and extract

Upload docker-20.10.6-ce.tgz to each server and extract it:

  tar -zxvf docker-20.10.6-ce.tgz
  cp docker/* /usr/bin/
2.2.3 Create docker.service

  vi /usr/lib/systemd/system/docker.service

  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target firewalld.service
  Wants=network-online.target

  [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues still
  # exists and systemd currently does not support the cgroup feature set required
  # for containers run by docker
  # -H tcp://0.0.0.0:2375 additionally enables remote connections
  ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
  ExecReload=/bin/kill -s HUP $MAINPID
  # Having non-zero Limit*s causes performance problems due to accounting overhead
  # in the kernel. We recommend using cgroups to do container-local accounting.
  LimitNOFILE=infinity
  LimitNPROC=infinity
  LimitCORE=infinity
  # Uncomment TasksMax if your systemd version supports it.
  # Only systemd 226 and above support this version.
  #TasksMax=infinity
  TimeoutStartSec=0
  # set delegate yes so that systemd does not reset the cgroups of docker containers
  Delegate=yes
  # kill only the docker process, not all processes in the cgroup
  KillMode=process
  # restart the docker process if it exits prematurely
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target
2.2.4 Start Docker and create daemon.json

  systemctl start docker
  systemctl enable docker

  # Configure the registry mirror, storage path, etc.
  vi /etc/docker/daemon.json

  {
    "oom-score-adjust": -1000,
    "graph": "/xxx/docker",
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "3"
    },
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "registry-mirrors": ["xxxx"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"]
  }

  # Notes: "graph" is your own image/container storage path and "registry-mirrors" is your registry
  # mirror list; JSON does not allow inline comments, so keep such notes out of the actual file.

  # Reload and restart docker
  sudo systemctl daemon-reload
  sudo systemctl restart docker
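A quick sanity check after the restart (not in the original post) is to confirm the daemon picked up the intended settings; the grep pattern assumes the English output of docker info:

  # Confirm the storage driver, data root and cgroup driver the daemon is actually using
  docker info | grep -iE 'storage driver|docker root dir|cgroup driver'
  docker version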
3. Install the k8s core components kubectl, kubeadm, kubelet (run on all nodes)

  # On a machine that has internet access, download the offline rpm packages in advance
  # Create a directory to hold them:
  mkdir -p /kubeadm-rpm
  # Download only, without installing:
  yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --downloadonly --downloaddir=/kubeadm-rpm

  # On the target servers, remove any old versions
  yum remove -y kubelet kubeadm kubectl
  # Upload the rpm packages to the servers, then install them with yum
  yum -y install /kubeadm-rpm/*

  # Start kubelet and enable it at boot
  systemctl enable kubelet && systemctl start kubelet
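A simple way to confirm the offline install worked on every node (verification only, not from the original post):

  kubeadm version -o short             # expect v1.21.0
  kubelet --version                    # expect Kubernetes v1.21.0
  kubectl version --client --short
  # kubelet will restart in a loop until kubeadm init/join has run; that is expected at this point
  systemctl status kubelet --no-pager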
4. Initialize the master node (run on master)
4.1 Image preparation (upload to all three servers; a save/load transfer sketch follows this list)

  kube-apiserver:v1.21.0
  kube-proxy:v1.21.0
  kube-controller-manager:v1.21.0
  kube-scheduler:v1.21.0
  coredns:v1.8.0
  etcd:3.4.13-0
  pause:3.4.1
  # Network-plugin images; calico is used here
  calico-cni
  calico-node
  calico-kube-controllers
  calico-pod2daemon-flexvol

  ## Note: the coredns image of k8s 1.21.0 is a special case. When combined with the Alibaba Cloud
  ## registry it must be re-tagged (choose your own image-name prefix/tag):
  docker tag registry.cn-hangzhou.aliyuncs.com/zzl/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/zzl/coredns/coredns:v1.8.0
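The post does not show how the images actually reach the offline servers; a minimal sketch using docker save / docker load (the tarball name and the exact image list are illustrative; adjust them to whatever you pulled and tagged, using the same registry prefix as the kubeadm init step below):

  # On the machine with internet access: pack the prepared images into one tarball
  docker save -o k8s-images.tar \
    registry.cn-hangzhou.aliyuncs.com/zzl/kube-apiserver:v1.21.0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/kube-proxy:v1.21.0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/kube-controller-manager:v1.21.0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/kube-scheduler:v1.21.0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/coredns/coredns:v1.8.0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/etcd:3.4.13-0 \
    registry.cn-hangzhou.aliyuncs.com/zzl/pause:3.4.1

  # Copy the tarball to every node (scp, USB, ...), then load and check it
  docker load -i k8s-images.tar
  docker images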
4.2 Run kubeadm init on the master node

  ######## kubeadm init on the single master; kubeadm join on the other workers ########
  kubeadm init \
    --apiserver-advertise-address=10.170.11.8 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/zzl \
    --kubernetes-version v1.21.0 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16

  # apiserver-advertise-address: the master's (internal) IP
  # image-repository: the registry holding the prepared images; here the Alibaba Cloud repository
  #   registry.cn-hangzhou.aliyuncs.com/zzl is used
  ## Note on pod-network-cidr and service-cidr:
  # each declares a reachable network range; the pod subnet, the service (load-balancing) subnet
  # and the host IP subnet must not overlap.
  # For example, apiserver-advertise-address=10.170.xx with pod-network-cidr=192.170.0.0/16 is not allowed.

  #### Continue by following the printed prompts ####
  ## Step 1 after init completes: copy the kubeconfig
  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  ## Or export the environment variable
  Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  ### Deploy a pod network add-on
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

  ############## Install calico as follows #####################
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

  # (For an offline install, download the yaml beforehand with curl on a machine that has internet access)
  # Download the manifest (latest, or a specific release such as v3.20):
  curl https://docs.projectcalico.org/manifests/calico.yaml -O
  curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
  # Start calico
  kubectl apply -f calico.yaml

  kubectl get pod -A      ## list all pods deployed in the cluster
  kubectl get nodes       ## check the status of every node
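When init or the calico apply fails on an air-gapped host, the cause is usually a missing image; kubeadm can list exactly which images it expects for this repository and version, which you can then compare against what docker has loaded (verification sketch, not from the original post):

  kubeadm config images list \
    --image-repository registry.cn-hangzhou.aliyuncs.com/zzl \
    --kubernetes-version v1.21.0
  docker images | grep -E 'kube-|coredns|etcd|pause|calico'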
5. Initialize the worker nodes

  ## Use the join command printed by kubeadm init on the master
  kubeadm join 172.24.80.222:6443 --token nz9azl.9bl27pyr4exy2wz4 \
    --discovery-token-ca-cert-hash sha256:4bdc81a83b80f6bdd30bb56225f9013006a45ed423f131ac256ffe16bae73a20

  # If the token has expired, create a new one on the master
  kubeadm token create --print-join-command
  kubeadm token create --ttl 0 --print-join-command
  kubeadm join 172.24.80.222:6443 --token y1eyw5.ylg568kvohfdsfco --discovery-token-ca-cert-hash sha256:6c35e4f73f72afd89bf1c8c303ee55677d2cdb1342d67bb23c852aba2efc7c73
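If you need the --discovery-token-ca-cert-hash value without re-running the token commands, it can be recomputed on the master from the cluster CA certificate with the standard OpenSSL pipeline from the kubeadm documentation:

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'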
6. Verify the cluster

  # List all nodes
  kubectl get nodes

  # Label the worker nodes
  ### add a label
  kubectl label node <node-hostname> node-role.kubernetes.io/worker=''
  ### remove the label
  kubectl label node <node-hostname> node-role.kubernetes.io/worker-
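A short end-to-end smoke test (not part of the original post; it assumes an nginx image is reachable, e.g. pre-loaded offline like the other images):

  # Run a test deployment and expose it on a NodePort
  kubectl create deployment nginx --image=nginx
  kubectl expose deployment nginx --port=80 --type=NodePort
  kubectl get pod,svc -o wide
  # curl <any-node-ip>:<nodeport>, then clean up
  kubectl delete svc nginx && kubectl delete deployment nginx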
7. Set ipvs mode

  # 1. Check which mode kube-proxy is currently using
  kubectl logs -n kube-system kube-proxy-28xv4

  # 2. Edit the kube-proxy configuration and change mode to ipvs.
  #    The default is iptables, which becomes slow once the cluster grows.
  kubectl edit cm kube-proxy -n kube-system

  Modify as follows:

    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"

  ### After changing the kube-proxy config, delete the old kube-proxy pods so they restart with the new mode
  kubectl get pod -A | grep kube-proxy
  kubectl delete pod kube-proxy-xxxx -n kube-system
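ipvs mode relies on the ip_vs kernel modules, and ipvsadm makes it easy to see what kube-proxy has programmed; a hedged verification sketch (module and package names as on CentOS 7, ipvsadm installed via yum):

  # Load the ipvs-related kernel modules (persist them under /etc/modules-load.d/ if desired)
  for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
  lsmod | grep ip_vs

  # The recreated kube-proxy pods should log that the ipvs proxier is in use
  kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs

  # ipvsadm shows the virtual-server table kube-proxy maintains
  yum install -y ipvsadm
  ipvsadm -Ln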

This article is reposted from: https://blog.csdn.net/aloney1/article/details/131276908
Copyright belongs to the original author 灬龙. In case of infringement, please contact us for removal.
