Tutorial: Deploying a Kubernetes v1.31 Cluster with kubeadm (Guaranteed to Work)


This article follows the official Kubernetes documentation: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/


Prerequisites

Prepare 3 machines (1 master, 2 workers):

| OS                 | HostName | IP             | Arch    | Memory | CPU | Disk | Role   |
|--------------------|----------|----------------|---------|--------|-----|------|--------|
| Ubuntu 20.04.5 LTS | ubuntu1  | 192.168.17.142 | aarch64 | 2G     | 2C  | 50G  | master |
| Ubuntu 20.04.5 LTS | ubuntu2  | 192.168.17.8   | aarch64 | 2G     | 2C  | 50G  | worker |
| Ubuntu 20.04.5 LTS | ubuntu3  | 192.168.17.239 | aarch64 | 2G     | 2C  | 50G  | worker |
Run all of the following steps on all 3 machines.

  • Disable swap

    # Temporary (lost after a reboot)
    sudo swapoff -a
    # Permanent (recommended): comment out the swap entry in /etc/fstab
    sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    sudo swapon --show  # no output means swap is off
  • Enable IP forwarding and bridge netfilter

    # Temporary
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=1
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    # verify the 3 settings took effect
    sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables
    # Permanent (recommended)
    sudo vim /etc/sysctl.conf
    # add the following lines to the file:
    net.ipv4.ip_forward=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.bridge.bridge-nf-call-iptables=1
    # save, exit, and then run:
    sudo sysctl -p  # reloads all parameter settings from /etc/sysctl.conf

  Note: if the net.bridge parameters fail to apply, your system is probably missing the br_netfilter module. In that case, load it with:

    sudo modprobe br_netfilter

  Then add it to /etc/modules so it loads on every boot (note that `sudo echo "br_netfilter" >> /etc/modules` does not work, because the redirect runs without sudo):

    echo "br_netfilter" | sudo tee -a /etc/modules
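
Equivalently, the prerequisites page of the official kubeadm docs persists both the module and the sysctls with drop-in files, so everything survives reboots without editing /etc/sysctl.conf by hand:

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    sudo modprobe overlay
    sudo modprobe br_netfilter

    # sysctl params required by setup; params persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    # apply without a reboot
    sudo sysctl --system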

Deploy the Container Runtime (containerd)

Run the following on all 3 machines.

  • Download and install

    https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Follow Option 1 in the document above (step 1 → step 2 → step 3), but make sure the binaries you download match your OS architecture. My machines run Ubuntu on aarch64, so I pick the aarch64 artifacts. A sketch of those three steps follows below.
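For reference, here is a condensed sketch of Option 1. The version numbers are assumptions for illustration (containerd v1.7.22 is the version this cluster ends up running; runc v1.1.14 and CNI plugins v1.5.1 are plausible contemporaries) — substitute the releases and architecture you actually need:

    # Step 1: containerd binaries + systemd unit
    wget https://github.com/containerd/containerd/releases/download/v1.7.22/containerd-1.7.22-linux-arm64.tar.gz
    sudo tar Cxzvf /usr/local containerd-1.7.22-linux-arm64.tar.gz
    sudo mkdir -p /usr/local/lib/systemd/system
    sudo wget -O /usr/local/lib/systemd/system/containerd.service \
      https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
    sudo systemctl daemon-reload
    sudo systemctl enable --now containerd

    # Step 2: runc
    wget https://github.com/opencontainers/runc/releases/download/v1.1.14/runc.arm64
    sudo install -m 755 runc.arm64 /usr/local/sbin/runc

    # Step 3: CNI plugins
    wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-arm64-v1.5.1.tgz
    sudo mkdir -p /opt/cni/bin
    sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.5.1.tgz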


  • Configure containerd

If /etc/containerd/config.toml does not exist on your machine, create it and copy the content below into it.

By default, containerd does not seem to create this file on startup. To see the default configuration, run `containerd config default`, which prints the following:

    disabled_plugins = []
    imports = []
    oom_score = 0
    plugin_dir = ""
    required_plugins = []
    root = "/var/lib/containerd"
    state = "/run/containerd"
    temp = ""
    version = 2

    [cgroup]
      path = ""

    [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0

    [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0

    [metrics]
      address = ""
      grpc_histogram = false

    [plugins]

      [plugins."io.containerd.gc.v1.scheduler"]
        deletion_threshold = 0
        mutation_threshold = 100
        pause_threshold = 0.02
        schedule_delay = "0s"
        startup_delay = "100ms"

      [plugins."io.containerd.grpc.v1.cri"]
        cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
        device_ownership_from_security_context = false
        disable_apparmor = false
        disable_cgroup = false
        disable_hugetlb_controller = true
        disable_proc_mount = false
        disable_tcp_service = true
        drain_exec_sync_io_timeout = "0s"
        enable_cdi = false
        enable_selinux = false
        enable_tls_streaming = false
        enable_unprivileged_icmp = false
        enable_unprivileged_ports = false
        ignore_deprecation_warnings = []
        ignore_image_defined_volumes = false
        image_pull_progress_timeout = "5m0s"
        image_pull_with_sync_fs = false
        max_concurrent_downloads = 3
        max_container_log_line_size = 16384
        netns_mounts_under_state_dir = false
        restrict_oom_score_adj = false
        sandbox_image = "registry.k8s.io/pause:3.10"
        selinux_category_range = 1024
        stats_collect_period = 10
        stream_idle_timeout = "4h0m0s"
        stream_server_address = "127.0.0.1"
        stream_server_port = "0"
        systemd_cgroup = false
        tolerate_missing_hugetlb_controller = true
        unset_seccomp_profile = ""

        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1
          setup_serially = false

        [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_blockio_not_enabled_errors = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"

          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
            base_runtime_spec = ""
            cni_conf_dir = ""
            cni_max_conf_num = 0
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            privileged_without_host_devices_all_devices_allowed = false
            runtime_engine = ""
            runtime_path = ""
            runtime_root = ""
            runtime_type = ""
            sandbox_mode = ""
            snapshotter = ""

            [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              privileged_without_host_devices_all_devices_allowed = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
              sandbox_mode = "podsandbox"
              snapshotter = ""

              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                BinaryName = ""
                CriuImagePath = ""
                CriuPath = ""
                CriuWorkPath = ""
                IoGid = 0
                IoUid = 0
                NoNewKeyring = false
                NoPivotRoot = false
                Root = ""
                ShimCgroup = ""
                # IMPORTANT: this must match the kubelet's cgroup driver. This article
                # uses the systemd cgroup driver, so set it to true. For details see:
                # https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime
                SystemdCgroup = true

          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
            base_runtime_spec = ""
            cni_conf_dir = ""
            cni_max_conf_num = 0
            container_annotations = []
            pod_annotations = []
            privileged_without_host_devices = false
            privileged_without_host_devices_all_devices_allowed = false
            runtime_engine = ""
            runtime_path = ""
            runtime_root = ""
            runtime_type = ""
            sandbox_mode = ""
            snapshotter = ""

            [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

        [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"

        [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""

          [plugins."io.containerd.grpc.v1.cri".registry.auths]

          [plugins."io.containerd.grpc.v1.cri".registry.configs]

          [plugins."io.containerd.grpc.v1.cri".registry.headers]

          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""

      [plugins."io.containerd.internal.v1.opt"]
        path = "/opt/containerd"

      [plugins."io.containerd.internal.v1.restart"]
        interval = "10s"

      [plugins."io.containerd.internal.v1.tracing"]

      [plugins."io.containerd.metadata.v1.bolt"]
        content_sharing_policy = "shared"

      [plugins."io.containerd.monitor.v1.cgroups"]
        no_prometheus = false

      [plugins."io.containerd.nri.v1.nri"]
        disable = true
        disable_connections = false
        plugin_config_path = "/etc/nri/conf.d"
        plugin_path = "/opt/nri/plugins"
        plugin_registration_timeout = "5s"
        plugin_request_timeout = "2s"
        socket_path = "/var/run/nri/nri.sock"

      [plugins."io.containerd.runtime.v1.linux"]
        no_shim = false
        runtime = "runc"
        runtime_root = ""
        shim = "containerd-shim"
        shim_debug = false

      [plugins."io.containerd.runtime.v2.task"]
        platforms = ["linux/arm64/v8"]
        sched_core = false

      [plugins."io.containerd.service.v1.diff-service"]
        default = ["walking"]

      [plugins."io.containerd.service.v1.tasks-service"]
        blockio_config_file = ""
        rdt_config_file = ""

      [plugins."io.containerd.snapshotter.v1.aufs"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.blockfile"]
        fs_type = ""
        mount_options = []
        root_path = ""
        scratch_file = ""

      [plugins."io.containerd.snapshotter.v1.btrfs"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.devmapper"]
        async_remove = false
        base_image_size = ""
        discard_blocks = false
        fs_options = ""
        fs_type = ""
        pool_name = ""
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.native"]
        root_path = ""

      [plugins."io.containerd.snapshotter.v1.overlayfs"]
        mount_options = []
        root_path = ""
        sync_remove = false
        upperdir_label = false

      [plugins."io.containerd.snapshotter.v1.zfs"]
        root_path = ""

      [plugins."io.containerd.tracing.processor.v1.otlp"]

      [plugins."io.containerd.transfer.v1.local"]
        config_path = ""
        max_concurrent_downloads = 3
        max_concurrent_uploaded_layers = 3

        [[plugins."io.containerd.transfer.v1.local".unpack_config]]
          differ = ""
          platform = "linux/arm64/v8"
          snapshotter = "overlayfs"

    [proxy_plugins]

    [stream_processors]

      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar"

      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
        accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
        args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
        env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
        path = "ctd-decoder"
        returns = "application/vnd.oci.image.layer.v1.tar+gzip"

    [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.metrics.shimstats" = "2s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"

    [ttrpc]
      address = ""
      gid = 0
      uid = 0

You can also run `containerd config default > config.toml` and copy that file to /etc/containerd/config.toml, instead of pasting the content by hand (remember to still set SystemdCgroup = true).
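After changing /etc/containerd/config.toml, restart containerd so the new configuration (especially SystemdCgroup = true) takes effect:

    sudo systemctl restart containerd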

  • Check that containerd started successfully

    systemctl status containerd.service
    # or follow the logs
    journalctl -u containerd -f


Make sure containerd is up and running on all 3 machines.

Install kubeadm, kubectl, and kubelet

Run the following on all 3 machines.

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime:~:text=cri%2Ddockerd.sock-,%E5%AE%89%E8%A3%85%20kubeadm%E3%80%81kubelet%20%E5%92%8C%20kubectl,-%E4%BD%A0%E9%9C%80%E8%A6%81%E5%9C%A8


Pick the instructions for your operating system at the link above and install each of:

  • kubeadm
  • kubectl
  • kubelet

A condensed sketch for Ubuntu/Debian follows below.
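For reference, on Ubuntu/Debian the official instructions boil down to roughly the following (check the linked page for the current keyring path and repository URL before running):

    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | \
      sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | \
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl  # prevent accidental upgrades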

Prepare the Base Images

Run the following on all 3 machines.

The built-in k8s images are hosted on registry.k8s.io, which is hard to reach from mainland China without a proxy. The workaround: on your own computer (with a proxy), pull the images locally, re-tag them, and push them to your own Docker registry. Then, on each of the 3 machines, pull from your registry and re-tag the images back to their official names.

  • How to find out which images you need (if you are also deploying v1.31, you can simply pull my images below)

    root@ubuntu1:~/kubernetes/deploy/calico# kubeadm config images list
    # the output lists the 7 images that must be prepared in advance
    registry.k8s.io/kube-apiserver:v1.31.0
    registry.k8s.io/kube-controller-manager:v1.31.0
    registry.k8s.io/kube-scheduler:v1.31.0
    registry.k8s.io/kube-proxy:v1.31.0
    registry.k8s.io/coredns/coredns:v1.11.3
    registry.k8s.io/pause:3.10
    registry.k8s.io/etcd:3.5.15-0

    # re-tag with your own registry name (on the proxy-enabled machine; push after tagging)
    docker tag registry.k8s.io/kube-apiserver:v1.31.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-apiserver:v1.31.0
    docker tag registry.k8s.io/kube-controller-manager:v1.31.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-controller-manager:v1.31.0
    docker tag registry.k8s.io/kube-scheduler:v1.31.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-scheduler:v1.31.0
    docker tag registry.k8s.io/kube-proxy:v1.31.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-proxy:v1.31.0
    docker tag registry.k8s.io/coredns/coredns:v1.11.3 registry.cn-hangzhou.aliyuncs.com/shouzhi/coredns:v1.11.3
    docker tag registry.k8s.io/pause:3.10 registry.cn-hangzhou.aliyuncs.com/shouzhi/pause:3.10
    docker tag registry.k8s.io/etcd:3.5.15-0 registry.cn-hangzhou.aliyuncs.com/shouzhi/etcd:3.5.15-0
    # these 3 images are not part of the core k8s set, but the cluster will need
    # them later (for Calico), so prepare them now as well -- 10 images in total
    docker tag docker.io/calico/cni:v3.26.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_cni:v3.26.0
    docker tag docker.io/calico/node:v3.26.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_node:v3.26.0
    docker tag docker.io/calico/kube-controllers:v3.26.0 registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_kube-controllers:v3.26.0

    # on each of the 3 machines: pull the 10 images with crictl
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-apiserver:v1.31.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-controller-manager:v1.31.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-scheduler:v1.31.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-proxy:v1.31.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/coredns:v1.11.3
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/pause:3.10
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/etcd:3.5.15-0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_cni:v3.26.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_node:v3.26.0
    crictl pull registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_kube-controllers:v3.26.0

    # re-tag the 10 images back to their official names
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-apiserver:v1.31.0 registry.k8s.io/kube-apiserver:v1.31.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-controller-manager:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-scheduler:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/kube-proxy:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/coredns:v1.11.3 registry.k8s.io/coredns/coredns:v1.11.3
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/pause:3.10 registry.k8s.io/pause:3.10
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/etcd:3.5.15-0 registry.k8s.io/etcd:3.5.15-0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_cni:v3.26.0 docker.io/calico/cni:v3.26.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_node:v3.26.0 docker.io/calico/node:v3.26.0
    ctr --namespace k8s.io image tag registry.cn-hangzhou.aliyuncs.com/shouzhi/calico_kube-controllers:v3.26.0 docker.io/calico/kube-controllers:v3.26.0

These 10 images are the minimal set a cluster start needs. A scripted version of the pull-and-retag step is sketched below.
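Since the pull-and-retag dance is repetitive, you might script it on each node. A minimal sketch; MIRROR and the pair list below mirror the commands above — point MIRROR at your own registry:

    #!/usr/bin/env bash
    # Pull each image from the mirror registry, then re-tag it to its official name.
    set -euo pipefail
    MIRROR=registry.cn-hangzhou.aliyuncs.com/shouzhi

    # "<mirror-side suffix> <official image name>" pairs
    PAIRS=(
      "kube-apiserver:v1.31.0 registry.k8s.io/kube-apiserver:v1.31.0"
      "kube-controller-manager:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0"
      "kube-scheduler:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0"
      "kube-proxy:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0"
      "coredns:v1.11.3 registry.k8s.io/coredns/coredns:v1.11.3"
      "pause:3.10 registry.k8s.io/pause:3.10"
      "etcd:3.5.15-0 registry.k8s.io/etcd:3.5.15-0"
      "calico_cni:v3.26.0 docker.io/calico/cni:v3.26.0"
      "calico_node:v3.26.0 docker.io/calico/node:v3.26.0"
      "calico_kube-controllers:v3.26.0 docker.io/calico/kube-controllers:v3.26.0"
    )

    for pair in "${PAIRS[@]}"; do
      src=${pair%% *}   # mirror-side suffix
      dst=${pair#* }    # official image name
      crictl pull "$MIRROR/$src"
      ctr --namespace k8s.io image tag "$MIRROR/$src" "$dst"
    done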
  • Check the local images

    root@ubuntu1:~/kubernetes/deploy/calico# crictl images
    IMAGE                                     TAG        IMAGE ID        SIZE
    docker.io/calico/cni                      v3.26.0    54cd67220700c   85.5MB
    docker.io/calico/kube-controllers         v3.26.0    aebf438b736fc   29.2MB
    docker.io/calico/node                     v3.26.0    0259a80e0f442   84.6MB
    registry.k8s.io/coredns/coredns           v1.11.3    2f6c962e7b831   16.9MB
    registry.k8s.io/etcd                      3.5.15-0   27e3830e14027   66.4MB
    registry.k8s.io/kube-apiserver            v1.31.0    cd0f0ae0ec9e0   25.6MB
    registry.k8s.io/kube-controller-manager   v1.31.0    fcb0683e6bdbd   23.9MB
    registry.k8s.io/kube-proxy                v1.31.0    71d55d66fd4ee   26.8MB
    registry.k8s.io/kube-scheduler            v1.31.0    fbbbd428abb4d   18.4MB
    registry.k8s.io/pause                     3.10       afb61768ce381   266kB

Verify that none of the images above are missing; a missing image will prevent the cluster from starting.

Start the Deployment

Prepare kubeadm-config.yaml

Note: run this only on your master server.

  • Print the kubeadm default configuration: kubeadm config print init-defaults

Create kubeadm-config.yaml in a directory you maintain, and copy the output below into it.

    # Use the actual output of `kubeadm config print init-defaults` on your machine as the baseline.
    apiVersion: kubeadm.k8s.io/v1beta4
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      # your master server's IP (change this)
      advertiseAddress: 192.168.17.142
      # api-server port; keep it unless 6443 is already taken on this machine
      bindPort: 6443
    nodeRegistration:
      criSocket: unix:///var/run/containerd/containerd.sock
      imagePullPolicy: IfNotPresent
      imagePullSerial: true
      name: node
      taints: null
    timeouts:
      controlPlaneComponentHealthCheck: 4m0s
      discovery: 5m0s
      etcdAPICall: 2m0s
      kubeletHealthCheck: 4m0s
      kubernetesAPICall: 1m0s
      tlsBootstrap: 5m0s
      upgradeManifests: 5m0s
    ---
    apiServer: {}
    apiVersion: kubeadm.k8s.io/v1beta4
    caCertificateValidityPeriod: 87600h0m0s
    certificateValidityPeriod: 8760h0m0s
    # directory where the cluster certificates (etcd, api-server, ...) are stored; no need to change
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    encryptionAlgorithm: RSA-2048
    etcd:
      local:
        dataDir: /var/lib/etcd
    # the k8s image registry; no need to change
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    # the k8s version; set it to the version you are deploying
    kubernetesVersion: 1.31.0
    networking:
      dnsDomain: cluster.local
      # service virtual-IP subnet; no need to change
      serviceSubnet: 10.96.0.0/12
    proxy: {}
    scheduler: {}
    # the following section must be appended by hand
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # set the kubelet cgroup driver (must match containerd's SystemdCgroup = true)
    cgroupDriver: systemd
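Recent kubeadm releases can sanity-check this file before you use it; if your build has the `config validate` subcommand, run:

    kubeadm config validate --config kubeadm-config.yaml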

Initialize the Master

Note: run this only on your master server.

    # Initialize the master: cd into the directory containing the kubeadm-config.yaml created above, then run
    kubeadm init --config=kubeadm-config.yaml
    # you can also print the images your config file resolves to, based on its contents
    kubeadm config images list --config=kubeadm-config.yaml

    # the following output means the master initialized successfully
    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Alternatively, if you are the root user, you can run:
      export KUBECONFIG=/etc/kubernetes/admin.conf
    # this means the cluster still needs a pod network add-on before it is fully up; we chose Calico
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    Then you can join any number of worker nodes by running the following on each as root:
    # save this command: the worker nodes run it to join the master (used in the "Join the Worker Nodes" section below)
    kubeadm join 192.168.17.142:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:8bcc34481b37c8325791bc0d275bf7aab6b1c9222c4ea23f5dfa4988d3f21f60

Once you see the success message, continue with:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=/etc/kubernetes/admin.conf

At this point, the master node initialization is complete.
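You can already confirm that kubectl reaches the API server; note the master typically reports NotReady until the network plugin is installed (done in the Calico section below):

    kubectl get nodes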

Join the Worker Nodes

Note: run this only on your worker servers.

    # this is exactly what your own `kubeadm init --config=kubeadm-config.yaml` printed
    kubeadm join 192.168.17.142:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:8bcc34481b37c8325791bc0d275bf7aab6b1c9222c4ea23f5dfa4988d3f21f60
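If you no longer have that output (the default token also expires after 24 hours), you can generate a fresh join command on the master:

    kubeadm token create --print-join-command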

Install Calico

To install the Calico network plugin on Kubernetes v1.31, follow these steps:

  • Preparation

Make sure the Kubernetes cluster is already deployed, that all nodes can reach each other, and that:

  1. Every node has working network connectivity, especially between the control plane and the workers.
  2. kubectl works normally and the cluster is running.
  • Download the Calico manifest

Calico publishes an official installation manifest; download and apply it with:

    # downloads calico.yaml
    wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
    # start calico
    kubectl apply -f calico.yaml
  • Check the Calico pod status

After applying the manifest, the Calico pods start in the kube-system namespace. Check them with:

    kubectl get pods -n kube-system

All Calico-related pods should reach the Running state; they typically include the following (a `kubectl wait` sketch follows this list):

  • calico-node
  • calico-kube-controllers
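If you want to block until they are ready, something like this should work (the k8s-app labels are what the v3.26.0 manifest uses; verify with `kubectl get pods -n kube-system --show-labels` if in doubt):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-kube-controllers --timeout=300s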

Once all of those pods are Running, the Calico deployment has started successfully.

Confirm the Cluster Status

  • Check the status of every node in the cluster

    root@ubuntu1:~/kubernetes/deploy# kubectl get node -o wide
    NAME      STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    node      Ready    control-plane   163m   v1.31.1   192.168.17.142   <none>        Ubuntu 20.04.5 LTS   5.4.0-196-generic   containerd://1.7.22
    ubuntu2   Ready    <none>          155m   v1.31.1   192.168.17.8     <none>        Ubuntu 20.04.5 LTS   5.4.0-196-generic   containerd://1.7.22
    ubuntu3   Ready    <none>          154m   v1.31.1   192.168.17.239   <none>        Ubuntu 20.04.5 LTS   5.4.0-196-generic   containerd://1.7.22
  • Check all pods in the cluster

Below are all the pods of a minimal working k8s cluster:

    root@ubuntu1:~/kubernetes/deploy# kubectl get pod -A
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-7f764f4f68-tgwzt   1/1     Running   0          151m
    kube-system   calico-node-4h285                          1/1     Running   0          151m
    kube-system   calico-node-rkl7l                          1/1     Running   0          151m
    kube-system   calico-node-xqpq8                          1/1     Running   0          151m
    kube-system   coredns-7c65d6cfc9-68fsq                   1/1     Running   0          166m
    kube-system   coredns-7c65d6cfc9-nh9b2                   1/1     Running   0          166m
    kube-system   etcd-node                                  1/1     Running   0          166m
    kube-system   kube-apiserver-node                        1/1     Running   0          166m
    kube-system   kube-controller-manager-node               1/1     Running   6          166m
    kube-system   kube-proxy-bm7rx                           1/1     Running   0          157m
    kube-system   kube-proxy-r97ql                           1/1     Running   0          158m
    kube-system   kube-proxy-z9d2j                           1/1     Running   0          166m
    kube-system   kube-scheduler-node                        1/1     Running   6          166m

At this point, the whole cluster (1 master and 2 workers) has been deployed successfully.
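As an optional smoke test, you can run a throwaway nginx deployment and check that it gets scheduled onto a worker with a Calico-assigned pod IP:

    kubectl create deployment nginx --image=nginx
    kubectl get pods -o wide          # should land on ubuntu2/ubuntu3 with a pod IP
    kubectl delete deployment nginx   # clean up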

What's Next

This document deploys only a minimal cluster, enough for learning and for development environments. A production-grade, highly available cluster would additionally need multiple masters built on top of this setup. I will find time later to write a more comprehensive tutorial on highly available deployments.


Reposted from: https://blog.csdn.net/weixin_44102162/article/details/142640088
Copyright belongs to the original author, techzhi. If there is any infringement, please contact us for removal.
