Preface
Environment:
CentOS 7.6, Kubernetes 1.22.17, KubeSphere v3.3.0
This article is based on KubeSphere v3.3.0.
KubeSphere's vision is to build a cloud-native distributed operating system with Kubernetes as its kernel. Its architecture makes it easy for third-party applications to integrate with cloud-native ecosystem components in a plug-and-play fashion, and it supports unified distribution and operations management of cloud-native applications across multiple clouds and clusters. In practical terms: the kubekey tool can install a Kubernetes cluster and KubeSphere together on Linux servers, or install KubeSphere into an existing Kubernetes cluster. KubeSphere itself is a graphical interface on top of Kubernetes that lets you quickly deploy Kubernetes resources, use DevOps features, and so on.
In a cluster installed by kubekey, etcd and kubelet are managed by systemd, while the other control-plane components — kube-apiserver, kube-controller-manager, and kube-scheduler — run as static pods.
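A quick way to confirm this split is to look at the static pod manifests and the systemd units on a control-plane node. A hedged sketch (it degrades gracefully on a machine that is not a cluster node):

```shell
# Static pods live as manifests under /etc/kubernetes/manifests;
# etcd and kubelet run as systemd units.
if [ -d /etc/kubernetes/manifests ]; then
  echo "static pods:"; ls /etc/kubernetes/manifests
  systemctl is-enabled kubelet etcd 2>/dev/null || true
else
  echo "not a cluster node: /etc/kubernetes/manifests missing"
fi
```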
What is kubekey (kk for short)?
KubeSphere official website:
https://www.kubesphere.io/zh/
kubekey is a new installer written in Go that replaces the previous Ansible-based installer. It gives users flexible installation options: you can install KubeSphere and Kubernetes separately or both at once, which is convenient and efficient. Installing KubeSphere requires a default StorageClass in the Kubernetes cluster; if none exists, KubeSphere will by default install OpenEBS, whose StorageClass is essentially hostPath-backed storage.
Installing KubeSphere on a single node (all-in-one, a quick way to get familiar with KubeSphere)
Use kk to quickly deploy KubeSphere and Kubernetes on a single server.
Official documentation:
https://www.kubesphere.io/zh/docs/v3.3/quick-start/all-in-one-on-linux/
Deploy Kubernetes and KubeSphere
# Install basic server dependencies and do basic configuration
yum install socat conntrack ebtables ipset -y
yum install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind-utils -y
# Disable SELinux
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
echo "Check that SELinux is disabled:"; getenforce && grep 'SELINUX=disabled' /etc/selinux/config
# Disable the firewall
systemctl stop firewalld.service && systemctl disable firewalld.service
echo "Check that the firewall is off:"; systemctl status firewalld.service | grep -E 'Active|disabled'
# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
echo "Check that swap is off:"; grep -i 'swap' /etc/fstab; free -h | grep -i 'swap'
# Optionally install docker in advance; otherwise kubekey automatically installs the latest docker version matching the k8s release
# Manual docker installation reference: https://blog.csdn.net/MssGuo/article/details/122694156
# Download and install kubekey v3.0.7
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
# If the download above fails, you can download and extract the tarball directly
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod a+x kk
# Install Kubernetes and KubeSphere
# Check which k8s versions the current kubekey supports
./kk version --show-supported-k8s
# Syntax of the kk cluster-creation command
./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
# Create the k8s cluster and install KubeSphere together; with no config file, this defaults to a single-node install on the current node
./kk create cluster --with-kubernetes v1.22.17 --with-kubesphere v3.3.0
# Wait for the installation to finish, then check whether all pods are up and ready
kubectl get pod --all-namespaces
# Watch the KubeSphere installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl command auto-completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
kubectl completion bash > /etc/bash_completion.d/kubectl
# Log in to KubeSphere with the credentials printed at the end of the installation
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
Multi-node installation
Prepare at least 3 servers.
Official documentation:
https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/multioverview/
Deploy Kubernetes and KubeSphere
# Install basic server dependencies and do basic configuration (on every node)
yum install socat conntrack ebtables ipset -y
yum install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind-utils -y
# Disable SELinux
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
echo "Check that SELinux is disabled:"; getenforce && grep 'SELINUX=disabled' /etc/selinux/config
# Disable the firewall
systemctl stop firewalld.service && systemctl disable firewalld.service
echo "Check that the firewall is off:"; systemctl status firewalld.service | grep -E 'Active|disabled'
# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
echo "Check that swap is off:"; grep -i 'swap' /etc/fstab; free -h | grep -i 'swap'
# Optionally install docker in advance; otherwise kubekey automatically installs the latest docker version matching the k8s release
# Manual docker installation reference: https://blog.csdn.net/MssGuo/article/details/122694156
# Download and install kubekey v3.0.7
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
# If the download above fails, you can download and extract the tarball directly
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod +x kk
# Install Kubernetes and KubeSphere
# For a multi-node installation, you create the cluster from a configuration file
# Check which k8s versions the current kubekey supports
./kk version --show-supported-k8s
# 1. Create the configuration file
# Without --with-kubesphere, KubeSphere is not deployed; you would have to install it via the addons field of the config file, or pass the flag again later with ./kk create cluster
# If --with-kubesphere is given without a version, the latest KubeSphere is installed
# Syntax: ./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
./kk create config --with-kubernetes v1.22.17 --with-kubesphere v3.3.0 -f config.yaml
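After `./kk create config` generates config.yaml, the key parts to edit are `hosts` and `roleGroups`. A sketch of what that section typically looks like in a kubekey config file (host names, addresses, and passwords are placeholders, not from the original article):

```yaml
spec:
  hosts:
  - {name: ks1, address: 192.168.0.11, internalAddress: 192.168.0.11, user: root, password: "YourPassword"}
  - {name: ks2, address: 192.168.0.12, internalAddress: 192.168.0.12, user: root, password: "YourPassword"}
  - {name: ks3, address: 192.168.0.13, internalAddress: 192.168.0.13, user: root, password: "YourPassword"}
  roleGroups:
    etcd:
    - ks1
    control-plane:
    - ks1
    worker:
    - ks2
    - ks3
```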
# 2. Edit the configuration file
# A complete example with field descriptions: https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md
# Edit the parameters: host information, node roles, etc.
vim config.yaml
# The snippet below configures an external etcd cluster; if you do not use external etcd, skip it and keep the defaults
etcd:
  type: external   # type external means an etcd cluster outside kubekey's control
  external:
    endpoints:     # etcd cluster URLs and certificates
    - https://192.168.56.4:2379
    - https://192.168.56.5:2379
    - https://192.168.56.6:2379
    caFile: /etc/etcd/pki/ca.pem
    certFile: /etc/etcd/pki/server.pem
    keyFile: /etc/etcd/pki/server-key.pem
# 3. Create the cluster from the configuration file
./kk create cluster -f config.yaml
# Check whether all pods are up and ready
kubectl get pod --all-namespaces
# Watch the KubeSphere installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl command auto-completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
kubectl completion bash > /etc/bash_completion.d/kubectl
# Log in to KubeSphere with the credentials printed at the end of the installation
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
Offline installation of Kubernetes v1.22.17 and KubeSphere v3.3.2
# Official offline-installation docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation/
Manifest: a text file that describes the current Kubernetes cluster information and defines what the artifact should contain.
Artifact: a package exported from a manifest file, containing the image tarballs and related binaries.
You write a manifest file defining everything the offline cluster will need, export the artifact package from it with ./kk artifact export, then upload the kk binary and the artifact to the air-gapped server. At deployment time, kubekey plus the artifact are enough to set up an image registry and a Kubernetes cluster quickly and simply.
Downloading kubekey and exporting the artifact both require Internet access, so do those steps on any server that can reach the Internet.
# On the Internet-connected machine, download kubekey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
# If the download above fails, you can download and extract the tarball directly
wget https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.7/kubekey-v3.0.7-linux-amd64.tar.gz
tar xf kubekey-v3.0.7-linux-amd64.tar.gz && chmod +x kk
# Write a manifest.yaml describing what the artifact should contain
# Field reference: https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md
# The manifest below targets kubernetes v1.22.17 and kubesphere v3.3.2
cat > manifest.yaml <<'EOF'
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: k8s
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.7/centos7-rpms-amd64.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.22.17
  components:
    helm:
      version: v3.9.0
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    ## For now, if your cluster container runtime is containerd, kubekey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is kubekey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
EOF
# Export the artifact; kubesphere.tar.gz is very large, its size depends mainly on how many images manifest.yaml lists and how big they are
export KKZONE=cn
./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
# Upload the kk binary and kubesphere.tar.gz to the offline server
# Create the configuration file; the versions must match those in the artifact above
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.17 -f config.yaml
# Edit the configuration file
# A complete example with field descriptions: https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md
vim config.yaml
# Configure the image registry
  roleGroups:        # add a registry role here to pick the node that hosts the image registry
    registry:
    - ks1
  registry:
    type: harbor     # set the registry type to harbor; if unset, a docker registry is installed by default
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
# Install the image registry; the config file says harbor, so harbor is what gets installed
./kk init registry -f config.yaml -a kubesphere.tar.gz
# Create the Harbor projects, because Harbor only accepts pushes into existing projects
curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
# Edit the script:
# - add kubesphereio to the project list
# - change the url value to https://dockerhub.kubekey.local
# - append -k to the curl command at the end
vim create_project_harbor.sh
chmod +x create_project_harbor.sh
# After harbor is created it is managed by systemd and installed under /opt/harbor, where you can maintain it yourself; run the script to create the projects
./create_project_harbor.sh
# Log in to Harbor; the projects created by the script are public, so all users can pull images
https://192.168.xx.xx:80  admin/Harbor12345
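The project names the script must create can be derived from the image list itself: the Harbor project is the path component between the registry host and the image name. A self-contained sketch (the sample file below is a hypothetical two-line excerpt of manifest.yaml):

```shell
# Derive the required Harbor project names from an image list.
cat > /tmp/images-sample.txt <<'SAMPLE'
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.17
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.2
SAMPLE
# $(NF-1) is the second-to-last /-separated field, i.e. the project name
awk -F/ '{print $(NF-1)}' /tmp/images-sample.txt | sort -u
# → kubesphereio
```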
# Edit the cluster configuration file again to add the registry details
vim config.yaml
...
  registry:
    type: harbor
    auths:                 # new: the registry address with its credentials
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"   # the project name inside the registry
    registryMirrors: []
    insecureRegistries: []
  addons: []
# Now actually create the k8s cluster and KubeSphere
./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages
# After installation you will find all images in the kubesphereio project: while installing, kk rewrites each image in the artifact to the harbor domain plus project name and pushes it to the harbor registry; the project name is whatever namespaceOverride says in config.yaml
# Check whether all pods are up and ready
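The rewrite kk performs can be illustrated with plain shell parameter expansion; this is only a sketch of the naming transformation, with privateRegistry and namespaceOverride mirroring the config.yaml values above:

```shell
# How an artifact image reference ends up named in the private registry.
src="registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.17"
privateRegistry="dockerhub.kubekey.local"
namespaceOverride="kubesphereio"
# keep only the image name and tag, then prepend registry and project
name_tag="${src##*/}"
echo "${privateRegistry}/${namespaceOverride}/${name_tag}"
# → dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.17
```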
kubectl get pod --all-namespaces
# Watch the KubeSphere installation log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# Configure kubectl command auto-completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
kubectl completion bash > /etc/bash_completion.d/kubectl
# Log in to KubeSphere with the credentials printed at the end of the installation
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
# Offline installation, summarized
1. On an Internet-connected server, download kk and write a manifest file; export the artifact from it with ./kk artifact export. The artifact is mostly image files, which is why it is so large.
2. Upload only kk and the artifact to the offline server.
3. Create a configuration file with ./kk create config and edit it: host information, node role assignment, registry type, and so on.
4. Run ./kk init registry with the config file and the artifact to create an image registry. The registry type was defined in the config file and can be harbor or docker registry; either way it is managed by systemd. The registry exists so that the images inside the artifact can be pushed to it. Harbor is installed under /opt/harbor by default, and the configuration files there describe the registry name, ports, and other details, so you can maintain Harbor yourself.
5. After Harbor is created, run the helper script to create projects, because Harbor only accepts pushes into existing projects.
6. Edit the configuration file again to add the registry details, such as the Harbor credentials and the default project name, because while creating the cluster kk must rewrite the artifact's image names and push them into that Harbor project.
7. Create the cluster.
8. Check that all pods are ready and that the KubeSphere log reports the installation as complete.
Online: installing KubeSphere v3.3.0 on an existing Kubernetes cluster
# Minimal KubeSphere installation; you can enable KubeSphere's plugins yourself after installation
https://kubesphere.io/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s/
https://www.kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/introduction/overview/
# Prerequisites
1. The Kubernetes version must be v1.20.x, v1.21.x, *v1.22.x, *v1.23.x, *v1.24.x, *v1.25.x, or *v1.26.x. On the starred versions, some edge-node features may be unavailable, so if you need edge nodes, v1.21.x is recommended.
2. Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
3. Before installing, a default StorageClass must be configured in the Kubernetes cluster.
So first install a suitable Kubernetes cluster version, and make sure the cluster has a default StorageClass with dynamic provisioning.
Installing a k8s cluster: https://blog.csdn.net/MssGuo/article/details/122773155
Configuring NFS as the k8s default storage: https://blog.csdn.net/MssGuo/article/details/116381308 and https://blog.csdn.net/MssGuo/article/details/123611986
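Since the default StorageClass is a hard prerequisite, it is worth checking it before applying the installer. A hedged sketch, guarded so it also runs where kubectl or a cluster is absent:

```shell
# List StorageClasses; the default one is annotated with
# storageclass.kubernetes.io/is-default-class=true and shown as "(default)".
if command -v kubectl >/dev/null 2>&1; then
  kubectl get storageclass 2>/dev/null || echo "no cluster reachable"
else
  echo "kubectl not found: run this where the cluster is configured"
fi
```

To mark an existing class as default, the documented annotation can be patched: `kubectl patch storageclass <name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`.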
# Deploy KubeSphere v3.3.0
wget -c https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f kubesphere-installer.yaml
wget -c https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
# vim cluster-configuration.yaml   # optionally edit the file as needed
kubectl apply -f cluster-configuration.yaml
# For KubeSphere v3.3.2 instead:
# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml
# Check the log
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
# The log output includes the KubeSphere login information
Console: http://192.168.xx.xx:30880
Account: admin
Password: P@88w0rd
# Basic KubeSphere usage
Log in as admin, create user A, and grant A permission to create workspaces.
User A logs in, creates a workspace, and invites other users into it; ordinary members can view all workspace resources, while only B, the project director, can create projects.
User B logs in, creates a project, invites the developers into it, and assigns them project roles.
Creating a project in KubeSphere creates a corresponding Kubernetes namespace.
Offline: installing KubeSphere v3.3.0 on an existing Kubernetes cluster
# Official docs: https://www.kubesphere.io/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/
# The difference from the online case is that offline you must set up a local registry on the air-gapped server to host the Docker images
# This walkthrough installs KubeSphere onto Kubernetes in an air-gapped environment
# First you need an image registry on the offline server or on the same network segment; either of the guides below works, and an existing registry can be reused:
# docker registry: https://blog.csdn.net/MssGuo/article/details/128945312
# harbor: https://blog.csdn.net/MssGuo/article/details/126210184
# Find an Internet-connected server with a docker environment
# Online step: download the KubeSphere image list
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
# You can trim the list to only the images you need; if you already have a Kubernetes cluster, delete ##k8s-images and the images below it from images-list.txt
vim images-list.txt
# Online step: download offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh
chmod +x offline-installation-tool.sh
./offline-installation-tool.sh -h
# Online step: pull the images (needs Internet access and docker); they are saved as .tar.gz files under the kubesphere-images directory
./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
# Download the KubeSphere deployment files
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
# Edit the cluster configuration: set spec.local_registry to your registry's IP/domain and port
vim cluster-configuration.yaml
# Rewrite the installer image name; dockerhub.kubekey.local here stands for your registry's domain, and the tag must match the downloaded v3.3.0 files
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0#" kubesphere-installer.yaml
# Upload all the files downloaded above to the offline server
# Push the images to the private registry; -r specifies the registry domain and port
# Note: with harbor, you must first create the projects; derive the project names from the images-list.txt entries, e.g. for kubesphere/tomcat85-java8-centos7:v3.2.0 the project is kubesphere
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
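What the sed rewrite above actually does can be seen on a throwaway file containing the relevant line from kubesphere-installer.yaml (the snippet path is arbitrary):

```shell
# Demonstrate the image-name rewrite on a one-line sample.
cat > /tmp/installer-snippet.yaml <<'SNIP'
        image: kubesphere/ks-installer:v3.3.0
SNIP
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0#" /tmp/installer-snippet.yaml
cat /tmp/installer-snippet.yaml
# → image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0 (same indentation)
```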
# Start installing KubeSphere
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
Uninstalling KubeSphere
To uninstall KubeSphere from a Kubernetes cluster, refer to the official uninstall script:
https://www.kubesphere.io/zh/docs/v3.4/installing-on-kubernetes/uninstall-kubesphere-from-k8s/
# Download the script and run it on a master node
wget -c https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
bash kubesphere-delete.sh
Adding a node
Official documentation: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes/
# Add a worker node
# Retrieve the cluster information and generate a sample.yaml config file; the generated file may be incomplete and need filling in by hand. Skip this step if the original config file still exists on the machine
./kk create config --from-cluster
# Edit the config file and put the new node's information under hosts and roleGroups
vim sample.yaml
# Add the node
./kk add nodes -f sample.yaml
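Adding a worker amounts to appending it to both `hosts` and `roleGroups.worker` in sample.yaml. A sketch with placeholder names and addresses (ks4 is the hypothetical new node):

```yaml
spec:
  hosts:
  - {name: ks1, address: 192.168.0.11, internalAddress: 192.168.0.11, user: root, password: "YourPassword"}
  - {name: ks4, address: 192.168.0.14, internalAddress: 192.168.0.14, user: root, password: "YourPassword"}  # new node
  roleGroups:
    etcd:
    - ks1
    control-plane:
    - ks1
    worker:
    - ks4    # register the new node as a worker
```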
# Add a master node for high availability
# Similar to the above, except you configure master-related information
./kk create config --from-cluster
# Add the new node and the load balancer information to sample.yaml
vim sample.yaml
# Add the node
./kk add nodes -f sample.yaml
Removing a node
# Locate the config file used when setting up the cluster; if it is gone, use kubekey to retrieve the cluster information, which by default creates sample.yaml
./kk create config --from-cluster
# Remove the node
./kk delete node <nodeName> -f sample.yaml
kk command syntax
[root@ks1 ~]# ./kk --help
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install both Kubernetes and KubeSphere in one command
3. Install Kubernetes first, then deploy KubeSphere on it using ks-installer (https://github.com/kubesphere/ks-installer)

Usage:
  kk [command]

Available Commands:
  add         Add nodes to a kubernetes cluster
  alpha       Commands for features in alpha
  artifact    Manage a KubeKey offline installation package
  certs       Cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete a node or a cluster
  help        Help about any command
  init        Initializes the installation environment
  plugin      Provides utilities for interacting with plugins
  upgrade     Upgrade your cluster smoothly to a newer version
  version     Print the version of KubeKey

Flags:
  -h, --help   help for kk

Use "kk [command] --help" for more information about a command.
[root@ks1 ~]#
# Install a k8s cluster and KubeSphere; with no config file, defaults to a single-node install on the current node
./kk create cluster --with-kubernetes v1.22.17 --with-kubesphere v3.3.0
# Install only the k8s cluster; with no config file, defaults to a single-node install on the current node
./kk create cluster --with-kubernetes v1.22.17
# Create a configuration file
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.17 -f config.yaml
# Create the k8s cluster and KubeSphere from the configuration file
./kk create cluster -f config.yaml
# Remove a k8s node
./kk delete node ks2 -f config.yaml
# Delete the k8s cluster
./kk delete cluster
Copyright belongs to the original author, MssGuo. In case of infringement, please contact us for removal.