## Introduction to kubeadm
### kubeadm overview
### kubeadm features
### Solution description
- This solution uses kubeadm to deploy Kubernetes 1.20.4;
- etcd is deployed in stacked (co-located) mode on the master nodes;
- Keepalived: provides a highly available VIP;
- HAProxy: runs as a systemd service and reverse-proxies to port 6443 on the three masters;
- Other major components deployed include:
  - Metrics: resource metrics;
  - Dashboard: the Kubernetes web UI;
  - Helm: the Kubernetes package manager;
  - Ingress: exposes Kubernetes services;
  - Longhorn: dynamic storage provisioner for Kubernetes.
## Deployment planning
### Node planning
| Hostname | IP | Role | Services |
| --- | --- | --- | --- |
| master01 | 172.16.10.11 | Kubernetes master node | containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| master02 | 172.16.10.12 | Kubernetes master node | containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| master03 | 172.16.10.13 | Kubernetes master node | containerd, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| worker01 | 172.16.10.21 | Kubernetes worker node | containerd, kubelet, proxy, calico |
| worker02 | 172.16.10.22 | Kubernetes worker node | containerd, kubelet, proxy, calico |
| worker03 | 172.16.10.23 | Kubernetes worker node | containerd, kubelet, proxy, calico |
High availability in Kubernetes mainly refers to high availability of the control plane, i.e. multiple sets of master components and etcd members, with the worker nodes reaching the masters through a load balancer.
Characteristics of the stacked topology, where etcd is co-located with the master components:
- Requires fewer machines
- Simple to deploy and easy to manage
- Easy to scale out
- Higher risk: if one host goes down, the cluster loses a master and an etcd member at the same time, so overall redundancy is noticeably reduced
Note: this lab uses a Keepalived + HAProxy architecture to provide Kubernetes high availability.
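To make the architecture concrete, below is a minimal sketch of the kind of HAProxy configuration this design implies: a TCP frontend listening on the VIP port 16443 and forwarding to kube-apiserver (6443) on the three masters. The actual file is generated later by the hakek8s.sh script; the port, timeouts, and server names here are assumptions consistent with the controlPlaneEndpoint used further down, not the original file.

```
# Sketch of /etc/haproxy/haproxy.cfg (assumption; generated by hakek8s.sh in practice)
global
    daemon
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m
frontend k8s-apiserver
    bind *:16443
    default_backend k8s-masters
backend k8s-masters
    balance roundrobin
    server master01 172.16.10.11:6443 check
    server master02 172.16.10.12:6443 check
    server master03 172.16.10.13:6443 check
```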
## Initial preparation
```shell
[root@master01 ~]# hostnamectl set-hostname master01        # change accordingly on the other nodes
[root@master01 ~]# cat >> /etc/hosts << EOF
172.16.10.11 master01
172.16.10.12 master02
172.16.10.13 master03
172.16.10.21 worker01
172.16.10.22 worker02
172.16.10.23 worker03
EOF
[root@master01 ~]# wget http://down.linuxsb.com/k8sconinit.sh
```
Note: this step only needs to be performed on the master01 node.
Some features may require a kernel upgrade; see "018. Upgrading the Linux kernel" for the procedure. On kernels 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack.
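The contents of k8sconinit.sh are not reproduced in this article. Purely as an illustration, a typical node-preparation script covers the steps below (disabling swap, SELinux and firewalld, loading br_netfilter, enabling bridged traffic and IP forwarding); this is an assumption about what such a script usually does, not the actual script:

```shell
#!/bin/sh
# Sketch of typical Kubernetes node preparation (not the original k8sconinit.sh)
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```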
### SSH trust configuration
To make it easier to distribute files and run commands remotely, this lab sets up SSH trust from the master01 node to the other nodes.
```shell
[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker03
```
Note: this step only needs to be performed on the master01 node.
### Other preparation
`[root@master01 ~]# vi environment.sh`

```shell
#!/bin/sh
#****************************************************************#
# ScriptName: environment.sh
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2020-05-30 16:30
# Version:
#***************************************************************#
# IP array of the cluster MASTER machines
export MASTER_IPS=(172.16.10.11 172.16.10.12 172.16.10.13)
# hostname array corresponding to the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# IP array of the cluster NODE machines
export NODE_IPS=(172.16.10.21 172.16.10.22 172.16.10.23)
# hostname array corresponding to the NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)
# IP array of all cluster machines
export ALL_IPS=(172.16.10.11 172.16.10.12 172.16.10.13 172.16.10.21 172.16.10.22 172.16.10.23)
# hostname array corresponding to all IPs
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
```
```shell
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp -rp /etc/hosts root@${all_ip}:/etc/hosts
    scp -rp k8sconinit.sh root@${all_ip}:/root/
    ssh root@${all_ip} "bash /root/k8sconinit.sh"
  done
```
Note: the latest containerd version compatible with Kubernetes 1.20.4 is 1.4.3.
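The containerd installation itself is handled by the node-preparation step and is not shown in this excerpt. As a hedged sketch only (package name and edit are assumptions, not the article's original commands), a typical containerd 1.4.3 setup looks like:

```shell
# Sketch (assumption): install containerd 1.4.3 and generate its default configuration
yum install -y containerd.io-1.4.3
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Edit /etc/containerd/config.toml so the runc runtime uses the systemd cgroup driver:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
systemctl enable --now containerd
```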
## Cluster deployment
### Required packages
The following packages need to be installed on every machine:
- kubeadm: the command used to bootstrap the cluster;
- kubelet: runs on every node in the cluster and starts pods and containers;
- kubectl: the command-line tool used to talk to the cluster.
kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions satisfy the version requirements of the Kubernetes control plane installed by kubeadm. If the versions do not match, unexpected errors or problems may occur.
For detailed installation of these components, see "Appendix 001. Introduction to and usage of kubectl".
Note: for the component versions compatible with Kubernetes 1.20.4, refer to: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.4.md.
### Installation
```shell
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF"
    ssh root@${all_ip} "yum install -y kubeadm-1.20.4-0.x86_64 kubelet-1.20.4-0.x86_64 kubectl-1.20.4-0.x86_64 --disableexcludes=kubernetes"
    ssh root@${all_ip} "systemctl enable kubelet"
  done

[root@master01 ~]# yum search kubelet --showduplicates        # list the available versions
```
Note: the above only needs to be run on the master01 node, which installs the packages on all nodes automatically. kubelet does not need to be started at this point; it will be started automatically during initialization. If you start it now it will report errors, which can be ignored.
Note: three dependencies are installed at the same time: cri-tools, kubernetes-cni, and socat:
- socat: a dependency of kubelet;
- cri-tools: the command-line tools for the CRI (Container Runtime Interface).
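An optional, quick check (a sketch, not part of the original article) to confirm every node ends up with the expected versions:

```shell
# Sketch: verify the installed kubeadm/kubelet/kubectl versions on every node
for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "kubeadm version -o short; kubelet --version; kubectl version --client --short"
  done
```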
## Deploying the HA components
### HAProxy installation
```shell
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel libnl3-devel"
    ssh root@${master_ip} "wget http://down.linuxsb.com/software/haproxy-2.3.5.tar.gz"
    ssh root@${master_ip} "tar -zxvf haproxy-2.3.5.tar.gz"
    ssh root@${master_ip} "cd haproxy-2.3.5/ && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy"
    ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
    ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
    ssh root@${master_ip} "mkdir -p /etc/haproxy && cp -r /root/haproxy-2.3.5/examples/errorfiles/ /usr/local/haproxy/"
  done
```
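The article states that HAProxy runs as a systemd service, but the unit file itself is not shown in this excerpt. A hedged sketch of a minimal unit (the path and options are assumptions; HAProxy is built with USE_SYSTEMD=1 above, so the master-worker `-Ws` mode with `Type=notify` applies):

```
# Sketch of /usr/lib/systemd/system/haproxy.service (assumption, not the original unit file)
[Unit]
Description=HAProxy Load Balancer
After=network-online.target

[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
Type=notify

[Install]
WantedBy=multi-user.target
```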
### Keepalived installation
```shell
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "yum -y install curl gcc gcc-c++ make libnl libnl-devel libnl3-devel libnfnetlink-devel openssl-devel"
    ssh root@${master_ip} "wget http://down.linuxsb.com/software/keepalived-2.2.1.tar.gz"
    ssh root@${master_ip} "tar -zxvf keepalived-2.2.1.tar.gz"
    ssh root@${master_ip} "cd keepalived-2.2.1/ && LDFLAGS=\"$LDFLAGS -L /usr/local/openssl/lib/\" ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
    ssh root@${master_ip} "systemctl enable keepalived && systemctl start keepalived"
  done
```
Note: the above only needs to be run on the master01 node, which installs Keepalived on all master nodes automatically. If the build fails with the error `undefined reference to 'OPENSSL_init_ssl'`, pass the openssl library path explicitly:

`LDFLAGS="$LDFLAGS -L /usr/local/openssl/lib/" ./configure --sysconf=/etc --prefix=/usr/local/keepalived`
### Creating the configuration files
<pre><code class="language-shell">[root@master01 ~]# wget http://down.linuxsb.com/hakek8s.sh #拉取自动部署脚本
[root@master01 ~]# chmod u+x hakek8s.sh
</code></pre>
<p><code>[root@master01 ~]# vi hakek8s.sh</code></p>
<pre><code class="language-shell">#!/bin/sh
#****************************************************************#
ScriptName: hakek8s.sh
Author: xhy
Create Date: 2020-06-08 20:00
Modify Author: xhy
Modify Date: 2020-06-15 18:15
Version: v2
#***************************************************************#
####################
set variables below to create the config files, all files will create at ./config directory
####################
master keepalived virtual ip address
export K8SHA_VIP=172.16.10.254
master01 ip address
export K8SHA_IP1=172.16.10.11
master02 ip address
export K8SHA_IP2=172.16.10.12
master03 ip address
export K8SHA_IP3=172.16.10.13
master01 hostname
export K8SHA_HOST1=master01
master02 hostname
export K8SHA_HOST2=master02
master03 hostname
export K8SHA_HOST3=master03
master01 network interface name
export K8SHA_NETINF1=eth0
master02 network interface name
export K8SHA_NETINF2=eth0
master03 network interface name
export K8SHA_NETINF3=eth0
keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
</code></pre>
`[root@master01 ~]# ./hakek8s.sh`
Explanation: the above only needs to be run on the master01 node. Running the hakek8s.sh script generates the following configuration files:

- kubeadm-config.yaml: the kubeadm initialization configuration file, placed in the current directory
- keepalived: the keepalived configuration files, placed in /etc/keepalived on each master node
- haproxy: the haproxy configuration file, placed in /etc/haproxy/ on each master node
- calico.yaml: the calico network component deployment file, placed in the config/calico/ directory
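The generated keepalived configuration is not reproduced in this excerpt. Purely as an illustration (an assumption about the script's output, not the actual file), a VRRP instance advertising the VIP on master01 would look roughly like this, built from the variables set above:

```
# Sketch of /etc/keepalived/keepalived.conf on master01 (assumption; generated by hakek8s.sh in practice)
vrrp_instance VI_1 {
    state MASTER                      # BACKUP on master02/master03
    interface eth0                    # K8SHA_NETINF1
    virtual_router_id 51
    priority 102                      # lower priority on the other masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d   # K8SHA_KEEPALIVED_AUTH
    }
    virtual_ipaddress {
        172.16.10.254                 # K8SHA_VIP
    }
}
```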
```
[root@master01 ~]# cat kubeadm-config.yaml        # review the cluster initialization configuration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"                   # service subnet
  podSubnet: "10.10.0.0/16"                       # pod subnet
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.20.4"                      # version to install
controlPlaneEndpoint: "172.16.10.254:16443"       # API server VIP endpoint
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 127.0.0.1
  - 172.16.10.11
  - 172.16.10.12
  - 172.16.10.13
  - 172.16.10.254
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
```
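The cluster initialization step itself is not included in this excerpt. Under the configuration above, it would typically be started on master01 with something like the following hedged sketch (not the article's exact command):

```shell
# Sketch: initialize the first control-plane node with the configuration above
kubeadm init --config=kubeadm-config.yaml --upload-certs
# Then run the printed "kubeadm join ... --control-plane ..." command on master02/master03,
# and the plain "kubeadm join ..." command on the worker nodes.
```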
The excerpt then turns to the dashboard deployment manifest (recommended.yaml, applied in the next step); the modifications made to it are shown below:

```
…… # everything below is commented out
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
……
kind: Deployment
……
  replicas: 3                                     # adjust to 3 replicas
……
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: IfNotPresent           # change the image pull policy
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --tls-key-file=tls.key
            - --tls-cert-file=tls.crt
            - --token-ttl=3600                    # append the args above
……
      nodeSelector:
        "kubernetes.io/os": linux
        "dashboard": "yes"                        # schedule onto the master nodes
……
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  type: NodePort                                  # added
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30000                             # added
  selector:
    k8s-app: dashboard-metrics-scraper
……
  replicas: 3                                     # adjust to 3 replicas
……
      nodeSelector:
        "beta.kubernetes.io/os": linux
        "dashboard": "yes"                        # schedule onto the master nodes
……
```
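The nodeSelector above requires the target nodes to carry a dashboard=yes label; the labeling step is not shown in this excerpt, but it would typically be something like the following sketch:

```shell
# Sketch: label the master nodes so the dashboard pods can be scheduled onto them
kubectl label nodes master01 master02 master03 dashboard=yes
```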
### Deployment
```shell
[root@master01 dashboard]# kubectl apply -f recommended.yaml
[root@master01 dashboard]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get services -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get pods -o wide -n kubernetes-dashboard
```
![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f031/007.png)

Note: NodePort 30001/TCP on master01 maps to port 443 of the dashboard pod.
### Creating an administrator account
Note: dashboard v2 does not create an account with administrator privileges by default; one can be created as follows.
```
[root@master01 dashboard]# vi dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
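The binding above references an admin-user ServiceAccount in the kubernetes-dashboard namespace. Its creation is not shown in this excerpt; if it is not created elsewhere, a minimal definition such as the following (an assumed addition) can be placed in the same file before the ClusterRoleBinding:

```yaml
# Sketch (assumption): ServiceAccount referenced by the ClusterRoleBinding above
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
```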
`[root@master01 dashboard]# kubectl apply -f dashboard-admin.yaml`
## Exposing the dashboard via ingress
### Creating the ingress TLS secret
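The tls.crt/tls.key pair used below is assumed to already exist under /root/dashboard/certs; its creation is not shown in this excerpt. If needed, a self-signed pair for dashboard.odocker.com could be generated roughly like this sketch:

```shell
# Sketch: create a self-signed certificate for dashboard.odocker.com
mkdir -p /root/dashboard/certs && cd /root/dashboard/certs
openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=dashboard.odocker.com"
```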
```shell
[root@master01 ~]# cd /root/dashboard/certs
[root@master01 certs]# kubectl -n kubernetes-dashboard create secret tls kubernetes-dashboard-tls --cert=tls.crt --key=tls.key
[root@master01 certs]# kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-tls
```

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/010.png)
### Creating the ingress rule
```
[root@master01 ~]# cd /root/dashboard/
[root@master01 dashboard]# vi dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_session_reuse off;
spec:
  rules:
  - host: dashboard.odocker.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - dashboard.odocker.com
    secretName: kubernetes-dashboard-tls
```
```shell
[root@master01 dashboard]# kubectl apply -f dashboard-ingress.yaml
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get ingress
```

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f031/008.png)
## Accessing the dashboard
### Importing the certificate
Import the dashboard.odocker.com certificate into the browser and mark it as trusted; the import steps are omitted here.
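dashboard.odocker.com must also resolve to the ingress entry point on the machine running the browser. An illustrative hosts entry is shown below; pointing it at the cluster VIP is an assumption, since how the ingress controller is exposed is not covered in this excerpt:

```shell
# Sketch: resolve the dashboard hostname to the cluster VIP (or the ingress controller's NodePort/LB address)
echo "172.16.10.254 dashboard.odocker.com" >> /etc/hosts
```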
### Creating a kubeconfig file
Using a raw token is relatively cumbersome; the token can instead be added to a kubeconfig file, and that kubeconfig file is then used to log in to the dashboard.
```shell
[root@master01 dashboard]# ADMIN_SECRET=$(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
[root@master01 dashboard]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kubernetes-dashboard ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@master01 dashboard]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://172.16.10.254:16443 \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig       # set the cluster parameters
[root@master01 dashboard]# kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig       # set the client credentials, using the token created above
[root@master01 dashboard]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig       # set the context parameters
[root@master01 dashboard]# kubectl config use-context default --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig    # set the default context
```
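As a quick sanity check (not part of the original article), the generated kubeconfig can be verified from the command line before importing it into the browser:

```shell
# Sketch: confirm the token-based kubeconfig can reach the cluster
kubectl --kubeconfig=ucloud-ngkeconk8s-dashboard-admin.kubeconfig get nodes
```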
Import the ucloud-ngkeconk8s-dashboard-admin.kubeconfig file so that the browser can use it to log in.
### Testing dashboard access
This lab accesses the dashboard through the domain exposed by the ingress, https://dashboard.odocker.com, logging in with the ucloud-ngkeconk8s-dashboard-admin.kubeconfig file.

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f031/009.png)

Note:
For more dashboard access methods and authentication options, see [Appendix 004. Introduction to and usage of the Kubernetes Dashboard](https://www.cnblogs.com/itzgr/p/11082342.html).
The complete dashboard login flow is described at: https://www.cnadn.net/post/2613.html
## Longhorn storage deployment
### Longhorn overview
Longhorn is an open-source distributed block storage system for Kubernetes.
Note: for more details see https://github.com/longhorn/longhorn.
### Longhorn deployment
```shell
[root@master01 ~]# source environment.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "yum -y install iscsi-initiator-utils"
  done
```

Note: this must be installed on every node.
```shell
[root@master01 ~]# mkdir longhorn
[root@master01 ~]# cd longhorn/
[root@master01 longhorn]# wget \
https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```
```
[root@master01 longhorn]# vi longhorn.yaml
#……
……
kind: DaemonSet
……
        imagePullPolicy: IfNotPresent
……
#……
```
```shell
[root@master01 longhorn]# kubectl apply -f longhorn.yaml
[root@master01 longhorn]# kubectl -n longhorn-system get pods -o wide
```

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f031/010.png)

Note: if the deployment goes wrong it can be deleted and recreated; if the namespace cannot be deleted, it can be removed with the following steps:
```shell
wget https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
rm -rf /var/lib/longhorn/
kubectl delete -f uninstall.yaml
kubectl delete -f longhorn.yaml
```
### Creating a dynamic StorageClass
Note: the Longhorn deployment already creates a StorageClass by default; one can also be created manually with a YAML manifest as follows.
```
[root@master01 longhorn]# kubectl get sc
NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
……
longhorn   driver.longhorn.io   Delete          Immediate           true                   15m
```
```
[root@master01 longhorn]# vi longhornsc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhornsc
provisioner: driver.longhorn.io        # Longhorn CSI provisioner, matching the default sc listed above
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""
```
`[root@master01 longhorn]# kubectl create -f longhornsc.yaml`

### Testing PV and PVC
```
[root@master01 longhorn]# vi longhornpod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```
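The excerpt ends with the PVC. To actually exercise the volume, a test Pod mounting the claim would typically be appended to longhornpod.yaml, along the lines of the sketch below; the Pod name and image are assumptions, not the article's original manifest.

```yaml
# Sketch (assumption): a test Pod that mounts the longhorn-pvc claim
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-pod
spec:
  containers:
  - name: volume-test
    image: nginx:stable
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-pvc
```

After applying the file, `kubectl get pvc,pod` should show the claim bound through the longhorn StorageClass and the Pod running.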
Original: https://www.cnblogs.com/itzgr/p/14657454.html
Author: 木二
Title: Appendix 031. Kubernetes v1.20.4 High-Availability Deployment Architecture II