- Introduction to kubeadm
- kubeadm overview
- kubeadm features
- Solution description
- Deployment planning
- Node planning
- Initial preparation
- SSH trust configuration
- Other preparation
- Cluster deployment
- Related component packages
- Installation
- Deploying HA components I
- Installing Keepalived
- Creating configuration files
- Starting Keepalived
- Starting Nginx
- Initializing the cluster - Master
- Pulling images
- Initializing on the Master
- Adding the other master nodes
- Installing the CNI plugin
- CNI plugin introduction
- Setting labels
- Deploying calico
- Modifying the node port range
- Enabling the insecure port
- Deploying HA components II
- High availability notes
- Taints and labels
- Containerized high availability
- Adding Worker nodes
- Adding Worker nodes
- Verification
- Metrics deployment
- Metrics introduction
- Enabling the aggregation layer
- Fetching the deployment files
- Deployment
- Viewing resource metrics
- Nginx ingress deployment
- Dashboard deployment
- Setting labels
- Creating certificates
- Manually creating the secret
- Downloading the yaml
- Modifying the yaml
- Deployment
- Creating an administrator account
- Exposing the dashboard via ingress
- Creating the ingress tls
- Creating the ingress policy
- Accessing the dashboard
- Importing the certificate
- Creating a kubeconfig file
- Testing dashboard access
- Longhorn storage deployment
- Longhorn overview
- Longhorn deployment
- Dynamic sc creation
- Testing PV and PVC
- Exposing Longhorn via Ingress
- Verification
- Helm deployment
Introduction to kubeadm
kubeadm overview
kubeadm features
Solution description
- This solution uses kubeadm to deploy Kubernetes 1.20.0;
- etcd is co-located (stacked) on the master nodes;
- Keepalived: provides the highly available VIP;
- Nginx: runs as Pods on Kubernetes (the "in Kubernetes" mode), reverse-proxying to port 6443 on the 3 masters;
- Other major components deployed:
- Metrics: resource metrics;
- Dashboard: the Kubernetes web UI;
- Helm: the Kubernetes Helm package manager;
- Ingress: Kubernetes service exposure;
- Longhorn: dynamic storage for Kubernetes.
Deployment planning
Node planning
| Hostname | IP | Type | Services |
| --- | --- | --- | --- |
| master01 | 172.24.8.71 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| master02 | 172.24.8.72 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| master03 | 172.24.8.73 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico |
| worker01 | 172.24.8.74 | Kubernetes worker node | docker, kubelet, proxy, calico |
| worker02 | 172.24.8.75 | Kubernetes worker node | docker, kubelet, proxy, calico |
| worker03 | 172.24.8.76 | Kubernetes worker node | docker, kubelet, proxy, calico |
Kubernetes high availability mainly means high availability of the control plane: multiple sets of Master components plus etcd, with the worker nodes reaching the Masters through a load balancer.
Characteristics of the architecture where etcd is co-located (stacked) with the Master components:
- fewer machines required
- simple to deploy and easy to manage
- easy to scale horizontally
- higher risk: if one host goes down, the cluster loses both a Master and an etcd member at once, which significantly reduces cluster redundancy
Tip: this lab uses a Keepalived + Nginx architecture for Kubernetes high availability.
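The Nginx side of this architecture amounts to a plain TCP (stream) reverse proxy in front of the three apiservers. As a rough sketch of such a configuration (illustrative only; the nginx-lb config generated later in this document may differ in detail, and the listen port 16443 is the one used by this lab):

```nginx
# Minimal L4 load-balancer sketch for the three kube-apiservers
# (illustrative; assumes listen port 16443 and the lab's master IPs)
stream {
    upstream apiserver {
        server 172.24.8.71:6443 max_fails=3 fail_timeout=30s;
        server 172.24.8.72:6443 max_fails=3 fail_timeout=30s;
        server 172.24.8.73:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;
        proxy_pass apiserver;
    }
}
```

Together with the Keepalived VIP, this gives clients a single stable endpoint (VIP:16443) that survives the loss of any one master.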
Initial preparation
[root@master01 ~]# hostnamectl set-hostname master01 #modify the other nodes accordingly
[root@master01 ~]# cat >> /etc/hosts << EOF
172.24.8.71 master01
172.24.8.72 master02
172.24.8.73 master03
172.24.8.74 worker01
172.24.8.75 worker02
172.24.8.76 worker03
EOF
[root@master01 ~]# wget http://down.linuxsb.com/k8sinit.sh
Tip: this step only needs to be performed on master01.
Some features may require a newer kernel; the upgrade procedure is covered in "018. Upgrading the Linux kernel". Since kernel 4.19, nf_conntrack_ipv4 has been renamed to nf_conntrack.
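Scripts can handle the module rename by branching on the running kernel version. A small sketch (purely illustrative, not part of k8sinit.sh):

```shell
# Pick the conntrack kernel module name based on the running kernel:
# 4.19 and later use nf_conntrack, older kernels use nf_conntrack_ipv4.
MAJOR=$(uname -r | cut -d. -f1)
MINOR=$(uname -r | cut -d. -f2)
if [ "$MAJOR" -gt 4 ] || { [ "$MAJOR" -eq 4 ] && [ "$MINOR" -ge 19 ]; }; then
  CONNTRACK_MOD=nf_conntrack
else
  CONNTRACK_MOD=nf_conntrack_ipv4
fi
echo "${CONNTRACK_MOD}"
```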
SSH trust configuration
To simplify remote file distribution and command execution, this lab configures an ssh trust relationship from master01 to the other nodes.
[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker03
Tip: this step only needs to be performed on master01.
Other preparation
[root@master01 ~]# vi environment.sh
#!/bin/sh
#****************************************************************#
# ScriptName: environment.sh
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2020-05-30 16:30
# Version:
#***************************************************************#
# IP array of the cluster MASTER machines
export MASTER_IPS=(172.24.8.71 172.24.8.72 172.24.8.73)
# hostname array matching the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# IP array of the cluster NODE machines
export NODE_IPS=(172.24.8.74 172.24.8.75 172.24.8.76)
# hostname array matching the NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)
# IP array of all cluster machines
export ALL_IPS=(172.24.8.71 172.24.8.72 172.24.8.73 172.24.8.74 172.24.8.75 172.24.8.76)
# hostname array matching all cluster IPs
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -rp /etc/hosts root@${all_ip}:/etc/hosts
scp -rp k8sinit.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash /root/k8sinit.sh"
done
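The IP and hostname arrays in environment.sh are index-aligned, so they can be iterated in parallel when a command needs both values. A small sketch (bash array syntax, using the lab's values):

```shell
# Iterate the masters, pairing each IP with its hostname by array index
# (same index-aligned convention as environment.sh)
MASTER_IPS=(172.24.8.71 172.24.8.72 172.24.8.73)
MASTER_NAMES=(master01 master02 master03)
for i in "${!MASTER_IPS[@]}"; do
  echo "${MASTER_NAMES[$i]} -> ${MASTER_IPS[$i]}"
done
```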
Tip: the latest docker version compatible with Kubernetes 1.20.0 is 19.03.
Cluster deployment
Related component packages
The following packages need to be installed on every machine:
- kubeadm: the command used to bootstrap the cluster;
- kubelet: runs on every node in the cluster and starts pods and containers;
- kubectl: the command-line tool for talking to the cluster.
kubeadm does not install or manage kubelet or kubectl, so you must make sure their versions satisfy the requirements of the Kubernetes control plane installed via kubeadm. Version mismatches may lead to unexpected errors or problems.
For installation details of these components, see Appendix 001: kubectl introduction and usage.
Tip: for the versions of all components compatible with Kubernetes 1.20, see: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md.
Installation
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF"
ssh root@${all_ip} "yum install -y kubeadm-1.20.0-0.x86_64 kubelet-1.20.0-0.x86_64 kubectl-1.20.0-0.x86_64 --disableexcludes=kubernetes"
ssh root@${all_ip} "systemctl enable kubelet"
done
[root@master01 ~]# yum list kubelet --showduplicates #check the available versions
Tip: as above, only master01 needs to run this; it installs the packages on all nodes automatically. Do not start kubelet at this point: it is started automatically during initialization, and starting it now produces errors that can be ignored.
Note: three dependencies are installed alongside: cri-tools, kubernetes-cni and socat:
- socat: a dependency of kubelet;
- cri-tools: the command-line tool for the CRI (Container Runtime Interface).
Deploying HA components I
Installing Keepalived
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install curl gcc gcc-c++ make libnl libnl-devel libnl3-devel libnfnetlink-devel openssl-devel"
ssh root@${master_ip} "wget http://down.linuxsb.com/software/keepalived-2.1.5.tar.gz"
ssh root@${master_ip} "tar -zxvf keepalived-2.1.5.tar.gz"
ssh root@${master_ip} "cd keepalived-2.1.5/ && LDFLAGS=\"$LDFLAGS -L /usr/local/openssl/lib/\" ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
ssh root@${master_ip} "systemctl enable keepalived && systemctl start keepalived"
done
Tip: as above, only master01 needs to run this; it installs Keepalived on all master nodes automatically. If the build fails with: undefined reference to 'OPENSSL_init_ssl', pass the openssl lib path:

LDFLAGS="$LDFLAGS -L /usr/local/openssl/lib/" ./configure --sysconf=/etc --prefix=/usr/local/keepalived
Creating configuration files
[root@master01 ~]# wget http://down.linuxsb.com/ngkek8s.sh #fetch the auto-deploy script
[root@master01 ~]# chmod u+x ngkek8s.sh
[root@master01 ~]# vi ngkek8s.sh #leave everything else at the defaults
#!/bin/sh
#****************************************************************#
# ScriptName: k8s_ha.sh
# Author: xhy
# Create Date: 2020-05-13 16:32
# Modify Author: xhy
# Modify Date: 2020-06-12 12:53
# Version: v2
#***************************************************************#
#######################################
# set variables below to create the config files, all files will be created in the ./config directory
#######################################
# master keepalived virtual ip address
export K8SHA_VIP=172.24.8.254
# master01 ip address
export K8SHA_IP1=172.24.8.71
# master02 ip address
export K8SHA_IP2=172.24.8.72
# master03 ip address
export K8SHA_IP3=172.24.8.73
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master03 hostname
export K8SHA_HOST3=master03
# master01 network interface name
export K8SHA_NETINF1=eth0
# master02 network interface name
export K8SHA_NETINF2=eth0
# master03 network interface name
export K8SHA_NETINF3=eth0
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
[root@master01 ~]# ./ngkek8s.sh

Explanation: as above, only master01 needs this. Running the ngkek8s.sh script generates the following configuration files:

- kubeadm-config.yaml: the kubeadm init configuration, in the current directory
- keepalived: the keepalived configuration, under /etc/keepalived on each master node
- nginx-lb: the nginx-lb load balancer configuration, under /etc/kubernetes/nginx-lb/ on each master node
- calico.yaml: the calico network component manifest, under the config/calico/ directory
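For reference, the keepalived side of the generated configuration boils down to one VRRP instance that floats the VIP across the masters. A minimal sketch for master01 (illustrative only; it assumes interface eth0 and the variables set above, and the generated file may differ in detail):

```conf
! Minimal keepalived.conf sketch for master01 (illustrative only)
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER            ! BACKUP on master02/master03
    interface eth0
    virtual_router_id 51
    priority 100            ! lower priority on the other masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d
    }
    virtual_ipaddress {
        172.24.8.254
    }
}
```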
[root@master01 ~]# cat kubeadm-config.yaml #review the cluster init configuration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16" #the svc subnet
  podSubnet: "10.10.0.0/16" #the Pod subnet
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.20.0" #the version to install
controlPlaneEndpoint: "172.24.8.254:16443" #the API VIP address
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 127.0.0.1
  - 172.24.8.71
  - 172.24.8.72
  - 172.24.8.73
  - 172.24.8.254
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"

…… #comment out the entire block below
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
……
kind: Deployment
……
  replicas: 3 #adjust to 3 replicas
……
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: IfNotPresent #change the image pull policy
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --tls-key-file=tls.key
            - --tls-cert-file=tls.crt
            - --token-ttl=3600 #append the args above
……
      nodeSelector:
        "kubernetes.io/os": linux
        "dashboard": "yes" #schedule onto the master nodes
……
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  type: NodePort #added
  ports:
    - port: 8000
      nodePort: 30000 #added
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
……
  replicas: 3 #adjust to 3 replicas
……
      nodeSelector:
        "beta.kubernetes.io/os": linux
        "dashboard": "yes" #schedule onto the master nodes
……
Deployment
[root@master01 dashboard]# kubectl apply -f recommended.yaml
[root@master01 dashboard]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get services -n kubernetes-dashboard
[root@master01 dashboard]# kubectl get pods -o wide -n kubernetes-dashboard

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/009.png)

Tip: NodePort 30001/TCP on master01 maps to port 443 of the dashboard pod.
Creating an administrator account
Tip: dashboard v2 does not create an account with administrator privileges by default; one can be created as follows.
[root@master01 dashboard]# vi dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@master01 dashboard]# kubectl apply -f dashboard-admin.yaml
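Note that the ClusterRoleBinding above references a ServiceAccount named admin-user. If it has not already been created elsewhere, it can be created first with a manifest like the following (a sketch; same names as the binding above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```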
Exposing the dashboard via ingress
Creating the ingress tls
[root@master01 ~]# cd /root/dashboard/certs
[root@master01 certs]# kubectl -n kubernetes-dashboard create secret tls kubernetes-dashboard-tls --cert=tls.crt --key=tls.key
[root@master01 certs]# kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-tls

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/010.png)
Creating the ingress policy
[root@master01 ~]# cd /root/dashboard/
[root@master01 dashboard]# vi dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_session_reuse off;
spec:
  rules:
  - host: kubernetes.linuxsb.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - kubernetes.linuxsb.com
    secretName: kubernetes-dashboard-tls

[root@master01 dashboard]# kubectl apply -f dashboard-ingress.yaml
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get ingress

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/011.png)
Accessing the dashboard
Importing the certificate
Import the kubernetes.linuxsb.com certificate into the browser and mark it as trusted (import steps omitted).
Creating a kubeconfig file
Using the token directly is relatively cumbersome; it can be added to a kubeconfig file instead, and the dashboard accessed with that KubeConfig file.
[root@master01 dashboard]# ADMIN_SECRET=$(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
[root@master01 dashboard]# DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kubernetes-dashboard ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
[root@master01 dashboard]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://172.24.8.254:16443 \
  --kubeconfig=ucloud-ngkek8s-dashboard-admin.kubeconfig # set the cluster parameters
[root@master01 dashboard]# kubectl config set-credentials dashboard_user \
  --token=${DASHBOARD_LOGIN_TOKEN} \
  --kubeconfig=ucloud-ngkek8s-dashboard-admin.kubeconfig # set the client credentials, using the token created above
[root@master01 dashboard]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=dashboard_user \
  --kubeconfig=ucloud-ngkek8s-dashboard-admin.kubeconfig # set the context parameters
[root@master01 dashboard]# kubectl config use-context default --kubeconfig=ucloud-ngkek8s-dashboard-admin.kubeconfig # set the default context

Import the ucloud-ngkek8s-dashboard-admin.kubeconfig file so the browser can log in with it.
Testing dashboard access
This lab accesses the dashboard via the ingress-exposed domain https://kubernetes.linuxsb.com, using the ucloud-ngkek8s-dashboard-admin.kubeconfig file.

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/012.png)

Tip: for more dashboard access and authentication methods, see Appendix 004: Kubernetes Dashboard introduction and usage (https://www.cnblogs.com/itzgr/p/11082342.html). The complete dashboard login flow is covered at: https://www.cnadn.net/post/2613.html
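The DASHBOARD_LOGIN_TOKEN assignment above simply scrapes the token out of `kubectl describe secret` output with grep and awk. The pipeline can be exercised against mock output, no cluster needed (the secret name and token value below are made up):

```shell
# Demonstrate the token-extraction pipeline on mock `kubectl describe secret` output
mock_describe() {
cat <<'EOF'
Name:         admin-user-token-abcde
Namespace:    kubernetes-dashboard
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiJ9.mock.token
EOF
}
# Keep only the line starting with "token" and print its second field
DASHBOARD_LOGIN_TOKEN=$(mock_describe | grep -E '^token' | awk '{print $2}')
echo "${DASHBOARD_LOGIN_TOKEN}"
```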
Longhorn storage deployment
Longhorn overview
Longhorn is an open-source distributed block storage system for Kubernetes.
Tip: for more details see: https://github.com/longhorn/longhorn.
Longhorn deployment
[root@master01 ~]# source environment.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "yum -y install iscsi-initiator-utils"
done

Tip: it must be installed on all nodes.

[root@master01 ~]# mkdir longhorn
[root@master01 ~]# cd longhorn/
[root@master01 longhorn]# wget \
https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

[root@master01 longhorn]# vi longhorn.yaml
#……
……
kind: DaemonSet
……
imagePullPolicy: IfNotPresent
……
#……

[root@master01 longhorn]# kubectl apply -f longhorn.yaml
[root@master01 longhorn]# kubectl -n longhorn-system get pods -o wide

![](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f028/013.png)

Tip: if the deployment fails it can be deleted and recreated; if the namespace cannot be deleted, remove everything as follows:

wget https://raw.githubusercontent.com/longhorn/longhorn/master/uninstall/uninstall.yaml
rm -rf /var/lib/longhorn/
kubectl apply -f uninstall.yaml
kubectl delete -f longhorn.yaml
Dynamic sc creation
Tip: a default sc is already created when the Longhorn deployment completes; one can also be created manually with yaml as follows.
[root@master01 longhorn]# kubectl get sc
NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
……
longhorn   driver.longhorn.io   Delete          Immediate           true                   15m

[root@master01 longhorn]# vi longhornsc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhornsc
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""

[root@master01 longhorn]# kubectl create -f longhornsc.yaml
Testing PV and PVC
[root@master01 longhorn]# vi longhornpod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
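To actually exercise the claim, a test Pod can mount it. A sketch that could complete longhornpod.yaml (the Pod name, container image, and mount path below are illustrative choices, not from the original):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-pod        # illustrative name
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data      # illustrative mount path
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-pvc   # the PVC defined above
```

After applying both objects, `kubectl get pvc longhorn-pvc` should show the claim Bound once the Pod is scheduled.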
Original: https://www.cnblogs.com/itzgr/p/14173665.html
Author: 木二
Title: 附028.Kubernetes_v1.20.0高可用部署架构二 (Appendix 028: Kubernetes v1.20.0 high-availability deployment architecture II)