I. Set up the basic environment (requires superuser privileges)
Install the selinux control utilities:
apt-get install -y selinux-utils
Disable selinux:
setenforce 0
Reboot the operating system:
shutdown -r now
Check whether selinux is off:
getenforce
Disabled (or Permissive) in the output means it is no longer enforcing.
2. Disable the swap partition
swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab
After rebooting, use free -m to check the partition state: every value in the Swap row should be 0. If you later need to edit /etc/fstab to mount the swap partition again, first remount the root filesystem read-write:
mount -o remount,rw /
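The `sed -i 's/.*swap.*/#&/' /etc/fstab` part of the swap step comments out every fstab line mentioning swap. A minimal sketch of that substitution, run against a throwaway file rather than the real /etc/fstab:

```shell
# Write a two-line sample fstab to a scratch file (NOT /etc/fstab).
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/dev/sda2 none swap sw 0 0' > /tmp/fstab.demo
# "s/.*swap.*/#&/" replaces any line containing "swap" with "#" followed
# by the whole matched line (&), i.e. it comments the line out.
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The non-swap line is left untouched, so the root filesystem entry keeps working after the edit.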
Disable the firewall:
ufw disable
Configure DNS:
Edit /etc/systemd/resolved.conf with vi and add:
DNS=8.8.8.8 211.142.211.124 8.8.4.4
Restart the resolver service so the change takes effect:
systemctl restart systemd-resolved
Add host names to tell the master and worker nodes apart (apply the same configuration on every k8s node; vi /etc/hostname changes a node's own host name). This step is optional:
vi /etc/hosts
172.16.1.34 master
172.16.1.35 worker1
172.16.233.52 worker2
Run crontab -e and add the following entry, which syncs time from ntp once per hour (the fields are minute, hour, day of month, month, day of week; this fires at minute 0 of every hour):
0 */1 * * * ntpdate time1.aliyun.com
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# tcp_tw_recycle conflicts with kubernetes NAT and must be disabled,
# otherwise services become unreachable
net.ipv4.tcp_tw_recycle=0
# never use swap space unless the system is out of memory (OOM)
vm.swappiness=0
# do not check whether physical memory is sufficient
vm.overcommit_memory=1
# handle OOM instead of kernel panic
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
# disable the unused ipv6 stack to avoid triggering a docker bug
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
modprobe br_netfilter
cat > ipvs.sh << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x ipvs.sh && sh ipvs.sh
lsmod | grep ip_vs
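The configuration files above are written with the shell here-document pattern (`cat > file << EOF ... EOF`). A tiny self-contained sketch of how it behaves, using a scratch file:

```shell
# Everything between "<< EOF" and the closing "EOF" line is written
# verbatim into the target file.
cat > /tmp/demo.conf << EOF
key1=value1
key2=value2
EOF
cat /tmp/demo.conf
```

Note the closing EOF must sit on its own line; when the flattened commands in scraped copies of this article lose those line breaks, the heredocs stop working.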
II. Install docker
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
Verify that the key was added successfully:
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-cache madison docker-ce
apt-get install docker-ce=18.06.0~ce~3-0~ubuntu
Check whether the installation succeeded:
docker --version
tee /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://f3lu6ju1.mirror.aliyuncs.com"]
}
EOF
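Before restarting docker it can be worth confirming the file is valid JSON, since a syntax error stops the daemon from starting. A sketch of one way to do that (assumes python3 is installed; shown here against a scratch copy rather than /etc/docker/daemon.json):

```shell
# Scratch copy of the daemon.json written above.
cat > /tmp/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://f3lu6ju1.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero on malformed JSON.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
```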
systemctl enable docker
systemctl daemon-reload && systemctl restart docker
III. Install kubectl, kubelet, and kubeadm
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat > /etc/apt/sources.list.d/kubernetes.list << EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-cache madison kubectl
apt-get install -y kubelet=1.17.0-00 kubeadm=1.17.0-00 kubectl=1.17.0-00
Check whether the installation succeeded:
kubectl version
kubeadm version
systemctl enable kubelet.service
tee /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl daemon-reload && systemctl restart kubelet
IV. Deploy the k8s cluster
vi pullimages.sh
Add the following content to pullimages.sh:
#!/bin/bash
kubeadm config images list > /root/.a.txt
for i in $(cat /root/.a.txt)
do
  docker pull registry.aliyuncs.com/google_containers/${i#*/}
  docker tag registry.aliyuncs.com/google_containers/${i#*/} k8s.gcr.io/${i#*/}
  docker rmi registry.aliyuncs.com/google_containers/${i#*/}
done
echo "all initialized docker images downloaded, please use 'docker images' to check"
Grant pullimages.sh execute permission and run it:
chmod +x pullimages.sh && sh pullimages.sh
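The renaming trick inside pullimages.sh is the `${i#*/}` parameter expansion: it strips the shortest prefix ending in "/", i.e. the registry host, leaving just the image name and tag. A quick standalone demonstration:

```shell
# Sample entry of the kind "kubeadm config images list" prints.
i="k8s.gcr.io/kube-apiserver:v1.17.0"
# ${i#*/} removes "k8s.gcr.io/" (everything up to the first slash).
echo "${i#*/}"
# The script pulls the same name:tag from the Aliyun mirror instead:
echo "registry.aliyuncs.com/google_containers/${i#*/}"
```

This is why the script can pull from the reachable Aliyun mirror, then `docker tag` the result back to the k8s.gcr.io name that kubeadm expects.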
After the script finishes, you can view the pulled images with the following command:
docker images
All of the steps above must be done on both the master and the worker nodes; the steps below differ between master and workers. On the master, initialize the cluster:
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=172.16.15.43 | tee kubeadm-init.log
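For reference, a brief annotation of what each flag in the init command does (the IP and CIDR values are this article's example network; in particular, --apiserver-advertise-address must be your own master's IP):

```shell
# --image-repository registry.aliyuncs.com/google_containers
#     pull the control-plane images from the Aliyun mirror instead of k8s.gcr.io
# --kubernetes-version=v1.17.0
#     pin the control-plane version to match the installed kubelet/kubeadm
# --pod-network-cidr=10.244.0.0/16
#     address range for pods; must match the Network value in kube-flannel.yml
# --service-cidr=10.96.0.0/12
#     virtual IP range used for Services
# --apiserver-advertise-address=172.16.15.43
#     the address the API server advertises; the master's own IP
```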
After a machine reboot you may need to restart kubelet and docker.
If that still does not work, use "kubeadm reset" to reset the kubernetes services first, then run "kubeadm init ..." again.
Set up the master node according to the prompts in the init output:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Verify the kubernetes startup result:
kubectl get nodes
PS: at this stage the master showing STATUS NotReady is expected and still means initialization succeeded; the node turns Ready once the pod network plugin is installed.
Allow the master node to schedule pods (optional):
kubectl taint nodes --all node-role.kubernetes.io/master-
After a minute or two, run the kubectl get nodes command;
you should see that the master is now active:
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 3m16s v1.17.0
Download the kube-flannel.yml file:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Check that the Network parameter in the net-conf.json section of kube-flannel.yml matches the --pod-network-cidr given to kubeadm init:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
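One quick way to read the value back out before applying the manifest is a grep; sketched here against a scratch file holding the same snippet (in practice, grep kube-flannel.yml directly):

```shell
# Scratch copy of the net-conf.json fragment from the flannel manifest.
cat > /tmp/net-conf.json << 'EOF'
{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
EOF
# Extract just the Network key and its CIDR value.
grep -o '"Network": *"[^"]*"' /tmp/net-conf.json
```

If the printed CIDR differs from the --pod-network-cidr used at init time, edit the manifest before applying it.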
Deploy the flannel network plugin:
kubectl apply -f kube-flannel.yml
Once the command completes, check the node status again; when it shows Ready, the flannel network plugin was deployed successfully.
Limit the cluster to a single coredns replica:
kubectl scale deployments.apps -n kube-system coredns --replicas=1
Check the current cluster status:
kubectl get cs
The following output indicates that the cluster is in a healthy state:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0               Healthy     {"health": "true"}
To join worker nodes, first run the following command on the master machine:
kubeadm token create --print-join-command
Run the generated kubeadm join command on each node machine that needs to join, for example:
kubeadm join 172.16.15.43:6443 --token gas2fs.8x4bmyk1opibsb3w --discovery-token-ca-cert-hash sha256:343adb6a92e846dcc1dbf5e1b936e7f0350793ebe976cd327e0b3da857326a5c
If a machine reboots, you may need to restart the docker and kubelet services.
If that still does not work, use "kubeadm reset" to reset the kubernetes services first, then run "kubeadm join ..." again.
After node1 has joined the master, label the node (general form; ${node_name} and ${node_role} are placeholders):
kubectl label nodes ${node_name} node-role.kubernetes.io/${node_role}=
kubectl label nodes test-super-server node-role.kubernetes.io/worker1=
To remove that label:
kubectl label nodes test-super-server node-role.kubernetes.io/worker1-
When the kubelet service misbehaves, use the following commands to inspect the error output:
systemctl status kubelet
journalctl -xefu kubelet
V. Q&A
1. Fix for the kubectl error on the master node after "kubeadm reset" followed by "kubeadm init ...":
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
rm $HOME/.kube -fr
then run again:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
and the error is resolved.
2. Fix for the kubectl error on worker nodes:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@k8s-master1:# scp -r /etc/kubernetes/admin.conf ${node1}:/tmp
root@k8s-node1:# mv /tmp/admin.conf /etc/kubernetes/
root@k8s-node1:# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
root@k8s-node1:# source ~/.bash_profile
Afterwards, when opening additional terminals on the worker node, just run source ~/.bash_profile before using kubectl.
Original: https://www.cnblogs.com/zqxFly/p/15425215.html
Author: 测试小张
Title: 搭建k8s