Deploying Kubernetes v1.25.3 (k8s) with the containerd Container Runtime


Preface

Hello everyone, I'm 秋意临.

Today I'm sharing how to deploy kubernetes v1.25.3 (the latest version as of November 2022). Since v1.24, Dockershim has been removed from the Kubernetes project, so our container runtime (the software responsible for running containers) is no longer Docker. This article uses containerd as the container runtime.

Several common container runtimes for Kubernetes (see the official Kubernetes documentation for usage details):

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

1. Prerequisites

The configuration used in this article is as follows:

System  CPU  RAM  IP             NIC  Hostname
Linux   2    4G   192.168.200.5  NAT  master
Linux   2    4G   192.168.200.6  NAT  node

Minimum requirements: at least 2 CPU cores and at least 2 GB of RAM.
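A quick preflight sketch to confirm a node meets the minimum (assumes a Linux host; the threshold values mirror the requirements above):

```shell
#!/bin/bash
# Preflight check: kubeadm needs at least 2 CPU cores and 2 GB of RAM.
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)

[ "$cpus" -ge 2 ] && echo "CPU: ok ($cpus cores)" || echo "CPU: need at least 2 cores"
# 2 GB ~= 2000000 kB; real machines report slightly less than the nominal size
[ "$mem_kb" -ge 1900000 ] && echo "RAM: ok" || echo "RAM: need at least 2 GB"
```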

Pay attention to which node each command is executed on.

2. Environment Configuration (run on all nodes)

Set the hostnames


# On the master node:
hostnamectl set-hostname master
bash

# On the node:
hostnamectl set-hostname node
bash

Configure the hosts mapping

cat >> /etc/hosts << EOF
192.168.200.5 master
192.168.200.6 node
EOF

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap

Swap must be disabled for the kubelet to work properly.

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
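The sed above prefixes '#' to every /etc/fstab line that mentions swap. A minimal sketch of the same pattern run against a throwaway copy (the file content below is hypothetical):

```shell
# Sample fstab entries (hypothetical) in a temp file
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution as above: '&' re-inserts the matched line after the '#'
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
# The root entry is untouched; the swap entry is now commented out.
```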

Forward IPv4 and let iptables see bridged traffic

For iptables on a Linux node to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
lsmod | grep br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

Configure time synchronization


rm -rf /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

IP=$(ip addr | grep 'state UP' -A2 | grep inet | egrep -v '(127.0.0.1|inet6|docker)' | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1)
yum install -y chrony
sed -i '3,6s/^/#/g' /etc/chrony.conf
sed -i "7s|^|server $IP iburst|g" /etc/chrony.conf
echo "allow all" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl restart chronyd
systemctl enable chronyd
timedatectl set-ntp true
sleep 5
systemctl restart chronyd
chronyc sources

yum install ntpdate -y
ntpdate ntp1.aliyun.com
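The IP= line above scrapes the node's primary address out of `ip addr` so chrony can be pointed at it. A sketch of the same pipeline run against canned `ip addr` output (the interface name and address below are made up for illustration):

```shell
sample='2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.5/24 brd 192.168.200.255 scope global ens33'

# Take the 2 lines after "state UP", keep the inet line, drop loopback/IPv6/docker,
# then strip the /24 CIDR suffix to leave the bare address.
IP=$(echo "$sample" | grep 'state UP' -A2 | grep inet | egrep -v '(127.0.0.1|inet6|docker)' \
  | awk '{print $2}' | tr -d "addr:" | head -n 1 | cut -d / -f1)
echo "$IP"   # → 192.168.200.5
```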

3. Install containerd (run on all nodes)

3.1 Install containerd

Download the containerd package.
Go to https://github.com/, search for containerd, open the project's Releases page, and scroll down to the tar package for the desired version, as shown below:

$ tar Cvzxf /usr/local containerd-1.6.9-linux-amd64.tar.gz

$ vi /etc/systemd/system/containerd.service

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]

ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity

TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable --now containerd

ctr version

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

3.2 Install runc


# runc.amd64 is downloaded from the runc project's GitHub Releases page
install -m 755 runc.amd64 /usr/local/sbin/runc

runc -v

3.3 Install the CNI plugins


# cni-plugins-linux-amd64-v1.1.1.tgz is downloaded from the containernetworking/plugins Releases page
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

3.4 Configure a registry mirror


sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir /etc/containerd/certs.d/docker.io -p

cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]
EOF

systemctl daemon-reload && systemctl restart containerd

4. cgroup Driver (run on all nodes)

On Linux, control groups (cgroups) are used to constrain the resources allocated to processes.

Both the kubelet and the underlying container runtime need to interface with cgroups to manage resources for Pods and containers, such as setting requests and limits on CPU and memory. To interface with cgroups, the kubelet and the container runtime each use a cgroup driver. The critical point is that the kubelet and the container runtime must use the same cgroup driver with the same configuration.


sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /etc/containerd/config.toml

sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml

systemctl daemon-reload
systemctl restart containerd
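A minimal sketch of what the two sed edits above do, applied to a hypothetical two-line fragment of config.toml in /tmp (the real file is much larger):

```shell
# Hypothetical excerpt of /etc/containerd/config.toml
cat > /tmp/config.toml.demo << 'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
            SystemdCgroup = false
EOF

# Same substitutions as above
sed -i 's/SystemdCgroup\ =\ false/SystemdCgroup\ =\ true/g' /tmp/config.toml.demo
sed -i 's/sandbox_image\ =.*/sandbox_image\ =\ "registry.aliyuncs.com\/google_containers\/pause:3.8"/g' /tmp/config.toml.demo

grep -E 'SystemdCgroup|sandbox_image' /tmp/config.toml.demo
# SystemdCgroup is now true; sandbox_image now points at the aliyuncs pause:3.8 image.
```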

5. Install crictl (run on all nodes)

In Kubernetes, containers are managed with crictl, not ctr.

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node.


# crictl-v1.25.0-linux-amd64.tar.gz is downloaded from the kubernetes-sigs/cri-tools Releases page
tar -vzxf crictl-v1.25.0-linux-amd64.tar.gz
mv crictl /usr/local/bin/

cat >>  /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl restart containerd

6. Deploying the Cluster with kubeadm

6.1 Install kubeadm, kubelet, and kubectl (run on all nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install  --nogpgcheck kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3 -y
systemctl enable kubelet
  • If yum fails with the error below, another yum process is holding the lock; stop that process (or remove /var/run/yum.pid) and retry:

[root@master ~]# yum install --nogpgcheck kubelet-1.25.3 kubeadm-1.25.3 kubectl-1.25.3 -y
Loaded plugins: fastestmirror
Existing lock /var/run/yum.pid: another copy is running as pid 8721.

Another app is currently holding the yum lock; waiting for it to exit...

  The other application is: yum
    Memory :  44 M RSS (444 MB VSZ)
    Started: Fri Nov 11 20:40:32 2022 - 02:07 ago
    State  : Traced/Stopped, pid: 8721


6.1.1 Configure IPVS


yum install ipset ipvsadm -y

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

/bin/bash /etc/sysconfig/modules/ipvs.modules

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

cat >>  /etc/sysconfig/kubelet << EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF

6.2 kubeadm Initialization (run on the master node)


[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

$ kubeadm config print init-defaults > kubeadm.yaml
$ vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.5
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers  --kubernetes-version=v1.25.3

$ kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f
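The token above expires after 24 hours; a fresh join command can be printed on the master at any time with `kubeadm token create --print-join-command`. The sha256 value itself is just a digest of the CA's DER-encoded public key. The sketch below runs the standard openssl pipeline against a throwaway self-signed certificate (/tmp/ca-demo.crt is hypothetical; on a real master the input is /etc/kubernetes/pki/ca.crt):

```shell
# Make a throwaway "CA" certificate to run the pipeline against
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca-demo.key \
  -out /tmp/ca-demo.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Standard pipeline for computing --discovery-token-ca-cert-hash
# (on a real cluster, replace /tmp/ca-demo.crt with /etc/kubernetes/pki/ca.crt)
HASH=$(openssl x509 -pubkey -in /tmp/ca-demo.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```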

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@node ~]# kubeadm join 192.168.200.5:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:7d52da1b42af69666db3483b30a389ab143a1a199b500843741dfd5f180bcb3f

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m25s   v1.25.4
node     NotReady   <none>          118s    v1.25.4

6.3 Deploy the Network (run on the master node)

6.3.1 Notes

During testing for this post, pulling images with ctr was extremely slow, so we pull the images with docker instead. First install docker-ce on the node, pull the images needed by the calico network plugin, package them with docker save, and upload them to the master node. The steps are as follows:

Note: the calico.yaml fetched from the download URL below may fail to apply, with the errors shown further down. Opening the file shows the image version is v3.14.2, which did not work in my testing. Error message: resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"

If you run into this, download and use the calico.yaml file provided by this blog instead.

yum install -y wget
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml --no-check-certificate

[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
resource mapping not found for name: "bgpconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "bgppeers.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "blockaffinities.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "clusterinformations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "felixconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalnetworkpolicies.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "globalnetworksets.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "hostendpoints.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamblocks.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamconfigs.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ipamhandles.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "ippools.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "kubecontrollersconfigurations.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "networkpolicies.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "networksets.crd.projectcalico.org" namespace: "" from "calico.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first

6.3.2 Steps (calico download)

Follow the WeChat official account [秋意零] and reply "calico" to get the download.

Install docker-ce on the node and pull the images, as follows:


curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

docker pull docker.io/calico/node:v3.24.4
docker save -o calico_node_v3.24.4.tar docker.io/calico/node:v3.24.4

docker pull docker.io/calico/cni:v3.24.4
docker save -o calico_cni_v3.24.4.tar docker.io/calico/cni:v3.24.4

docker pull docker.io/calico/kube-controllers:v3.24.4
docker save -o calico_kube-controllers_v3.24.4.tar docker.io/calico/kube-controllers:v3.24.4

Run on the master node

containerd has the concept of namespaces, and Kubernetes uses the k8s.io namespace.


ctr -n k8s.io image import calico_node_v3.24.4.tar
ctr -n k8s.io image import calico_cni_v3.24.4.tar
ctr -n k8s.io image import calico_kube-controllers_v3.24.4.tar

[root@master ~]
...

...

IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/calico/cni                                              v3.24.4             0b046c51c02a8       198MB
docker.io/calico/kube-controllers                                 v3.24.4             0830ebe059a9e       71.4MB
docker.io/calico/node                                             v3.24.4             32c45127e587f       226MB
registry.aliyuncs.com/google_containers/coredns                   v1.9.3              5185b96f0becf       14.8MB
registry.aliyuncs.com/google_containers/etcd                      3.5.4-0             a8a176a5d5d69       102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.25.3             0346dbd74bcb9       34.2MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.25.3             6039992312758       31.3MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.25.3             beaaf00edd38a       20.3MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.25.3             6d23ec0e8b87e       15.8MB
registry.aliyuncs.com/google_containers/pause                     3.8                 4873874c08efc       311kB

[root@master ~]# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-c676cc86f-ddp44          1/1     Running   0          87m
kube-system   coredns-c676cc86f-mg278          1/1     Running   0          87m
kube-system   etcd-master                      1/1     Running   0          87m
kube-system   kube-apiserver-master            1/1     Running   0          87m
kube-system   kube-controller-manager-master   1/1     Running   0          87m
kube-system   kube-proxy-75svm                 1/1     Running   0          87m
kube-system   kube-proxy-7bl66                 1/1     Running   0          87m
kube-system   kube-scheduler-master            1/1     Running   0          87m
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   87m   v1.25.3
node     Ready    <none>          86m   v1.25.3

The Kubernetes cluster is now fully deployed!

Summary

I'm 秋意临; feel free to like, bookmark, and share, and join the cloud community.

(⊙o⊙) See you next time!!!

References

containerd:https://github.com/containerd/containerd/blob/main/docs/getting-started.md
kubernetes:https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/

Original: https://blog.csdn.net/qq_48450494/article/details/127738876
Author: 秋意临
Title: Deploying Kubernetes v1.25.3 (k8s) with the containerd Container Runtime

This article is protected by copyright. Please credit the source when reposting: https://www.johngo689.com/660614/
