Overview
KubeEdge is an open source system that extends native containerized application orchestration and device management to hosts at the edge. Built on Kubernetes, it provides core infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. It also supports MQTT and lets developers write custom logic to enable communication with resource-constrained devices at the edge. KubeEdge consists of a cloud part and an edge part, both of which are now open source. This article covers building and deploying KubeEdge on a CentOS 7.6 system.
1. System Configuration
1.1 Cluster environment
ke-cloud cloud node 192.168.2.133 k8s, docker, cloudcore
ke-edge1 edge node 192.168.2.134 docker, edgecore
ke-edge2 edge node 192.168.2.135 docker, edgecore
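So the machines can reach one another by hostname, it helps to add all three nodes to /etc/hosts on every machine. A sketch, assuming the hostnames listed above:

```
# /etc/hosts (append on every node)
192.168.2.133 ke-cloud
192.168.2.134 ke-edge1
192.168.2.135 ke-edge2
```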
1.2 Disable the firewall at boot
systemctl disable firewalld
1.3 Permanently disable SELinux
Edit /etc/selinux/config and set SELINUX to disabled, for example:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
SELINUX=disabled
1.4 Turn off system swap (optional)
Since Kubernetes 1.8, swap is required to be off; with the default configuration, kubelet will not start otherwise. Comment out the swap entry in /etc/fstab:
# sed -i 's/.*swap.*/#&/' /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
1.5 Install Docker on all machines
# update-alternatives --set iptables /usr/sbin/iptables-legacy
# yum install -y yum-utils device-mapper-persistent-data lvm2 && yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo && yum makecache
# yum -y install docker-ce-18.06.3.ce
# systemctl enable docker.service && systemctl start docker
1.6 Reboot the system
# reboot
2. Deploying K8s on the Cloud Node
2.1 Configure the yum repository
[root@ke-cloud ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.2 Install kubelet, kubeadm and kubectl
Note: to install a specific version (1.17.0 is used here):
[root@ke-cloud ~]# yum install kubelet-1.17.0-0.x86_64 kubeadm-1.17.0-0.x86_64 kubectl-1.17.0-0.x86_64
2.3 Configure kernel parameters
[root@ke-cloud ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@ke-cloud ~]# sysctl --system
[root@ke-cloud ~]# modprobe br_netfilter
[root@ke-cloud ~]# sysctl -p /etc/sysctl.d/k8s.conf
Load the IPVS kernel modules.
These must be reloaded after each reboot (the modprobe commands can be added to /etc/rc.local to run automatically at boot):
[root@ke-cloud ~]# modprobe ip_vs
[root@ke-cloud ~]# modprobe ip_vs_rr
[root@ke-cloud ~]# modprobe ip_vs_wrr
[root@ke-cloud ~]# modprobe ip_vs_sh
[root@ke-cloud ~]# modprobe nf_conntrack_ipv4
Check that the modules loaded successfully:
[root@ke-cloud ~]# lsmod | grep ip_vs
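As a systemd-based alternative to /etc/rc.local, the modules can be listed in /etc/modules-load.d so they load on every boot. A sketch using the module names above (the file name ipvs.conf is arbitrary):

```
# /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
```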
2.4 Pull the images
List the k8s component image versions matching the current kubeadm, as follows:
[root@ke-cloud kubeedge]# kubeadm config images list
I0201 01:11:35.698583 15417 version.go:251] remote version is much newer: v1.20.2; falling back to: stable-1.17
W0201 01:11:39.538445 15417 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0201 01:11:39.538486 15417 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
Pull the images listed above with kubeadm config images pull, as follows:
[root@ke-cloud ~]# kubeadm config images pull
I0201 01:11:35.188139 6015 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
I0201 01:11:35.580861 6015 validation.go:28] Cannot validate kube-proxy config - no validator is available
I0201 01:11:35.580877 6015 validation.go:28] Cannot validate kubelet config - no validator is available
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.17.17
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.17.17
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.17.17
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.17.17
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.5
List the downloaded images, as follows:
[root@ke-cloud ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.17 3ef67d180564 2 weeks ago 117MB
k8s.gcr.io/kube-apiserver v1.17.17 38db32e0f351 2 weeks ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.17 0ddd96ecb9e5 2 weeks ago 161MB
k8s.gcr.io/kube-scheduler v1.17.17 d415ebbf09db 2 weeks ago 94.4MB
quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e57 2 months ago 64.6MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 15 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 15 months ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 3 years ago 742kB
2.5 Configure kubelet (optional)
Configuring kubelet on the cloud side is not strictly required; it mainly serves to verify that the K8s cluster is deployed correctly, and also lets you run applications such as the Dashboard in the cloud.
Get Docker's cgroup driver:
[root@ke-cloud ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
[root@ke-cloud ~]# echo $DOCKER_CGROUPS
cgroupfs
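The `cut -d' ' -f4` above depends on the exact spacing of this Docker version's `docker info` output. A sturdier sketch takes the last field with awk's `$NF` instead; the relevant line is simulated as a literal here (an assumption: the line reads "Cgroup Driver: <name>" in newer Docker releases) so the parsing can be shown without Docker installed:

```shell
# Simulated `docker info` driver line (assumption about the output format)
line="Cgroup Driver: cgroupfs"
# $NF is the last whitespace-separated field, so extra spaces or a changed label do not break it
driver=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "$driver"    # cgroupfs
```

Against a live daemon the same idea is `docker info | awk '/Cgroup/{print $NF}'`.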
Configure kubelet's cgroup driver:
[root@ke-cloud ~]# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
Start kubelet:
[root@ke-cloud ~]# systemctl daemon-reload
[root@ke-cloud ~]# systemctl enable kubelet && systemctl start kubelet
Note: at this point systemctl status kubelet will report errors. They resolve automatically once kubeadm init generates the CA certificates, so they can be ignored for now.
2.6 Initialize the cluster
kubeadm init --kubernetes-version=v1.17.17 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.2.133 \
--ignore-preflight-errors=Swap
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.133:6443 --token befwji.rmuh07racybsqik5 \
--discovery-token-ca-cert-hash sha256:74d996a22680090540f06c1f7732e329e518d0147dc2e27895b0b770c1c74d84
Further configure kubectl:
[root@ke-cloud ~]# rm -rf $HOME/.kube
[root@ke-cloud ~]# mkdir -p $HOME/.kube
[root@ke-cloud ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@ke-cloud ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Check the nodes:
[root@ke-cloud ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ke-cloud NotReady master 4d19h v1.17.0
2.7 Configure the network plugin (optional)
Download the flannel manifest:
[root@ke-cloud ~]# cd ~ && mkdir flannel && cd flannel
[root@ke-cloud ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Apply it:
[root@ke-cloud ~]# kubectl apply -f ~/flannel/kube-flannel.yml
Check:
[root@ke-cloud ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ke-cloud Ready master 4d19h v1.17.0
[root@ke-cloud ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-qbccz 1/1 Running 0 4d19h
coredns-6955765f44-xg6fg 1/1 Running 0 4d19h
etcd-ke-cloud 1/1 Running 0 4d19h
kube-apiserver-ke-cloud 1/1 Running 0 4d19h
kube-controller-manager-ke-cloud 1/1 Running 0 4d19h
kube-flannel-ds-75tgm 1/1 Running 0 4d19h
kube-proxy-jlrhq 1/1 Running 0 4d19h
kube-scheduler-ke-cloud 1/1 Running 0 4d19h
[root@ke-cloud ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d19h
Note: a node only shows as Ready after the network plugin has also been installed and configured.
3. Installing and Configuring KubeEdge
3.1 Cloud-side configuration
The cloud side builds the KubeEdge components and runs cloudcore.
3.1.1 Preparation
Download Go:
[root@ke-cloud ~]# wget https://golang.google.cn/dl/go1.14.4.linux-amd64.tar.gz
[root@ke-cloud ~]# tar -zxvf go1.14.4.linux-amd64.tar.gz -C /usr/local
Configure the Go environment:
[root@ke-cloud ~]# vim /etc/profile
Append to the end of the file:
# golang env
export GOROOT=/usr/local/go
export GOPATH=/data/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
[root@ke-cloud ~]# source /etc/profile
[root@ke-cloud ~]# mkdir -p /data/gopath && cd /data/gopath
[root@ke-cloud ~]# mkdir -p src pkg bin
Download the KubeEdge source:
[root@ke-cloud ~]# git clone https://github.com/kubeedge/kubeedge $GOPATH/src/github.com/kubeedge/kubeedge
3.1.2 Deploy cloudcore
Build keadm:
[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/kubeedge
[root@ke-cloud ~]# make all WHAT=keadm
Note: the built binaries end up under ./_output/local/bin. To build cloudcore and edgecore individually:
[root@ke-cloud ~]# make all WHAT=cloudcore && make all WHAT=edgecore
Create the cloud node:
[root@ke-cloud ~]# keadm init --advertise-address="192.168.2.133"
Kubernetes version verification passed, KubeEdge installation will start...
KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
CloudCore started
3.2 Edge-side configuration
The edge side can also be configured with keadm; copy the binaries built on the cloud side to the edge nodes with scp.
3.2.1 Get the token from the cloud side
Running keadm gettoken on the cloud side returns the token, which is used when joining an edge node.
[root@ke-cloud ~]# keadm gettoken
ff8b486281e6808987e5934dc5d777105875193fd56383383a0514e48640e7f9.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTIyMzQ1NTF9.4zu2F8fAlS11TTma6eGvf7lqD_VnGVCc4ngxh34f700
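The token embeds a JWT, and the short dot-separated segment eyJleHAiOjE2MTIyMzQ1NTF9 above is its base64url-encoded payload, which carries the expiry time. Decoding it shows when the token stops working (this particular payload happens to need no base64 padding):

```shell
# Payload segment of the token printed by `keadm gettoken` above
payload="eyJleHAiOjE2MTIyMzQ1NTF9"
# Decodes to {"exp":1612234551}: the Unix timestamp at which the token expires
printf '%s' "$payload" | base64 -d
```

If the token has expired, simply run keadm gettoken again on the cloud side to obtain a fresh one.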
3.2.2 Join the edge nodes
keadm join installs edgecore and MQTT. It also provides a flag for selecting a specific version.
keadm join --cloudcore-ipport=192.168.2.133:10000 --token=ff8b486281e6808987e5934dc5d777105875193fd56383383a0514e48640e7f9.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTIyMzQ1NTF9.4zu2F8fAlS11TTma6eGvf7lqD_VnGVCc4ngxh34f700
Important notes:
* The --cloudcore-ipport flag is mandatory.
* --token is required if you want certificates to be applied to the edge node automatically.
* The KubeEdge versions used on the cloud and edge sides should be the same.
3.3 Verification
After edgecore starts on the edge side, it communicates with cloudcore in the cloud, and K8s then brings the edge side under its management as a node.
[root@ke-cloud ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ke-cloud Ready master 4d19h v1.17.0
ke-node1 Ready agent,edge 105m v1.19.3-kubeedge-v1.5.0
ke-node2 Ready agent,edge 104m v1.19.3-kubeedge-v1.5.0
[root@ke-cloud ~]#
[root@ke-cloud ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-qbccz 1/1 Running 0 4d19h
coredns-6955765f44-xg6fg 1/1 Running 0 4d19h
etcd-ke-cloud 1/1 Running 0 4d19h
kube-apiserver-ke-cloud 1/1 Running 0 4d19h
kube-controller-manager-ke-cloud 1/1 Running 0 4d19h
kube-flannel-ds-4d6sr 0/1 Error 20 105m
kube-flannel-ds-75tgm 1/1 Running 0 4d19h
kube-flannel-ds-7j6qm 0/1 Error 22 105m
kube-proxy-jlrhq 1/1 Running 0 4d19h
kube-scheduler-ke-cloud 1/1 Running 0 4d19h
Note: if the flannel network plugin was configured in the K8s cluster (see 2.7), the flannel pods scheduled onto the edge nodes will fail to start, because the edge nodes do not run kubelet. This does not affect the use of KubeEdge and can be ignored for now.
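If the failing pods are a nuisance, one common workaround (an assumption, not part of the original setup) is to keep flannel off the edge nodes entirely by adding a nodeAffinity to the kube-flannel DaemonSet that excludes nodes carrying the edge role label:

```yaml
# Sketch: under spec.template.spec of the kube-flannel DaemonSet
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/edge
          operator: DoesNotExist
```

With this in place, flannel pods are only scheduled onto regular K8s nodes, and edge nodes managed by edgecore are left alone.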