> With the rise of microservices, more and more components are split out into standalone services that communicate over the network, flattening the application architecture. Kubernetes (k8s) is the platform that manages these services. This article walks through building a k8s cluster step by step.
# <font color=blue>k8s Cluster Setup</font>
Three tools are central to the setup:
- kubeadm is the cluster bootstrapping tool officially recommended by the k8s project
- kubectl is the command-line client (analogous to the mysql client)
- kubelet is the per-node background daemon (analogous to mysqld)
> Software environment:
Hypervisor: VMware® Workstation Pro 15
Operating system: CentOS Linux release 8.1
Create three CentOS nodes:
```
10.0.0.180 k8s-master
10.0.0.91 k8s-nnode1
10.0.0.136 k8s-nnode2
```
Check the CentOS version:
```shell
[root@localhost package]# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
[root@localhost package]#
```
Note: steps 1–8 must be run on every node; steps 9 and 10 are run on the Master node only; step 11 is run on the worker nodes only.
If step 9, 10, or 11 fails, run `kubeadm reset` to clean up the environment and reinstall.
#### <font color=blue>1. Disable the firewall</font>
```
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
```
Note: the firewall must be off; `disable` keeps it off across reboots.
#### <font color=blue>2. Disable SELinux</font>
```
setenforce 0
```
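`setenforce 0` only lasts until the next reboot. To make it permanent, the usual approach is to switch SELinux to permissive in its config file, e.g.:
```shell
# Persist across reboots (assumes the stock /etc/selinux/config layout)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```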
#### <font color=blue>3. Disable swap</font>
```
[root@localhost ~]# swapoff -a
```
To disable swap permanently, edit /etc/fstab with vim and comment out the swap mount line (or use the sketch below).
Note: kubelet will not run with swap enabled, so this step is mandatory.
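A non-interactive way to comment out the swap entry, if you prefer not to edit the file by hand (a sketch; inspect /etc/fstab afterwards):
```shell
# Comment out any uncommented line that mounts a swap filesystem
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
```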
#### <font color=blue>4. Map hostnames to IPs</font>
Edit /etc/hosts on every node with vim and append the following (each node's own hostname should match as well; see the sketch after the block):
```shell
10.0.0.180 k8s-master
10.0.0.91 k8s-nnode1
10.0.0.136 k8s-nnode2
```
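kubeadm registers each node under its hostname, so give every node the hostname used in /etc/hosts, for example:
```shell
# Run the matching command on each machine
hostnamectl set-hostname k8s-master    # on 10.0.0.180
hostnamectl set-hostname k8s-nnode1    # on 10.0.0.91
hostnamectl set-hostname k8s-nnode2    # on 10.0.0.136
```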
#### <font color=blue>5. Pass bridged IPv4 traffic to iptables chains</font>
```
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@localhost ~]# sysctl --system
```
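The two sysctls above only exist once the br_netfilter kernel module is loaded; if `sysctl --system` complains about unknown keys, load the module first and make it persistent:
```shell
[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```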
#### <font color=blue>6. Install Docker</font>
Remove any old Docker packages:
```shell
[root@localhost ~]# sudo yum remove docker \
                    docker-client \
                    docker-client-latest \
                    docker-common \
                    docker-latest \
                    docker-latest-logrotate \
                    docker-logrotate \
                    docker-engine
```
Install the yum utilities and device-mapper prerequisites:
```shell
[root@localhost ~]# sudo yum install -y yum-utils \
                    device-mapper-persistent-data \
                    lvm2
```
Add the Docker CE repository:
```shell
[root@localhost ~]# sudo yum-config-manager \
                    --add-repo \
                    https://download.docker.com/linux/centos/docker-ce.repo
```
Install Docker CE:
```shell
[root@localhost ~]# sudo yum install docker-ce docker-ce-cli containerd.io
```
After the install succeeds, check the Docker version:
```shell
[root@localhost ~]# docker --version
Docker version 19.03.10, build 9424aeaee9
[root@localhost ~]#
```
Change the cgroup driver from cgroupfs to systemd (Docker's file driver defaults to cgroupfs; switching to systemd matches the kubelet's default and avoids conflicts). Edit /etc/docker/daemon.json:
```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
Enable Docker at boot and start it:
```shell
[root@localhost ~]# systemctl enable docker
[root@localhost ~]# systemctl start docker
```
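If Docker was already running before daemon.json was edited, the new cgroup driver only takes effect after a restart:
```shell
[root@localhost ~]# systemctl restart docker
```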
Check the storage and cgroup drivers:
```shell
[root@localhost ~]# docker info | grep Driver
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: systemd
[root@localhost ~]#
```
#### <font color=blue>7. Configure the Kubernetes yum repository</font>
Edit /etc/yum.repos.d/kubernetes.repo with vim and add the following (a note on gpgcheck follows the block):
```shell
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
```
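gpgcheck=0 skips package signature verification for simplicity. If you prefer to verify signatures, the aliyun mirror also serves the upstream signing keys; something like the following should work (the key URLs reflect my understanding of the mirror layout, so treat them as an assumption):
```
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```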
#### <font color=blue>8. Install the Kubernetes packages</font>
```
yum install -y docker-ce kubelet kubeadm kubectl --nobest
```
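docker-ce is already installed from step 6, so yum simply skips it here. One caveat: this pulls the newest packages in the repo (v1.18.3 at the time of writing), while the aliyun image mirror can lag behind, which is exactly the version mismatch hit during `kubeadm init` in step 9. Pinning the package versions is one way to avoid it (a sketch; 1.18.0 matches the control-plane version that succeeds in step 9):
```shell
# Pin kubelet/kubeadm/kubectl to a version whose images the mirror already has
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 --nobest
```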
Enable kubelet at boot:
```
systemctl enable kubelet
```
Start the kubelet daemon (it will crash-loop until `kubeadm init` runs in step 9; that is expected):
```
systemctl start kubelet
```
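A quick sanity check that all three tools are in place (output will vary with your versions):
```shell
[root@localhost ~]# kubeadm version -o short
[root@localhost ~]# kubectl version --client --short
[root@localhost ~]# systemctl status kubelet
```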
#### <font color=blue>9. Deploy the Kubernetes Master</font>
In the transcript below, the first `kubeadm init` with `--kubernetes-version=1.18.3` fails because the aliyun image mirror did not yet carry the v1.18.3 images; the retry with `--kubernetes-version=1.18.0` succeeds:
```shell
[root@localhost ~]# kubeadm init --kubernetes-version=1.18.3 \
> --apiserver-advertise-address=10.0.0.180 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
W0529 03:59:44.481771 67801 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3 not found
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3 not found
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3 not found
, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3 not found
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@localhost ~]# kubeadm init --kubernetes-version=1.18.0 --apiserver-advertise-address=10.0.0.180 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
W0529 04:00:11.903932 68144 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 10.0.0.180]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.180 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.0.180 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0529 04:01:28.678836 68144 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0529 04:01:28.679847 68144 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.510178 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8urlva.75zrerl6uctfenec
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.180:6443 --token 8urlva.75zrerl6uctfenec \
--discovery-token-ca-cert-hash sha256:c462c05da6c3685a334b1b1743d4d9b30a38b78208c338f03f5e7d67befaf8bb
[root@localhost ~]#
```
Save the `kubeadm join` command at the end of the output; the worker nodes need it to join the cluster.
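The bootstrap token in the join command is valid for 24 hours by default. If it expires before a node joins, a fresh join command can be generated on the master:
```shell
[root@localhost ~]# kubeadm token create --print-join-command
```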
As the init output instructs, create the kubectl config file:
```
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the Docker images that were pulled:
```shell
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 2 months ago 117MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 2 months ago 162MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 2 months ago 95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 2 months ago 173MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 4 months ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 7 months ago 288MB
[root@localhost ~]#
```
By default kube-apiserver serves only the secure port 6443 and keeps the insecure port 8080 disabled (`--insecure-port=0`). kubectl normally reaches the apiserver over 6443 using the credentials in ~/.kube/config, so this step is optional; but if you want local clients to use the unauthenticated 8080 port, edit the static pod manifest:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
and change `--insecure-port=0` to `--insecure-port=8080` (this port bypasses authentication, so never expose it beyond localhost; the apiserver pod restarts automatically when the manifest changes).
kubectl commands now work directly:
```shell
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 28m v1.18.3
[root@localhost ~]#
```
#### <font color=blue>10. Install the Calico network add-on</font>
```shell
[root@localhost ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@localhost ~]#
```
Check the master node status (still NotReady while the Calico pods start):
```shell
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 41m v1.18.3
[root@localhost ~]#
```
Check whether the Calico pods came up successfully:
```
[root@localhost ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-76d4774d89-78vpw 0/1 ContainerCreating 0 3m47s
calico-node-vqt8t 0/1 PodInitializing 0 3m47s
coredns-7ff77c879f-bmkxb 0/1 ContainerCreating 0 45m
coredns-7ff77c879f-pmlm9 0/1 ContainerCreating 0 45m
etcd-k8s-master 1/1 Running 0 45m
kube-apiserver-k8s-master 1/1 Running 0 17m
kube-controller-manager-k8s-master 1/1 Running 1 45m
kube-proxy-2rsm4 1/1 Running 0 45m
kube-scheduler-k8s-master 1/1 Running 1 45m
[root@localhost ~]#
```
Check the nodes again; with the network add-on running, the master is now Ready:
```shell
[root@localhost ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 45m v1.18.3
[root@localhost ~]#
```
Installing kubernetes-dashboard is not needed for now and is skipped.
At this point the k8s master node is fully set up.
#### <font color=blue>11. Join worker nodes to the cluster</font>
To add nodes to the cluster, run the `kubeadm join` command printed by `kubeadm init` on each worker node.
On k8s-nnode1:
```shell
[root@localhost ~]# kubeadm join 10.0.0.180:6443 --token 8urlva.75zrerl6uctfenec \
> --discovery-token-ca-cert-hash sha256:c462c05da6c3685a334b1b1743d4d9b30a38b78208c338f03f5e7d67befaf8bb
W0529 04:52:05.907097 67097 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@localhost ~]#
```
On k8s-nnode2:
```shell
[root@localhost package]# kubeadm join 10.0.0.180:6443 --token 8urlva.75zrerl6uctfenec \
> --discovery-token-ca-cert-hash sha256:c462c05da6c3685a334b1b1743d4d9b30a38b78208c338f03f5e7d67befaf8bb
W0529 04:52:32.100812 66117 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@localhost package]#
```
On k8s-master, list the cluster nodes:
```shell
[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 52m v1.18.3
k8s-nnode1 Ready <none> 2m29s v1.18.3
k8s-nnode2 Ready <none> 2m3s v1.18.3
[root@localhost ~]#
```
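The `<none>` under ROLES is normal for worker nodes. If you want the column filled in, a role label can be added by hand (purely cosmetic):
```shell
[root@localhost ~]# kubectl label node k8s-nnode1 node-role.kubernetes.io/worker=
[root@localhost ~]# kubectl label node k8s-nnode2 node-role.kubernetes.io/worker=
```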
Test the cluster by creating a namespace:
```shell
[root@localhost ~]# kubectl get namespace
NAME STATUS AGE
default Active 53m
kube-node-lease Active 53m
kube-public Active 53m
kube-system Active 53m
[root@localhost ~]# kubectl create namespace test
namespace/test created
[root@localhost ~]# kubectl get namespace
NAME STATUS AGE
default Active 54m
kube-node-lease Active 54m
kube-public Active 54m
kube-system Active 54m
test Active 2s
[root@localhost ~]#
```
Create an nginx deployment and expose it via a NodePort service:
```
[root@localhost ~]# kubectl create deployment nginx --image=nginx
[root@localhost ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@localhost ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-f89759699-9265g 1/1 Running 0 5m30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.10.0.1 <none> 443/TCP 62m
service/nginx NodePort 10.10.19.203 <none> 80:31806/TCP 2m29s
[root@localhost ~]#
```
Open the following URL in a web browser; the nginx welcome page is returned:
http://10.0.0.180:31806/
You can also fetch it with curl from any Linux node; the welcome page HTML comes back:
```shell
[root@localhost ~]# curl http://10.0.0.180:31806/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost ~]#
```
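To clean up the test resources afterwards:
```shell
[root@localhost ~]# kubectl delete service nginx
[root@localhost ~]# kubectl delete deployment nginx
[root@localhost ~]# kubectl delete namespace test
```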