k8s e2e test deployment and debugging


Below is a brief description of how to deploy and run the e2e tests.

Deploy a k8s all-in-one cluster

Deploying this is fairly painful on a network inside mainland China: many images have to be pulled from Google's registries, which are blocked, so you hit lots of network errors. I recommend using the easzup project for the deployment, since it pulls everything from domestic mirrors.

# Download the easzup tool script; this example uses kubeasz release 2.0.2
export release=2.0.2
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# Use the tool script to download everything
./easzup -D

After the script above finishes successfully, all the files (the kubeasz code, binaries, and offline images) are laid out under /etc/ansible:

  • /etc/ansible contains the kubeasz release code for version ${release}
  • /etc/ansible/bin contains the k8s/etcd/docker/cni binaries
  • /etc/ansible/down contains the offline container images needed for the cluster installation
  • /etc/ansible/down/packages contains the basic system packages needed for the cluster installation
Then set up passwordless SSH login from the deploy node to every node:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
ssh-copy-id $IP  # $IP is each node's address, including this node; answer yes and enter the root password when prompted

Install the cluster

  • 4.1 Run kubeasz in a container (see the docs for details):
./easzup -S
  • 4.2 Install an aio (all-in-one) cluster with the default configuration:
docker exec -it kubeasz easzctl start-aio

A successful deployment ends with output like this:

(earlier output omitted)
TASK [cluster-addon : 导入 metallb的离线镜像(若执行失败,可忽略)] ***********************************************************************************************

TASK [cluster-addon : 生成 metallb 相关 manifests] **************************************************************************************************

TASK [cluster-addon : 创建 metallb controller 部署] *************************************************************************************************

PLAY RECAP **************************************************************************************************************************************
192.168.0.32               : ok=111  changed=84   unreachable=0    failed=0
localhost                  : ok=22   changed=18   unreachable=0    failed=0

[INFO] save context: aio
[INFO] save aio roles' configration
[INFO] save aio ansible hosts
[INFO] save aio kubeconfig
[INFO] save aio kube-proxy.kubeconfig
[INFO] save aio certs
[INFO] Action successed : start-aio

5. Verify the installation

You need the kubectl binary installed and on your PATH; make sure its version matches the cluster.

If you get "kubectl: command not found", log out and ssh back in so the updated environment variables take effect.
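If it is still not found after logging back in, a minimal sketch assuming the kubeasz layout described above (/etc/ansible/bin holds the bundled binaries):

# Copy the kubectl that ships with kubeasz onto the PATH (its version should match the installed cluster)
cp /etc/ansible/bin/kubectl /usr/local/bin/kubectl
chmod +x /usr/local/bin/kubectl
kubectl version --client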

$ kubectl version                   # check the cluster version
$ kubectl get componentstatus       # check the status of scheduler/controller-manager/etcd, etc.
$ kubectl get node                  # check that the nodes are Ready
$ kubectl get pod --all-namespaces  # check the cluster pods; the network plugin, coredns, metrics-server, etc. are installed by default
$ kubectl get svc --all-namespaces  # check the cluster services

If everything above deployed successfully, the corresponding output is:

root@docker-main:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}


root@docker-main:~#  kubectl get componentstatus
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>


root@docker-main:~#  kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
192.168.0.32   Ready    master   2m4s   v1.15.0


root@docker-main:~# kubectl get pod --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-797455887b-8knvl                      1/1     Running   0          96s
kube-system   coredns-797455887b-pgvbj                      1/1     Running   0          96s
kube-system   heapster-5f848f54bc-vzzsg                     1/1     Running   0          87s
kube-system   kube-flannel-ds-amd64-hcx6f                   1/1     Running   0          111s
kube-system   kubernetes-dashboard-5c7687cf8-lff5v          1/1     Running   0          88s
kube-system   metrics-server-85c7b8c8c4-5wzb6               1/1     Running   0          92s
kube-system   traefik-ingress-controller-766dbfdddd-f6ptw   1/1     Running   0          84s




root@docker-main:~#  kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
default       kubernetes                ClusterIP   10.68.0.1      <none>        443/TCP                       2m45s
kube-system   heapster                  ClusterIP   10.68.115.3    <none>        80/TCP                        92s
kube-system   kube-dns                  ClusterIP   10.68.0.2      <none>        53/UDP,53/TCP,9153/TCP        101s
kube-system   kubernetes-dashboard      NodePort    10.68.59.48    <none>        443:36345/TCP                 93s
kube-system   metrics-server            ClusterIP   10.68.89.111   <none>        443/TCP                       98s
kube-system   traefik-ingress-service   NodePort    10.68.208.72   <none>        80:23456/TCP,8080:32059/TCP   89s

At this point the k8s all-in-one cluster is up and running.

e2e deployment

First you need a working Go environment, so install Go. Then fetch the kubernetes source from GitHub and check out the same version as the all-in-one cluster.
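For example, with the GOPATH layout used below, a minimal sketch (v1.15.0 matches the server version reported by kubectl version above):

# Fetch the kubernetes source under GOPATH and check out the cluster's version
mkdir -p /root/go/src/k8s.io
cd /root/go/src/k8s.io
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
git checkout v1.15.0   # same version as the all-in-one cluster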

My go env output is:

GO111MODULE="on"
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/root/go"
GOPRIVATE=""
GOPROXY="https://goproxy.cn"
GOROOT="/root/.go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/root/.go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/root/go/src/k8s.io/kubernetes/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build691628942=/tmp/go-build -gno-record-gcc-switches"

With that in place you can build e2e.
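As an aside, if you only need the e2e test binary and the ginkgo runner (rather than a full quick-release build), the repository's own Makefile can build them with the local Go toolchain; a minimal sketch:

# Build just the e2e test binary and the ginkgo runner locally
cd /root/go/src/k8s.io/kubernetes
make WHAT=test/e2e/e2e.test
make WHAT=vendor/github.com/onsi/ginkgo/ginkgo
# The resulting binaries end up under _output/ (e.g. _output/bin/)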

These days the community uses kubetest to drive the whole e2e lifecycle.

Install it:

go get -u k8s.io/test-infra/kubetest

Then run:

kubetest --build

You will run into quite a few network errors here as well :(

For example, pulling an image fails here:

root@k8s-all-in-one:~/go/src/k8s.io/kubernetes# kubetest --build
2019/11/01 15:20:13 process.go:153: Running: make -C /root/go/src/k8s.io/kubernetes quick-release
make: Entering directory '/root/go/src/k8s.io/kubernetes'
+++ [1101 15:20:13] Verifying Prerequisites....
+++ [1101 15:20:13] Building Docker image kube-build:build-742d44a132-5-v1.12.5-1
+++ Docker build command failed for kube-build:build-742d44a132-5-v1.12.5-1

Sending build context to Docker daemon  9.216kB
Step 1/16 : FROM k8s.gcr.io/kube-cross:v1.12.5-1
Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

To retry manually, run:

docker build -t kube-build:build-742d44a132-5-v1.12.5-1 --pull=false /root/go/src/k8s.io/kubernetes/_output/images/kube-build:build-742d44a132-5-v1.12.5-1

!!! [1101 15:20:29] Call tree:
!!! [1101 15:20:29]  1: build/release.sh:35 kube::build::build_image(...)
Makefile:461: recipe for target 'quick-release' failed
make: *** [quick-release] Error 1
make: Leaving directory '/root/go/src/k8s.io/kubernetes'
2019/11/01 15:20:29 process.go:155: Step 'make -C /root/go/src/k8s.io/kubernetes quick-release' finished in 16.13719432s
2019/11/01 15:20:29 main.go:319: Something went wrong: failed to acquire k8s binaries: error during make -C /root/go/src/k8s.io/kubernetes quick-release: exit status 2
root@k8s-all-in-one:~/go/src/k8s.io/kubernetes#
root@k8s-all-in-one:~/go/src/k8s.io/kubernetes#

The workaround I recommend: Azure China provides mirror proxies for the gcr.io and k8s.gcr.io registries, so you can fetch the Google images through them.

Someone also wrote a wrapper script that handles it all in one step: pulling the image from the mirror, re-tagging it, and so on.

Reference:

git clone https://github.com/silenceshell/docker-wrapper.git
sudo cp docker-wrapper/docker-wrapper.py /usr/local/bin/

After that you can use

docker_wrapper.py  pull k8s.gcr.io/kube-cross:v1.12.5-1

to download the required image.
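The wrapper is essentially a pull from the mirror followed by a re-tag. A rough manual equivalent is sketched below; the mirror namespace gcr.azk8s.cn/google_containers is an assumption from memory and may have changed, so verify it before use:

# Manual equivalent of the wrapper: pull through the Azure China mirror,
# then re-tag to the name the kubernetes build scripts expect.
# NOTE: gcr.azk8s.cn/google_containers is an assumed mirror path; adjust if needed.
docker pull gcr.azk8s.cn/google_containers/kube-cross:v1.12.5-1
docker tag gcr.azk8s.cn/google_containers/kube-cross:v1.12.5-1 k8s.gcr.io/kube-cross:v1.12.5-1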

root@k8s-all-in-one:~/go/src/k8s.io/kubernetes# kubetest --build
2019/11/01 15:21:10 process.go:153: Running: make -C /root/go/src/k8s.io/kubernetes quick-release
make: Entering directory '/root/go/src/k8s.io/kubernetes'
+++ [1101 15:21:10] Verifying Prerequisites....
+++ [1101 15:21:11] Building Docker image kube-build:build-742d44a132-5-v1.12.5-1
+++ [1101 15:21:13] Syncing sources to container
+++ [1101 15:21:16] Running build command...
stat /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/client-go: no such file or directory
+++ [1101 15:21:50] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/cloud-controller-manager
    cmd/kubelet
    cmd/kubeadm
    cmd/hyperkube
    cmd/kube-scheduler
    vendor/k8s.io/apiextensions-apiserver
    cluster/gce/gci/mounter
stat /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/client-go: no such file or directory
+++ [1101 15:23:36] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kubeadm
    cmd/kubelet
stat /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/client-go: no such file or directory
+++ [1101 15:24:17] Building go targets for linux/amd64:
    cmd/kubectl
stat /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/client-go: no such file or directory
+++ [1101 15:24:38] Building go targets for linux/amd64:
    cmd/gendocs
    cmd/genkubedocs
    cmd/genman
    cmd/genyaml
    cmd/genswaggertypedocs
    cmd/linkcheck
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
stat /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/client-go: no such file or directory
+++ [1101 15:26:26] Building go targets for linux/amd64:
    cmd/kubemark
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e_node/e2e_node.test
+++ [1101 15:27:26] Syncing out of container
+++ [1101 15:27:58] Building tarball: src
+++ [1101 15:27:58] Starting tarball: client linux-amd64
+++ [1101 15:27:58] Building tarball: manifests
+++ [1101 15:27:58] Waiting on tarballs
+++ [1101 15:28:13] Building images: linux-amd64
+++ [1101 15:28:13] Building tarball: node linux-amd64
+++ [1101 15:28:14] Starting docker build for image: cloud-controller-manager-amd64
+++ [1101 15:28:14] Starting docker build for image: kube-apiserver-amd64
+++ [1101 15:28:14] Starting docker build for image: kube-controller-manager-amd64
+++ [1101 15:28:14] Starting docker build for image: kube-scheduler-amd64
+++ [1101 15:28:14] Starting docker build for image: kube-proxy-amd64
+++ [1101 15:28:14] Building hyperkube image for arch: amd64
+++ [1101 15:28:14] Building conformance image for arch: amd64
Sending build context to Docker daemon  50.06MB

At the very end, the "Building conformance image" step always errors out, but we do not need to care: the binaries we want have already been built and copied out of the Docker build container.

root@k8s-all-in-one:~/go/src/k8s.io/kubernetes/_output/dockerized/bin/linux/amd64# ll
total 2.3G
drwxr-xr-x 2 root root 4.0K Nov  1 15:27 .
-rwxr-xr-x 1 root root 216M Nov  1 15:27 e2e_node.test
-rwxr-xr-x 1 root root  12M Nov  1 15:27 ginkgo
-rwxr-xr-x 1 root root 146M Nov  1 15:27 kubemark
-rwxr-xr-x 1 root root 7.2M Nov  1 15:26 linkcheck
-rwxr-xr-x 1 root root 162M Nov  1 15:26 e2e.test
-rwxr-xr-x 1 root root  55M Nov  1 15:26 gendocs
-rwxr-xr-x 1 root root 235M Nov  1 15:26 genkubedocs
-rwxr-xr-x 1 root root 5.4M Nov  1 15:26 genswaggertypedocs
-rwxr-xr-x 1 root root 244M Nov  1 15:25 genman
-rwxr-xr-x 1 root root  55M Nov  1 15:25 genyaml
-rwxr-xr-x 1 root root  56M Nov  1 15:24 kubectl
-rwxr-xr-x 1 root root  48M Nov  1 15:24 kube-proxy
-rwxr-xr-x 1 root root  53M Nov  1 15:24 kubeadm
-rwxr-xr-x 1 root root 151M Nov  1 15:24 kubelet
-rwxr-xr-x 1 root root 149M Nov  1 15:23 kube-controller-manager
-rwxr-xr-x 1 root root  51M Nov  1 15:23 kube-scheduler
-rwxr-xr-x 1 root root 2.3M Nov  1 15:23 mounter
-rwxr-xr-x 1 root root 129M Nov  1 15:23 cloud-controller-manager
-rwxr-xr-x 1 root root  57M Nov  1 15:23 apiextensions-apiserver
-rwxr-xr-x 1 root root 196M Nov  1 15:23 kube-apiserver
-rwxr-xr-x 1 root root 239M Nov  1 15:23 hyperkube
-rwxr-xr-x 1 root root 2.8M Oct 31 15:31 go-bindata
-rwxr-xr-x 1 root root  15M Oct 31 15:30 openapi-gen
-rwxr-xr-x 1 root root 8.8M Oct 31 15:30 conversion-gen
-rwxr-xr-x 1 root root 8.8M Oct 31 15:30 defaulter-gen
-rwxr-xr-x 1 root root 8.8M Oct 31 15:30 deepcopy-gen
-rwxr-xr-x 1 root root 4.5M Oct 31 15:30 go2make
drwxr-xr-x 3 root root 4.0K Oct 31 15:30 ..

Run the tests

Once everything is set up, run a suite of test cases:

kubetest --test --test_args="--ginkgo.focus=\[sig-api-machinery\]  --host=https://192.168.0.32:6443" --provider=local

The result looks like:

(The earlier output is too long to include.)
[Fail] [sig-api-machinery] AdmissionWebhook [BeforeEach] Should be able to deny attaching pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384

[Fail] [sig-api-machinery] AdmissionWebhook [BeforeEach] Should be able to deny pod and configmap creation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [BeforeEach] Should be able to convert a non homogeneous list of CRs
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:312

[Fail] [sig-api-machinery] AdmissionWebhook [BeforeEach] Should be able to deny custom resource creation and deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384

[Fail] [sig-api-machinery] AdmissionWebhook [BeforeEach] Should not be able to mutate or prevent deletion of webhook configuration objects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384

[Fail] [sig-api-machinery] AdmissionWebhook [BeforeEach] Should deny crd creation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384

Ran 78 of 4411 Specs in 6694.822 seconds
FAIL! -- 62 Passed | 16 Failed | 0 Pending | 4333 Skipped
--- FAIL: TestE2E (6694.93s)
FAIL

Ginkgo ran 1 suite in 1h51m35.77151929s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:146
  Error in ./hack/ginkgo-e2e.sh:146. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}"--ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" --dns-domain="${KUBE_DNS_DOMAIN:-cluster.local}" --ginkgo.slowSpecThreshold="${GINKGO_SLOW_SPEC_THRESHOLD:-300}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:146 main(...)
Exiting with status 1
2019/10/31 23:38:09 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[sig-api-machinery\] --host=https://192.168.0.32:6443' finished in 1h51m35.827477558s
2019/10/31 23:38:09 main.go:319: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[sig-api-machinery\] --host=https://192.168.0.32:6443: exit status 1]
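As the error output shows, kubetest --test is just a thin wrapper around ./hack/ginkgo-e2e.sh, which runs the compiled e2e.test binary under ginkgo. You can also invoke the binary directly; a sketch assuming the paths from the build output above:

# Run the compiled e2e.test directly via ginkgo (roughly what hack/ginkgo-e2e.sh does)
cd /root/go/src/k8s.io/kubernetes
./_output/dockerized/bin/linux/amd64/ginkgo -focus='\[sig-api-machinery\]' \
  ./_output/dockerized/bin/linux/amd64/e2e.test -- \
  --kubeconfig=/root/.kube/config \
  --host=https://192.168.0.32:6443 \
  --provider=local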

Then run a single test case for debugging, which is handy when you are writing your own tests:

root@k8s-all-in-one:~/go/src/k8s.io/kubernetes#  kubetest --test --test_args="--ginkgo.focus=SSH.to.all     --host=https://192.168.0.32:6443" --provider=local

Output:

2019/11/01 13:45:18 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2019/11/01 13:45:18 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 114.312821ms
2019/11/01 13:45:18 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2019/11/01 13:45:18 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 76.761003ms
2019/11/01 13:45:18 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=SSH.to.all --host=https://192.168.0.32:6443
Setting up for KUBERNETES_PROVIDER="local".
Skeleton Provider: prepare-e2e not implemented
KUBE_MASTER_IP:
KUBE_MASTER:
I1101 13:45:19.578786   61483 e2e.go:241] Starting e2e run "35be1931-909f-4013-9a3d-9a235cd2e4fe" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1572587118 - Will randomize all specs
Will run 1 of 4411 specs

Nov  1 13:45:19.673: INFO: >>> kubeConfig: /root/.kube/config
Nov  1 13:45:19.676: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov  1 13:45:19.696: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov  1 13:45:19.724: INFO: 7 / 7 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov  1 13:45:19.724: INFO: expected 6 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Nov  1 13:45:19.724: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov  1 13:45:19.733: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-amd64' (0 seconds elapsed)
Nov  1 13:45:19.733: INFO: e2e test version: v1.15.0-dirty
Nov  1 13:45:19.734: INFO: kube-apiserver version: v1.15.0
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] SSH
  should SSH to all nodes and run commands
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
[BeforeEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov  1 13:45:19.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
Nov  1 13:45:19.831: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
[It] should SSH to all nodes and run commands
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
STEP: Getting all nodes' SSH-able IP addresses
Nov  1 13:45:19.834: INFO: No external IP address on nodes, falling back to internal IPs
STEP: SSH'ing to 1 nodes and running echo "Hello from $(whoami)@$(hostname)"
Nov  1 13:45:19.931: INFO: Got stdout from 192.168.0.32:22: Hello from root@k8s-all-in-one
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Nov  1 13:45:20.097: INFO: Got stdout from 192.168.0.32:22: stdout
Nov  1 13:45:20.097: INFO: Got stderr from 192.168.0.32:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing root@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov  1 13:45:25.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-9879" for this suite.
Nov  1 13:45:31.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov  1 13:45:31.169: INFO: namespace ssh-9879 deletion completed in 6.068405507s
•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSNov  1 13:45:31.174: INFO: Running AfterSuite actions on all nodes
Nov  1 13:45:31.174: INFO: Running AfterSuite actions on node 1

Ran 1 of 4411 Specs in 11.505 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 4410 Skipped
PASS

Ginkgo ran 1 suite in 12.524250351s
Test Suite Passed
2019/11/01 13:45:31 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=SSH.to.all --host=https://192.168.0.32:6443' finished in 12.590311594s

That is basically how the single-node e2e setup is used.


