Trying out the operator-sdk demo


operator-sdk demo

References

Golang Based Operator Tutorial

Building a demo project

Prerequisites

  • Install operator-sdk: operator-sdk release
  • Access to a Kubernetes cluster of v1.11.3+ (v1.16.0+ if using apiextensions.k8s.io/v1 CRDs)
  • Access to the Kubernetes cluster as admin

Initialize the project

env:

$ operator-sdk version
operator-sdk version: "v1.3.0", commit: "1abf57985b43bf6a59dcd18147b3c574fa57d3f6", kubernetes version: "1.19.4", go version: "go1.15.5", GOOS: "linux", GOARCH: "amd64"

$ mkdir -p ${HOME}/golang_codes/operator_sdk_app/demo-operator
$ cd ${HOME}/golang_codes/operator_sdk_app/demo-operator

init:

$ operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.7.0
go: downloading ...
Update go.mod:
$ go mod tidy
go: downloading ...
Running make:
$ make
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.1
go: downloading ...
go fmt ./...
go vet ./...
go build -o bin/manager main.go

Understanding the project layout

The --repo=<path> flag is required when creating a project outside of $GOPATH/src, as scaffolded files require a valid module path.

Didn't quite get this at first =。= (Outside $GOPATH/src, Go cannot infer a module path from the directory location, so --repo must supply it explicitly.)

a. Manager:

The main program main.go initializes and runs the Manager. The Manager registers the _Scheme_ for the custom resource API definitions, and sets up and runs the controllers and webhooks. See the Kubebuilder entrypoint doc. The Manager can restrict the namespace in which all controllers watch resources:

mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})

// By default, the operator watches the namespace it is deployed in.

// To watch all namespaces, leave Namespace empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})

// To watch a specific set of namespaces:
var namespaces []string // List of Namespaces
// Create a new Cmd to provide shared dependencies and start components
mgr, err := ctrl.NewManager(cfg, manager.Options{
 NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})

b. Operator scope:

namespace-scoped vs cluster-scoped: see operator scope

create api & controller:

a. Multi-group APIs:

$ operator-sdk edit --multigroup=true

The _PROJECT_ file is updated to:

domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...

See the multi-group migration doc

b. Create:

$ operator-sdk create api --group=cache --version=v1alpha1 --kind=Memcached --resource=true --controller=true
Writing scaffold for you to edit...
api/v1alpha1/memcached_types.go
controllers/memcached_controller.go
Running make:
$ make
/root/golang_codes/operator_sdk_app/demo-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go

See the CRD API conventions
See the kubebuilder api doc
See the kubebuilder controller doc

Fleshing out the logic

Modify the CR API:

Modify the CR types (*_types.go). Add the marker comment to the CR definition: _+kubebuilder:subresource:status_

Add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest so that the controller can update the CR status without changing the rest of the CR object.

The status subresource exposes .status through a dedicated /status API endpoint, so spec and status can be updated independently. After changing the types, regenerate the deepcopy implementations to match: _make generate_
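As a concrete illustration, here is a minimal *_types.go* sketch in the spirit of the Memcached tutorial (the Size and Nodes fields are illustrative; the real scaffold also registers the type with a SchemeBuilder):

```go
// api/v1alpha1/memcached_types.go (sketch)
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MemcachedSpec defines the desired state of Memcached.
type MemcachedSpec struct {
	// Size is the desired number of memcached instances.
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached.
type MemcachedStatus struct {
	// Nodes lists the names of the memcached pods.
	Nodes []string `json:"nodes"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API.
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}
```

The +kubebuilder:subresource:status marker sits on the root type, not on the Status struct; after editing, `make generate` refreshes zz_generated.deepcopy.go.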

Generate the CRD manifests:

Once the CR API is defined, generate the CRD manifests: _make manifests_

OpenAPI validation:

OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached Custom Resource when it is created or updated. Markers (annotations) are available to configure validations for your API. These markers will always have a +kubebuilder:validation prefix. Usage of markers in API code is discussed in the kubebuilder CRD generation and marker documentation. A full list of OpenAPIv3 validation markers can be found here. To learn more about OpenAPI v3.0 validation schemas in CRDs, refer to the Kubernetes Documentation.
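For example, validation markers placed on spec fields (a sketch; the bounds are illustrative, not prescribed by the tutorial):

```go
// In api/v1alpha1/memcached_types.go: controller-gen translates these
// markers into an OpenAPIv3 schema inside the generated CRD manifest.
type MemcachedSpec struct {
	// Size must stay within [1, 5]; the API server rejects a CR
	// whose size falls outside this range on create or update.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=5
	Size int32 `json:"size"`
}
```

Run `make manifests` again after adding markers so the CRD YAML picks them up.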

Implement the controller:

i.e. the reconciliation logic.

Controller logic implementation

a. What resources the controller watches:
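A sketch of the resource-watching setup (assuming the Memcached controller creates Deployments; `ctrl` is sigs.k8s.io/controller-runtime, `appsv1` is k8s.io/api/apps/v1, and `cachev1alpha1` is the scaffolded API package):

```go
// SetupWithManager registers this controller with the Manager.
// For() watches the primary resource; Owns() watches secondary
// resources created by the controller, so a change to an owned
// Deployment also triggers a reconcile of its Memcached owner.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1alpha1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		Complete(r)
}
```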

b. The reconcile loop:
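For orientation, a skeleton of the reconcile loop (a sketch, assuming the controller-runtime v0.7 signature used by operator-sdk v1.3; `client` is sigs.k8s.io/controller-runtime/pkg/client and the comparison logic is elided):

```go
// Reconcile drives the actual cluster state toward the desired state.
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	memcached := &cachev1alpha1.Memcached{}
	if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
		// Not-found means the CR was deleted: nothing to do.
		// Any other error is returned so the request is retried.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// ... create/update owned objects so the cluster matches
	// memcached.Spec, then write back memcached.Status ...

	// An empty Result with nil error ends this pass; set Requeue or
	// RequeueAfter to schedule another reconcile explicitly.
	return ctrl.Result{}, nil
}
```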

c. Define permissions & generate the RBAC manifests:
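Permissions are declared as RBAC markers above the Reconcile method; `make manifests` then generates the Role/ClusterRole under config/rbac/. A sketch matching the Memcached example (the apps/deployments rule assumes the controller manages Deployments):

```go
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
```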

Deploy and run

Create the resources:

make install

Set up the test environment:

a. Outside the cluster

Haven't tried this yet

b. Inside the cluster

make docker-build IMG=my/memcached-operator:v0.0.1

Deploy the operator:

make deploy IMG=my/memcached-operator:v0.0.1
Note: the steps above do not cover webhook admission validation, so the admission controllers can be disabled temporarily. After editing /etc/kubernetes/manifests/kube-apiserver.yaml to disable them, restart the kubelet for the change to take effect.

After deployment, check:

$ kubectl -n demo-operator-system get cm
NAME                           DATA   AGE
demo-operator-manager-config   1      4h6m
f1c5ece8.example.com           0      4h5m
$ kubectl -n demo-operator-system get pods
NAME                                                READY   STATUS    RESTARTS   AGE
demo-operator-controller-manager-57d894fbfc-gjv9b   2/2     Running   0          4h6m

The pod runs two containers: kube-rbac-proxy and manager

Deploying the operator with OLM:

todo..

Some issues encountered:

1. go mod download fails while building the image

(1) Edit the Dockerfile and add environment variables:

ENV GO111MODULE=on \
    GOPROXY=https://goproxy.cn,direct

(2) If running docker build on a VM, add the flag to share the host network:

docker build --network=host -t ...

2. Rebuilding images leaves behind <none>:<none> images

# List them
$ docker images -a | grep none

# Remove the "bad" <none>:<none> images
# docker rmi $(docker images -f "dangling=true" -q)
# Clean up dangling images
$ docker image prune

# Rebuild with a new tag
$ docker build -t ...

3. Image pulls fail because of the firewall

The base image gcr.io/distroless/static:nonroot cannot be pulled, and no usable http_proxy was available ( ̄▽ ̄)"
Use a mirror image instead: gengweifeng/gcr-io-distroless-static-nonroot - Docker Hub

4. Pods fail to pull the image when building & deploying inside the cluster

First confirm imagePullPolicy: IfNotPresent
(1) On a Kubernetes cluster:
The image was built locally, so only one node in the cluster has it and the others do not. Fix this by copying the image to the other nodes, or by constraining the pod with node affinity.
(2) On minikube:
Run this before building the image: eval $(minikube docker-env)


Source: Segmentfault
