Describe the Dependency Inversion Principle. Why is it sometimes also called the Hollywood Principle?
The Dependency Inversion Principle (DIP) is a way to decouple modules. It states that high-level modules must not depend on low-level modules; both should depend on an abstraction. Likewise, abstractions must not depend on implementations; implementations should depend on abstractions.
When a low-level implementation changes, or a new implementation is introduced, this shared dependency on the abstraction minimizes the impact on the high-level modules.
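The two rules above can be sketched in Go. This is a minimal illustration, not from any real library; the names Notifier, OrderService, and EmailSender are all invented for the example:

```go
package main

import "fmt"

// Notifier is the abstraction that both layers depend on.
type Notifier interface {
	Notify(msg string) error
}

// OrderService is the high-level module; it knows only the abstraction.
type OrderService struct {
	notifier Notifier
}

func (s *OrderService) PlaceOrder(id string) error {
	// ... business logic ...
	return s.notifier.Notify("order placed: " + id)
}

// EmailSender is a low-level module that depends on (implements) the abstraction.
type EmailSender struct{}

func (EmailSender) Notify(msg string) error {
	fmt.Println("email:", msg)
	return nil
}

func main() {
	// Swapping in an SMS or mock Notifier would not touch OrderService at all.
	svc := &OrderService{notifier: EmailSender{}}
	_ = svc.PlaceOrder("42")
}
```

Because OrderService holds only the Notifier interface, replacing the low-level implementation never forces a change in the high-level module.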
The Hollywood Principle is colloquially stated as "don't call us, we'll call you." This inversion of the calling relationship is the same core idea as dependency inversion: a framework defines a set of interfaces, applications built on the framework implement those interfaces, and once started, the framework calls back into the application's implementations to run the program.

Describe a framework you are familiar with and explain how it implements the Dependency Inversion Principle.
I have recently been building an offline image-processing pipeline on top of the https://github.com/digitalocean/firebolt framework.
The framework defines a Source interface for the consumer node that receives messages, and a node interface for the operators in the pipeline. As a developer using the framework, you only wrap your processing logic in these interface methods, register the implementations with firebolt before startup, and then start firebolt; the framework executes the pipeline in the order defined by your processing-flow configuration file.
The source node is responsible for receiving messages:

```go
type Source interface {
	Setup(config map[string]string, recordsch chan []byte) error
	Start() error
	Shutdown() error
	Receive(msg fbcontext.Message) error
}
```
A sync node is an operator that carries the business logic:

```go
type SyncNode interface {
	Setup(config map[string]string) error
	Process(event *firebolt.Event) (*firebolt.Event, error)
	Shutdown() error
	Receive(msg fbcontext.Message) error
}
```
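A minimal sketch of a node implementing this contract is shown below. To keep it self-contained and compilable, firebolt's Event type and the node interface are restated locally (in a real project you would import them from github.com/digitalocean/firebolt); the JSONConverter behavior is an assumption for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-ins for firebolt's types, restated so the sketch compiles on its own.
type Event struct {
	Payload interface{}
}

type SyncNode interface {
	Setup(config map[string]string) error
	Process(event *Event) (*Event, error)
	Shutdown() error
}

// JSONConverter parses a raw []byte payload into a map. The framework, not the
// application, decides when Process is called — the Hollywood Principle in action.
type JSONConverter struct{}

func (c *JSONConverter) Setup(config map[string]string) error { return nil }

func (c *JSONConverter) Process(event *Event) (*Event, error) {
	raw, ok := event.Payload.([]byte)
	if !ok {
		return nil, fmt.Errorf("expected []byte payload, got %T", event.Payload)
	}
	var parsed map[string]interface{}
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return nil, err
	}
	return &Event{Payload: parsed}, nil
}

func (c *JSONConverter) Shutdown() error { return nil }
```

The application code never calls Process itself; it only hands the implementation to the framework, which drives the pipeline.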
The config file defines the pipeline's processing flow:

```yaml
source:                        # one and only one source is required
  name: kafkaconsumer
  params:
    brokers: ${KAFKA_BROKERS}  # environment variables are supported
    consumergroup: testapp
    topic: logs-all
    buffersize: 1000           # sources do not normally need buffering; this value is a pass-thru to the underlying kafka consumer
nodes:
  - name: firstnode
    workers: 1                 # each node can be configured to run any number of workers (goroutines), the default is 1
    buffersize: 100            # each node has a buffered input channel for data that is ready to be processed, default size is 1
    params:                    # params are passed as a map to the node's Setup() during initialization
      param1.1: value1.1
      param1.2: value1.2
    children:                  # a node may have many children; events returned by the node are passed to all child nodes' input channels
      - name: secondnode
        error_handler:              # errors returned by 'secondnode' will be passed to this error handler
          name: errorkafkaproducer  # the built-in 'errorkafkaproducer' writes JSON error reports to a Kafka topic
          buffersize: 100
          discard_on_full_buffer: true  # if the buffer is full, discard messages to avoid sending backpressure downstream for a low-priority function
        children:
          - name: thirdnode
            id: third-node-id  # the same node type may appear twice in the hierarchy, but its id (defaults to name) must be unique
            workers: 3
            buffersize: 300
            params:
              param3.1: value3.1
              param3.2: value3.2
```
The main program registers the implemented nodes via node.GetRegistry().RegisterNodeType and then starts the executor:
```go
// first register any firebolt source or node types that are not built-in
node.GetRegistry().RegisterNodeType("jsonconverter", func() node.Node {
	return &jsonconverter.JsonConverter{}
}, reflect.TypeOf(([]byte)(nil)), reflect.TypeOf(""))

// start the executor - it will build the source and nodes that process the stream
ex, err := executor.New(configFile)
if err != nil {
	fmt.Printf("failed to initialize firebolt for config file %s: %v\n", configFile, err)
	os.Exit(1)
}
ex.Execute() // the call to Execute will block while the app runs
```
Because Go lacks Java's powerful generics and annotations, each implemented node must be registered explicitly in the main program.
- Use the Interface Segregation Principle to optimize the design of the Cache class, and draw the class diagram of the optimized design.
```go
// CacheConfig carries cache configuration; here it is a marker interface.
type CacheConfig interface {
}

// CacheStorage is the narrow interface the application uses for reads and writes.
type CacheStorage interface {
	Get(key string) (interface{}, error)
	Set(key, value string) error
	Delete(key string) error
}

// CacheHandler is the narrow interface used for management operations.
type CacheHandler interface {
	ReBuild(conf CacheConfig) (CacheStorage, error)
}

type CacheProxy struct {
	ActiveCache CacheStorage
	CacheHandler
}
```
Inside the application, it is used like this:
```go
var (
	err         error
	activeCache CacheStorage
)
activeCache, err = NewCacheProxy(cacheConf)
```
For remote management calls, it is used like this:
```go
var (
	err          error
	cacheHandler CacheHandler
)
cacheHandler, err = NewCacheProxy(cacheConf)
```
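A sketch of how NewCacheProxy might satisfy both narrow interfaces follows. The in-memory mapStorage and the constructor body are assumptions for illustration (the original only declares the interfaces); the point is that each caller declares only the interface it needs:

```go
package main

import "errors"

type CacheConfig interface{}

type CacheStorage interface {
	Get(key string) (interface{}, error)
	Set(key, value string) error
	Delete(key string) error
}

type CacheHandler interface {
	ReBuild(conf CacheConfig) (CacheStorage, error)
}

// mapStorage is an illustrative in-memory CacheStorage implementation.
type mapStorage struct{ data map[string]string }

func (m *mapStorage) Get(key string) (interface{}, error) {
	v, ok := m.data[key]
	if !ok {
		return nil, errors.New("key not found: " + key)
	}
	return v, nil
}
func (m *mapStorage) Set(key, value string) error { m.data[key] = value; return nil }
func (m *mapStorage) Delete(key string) error     { delete(m.data, key); return nil }

// CacheProxy implements both interfaces, but application code depends only on
// CacheStorage and management code only on CacheHandler — the segregation the
// ISP asks for.
type CacheProxy struct {
	ActiveCache CacheStorage
}

func (p *CacheProxy) Get(key string) (interface{}, error) { return p.ActiveCache.Get(key) }
func (p *CacheProxy) Set(key, value string) error         { return p.ActiveCache.Set(key, value) }
func (p *CacheProxy) Delete(key string) error             { return p.ActiveCache.Delete(key) }

// ReBuild swaps in a fresh storage built from conf.
func (p *CacheProxy) ReBuild(conf CacheConfig) (CacheStorage, error) {
	p.ActiveCache = &mapStorage{data: map[string]string{}}
	return p.ActiveCache, nil
}

func NewCacheProxy(conf CacheConfig) (*CacheProxy, error) {
	p := &CacheProxy{}
	if _, err := p.ReBuild(conf); err != nil {
		return nil, err
	}
	return p, nil
}
```

Either client can later be handed a different implementation of its narrow interface without any change on the other side.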