<p>I'm having trouble coming up with an efficient design pattern for my CRUD functions.</p>
<p>I have multiple database structs and I want to have an abstract method of having CRUD functionality for all of them. My current plan is to have a repositories package that contains repos for each database struct (e.g. User, Order, Item). However, that means I'll be duplicating all the CRUD functions and just changing the tables being used in the queries.</p>
<p>All the research I've done on this has led me to blogs that show nice design patterns for handling CRUD functionality for just one database struct, so it isn't really relevant for me.</p>
<hr/>**Comments:**<br/><br/>jerf: <pre><p>You're probably going to be looking for <a href="https://github.com/jinzhu/gorm" rel="nofollow">gorm</a> or something similar, to get them all into somewhat similar structures.</p>
<p>Depending on your goals, you may also end up reaching for reflection. It's a bit much to wrap your head around the first time, but once you get going it's merely tedious rather than hard. Many Go people would suggest avoiding it; I would still suggest making it your last resort rather than the first thing you reach for, but you could end up needing it.</p>
<p>Still, you may want to play with the ORMs a bit, then post follow-up questions to <a href="/r/golang" rel="nofollow">/r/golang</a> or something if you find yourself wanting to use reflection, to see if people have suggestions for better ways.</p></pre>danhardman: <pre><p>I'm not sure I want to tie this project in with an ORM just yet, especially while they haven't really matured. I'd also like to become more confident with the Go standard library before I go messing around with too many 3rd party libraries.</p></pre>jerf: <pre><p>Well, part of why I suggest that is that if you try to do this with the standard library, the first thing you're going to end up doing is... writing your own ORM-esque sort of thing. Maybe not literally, but close enough.</p>
<p>If you're writing real CRUD apps, ORMs aren't a bad idea. Many of the ORM problems arise when you consider them as the <em>only</em> method of accessing the DB, and then try to jam them in everywhere, even where they don't belong. If you consider your ORM as <em>a</em> tool rather than <em>the</em> tool for accessing the DB, use it only on tables where it makes sense, and keep it away from anything that isn't CRUD-y, it isn't so likely to turn into a monster.</p></pre>danhardman: <pre><p>That's a completely fair point and I 100% agree with you. Even so, I do think I want to do this without using a 3rd party ORM, and if I do end up making my own ORM-ish thing, then so be it. It'll be good to learn these things so that when the time comes to start looking at alternative ORMs, I can be well informed.</p></pre>collinglass: <pre><p>I went the route of duplicating code for every database struct. In the end I found two things change between structs.</p>
<p>1) One is the comparison to see which fields need to be updated. In Go, all fields in a struct are set to their zero value even if you didn't define them when you initialized the object. You have to be aware of this.</p>
<pre><code>type Santa struct {
    HoStrength int64
    Phrases    []string
}

func (s *Santa) Update(newS *Santa) {
    // only copy the field if it was set and actually differs
    if newS.HoStrength != 0 && s.HoStrength != newS.HoStrength {
        s.HoStrength = newS.HoStrength
    }
    // check the length and range over the strings, being aware of "" (the empty string)
}
</code></pre>
<p>2) The other is dealing with the conventions of the database package you're using.</p>
<p>The mgo MongoDB package uses bson.M, a map[string]interface{}, for storing data, while database/sql sets columns with Exec(args ...interface{}) and QueryRow(args ...interface{}). Other packages may behave differently.</p>
<p>In each case you have two options: 1) hard-code it, or 2) create one func that uses the reflect pkg to iterate over the fields of a struct.</p>
<p>1) Hard code</p>
<pre><code>// mgo: build a bson.M document from the struct fields
dbSanta := bson.M{
    "hoStrength": santa.HoStrength,
    "phrases":    santa.Phrases,
}

// database/sql: pass the fields as arguments to a prepared statement
stmt.Exec(santa.HoStrength, santa.Phrases)
</code></pre>
<p>2) Reflect</p>
<p>I'd take a look at this Stack Overflow question:</p>
<p><a href="http://stackoverflow.com/questions/23589564/function-for-converting-a-struct-to-map-in-golang" rel="nofollow">http://stackoverflow.com/questions/23589564/function-for-converting-a-struct-to-map-in-golang</a></p>
<p>It links to a package that handles turning structs into a map[string]interface{} (useful for mgo) and into a slice of values (useful for sql).</p>
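<p>For illustration, here is a minimal sketch of the reflect approach that answer describes: walking a struct's exported fields and building a map[string]interface{}. The db tag convention and the Santa fields here are assumptions made up for the example, not part of the linked package.</p>

```go
package main

import (
	"fmt"
	"reflect"
)

// structToMap converts a struct's exported fields into a
// map[string]interface{}. Field names come from an optional
// `db` tag, falling back to the Go field name.
func structToMap(v interface{}) map[string]interface{} {
	out := make(map[string]interface{})
	rv := reflect.ValueOf(v)
	if rv.Kind() == reflect.Ptr {
		rv = rv.Elem()
	}
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		f := rt.Field(i)
		if f.PkgPath != "" { // unexported field, skip it
			continue
		}
		name := f.Tag.Get("db")
		if name == "" {
			name = f.Name
		}
		out[name] = rv.Field(i).Interface()
	}
	return out
}

type Santa struct {
	HoStrength int64    `db:"hoStrength"`
	Phrases    []string `db:"phrases"`
}

func main() {
	m := structToMap(&Santa{HoStrength: 3, Phrases: []string{"ho", "ho"}})
	fmt.Println(m["hoStrength"], m["phrases"])
}
```

<p>The resulting map can be passed straight to mgo as a document, or ranged over to build the argument list for an sql Exec call.</p>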
<p>Other than those differences, I like to create a SantaDataStore struct and define my functions on that instead of on the model struct itself, because it lets me have different datastore backends for each struct, for example a Redis one and an SQL one.</p>
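<p>A minimal sketch of that idea: define an interface for the CRUD operations, then give each backend its own implementation. The names (SantaDataStore, memorySantaStore) and the in-memory backend below are illustrative only; an SQL- or Redis-backed struct would satisfy the same interface.</p>

```go
package main

import "fmt"

// SantaDataStore is the interface the rest of the app codes against;
// each backend (SQL, Redis, in-memory, ...) implements it separately.
type SantaDataStore interface {
	Create(s *Santa) error
	Get(id int64) (*Santa, error)
}

type Santa struct {
	ID         int64
	HoStrength int64
}

// memorySantaStore is a toy in-memory backend; a sqlSantaStore
// wrapping a *sql.DB would satisfy the same interface.
type memorySantaStore struct {
	data map[int64]*Santa
}

func newMemorySantaStore() *memorySantaStore {
	return &memorySantaStore{data: make(map[int64]*Santa)}
}

func (m *memorySantaStore) Create(s *Santa) error {
	m.data[s.ID] = s
	return nil
}

func (m *memorySantaStore) Get(id int64) (*Santa, error) {
	s, ok := m.data[id]
	if !ok {
		return nil, fmt.Errorf("santa %d not found", id)
	}
	return s, nil
}

func main() {
	var store SantaDataStore = newMemorySantaStore()
	store.Create(&Santa{ID: 1, HoStrength: 9000})
	s, _ := store.Get(1)
	fmt.Println(s.HoStrength)
}
```

<p>Swapping backends then only means constructing a different implementation; calling code keeps using the interface.</p>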
<p><strong>In the end...</strong> You can get away with doing most of it with reflect, and then write a manual comparison function to compare the new struct against the database version.</p></pre>danhardman: <pre><p>That's the best option I can come up with at the moment. As I mentioned in the OP, I have a repositories package in which I have, say, a UserRepo, OrderRepo, and ItemRepo, each of which just handles the CRUD functions of the respective struct.</p>
<p>Do you think this is the best way then?</p>
<p>Also, how are you passing your db struct to these functions? My assumption would be that each repository would have a DBHandler field that would be an interface for the database driver I'm using.</p>
<p>Example:</p>
<pre><code>type UserRepo struct {
    DB *sql.DB
}

func (r *UserRepo) Create(u models.User) error {
    stmt, err := r.DB.Prepare("") // SQL omitted in the original post
    if err != nil {
        return err
    }
    defer stmt.Close() // release the prepared statement

    _, err = stmt.Exec()
    return err
}
</code></pre></pre>collinglass: <pre><p>That's how I do them. I think it would change if you wanted to target more than SQL drivers. Mine looks like this:</p>
<pre><code>type UsersDataStore struct {
    db   *sql.DB
    c    string
    STMT map[string]*sql.Stmt
}
</code></pre>
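<p>For illustration, the STMT map in a struct like this would typically be filled once at construction time, so the CRUD methods can reuse prepared statements instead of re-preparing on every call. The table and column names below are guesses made up for the example:</p>

```go
package main

import (
	"database/sql"
	"fmt"
)

// buildQueries returns the SQL for each CRUD operation on the given
// table. The column names are illustrative, not from the original post.
func buildQueries(table string) map[string]string {
	return map[string]string{
		"create": fmt.Sprintf("INSERT INTO %s (name, email) VALUES (?, ?)", table),
		"get":    fmt.Sprintf("SELECT id, name, email FROM %s WHERE id = ?", table),
		"delete": fmt.Sprintf("DELETE FROM %s WHERE id = ?", table),
	}
}

type UsersDataStore struct {
	db   *sql.DB
	c    string
	STMT map[string]*sql.Stmt
}

// NewUsersDataStore prepares every statement once up front, so the
// CRUD methods can look them up by key instead of calling Prepare
// on every request.
func NewUsersDataStore(db *sql.DB) (*UsersDataStore, error) {
	ds := &UsersDataStore{db: db, c: "users", STMT: make(map[string]*sql.Stmt)}
	for key, q := range buildQueries(ds.c) {
		stmt, err := db.Prepare(q)
		if err != nil {
			return nil, err
		}
		ds.STMT[key] = stmt
	}
	return ds, nil
}

func (ds *UsersDataStore) Create(name, email string) error {
	_, err := ds.STMT["create"].Exec(name, email)
	return err
}

func main() {
	fmt.Println(buildQueries("users")["create"])
}
```

<p>Preparing once also surfaces SQL syntax errors at startup rather than in the middle of a request.</p>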
<p>c is my table name</p>
<p>STMT is a map of prepared statements</p></pre>danhardman: <pre><p>I'm intrigued by your choice of storing a map of prepared statements on the struct. Any reason? Is that instead of adding the CRUD functions onto the struct like I'm doing?</p></pre>manishrjain: <pre><p>After building backends for 3 startups, I experienced the exact same problem. So, I wrote this framework: <a href="https://github.com/manishrjain/gocrud" rel="nofollow">https://github.com/manishrjain/gocrud</a>. It allows you to have different database structs, aka entities, and recursively figures out the relations between them, e.g. Post -&gt; (Comment, Like) -&gt; Like; generates the JSON, and so on. It lets you choose, or even switch between, data stores (e.g. MySQL, Cassandra), and automatically keeps a search engine (e.g. Elasticsearch) updated.</p></pre>danhardman: <pre><p>That's pretty awesome! I don't think I want to get tied into a framework just yet but I'll definitely be checking it out. How does it differ from GORM or GORP?</p></pre>manishrjain: <pre><p>There are big differences between Gocrud and GORM. GORM focuses on tables and SQL. It helps you generate them, and does some level of relationship management, limited to SQL joins. It provides a better API to deal with SQL tables.</p>
<p>Gocrud is a completely different take on CRUD. Instead of thinking in terms of tables, it thinks in terms of graph operations, i.e. nodes and edges (aka entities, predicates). This allows Gocrud to support literally any data store, not just SQL.</p>
<p>When you think about a typical web page showing a Facebook post, it's composed of many different relational tables: Post, Likes, Comments, where Comments can have more comments, which can have more likes. Retrieving all this information the relational way takes a lot of code and effort. Gocrud can retrieve it in a single call (store.Get(&#34;Post&#34;, &#34;id&#34;).UptoDepth(10).Execute()), by traversing the entire sub-tree starting from Post and converting it automatically to JSON. As you can imagine, this makes things significantly simpler.</p>
<p>In addition, Gocrud keeps a clear separation between data stores and search engines. If one is provided, Gocrud automatically keeps the search system (say, Elasticsearch) in sync with the data store, so you have the ability to run complex queries right from the beginning.</p>
<p>Overall, Gocrud gives you a scalable system, not just a better API over SQL tables, which I feel is what GORM does. I built Gocrud because I felt a lot of startups were building non-scalable backends, just because MySQL was easy to run. And then later on, when they got a lot of users, scaling things became a huge challenge. With Gocrud, even if you start with MySQL, at some point later you can just switch it out for, say, Cassandra or MongoDB, or any other custom / proprietary data store, with ease, without breaking any of your existing code.</p>
<p>So, what's the con of using Gocrud? You're tied in to a framework. True. But look at what you get: you don't get tied to a data store, your unit testing gets a whole lot easier, you get a search engine from the get-go, and your code is a lot simpler (at least half the size, based on my recent port of another startup).</p>
<p><a href="https://mrjn.xyz/post/Porting-To-Gocrud/" rel="nofollow">https://mrjn.xyz/post/Porting-To-Gocrud/</a></p></pre>