<p>I have found these:</p>
<ul>
<li><a href="https://github.com/coopernurse/gorp">https://github.com/coopernurse/gorp</a> (1500 stars)</li>
<li><a href="https://github.com/eaigner/hood">https://github.com/eaigner/hood</a></li>
<li><a href="https://github.com/coocood/qbs">https://github.com/coocood/qbs</a></li>
<li><a href="https://github.com/astaxie/beedb">https://github.com/astaxie/beedb</a></li>
<li><a href="https://upper.io/db">upper.io</a></li>
<li><a href="https://github.com/jinzhu/gorm">https://github.com/jinzhu/gorm</a> (2000 stars)</li>
<li><a href="https://github.com/lunny/xorm">https://github.com/lunny/xorm</a></li>
</ul>
<p>There may be more that I missed. Which ORM do you think is better, and why?</p>
<hr/>**Comments:**<br/><br/>TheMerovius: <pre><p>I think <a href="https://github.com/jmoiron/sqlx">https://github.com/jmoiron/sqlx</a> gives the perfect balance between being simple and being convenient :) It's not an ORM, but then again, a "real" ORM is difficult to do in Go.</p></pre>hobbified: <pre><p>I was recently writing an app that was about 50% really boring CRUD, and I figured that I could use an ORM to save me from writing some of the boilerplate code, so I looked at what was out there. My conclusion: there's nothing worth the effort.</p>
<p>The majority of the packages calling themselves ORMs are straight-up lying because they don't actually handle relationships at all. gorm was an exception, with relationship support, prefetching support, multicreate, composable queries, and what looks like a pretty reasonable design, but when I tried to use it in practice I ran into its shortcomings pretty quickly.</p>
<ul>
<li><p>It doesn't <em>exactly</em> require surrogate keys, but in practice it does, because the only way it tells whether a row is already in storage is by whether the PK is a zero value. So if you try to have a table with a <em>meaningful</em> PK, things will start going wrong when it sees objects that have the PK filled in and decides they don't need to be inserted when they actually do (see the sketch after this list).</p></li>
<li><p>It can't prefetch many-to-many relationships, so you end up with a ridiculous number of individual queries if you want to fetch a deep data structure.</p></li>
<li><p>It just couldn't handle a lot of the queries I wanted to write.</p></li>
</ul>
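<p>Roughly, the pitfall with an application-assigned key looks like this. This is only a minimal sketch against the gorm v1 API; the table and connection details are assumptions for illustration:</p>
<pre><code>package main

import (
	"log"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres"
)

// Country has a meaningful, application-assigned primary key.
type Country struct {
	Code string `gorm:"primary_key"`
	Name string
}

func main() {
	// hypothetical connection details
	db, err := gorm.Open("postgres", "host=localhost dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.AutoMigrate(&Country{})

	c := Country{Code: "NZ", Name: "New Zealand"}

	// Save decides between INSERT and UPDATE by checking whether the primary
	// key is the zero value. Code is already set, so this issues an UPDATE,
	// which quietly matches zero rows if the record was never inserted.
	db.Save(&c)

	// New rows with pre-filled keys have to go through Create explicitly.
	db.Create(&c)
}
</code></pre>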
<p>I suppose I've been spoiled by 10 years of using <a href="http://p3rl.org/DBIx::Class">DBIx::Class</a> which can do pretty much <em>anything</em> and usually do it well, but I was definitely disappointed by the situation in Go.</p>
<p>I ended up just using sqlx, which wasn't too bad, although it was a pain in the ass having to have two versions of all my types, one with <code>sql.NullInt64</code>/<code>NullString</code>/<code>NullBool</code> in them, and one with the real things, and to have to manually copy data between them, just so that I could fetch queries with left joins in them.</p></pre>calebdoxsey: <pre><p>You probably know this, but others may not. You can create anonymous types directly in a function:</p>
<pre><code>package main

import "database/sql"

type User struct {
	ID    int
	Email string
}

func GetUser() (*User, error) {
	// Anonymous struct with Null* fields, scoped to this function,
	// so nullable columns (e.g. from LEFT JOINs) can be scanned safely.
	var u struct {
		ID    sql.NullInt64
		Email sql.NullString
	}
	// run your query, fill in &u...
	return &User{
		ID:    int(u.ID.Int64),
		Email: u.Email.String,
	}, nil
}
</code></pre>
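<p>To sketch the "run your query, fill in &u" step with sqlx, which several comments here recommend: the driver, DSN, and column names below are assumptions for illustration.</p>
<pre><code>package main

import (
	"database/sql"
	"log"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // assumed Postgres driver
)

type User struct {
	ID    int64
	Email string
}

func GetUser(db *sqlx.DB, id int64) (*User, error) {
	// Null* fields absorb NULLs coming from the LEFT JOIN.
	var u struct {
		ID    sql.NullInt64  `db:"id"`
		Email sql.NullString `db:"email"`
	}
	err := db.Get(&u, `SELECT u.id, p.email
	                   FROM users u
	                   LEFT JOIN profiles p ON p.user_id = u.id
	                   WHERE u.id = $1`, id)
	if err != nil {
		return nil, err
	}
	return &User{ID: u.ID.Int64, Email: u.Email.String}, nil
}

func main() {
	// hypothetical connection string
	db, err := sqlx.Connect("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	user, err := GetUser(db, 1)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%+v", user)
}
</code></pre>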
<p>You still have to do the mapping, but at least you don't have to expose the extra type.</p></pre>hobbified: <pre><p>Yeah, that does help a bit :)</p></pre>ecmdome: <pre><p>I don't use sqlx but have looked at it and this is definitely helpful. Makes it a lot cleaner.</p></pre>Dont_Reddit_Me: <pre><blockquote>
<p>I ended up just using sqlx, which wasn't too bad, although it was a pain in the ass having to have two versions of all my types, one with sql.NullInt64/NullString/NullBool in them, and one with the real things, and to have to manually copy data between them, just so that I could fetch queries with left joins in them.</p>
</blockquote>
<p>This concept might help: <a href="https://en.wikipedia.org/wiki/Data_access_object" rel="nofollow">Data access object</a>.</p>
<p>It won't save you from duplication, but it keeps things more organized.</p></pre>cypriss9: <pre><p>At UserVoice we use <a href="https://github.com/gocraft/dbr">https://github.com/gocraft/dbr</a> because it's simple, convenient, and fast. I personally prefer a struct mapping approach over a "full" ORM approach because it hits my sweet spot between full control and transparency over what is happening, and convenience.</p></pre>ecmdome: <pre><blockquote>
<p>gocraft/dbr doesn't use prepared statements. We ported mysql's query escape functionality directly into our package, which means we interpolate all of those question marks with their arguments before they get to MySQL. The result of this is that it's way faster, and just as secure.</p>
</blockquote>
<p>Not sure how I feel about this. Can someone verify that it's just as secure?</p></pre>TheMerovius: <pre><p>Query escaping isn't generally accepted as secure, because it's a ridiculously hard problem. Prepared statements split the parsing pass from the data-insertion pass, which is <em>guaranteed</em> to be secure. I also don't see why it should be slower…</p></pre>calebdoxsey: <pre><p>It's not a ridiculously hard problem. MySQL has a <code>mysql_escape_string</code> function. (<a href="https://github.com/mysql/mysql-server/blob/09ddec8757b57893ccd2f2c2482b3eec5ca811e5/libmysql/libmysql.c#L1148" rel="nofollow">https://github.com/mysql/mysql-server/blob/09ddec8757b57893ccd2f2c2482b3eec5ca811e5/libmysql/libmysql.c#L1148</a>) Assuming they ported it properly, it's secure.</p>
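<p>Either way, the application-level code looks the same: with Go's database/sql you pass placeholders and arguments separately, and the driver (or a library like dbr/dat on top of it) decides whether to prepare server-side or interpolate client-side. A minimal sketch, where the driver, DSN, and table are assumptions for illustration:</p>
<pre><code>package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // assumed MySQL driver
)

func main() {
	// hypothetical DSN
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The query text and the argument travel separately; application code
	// never splices untrusted input into the SQL string itself.
	email := "alice@example.com"
	var id int64
	if err := db.QueryRow("SELECT id FROM users WHERE email = ?", email).Scan(&id); err != nil {
		log.Fatal(err)
	}
	log.Println("user id:", id)
}
</code></pre>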
<p>Prepared statements can be slower because they involve multiple roundtrips to the server. (<a href="https://vividcortex.com/blog/2014/11/19/analyzing-prepared-statement-performance-with-vividcortex/" rel="nofollow">https://vividcortex.com/blog/2014/11/19/analyzing-prepared-statement-performance-with-vividcortex/</a>)</p></pre>TheMerovius: <pre><p>This is the first time I have heard the name of that function promoted as being secure; I usually only see it mentioned in horrible crash-and-burn contexts. Though I admit, it is also the first time I have heard of it as part of the MySQL server itself, as opposed to the functions of the same name in PHP, and I don't know what the difference is (if there is any; I assume so, otherwise you would probably not promote it as secure).</p></pre>TheMerovius: <pre><p>After reading your second link, I think the problem is blown out of proportion. Yes, it is a pitfall that the sql package might use prepared statements unexpectedly if used incorrectly, but once you know that, it should be easy to avoid. I would even say that the vast majority of applications only need a finite (and small-ish) number of queries, so you should be able to just prepare them once and reuse them (thus saving parsing time). But meh.</p></pre>elithrar_: <pre><p>It's 'hard' in that homebrew local interpolation is an area with a high attack surface. The old PHP functions didn't do this concept much of a favor, especially with function names like <code>mysql_real_escape_string</code> which replaced <code>mysql_escape_string</code> (absolute madness).</p>
<p>mgutz/dat has <a href="https://github.com/mgutz/dat#why-is-interpolation-faster" rel="nofollow">a comment around</a> local interpolation performance:</p>
<blockquote>
<p>Keep in mind that prepared statements are only valid for the current session, and unless the same query is executed MANY times in the same session there is little benefit in using prepared statements other than protecting against SQL injections. See the Interpolation Safety section above.</p>
</blockquote>
<p>mgutz/dat also added some <a href="https://github.com/mgutz/dat/commit/3b89fc59253b3a994fa3e8ab6ac4e3eee18ef51f" rel="nofollow">fuzzing tests</a> in response to a comment I made here—noting that fuzzing isn't foolproof.</p>
<p>If you're still really concerned about it—and I can understand that, although the homework appears to have been done—note that <code>dat.EnableInterpolation</code> is set to <code>false</code> by default.</p></pre>hobbified: <pre><p>If it's done right, it is. Many DB/driver combos don't use server-side prepared statements at all, but they still support placeholders in the API, and they just quote and substitute parameters for placeholders before sending the query to the server. Of course it's possible that there are bugs in the escaping or in the server's SQL parser that allow something untoward to happen, but it's also possible that there are bugs in the handling of prepared statements on the server that could cause trouble. In both cases, if people did their jobs correctly then you should be secure.</p></pre>interlock: <pre><p>We use dbr and skip the classic ORM stuff with it.</p></pre>Momer: <pre><p>While I sometimes enjoy using Ruby's ActiveRecord, when writing Go I've taken to writing out my queries by hand.</p>
<p>It doesn't make sense to me to know exactly what my data looks like and exactly how I'm handling it, and then hand it over to an ORM with fingers crossed.</p>
<p>Using good ol' <a href="https://github.com/lib/pq">lib/pq</a> and some elbow grease for the really complex queries. </p></pre>elithrar_: <pre><p>I use <a href="https://github.com/mgutz/dat">https://github.com/mgutz/dat</a> - it has some composable, ORM-like features, although I use it to load SQL files off disk and for its performance. <a href="https://github.com/jmoiron/sqlx">sqlx</a> is my older (but still very good) recommendation otherwise.</p></pre>mgutz: <pre><p>I'm also a fan of <code>sqlx</code>. <code>dat</code> is built on <code>sqlx</code> and makes working with Postgres more convenient. <code>dat</code> is friendly. There are contributed helpers that wrap existing <code>*sqlx.DB</code>, <code>sqlx.Ext</code> and <code>*sqlx.Tx</code>.</p>
<pre><code>DB := runner.NewDBFromSqlx(sqlxDB)
tx := runner.WrapSqlxTx(sqlxTx)
ext := runner.WrapSqlxExt(sqlxExt)
</code></pre></pre>ickorn: <pre><p>Thanks, I didn't know about <a href="https://github.com/mgutz/dat" rel="nofollow">https://github.com/mgutz/dat</a>; it finally looks like a Go ORM/database library that's a help and not a hindrance. Going to give it a try today and see how it holds up.</p></pre>ericflo: <pre><p>This one gets my vote too. Great software!</p></pre>radimm: <pre><p>Have been using gorp for quite some time now, but recently found myself happy with jmoiron/sqlx only. So not really an ORM at all. But at the end of the day I'm not in the 'get something done fast and ignore everything else' game (ignoring things like the database, views, stored procedures, etc.).</p></pre>tlianza: <pre><p>We use xorm. The documentation has a lot of holes, but so far it has been a good library without major bugs.</p></pre>mephux: <pre><p>+1 Same here. They also have support for the pure-Go sqlite implementation.</p></pre>_rusht: <pre><p>I currently use and prefer <a href="https://github.com/jmoiron/sqlx">sqlx</a>, but <a href="https://github.com/gocraft/dbr">dbr</a> and <a href="https://github.com/mgutz/dat">dat</a> also look like solid options.</p>
<p>None of these are ORMs, and Go doesn't have anything even remotely as full featured as Active Record or Entity Framework, but then again I have found that it's easier to write an optimized SQL statement by hand than try to optimize a query in either one of those ORMs. </p></pre>ecmdome: <pre><p>I use gorm ... I definitely have had some frustration with it, but overall it works well. When I started using it the migration features weren't working so I have never used the migrations. </p></pre>Bromlife: <pre><p>Upon my entry to Golang I was used to using an ORM. I was porting an application over that was built on SQLAlchemy. After messing with some of the ORMs that you've listed and having discussions with the community I realised the error of my ways. <a href="http://www.hydrogen18.com/blog/golang-orms-and-why-im-still-not-using-one.html" rel="nofollow">This being one of the articles I read</a></p>
<p>Whilst it was a bit of a pain to pick my SQL skills back up, I was so happy with the results. When comparing my SQL query against the original app's SQLAlchemy query, mine was orders of magnitude smaller and more efficient.</p>
<p>SQLx picks up a bit of the slack, without adding too much magic. </p>
<p>If you're using an ORM for scaffolding, then might I suggest either using a proper scaffolding tool or just using a GUI like pgAdmin3?</p>
<p>ORMs definitely made the initial entrance easier, but you will lose performance & they'll give you headaches on things like complex joins. Writing your own queries means you know exactly what is happening.</p></pre>QThellimist: <pre><p>I've actually read that ORMs are good because they cache query results and handle errors internally. They're not as fast as raw SQL, but with a cache they can actually be faster. I didn't see any caching in the ORM packages I looked at (maybe I missed it), but with caching, wouldn't an ORM actually be a useful layer?</p></pre>mgutz: <pre><p><a href="https://github.com/mgutz/dat/tree/caching#caching">dat caching</a></p>
<p>It's not merged in yet, but it will be soon enough.</p></pre>Bromlife: <pre><p>All of the apps I've written in Golang are heavily cached anyway. I've made heavy use of Groupcache & Redis where appropriate. Adding that already existing layer to my database logic is simple & makes a lot more sense than relying on the ORM. With Groupcache I can maintain data caches between servers and only hit the database when necessary. Wouldn't you rather know where & how your queries were being cached?</p>
<p>It's pretty simple to roll your own caching layer that makes use of already existing libraries & tools like <a href="https://github.com/golang/groupcache" rel="nofollow">Groupcache</a> & <a href="https://github.com/go-redis/redis" rel="nofollow">Redis</a>. It means you are in control of how your data is being cached, which in my opinion is preferable.</p></pre>vmihailenco: <pre><p>Take a look at <a href="https://github.com/go-pg/pg" rel="nofollow">https://github.com/go-pg/pg</a></p>
<ul>
<li>PostgreSQL only (how often do you switch database? :)</li>
<li>automatic connection management</li>
<li>raw SQL for querying</li>
<li>struct mapping</li>
<li>fast and efficient, since it combines a PG client and struct mapping</li>
</ul></pre>mattn: <pre><p>Another one. <a href="https://github.com/naoina/genmai" rel="nofollow">https://github.com/naoina/genmai</a></p>
<p>I like this.</p></pre>nwjlyons: <pre><p>Thanks for this list. I just learnt about gorm. Its documentation looks really good.</p></pre>diegobernardes: <pre><p>And what about migrations? Gophers tend to use raw SQL in the data layer, but migrations aren't something I can live without. What are you guys using?</p></pre>mnsota: <pre><p>go generate</p></pre>om0tho: <pre><p>Yeeeaah, buddy. <a href="https://github.com/variadico/scaneo" rel="nofollow">https://github.com/variadico/scaneo</a></p></pre>