<p>I'm writing a web app, and I've designed it so that most of the structs the user will see (user, server, machine) are stored in a map[int]*user (or the corresponding map type for each struct).</p>
<p>When I need to access some data about a user, the app first tries to grab it from the map; if the data is over 15 minutes old (or doesn't exist in the map), it refreshes the map from the database (SELECT data FROM user) and uses the new data.</p>
<p>My question is: should I be using sync.Mutex on all my data 100% of the time? The map is only used to cache some data so I don't hit the database so often, and using sync.Mutex (to avoid a data race) hardly seems worth it. Is it OK to allow the possibility of a data race on a simple struct containing a username, etc.?</p>
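<p>Roughly, the pattern looks like this (a simplified sketch with a <code>sync.RWMutex</code> guarding the map; <code>cacheEntry</code> and <code>loadUserFromDB</code> are placeholder names):</p>
<pre><code>type user struct {
    Username string
}

type cacheEntry struct {
    u       *user
    fetched time.Time
}

var (
    mu    sync.RWMutex
    users = make(map[int]*cacheEntry)
)

func getUser(id int) (*user, error) {
    mu.RLock()
    e, ok := users[id]
    mu.RUnlock()
    if ok && time.Since(e.fetched) < 15*time.Minute {
        return e.u, nil
    }
    u, err := loadUserFromDB(id) // e.g. SELECT data FROM user WHERE id = ?
    if err != nil {
        return nil, err
    }
    mu.Lock()
    users[id] = &cacheEntry{u: u, fetched: time.Now()}
    mu.Unlock()
    return u, nil
}
</code></pre>
<p>(Two goroutines that miss at the same time may both query the database; for a cache that is usually acceptable, since the later write simply wins.)</p>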
<hr/>**Comments:**<br/><br/>peterbourgon: <pre><p>Yes. There is no such thing as a benign data race.</p></pre>dgryski: <pre><p>Yes. <a href="https://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong">https://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong</a></p></pre>slavik262: <pre><blockquote>
<p><code>op_count.store(op_count.load(std::memory_order_relaxed) + 1, std::memory_order_relaxed);</code></p>
</blockquote>
<p>Wat. Try</p>
<p><code>op_count.fetch_add(1, std::memory_order_relaxed);</code></p></pre>gohacker: <pre><blockquote>
<p>Should I be using sync.Mutex in all my data 100% of the time?</p>
</blockquote>
<p>No. There's also <code>sync.RWMutex</code>.</p></pre>gngeorgiev: <pre><p>And channels</p></pre>comrade_donkey: <pre><p>and the entire <code>sync/atomic</code> package.</p></pre>mwholt: <pre><p>Careful using channels as a mutex, though (don't) -- you can introduce less obvious data races, namely, making atomic operations non-atomic, even if there's not technically a "memory race".</p></pre>gngeorgiev: <pre><p>Both have their use cases</p></pre>sh41: <pre><p>As long as you want your program to be correct, yes, you should.</p></pre>ronny10: <pre><p>Yes. And in Go 1.9, you can use sync.Map</p></pre>justinisrael: <pre><p>To sum up some of the other answers: it sounds like you feel it is a performance concern to add synchronized access to your shared map. If that is your concern, then you should run a benchmark to confirm your theory that race protection costs more than it's worth and that skipping it can still result in correct behavior. If you can't prove that, then you might as well add race protection to your shared memory.</p></pre>Acidic92: <pre><p>Maybe I should've added that the only time data is written to the map is when it's read from the database. The rest of the time, it's just read. So it just seemed unnecessary to add all that protection.</p></pre>carsncode: <pre><p>It's not unnecessary if it's in any way possible for reads and writes to happen at the same time. There's always sync.RWMutex to avoid serializing reads.</p></pre>neopointer: <pre><p>Like <a href="/u/carsncode" rel="nofollow">/u/carsncode</a> said, a read-write mutex is what you need. Basically, the only time the read code will block is when a write is occurring. Seems a good fit for your use case.</p></pre>George3d6: <pre><p>Of course not, just hide them well enough.</p>
<p>Instead of synchronizing using atomics and mutexes you can sometimes "sort of" synchronize using sleep.</p>
<p>There are also situations where data races are mostly benign or not obvious when you first launch your software, so extra points for finding those and exploiting them... you will never be out of a job and probably get to be the most productive developer ever.</p>
<p>The one important thing to remember is that fixing the bugs brought on by race conditions should always be done in such a way that the resulting code still isn't thread safe, if possible (e.g. sleep(500ms) isn't working anymore, play around with other numbers).</p></pre>bonekeeper: <pre><p>LOL</p></pre>dnshane: <pre><p>Did you benchmark the mutex? Unless your application is handling hundreds of thousands of users per second, I doubt locking will have a measurable impact on performance. If it has any impact, you can switch to an RWLock, which should avoid blocking most of the time in your case.</p>
<p>If this has a real, meaningful impact on performance, then you can consider lockless data structures, rather than just shrugging and not worrying about it.</p></pre>stone_henge: <pre><p>What would you be trading the correctness for? In most cases it's going to be more fun to write the couple of lines of code that lock the map/user while modifying it than to debug a rare data race and clean up in the aftermath.</p></pre>Acidic92: <pre><p>It just seems confusing; for example, do I add a lock before a for range loop? It seems like a hassle for data where the only thing that could go wrong is that some cached information may be a few minutes out of date.</p>
<p>I started implementing it in my code, but there were many situations where I wasn't sure whether I should add a lock, and whether I should add a lock for the whole map along with a lock for each value in it.</p></pre>stone_henge: <pre><blockquote>
<p>It just seems confusing; for example, do I add a lock before a for range loop?</p>
</blockquote>
<p>Yes, if the application could be mutating the data in the map, or the map itself, from another thread while you are ranging, you should.</p>
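<p>For example, assuming a package-level <code>mu sync.Mutex</code> guarding <code>users</code>, you would hold it for the duration of the loop:</p>
<pre><code>mu.Lock()
for _, u := range users {
    fmt.Println(u.Username) // nothing can add or remove entries while mu is held
}
mu.Unlock()
</code></pre>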
<blockquote>
<p>It seems like a hassle for data where the only thing that could go wrong is that some cached information may be a few minutes out of date.</p>
</blockquote>
<p>It seems that you've fundamentally misunderstood the concept of a data race. Your data being out of date has nothing to do with data races. Data races occur when two or more threads try to perform conflicting operations on the same data at the same time. For example, one thread could read the data while another is writing it, meaning you can never be sure that the data is complete or consistent at read time. The worst case is not that your data is out of date; it's that your data is corrupt. You should always consider your data to be in an undefined state when you have the potential for data races.</p>
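<p>As a concrete illustration, a toy program like this (entirely hypothetical) is reliably flagged when run with <code>go run -race</code>, and may even die with a "concurrent map read and map write" fatal error, because one goroutine writes the map while another reads it:</p>
<pre><code>package main

func main() {
    users := map[int]string{}
    done := make(chan bool)
    go func() {
        for i := 0; i < 1000; i++ {
            users[i] = "alice" // writer goroutine
        }
        done <- true
    }()
    for i := 0; i < 1000; i++ {
        _ = users[i] // reader: races with the writer
    }
    <-done
}
</code></pre>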
<blockquote>
<p>I started implementing it in my code, but there were many situations where I wasn't sure whether I should add a lock, and whether I should add a lock for the whole map along with a lock for each value in it.</p>
</blockquote>
<p>Add locks (or your preferred synchronization primitive) for any data that may otherwise be accessed simultaneously from more than one thread, where at least one of the threads could be mutating the data. If you know that to be true, it shouldn't be a question whether you should address the problem.</p></pre>Acidic92: <pre><blockquote>
<blockquote>
<p>I started implementing it in my code, but there were many situations where I wasn't sure whether I should add a lock, and whether I should add a lock for the whole map along with a lock for each value in it.</p>
</blockquote>
<p>Add locks (or your preferred synchronization primitive) for any data that may otherwise be accessed simultaneously from more than one thread, where at least one of the threads could be mutating the data. If you know that to be true, it shouldn't be a question whether you should address the problem.</p>
</blockquote>
<p>Thanks for all the info.</p>
<p>If I have something like the following:</p>
<pre><code>type user struct {
    sync.Mutex
    Username string
}

var users = make(map[int]*user)
</code></pre>
<p>and I want to range over each user, should I make a new mutex for the whole of the users map (var usersMutex sync.Mutex)? Or should I just lock each individual user within the map as I loop?</p>
<pre><code>for _, u := range users { // or "for id, u := ..." if the key is needed
    u.Lock()
    // ... read or write u's fields ...
    u.Unlock()
}
</code></pre>
<p>or</p>
<pre><code>var usersMutex sync.Mutex

usersMutex.Lock()
for _, u := range users {
    u.Lock()
    // ... read or write u's fields ...
    u.Unlock()
}
usersMutex.Unlock()
</code></pre>
<p>Example one (locking only each individual map value)?</p>
<p>or</p>
<p>Example two (locking the whole map, along with each individual map value)?</p></pre>stone_henge: <pre><p>Just apply the same logic to the map. While one or more threads are ranging over it, could some other thread be mutating it, for example by adding or removing a value or replacing one of the values (in this case, user pointers)?</p>
<p>You described a case where the user is missing from the map altogether, in which case it is retrieved from a database and added to the map. If that could possibly happen while you are ranging, #2 is the correct way to go about it.</p></pre>Acidic92: <pre><p>Awesome, this really helped me, thanks heaps!</p></pre>sethammons: <pre><p>If you find yourself sprinkling locks and unlocks everywhere, consider encapsulating that logic inside a getter and setter. You can avoid the non-type-safe sync.Map in Go 1.9 and just have a UserContainer that is composed of a map of users and a mutex. Then you would have UserContainer.Update(...) or .Get() or whatever operations you need. In those functions, you would use UserContainer.mutex.Lock() and defer UserContainer.mutex.Unlock().</p></pre>stone_henge: <pre><p>Happy hacking!</p></pre>bonekeeper: <pre><p>You absolutely need to lock it to avoid data corruption from two goroutines editing the same data in memory at the same time, or one goroutine reading data that is being edited by another (while the map's internal state in memory is still undetermined).</p>
<p>Alternatively, you can isolate your maps in a single goroutine and query the data through channels (instead of reading from the map directly), but using an RW lock would have less contention, I believe.</p></pre>epiris: <pre><p>The best synchronization method is no synchronization at all; usually you can avoid big-bags-of-all-the-things. Performance probably isn't an issue here, but when it is, you can mitigate it with a lookup table that is initialized in init() and never mutated afterwards. How you look up your subsets of data will vary, but to give an example, your int-keyed map could have a lookup table in front of it like:</p>
<pre><code>// Set up userLut in init(); don't use a pointer
// if it's just a mutex and a map.
var userLut = [64]*userCacheT{}

// Always lock inside each cache slot, but never userLut itself.
// Finding the slot for a user ID can be as simple as
// (ensure the ID is non-negative first):
uc := userLut[requestUserID%64]
uc.Lookup(requestUserID) // Lookup holds the mutex internally
</code></pre>
<p>Just to be clear, mutexes are fast (it's safe to assume under 100ns), and raw lock speed isn't what the above mitigates (because it doesn't need to). It alleviates contention by spreading the tasks that can use the cache concurrently across 64 slots (given a perfect distribution). If updating users ever became a multi-second task, you would want a worker queue of some sort, where each request waits independently without holding the mutex: hold it only to check existence/expiration, then release it and block the caller until the updated data comes back through a worker pool.</p>
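<p>To fill in the sketch above (field names and init code here are illustrative; the expiry check and DB refresh are omitted), <code>userCacheT</code> could be just a mutex and a map:</p>
<pre><code>type userCacheT struct {
    mtx   sync.Mutex
    users map[int]*user
}

// Lookup is the only place the slot's mutex is held.
func (c *userCacheT) Lookup(id int) *user {
    c.mtx.Lock()
    defer c.mtx.Unlock()
    return c.users[id]
}

func init() {
    for i := range userLut {
        userLut[i] = &userCacheT{users: make(map[int]*user)}
    }
}
</code></pre>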
<p>A million ways to solve this, but just food for thought. Have a good one.</p></pre>darkmagician2: <pre><p>Someone already said it, but just to reinforce: Go 1.9 added the sync.Map type, so you can have a map that is automatically concurrency-safe :)</p></pre>Acidic92: <pre><p>Yeap, I'm going with this method :) Thanks</p></pre>bonekeeper: <pre><p>An RWLock would be better, since reads are far more common than writes in your case. Using a regular lock, only one request can read from the map at any time; with an RWLock many can safely read at the same time, until your DB updater grabs the write lock, at which point readers will wait for the update to finish.</p></pre>slowratatoskr: <pre><p>do you even ComSci bruh?</p></pre>Acidic92: <pre><p>Neg</p></pre>hell_0n_wheel: <pre><p>Everyone in this thread seems hung up on 'correctness' but according to CAP theorem, you're going to have to give up either availability or partition tolerance to achieve 'correctness'. For a web application, this doesn't make sense, and the industry proves this out: all the popular web apps (including Facebook, LinkedIn, Twitter) toss consistency out the window in favor of keeping availability and partition tolerance.</p>
<p>All this being said, OP needs to specify his requirements a little better before we can answer his question fully. If he's writing a banking app, then correctness is much more important than if he's writing a blogging framework...</p></pre>sethammons: <pre><p>I believe you are misunderstanding CAP. CAP is for distributed data stores. The P is for Partition Tolerance, which deals with the network dropping or delaying messages between nodes. Let's assume that the CPU could arbitrarily delay or drop instructions. Even in this case, you are not faced with giving up C or A unless the CPU or kernel was doing strange shit. From the wiki on CAP (<a href="https://en.wikipedia.org/wiki/CAP_theorem" rel="nofollow">https://en.wikipedia.org/wiki/CAP_theorem</a>):</p>
<p>"CAP is frequently misunderstood as if one had to choose to abandon one of the three guarantees at all times. In fact, the choice is really between consistency and availability only when a network partition or failure happens ; at all other times, no trade-off has to be made."</p></pre>WikiTextBot: <pre><p><strong>CAP theorem</strong></p>
<p>In theoretical computer science, the CAP theorem, also named Brewer's theorem after computer scientist Eric Brewer, states that it is impossible for a distributed data store to simultaneously provide more than two of the following three guarantees: consistency, availability, and partition tolerance.</p>
<p>In other words, the CAP Theorem states that in the presence of a network partition, one has to choose between consistency and availability. Note that consistency as defined in the CAP Theorem is quite different from the consistency guaranteed in ACID database transactions.</p>
</pre>hell_0n_wheel: <pre><p>CAP is often talked about in terms of data store architecture, sure, but it can be applied to any distributed application.</p>
<p>I'm still unclear on OP's application, but I get the impression he doesn't want to lock his server to wait for his data cache to update... in effect, he's killing the availability of his application while it's locked, to achieve consistency in the data it serves.</p>
<p>If instead he used a reader-writer lock, he could keep the application available, even though it would only provide eventual consistency. Does this make sense?</p></pre>slowratatoskr: <pre><p>you can't be serious, right?</p></pre>Acidic92: <pre><p>My web app simply stores things like a user's username and password (general info), and a machine's IP address, username, and password, and nothing more than that.</p></pre>
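<p>For completeness, the <code>sync.Map</code> route that OP settled on could look something like the sketch below. <code>Load</code> and <code>Store</code> are the actual <code>sync.Map</code> methods; the <code>entry</code> type and <code>loadUserFromDB</code> helper are illustrative:</p>
<pre><code>var users sync.Map // behaves like a concurrent map[int]entry

type entry struct {
    u       *user
    fetched time.Time
}

func getUser(id int) (*user, error) {
    if v, ok := users.Load(id); ok {
        e := v.(entry)
        if time.Since(e.fetched) < 15*time.Minute {
            return e.u, nil
        }
    }
    u, err := loadUserFromDB(id)
    if err != nil {
        return nil, err
    }
    users.Store(id, entry{u: u, fetched: time.Now()})
    return u, nil
}
</code></pre>
<p>Keep in mind that <code>sync.Map</code> only protects the map structure itself; if individual <code>*user</code> values are mutated concurrently, they still need their own synchronization (e.g. the embedded mutex discussed above).</p>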