<p>Say you have a backend service written in Go. The service is made up of interacting objects, potentially a lot of them. How do you manage, view and store the state? How do you reason about the state at any given time?</p>
<p>Is the state stored in each object, spread throughout your app like LDL cholesterol? Does the state get aggregated in one place (à la a Redux store)? What if you want to rehydrate your backend from a single JSON file? Do you persist everything to the 'database' and use that as your app state?</p>
<p>How do you personally reason about the state of your Go app?
Please chime in with real experiences! :)</p>
<hr/>**Comments:**<br/><br/>epiris: <pre><p>Based on how your question reads, are your state boundaries within a single process space? That's really going to vary a lot from program to program. Typically, for any full service/app, I end up using channels, sync, atomic, and unsafe atomic, in that order of frequency and for the same things.</p>
<p>Unsafe atomic is the one I use the least: it makes it really simple to implement a multi-reader, single-writer queue with very low contention for any T, but it's hard to justify and I've only used it once in a real application.</p>
<p>Atomic I use often, sometimes more than mutexes I suppose, but mutexes I still use more consistently. Atomic I use for very simple state transitions in places that are hot and shouldn't have a Mutex or RWMutex. For example, a call site with a lot of lock contention may have a condition that must be checked, but that condition occurs only rarely. A service that can shut down, say, might get a state field of type int64 with a set of consts like:</p>
<pre><code>const (
    StateStopped int64 = iota
    StateStarting
    StateRunning
    StateStopping
)
</code></pre>
<p>Then a simple helper method named transition(to, from int64) (ok bool) does a compare-and-swap during startup/shutdown, e.g. transition(StateStarting, StateRunning). Usually the Start(ctx), Stop(ctx), and any helper initThing(ctx) methods are guarded by a single mutex, but they still have to transition atomically, because service methods like GetThing() would otherwise race against them since they don't hold the lock. Instead, the service methods do a simple LoadInt64 to check that the state is running and, if not, return an error. This gives good performance while being hard to screw up.</p>
<p>Sometimes when a struct has just a single field of a consistent T that needs to be read very often but only has a single state to track, I'll use a sync/atomic Value, again because it's hard to screw up. For example, a struct with a single err field of type error: I may need to check before any operation whether an error occurred and throw away the struct, but otherwise I can continue on and use other synchronization primitives. These spots work well with an atomic Value; performance is close to an atomic int, but you don't need multiple states, you just guard an assignment.</p>
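<p>Roughly, the pattern reads like the sketch below. The Service type, Start, and GetThing are illustrative names rather than anything from the comment, and the transition helper here takes (from, to) in the natural CAS argument order:</p>
<pre><code>package service

import (
    "errors"
    "sync"
    "sync/atomic"
)

const (
    StateStopped int64 = iota
    StateStarting
    StateRunning
    StateStopping
)

// Service is an illustrative type; only the state-handling pattern comes
// from the comment above.
type Service struct {
    mu    sync.Mutex // guards Start/Stop and init helpers
    state int64      // read atomically by hot-path methods
}

// transition does a compare-and-swap between states.
func (s *Service) transition(from, to int64) (ok bool) {
    return atomic.CompareAndSwapInt64(&s.state, from, to)
}

func (s *Service) Start() error {
    s.mu.Lock()
    defer s.mu.Unlock()
    if !s.transition(StateStopped, StateStarting) {
        return errors.New("service: already started")
    }
    // ... expensive initialization, still guarded by mu ...
    if !s.transition(StateStarting, StateRunning) {
        return errors.New("service: startup aborted")
    }
    return nil
}

// GetThing is a hot-path method: it never takes the mutex, it only does
// an atomic load and bails out unless the service is running.
func (s *Service) GetThing() (string, error) {
    if atomic.LoadInt64(&s.state) != StateRunning {
        return "", errors.New("service: not running")
    }
    return "thing", nil
}
</code></pre>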
<p>Beloved WaitGroup needs no mention for the common goroutine fan-out. I make a lot of use of errfuncs: I have a small copy-pasta lib that adds some sugar for the type func() error, and you can do a lot with that function signature thanks to closures; it uses WaitGroup in a lot of places. A lesser-used WaitGroup trick I like to pull out in unit testing is creating a thundering herd for testing heavy contention: basically just wg.Add(1), start N funcs that call wg.Wait() at function entry, then call wg.Done() to start them all at once when ready.</p>
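<p>A quick sketch of that thundering-herd trick; hammerSharedResource is a hypothetical stand-in for whatever contended code path the test exercises:</p>
<pre><code>package service_test

import (
    "sync"
    "testing"
)

// hammerSharedResource stands in for the contended code path under test
// (hypothetical helper, not from the comment).
func hammerSharedResource() {}

func TestHeavyContention(t *testing.T) {
    var start sync.WaitGroup // the gate: one Add, one Done
    var done sync.WaitGroup  // joins the herd at the end
    start.Add(1)

    const n = 100
    done.Add(n)
    for i := 0; i < n; i++ {
        go func() {
            defer done.Done()
            start.Wait()           // every goroutine parks here...
            hammerSharedResource() // ...then they all run at once
        }()
    }

    start.Done() // releasing the gate starts the whole herd together
    done.Wait()
}
</code></pre>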
<p>Mutexes I touched on; they're what I go to for colder call sites that aren't worth the dev effort of reasoning about state, which always has the potential for error. Lock/defer-unlock is harder to screw up in a big block of rarely called initialization code, periodic cache updates, etc. I don't get very interesting with locks; occasionally I'll use one to coordinate the entry of a set of goroutines that should have exactly one running at a time, where all of them must complete some work but not before collectively becoming ready, similar to the WaitGroup usage above except you only want one function running at a time.</p>
<pre><code>var l sync.Mutex
l.Lock()
for i := ... {
    // create something expensive so this goroutine can be ready
    go func() {
        l.Lock()
        defer l.Unlock()
    }()
}
// now that the expensive initialization is over, we can release
l.Unlock()
</code></pre>
<p>There are better patterns for the above in most cases, but it's good to illustrate locks as coordination rather than the usual "guard this thing so -race doesn't bicker at me."</p>
<p>Channels are covered extensively so I won't bother. Happy coding.</p></pre>Pythonistic: <pre><p>In an enterprise single sign-on (SSO) service I developed, I identified five different kinds of state. In order of durability:</p>
<ol>
<li>Channels. A few asynchronous goroutines only receive work via channels, such as logging or managing connection-pool resources. A small queue (~50) for most channels limits blocking (see the logger sketch after this list).</li>
<li>Context, or request-level state. We populate a <code>context.Context</code> value with information about the request (IP address, GUID, etc.) that will be needed for logging, plus some background information, for every request.</li>
<li>Maps. I use maps to store configuration values (easy to reload, which is a big win for production), resource pools (like a connection pool for the LDAP server), form tokens and XSS tokens/requests, and even login-retry throttling. These are in-memory, each has an eviction implementation, and all are guarded by mutexes (a config sketch also follows the list).</li>
<li>Session Cookies, used to save session state between requests. The SSO cookie is only available to the SSO server and is encrypted to keep employees and third parties from crafting their own cookies. The cookie stores the user's primary credential (username), session expiration time, and sign-on type. If the cookie is cleared, the session ends with the next authentication challenge.</li>
<li>Database records. These are stored in places like the LDAP server (accessed with a wrapper around <a href="https://github.com/go-ldap/ldap">go-ldap/ldap</a>), the RADIUS server for TOTP challenges, and a MySQL database. LDAP stores credentials and groups and provides password authentication. MySQL stores single-use tokens for password resets and some secondary data used for verification that isn't in the LDAP records.</li>
</ol>
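<p>A minimal sketch of item 1, assuming illustrative names (logLine, logCh, startLogger) that aren't from the comment: a single asynchronous goroutine drains a small buffered channel so request handlers rarely block on logging.</p>
<pre><code>package main

import (
    "fmt"
    "os"
)

type logLine struct {
    level string
    msg   string
}

// logCh is the small buffered queue (~50); senders only block if the
// logging goroutine falls that far behind.
var logCh = make(chan logLine, 50)

// startLogger starts the single asynchronous goroutine that drains the
// queue and writes each line out.
func startLogger() {
    go func() {
        for line := range logCh {
            fmt.Fprintf(os.Stderr, "[%s] %s\n", line.level, line.msg)
        }
    }()
}

func main() {
    startLogger()
    logCh <- logLine{level: "INFO", msg: "request handled"}
    // note: a real service would flush/close logCh on shutdown; this
    // sketch may exit before the line is printed.
}
</code></pre>
<p>And a rough sketch of the mutex-guarded, reloadable configuration map from item 3; the Config type and its methods are illustrative assumptions, and eviction is omitted:</p>
<pre><code>package config

import "sync"

// Config holds configuration values behind an RWMutex so reads stay
// cheap while the whole map can be swapped out on reload.
type Config struct {
    mu     sync.RWMutex
    values map[string]string
}

func New(initial map[string]string) *Config {
    return &Config{values: initial}
}

// Get reads a single value under a read lock.
func (c *Config) Get(key string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    v, ok := c.values[key]
    return v, ok
}

// Reload swaps in a freshly parsed map, e.g. after re-reading a config
// file, which is what makes reloading in production cheap.
func (c *Config) Reload(fresh map[string]string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.values = fresh
}
</code></pre>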
<p>My next project, porting Hazelcast to provide shared, durable state between service instances, will need to track state for authenticated clients during a TCP session (is the client logged in? does the client have rights to that partition?) and will also need to durably persist state across containers and partitions.</p></pre>