<p>I wrote a monolithic web application for the server, but while researching I realized there is a microservices approach (one service per module) that works well because every service can run independently, and development stays easy since we can add new services as we build them. My main concern: is RPC a good way for the main web service and a task service to interact? Do people use this in production, and doesn't it add latency, since each service runs as a separate process and every call becomes inter-process communication?</p>
<p>I am planning to have an auth module that does all auth tasks, plus one Redis instance that caches all auth details, so other modules don't need to talk to the auth module and can fetch the data from Redis to validate the user.</p>
<p>I have read a lot of articles about microservices, but none of them explain how the web server interacts with the services (or the disadvantages of doing so), or maybe I am missing those parts while searching. Can someone point me in the right direction?</p>
<hr/>**Comments:**<br/><br/>PsyWolf: <pre><p>There is some overhead compared to in-process communication, but it's usually acceptable. There's no one-size-fits-all answer, but the benefits of a microservice approach often outweigh the small performance hit.</p>
<p>I'd recommend building one of your components not directly as a service but as a library package that you expose through a very thin layer that turns it into a microservice. Then, if the overhead does turn out to be unacceptable, it's very simple to change your RPC calls into in-process calls directly to the package.</p></pre>myth007: <pre><p>Thanks, that seems to be a nice idea to validate and test both. Will try this out. By the way, which RPC library is typically used, any pointers? I have heard about gRPC, which uses Google's Protocol Buffers.</p></pre>PsyWolf: <pre><p>I haven't used gRPC, but it seems like a fine choice if you want to be able to communicate with this service from different languages. If all of your services are in Go, then the standard library has <a href="http://golang.org/pkg/net/rpc/" rel="nofollow">http://golang.org/pkg/net/rpc/</a>, which uses a Go-specific binary message format called gob under the hood. Gob was designed by the Go team in response to some of the issues they had with Google's protocol buffers when building the Go implementation of them. You can read more about it here: <a href="http://blog.golang.org/gobs-of-data" rel="nofollow">http://blog.golang.org/gobs-of-data</a></p></pre>myth007: <pre><p>Thanks for providing the details. I will read about these RPC libraries. Yes, the motivation behind using gRPC was to keep it flexible for adding other languages, in case we need a different language for some specific task, but Go should be fine as per my current understanding.</p></pre>sleep_well: <pre><p>Think thrice before going ahead with the "microservice" ideology; make sure you read the pros and cons thoroughly.</p>
<p>Go's native RPC implementation is more than adequate for building distributed services. You usually do not need to rely on any third-party RPC implementations unless there is an interoperability requirement.</p></pre>myth007: <pre><p>As of now I am fine with the monolithic approach, but to make my server scalable from the start I was thinking of microservices. I am not sure whether I need them now, but the approach seems more powerful, apart from the communication cost of RPC, since some services might run on different systems. That is the main reason I wanted some basic guidance from the community. When I read articles, they make me believe this is a great approach, maybe because everyone talks about its greatness. Thanks for your view. Another motivation is to find a scalable, robust architecture that I can reuse on future projects.</p></pre>ApatheticGodzilla: <pre><p>Microservices are probably a good idea for a large organization with lots of moving parts (think of Amazon, for example), but for most projects a well-designed monolithic approach will scale just fine. There's no such thing as a free lunch, and microservices introduce a ton of complexity (database partitioning, latency, etc.) you really don't want to deal with at the beginning.</p></pre>dwevlo: <pre><p>There are ways to improve IPC performance if that becomes an issue: for example, you can use shared memory if you need to transfer a lot of data. (<a href="https://github.com/calebdoxsey/tutorials/tree/master/integration/shm" rel="nofollow">https://github.com/calebdoxsey/tutorials/tree/master/integration/shm</a>)</p>
<p>I think you should call your auth service directly and hide Redis behind it. (The auth service should either have its own Redis instance or namespace itself inside a shared Redis.) Your services need well-defined APIs so they can remain relatively stable, and relying on an unenforced, ad-hoc schema (like you'd have with Redis) is going to lead to headaches later on.</p>
<p>As far as how the web server interacts, it's pretty straightforward. However you define the protocol (RPC, HTTP, something custom), you will have some client that you make the call on:</p>
<pre><code>var res AuthenticateResult
err := client.Call("Auth.Authenticate", AuthenticateRequest{username, password}, &res) // net/rpc names methods "Type.Method"; send a bcrypt hash instead of a raw password if you prefer
</code></pre>
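<p>For context, here is a minimal sketch of what the auth service side could look like with Go's <code>net/rpc</code>; the <code>Auth</code> type, the request/response structs, and the port are illustrative, not a prescribed design:</p>
<pre><code>package main

import (
	"log"
	"net"
	"net/rpc"
)

// Hypothetical request/response types shared by client and service.
type AuthenticateRequest struct{ Username, Password string }
type AuthenticateResult struct {
	OK    bool
	Token string
}

// Auth is the receiver whose exported methods become RPC endpoints.
type Auth struct{}

// Authenticate follows the net/rpc signature: (args, pointer-to-reply) error.
func (a *Auth) Authenticate(req AuthenticateRequest, res *AuthenticateResult) error {
	// Real credential checks would go here; this is only a placeholder.
	res.OK = req.Username != "" && req.Password != ""
	return nil
}

func main() {
	rpc.Register(new(Auth)) // exposes "Auth.Authenticate"
	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	rpc.Accept(ln) // serve connections until the listener closes
}
</code></pre>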
<p>Of course you will need to know how to reach the server. You could use DNS (<code>auth.example.com</code>) or a config file:</p>
<pre><code># config.yaml
auth_server: 1.2.3.4
</code></pre>
<p>Or etcd/zookeeper.</p>
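<p>However the address is obtained (DNS, config file, etcd), the client side is then just a dial plus a call. A rough sketch, reusing the hypothetical types from the service sketch above:</p>
<pre><code>// Assumes "log" and "net/rpc" are imported.
// Resolve the auth service address however you like: DNS name, config value, etcd, etc.
addr := "auth.example.com:9000" // placeholder; could come from config.yaml instead

client, err := rpc.Dial("tcp", addr) // net/rpc over TCP
if err != nil {
	log.Fatal(err)
}
defer client.Close()

var res AuthenticateResult
err = client.Call("Auth.Authenticate", AuthenticateRequest{"alice", "secret"}, &res)
if err != nil {
	log.Fatal(err)
}
log.Printf("authenticated: %v", res.OK)
</code></pre>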
<p>One benefit to using HTTP (with say jsonrpc) is that it makes things a little easier to debug and load balance (you can throw nginx in front of a bunch of auth servers).</p>
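<p>For illustration, the nginx side of that could be as small as the following; the addresses and path are made up:</p>
<pre><code># nginx.conf fragment: round-robin a pool of auth servers behind one endpoint
upstream auth_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location /auth/ {
        proxy_pass http://auth_backend;
    }
}
</code></pre>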
<p>Unless you're planning to transfer a ton of data over these requests, the built-in RPC library can handle a lot of traffic.</p></pre>myth007: <pre><p>Thanks a lot, this is very informative, especially the auth part. The only reason I wanted a separate place for auth data is that every service needs to make sure auth is validated, so every service would first call auth and then do its task, or maybe my main server would first wait for auth to validate and then call the other services to do their work.
My main concern is speed: if I add DNS, then I need one extra routing step through DNS before reaching the service.
But surely this will be helpful as the load on a service increases and I need an extra layer to do load balancing across service instances. My main question is: do people use this kind of thing in real-world applications, and is it fast enough to be worth the trade-off over a monolithic design?</p></pre>dwevlo: <pre><p>Yes, many organizations use a service-oriented architecture. I've seen both the queueing approach with Kafka (where each service has a queue between it and the next service downstream) and the direct-call approach (service A makes an RPC call to service B).</p>
<p>You can scale a service the same way you scale a web application. Create more instances and use a load balancer, or round-robin a pool of servers:</p>
<pre><code># config.yaml
auth_servers:
- 1.2.3.4
- 5.6.7.8
</code></pre>
<p>As for DNS, you can use caching to alleviate some of the load (that's probably already going to happen anyway), but ideally you should maintain a pool of connections in your app and re-use them, so the actual number of DNS queries ends up being pretty small. (Service A connects to service B, makes an RPC request, then puts the connection in an idle pool. When it needs to make another request, it just re-uses the previous connection.)</p>
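<p>A rough sketch of that pattern in Go, assuming the <code>auth_servers</code> list from the config above (the names here are illustrative): dial each server once, keep the <code>*rpc.Client</code> values around, and hand them out round-robin.</p>
<pre><code>// Assumes "net/rpc" and "sync" are imported.

// Pool keeps one long-lived rpc.Client per auth server and picks them round-robin.
type Pool struct {
	mu      sync.Mutex
	clients []*rpc.Client
	next    int
}

func NewPool(addrs []string) (*Pool, error) {
	p := &Pool{}
	for _, addr := range addrs {
		c, err := rpc.Dial("tcp", addr) // one DNS lookup + TCP dial per server, done once
		if err != nil {
			return nil, err
		}
		p.clients = append(p.clients, c)
	}
	return p, nil
}

func (p *Pool) Call(method string, args, reply interface{}) error {
	p.mu.Lock()
	c := p.clients[p.next%len(p.clients)]
	p.next++
	p.mu.Unlock()
	return c.Call(method, args, reply) // *rpc.Client is safe for concurrent use
}
</code></pre>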
<p>There are two types of scaling at play here: scaling in terms of requests/sec, and scaling in terms of your team. A service-oriented architecture can help with both problems, but in my opinion it's the latter that makes it essential. When a whole team ends up working on a single monolithic Rails app, you end up with lots of headaches.</p></pre>lobster_johnson: <pre><p>We use HAProxy as a load balancer; all our microservice APIs are HTTP/1.1. All apps talk to a single host that distributes API calls to backends by looking at the path: <code>/api/foo/v1/...</code> always goes to the <code>foo</code> service.</p>
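<p>(For illustration, that kind of path-based routing looks roughly like this in HAProxy terms; the service name, addresses, and ports are placeholders:)</p>
<pre><code>frontend api
    bind *:80
    acl is_foo path_beg /api/foo/
    use_backend foo_servers if is_foo

backend foo_servers
    balance roundrobin
    server foo1 10.0.1.1:8080 check
    server foo2 10.0.1.2:8080 check
</code></pre>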
<p>HTTP is slower than direct RPC, but we've survived several years on this architecture, and it's not too bad.</p>
<p>We're planning to move to an RPC protocol (we're evaluating gRPC); however, you still need a way to discover services, and a load balancer is one way to do this. It seems wrong to build this into the client. We're considering a custom-built proxy that can run on each machine.</p>
<p>Another alternative to load-balancing is to use DNS. Consul is one option here, since it can provide DNS-based discovery. But you still want to route requests to healthy nodes that have the least load, and DNS doesn't assist with that.</p>
<p>Don't listen to people who recommend against microservices. We've been doing microservices for the last 3+ years, and it's been absolutely wonderful. That said, there are many challenges. It's a learning process.</p></pre>pib: <pre><p>Actually, Consul does handle routing traffic away from unhealthy instances when you use its DNS for discovery. That's kind of one of its main features.</p></pre>lobster_johnson: <pre><p>Sorry, I meant that DNS doesn't support "least load" routing. It randomizes the list of nodes so you get a <em>kind</em> of load balancing, but not quite.</p></pre>pib: <pre><p>I suppose if you set your "healthy" thresholds right it would work well enough.</p>
<p>I wonder if anyone is working on load-based DNS on Consul...</p></pre>PsyWolf: <pre><p>Nothing wrong with having a load balancer for RPC, but I can imagine you could get just as much flexibility without the overhead by having the clients "subscribe" to routing info from the "load balance publisher". When any routing needs to change, the publisher will push the new routes to any subscribers that were affected. Then the actual requests need not hop through the load balancer directly, but you have a single microservice managing the routing.</p></pre>lobster_johnson: <pre><p>We have apps written in Node.js, Ruby and Go. We'd need a library for each of those, both in the client and the server. Frontends that talk to microservices from the browser would need to be proxied by something which has this client (as opposed to today, where they just talk to HAProxy). The logic isn't the problem, it's just something you'd have to reimplement for each language, and maintain. Every time you update this library, three times, you also have to update every single client and server app. We have 20+ microservices and frontends, and the number is growing fast.</p></pre>PsyWolf: <pre><p>So I may have gone on a small research spree and I figured I'd share. It turns out that consul has a rich <a href="https://www.consul.io/docs/agent/http.html" rel="nofollow">HTTP api</a> that has health info and can support the pub/sub model I mentioned, <strong>plus</strong> it <a href="https://hashicorp.com/blog/haproxy-with-consul.html" rel="nofollow">plays nicely with HAProxy</a> so you can leave much of your infrastructure as is and only bypass your load balancer where you really need that extra juice.</p>
<p>Did I mention the http API has existing client libraries for all your languages:</p>
<ul>
<li><a href="https://github.com/hashicorp/consul/tree/master/api" rel="nofollow">go</a></li>
<li><a href="https://github.com/xaviershay/consul-client" rel="nofollow">ruby</a></li>
<li><a href="https://github.com/silas/node-consul" rel="nofollow">node.js</a></li>
</ul>
<p>So the amount of code you'd need to personally maintain would probably be limited to the interface between your infrastructure and these existing APIs. Admittedly, you would still need to maintain the more complex architecture in general.</p>
<p><em>Disclaimer: I've never used any of this software personally, and you're probably better off sticking with a simple load balancer as long as it works for you. It looks like you have a relatively smooth upgrade path if you ever want to make the switch.</em></p></pre>hobbified: <pre><p>Services and RPC usually don't go together; it's not the best way to make use of HTTP. Do you actually mean RPC when you say RPC?</p></pre>myth007: <pre><p>Let's say we have multiple services running, each service being a different process. When I say RPC (remote procedure call), I mean how they contact each other and pass data to do the required processing. My final aim is to have a thin web server and multiple services, each doing a special task like auth. I receive a request on the main server, which propagates it to a service; the service does the main work and sends the result back to the main server. This will be useful because I can do that in goroutines, call multiple services at the same time, wait for each service to finish using channels, and send the reply back from the main server.</p></pre>hobbified: <pre><p>Okay, in that case the overhead of the HTTP request is usually insignificant compared to whatever it is that your app actually <em>does</em>, so it's not the main thing to worry about. And the advantages in terms of clean APIs, data locality, scalability, cacheability, etc. usually pay off, especially if you're going to have multiple apps that can make use of the same services. Of course, it's your responsibility as a designer to plan things out in a way that you actually get those benefits.</p></pre>supreme_monkey: <pre><p>I am building a microservice system, and for communication I use RPC over RabbitMQ. A simple example is shown here: <a href="https://www.rabbitmq.com/tutorials/tutorial-six-ruby.html" rel="nofollow">https://www.rabbitmq.com/tutorials/tutorial-six-ruby.html</a>.</p>
<p>What are your thoughts on that? That way I can scale and buffer calls easily. Using, for example, Go's standard RPC lib, I might potentially hit back-pressure issues, etc.?</p></pre>myth007: <pre><p>There are two kinds of tasks:</p>
<ol>
<li><p>Tasks that are independent and don't need to send any callback, like sending emails to a user. Such tasks can be handled with message queues: you add a message to a queue, and another process listening on the other side consumes it. In my case I am planning to do that with Redis lists (I know Redis), but you can also do it with RabbitMQ.</p></li>
<li><p>If a task needs to wait for a reply, then RPC is the right method. The main focus should be making sure the services are independent of each other. That way you can call multiple services in parallel using goroutines and get the replies fast (see the sketch after this list).</p></li>
</ol>
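<p>A rough sketch of that fan-out, assuming two independent services behind the hypothetical <code>authClient</code> and <code>profileClient</code> RPC clients:</p>
<pre><code>// Hypothetical requests for two independent services.
authReq := AuthenticateRequest{"alice", "secret"}
profileReq := ProfileRequest{UserID: 42} // ProfileRequest/ProfileResult are made-up types

authCh := make(chan error, 1)
profileCh := make(chan error, 1)

var authRes AuthenticateResult
var profileRes ProfileResult

// Fire both calls concurrently; each goroutine reports its error on a channel.
go func() { authCh <- authClient.Call("Auth.Authenticate", authReq, &authRes) }()
go func() { profileCh <- profileClient.Call("Profile.Get", profileReq, &profileRes) }()

// Wait for both replies before assembling the web response.
if err := <-authCh; err != nil {
	// handle auth failure (e.g. reject the request)
}
if err := <-profileCh; err != nil {
	// handle profile failure
}
</code></pre>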
<p>My main concern in this question was that RPC has its own cost, but if we can do tasks in parallel, that cost can be offset. Also, since the services are independent, different developers can work on different services. As devs mentioned in the comments above, they use this in production, so I think it can be used. It helps functionality scale, but the RPC layer must be implemented properly. Go is a good fit for this, as it has built-in support for concurrency. Please add to my comments if you have any feedback.</p></pre>supreme_monkey: <pre><p>Yes, there are two scenarios: one where you don't need a reply and one where you do. But you can use message queues for both. In the case where you need a response (callback), you need to build something like this: <a href="https://www.rabbitmq.com/tutorials/tutorial-six-ruby.html" rel="nofollow">https://www.rabbitmq.com/tutorials/tutorial-six-ruby.html</a>.</p>
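<p>Roughly, that reply-queue pattern translated to Go with the streadway/amqp client would look like the sketch below; the broker URL, queue name, and payload are placeholders, not something prescribed by this thread:</p>
<pre><code>package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") // placeholder broker URL
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Exclusive, auto-named queue where the server will publish its reply.
	replyQ, err := ch.QueueDeclare("", false, true, true, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	replies, err := ch.Consume(replyQ.Name, "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	corrID := "req-1" // lets us match the reply to this request
	err = ch.Publish("", "auth_rpc_queue", false, false, amqp.Publishing{
		ContentType:   "application/json",
		CorrelationId: corrID,
		ReplyTo:       replyQ.Name,
		Body:          []byte(`{"username":"alice"}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Block until the matching reply arrives.
	for d := range replies {
		if d.CorrelationId == corrID {
			log.Printf("reply: %s", d.Body)
			break
		}
	}
}
</code></pre>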
<p>What would you say is the benefit of doing pure RPC calls with the standard lib compared to something like a RabbitMQ queue with a reply-back queue (see the example)?</p></pre>myth007: <pre><p>Yes, we are on the same page. We can use either approach; both are fine. Everywhere I read, people use RPC instead of a message queue for tasks that need a response; maybe it is faster, I can't say, as I have never worked with RabbitMQ's queue system (we can ask others in the subreddit). But with queues you definitely need extra processing to identify which response belongs to which call, and every bit of extra processing has its own cost that we want to minimize. I am not sure RPC is the best way either, but people recommend it, and it is a trade-off to make the system scalable.</p></pre>myth007: <pre><p>Found this while looking around: <a href="https://sudo.hailoapp.com/services/2015/03/09/journey-into-a-microservice-world-part-1/" rel="nofollow">https://sudo.hailoapp.com/services/2015/03/09/journey-into-a-microservice-world-part-1/</a> .. I think you can use RabbitMQ for service invocation. I have never used RabbitMQ, so I will go with RPC.</p></pre>
How expensive is using RPC in a microservice framework? Do devs use any other method for their services to interact with each other?