Let's say I have two microservices, A and B. For every incoming HTTP request, A must process one part and B another before delivering an HTTP response.
Let's say I have a Kafka instance between A and B in which I want to queue the HTTP requests, so B can read from there at its own pace and process them. I want B to be the one that sends the response.
In Go, can I use net/context to pass the information and keep the request open until B answers? If not, how would you do this?
Comments:
justinisrael:
Dorubah:Somewhere in this mix you have the application server that is serving the http endpoint. It may be "A", which would need to delegate part of the request to "B". Or "B" to "A". Or even a "C" that serves http and will call "A" and "B" in order to form a response. But in a nutshell, whoever is handling the http request will be able to delegate to any number of microservices in order to ask extra questions, and then collect everything into a response.
TheMerovius:The thing is that I want "A" to call "B". Can "A" delegate the response of the request to "B", without "B" having to contact "A" back?
I want the flow of the request to be "Browser -> A -> B -> Browser" instead of "Browser -> A -> B -> A -> Browser".
Thanks for your answer :).
Dorubah:At best, you'll have to do "Browser -> LB -> A -> B -> LB -> Browser". The TCP connection of the browser needs to terminate somewhere, you can't just move it between different services.
It would also be useful to know why you want to do this in the first place; in essence, HTTP is a synchronous protocol, so why do you care whether A waits around for the response?
What you are trying to do is certainly possible, but it's incredibly complicated. Your LB needs to pass an identifier of the open request, which B can then pass back at some point; it needs to keep state while doing that and you must make sure that the response is passed back to the same instance of the LB, because that is where the state is held. In the end, you'll operationally revert to just having a synchronous request, but will have put several layers of abstractions and round-trips on top of it, without any benefit.
So, likely, the solution to your problem is to get rid of the "I have a Kafka instance between A and B" part. Kafka is for asynchronous messages; you are doing synchronous things, so use RPCs and an LB.
justinisrael:Thank you. Actually I'm using RPC calls right now. I was just wondering if there was another way of replying to the HTTP request from another microservice. But you are right, I need to close the TCP connection to send the data, so there's no benefit to the approach I described.
Dorubah:I've not written a setup like this before, but I assume it would involve handing the file descriptor from A to B and letting B write the response, since I'm guessing they are two independent processes connected via the Kafka message bus. But that sounds like really strong coupling between microservices, and I don't even know how you would do it on different hosts. Shouldn't B just be asked to provide some data? Someone has to finish the HTTP request, and I don't see how you get an async external process to do that over a message bus, unless B also has an outgoing connection to the browser.
matart:Yes, this was my initial thought: send everything into the message so "B" can use it. There is no benefit in that setup. Right now B is just asked to provide some data using RPC, but there are people who consider this strong coupling between them and call the result "microliths". I guess they are what they have to be, though: a pattern that fits the problem well.
I don't believe the browser can receive a request from B out of nowhere. Does the browser have a connection open to B like a websocket? Is this all behind an initial HTTP endpoint? Do you poll B and see if there is data?
You could make the request to A. Then poll B for a response.