RPC vs. Messaging – which is faster?

Sometimes developers only care about speed. Ignoring all the other advantages messaging has, they’ll ask us the following question:


This is a companion discussion topic for the original entry at https://particular.net/blog/rpc-vs-messaging-which-is-faster

Hi, I have two questions:

  1. If RPC is done via “await”, without blocking the calling thread, is the performance picture the same?
  2. If messaging uses request/response calls, is the picture the same as with regular RPC?

Thank you

Unfortunately, yes. The await still means the call is blocking; the difference is that the thread is free to do other work in the meantime. That is, if there actually is other work. For example, storing data in the database and sending data over the wire are both RPC calls, because usually a database call is also an RPC call. Instead of waiting for a single call and then waiting for the second call, you wait for both calls at the same time. But both are still blocking calls.

.NET async/await gives you asynchronous execution, whereas messaging gives you asynchronous communication. Only with asynchronous communication is there no blocking call.
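To make that concrete, here's a minimal C# sketch (the controller, the payment URL, and SaveOrderAsync are made-up placeholders, not anything from the article): both remote calls are awaited at the same time, so the thread is freed up while waiting, but the operation still can't complete until both RPC calls return.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class Order { }

public class OrderController
{
    static readonly HttpClient httpClient = new HttpClient();

    public async Task PlaceOrder(Order order)
    {
        // Start both remote calls without waiting for either yet.
        Task dbTask = SaveOrderAsync(order);                                   // database call = RPC
        Task<HttpResponseMessage> apiTask =
            httpClient.PostAsync("https://payments.example/charge", null);     // external service = RPC

        // Asynchronous *execution*: the thread can serve other requests while we wait,
        // but this operation is still blocked until both calls have returned.
        await Task.WhenAll(dbTask, apiTask);
    }

    // Stand-in for a real database call.
    Task SaveOrderAsync(Order order) => Task.CompletedTask;
}
```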


With messaging request/response, there are two messages involved. Sending the request results in a message in the queue, but the sender continues doing whatever it should do. Responding to the message on the other side again means a message in the queue, but the responding component can also continue its work.
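In code, it looks roughly like the following. This is only an NServiceBus-style sketch; the message types and handlers are invented for illustration, and routing/endpoint configuration is left out.

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class CheckStockRequest : IMessage { public string ProductId { get; set; } }
public class CheckStockResponse : IMessage { public int Available { get; set; } }

// Requesting side: Send only puts a message on the queue and returns,
// so the sender immediately continues doing whatever it should do.
public class StockRequester
{
    public Task RequestStock(IMessageSession session) =>
        session.Send(new CheckStockRequest { ProductId = "abc" });
}

// Responding side: the reply is just another message on a queue,
// and this component is immediately free to process the next message.
public class CheckStockRequestHandler : IHandleMessages<CheckStockRequest>
{
    public Task Handle(CheckStockRequest message, IMessageHandlerContext context) =>
        context.Reply(new CheckStockResponse { Available = 42 });
}

// Back on the requesting side, the response arrives later as a normal message.
public class CheckStockResponseHandler : IHandleMessages<CheckStockResponse>
{
    public Task Handle(CheckStockResponse message, IMessageHandlerContext context) =>
        Task.CompletedTask; // e.g. update a read model
}
```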

This can be used in two scenarios:

  1. Your website (or API) accepts a new order and, instead of trying to submit it to the database (and potentially running into blocking calls), you submit it as a message on the queue. Your website (or API) can immediately resume accepting other requests. This is extremely useful on a website where the database or something else is the bottleneck. You can easily scale out web servers; databases and so on, not so easily, or only at huge cost. If you’ve ever tried to order tickets or a product from a website and it was so slow you couldn’t order and lost the opportunity to purchase, it’s possible they didn’t implement their website using messaging. I’ve experienced this myself when ordering tickets for the Game of Thrones finale episode in a movie theater and created an entire presentation around it. There’s a sketch of this scenario after the list.
  2. You want to show some data to an end user and use messaging and request/response to retrieve the data. Don’t do this! Go into the database as fast as possible and return the data. Use messaging for background processing and storing data, not for retrieving it. :slight_smile:
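For the first scenario, here is a rough sketch of what the accepting side might look like (the controller, route, and SubmitOrder command are hypothetical, and an NServiceBus-style IMessageSession is assumed):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using NServiceBus;

public class SubmitOrder : ICommand { public string OrderId { get; set; } }

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    readonly IMessageSession messageSession;

    public OrdersController(IMessageSession messageSession) =>
        this.messageSession = messageSession;

    [HttpPost]
    public async Task<IActionResult> Post(string orderId)
    {
        // Enqueue the work; the broker only has to store the message safely.
        await messageSession.Send(new SubmitOrder { OrderId = orderId });

        // Return immediately so the web server keeps accepting requests,
        // even if the database behind the message handler is the bottleneck.
        return Accepted();
    }
}
```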

Great article, very in depth. One area I’m still unclear on after reading is: if we are favouring messaging over RPC, is the connection between the application and the service bus/queue not also an RPC? Similar to the way a call to the database requires a connection and a thread, I would have thought that connecting to the service bus to add a message to a queue would require the same.

So I understand the argument of which is faster and why RPC won’t scale linearly, but I am unsure as to how messaging helps prevent the scenario you describe: “the process can’t spin up more threads to handle additional incoming requests because it ran out of memory”.

Additionally, the application that handles those messages will still have to make RPC calls to the external service or database, and could still run into the same issue of not having enough threads available. Is the point that when an increase in messages is seen, the application handling those messages will scale out and thus allocate more resources, so the above issue does not occur?

@igobl As with so many things, it depends. :wink:

Remember that in the article we’re talking about RPC meaning “remote method calls over HTTP”. If you want to expand that definition, then almost all communication on the Internet that uses TCP instead of UDP is a kind of RPC. (Turtles all the way down and all that.)

Many (most?) message queues do not even use HTTP in the first place, using something tailor-made for the task of messaging, like AMQP. A protocol like AMQP has semantics for at-most-once or at-least-once delivery, so it is not beholden to HTTP’s problem where a timeout doesn’t tell you whether the client couldn’t connect to the server at all, or whether it did connect and the server is still processing but couldn’t respond in time.

But beyond that, the important part is that web application servers are general-purpose machines that must respond to and process a variety of requests, whereas message queues are ultra-optimized for one specific task: take a message and ensure that it’s safely stored. That is orders of magnitude faster/cheaper than all of the processing most HTTP servers do, let alone all the waiting on databases and whatnot.

So could you overload your message broker and not be able to send a message? Yes. But you would have to try very hard. Message brokers are designed to scale and provide high reliability to the point where, in practice, that doesn’t happen.

On the receiving side it’s much easier. Your message receiver has a set concurrency level that it will not exceed. Whether you choose to scale out or not, you are in control of how much read load you place on the queue, not a traffic spike from end users. And again, message queues are ridiculously optimized for this as well.
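As an example, limiting processing concurrency is a one-liner in an NServiceBus-style endpoint configuration sketch (the endpoint name and the limit of 8 are arbitrary here):

```csharp
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");

// Never process more than 8 messages at a time on this instance.
// A traffic spike only makes the queue longer; it can't exhaust this process.
endpointConfiguration.LimitMessageProcessingConcurrencyTo(8);

await Endpoint.Start(endpointConfiguration);
```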


Thanks for the explanation :slight_smile:

> So could you overload your message broker and not be able to send a message?

I was more coming from the angle of the specific scenario given in the article, “the process can’t spin up more threads to handle additional incoming requests because it ran out of memory”. In other words, it is not the message broker that is overloaded, but the web application server itself: it has too many connections to whatever it might be (3rd parties, DB, Redis, Service Bus, etc.) and runs out of threads to handle new work.

I think what I missed is that I was expanding it to more than HTTP calls, which, like you say, is turtles all the way down. Just wanted to clarify :smiley: