Raggi
Asynchronous Request-Response Pattern

The Asynchronous Request-Response Pattern decouples backend processing from a frontend client when the backend work must run asynchronously but the client needs a clear, near-immediate response.

Imagine a web application that compresses video files. As you can imagine, compressing a video can take anywhere from seconds to minutes, or even hours in some cases. It is therefore not really an option for the server to wait for this long-running operation to finish before sending out a response.

Synchronous Request-Response Pattern

To address this problem, backend engineers typically introduce a queue (backed by a message broker) that isolates the backend processing (video compression in our case) from the request/response cycle.

Asynchronous Backend processing

Problem

You can see how this approach decouples the client, which needs an almost immediate and clear response, from the backend processing, which runs asynchronously. It also allows the client-facing API and the backend workers to scale independently.
However, this approach adds another complexity: how does the client know when the job is done (i.e. the video has finished compressing)? How can this be achieved asynchronously?

Solution

This problem is commonly solved with one of two methods: HTTP Polling or WebSockets.

HTTP Polling

HTTP Polling is where the client repeatedly sends requests to the server to get updated information. That information can concern a resource, a simple query, or the progress of updating or creating records in the database.
In our case, polling is used to check whether the video has finished processing.

Asynchronous Request-Response Pattern With HTTP Polling
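The client side of polling can be sketched as a simple loop. In this hypothetical helper, `fetch_status` stands in for an HTTP GET against the server's job-status endpoint and returns an `(http_status, body)` pair:

```python
import time

def poll_job(job_id: str, fetch_status, interval: float = 1.0, max_polls: int = 10):
    """Repeatedly ask the server for the job status until it is final.

    202 Accepted means the job is still processing; anything else
    (e.g. 200 OK or 400 Bad Request) is treated as a final answer.
    """
    for _ in range(max_polls):
        code, body = fetch_status(job_id)
        if code != 202:
            return code, body
        time.sleep(interval)  # wait before asking again
    raise TimeoutError(f"job {job_id} did not finish after {max_polls} polls")
```

In a real client `fetch_status` would be an HTTP call, and the interval would ideally come from the server's suggested retry interval rather than a hard-coded default.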

With this method the flow is as follows:

  • The client sends a request to the server to compress a video
  • The server adds the compression job to the queue (by emitting an event) and immediately responds to the client with a Job ID
  • The client can now start polling the server with the Job ID it received, for example, the server would provide an endpoint for checking the job status (GET /jobs/{id})
  • Now when the client polls the server, there are three possible scenarios: the job is still processing, the job completed, or the job failed. The server can respond to each of these with a different HTTP status code (202 Accepted, 200 OK, and 400 Bad Request respectively)

Note that instead of responding with just a Job ID, the server can also respond with a Location URL for polling the job status, along with a suggested retry interval (which can be based on an estimate of how long the processing might take to complete).
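A minimal sketch of the server side of this flow, assuming an in-memory job store and plain functions in place of real route handlers (`create_job`, `get_job_status`, and the field names are illustrative):

```python
import uuid

# Hypothetical in-memory job store; in production this state would live in a
# database or cache shared by all API instances.
jobs: dict = {}

def create_job() -> tuple:
    """POST /videos — enqueue the job and reply immediately with a Job ID."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = "processing"
    # 202 Accepted, plus a location to poll and a suggested retry interval
    return 202, {"jobId": job_id, "location": f"/jobs/{job_id}", "retryAfter": 30}

def get_job_status(job_id: str) -> tuple:
    """GET /jobs/{id} — map job state to the HTTP status codes above."""
    status = jobs.get(job_id)
    if status == "processing":
        return 202, {"status": "processing"}   # still running
    if status == "completed":
        return 200, {"status": "completed"}    # done
    return 400, {"status": "failed"}           # failed or unknown job
```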

WebSockets

WebSockets can also be used to solve this problem, and they are more suitable when the client must receive the job status in real time.
Instead of polling for the job status, a WebSocket connection is established between the client and the server, and the server pushes job status updates to the client as they happen.

Asynchronous Request-Response Pattern With WebSockets
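Without pulling in a real WebSocket library, the push-based flow can be simulated with asyncio, using an `asyncio.Queue` as a stand-in for the open connection (all names here are illustrative):

```python
import asyncio

async def process_video(job_id: str, ws_send) -> None:
    """Server side: push each status change down the open connection."""
    await ws_send({"jobId": job_id, "status": "processing"})
    await asyncio.sleep(0)  # stand-in for the actual compression work
    await ws_send({"jobId": job_id, "status": "completed"})

async def client() -> list:
    """Client side: receive updates until the job reaches a final state."""
    conn: asyncio.Queue = asyncio.Queue()  # stand-in for a WebSocket connection
    asyncio.ensure_future(process_video("job-42", conn.put))
    updates = []
    while True:
        msg = await conn.get()
        updates.append(msg)
        if msg["status"] in ("completed", "failed"):
            return updates
```

The contrast with polling is that the client never has to ask: each status change arrives as soon as the server emits it.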

Top comments (5)

Deependra kushwah

Looks incomplete to me

Raggi

i would love to have your feedback on what exactly you feel is missing so i can perhaps include it :)

Matt

A few possible improvements here. Client can send a request to the back-end and the request can be accepted by back end and placed in the queue and then server side crashes before sending a response to the client. From client side it is not clear if request was accepted. So this could be solved by client providing a unique ClientJobId to server and when server queues request it adds both ClientJobId and ServerJobId to the queued event. Then client can "retry" attempt using same ClientJobId and server can find matching already queued request and send back in progress message to client.

Also, server must cache ClientJobId|ServerJobId with link to result for some period of time AFTER the operation is completed as the client side operation may crash and restart and then pick up the work later in time. So for example server side could cache the ClientJobId|ServerJobId for several hours to allow reporting status to clients who restart to avoid having to rework in some cases. Otherwise the client might check on status for ClientJobId|ServerJobId and there is no result, requiring client to re-submit the work.

Fabio Sanchez

Thanks for the article. Where can I learn more about this? Is there a book you recommend?

Raggi

a very good resource is docs.microsoft.com/en-us/azure/arc...,
other than that if you're more into books I highly recommend reading Web Scalability for Startup Engineers by Artur Ejsmont