Problem
You have a complex Ruby on Rails web application. This application consists of two parts:
- A front-end that renders HTML pages
- A back-end that serves data to those pages via AJAX requests.
There is one specific page where, when a user opens it, the front-end makes a request to the Ruby on Rails server, and the Ruby code in turn has to call an external API to get the requested data. A pretty common scenario in modern web applications. The issue is that this API call is very slow: it can take up to a minute to return a response. Since it is an external API, we have no control over how well it performs and we can't make any changes there. So how do we solve this? Well, there are a few things we can do:
- The straightforward approach. Every web server has a timeout configuration; if your web server's timeout is set to 30 seconds but the request takes longer, the user will receive a 504 error — https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504. You can increase the timeout, but there is no guarantee that the next request won't take even longer than the new limit. So this solution is not stable.
- The warm cache approach. If we know all the possible API requests we will need during the day, we can schedule a worker that performs all of them and puts their responses into the Rails cache with a 24-hour (or any other time a developer considers reasonable) expiration. Then every time a user opens the page, the data is served from the cache instantly. This is a great solution, but it only works if we know every request any user can make and that number is reasonable — it should not be, say, 100K requests, or you would need a huge cache that is also very expensive. So this approach only works in specific scenarios.
- Async API requests inside workers. The third and last option I see requires both front-end and back-end work. The idea is simple: the front-end triggers a request, the controller action schedules a Sidekiq worker (or an ActiveJob job), the worker collects the response and puts it in the cache, then sends a message to an ActionCable channel; the front-end receives the message that the data is ready and triggers another request to read the data from the cache. Now let's move to the next section for the detailed solution.
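As an aside, the warm cache approach can be sketched framework-free. This is only an illustration under assumptions: `KNOWN_REQUESTS`, `fetch_from_api` and `warm_cache` are hypothetical names, a plain Hash stands in for `Rails.cache`, and `URI.encode_www_form` replaces ActiveSupport's `to_query`. In a real app this would run inside a scheduled worker.

```ruby
require 'uri'

# Hypothetical list of every API request users can trigger during the day.
KNOWN_REQUESTS = [
  { endpoint: 'some_api/endpoint.json', params: { id: 1 } },
  { endpoint: 'some_api/endpoint.json', params: { id: 2 } }
].freeze

# Stub for the slow external call; the real one can take up to a minute.
def fetch_from_api(endpoint, params)
  { payload: "response for #{endpoint}" }
end

# Pre-compute every known response and store it under the same key scheme
# the page will later use to look it up.
def warm_cache(cache)
  KNOWN_REQUESTS.each do |req|
    key = "async_request_#{req[:endpoint]}?#{URI.encode_www_form(req[:params])}"
    cache[key] = fetch_from_api(req[:endpoint], req[:params])
  end
  cache
end
```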
Solution
Steps that the application needs to follow:
- Step 1. The front-end triggers a request.
I will not post any code here because any front-end framework can be used to do that. The idea is simple: the front-end sends a request saying that the data is needed. After that, the UI should switch to a dedicated state that shows the user a message like:
We are collecting the data for you, just wait...
- Step 2. The Rails server receives the request and schedules a worker. To keep this clean, let's create a simple Ruby service that does all the needed work and then call this service from the controller action:
```ruby
class AsyncRequest
  def self.call(endpoint, params)
    hash = "async_request_#{endpoint}?#{params.to_query}"
    # a plain read: the worker is the one that writes with an expiration
    cached_response = Rails.cache.read(hash)
    return { pending: false, hash: hash, data: cached_response } if cached_response

    AsyncQueryWorker.perform_async(endpoint, params, hash)
    { pending: true, hash: hash }
  end
end
```
The logic is very simple: if this request was already cached, read it from the cache and return right away. Otherwise, enqueue AsyncQueryWorker and return { pending: true, hash: hash } to the front-end, which should remember the hash value. The controller action looks as simple as this:
```ruby
class PagesController < ApplicationController
  def show
    respond_to do |format|
      format.json do
        render json: AsyncRequest.call('some_api/endpoint.json', permitted_params)
      end
    end
  end
end
```
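To make the service's contract concrete, here is a framework-free sketch of its two return shapes. FakeCache, FakeWorker and async_request are hypothetical stand-ins for Rails.cache, AsyncQueryWorker and the service above, and `URI.encode_www_form` replaces ActiveSupport's `to_query`:

```ruby
require 'uri'

# Hypothetical in-memory stand-in for Rails.cache.
class FakeCache
  def initialize
    @store = {}
  end

  def read(key)
    @store[key]
  end

  def write(key, value)
    @store[key] = value
  end
end

# Hypothetical stand-in for the Sidekiq worker; a real one would be enqueued.
class FakeWorker
  def self.perform_async(*); end
end

# Mirrors AsyncRequest.call: cached data comes back immediately, otherwise
# the "worker" is scheduled and a pending response is returned.
def async_request(cache, endpoint, params)
  hash = "async_request_#{endpoint}?#{URI.encode_www_form(params)}"
  cached = cache.read(hash)
  return { pending: false, hash: hash, data: cached } if cached

  FakeWorker.perform_async(endpoint, params, hash)
  { pending: true, hash: hash }
end
```

The first call returns { pending: true, hash: ... }; once the worker has written the response under that same hash, a repeated call with the same params returns the cached data with pending: false.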
- Step 3. The front-end needs to subscribe to a WebSocket channel in order to be notified when the worker has finished processing the request. I will add some ReactJS code here (because this is the framework I am using) so you get an idea of what needs to be done. PS: any other front-end framework will do the trick too.
```javascript
// isUndefined, each and includes come from lodash
const [websocketSub, setWebsocketSub] = useState(null);
const hash = ''; // the hash received from the back-end in the previous step
const token = ''; // a user token placed in a meta tag; see the Rails guides to learn more

useEffect(() => {
  if (!isUndefined(hash)) {
    if (token.length > 0 && !websocketSub) {
      App.initCable(token);
      const sub = App.cable.subscriptions.create({ channel: 'AsyncQueryChannel', hash: hash }, {
        received: (event) => {
          if (event.query_status === 'processed') {
            // repeat the same request to the back-end as above: since its
            // params map to the same hash, the data comes from the cache
            doRequest();
          } else if (event.query_status === 'failed') {
            // display some UI error
          }
        },
      });
      setWebsocketSub(sub);
    }
    // initial request here
    doRequest();
  }

  return () => {
    // we need to remove the subscription when the request cycle is done
    if (App.cable) {
      each(App.cable.subscriptions.subscriptions, (sub) => {
        if (includes(sub.identifier, 'AsyncQueryChannel')) {
          App.cable.subscriptions.remove(sub);
        }
      });
      setWebsocketSub(null);
    }
  };
}, [hash]);
```
- Step 4. Now, while the front-end is in its loading state, it's time to implement the worker itself.
```ruby
class AsyncQueryWorker
  include Sidekiq::Worker
  # a new, separate queue; we could add retries here, but that would increase
  # the time a user waits for a response, so let's not do that
  sidekiq_options queue: :async_queries, retry: 0

  def perform(endpoint, params, hash)
    @endpoint = endpoint # the URL where to make the API request
    @params = params     # params for that request
    @hash = hash         # unique hash string used as the cache key
    return if request_in_progress? # make sure we don't process the same request more than once
    do_request
  end

  private

  def do_request
    # mark the request as in progress before calling the API; if something
    # gets stuck, Redis will drop this key after 300 seconds
    $redis.redis.set(redis_key, @hash, ex: 300)
    # store the response under the same hash the controller reads from
    Rails.cache.write(@hash, request.body, expires_in: 2.hours)
    ActionCable.server.broadcast("async_query_#{@hash}", { query_status: 'processed' })
  rescue Faraday::Error
    ActionCable.server.broadcast("async_query_#{@hash}", { query_status: 'failed' })
    $redis.redis.del(redis_key)
  end

  def request_in_progress?
    request_in_progress = true
    $redis.redis.watch(redis_key) do
      request_in_progress = $redis.redis.get(redis_key).present?
      $redis.redis.unwatch
    end
    request_in_progress
  end

  def connection
    @_connection ||= Faraday.new(url: @endpoint) do |faraday|
      faraday.request :url_encoded
      faraday.headers['Content-Type'] = 'application/json'
      faraday.options.timeout = 45
      faraday.response(:logger)
      faraday.adapter Faraday.default_adapter
    end
  end

  def request
    @_request ||= connection.get(@endpoint, @params)
  end

  def redis_key
    @_redis_key ||= "async-request-#{@hash}"
  end
end
```
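The request_in_progress? guard is essentially a Redis lock with a TTL (in real Redis the idiomatic atomic form is `SET key value NX EX ttl`). A minimal in-memory stand-in (TtlLock is a hypothetical name) shows the acquire-once semantics the worker relies on:

```ruby
# In-memory stand-in for a Redis TTL lock: acquire returns true only for the
# first caller, until the lock is released or its TTL passes.
class TtlLock
  def initialize
    @expires_at = {}
  end

  # now is injectable so expiry can be demonstrated without sleeping
  def acquire(key, ttl_seconds, now: Time.now)
    return false if @expires_at[key] && @expires_at[key] > now
    @expires_at[key] = now + ttl_seconds
    true
  end

  def release(key)
    @expires_at.delete(key)
  end
end
```

A second worker picking up the same request while the first is still running fails to acquire the lock and returns early; after the 300-second TTL, a stuck key expires on its own.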
We also need to create a channel class:
```ruby
class AsyncQueryChannel < ApplicationCable::Channel
  def subscribed
    stream_from "async_query_#{params[:hash]}"
  end
end
```
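The only contract between the worker and the channel is the stream name, async_query_ followed by the hash. A tiny in-memory pub/sub sketch (FakeCable is a hypothetical stand-in for ActionCable) shows why the broadcast from the worker reaches the subscription created in Step 3:

```ruby
# Minimal in-memory pub/sub: subscribers register callbacks per stream name,
# and broadcast delivers a message to every callback on that stream.
class FakeCable
  def initialize
    @streams = Hash.new { |h, k| h[k] = [] }
  end

  def stream_from(name, &callback)
    @streams[name] << callback
  end

  def broadcast(name, message)
    @streams[name].each { |cb| cb.call(message) }
  end
end
```

A subscriber on "async_query_abc" receives only messages broadcast to that exact stream name, which is why both sides must build the name from the same hash.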
- Step 5. The front-end receives an event through the WebSocket when the data is ready or when an error occurs. Done)
Summary
I believe this is a great solution for scenarios where a web page depends on an external API with unreliable performance. Why do I think so? Because I did this in my app years ago and it works)
Copy of my original post: https://medium.com/@zozulyak.nick/how-to-use-actioncable-with-async-requests-in-a-ruby-on-rails-web-app-a70820c1fabe