Intro
Here's a story: you got a ticket because a specific API route is slow, and it's your job to fix it. Sounds familiar, right? Your first thought is to make the underlying operation faster, but in this case that's not possible. The second option is to throw more resources at the service, but that doesn't sound like a real solution. The third option is to cache with Redis or memcached so you don't pay the penalty on every request. But what if I told you there's a "free" solution that is as simple as adding a few headers to the response?
The solution itself is tied to where the server is hosted, or at least where the domain lives, and each platform has its own way of doing it. For our example we will use Cloudflare, for the sake of simplicity.
Setup
For our example we need:
- A Cloudflare account
- A Hono project
- Three routes:
  - / to render an HTML page
  - /no-cache to return data without caching
  - /cached to return cached data
After cloning the project, run npm run deploy to deploy it to Cloudflare Workers.
Opening the page, you will see two dates. Wait a few seconds and reload: the date from /cached stays the same until its cache expires, while the one from /no-cache updates on every reload.
Without looking at the code, you might think this is a complex, very specific setup. Not really. Here is all the code required to cache the API response:
return new Response(new Date().toISOString(), {
  headers: {
    'Cache-Control': 'public, max-age=10' // keep the cache for 10 seconds
  }
})
Just like that, we cached using HTTP, and without Redis.
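Under the hood, a cache that sees Cache-Control: public, max-age=10 stores the response and serves it again as long as its age stays under that limit. A minimal sketch of that freshness check (the isFresh helper is illustrative, not a real cache API):

```javascript
// Illustrative freshness check: a cache compares a stored response's
// age (seconds since it was stored) against the max-age directive.
function isFresh(cacheControl, ageSeconds) {
  const match = /max-age=(\d+)/.exec(cacheControl);
  if (!match) return false; // no max-age: don't serve from cache here
  return ageSeconds < Number(match[1]);
}

console.log(isFresh('public, max-age=10', 5));  // → true (still fresh)
console.log(isFresh('public, max-age=10', 15)); // → false (expired, refetch)
```

Once the age exceeds max-age, the cache goes back to the origin, which is why the /cached date in the demo only changes every ten seconds.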
Explanation
Cloudflare honors cache headers, so it can cache fetch requests made to domains it serves; in my case: https://[project].stneto.dev. These headers aren't tied to a specific provider, and each one can have its own variant: Netlify, Vercel, AWS API Gateway, and the list goes on.
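As an illustration, several CDNs also accept a CDN-scoped variant of Cache-Control that only the edge honors, letting you give browsers a short TTL and the edge a longer one. The header names below are a hedged sketch; double-check them against each provider's docs before relying on them:

```javascript
// CDN-specific Cache-Control variants (verify against each provider's docs).
const cdnHeaders = {
  cloudflare: 'CDN-Cache-Control',
  netlify: 'Netlify-CDN-Cache-Control',
  vercel: 'Vercel-CDN-Cache-Control',
};

// The browser gets a 60-second TTL, while the CDN may keep it for an hour.
const headers = {
  'Cache-Control': 'public, max-age=60',
  [cdnHeaders.cloudflare]: 'max-age=3600',
};

console.log(headers['CDN-Cache-Control']); // → 'max-age=3600'
```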
The code example above used JavaScript, but the caching itself isn't tied to a language or a framework, so it's possible to do the same elsewhere:
Go (native http handler)
package main

import "net/http"

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Cache-Control", "max-age=3600") // cache for 1 hour (in seconds)
	w.Header().Set("Expires", "Tue, 01 Jan 2024 00:00:00 GMT") // absolute expiration time (max-age takes precedence when both are set)
	w.Header().Set("ETag", `"123456"`) // ETag for conditional requests
	w.Write([]byte("Hello 🤠"))
}
Rust (Axum)
use axum::{body::Body, http::Request, response::Response};

async fn handler(_req: Request<Body>) -> Response<Body> {
    // Set Cache-Control header to enable caching for 1 hour
    Response::builder()
        .header("Cache-Control", "max-age=3600")
        .body(Body::from("Hello 🤠"))
        .unwrap()
}
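The Go example sets an ETag but never checks it; the point of an ETag is to let clients revalidate cheaply instead of re-downloading the body. A minimal sketch of the server side of that exchange, with plain objects standing in for real request/response types:

```javascript
// Sketch of ETag revalidation: if the client's If-None-Match matches the
// current ETag, reply 304 with no body so the client reuses its cached copy.
function conditionalResponse(currentEtag, ifNoneMatch, body) {
  if (ifNoneMatch === currentEtag) {
    return { status: 304, headers: { ETag: currentEtag }, body: null };
  }
  return { status: 200, headers: { ETag: currentEtag }, body };
}

console.log(conditionalResponse('"123456"', '"123456"', 'Hello').status); // → 304
console.log(conditionalResponse('"123456"', undefined, 'Hello').status);  // → 200
```

A 304 still costs a round trip, but skips generating and transferring the body, which is why it pairs well with a short max-age.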
Closing thoughts
Knowing how to use your platform well can do wonders for you: the solution is there "for free," and you just need to know how to use it. There is a difference, though, between using what the platform offers and depending on a platform-specific feature. Moving response headers between platforms is simple: replace them and you are good to go. But relying on very specific behavior tied to one platform can be dangerous if you ever need to move away. It's a matter of diminishing returns: staying "platform-free" gives you more flexibility in where the service runs, but you won't be using 100% of what the platform offers.