Sujal Luhar
How I built a Blog app using Edge Functions

The app I am talking about:

Tech Stack:

  • Hono for easy routing and APIs
  • Cloudflare Workers for Edge functions and deployment
  • Prisma ORM to work with PostgreSQL
  • Prisma Accelerate for Connection Pool
  • NeonDB for Postgres Database on Cloud
  • React + Vite for Frontend Development
  • Tanstack Query to make API requests
  • Tanstack Router for file-based routing
  • AWS S3 for deployment of the frontend build
  • AWS CloudFront CDN for distribution of the frontend
  • npm to publish a custom package of shared Zod schemas/types used in both FE and BE
  • Zod for validations and types

The finished project will have 3 directories:
blogs/
├── backend/
├── common/
└── frontend/

In the backend/ dir, we will write our Hono application.

Hono

Why did we choose Hono for routing and API logic?

Because Cloudflare Workers can't run Express, which relies heavily on Node.js-specific APIs. Cloudflare developed its own runtime instead of Node to make the edge runtime faster.

Cloudflare's official docs suggest 2 frameworks for developing Worker APIs.

As we want to use TypeScript, we go with Hono.
Hono is a library built with edge environments in mind. It is fast and Node-independent, and it can run on any JavaScript runtime.

Cloudflare Workers

Why deploy on Cloudflare Workers?

Cloudflare is a leading CDN provider.

Edge functions run on the CDN node nearest to you, so you get low latency.
Unlike AWS Lambda and Vercel serverless functions, which have cold starts before processing your request, Workers start in ~0 ms.

They are also cheaper at scale because you pay per request instead of running a server 24/7, like an EC2 instance. (That's why startups love it!)

Cloudflare protects your origin from DDoS attacks.

And, most importantly, you can deploy a full end-to-end backend on it. The only requirement is that every request your API processes must have low compute time, under about 10 ms of CPU. That means you can't run an ML model, of course.

Time spent waiting on network requests, like fetch() calls or database queries, does not count toward CPU time, though.

For a CRUD app, this is enough!

In addition, Cloudflare Workers provides a cool dashboard where you can monitor your backend,

and also see where your backend is currently being served from.

As you can see, my initial deployment was in France! But if you all hit it enough, an orange dot in Mumbai, India will pop up!

NeonDB

It is always a good idea to use a managed database in the cloud, as you don't have to worry about infrastructure decisions like replicating the database so that, if one copy goes down, your data is safe on the second one.

In production, companies often use AWS RDS instead.

Prisma ORM and Accelerate

Prisma ORM is an industry standard for working with a Postgres database.

On top of that, Prisma provides support for deploying your app on Cloudflare Workers.

Here they explained how to configure Prisma Client,

Deploy to Cloudflare Workers & Pages | Prisma Documentation

Learn the things you need to know in order to deploy an app that uses Prisma Client for talking to a database to a Cloudflare Worker or to Cloudflare Pages.

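Roughly, that configuration boils down to instantiating the client per request with the Accelerate extension. A sketch, assuming the Accelerate connection string lives in a Worker variable named DATABASE_URL:

```typescript
import { PrismaClient } from "@prisma/client/edge";
import { withAccelerate } from "@prisma/extension-accelerate";

// Build a client per request from the Accelerate connection string
// (stored as a Worker secret; the name DATABASE_URL is an assumption)
export function getPrisma(databaseUrl: string) {
  return new PrismaClient({ datasourceUrl: databaseUrl }).$extends(
    withAccelerate()
  );
}
```

Inside a Hono handler you would call getPrisma(c.env.DATABASE_URL) rather than creating one global client, since each Worker request gets its own environment.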

Now, there is one problem!
What if your backend is running in, say, 50 regions simultaneously, each with 500 users querying our database?

Your Postgres DB can't maintain 25,000 concurrent connections! It has a limit of around 100-500 connections.

To solve that, we use a connection pool, and Prisma provides one through its Prisma Accelerate offering.

What does a connection pool do?
It acts like a waiter who quickly takes those 25,000 orders and delivers them to the chef (our database) in small batches. We can set the size of those batches, say 50 or 100. So even though 25,000 queries are pending, Postgres resolves them 50 at a time and does not get overwhelmed.
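The same idea can be sketched in plain TypeScript: a toy pool that caps how many jobs are in flight at once (the numbers and jobs are made up for illustration; Accelerate does the real version of this for you):

```typescript
// A toy connection pool: at most `limit` jobs run concurrently;
// the rest wait in a queue, like orders waiting for the chef.
async function runPooled<T>(
  jobs: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(jobs.length);
  let next = 0;

  // Each worker repeatedly takes the next pending job off the shared queue
  async function worker() {
    while (next < jobs.length) {
      const i = next++;
      results[i] = await jobs[i]();
    }
  }

  // Spawn `limit` workers that drain the queue together
  await Promise.all(
    Array.from({ length: Math.min(limit, jobs.length) }, worker)
  );
  return results;
}
```

With 25,000 jobs and limit = 50, the "database" never sees more than 50 queries in flight at once.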

npm

Make an account on npmjs.com and enable 2-factor authentication.

We want to make a custom package and publish it to npm so that we can use it in both the frontend and the backend.

We are going to publish the /common directory that contains our Zod validation code.

Our backend needs the Zod schema objects to safeParse the data coming from the frontend.

Our frontend needs the Zod types derived via z.infer.

So here's how /common/src/ would look like,

common/
└── src/
    ├── validators/
    │   ├── blog.ts
    │   └── user.ts
    └── index.ts

index.ts re-exports everything from blog.ts and user.ts.

Then we change the "name" in package.json to @<yournpmusername>/blog-app.

Then we run $ npm login to authenticate ourselves,
and $ npm publish --access public to publish our custom package to npm.

Now we can install it like any other library in both the frontend and the backend with a simple $ npm install @<yournpmusername>/blog-app@latest.

React + Vite

As every React dev knows, Vite has become the new default for developing frontend applications.

Vite has very fast build times for React apps, and the official React docs suggest Vite as a build tool for new React apps.

Tanstack Query

This is the modern standard for fetching data from backend APIs.

TanStack Query provides a great developer experience and does many important things automatically.

We are using it because we are too lazy to bother with state management libraries for data fetching, or with writing useState hooks for isLoading and isError variables.

It also caches fetched data automatically and identifies it by the queryKey property. If another component hits the same backend API route with the exact same queryKey, useQuery first checks whether the query has already been made by some other component; if it has, no request is made to the API.

Tanstack Router

TanStack Router gives you the ability to implement neat file-based routing in your frontend application.
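For example, a file named src/routes/blog.$blogId.tsx defines the /blog/$blogId URL on its own. A sketch (the route name is illustrative):

```typescript
// src/routes/blog.$blogId.tsx — the file name defines the URL pattern
import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute("/blog/$blogId")({
  component: BlogPage,
});

function BlogPage() {
  // Path params are fully typed from the route definition
  const { blogId } = Route.useParams();
  return <div>Reading blog {blogId}</div>;
}
```

No central route table to maintain: the router's codegen picks the file up and wires the route (and its param types) automatically.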

AWS S3 and CloudFront CDN

After finishing the frontend using the technologies mentioned above, we just run:

$ npm run build

Now we need an AWS account (set up with a credit card, or a UPI number with autopay enabled) to create an S3 bucket and a CloudFront distribution.

Creating a CloudFront distribution and connecting it to an S3 bucket is much quicker and easier than it used to be.

When you successfully create a CloudFront distribution (deployment of a newly created distribution takes some time), AWS gives you a distribution URL where your site will be hosted after deployment.

To deploy,

check that your "scripts" field looks like this:

"scripts": {
    "dev": "vite",
    "build": "tsc -b && vite build",
    "lint": "eslint .",
    "preview": "vite preview",
    "deploy": "npm run build && aws s3 sync dist/ s3://blog-app-fe-edge --delete && aws cloudfront create-invalidation --distribution-id <Your_Cloudfront_Distribution_ID> --paths \"/*\""
  },

Note that I added the "deploy" script myself.
So whenever I run $ npm run deploy, my frontend automatically builds into the ./dist directory, aws s3 sync uploads those final HTML/CSS/JS files and removes old files from the bucket that are no longer in ./dist, and the last command invalidates the currently cached version of the site across all CloudFront edge locations.

After any deployment where you need changes to go live immediately, you run that aws cloudfront create-invalidation ... command. Without it, your users still get the old version of the site cached at their nearest CloudFront edge location.

Deploying frontends using AWS S3 and CloudFront is an industry standard.

Now you can go to that CloudFront URL and visit your fresh app.

This is my URL,

Yeah, it looks ugly, but all I need to do now is buy a domain name for 1,500-2,000 rupees and connect it to this CloudFront distribution URL, which is a very simple process.

After buying a decent domain, my end-to-end product is deployed professionally without me worrying about sophisticated infrastructure decisions.

Now I am sharing my GitHub repo with the source code:

GitHub: sujal-luhar / blog-app

A serverless blog app made using Cloudflare Workers and the Hono library.

Architecture

User
  ↓ HTTPS
CloudFront (global CDN, edge caching)
  ↓ private OAC
S3 (stores dist/ files, not public)
  
User → API calls → Cloudflare Workers (Hono backend)
                        ↓
                  Prisma Accelerate
                        ↓
                    NeonDB (PostgreSQL)



After looking at the code, you might say there are many good code practices I could follow, like:

  • Global error handling middleware
  • Whitelisting specific parameters
  • Unhandled promise rejections
  • Graceful shutdowns
  • Logging
  • HTTP payload limits
  • Rate limiting, etc.

But most of these are not applicable because of the stack we used.

Still, two things would be nice to have in my code: logging, and a single global middleware that returns error messages in a consistent format.

You can sign up on the shared CloudFront URL without worrying about security, as I always hash your passwords.

I believe that in the current age, every programmer must be familiar with how to use edge networks to quickly enter the market with a working product, because speed matters.

That's why I shared this system with you, so that anyone who is stuck can become unstuck.
