Now, if you are a developer, you have surely come across a little corner of the internet named "tech twitter", and as the name implies, it's a little hub for us tech enthusiasts to hang out, build projects, and ship them.
Which on the surface is all good, but the fact that I am writing this means I have a gripe with it.
My issue here? Serverless, and the architecture everyone on x/twitter has no problem propagating and shoving down everyone's throat. "If you want to build an app, use Next.js, Supabase, Tailwind, Vercel" and blah blah.
Now I am going to preface this once: I am not against Next.js or the tech being discussed here, but rather the herd mentality that comes with it and the overall glaze.
If I am in a hackathon with 48 hours to build something relatively quickly, and the Friday beforehand I had just finished rewatching The Social Network (great movie btw), then sure, I would use Next.js. Other than that, I have no reason to default to Next.js for anything I am building.
Now onto serverless. Serverless doesn't mean there is no server; it just means you are not in charge of managing the server you use. The cloud provider of your choice (AWS, Google Cloud, Azure) does that for you.
So the backend code is still there, but instead of separating the frontend and the backend like how it normally and actually is supposed to be, which is a much better architecture, Next.js is used to couple both the frontend and backend into one framework. Which isn't a good architecture at all, and I don't want to hear otherwise.
Backend and frontend should be separated. Leave all these meta frameworks that try to do both client-side and server-side rendering. And it's not just Next.js; there is also Nuxt, SvelteKit, and even Angular. I'm just pointing out Next.js because that is everyone's favourite child, especially on x/twitter. Sure, serverless has its moments (I'll give credit where due):
- You save money (short term) since you only pay when an endpoint is invoked by your end users, so you pay only for the time it was active. (Long-running or high-throughput apps get expensive quickly.)
- It's useful for things like IoT and lightweight applications, where the expense and maintenance of a dedicated backend isn't worth it.
- You want edge functionality, where the code runs closer to the user and can serve requests faster than normal. (Cold starts can be anywhere from a few hundred milliseconds up to multiple seconds, depending on the runtime.)
- You're building simple CRUD apps and MVPs to test out ideas; anything without a heavy database dependency.
So they do have their use cases for sure, but that doesn't mean you should use serverless as your default for everything. There is a popular JavaScript phrase that says: "Anything that can be written in JavaScript, will eventually be written in JavaScript" 😂, but that is no excuse for this situation here. I do no such thing.
I write my backends properly, like how they should be written. I use FastAPI and Python for my backend projects, PostgreSQL for the db, Pydantic for the schemas, and deploy either on Render or my own personal VPS. Which is a much better option than reaching for these cloud providers here and there.
I mean, if you have the funds to pay for a Claude Max subscription or any of these other AI tools, you can definitely pay for a VPS and use it.
With Pydantic and the way my backend is structured, it does have types that I define and follow, so you cannot just attack me for using Python and its "lack of static typing" compared to your TypeScript. Python supports type hints, and FastAPI enforces types at runtime using Pydantic.
TypeScript has the craziest PR of any language I know of. So yes, I do include types when building my backend, so it's much safer for production, thank you very much 🥳🥳.
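To make "types at runtime" concrete, here's a minimal sketch of Pydantic rejecting bad data before it ever touches your database. The model and field names are made up for illustration:

```python
from pydantic import BaseModel, ValidationError


class SignupRequest(BaseModel):
    username: str
    age: int


# Valid input: Pydantic validates (and coerces where sensible) at runtime.
ok = SignupRequest(username="ada", age=36)

# Invalid input: this raises instead of silently passing garbage along,
# which is exactly what static-only type annotations cannot do for you.
try:
    SignupRequest(username="ada", age="not a number")
    rejected = False
except ValidationError:
    rejected = True
```

FastAPI runs this same validation automatically on every request body that uses a Pydantic model, returning a 422 to the client when it fails.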
For the frontend I use good old React along with Vite, plus react-query to make the API calls to the backend.
Yes, I use two different languages to build my stuff, and it goes against what everyone keeps saying about how "one language for both frontend and backend is so much more efficient".
I architect systems based on principles, not trends. I build so it doesn't choke on the first 10 concurrent users of my products.
With this separation of concerns, we can deploy the two separately, and if one has an issue, it can be addressed in isolation. Because imagine having to redeploy the entire app because one component is broken, just because you thought it was a smart idea to couple them both inside your meta framework, because that's what everyone on twitter is doing. Honestly, smh.
This way I am able to architect my backend properly and build it out as I need it to be. This separation of concerns comes in quite handy when, by some stroke of a miracle, your app goes viral and you actually get users:
- You can scale it up properly by adding background tasks, queues, API gateways, proxy servers, load balancers, and more instances of the backend server.
- You can migrate your application anywhere you want, since you are not locked in to any specific vendor; you can even move your app in house to a VPS or infra you manage.
- You control the costs of any services you use. In the long run, paying per API request gets expensive compared to a constant bill on your own hardware: a million requests cost a lot more on serverless than on a dedicated server. At scale, predictable fixed-cost compute beats per-request billing.
- You can build much bigger projects that need more processing power, without worrying about your requests timing out or resources running out, like with edge functions and their limits.
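Some back-of-the-envelope numbers make the cost point concrete. All prices here are illustrative assumptions (roughly in line with typical per-invocation serverless pricing; check your provider's current rates yourself):

```python
# Assumed workload: 10M requests/month, 200ms each, 512MB of memory.
requests_per_month = 10_000_000
price_per_million_requests = 0.20      # USD per 1M invocations (assumed)
price_per_gb_second = 0.0000166667     # USD per GB-second of compute (assumed)

# Compute time billed: 10M requests * 0.2s each * 0.5GB memory.
gb_seconds = requests_per_month * 0.2 * 0.5

serverless_cost = (
    requests_per_month / 1_000_000 * price_per_million_requests
    + gb_seconds * price_per_gb_second
)

vps_cost = 6.00  # a typical small fixed-price VPS plan (assumed)

print(f"serverless ≈ ${serverless_cost:.2f}/mo vs VPS ${vps_cost:.2f}/mo")
```

Under these assumptions the serverless bill lands near $19/month against a flat $6 VPS, and the gap only widens as traffic grows, while the VPS bill stays put until you actually need a bigger box.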
Building on the previous point: I have tried deploying a FastAPI app on Vercel. It can be done, but it isn't recommended because of their architecture. They use those same serverless functions, which have a bundle size limit of around 250MB uncompressed (though RAM can go up to 2GB for paid users and 4GB for enterprise users with fluid compute), and they time out requests that take longer than 60 seconds or so.
So for a toy app where you're messing around, sure. But for something a bit more serious that you're putting in front of users, it might just leave a bad impression on them. Vercel is optimized for JavaScript frameworks, not Python backends.
Example code snippet using Next.js to handle an API request:
```typescript
// pages/api/posts/index.ts
import type { NextApiRequest, NextApiResponse } from "next";
import { db } from "../../../lib/db";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  // Handle GET
  if (req.method === "GET") {
    try {
      const posts = await db.post.findMany({ orderBy: { createdAt: "desc" } });
      return res.status(200).json({ success: true, data: posts });
    } catch (err) {
      return res.status(500).json({ success: false, error: "Server error" });
    }
  }

  // Handle POST
  if (req.method === "POST") {
    try {
      const { title, content } = req.body;

      // Basic validation
      if (!title || !content) {
        return res.status(400).json({
          success: false,
          error: "Title and content are required",
        });
      }

      const newPost = await db.post.create({
        data: { title, content },
      });

      return res.status(201).json({ success: true, data: newPost });
    } catch (err) {
      console.error("Error creating post:", err);
      return res.status(500).json({ success: false, error: "Server error" });
    }
  }

  return res.status(405).json({ error: "Method Not Allowed" });
}
```
Now implementing the backend with FastAPI and Python:
```python
# db schema
from pydantic import BaseModel, ConfigDict
from datetime import datetime


class BlogPostResponse(BaseModel):
    title: str
    slug: str
    excerpt: str
    body: str
    date_posted: datetime

    # This tells Pydantic to read the data even if it's not a dict,
    # allowing it to parse the SQLAlchemy ORM model directly.
    model_config = ConfigDict(from_attributes=True)


# db model
from sqlalchemy import Column, Integer, String, Text, DateTime
from sqlalchemy.sql import func
from database import Base  # assuming you have your declarative base setup here


class BlogPost(Base):
    __tablename__ = "blog_posts"

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, nullable=False)
    slug = Column(String, unique=True, index=True, nullable=False)
    excerpt = Column(Text, nullable=False)
    body = Column(Text, nullable=False)
    date_posted = Column(DateTime(timezone=True), server_default=func.now())


# api routes
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session

import crud, schemas
from database import get_db  # your database session dependency

router = APIRouter(prefix="/posts", tags=["Blog Posts"])


@router.get("/{post_slug}", response_model=schemas.BlogPostResponse)
def fetch_blog_post(post_slug: str, db: Session = Depends(get_db)):
    # 1. Hit the database query function
    post = crud.get_post_by_slug(db, post_slug=post_slug)

    # 2. Handle the 404 cleanly
    if post is None:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail="Post not found. Sir Database has no record of this."
        )

    # 3. Return the ORM model directly; Pydantic handles the rest
    # based on BlogPostResponse
    return post
```
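For completeness, the `crud.get_post_by_slug` helper the route calls isn't shown above. Here's a minimal, self-contained sketch of what it might look like; it uses an in-memory SQLite database so it runs standalone, and every name outside the function itself is illustrative (in the real app the model and session come from your own `models`/`database` modules):

```python
from sqlalchemy import Column, Integer, String, Text, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()


class BlogPost(Base):
    __tablename__ = "blog_posts"

    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    slug = Column(String, unique=True, index=True, nullable=False)
    body = Column(Text, nullable=False)


def get_post_by_slug(db: Session, post_slug: str):
    # Return the first post matching the slug, or None so the
    # route can raise a clean 404.
    return db.query(BlogPost).filter(BlogPost.slug == post_slug).first()


# Demo wiring: in-memory DB with one seeded row.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
db = sessionmaker(bind=engine)()
db.add(BlogPost(title="Hello", slug="hello", body="world"))
db.commit()

found = get_post_by_slug(db, "hello")
missing = get_post_by_slug(db, "missing")
```

Returning `None` instead of raising inside the CRUD layer keeps the HTTP concern (the 404) in the route, where it belongs.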
And some will come at me with "but edge functions perform better because they are close to the user". Yes, that is true. (If you think that defeats my argument, you didn't read closely.) You missed the fine print of that detail.
On its own, yes. But your app is going to need other dependencies, like a database or third-party APIs. So even if your edge function executes right next to the user, shoulder to shoulder, it doesn't matter if your db is in us-east-1. Mr Database takes priority, and he is the one who holds the keys to your entire app.
That also goes for when you're building your next AI SaaS app (which is most likely an OpenAI API wrapper, but alas). If one of these days the requests take a bit longer than usual, you might be cooked. Say a user uploads a 100MB PDF file to your chat-with-pdf service; now your API response takes 15 seconds and has already timed out and failed multiple times. A user lost and a bad review gained on that day.
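A quick latency budget shows why the "close to the user" win evaporates. All round-trip times here are assumptions for illustration, not measurements:

```python
# Edge function next to the user, but the database lives in us-east-1.
# All numbers are illustrative round-trip times in milliseconds.
user_to_edge = 10            # edge is close to the user: great first hop
edge_to_db = 120             # cross-continent hop to us-east-1
db_queries_per_request = 3   # a typical page needs a few queries

edge_total = user_to_edge + db_queries_per_request * edge_to_db

# Same request against a "boring" origin server sitting next to the db.
user_to_origin = 120
origin_to_db = 2
origin_total = user_to_origin + db_queries_per_request * origin_to_db

# The edge's win on the first hop is erased by repeated db round trips.
print(f"edge: {edge_total}ms, origin next to db: {origin_total}ms")
```

Under these numbers the edge path is roughly three times slower, because every extra query pays the full cross-continent toll. The lesson: put your compute next to your data, not next to your user, unless the request genuinely touches no remote dependency.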
So these edge functions everyone on x/twitter is hyping up are not silver bullets you default to whenever you spin up a project. Take two seconds to think through the architecture of your own monstrosities (I mean projects) before you take them out into the world.
When should you use serverless?
- quick prototypes and MVP projects
- hackathons
- landing pages/marketing pages
- internal dashboards
- low traffic CRUD apps (yes, your social network app no one is going to use)
- student projects
When should you ABSOLUTELY NOT use serverless?
- heavy file processing, especially larger files
- ML workloads
- LLM calls that exceed the timeouts given to you
- video processing(duh)
- real time streams
- long running tasks
- high throughput systems
- anything with tight performance needs
- ETL (Extract, Transform, Load) pipelines
- Batch jobs
Next.js shines for static/marketing sites, quick prototypes, or when "ship fast" trumps long-term sanity.
I welcome you all to Aura Systems Lab. This is a community where we make sound technical decisions and execute on them to build better products. If you agree and you got here using critical thinking, welcome to the community.
If you have taken offence to what i have said, feel free to argue with the wall. The lion shall be off.