Thomas Hansen

Posted on • Originally published at ainiro.io

Hyperlambda is 20 times faster than Fast API and Python

Edit - I was only retrieving 25 records from my Hyperlambda endpoint. When I fixed that, the test executed 79,081 requests towards my Hyperlambda endpoint. That's still 19 times more data than Python and Fast API could serve me, so the Volkswagen to fighter jet comparison still holds.

I just created a Python script that spawns 50 concurrent workers and hammers an HTTP API endpoint, over and over again, to see how many requests it can complete in 30 seconds. I ran it twice, once towards my Fast API server, and another time towards my Hyperlambda server. Conclusion? Hyperlambda has more than 20 times better performance than Python.

  • Hyperlambda: 97,875 requests
  • Python with "Fast API": 4,225 requests
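The raw numbers above translate directly into throughput figures over the 30-second run. A quick sanity check of the "20 times" claim:

```python
# Requests completed during the 30-second benchmark window.
hyperlambda_requests = 97_875
fastapi_requests = 4_225
duration = 30  # seconds

hyperlambda_rps = hyperlambda_requests / duration
fastapi_rps = fastapi_requests / duration
ratio = hyperlambda_requests / fastapi_requests

print(f"Hyperlambda: {hyperlambda_rps:.1f} req/sec")
print(f"Fast API:    {fastapi_rps:.1f} req/sec")
print(f"Ratio:       {ratio:.1f}x")
```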

This implies that Hyperlambda is 20 times faster, has 20 times better performance, and scales literally "infinitely" better than Python. The latter becomes true because as multiple users start hammering your server in parallel, the server's overhead grows exponentially with how much time it needs to deal with each request. So the real "performance difference" here arguably becomes 400 times, and not 20 times, considering how many concurrent users the Hyperlambda code could deal with before needing to scale, versus the Python equivalent.

Implying that if your Python solution can deal with 20 concurrent users, you could deal with 8,000 concurrent users by upgrading to Hyperlambda.

Python's bad Performance

Throughput is the single most important metric for measuring "scalability". The reason this is, for all practical concerns, "broken" in Python is that Python's garbage collector isn't thread safe, which is why the interpreter needs its Global Interpreter Lock (GIL). Google literally tried to fix this for 20 years, but had to give up. Python cannot be fixed without destroying backwards compatibility. But the performance isn't even the whole story. On average, Python requires 10x as much complex code to deliver the same (inferior) result that Hyperlambda gives you. To understand the problem, take a look at the following Python code, and compare its size to the Hyperlambda equivalent.
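A minimal sketch of the limitation in question: pure-Python CPU-bound work does not speed up when spread across threads, because the GIL lets only one thread execute bytecode at a time. The function and workload here are illustrative, not part of the benchmark:

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_work(n):
    # Pure-Python CPU-bound loop: holds the GIL the whole time it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 200_000

# Sequential baseline: two chunks of work, one after the other.
sequential = [cpu_work(N), cpu_work(N)]

# "Parallel": two threads, but the GIL serialises the bytecode anyway,
# so wall-clock time stays roughly the same as the sequential run.
with ThreadPoolExecutor(max_workers=2) as pool:
    threaded = list(pool.map(cpu_work, [N, N]))

assert sequential == threaded  # identical results, no real parallel speed-up
```

This is also why Python web servers scale by forking extra worker processes rather than threads, which multiplies memory usage instead of sharing it.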

Python requiring 6.27 times as many tokens as Hyperlambda

To add insult to injury, please realise that the Hyperlambda HTTP endpoint was 100% automatically created using our CRUD generator, implying a "citizen developer" can connect to a database, click a button, and outperform a Python software developer with 40 years of experience.

In the following video I discuss what these numbers imply, but basically the performance boost you're getting when switching from Python to Hyperlambda is larger than the performance gain you get when upgrading from a 20-year-old Volkswagen to a modern fighter jet capable of flying at Mach 2.

The code

Below is the code I used to test. If you want to test towards Hyperlambda, comment out the first URL variable and uncomment the other one. I then used the CRUD generator to generate the Hyperlambda version. Remember to turn off authorisation requirements before you click the "Generate" button.

import asyncio
import aiohttp
import time

URL = "http://127.0.0.1:8000/artists"
#URL = "http://127.0.0.1:5000/magic/modules/chinook/artist"

async def worker(session, stop_time, counters):
    # Hammer the endpoint until the deadline, counting successes and failures.
    while time.time() < stop_time:
        try:
            async with session.get(URL) as resp:
                if resp.status == 200:
                    counters["ok"] += 1
                else:
                    counters["fail"] += 1
        except Exception:
            counters["fail"] += 1

async def main():
    duration = 30  # seconds
    concurrency = 50  # number of parallel workers

    counters = {"ok": 0, "fail": 0}
    stop_time = time.time() + duration

    async with aiohttp.ClientSession() as session:
        tasks = [
            worker(session, stop_time, counters)
            for _ in range(concurrency)
        ]
        await asyncio.gather(*tasks)

    total = counters["ok"] + counters["fail"]
    print(f"Requests sent: {total}")
    print(f"Successful:     {counters['ok']}")
    print(f"Failed:         {counters['fail']}")
    print(f"Req/sec:        {total / duration}")

if __name__ == "__main__":
    asyncio.run(main())
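One detail worth noting about the script above: the shared `counters` dict needs no lock, because asyncio tasks run cooperatively on a single thread, so increments never interleave mid-operation. A minimal sketch of the same pattern (the names here are illustrative):

```python
import asyncio

async def bump(counters, times):
    # Each task increments the shared dict; no lock needed, because
    # asyncio schedules tasks cooperatively on a single thread.
    for _ in range(times):
        counters["ok"] += 1
        await asyncio.sleep(0)  # yield to the event loop, like awaiting I/O

async def main():
    counters = {"ok": 0}
    # 50 tasks mirror the 50 workers in the benchmark script.
    await asyncio.gather(*(bump(counters, 100) for _ in range(50)))
    return counters

counters = asyncio.run(main())
print(counters["ok"])  # 50 tasks x 100 increments = 5000
```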

Below is my Python endpoint's code.

from fastapi import FastAPI, Depends
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base, Session
from typing import List

DATABASE_URL = "sqlite:///./chinook.db"

engine = create_engine(
    DATABASE_URL,
    connect_args={"check_same_thread": False}  # Required for SQLite
)

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

class Artist(Base):
    __tablename__ = "Artist"  # Chinook default name is capital A

    ArtistId = Column(Integer, primary_key=True, index=True)
    Name = Column(String, nullable=False)

app = FastAPI()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/artists")
def get_artists(db: Session = Depends(get_db)):
    artists = db.query(Artist).all()
    return [{"ArtistId": a.ArtistId, "Name": a.Name} for a in artists]

@app.get("/")
def root():
    return {"message": "Chinook API is running!"}
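Under the hood, the endpoint above boils down to a single SELECT over the Chinook Artist table, shaped into JSON-ready dicts. A stdlib-only sketch of that data access, using an in-memory database with made-up sample rows (the real benchmark reads chinook.db from disk):

```python
import sqlite3

# In-memory stand-in for chinook.db; schema mirrors the Artist model above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Artist (ArtistId INTEGER PRIMARY KEY, Name TEXT NOT NULL)"
)
conn.executemany(
    "INSERT INTO Artist (ArtistId, Name) VALUES (?, ?)",
    [(1, "AC/DC"), (2, "Accept")],  # sample rows, not the full Chinook data
)

# The same query the /artists endpoint runs, minus the ORM layer.
rows = conn.execute("SELECT ArtistId, Name FROM Artist").fetchall()
artists = [{"ArtistId": r[0], "Name": r[1]} for r in rows]
print(artists)
```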

Both HTTP endpoints use the same SQLite "chinook" database, query the same tables, and run in "debug" mode. They're both on the same machine, a MacBook Pro M3. And Hyperlambda is more than 20 times faster than Python with "Fast API" ...
