
Serving 1000 requests in 230ms with FastAPI + Vuejs

Alin Climente ・ 2 min read

I made a boilerplate project for FastAPI and Vue, and I was curious whether FastAPI could handle delivering the initial static files required for a single-page application.

Below we serve the dist folder produced by npm run build (around 600 KB). The Docker image for FastAPI was taken from Sebastián Ramírez's repo.

from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates

app = FastAPI()

# Mounting default Vue files after running npm run build 
app.mount("/dist", StaticFiles(directory="dist/"), name="dist")
app.mount("/css", StaticFiles(directory="dist/css"), name="css")
app.mount("/img", StaticFiles(directory="dist/img"), name="img")
app.mount("/js", StaticFiles(directory="dist/js"), name="js")

templates = Jinja2Templates(directory="dist")

@app.get("/", response_class=HTMLResponse)
async def root(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})

Here is the gunicorn command I used:

gunicorn main:app --workers=8 -b "0.0.0.0:3000" --worker-class=uvicorn.workers.UvicornWorker --log-level info
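The --workers=8 above is a choice for this particular machine; gunicorn's docs suggest (2 × cores) + 1 as a starting point. A small helper to compute that (the function name is mine, not part of gunicorn):

```python
import multiprocessing
from typing import Optional

def recommended_workers(cores: Optional[int] = None) -> int:
    """Gunicorn's suggested starting point: (2 x CPU cores) + 1."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return 2 * cores + 1

print(recommended_workers(4))  # a 4-core machine -> 9 workers
```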

I did the load testing with baton, a CLI tool written in Go.

The -c flag sets the number of concurrent requests, and -r sets the total number of requests to perform.

Let's start the siege with 1000 requests:

>> baton -u http://localhost:3000 -c 10 -r 1000
====================== Results ======================
Total requests:                                  1000
Time taken to complete requests:         236.375341ms
Requests per second:                             4231
===================== Breakdown =====================
Number of connection errors:                        0
Number of 1xx responses:                            0
Number of 2xx responses:                         1000
Number of 3xx responses:                            0
Number of 4xx responses:                            0
Number of 5xx responses:                            0
=====================================================

Looks pretty good: 1000 requests finished in about 236 ms.
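As a sanity check, baton's requests-per-second figure is simply the total request count divided by the elapsed time:

```python
# 1000 requests in 236.375341 ms -> baton reports 4231 req/s
elapsed_s = 236.375341 / 1000
rps = 1000 / elapsed_s
print(round(rps))  # 4231
```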

Let's try with 10K requests:

baton -u http://localhost:3000 -c 10 -r 10000
====================== Results ======================
Total requests:                                 10000
Time taken to complete requests:         2.526745739s
Requests per second:                             3958
===================== Breakdown =====================
Number of connection errors:                        0
Number of 1xx responses:                            0
Number of 2xx responses:                        10000
Number of 3xx responses:                            0
Number of 4xx responses:                            0
Number of 5xx responses:                            0
=====================================================

10,000 requests finished in about 2.5 s!

Of course, in real life you will not see these numbers: many factors interfere, such as network speed and additional processing on the server. I also omitted nginx from this setup, which is a must in production; without it you risk a DoS attack.

The load test was run on a laptop with an Intel i5, 8 GB of RAM, and an SSD.
