DEV Community

Nicolas Frankel for Apache APISIX

A poor man's API

Creating a full-fledged API requires resources, both time and money. You need to think about the model, the design, the REST principles, etc., before writing a single line of code. Most of the time, you don't know whether the effort is worth it: you'd like to offer a Minimum Viable Product and iterate from there. I want to show how you can achieve it without writing a single line of code.

The solution

The solution's main requirement is to use PostgreSQL, a well-established open-source SQL database.

Instead of writing our REST API, we use the PostgREST component:

PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.

-- PostgREST

Let's apply it to a simple use case. Here's a product table that I want to expose via a CRUD API:

Product table

Note that you can find the whole source code on GitHub to follow along.
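The initialization script itself isn't reproduced in the post; here's a hypothetical sketch of what it might contain. The column names and types are inferred from the JSON responses shown below, and web_anon is the anonymous role name conventionally used in PostgREST's Getting Started guide; the actual script is in the GitHub repository.

```sql
-- Hypothetical sketch; the real script lives in the GitHub repository.
-- Columns are inferred from the JSON payload the API returns.
CREATE TABLE product (
    id          INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name        VARCHAR(50)   NOT NULL,
    description VARCHAR(500),
    price       NUMERIC(6, 2) NOT NULL,
    hero        BOOLEAN       NOT NULL DEFAULT FALSE
);

-- PostgREST connects through a role; granting only SELECT keeps the API read-only
CREATE ROLE web_anon NOLOGIN;
GRANT USAGE ON SCHEMA public TO web_anon;
GRANT SELECT ON product TO web_anon;
```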

PostgREST's Getting Started guide is pretty complete and works out of the box. Yet, I didn't find any ready-made Docker image, so I created my own:

FROM debian:bookworm-slim                                                   #1

ARG POSTGREST_VERSION=v10.1.1                                               #2
ARG POSTGREST_FILE=postgrest-$POSTGREST_VERSION-linux-static-x64.tar.xz     #2

RUN mkdir postgrest

WORKDIR postgrest

ADD https://github.com/PostgREST/postgrest/releases/download/$POSTGREST_VERSION/$POSTGREST_FILE \
    .                                                                       #3

RUN apt-get update && \
    apt-get install -y libpq-dev xz-utils && \
    tar xvf $POSTGREST_FILE && \
    rm $POSTGREST_FILE                                                      #4
  1. Start from a slim Debian Bookworm base image
  2. Parameterize the build
  3. Get the archive
  4. Install dependencies and unarchive

The Docker image contains a postgrest executable in the /postgrest folder. We can "deploy" the architecture via Docker Compose:

version: "3"
services:
  postgrest:
    build: ./postgrest                                   #1
    volumes:
      - ./postgrest/product.conf:/etc/product.conf:ro    #2
    ports:
      - "3000:3000"
    entrypoint: ["/postgrest/postgrest"]                 #3
    command: ["/etc/product.conf"]                       #4
    depends_on:
      - postgres
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: "root"
    volumes:
      - ./postgres:/docker-entrypoint-initdb.d:ro       #5
  1. Build the above Dockerfile
  2. Share the configuration file
  3. Run the postgrest executable
  4. With the configuration file
  5. Initialize the schema, the permissions, and the data
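The product.conf file mounted above isn't reproduced in the post. Here's a hypothetical sketch of what it might contain; the key names come from the PostgREST configuration reference, while the role names and password are assumptions (the real file is in the repository):

```ini
# Hypothetical sketch of product.conf; the actual file is in the repository.
# Key names follow the PostgREST configuration reference.
db-uri = "postgres://authenticator:password@postgres:5432/postgres"  # assumed credentials
db-schemas = "public"       # schema(s) exposed through the API
db-anon-role = "web_anon"   # role used for unauthenticated requests
server-port = 3000          # matches the port mapping in Docker Compose
```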

At this point, we can query the product table:

curl localhost:3000/product

We immediately get the results:

[{"id":1,"name":"Stickers pack","description":"A pack of rad stickers to display on your laptop or wherever you feel like. Show your love for Apache APISIX","price":0.49,"hero":false}, 
 {"id":2,"name":"Lapel pin","description":"With this \"Powered by Apache APISIX\" lapel pin, support your favorite API Gateway and let everybody know about it.","price":1.49,"hero":false}, 
 {"id":3,"name":"Tee-Shirt","description":"The classic geek product! At a conference, at home, at work, this tee-shirt will be your best friend.","price":9.99,"hero":true}]

That was a quick win!

Improving the solution

Though the solution works, it leaves a lot of room for improvement. For example, the database user cannot change the data, but anybody can read all of it. That might not be a big issue for product-related data, but what about medical data?

The PostgREST documentation acknowledges this and explicitly advises using a reverse proxy:

PostgREST is a fast way to construct a RESTful API. Its default behavior is great for scaffolding in development. When it’s time to go to production it works great too, as long as you take precautions. PostgREST is a small sharp tool that focuses on performing the API-to-database mapping. We rely on a reverse proxy like Nginx for additional safeguards.

-- Hardening PostgREST

Instead of nginx, we would benefit from a full-fledged API Gateway: enter Apache APISIX. Let's add it to our Docker Compose file:

version: "3"
services:
  apisix:
    image: apache/apisix:2.15.0-alpine                              #1
    volumes:
      - ./apisix/config.yml:/usr/local/apisix/conf/config.yaml:ro
    ports:
      - "9080:9080"
    restart: always
    depends_on:
      - etcd
      - postgrest
  etcd:
    image: bitnami/etcd:3.5.2                                       #2
    environment:
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2397"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2397"
  1. Use Apache APISIX
  2. APISIX stores its configuration in etcd

We shall first configure Apache APISIX to proxy calls to postgrest:

curl http://apisix:9080/apisix/admin/upstreams/1 -H 'X-API-KEY: 123xyz' -X PUT -d ' #1-2
{
  "type": "roundrobin",
  "nodes": {
    "postgrest:3000": 1                                                             #1-3
  }
}'

curl http://apisix:9080/apisix/admin/routes/1 -H 'X-API-KEY: 123xyz' -X PUT -d '    #4
{
  "uri": "/*",
  "upstream_id": 1
}'
  1. Should be run from inside one of the Docker containers, so use the service's name. Alternatively, use localhost, but be sure to expose the ports
  2. Create a reusable upstream
  3. Point to the PostgREST node
  4. Create a route to the created upstream

We can now query the endpoint via APISIX:

curl localhost:9080/product

It returns the same result as above.

DDoS protection

We haven't added any feature yet, but we're now ready to start working. Let's first protect our API from DDoS attacks. Apache APISIX is designed around a plugin architecture. To protect from DDoS, we shall use a plugin. We can set plugins on a specific route when it's created, or on every route; in the latter case, it's a global rule. We want to protect every route by default, so we shall use a global rule.

curl http://apisix:9080/apisix/admin/global_rules/1 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "plugins": {
    "limit-count": {                 #1
      "count": 1,                    #2
      "time_window": 5,              #2
      "rejected_code": 429           #3
    }
  }
}'
  1. limit-count limits the number of calls in a time window
  2. Limit to 1 call per 5 seconds; it's for demo purposes
  3. Return 429 Too Many Requests; the default is 503

Now, if we execute too many requests, Apache APISIX protects the upstream:

curl localhost:9080/product
<html>
<head><title>429 Too Many Requests</title></head>
<body>
<center><h1>429 Too Many Requests</h1></center>
<hr><center>openresty</center>
</body>
</html>

Per-route authorization

PostgREST also offers an Open API endpoint at the root. We thus have two routes: / for the Open API spec and /product for the products. Suppose we want to prevent unauthorized people from accessing our data: regular users can access products, while admin users can access both the Open API spec and the products.

APISIX offers several authentication methods. We will use the simplest one possible, key-auth. It relies on the Consumer abstraction. key-auth requires a specific header: the plugin does a reverse lookup on the value and finds the consumer whose key corresponds.

Here's how to create a consumer:

curl http://apisix:9080/apisix/admin/consumers -H 'X-API-KEY: 123xyz' -X PUT -d '    #1
{
  "username": "admin",                                                               #2
  "plugins": {
    "key-auth": {
      "key": "admin"                                                                 #3
    }
  }
}'
  1. Create a new consumer
  2. Consumer's name
  3. Consumer's key value

We do the same with consumer user and key user. Now, we can create a dedicated route and configure it so that only requests from admin pass through:

curl http://apisix:9080/apisix/admin/routes -H 'X-API-KEY: 123xyz' -X POST -d ' #1
{
  "uri": "/",
  "upstream_id": 1,
  "plugins": {
    "key-auth": {},                                                             #2
    "consumer-restriction": {                                                   #2
      "whitelist": [ "admin" ]                                                  #3
    }
  }
}'
  1. Create a new route
  2. Use the key-auth and consumer-restriction plugins
  3. Only admin-authenticated requests can call the route

Let's try the following:

curl localhost:9080

It doesn't work as we are not authenticated via an API key header.

{"message":"Missing API key found in request"}

curl -H "apikey: user" localhost:9080

It doesn't work either: we are authenticated as user, but the route only authorizes admin.

{"message":"The consumer_name is forbidden."}

curl -H "apikey: admin" localhost:9080

This time, it returns the Open API spec as expected.

Monitoring

A much-undervalued feature of any software system is monitoring. As soon as you deploy a component to production, you must monitor its health. Nowadays, many monitoring solutions are available. We will use Prometheus, as it's open source, battle-tested, and widespread. To display the data, we will rely on Grafana for the same reasons. Let's add both components to the Docker Compose file:

version: "3"
services:
  prometheus:
    image: prom/prometheus:v2.40.1                                    #1
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml    #2
    depends_on:
      - apisix
  grafana:
    image: grafana/grafana:8.5.15                                     #3
    volumes:
      - ./grafana/provisioning:/etc/grafana/provisioning              #4
      - ./grafana/dashboards:/var/lib/grafana/dashboards              #4
      - ./grafana/config/grafana.ini:/etc/grafana/grafana.ini         #4-5
    ports:
      - "3001:3001"
    depends_on:
      - prometheus
  1. Prometheus image
  2. Prometheus configuration to scrape Apache APISIX. See the full file here
  3. Grafana image
  4. Grafana configuration. Most of it comes from the configuration provided by APISIX.
  5. Change the default port from 3000 to 3001 to avoid conflict with the PostgREST service

Once the monitoring infrastructure is in place, we only need to instruct APISIX to provide the data in a format that Prometheus expects. We can achieve it through configuration and a new global rule:

plugin_attr:
  prometheus:
    export_addr:
      ip: "0.0.0.0"             #1
      port: 9091                #2
  1. Bind to any address
  2. Bind to port 9091. Prometheus metrics are available on http://apisix:9091/apisix/prometheus/metrics on the Docker network
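The Prometheus side isn't reproduced in the post; a hypothetical sketch of the relevant part of prometheus.yml, pointing at the export address configured above, might look like this (the full file is linked above):

```yaml
# Hypothetical sketch; the actual prometheus.yml is in the repository.
scrape_configs:
  - job_name: "apisix"
    metrics_path: "/apisix/prometheus/metrics"
    static_configs:
      - targets: ["apisix:9091"]   # the export address configured above
```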

We can create the global rule:

curl http://apisix:9080/apisix/admin/global_rules/2 -H 'X-API-KEY: 123xyz' -X PUT -d '
{
  "plugins": {
    "prometheus": {}
  }
}'

Send a couple of queries and open the Grafana dashboard. It should look similar to this:

Grafana dashboard

Conclusion

Creating a full-fledged REST(ful) API is a huge investment. One can quickly test a simple API by exposing one's database in a CRUD API via PostgREST. However, such an architecture is not fit for production usage.

To fix it, you need to set a façade in front of PostgREST: a reverse proxy or, even better, an API Gateway. Apache APISIX offers a wide range of features, from authorization to monitoring. With it, you can quickly validate your API requirements at a low cost.

The icing on the cake: when you've validated the requirements, you can keep the existing façade and replace PostgREST with your custom-developed API.

The source code is available on GitHub.

Originally published at A Java Geek on November 20th, 2022

Top comments (8)

Writer

Looks like we have a bulletproof production-ready environment here
Thanks for the awesome article! I love when experienced professionals share their invaluable knowledge
Definitely gonna use it

Nicolas Frankel

Thanks for your kind words ☺️

chris damour

Ouch

Christophe Avonture

Very nice article; thanks!

ecyrbe

This is a terrible idea, even with an API gateway in front of it.
No one should expose databases as an API to any user-facing client.
This was debated long before and deemed bad for many reasons. Use a proper 3-tier architecture. Don't fool yourself into cheap solutions.

Nicolas Frankel

Terrible, bad and proper are subjective notions. A senior developer adapts the solution to the resources at their disposal.

Mickaël CASSY

You should check out Supabase. They provide that exact service with complete authentication and RLS, as well as buckets to store files and an equivalent of Lambdas running on Deno.
The free version is quite enough for a small project.

chris damour

Nothing about this is RESTful. Show me one thing conforming to HATEOAS