Sharon
How I Built a Web Vulnerability Scanner - OpenEye

Creating secure web applications is no easy feat. Vulnerabilities like SQL injection, XSS, or CSRF are still among the most common attack vectors, yet not every developer has the time or the skills to run deep security scans.

So I decided to solve this problem by building OpenEye: a modern, cloud-hosted, and user-friendly web vulnerability scanner. It leverages OWASP ZAP under the hood but wraps it in a clean Django-based interface, making vulnerability scanning accessible both to non-technical users and to professionals who just want clear, concise output.

This blog covers the concept, architecture, implementation, security considerations, and deployment of the project.

Concept: Making Security Scanning Accessible

The idea was simple. What if anyone, not just security experts, could run a reliable vulnerability scan against their own website and instantly see a structured report highlighting risks by severity?

With OpenEye, users log in, enter a target URL, and initiate a scan. The system then spins up a dedicated ZAP container, performs both active and passive scanning, and outputs results grouped by severity levels — critical, high, medium, and low.

Users can also revisit past scans through their personal history panel, ensuring that important findings aren't lost in a sea of logs.

The frontend is built with Tailwind CSS + JavaScript and Django templates. It's simple and intuitive; even a third grader wouldn't get lost.

The goal was to reduce friction: whether you're a developer or someone without a technical background, you can quickly run a scan and make sense of the findings.

Why OWASP ZAP?

Initially, I considered building my own scanning engine, but that would mean reinventing the wheel. OWASP ZAP is an industry-standard DAST (Dynamic Application Security Testing) tool, and it already:

  • Detects SQLi, XSS, CSRF, authentication/session flaws, and misconfigurations
  • Provides a JSON API for integration
  • Has a robust active and passive scanning mechanism

By embedding ZAP inside Docker containers, each scan runs in isolation, preventing cross-contamination and ensuring resource efficiency.

Architecture Overview

Here's how the system fits together:

The frontend and backend are built with Django; PostgreSQL (hosted on Supabase) stores scan history; AWS Cognito handles authentication; and the core scanning functionality comes from OWASP ZAP running in Docker containers. After testing everything locally, I hosted the app on an AWS EC2 instance so users could access it easily.

High level view of OpenEye

I've said a lot; deep breaths. Now let's take it slowly and walk through the implementation step by step.

Django Project Setup

I'll assume you already know how to set up a Django project and run it. If not, no worries at all; a quick Google search will turn up a ton of resources. Using Django isn't compulsory; use any framework that works for you. I picked it because I wanted to learn a little more Django.

If you'll be working entirely locally, you can just spin up a PostgreSQL database on your machine. Otherwise, use a cloud-hosted database. Supabase was my choice; it's free and easy to use. Create the necessary tables and fields to store your scan information. It's all up to you; you're free to store whatever you like.
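For illustration, here's a minimal sketch of the kind of record you might store per scan, shown as a plain dataclass. The field names and severity buckets are my own assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ScanRecord:
    """One row of scan history (hypothetical schema)."""
    target_url: str
    status: str = "pending"   # e.g. pending / running / complete
    critical: int = 0         # counts of findings by severity
    high: int = 0
    medium: int = 0
    low: int = 0
    started_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


record = ScanRecord(target_url="https://example.com")
```

In Django this would map naturally onto a `models.Model` with a foreign key to the user, so each person's history panel only shows their own scans.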

OWASP ZAP Integration

One of the best parts about OWASP ZAP is that it exposes a REST API out of the box. That means instead of manually interacting with the ZAP desktop client, you can programmatically control scans from your own application. This was perfect for me because I wanted OpenEye to feel like a standalone platform.

At a high level, here's what I needed to do:

  1. Trigger a spider scan – to crawl the target application and discover URLs/endpoints
  2. Run an active scan – to test those discovered endpoints for vulnerabilities
  3. Fetch alerts – to retrieve all the issues ZAP found so I could parse, rank, and display them in OpenEye's dashboard

ZAP's REST API makes these tasks surprisingly straightforward. For example, here's a simplified version of the wrapper functions I built:

from typing import Any, Dict

def start_spider_scan(self, target_url: str) -> str:
    # Kick off a spider (crawl) of the target and return the scan ID
    return self._make_request(
        "/JSON/spider/action/scan/",
        {'url': target_url}
    ).get('scan', '')

def get_alerts(self, target_url: str) -> Dict[str, Any]:
    # Retrieve every alert ZAP has recorded for the target
    return self._make_request(
        "/JSON/core/view/alerts/",
        {'baseurl': target_url}
    )

All I'm really doing here is sending HTTP requests to ZAP's REST API endpoints:

  • /JSON/spider/action/scan/ → tells ZAP to start crawling a target
  • /JSON/core/view/alerts/ → retrieves all alerts (vulnerabilities, misconfigurations, etc.) for that target

The _make_request() helper I wrote under the hood is just an HTTP client method that talks to ZAP running inside its Docker container. So instead of re-inventing the wheel and writing my own vulnerability scanner from scratch, I leverage ZAP's proven scanning logic but wrap it in my own backend API layer.
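The post doesn't show `_make_request()` itself, but under the assumptions stated here (ZAP reachable over HTTP, JSON responses, optional API key) it could look something like this sketch. The class shape and the `api_key` handling are my own assumptions:

```python
import json
import urllib.parse
import urllib.request
from typing import Any, Dict, Optional


class ZapClient:
    def __init__(self, base_url: str = "http://localhost:8080",
                 api_key: Optional[str] = None):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _build_url(self, endpoint: str, params: Dict[str, str]) -> str:
        # ZAP accepts its API key as an 'apikey' query parameter
        merged = dict(params)
        if self.api_key:
            merged["apikey"] = self.api_key
        return f"{self.base_url}{endpoint}?{urllib.parse.urlencode(merged)}"

    def _make_request(self, endpoint: str,
                      params: Dict[str, str]) -> Dict[str, Any]:
        # GET the JSON endpoint and decode the response body
        url = self._build_url(endpoint, params)
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.loads(resp.read().decode())
```

Any HTTP client works here; the only contract is "send a GET, parse JSON."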

This approach gave me two big advantages:

  1. Abstraction & Control – My backend only needs to call simple Python functions like start_spider_scan() or get_alerts(). I don't have to expose raw ZAP API calls directly to the frontend.

  2. Custom Processing – Once I had the JSON responses, I could parse them, group issues by severity, and feed them into my own database and dashboard instead of relying on ZAP's default reporting.
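The grouping step can be a small pure function over ZAP's alert JSON. ZAP labels each alert with a `risk` field (High/Medium/Low/Informational); mapping those onto OpenEye's own severity buckets is left as an assumption here:

```python
from collections import defaultdict
from typing import Any, Dict, List


def group_alerts_by_risk(
    alerts: List[Dict[str, Any]]
) -> Dict[str, List[Dict[str, Any]]]:
    """Bucket ZAP alerts by their 'risk' field."""
    grouped: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
    for alert in alerts:
        grouped[alert.get("risk", "Informational")].append(alert)
    return dict(grouped)


sample = [
    {"alert": "SQL Injection", "risk": "High"},
    {"alert": "X-Content-Type-Options Header Missing", "risk": "Low"},
    {"alert": "Cross Site Scripting (Reflected)", "risk": "High"},
]
by_risk = group_alerts_by_risk(sample)
# by_risk["High"] holds two alerts, by_risk["Low"] holds one
```

From there, each bucket's counts and details can be written straight into the scan-history table and rendered in the dashboard.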

So when a user runs a scan in OpenEye, they're really triggering these wrappers in my backend, which in turn communicate with ZAP's REST API inside Docker. Also, by default, ZAP listens on port 8080 inside the container.

For example, if I start ZAP in Docker like this:

docker run -u zap -p 8080:8080 -i ghcr.io/zaproxy/zaproxy:stable zap.sh -daemon -host 0.0.0.0 -port 8080

Where:

  • -daemon runs ZAP headlessly (no GUI)
  • -host 0.0.0.0 makes it listen on all interfaces
  • -port 8080 sets the port the API listens on inside the container
  • -p 8080:8080 maps the container port to the host, so my backend can reach it

Once ZAP is running, the API is always accessible at endpoints like:

  • http://localhost:8080/JSON/spider/action/scan/
  • http://localhost:8080/JSON/core/view/alerts/
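Putting those endpoints together, a full run is roughly spider → poll → active scan → poll → fetch alerts. Here's a sketch of that loop; the polling helper is my own addition, and it assumes ZAP's status views (`/JSON/spider/view/status/`, `/JSON/ascan/view/status/`), which report progress as a percentage string:

```python
import time


def is_complete(status: str) -> bool:
    # ZAP status views report progress as a percentage string, e.g. "100"
    return status.strip() == "100"


def run_full_scan(client, target_url: str) -> dict:
    # 1. Spider the target to discover URLs/endpoints
    spider_id = client._make_request(
        "/JSON/spider/action/scan/", {"url": target_url}).get("scan", "")
    while not is_complete(client._make_request(
            "/JSON/spider/view/status/", {"scanId": spider_id})["status"]):
        time.sleep(2)

    # 2. Actively scan the discovered endpoints
    ascan_id = client._make_request(
        "/JSON/ascan/action/scan/", {"url": target_url}).get("scan", "")
    while not is_complete(client._make_request(
            "/JSON/ascan/view/status/", {"scanId": ascan_id})["status"]):
        time.sleep(5)

    # 3. Collect everything ZAP found for the target
    return client._make_request(
        "/JSON/core/view/alerts/", {"baseurl": target_url})
```

In production you'd run this off the request thread (a background task or worker), since active scans can take minutes.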

Authentication with Cognito

Managing authentication securely is one of those things that looks simple on the surface ("just add login/signup") but in reality is full of pitfalls: password storage, token lifetimes, OAuth2 flows, etc. And I had a bit of a headache setting up the auth callback (side-eye to AWS for this).

Either way, Cognito gave me:

  • Secure defaults — password policies, account recovery, MFA
  • OAuth 2.0 compliance — standard flows (authorization code, implicit, etc.)
  • Scalability — I don't need to worry about managing a user store or scaling login endpoints

Here's what happens when a user logs in to OpenEye:

1. Redirect to Cognito

When the user clicks "Login," they're redirected to my Cognito hosted login page. Cognito provides a default UI, which saved me time.

2. Authorization Code Returned

After the user enters their credentials, Cognito redirects them back to my Django app with an authorization code in the URL query string.

3. Backend Exchanges Code for Tokens

My Django app then takes that code and makes a server-to-server POST request to Cognito's /oauth2/token endpoint. This returns:

  • An ID Token, basically a JWT containing user identity claims like email, sub, etc.
  • An Access Token and refresh token

4. User Session Established

Django decodes the ID token, extracts the user info, and establishes a session. From the app's perspective, the user is now authenticated.
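Step 3 above, the server-to-server exchange, boils down to one POST plus decoding the ID token's payload. Here's a hedged sketch: the domain, client ID, and redirect URI are placeholders, and real code must verify the JWT signature against Cognito's JWKS instead of just decoding it as shown:

```python
import base64
import json
import urllib.parse


def build_token_request(domain: str, client_id: str,
                        code: str, redirect_uri: str) -> tuple:
    """Build the POST to Cognito's /oauth2/token endpoint
    (authorization-code grant)."""
    url = f"https://{domain}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "client_id": client_id,
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return url, body


def decode_jwt_payload(jwt: str) -> dict:
    # WARNING: decoding only -- production code must verify the
    # signature using Cognito's published JWKS keys
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

If your app client has a secret, Cognito expects it via HTTP Basic auth on that POST; libraries like `authlib` can also handle the whole flow for you.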

One thing Cognito enforces, for good reason, is that OAuth2 redirects must happen over HTTPS (except for localhost during development). This meant that when I hosted my app on an EC2 instance, I couldn't just serve HTTP to Cognito; I had to set up SSL.

First, after setting the app up on an AWS EC2 instance, I created a free subdomain for it using FreeDNS (you could check them out), then ran this command:

sudo certbot --nginx -d openeye.chickenkiller.com

Certbot automatically provisioned and installed free SSL certificates. Nginx handled HTTPS termination and forwarded requests to Django running on Gunicorn.

Then finally, the entire OAuth2 flow worked securely end-to-end...
Relax and keep reading; I'll elaborate more on this.

AWS EC2 Deployment

A quick note on ordering: I had actually set up the app on the EC2 instance before creating the subdomain and configuring SSL, and there was a fair bit of back and forth. To make things easier to follow, I mentioned those steps earlier alongside the auth setup, so please don't get confused.

Once I had the core pieces (ZAP API, Django backend, Cognito authentication), I needed a place to run everything in the cloud. For this, I chose an AWS EC2 t2.micro instance (Ubuntu 22.04): small, cheap, and well within the AWS Free Tier.

The very first thing I did after launching the instance was update packages and install the essentials: Python, Docker, Nginx, etc.

At this point, I had a clean Ubuntu server with the tools needed to run both my app and ZAP.

Django doesn't serve production traffic directly, so I used Gunicorn as the WSGI HTTP server. Gunicorn is lightweight and designed specifically for running Python web apps in production: it manages a pool of worker processes so my app can handle concurrent requests.

Then I put Nginx in front of Gunicorn for two reasons:

  1. Static files: Nginx serves static assets (CSS, JS, images) much faster than Gunicorn
  2. Reverse proxy: Nginx terminates HTTPS and forwards requests to Gunicorn on port 8000

So when clients hit https://openeye.chickenkiller.com:

  • Nginx terminates SSL, then proxies the request to Gunicorn running Django on port 8000
  • Static files are served directly by Nginx for speed
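The resulting Nginx server block looks roughly like this. The filesystem paths and upstream port are assumptions based on the setup described above, and Certbot typically appends the SSL directives for you:

```nginx
server {
    listen 443 ssl;
    server_name openeye.chickenkiller.com;

    # Certificates provisioned by Certbot
    ssl_certificate     /etc/letsencrypt/live/openeye.chickenkiller.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openeye.chickenkiller.com/privkey.pem;

    # Static assets served directly by Nginx
    location /static/ {
        alias /home/ubuntu/openeye/static/;
    }

    # Everything else proxied to Gunicorn
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

The `X-Forwarded-Proto` header matters here: it lets Django know the original request was HTTPS even though the hop from Nginx to Gunicorn is plain HTTP.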

For the scanning engine, I didn't want to install ZAP directly on the host. Running it in Docker gave me isolation, portability, and easy lifecycle management (start, stop, update). So I just downloaded the image and ran it detached (-d means it stays up as a background service).

Now, my Django backend can talk to the ZAP API via http://localhost:8080.

Conclusion

So that's it! Not as intimidating as you thought, huh? Oh yeah, and why the name OpenEye, you ask? Well, why not? Don't you think it's the perfect name for a vulnerability scanner? OpenEye: literally a wide-open eye for finding vulnerabilities (whatever that means). Feel free to replicate this and add your own spice!
