DEV Community

RjS

Designing a Real-Time React Playground with Workers, Queues, and WebSockets

I built a full-stack SaaS project, an interactive playground to learn and practice React. Think of it as LeetCode, but for React.

Of course, the usual full-stack features are there (authentication, etc.), but that’s not the interesting part.

What is interesting is the system design and the problems I ran into while building and deploying it.


What Makes This Project Different

This project involves several non-trivial concepts:

  • Pub/Sub architecture
  • Queues and workers
  • WebSockets for real-time updates
  • Babel (standalone) for transpilation
  • Puppeteer for code execution
  • Docker for environment consistency
  • AWS for deployment
  • Isolated sandbox for safe execution

Core Flow

Here’s how the system works:

1. User submits code
2. Backend enqueues a job and returns a solutionId
3. Frontend registers a WebSocket connection using the solutionId
4. A worker picks up the job and starts execution
5. The worker finishes execution
6. The worker publishes the result via Redis
7. The backend, subscribed to that channel, receives the result
8. The backend emits the result over the WebSocket
9. The frontend receives the result and updates the UI


Why Transpile Twice?

You might wonder:

If we already transpile on the frontend, why do it again on the backend?

If we passed the frontend's transpiled output to the backend, we would lose the ability to properly simulate user interactions on it.

For validation, we need to simulate events (like clicks). To do that reliably:

  • We take the original user code
  • Transpile it again on the backend
  • Execute it inside Puppeteer
  • Trigger events programmatically

This ensures we can accurately simulate interactions and validate behaviour.
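Concretely, the backend step might look something like this sketch. It assumes `@babel/standalone` is installed; the require is guarded so the snippet still loads (and falls back to returning the source unchanged) when Babel isn't available. The function name is illustrative.

```javascript
// Sketch of the backend transpile step (assumes @babel/standalone).
let Babel = null;
try {
  Babel = require('@babel/standalone');
} catch {
  // Babel not installed; transpile() falls back to returning the source.
}

// Transpile the ORIGINAL user source (JSX) to plain JS for Puppeteer.
function transpile(source) {
  if (!Babel) return source;
  return Babel.transform(source, { presets: ['react'] }).code;
}
```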


Sandbox Execution

Running arbitrary user code is dangerous.

To handle this:

  • Code is executed inside a controlled Puppeteer environment
  • Each execution happens in a new isolated page
  • This prevents malicious scripts from affecting the system
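A minimal sketch of that isolation is below. The launched Puppeteer `Browser` is passed in as a parameter so the snippet stays dependency-free; `runInSandbox` and the page contents are illustrative, not the project's actual code.

```javascript
// Sketch: run untrusted (already transpiled) code in a fresh Puppeteer page.
// `browser` is assumed to be an already-launched Puppeteer Browser instance.
async function runInSandbox(browser, transpiledCode) {
  const page = await browser.newPage(); // fresh, isolated page per run
  try {
    await page.setContent('<div id="root"></div>');
    // Execute the user's code inside the page context, where it cannot
    // touch the host process or other submissions.
    return await page.evaluate((code) => {
      new Function(code)();
      return { ok: true };
    }, transpiledCode);
  } finally {
    await page.close(); // always dispose the page, even on failure
  }
}
```

Closing the page in `finally` is the important part: a crashed or malicious submission never leaks state into the next run.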

Deployment Challenges

Problem 1: Puppeteer Dependencies

Puppeteer requires Chrome and several system-level dependencies.

Most backend hosting platforms don’t provide these out of the box.

Solution: Docker

Docker allowed me to define a consistent environment with:

  • Chrome dependencies
  • Node environment
  • All required libraries

Now the app runs the same everywhere.
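A Dockerfile for this kind of setup might look like the sketch below. It assumes a Debian-based Node image and Puppeteer pointed at the system Chromium; the exact package list, env var names, and entry point (`server.js`) vary by Puppeteer version and project layout, so treat it as a starting point, not my actual file.

```dockerfile
# Debian-based Node image so Chromium's apt packages are available
FROM node:18-slim

# System Chromium plus fonts; the full dependency list varies by distro
RUN apt-get update \
  && apt-get install -y --no-install-recommends chromium fonts-liberation \
  && rm -rf /var/lib/apt/lists/*

# Point Puppeteer at the system browser instead of downloading its own
# (env var names differ across Puppeteer versions)
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

CMD ["node", "server.js"]
```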


Problem 2: Render Deployment Failure

Initially, I deployed the backend on Render.

But I kept getting:

```
package dotenv not found
```

I tried:

  • Reinstalling dependencies
  • Modifying the Dockerfile
  • Adding installation scripts

Nothing worked.


Switching to Railway

I moved to Railway, and guess what? It worked immediately.

But there was a catch:

  • Only a 1-month free trial

So this wasn’t a long-term solution.


Moving to AWS

This is where things got real.

I had theoretical knowledge of AWS, but this was my first practical experience.

What I did:

  • Launched an EC2 instance
  • Configured a static IP (an Elastic IP)
  • Set up security groups
  • Learned SSH (the hard way)
  • Cloned the backend repo
  • Configured environment variables

This process involved some trial and error, but it worked.


Problem 3: No HTTPS

AWS provides a public DNS name, but it serves plain HTTP, not HTTPS. That becomes a problem because if the frontend is served over HTTPS, the browser expects all backend requests to use HTTPS too; otherwise it blocks them as mixed content.

To get HTTPS, I needed a domain and a TLS certificate, but I didn't want to spend money on a hobby project.

Solution: Render as a Proxy

I reused Render, but differently.

Instead of hosting the backend:

  • I created a proxy service
  • It forwards requests from a secure Render URL → AWS backend

So now:

  • Users hit a secure HTTPS endpoint (Render)
  • Requests are forwarded to my AWS server

Problem 4: Keeping the Backend Updated

After deployment, another issue appeared:

When I push new code, how does AWS get it?

Initially, the answer was:

  • SSH into the instance
  • Pull the latest code manually

Clearly not scalable.


Solution: CI/CD

Time to implement CI/CD (another thing I only knew in theory before this project).

What I did:

  • Wrote a simple YAML pipeline
  • Automated pulling the latest code on deployment

Note: I still have a lot to learn about Docker and CI/CD.
This setup is basic but functional.
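For reference, a pipeline in this spirit could be a single GitHub Actions workflow that SSHes into the instance and pulls the latest code on every push. Everything below is illustrative: the action, branch, paths, secret names, and restart command are assumptions, not my exact file.

```yaml
# .github/workflows/deploy.yml (illustrative)
name: Deploy backend

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Pull latest code on the EC2 instance
        uses: appleboy/ssh-action@v0.1.10
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/backend
            git pull origin main
            # restart/rebuild however the app is run, e.g.:
            # docker compose up -d --build
```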


Final Result

Everything is now working end-to-end:

  • Secure access via proxy
  • Scalable job processing via queues
  • Real-time updates via WebSockets
  • Safe execution using Puppeteer + isolation
  • Automated deployment via CI/CD

Try It Out

👉 https://reactpg.vercel.app


What’s Next

I’ll be writing another blog soon covering:

  • System architecture in depth
  • Detailed flow diagrams
  • Design decisions and trade-offs

Closing Thoughts

This project forced me to move from theory → practice in multiple areas:

  • System design
  • DevOps
  • Deployment
  • Debugging real-world issues

And honestly, that’s where most of the learning happened.
