Hello everyone reading this post - and even those who might just be scrolling by.
I’m a Rust developer who enjoys building tools and experimenting with new ideas. One day (yes, during a shower — where many engineering ideas seem to appear), I thought about a problem most developers have experienced at least once: deployment failures.
Even when CI pipelines pass, deployments can still fail due to things like:
- dependency mismatches
- missing environment variables
- runtime configuration issues
- infrastructure inconsistencies
These issues often appear only after code reaches the deployment stage, which can slow down development and break pipelines.
That got me thinking:
What if we could simulate deployment conditions before deployment actually happens?
So I decided to build a pre-deployment sandbox in Rust.
The idea is simple: run code inside an isolated environment before it reaches production and analyze potential failures early. This allows developers to detect issues before pushing code to their real deployment infrastructure.
## What the Sandbox Does
The sandbox executes code in an isolated Docker environment, simulating a deployment-like setup. It can detect common issues such as:
- dependency conflicts
- runtime errors
- missing environment variables
- configuration problems
This gives developers an additional safety layer before their code reaches production.
## Tech Stack
Here is the stack I used to build the project.
- Backend: Rust, Axum (web framework), Tokio (async runtime)
- Frontend: Next.js, TypeScript
- Infrastructure: Docker for sandbox isolation, AWS ECS and EC2 for execution environments, AWS S3 & DynamoDB for storage and metadata
Yes, AWS can sometimes bill you more than expected; thankfully, free credits exist 😄.
## Rust Dependencies
Some of the main dependencies used in the backend include:
```toml
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
async-stream = "0.3"
tokio-stream = "0.1"
async-openai = { version = "0.33.0", features = ["responses"] }
tower-http = { version = "0.5", features = ["cors"] }
http = "1.4.0"
reqwest = { version = "0.12", features = ["json"] }
cookie = "0.18"
time = "0.3"
aws-config = "1"
aws-sdk-dynamodb = "1"
chrono = { version = "0.4", features = ["serde"] }
anyhow = "1.0.102"
axum-extra = { version = "0.9.6", features = ["cookie"] }
aws-sdk-s3 = { version = "1", features = ["behavior-version-latest"] }
aws-sdk-bedrockruntime = { version = "1", features = ["behavior-version-latest"] }
```
## Architecture Overview
At a high level, the sandbox works by simulating a deployment environment before the code actually reaches production.
The flow looks roughly like this:
- A developer submits their project or repository to the sandbox.
- The backend service (written in Rust using Axum) receives the request.
- The project is packaged and sent to a sandbox environment.
- A Docker container spins up to create an isolated runtime environment.
- The code runs inside this container while logs and execution output are captured.
- The system analyzes the results and reports potential issues such as runtime errors, missing dependencies, or configuration problems.
This architecture allows the code to be tested in a controlled environment that closely resembles a real deployment setup.
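To make the flow above concrete, the pipeline stages can be sketched as a tiny state machine. The state names here are my illustration, not the actual implementation:

```rust
// Illustrative job states for the sandbox pipeline (hypothetical names).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum JobState {
    Submitted, // developer submitted the project
    Packaged,  // project bundled and sent to the sandbox
    Running,   // Docker container executing the code
    Analyzed,  // logs and results inspected
    Reported,  // findings returned to the developer
}

impl JobState {
    /// Advance to the next stage of the pipeline, if any.
    fn next(self) -> Option<JobState> {
        use JobState::*;
        match self {
            Submitted => Some(Packaged),
            Packaged => Some(Running),
            Running => Some(Analyzed),
            Analyzed => Some(Reported),
            Reported => None,
        }
    }
}

fn main() {
    // Walk a job through every stage of the pipeline.
    let mut state = JobState::Submitted;
    while let Some(next) = state.next() {
        println!("{:?} -> {:?}", state, next);
        state = next;
    }
}
```

Modeling the stages explicitly like this makes it easy to persist and resume job status later on.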
## System Components
The sandbox is made up of several components working together to simulate a deployment environment and detect potential issues before code reaches production.
### API Layer (Rust + Axum)
The core backend of the system is written in Rust using the Axum framework. This API layer acts as the main entry point for the system.
It is responsible for:
- receiving requests from the frontend
- validating project submissions
- triggering sandbox execution jobs
- returning execution results and logs
Axum works well for this use case because it provides a fast and reliable async web framework built on top of Tokio.
### Sandbox Execution (Docker)
The most important part of the system is the sandbox execution environment.
Whenever a project is submitted, the backend spins up a Docker container that acts as an isolated runtime environment. The code is executed inside this container so that it cannot affect the host system.
Using Docker provides several benefits:
- process isolation
- reproducible environments
- easy dependency management
- safer execution of arbitrary code
This allows the sandbox to simulate conditions that are close to real deployment environments.
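The simplest way to drive such a container from Rust is to shell out to the Docker CLI and capture the result. This is a sketch under the assumption that the backend invokes `docker` directly; the image name and entry command are placeholders:

```rust
use std::io;
use std::process::Command;

// Run a command and capture its success flag, stdout, and stderr.
fn capture(bin: &str, args: &[&str]) -> io::Result<(bool, String, String)> {
    let output = Command::new(bin).args(args).output()?;
    Ok((
        output.status.success(),
        String::from_utf8_lossy(&output.stdout).into_owned(),
        String::from_utf8_lossy(&output.stderr).into_owned(),
    ))
}

// Run the submitted project in a throwaway container (`--rm` removes it
// when it exits). Image and entry command are illustrative.
fn run_in_sandbox(image: &str, entry: &[&str]) -> io::Result<(bool, String, String)> {
    let mut args = vec!["run", "--rm", image];
    args.extend_from_slice(entry);
    capture("docker", &args)
}

fn main() -> io::Result<()> {
    let (ok, stdout, stderr) = run_in_sandbox("node:20", &["node", "--version"])?;
    println!("success: {ok}\nstdout: {stdout}\nstderr: {stderr}");
    Ok(())
}
```

Shelling out keeps the sketch simple; a production system might talk to the Docker daemon API instead.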
### Cloud Infrastructure (AWS)
To run the sandbox environments reliably, the system uses several AWS services.
The main services include:
- EC2 for compute resources
- ECS for container orchestration
- S3 for storing artifacts and logs
- DynamoDB for storing execution metadata
Using AWS makes it easier to scale sandbox execution when multiple jobs are triggered simultaneously.
### AI Analysis Layer
After code execution completes, the system can optionally analyze the results using an AI layer.
This analysis can help detect potential issues such as:
- dependency problems
- configuration mistakes
- runtime errors
- deployment risks
This step adds an additional layer of insight beyond simply running the code.
### Frontend Interface
The frontend is built using Next.js with TypeScript.
It provides a simple interface where users can:
- submit projects to the sandbox
- monitor execution status
- view logs and analysis results
This makes the sandbox easier to use compared to interacting with the backend APIs directly.
## Execution Flow
Now let's look at what actually happens when a developer submits a project to the sandbox.
The system follows a simple execution pipeline.
### Project Submission
A developer submits their project through the frontend interface built with Next.js. This request is sent to the Rust backend API.
The request may contain things like:
- repository URL
- project files
- configuration data
- environment variables
The API validates the request before starting the execution process.
### Job Creation
Once the request is validated, the backend creates a sandbox execution job.
This job contains all the information required to run the project inside the sandbox environment, such as:
- project files
- runtime configuration
- resource limits
- execution instructions
The job metadata can be stored in DynamoDB so the system can track execution status.
### Container Initialization
Next, the system launches a Docker container inside the sandbox environment.
This container acts as a completely isolated runtime where the project can run safely without affecting the host machine.
The container environment is prepared by:
- installing required dependencies
- configuring environment variables
- mounting project files
- setting execution limits
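The preparation steps above largely map onto `docker run` flags. Here is one way the argument list could be assembled; the specific limits and the `/app` mount point are illustrative defaults, not the project's real values:

```rust
// Assemble `docker run` arguments for one sandbox job.
fn docker_run_args(image: &str, env: &[(String, String)], project_dir: &str) -> Vec<String> {
    let mut args = vec![
        "run".to_string(),
        "--rm".to_string(),          // clean up the container on exit
        "--memory=512m".to_string(), // cap memory usage
        "--cpus=1".to_string(),      // cap CPU usage
    ];
    // Configure environment variables for the job.
    for (key, value) in env {
        args.push("-e".to_string());
        args.push(format!("{key}={value}"));
    }
    // Mount the submitted project read-only into the container.
    args.push("-v".to_string());
    args.push(format!("{project_dir}:/app:ro"));
    args.push(image.to_string());
    args
}
```

Keeping this as a pure function makes the container configuration easy to unit-test without actually starting Docker.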
### Code Execution
After the container is ready, the project code is executed inside the sandbox.
During execution, the system captures:
- runtime logs
- errors
- dependency issues
- exit status
These logs help identify problems that may cause deployment failures.
### Result Analysis
Once execution finishes, the system collects the logs and execution results.
These results can then be analyzed to detect potential problems such as:
- missing dependencies
- configuration errors
- runtime crashes
- environment mismatches
Optional AI analysis can also be applied to help summarize potential issues.
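Before any AI involvement, a first pass over the logs can be plain pattern matching. The signatures below are a simplified illustration; real detection would need to be far more robust:

```rust
// Scan captured logs and the exit code for common failure signatures.
// The patterns here are illustrative examples only.
fn classify_failure(logs: &str, exit_code: i32) -> Vec<&'static str> {
    let mut issues = Vec::new();
    let lower = logs.to_lowercase();
    if lower.contains("command not found") || lower.contains("module not found") {
        issues.push("missing dependency");
    }
    if lower.contains("environment variable") || lower.contains("env var") {
        issues.push("environment mismatch");
    }
    if lower.contains("panicked at") || lower.contains("segmentation fault") {
        issues.push("runtime crash");
    }
    // Non-zero exit with no recognized pattern still counts as a failure.
    if issues.is_empty() && exit_code != 0 {
        issues.push("unclassified failure");
    }
    issues
}
```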
### Result Reporting
Finally, the results are returned to the user through the frontend dashboard.
Developers can view:
- execution logs
- detected issues
- analysis results
- deployment readiness status
This allows developers to fix problems before their code reaches production deployment.
## Challenges I Faced While Building This
Building a sandbox that executes code safely is not trivial. While developing this system, I ran into several interesting challenges.
### Secure Code Execution
One of the biggest concerns when building a sandbox is security. Since the system executes user-submitted code, it must be isolated from the host machine.
Using Docker containers helped solve this problem by providing an isolated runtime environment. Each execution runs inside its own container so that it cannot access the host system directly.
However, additional precautions are still necessary, such as:
- limiting container resources
- restricting network access
- enforcing execution timeouts
### Managing Container Lifecycle
Spinning up containers dynamically for each job introduces new challenges.
Containers must be:
- created quickly
- monitored during execution
- cleaned up after the job finishes
If containers are not properly removed, they can consume system resources and increase infrastructure costs.
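One pattern that helps here is tying cleanup to Rust's `Drop`, so the container is removed even when the job errors out partway. The naming scheme and `docker rm -f` approach are illustrative:

```rust
use std::process::Command;

// Guard that force-removes its container when dropped, so containers are
// cleaned up even if the job fails partway through.
struct ContainerGuard {
    name: String,
}

impl ContainerGuard {
    // Container naming scheme is an assumption for illustration.
    fn new(job_id: &str) -> Self {
        Self {
            name: format!("sandbox-{job_id}"),
        }
    }
}

impl Drop for ContainerGuard {
    fn drop(&mut self) {
        // Best-effort removal; ignore errors (the container may be gone).
        let _ = Command::new("docker")
            .args(["rm", "-f", &self.name])
            .status();
    }
}
```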
### Handling Different Project Environments
Another challenge was dealing with different project configurations.
Projects can require different:
- dependencies
- runtime versions
- environment variables
The sandbox environment needs to be flexible enough to support these variations while still remaining reproducible.
### Infrastructure Costs
Running sandbox environments on cloud infrastructure like AWS can become expensive if not managed carefully.
Services such as EC2 and ECS make container orchestration easier, but they also require proper resource management to avoid unnecessary costs.
Luckily, free credits help while experimenting 😄.