Autodock's core promise is that it takes a project (the messier the better) that lives on localhost and moves it to a real, inspectable and introspectable remote environment with as little friction as possible.
Recently, I wanted to try out Autodock on a new repo. So I picked a project that I like a lot: Lago, an open-source usage-based billing platform with a Rails API, a React frontend, a Go event processor, Postgres, Redis, and Kafka.
And to really stress test Autodock, I did the following:
- I cloned the Lago repo.
- I deleted every Dockerfile and docker-compose file.
- I tried to run the entire thing on Autodock anyway.
Autodock, true to form, iteratively reconstructed a fully functioning deployment of Lago, fixed real infra problems as they appeared, and then saved that knowledge in a way I could resume later 🎉
In several important ways, the final setup felt smoother than a traditional Docker-based workflow. And in a few ways, worse.
This post is about that tradeoff, with a significant bias towards Autodock as it's my project (but at least I'm honest about my bias!).
Why Lago
Up until writing this post, I'd only tested Autodock against production code bases I'm working on. I've also gotten some feedback from users about their codebases, but understandably, they can't share much. So a lot of my work these days is cloning mature open-source projects and trying to break Autodock with them.
Lago is a good test case precisely because it isn’t designed to be friendly to experiments like this.
It’s a production-grade usage-based billing system with:
- a Rails API
- a React frontend
- a Go event processor
- PostgreSQL, Redis, and Kafka in the critical path
- and a mix of native and language-level dependencies
It’s the kind of system where Docker and docker-compose are existential.
The challenge
So much for existential... When I somewhat sadistically rm'd all of the Docker stuff, that meant no:
- `docker compose up`
- prebuilt images
- pinned base images
- implicit service names like `db`
- or "this works because the container already has the right OS packages"
What was left was just the code, the docs, and whatever assumptions had been baked into the project over time.
From there my goal was to see if Autodock's feedback loop was strong enough to get the project deployed.
Every time something failed - a missing native library, a hard-coded hostname, a service that assumed it was running in Compose - that failure became input for improving the product. The whole thing yielded a few minor nits in prompts and logic, but on the whole, it worked!
Definition of done
For this experiment, success meant:
- the Rails API boots and responds to `/health`
- the frontend runs in a real browser, not just `curl`
- background infrastructure (Postgres, Redis, Kafka) is actually running
- the event processor connects and idles without crashing
- and the app can be exposed publicly over HTTPS
In other words: if I sent you a link, you could click around and believe it was a real application - even if it was clearly a dev environment.
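That checklist translates almost directly into a smoke test. Here's a minimal sketch: the hostnames are the ones Autodock printed for my box, and which port maps to which service is my assumption, so substitute your own.

```shell
# Hypothetical smoke test for the "definition of done" checklist.
check() {
  # succeed only on an HTTP success response, and never hang for long
  if curl -fsS --max-time 5 -o /dev/null "$1"; then
    echo "ok: $1"
  else
    echo "FAIL: $1"
  fi
}

check "https://3000--xenacious-condor.autodock.io/health"  # Rails API (assumed port)
check "https://8080--xenacious-condor.autodock.io/"        # frontend (assumed port)
```

If both lines come back `ok`, a stranger clicking your link sees a real application.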
First boot
I asked my trusty sidekick Claude to boot up an Autodock box.
╭─── Claude Code v2.0.69 ──────────────────────────────────────────────────────────────╮
│ │ Tips for getting started │
│ Welcome back Mike! │ Run /init to create a CLAUDE.m… │
│ │ ─────────────────────────────── │
│ * ▐▛███▜▌ * │ Recent activity │
│ * ▝▜█████▛▘ * │ No recent activity │
│ * ▘▘ ▝▝ * │ │
│ │ │
│ Opus 4.5 · Claude Max │ │
│ ~/Documents/GitHub/autodock/lago │ │
╰──────────────────────────────────────────────────────────────────────────────────────╯
Then, I asked it:
> Can you spin up an autodock box for this repo?
● autodock - env.launch (MCP)(name: "lago")
Your Autodock box is ready.
Environment Details:
- Name: lago
- Slug: xenacious-condor
- SSH: root@xenacious-condor.autodock.io
- Code synced to: /workspace/lago
- Auto-stop: 30 minutes of inactivity
At this point nothing was running yet. I asked, somewhat cheekily:
> Cool! Yeah, can you install deps and run services.
Claude and Autodock went back and forth for 5-6 iterations (a lot of the chatter was their mutual surprise at the missing Docker files... oops...), which eventually got Claude to:
- read `README.md`
- read `docs/dev_environment.md`
- search for `go.mod`, `Gemfile`, frontend configs
- notice missing git submodules
- try to run things
- hit errors
- fix them
- rinse and repeat
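That loop is mundane to write down, which is the point. A caricature of it in shell (nothing here is Autodock's actual code; in the real loop, "fix them" is Claude editing the environment between attempts):

```shell
# Retry a step until it succeeds or we run out of patience.
retry() {
  max=$1; shift
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      echo "succeeded on attempt $attempt"
      return 0
    fi
    # in the real loop, the agent reads the error and changes something here
    attempt=$((attempt + 1))
  done
  echo "gave up after $max attempts"
  return 1
}

retry 3 true   # "true" stands in for "bundle install", "go build", etc.
```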
Here’s an early example of that feedback loop:
● Search(pattern: "**/go.mod")
Found 1 file
● Read(events-processor/README.md)
● Bash(go mod download)
Followed shortly by a real failure that it hadn't anticipated in its initial scan:
/usr/bin/ld: cannot find -lexpression_go: No such file or directory
D'oh! It didn't understand that it needed a custom library. But Claude was smart enough to run an internet search, realize that the library was made by Lago, check it out, build it, and then get the build over the line.
With the events processor building cleanly, the next failures were more familiar.
Tests failed because there was no database.
So Claude installed PostgreSQL, Redis, and a Kafka-compatible broker (Redpanda), started the services, created users and databases, and wired everything together until the Go service could connect and idle cleanly.
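"Wired everything together" mostly means a lot of waiting on ports. Here's a helper in the spirit of the glue that got scripted (my reconstruction, not Autodock's output; the ports are the usual defaults for these services):

```shell
# Block until a TCP port accepts connections, or give up after a timeout.
wait_for_port() {
  host=$1; port=$2; timeout=${3:-30}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # bash's /dev/tcp gives us a dependency-free TCP probe
    if bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
      echo "up: $host:$port"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for $host:$port"
  return 1
}

# usage, with the usual default ports:
#   wait_for_port 127.0.0.1 5432   # Postgres
#   wait_for_port 127.0.0.1 6379   # Redis
#   wait_for_port 127.0.0.1 9092   # Kafka/Redpanda
```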
Once the backend infrastructure was in place, it moved on to the rest of the stack:
- initialized the Rails API
- installed the correct Ruby version
- fixed missing system libraries for native gems
- ran migrations
- started Puma
- verified `/health` returned something sensible
Then the frontend:
- installed Node and pnpm
- resolved workspace dependencies
- started the Vite dev server
- and promptly ran into CORS issues, in spite of the fact that Autodock has a metric ton of guards against this. Again, d'oh!
However, Claude was smart enough to reach for the Autodock example library (it's sort of like context7, but for infra failures), which gave instructions on what to do. Claude updated the API’s CORS configuration, restarted the server, and the GraphQL requests started flowing.
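A quick way to confirm a fix like that took, without opening a browser, is a CORS preflight request. A sketch, with the URLs from my box (`/graphql` is Lago's GraphQL endpoint; which port serves what is my assumption):

```shell
# Send a CORS preflight and look for the allow-origin header in the reply.
cors_check() {
  curl -is --max-time 5 -X OPTIONS \
    -H "Origin: $2" \
    -H "Access-Control-Request-Method: POST" \
    "$1" | grep -i "access-control-allow-origin" \
    || echo "no CORS header from $1"
}

cors_check "https://3000--xenacious-condor.autodock.io/graphql" \
           "https://8080--xenacious-condor.autodock.io"
```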
At that point, everything was up.
- The Rails API was running.
- The frontend was loading in a real browser.
- Postgres, Redis, Kafka, and the events processor were alive in the background.
And Autodock exposed the app publicly:
● autodock - env.expose (MCP)(port: 3000)
Exposed port 3000 at https://3000--xenacious-condor.autodock.io
● autodock - env.expose (MCP)(port: 8080)
Exposed port 8080 at https://8080--xenacious-condor.autodock.io
At this point, Lago was behaving like a real application, in a real environment, that I could stop, restart, inspect, and break again if needed.
AUTODOCK.md
Once everything was working, I asked Autodock to "save" the environment.
It spit out AUTODOCK.md, which is sort of like a Dockerfile for LLMs. It's a compact "how to resume this box" playbook that captured the reality of what happened during our first deploy. This particular file answered questions like:
- which ports were mapped?
- what hostnames did the frontend expect?
- where are the logs?
- what esoteric env vars did I need again?
AUTODOCK.md is basically a diary of the agent's victories and failures on its way to success.
For example: that custom native library we had to build (libexpression_go.so):
### Native Library: libexpression_go
The events-processor requires `libexpression_go.so`, a Rust library with Go FFI bindings from the [lago-expression](https://github.com/getlago/lago-expression) repo.
**Installed at:** `/usr/local/lib/libexpression_go.so`
**To rebuild from source:**

```shell
cd /workspace
git clone https://github.com/getlago/lago-expression.git
cd lago-expression
cargo build --release -p expression-go
sudo cp target/release/libexpression_go.so /usr/local/lib/
sudo ldconfig
```
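After a rebuild it's worth confirming the dynamic linker can actually see the result. This check is mine, not part of AUTODOCK.md:

```shell
# ldconfig often lives in /sbin, which isn't always on PATH
PATH="$PATH:/sbin:/usr/sbin"

# Report whether the dynamic linker's cache knows about a library.
verify_lib() {
  if ldconfig -p 2>/dev/null | grep -q "$1"; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

verify_lib libexpression_go.so
```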
Or the "classic" kind of bug where the variable name is almost right, but not quite:
2. **RSA key errors**: Use `LAGO_RSA_PRIVATE_KEY` (not `RSA_PRIVATE_KEY`) - must be base64 encoded
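Getting that one right looks something like this. The key path and size are my choices; the load-bearing parts are the `LAGO_` prefix and the base64 step:

```shell
# Generate a key and export it base64-encoded under the name Lago reads.
openssl genrsa -out /tmp/lago-rsa.pem 2048 2>/dev/null
LAGO_RSA_PRIVATE_KEY="$(base64 < /tmp/lago-rsa.pem | tr -d '\n')"
export LAGO_RSA_PRIVATE_KEY
echo "encoded key: ${#LAGO_RSA_PRIVATE_KEY} characters"
```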
Or that Vite was cranky about hostnames unless we passed an escape hatch:
4. **Vite host validation**: Always start Vite with `__VITE_ADDITIONAL_SERVER_ALLOWED_HOSTS=.autodock.io`
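Applying that escape hatch just means prefixing whatever command starts the dev server (`pnpm dev` is my guess at Lago's script name). A dependency-free demonstration that the prefix form really reaches the child process:

```shell
# In real life: __VITE_ADDITIONAL_SERVER_ALLOWED_HOSTS=.autodock.io pnpm dev
# Here, a stand-in child process that just reports what it received:
__VITE_ADDITIONAL_SERVER_ALLOWED_HOSTS=.autodock.io \
  sh -c 'echo "allowed hosts: $__VITE_ADDITIONAL_SERVER_ALLOWED_HOSTS"'
# → allowed hosts: .autodock.io
```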
This is the stuff that, in Docker land, is often difficult to encode in an actionable way. Not that you can't squeeze all this stuff into a Dockerfile, but without context, it's often hard to know why it is there and when to skip it.
Of course, one big thing we're missing here is layer caching. While our box is near-instantly resumable, building it again will basically take forever. I'm currently working on a box-freeze feature to work around this limitation!
Verdict
Autodock isn't a Docker killer (that wouldn't be nice, would it?), but it is a Docker-adjacent service for the agent age. If Docker gives you reproducible builds, Autodock gives you reproducible staging environments, warts and all. The kind of thing you can deploy, get feedback on, or write dev.to articles about. Give it a shot on your app and see how it fares!

