Every new project brings fresh challenges, but the very first one is always the same: getting it to run locally.
Every time I start working on a new project, I end up asking other developers how to launch it. The documentation is usually long, outdated, and full of manual steps: adjusting configs, copying env files and secrets, and so on.
Luckily, it doesn’t have to be this way. A few small practices can make your local environment clean, repeatable, and easy to work with, not just for you, but for the entire team.
1. Too Many Steps Instead of One Clean Command
The most common thing I see in the README.md of projects I start working on is a list of steps to get the local environment up and running. And it's always the same story:
Alright, let’s go:
Install Docker — check.
Install PHP — check.
Install Composer — check...
…
Okay, looks good. Time to run `docker compose up`... and, oh yeah, it's failing.
Now I’m messaging another developer: “Hey, I did everything in the README and it’s not working.”
We debug it together.
“Oh, you just have to copy `veryimportantfile.txt` to `very_veryimportantfile.txt`, then it'll work.”
Cool. Let's add the 2137th step to the setup instructions so the next developer won't make the same mistake…
In my opinion, every project should be bootstrapped with a single command. One. Not seven, not five, just one. It doesn’t matter if you use Docker, install local dependencies manually, or mix both. Starting the environment should be easy and predictable.
So instead of this:
```markdown
## Requirements
- docker
- php >= 8.0
...
```
You can simply make this:
```bash
#!/bin/bash
# ./launch.sh

if ! command -v brew &> /dev/null; then
  echo "❌ Homebrew is not installed. Install it from https://brew.sh/"
  exit 1
fi

if ! command -v docker &> /dev/null; then
  echo "⚙️ Installing Docker..."
  brew install --cask docker
fi

if ! command -v php &> /dev/null; then
  echo "🐘 Installing PHP..."
  brew install php@8.3
fi
```
Isn't that simpler? You don't need to update the docs, just update the script and that's it.
Next, you have to bootstrap your project. Do you need to copy `.env.dist`? Let's do it right in the script:
```bash
[ -f .env ] || cp .env.dist .env
```
You need to run yarn? That's the next line of your bash script:
```bash
yarn
```
You need to run docker compo…, and so on, until your project is up and running.
And one more thing — make your bootstrap script idempotent.
That means every operation should give the same result no matter how many times it runs. Installing dependencies? Only if they’re not already installed. Copying files? Only if they don’t exist.
It makes your script faster, safer, and a lot less likely to break because "something was already there".
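Putting the pieces of this section together, an idempotent launch.sh might look roughly like this (the exact commands, PHP version, and file names are illustrative; adjust them to your stack):

```bash
#!/bin/bash
set -e

# Tooling: install only what's missing.
command -v brew &> /dev/null || { echo "❌ Install Homebrew first: https://brew.sh/"; exit 1; }
command -v docker &> /dev/null || brew install --cask docker
command -v php &> /dev/null || brew install php@8.3

# Project files: copy only if they don't exist yet.
[ -f .env ] || cp .env.dist .env

# Dependencies: skip if they're already installed.
[ -d node_modules ] || yarn
[ -d vendor ] || composer install

# Containers: docker compose is already idempotent by itself.
docker compose up -d
```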
That rule allows you to keep your README very simple and clean, with only the most important things.
2. Build a CLI, but don't reinvent the wheel: use existing tools
All applications are different, and each requires different actions during development: running migrations, running tests, resetting the database, and so on. Frameworks already ship CLI commands for some of these actions, but there are always many more. To make life easier for developers, you can build a CLI specifically for your project that collects all the useful actions in one place. But how do you create it?
There are plenty of solid, battle-tested tools out there, like `Makefile`, `Taskfile`, or even plain shell scripts. They work cross-platform, they're easy to read, and they don't require extra effort to set up or maintain. In most cases, they're more than enough.
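If you go the plain-shell route, a minimal sketch could look like this (the `./task` name, the `app` service, and the individual commands are just placeholders for whatever your project actually needs):

```bash
#!/bin/bash
# ./task: a tiny project CLI, one place for every routine action.
set -e

case "$1" in
  up)      docker compose up -d ;;
  down)    docker compose down ;;
  test)    docker compose exec app ./vendor/bin/phpunit ;;
  migrate) docker compose exec app php bin/console doctrine:migrations:migrate ;;
  *)       echo "Usage: ./task {up|down|test|migrate}"; exit 1 ;;
esac
```

A Makefile or Taskfile gives you exactly the same single entry point, so pick whichever your team already knows and keep all dev actions behind it.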
Avoid building custom CLIs that need to be compiled or rely on specific runtimes. I've worked on projects that shipped tools like that, compiled for Linux, and they were hard to run on macOS or Windows. I had to start a Linux virtual machine just to run them. I can't imagine what I'll do if they stop working some day, because their source code was never shared when the application was transferred between software houses.
3. Make the developer experience friendly
If you want to let developers choose which services to run or ask them for input during setup, keep it simple. Don’t reinvent CLI menus in raw bash. It’s messy, unmaintainable, and hard to debug.
There are tools built exactly for that. One of the best is Gum, a small utility that helps you build terminal user interfaces with checkboxes, file pickers, prompts, and more.
It’s lightweight, works well across platforms, and integrates easily with your scripts. You only need to install it at the very beginning of your bash script.
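As a sketch, the top of a bootstrap script could install gum and then ask a question like this (the `./scripts/import-dump.sh` path is just a placeholder for whatever action you want to gate):

```bash
# Install gum once, at the very beginning of the script.
if ! command -v gum &> /dev/null; then
  brew install gum
fi

# A yes/no prompt instead of a hardcoded default.
if gum confirm "Import the latest database dump?"; then
  ./scripts/import-dump.sh
fi
```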
Use tools like this to keep your setup flexible and clean. Good DX doesn’t have to be complicated.
4. Let developers choose what services to run
Not every developer needs every service running all the time. A frontend dev might not care about your backend profiler or a local database admin tool. Forcing everyone to run the full stack wastes resources and slows things down.
If you are using Docker Compose, you can use a feature called profiles. Let's take this example:
```yaml
version: "3.9"

services:
  app:
    image: my-app
    build: .
    ports:
      - "8080:80"

  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"

  mailhog:
    image: mailhog/mailhog
    profiles:
      - optional
    ports:
      - "8025:8025"

  adminer:
    image: adminer
    profiles:
      - optional
    ports:
      - "8081:8080"

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - "3000:3000"
    profiles:
      - monitoring
```
Now you can add a `COMPOSE_PROFILES=` variable to your project's `.env` file. Docker Compose will read this value automatically and run only the services that match the specified profiles.
For example:
```
COMPOSE_PROFILES=optional,monitoring
```
This way, developers can easily control which parts of the stack get started.
You can even take it one step further and write a small bash script using `gum` to give developers a clean, interactive UI to select which profiles they want to launch. It makes setup feel more like an app than a checklist.
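A rough sketch of that idea, reusing the profile names from the compose file above (everything else is an assumption to adapt to your project):

```bash
#!/bin/bash
# Interactive profile picker: writes COMPOSE_PROFILES into .env, then starts the stack.

# Let the developer tick the profiles they need (space to select, enter to confirm).
selected=$(gum choose --no-limit "optional" "monitoring")

# gum prints one selection per line; Compose expects a comma-separated list.
profiles=$(echo "$selected" | paste -sd, -)

# Rewrite the COMPOSE_PROFILES line idempotently.
touch .env
grep -v '^COMPOSE_PROFILES=' .env > .env.tmp || true
echo "COMPOSE_PROFILES=$profiles" >> .env.tmp
mv .env.tmp .env

docker compose up -d
```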
5. Don’t make developers import the database data manually
If your project doesn't use a stack-specific tool to set up the database and instead relies on a database dump, don't expect developers to import it by hand. That should be part of the automated bootstrap process, just like everything else.
You can easily set this up in Docker. For example, if you're using MySQL or MariaDB, just mount a `.sql` dump into the container. MySQL will automatically import it on first run:
```yaml
services:
  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
```
That's it. The first time the container starts, it'll load the dump. No need to run `mysql -u root` by hand. No need to write it in the README. And no chance for someone to skip the step or break things.
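One thing worth knowing: the MySQL image only runs the files in /docker-entrypoint-initdb.d when the data directory is empty, so a newer dump won't be picked up on a normal restart. If you need a fresh import, one option (it throws away local data, which is usually acceptable in a dev environment) is:

```bash
# Remove the containers together with their volumes, then recreate the database from the dump.
docker compose down -v
docker compose up -d db
```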
Bonus: macOS and Docker Desktop limitations
On macOS, Docker runs inside a VM. Most developers use Docker Desktop, which works, but comes with two main issues:
- Licensing - it's free only for small teams. Larger companies need a paid plan (check Docker's terms).
- Manual configuration - Docker Desktop’s default memory settings are too low for many real-world projects, and increasing them requires opening the UI. You can’t configure it from the terminal, which makes automation impossible and leads to confusing, silent errors.
A better option is Colima. It replaces Docker Desktop, uses the exact same CLI (`docker`), and gives you full control via the terminal, so you can set it up in your launch script.
To start Colima with more RAM, just run:
```bash
colima start --memory 16
```
Now you're running Docker with 16GB of RAM, no licensing issues, and full automation support. One command and you're ready to go, clean and repeatable.
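To fold that into the launch script from section 1, a sketch could look like this (the memory and CPU numbers are just examples):

```bash
# Install Colima and the Docker CLI if they're missing.
if ! command -v colima &> /dev/null; then
  brew install colima docker
fi

# Start the VM only if it isn't running yet, so the script stays idempotent.
if ! colima status &> /dev/null; then
  colima start --memory 16 --cpu 4
fi
```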
Linux users don’t have this problem, as Docker runs natively there.
Summary
Local environments don’t have to be painful. With a few simple rules you can make setup smooth for everyone on the team.
Use existing tools, avoid overengineering, and document only what you can't automate. It’ll save hours on onboarding, reduce frustration, and make your project feel a lot more professional.
You can find more articles like this at mateuszcholewka.com. Got a question? Drop a comment below or reach out on LinkedIn.