Today we'll define and implement our daily development workflows. We'll begin by identifying what makes a workflow truly effective — drawing from cutting-edge research on software delivery practices. Then we'll establish really simple mechanisms for managing code changes so we can move quickly and accurately as a team. And importantly, these workflows must be flexible enough to scale as our team and product grow. Let's dive in!
A Reflection on Workflows
Let's begin with a simple question: What exactly is a "workflow"?
Most teams have some loose process — clone the repo, open a pull request, get it reviewed. That's a workflow, technically. But we can do better by going back to first principles.
Are some ways of working objectively better than others? According to the DevOps Research and Assessment (DORA) project — absolutely.
DORA has spent nearly a decade studying thousands of engineering teams, using rigorous, peer-reviewed methods. Their research doesn't just find patterns — it finds causal relationships. That means: improving your DORA capabilities is statistically likely to make your team more effective.
Two of the most powerful predictors of performance are:
- Minimal lead time — code should go from commit to production in under an hour.
- High deployment frequency — ideally, each commit is a deploy.
These aren't abstract ideals — they're backed by real-world data. And they tell us something important:
To improve how we build software, we must build **workflows that support continuous code flow** — fast, safe, and always moving forward.
Let's find out what that can actually look like.
ℹ️ BTW want to dive deeper into the research? Check out my intro to Accelerate and DORA metrics, or explore the full Software Delivery Performance Model. And if you haven't yet, I can't recommend their Accelerate book enough — it's essential reading.
Don't Use Branches
The DORA metrics force us to a key realization:
Branches introduce delays in continuously integrating and delivering code.
Here's why:
- Branches add time between writing code and running it in production. The fastest path is small changes pushed continuously to `main`.
- Branches often accumulate multiple commits, reducing deployment frequency. The highest-performing teams push small changes straight to `main`.
To some, that may sound unsafe. But what we're describing is a well-established practice: Trunk-Based Development.
ℹ️ BTW if that sounds unfamiliar or intimidating, I've written a Beginner's Intro to Trunk-Based Development. Also, the DORA research explores this topic in detail.
Still, we need to answer some important questions:
If we don't use branches, we're also not using pull requests. So how do we protect code quality? Where do tests run? What about linting, security checks, and other automations?
The answer: Shift the workflows left.
Move checks to the developer's machine. Run tests, linters, and verifications before pushing to `main`.
In other words, we need fast, local workflows that feel lightweight — but offer strong safety guarantees. And they need to be simple enough to evolve with the team.
So how do we do that? How do we make sure code is safe to ship without branches or PRs?
That's exactly what this article will explore. Let's get into it.
In Defense of Shell Scripting
Now that we've established the need for fast, reliable workflows, we need to dig into how we can safely and continuously pull and push code. Thanks to `pkgx` we can use any language to write our workflows in — so what should we choose?
I suggest we start with humble Shell Scripting. Here's why:
- Industry Standard – Shell is the default choice for scripting across virtually all engineering teams.
- Least Surprising – Most developers already know it, or can at least read it.
- Practical & Low-Maintenance – It's not the prettiest, but it gets the job done with minimal overhead.
- Portable & Consistent – With `pkgx`, we ensure everyone runs the same version of `bash`, so there are no "works on my machine" issues.
Keep in mind we're not here to build beautiful workflow code — we're here to ship products. Scripts can still be important, but they're not where we should do exciting new things that create exciting new problems. So it makes sense to choose the simplest, most boring tool for the job: shell scripts.
This mindset reflects a core principle for any product team: minimize complexity, stay pragmatic, and focus on what matters — shipping value, fast and safely.
Doctor
Let's kick off our workflows with a script to keep development environments healthy across the team by checking vital preconditions like whether the database is running and dependencies are installed.
ℹ️ BTW I like calling this script doctor because it verifies the health of our environment — but you can pick any name that fits your team.
Back in the "Setting Up an Elixir Dev Environment" article, we picked `pkgx` to manage our dev environment. So first, let's verify it's installed and active:
$ cat bin/doctor
#!/usr/bin/env bash
set -euo pipefail

source "$(dirname "${BASH_SOURCE[0]}")/.shhelpers"

check "pkgx installed?" \
  "which pkgx" \
  "brew install pkgxdev/made/pkgx"

check "Developer environment active?" \
  "which erl && which elixir" \
  "dev"
ℹ️ BTW this sources `.shhelpers` for utility functions such as `check`. The full `.shhelpers` file is available here.
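To make the pattern concrete, here's a minimal sketch of what a `check`-style helper could look like. This is my illustrative assumption, not the actual linked implementation:

# Hypothetical sketch of the check helper — see the linked .shhelpers for the real one.
# Usage: check "Question?" "command to test" "suggested remedy"
check() {
  local name="$1" cmd="$2" remedy="$3"
  printf '• %s ' "$name"
  if eval "$cmd" > /dev/null 2>&1; then
    echo "✓"
  else
    echo "x"
    echo "  > Executed: $cmd"
    echo "  Suggested remedy: $remedy"
    if command -v pbcopy > /dev/null 2>&1; then  # clipboard copy is macOS-specific
      printf '%s' "$remedy" | pbcopy
      echo "  (Copied to clipboard)"
    fi
    exit 1
  fi
}

The key idea is that every check pairs a test with its remedy, so a failure is always actionable.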
Running `doctor` now gives us a clean bill of health:
$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓
Next, let's build toward starting our Phoenix app (chosen back in "Building a Basic Elixir Web App"). First up: Make sure the local database is running:
$ git-nice-diff -U1 .
/bin/doctor
@@ -12 +12,5 @@ check "Developer environment active?" \
"dev"
+
+check "PostgreSQL server running?" \
+ "pgrep -f bin/postgres" \
+ "bin/db start"
ℹ️ BTW this references `bin/db`, a small script to easily start and stop the database. You can find it here.
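For a sense of what such a script might contain, here's a rough sketch assuming PostgreSQL's standard `initdb` and `pg_ctl` tools, a data directory under `priv/db`, and a `step` helper from `.shhelpers` (a companion to `check` that runs a command and reports the result). The linked version is the authoritative one:

#!/usr/bin/env bash
# Hypothetical sketch of bin/db — the linked script is the real one.
set -euo pipefail
source "$(dirname "$0")/.shhelpers"

PGDATA="$(dirname "$0")/../priv/db"

case "${1:-}" in
  start)
    if [ ! -d "$PGDATA" ]; then
      step "Creating $PGDATA" "mkdir -p '$PGDATA'"
      step "Initializing database" "initdb -D '$PGDATA'"
    fi
    step --with-output "Database started" "pg_ctl -D '$PGDATA' -l '$PGDATA/server.log' start"
    ;;
  stop)
    step "Database stopped" "pg_ctl -D '$PGDATA' stop"
    ;;
  *)
    echo "Usage: bin/db {start|stop}" >&2
    exit 1
    ;;
esac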
If the database isn't running, `doctor` now fails and suggests a fix:
$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? x
> Executed: pgrep -f bin/postgres
Suggested remedy: bin/db start
(Copied to clipboard)
And running the suggestion works:
$ bin/db start
• Creating /Users/cloud/perfect-elixir/priv/db ✓
• Initializing database ✓
• Database started:
waiting for server to start.... done
server started
↳ Database started ✓
$ bin/doctor
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
This is the doctor pattern: check for issues, suggest a fix. It's easy to understand, easy to extend.
Let's jump ahead to a more complete version — with checks covering everything needed to start the app:
$ bin/doctor
Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓
✓ All checks passed, system is healthy
ℹ️ BTW the full `doctor` script is here.
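To give a flavor of those additional checks, they might look something like the following — hypothetical examples on my part (the `my_app_dev` database name is an assumption); the linked script is authoritative:

# Hypothetical examples of extra doctor checks — see the linked script for the real ones.
check "Hex package manager installed?" \
  "mix hex.info" \
  "mix local.hex --force"

check "PostgreSQL database exists?" \
  "psql -lqt | grep -q my_app_dev" \
  "mix ecto.create"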
And since everything is passing, we can launch our app:
$ iex -S mix phx.server
[info] Running MyAppWeb.Endpoint with Bandit 1.4.2 at 127.0.0.1:4000 (http)
…
Done in 260ms.
iex(1)>
That's it — `bin/doctor` now ensures all the critical preconditions are met before development begins. It's simple, extendable, and safe.
But… how do we make sure developers remember to run it? Let's cover that next.
Update
Now let's create a script to get the latest code — a replacement for `git pull` that also ensures any new code is properly applied.
We'll start simple: check we're on `main` and pull the latest code:
$ cat bin/update
#!/usr/bin/env bash
set -euo pipefail
source "$(dirname "$0")/.shhelpers"

check "Branch is main?" \
  "[ \"$(git rev-parse --abbrev-ref HEAD)\" = \"main\" ]" \
  "git checkout 'main'"
step "Pulling latest code" "git pull origin 'main' --rebase"
$ bin/update
• Branch is main? ✓
• Pulling latest code ✓
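Here `step` is the other core `.shhelpers` utility: it announces a task, runs it, and reports the outcome. A minimal sketch of how it might work — again hypothetical; the linked `.shhelpers` has the real implementation:

# Hypothetical sketch of the step helper — see the linked .shhelpers.
# Usage: step [--with-output] "Description" "command"
step() {
  local show_output=false
  if [ "$1" = "--with-output" ]; then show_output=true; shift; fi
  local name="$1" cmd="$2"
  if $show_output; then
    echo "• $name:"
    eval "$cmd"                    # stream the command's output
    echo "↳ $name ✓"
  else
    printf '• %s ' "$name"
    eval "$cmd" > /dev/null 2>&1   # run quietly
    echo "✓"
  fi
  # Under the caller's `set -e`, a failing command aborts the whole script,
  # so no later step runs unless the previous one passed.
}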
After pulling, we need to handle any follow-up steps — like updating dependencies if `mix.exs` changed and ensuring the local development environment remains valid. So let's extend the script:
$ git-nice-diff -U1 .
/bin/update
@@ -8 +8,4 @@ check "Branch is main?" \
step "Pulling latest code" "git pull origin 'main' --rebase"
+step "Getting dependencies" "mix deps.get"
+step "Compiling dependencies" "mix deps.compile"
+"$(dirname "$0")/doctor"
Now, running `bin/update` does more than pull code — it sets up our environment and ensures it's fully healthy, so we're ready to keep working:
$ bin/update
• Branch is main? ✓
• Pulling latest code ✓
• Getting dependencies ✓
• Compiling dependencies ✓
Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓
✓ All checks passed, system is healthy
This is where our scripts begin to interlock. `bin/update` becomes our go-to command to get the latest code and ensure our environment is in sync, fully replacing `git pull` — a small habit shift that quickly becomes second nature.
ℹ️ BTW `bin/update` is also where we should apply database migrations, but since we don't have those yet, that step doesn't exist for now.
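When migrations do arrive, that step could be as small as one more line at the end of `bin/update` — a sketch, assuming standard Ecto:

step "Running database migrations" "mix ecto.migrate"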
Shipit
Our final workflow script, `shipit`, is the cornerstone of our Continuous Integration and Delivery (CI/CD) process. It replaces `git push`, pushing code only after tests and quality checks have verified it's in a shippable state.
Let's take a look:
$ cat bin/shipit
#!/usr/bin/env bash
set -euo pipefail
source "$(dirname "$0")/.shhelpers"
"$(dirname "$0")/update"
step --with-output "Running tests" "mix test"
check "Files formatted?" "mix format --check-formatted" "mix format"
step "Pushing changes to main" "git push origin \"main\""
cecho "\n" -bB --green "✓ Shipped! 🚢💨"
`shipit` calls `update` first, to ensure we're testing against the latest version of `main`. That's how we continuously integrate our changes with the rest of the team's work.
When we run `shipit`, here's what it looks like in action:
$ bin/shipit
• Branch is main? ✓
• Pulling latest code ✓
• Getting dependencies ✓
• Compiling dependencies ✓
Running doctor checks…
• pkgx installed? ✓
• Developer environment active? ✓
• PostgreSQL server running? ✓
• PostgreSQL server has required user? ✓
• Hex package manager installed? ✓
• Mix dependencies installed & compiled? ✓
• PostgreSQL database exists? ✓
✓ All checks passed, system is healthy
• Running tests:
.....
Finished in 0.07 seconds (0.03s async, 0.04s sync)
5 tests, 0 failures
Randomized with seed 579539
↳ Running tests ✓
• Files formatted? ✓
• Pushing changes to main ✓
✓ Shipped! 🚢💨
This gives us a simple but powerful daily rhythm: run `bin/update` when starting the day, and `bin/shipit` whenever a commit is ready. A lightweight but robust CI/CD flow that minimizes delays and maximizes confidence.
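In practice, a working session boils down to something like this (illustrative commands, with a made-up commit message):

$ bin/update                           # start of day: sync code and environment
$ git commit -am "Tweak signup copy"   # ...after some focused work
$ bin/shipit                           # verify and push straight to main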
ℹ️ BTW like the other scripts, this `shipit` is intentionally basic, because this article is about paving a direction for our workflows. Initial simplicity is a good thing, as it makes it easier to build trust and encourage team-wide adoption and iteration.
As the project matures, `shipit` can evolve to include additional quality gates — like linters, security checks, and even performance testing. But the most important thing for now is: build the habit of shipping frequently. It's how we learn fast and deliver real value.
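For instance, if the team adopts the Credo linter (an assumption — it's a separate dependency we haven't added), wiring it in is a one-liner in `bin/shipit`:

check "Credo clean?" "mix credo --strict" "mix credo"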
Continuous Code Reviewing
We've established simple but powerful local workflows:
`bin/update` to continuously integrate, and `bin/shipit` to continuously push — effectively replacing `git pull` and `git push`.
But by eliminating branches, we've also removed pull requests. So, what about code reviews? Our scripts automate local flow, but what replaces the second set of eyes?
The answer: Code reviewing must also be continuous.
That may sound radical, but it's backed by research: Asynchronous reviews add delay — often hours or days — as code sits waiting for attention. In a fast-moving team, that latency is a dealbreaker. Instead, we need reviewing to happen immediately.
This shift requires a cultural change:
- When a commit is ready, review it right away.
- Don't move on — wait for it to ship.
- Remember: Code only delivers value in production.
How?
- Call a colleague over and review it together.
- Or better yet, write the code together from the start.
The goal is a fast, safe, collaborative flow to production — and frequent, small commits are key. When your team ships dozens of changes per hour, that's true continuous integration and delivery 🤩
ℹ️ BTW there are ways of working like this that are as old as programming itself. Pair programming and team programming (or "mobbing") are examples that naturally support continuous reviewing. While poor pairing can be draining, great pairing is joyful and productive 😊.
Conclusion
We began by outlining the principles behind fast, low-friction workflows, and landed on something deceptively simple: a few composable shell scripts that help us pull and push changes quickly, safely, and together.
By stripping away latency — like branches and pull requests — we've created a workflow optimized for high-trust teams that value speed, clarity, and continuous improvement. The simplicity of the scripts is not a limitation; it's an invitation for the team to evolve them as it grows.
And yes — it's a little ironic that in an article series about Perfect Elixir, we've barely touched Elixir. But that's the point: perfect Elixir isn't just about Elixir. It's about designing the environment that lets great Elixir happen. The workflows we've explored today are broadly applicable, grounded in research, and shaped for the realities of modern, high-velocity product teams.