Most engineers don’t start with Terraform provisioners.
They arrive there naturally.
You provision an EC2 instance.
You SSH into it.
You install what you need.
Then you think:
“Why not automate this part too?”
So you reach for provisioners.
- `remote-exec` to run commands
- `file` to copy scripts
- `local-exec` to glue workflows together
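In practice, that looks like provisioner blocks bolted onto a resource. A minimal sketch (the AMI ID, instance type, user, and key path are placeholders):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  # SSH details used by remote-exec and file provisioners
  connection {
    type        = "ssh"
    user        = "ec2-user"              # depends on the AMI
    private_key = file("~/.ssh/id_rsa")   # placeholder key path
    host        = self.public_ip
  }

  # Imperative step bolted onto a declarative resource
  provisioner "remote-exec" {
    inline = ["sudo yum install -y nginx"]
  }

  # Runs on your machine, not the instance
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> created_hosts.txt"
  }
}
```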
And for a moment — everything feels clean and automated.
Until it doesn’t.
The Moment Things Break
You update your script.
Run:

```bash
terraform apply
```
And… nothing happens.
No commands run.
No changes applied.
No errors.
Just silence.
This is the moment most people think something is broken.
But nothing is broken.
Terraform is doing exactly what it was designed to do.
The Core Misunderstanding
Terraform is declarative.
It cares about the state of infrastructure — not the steps to configure it.
Provisioners, on the other hand, are imperative.
They introduce instructions like:
- Run this command
- Copy this file
- Execute this script
That’s a completely different model.
What Provisioners Actually Are
Provisioners are not part of your normal workflow.
They are:
Lifecycle hooks that run once, during resource creation (or destruction).
That’s it.
They are not:
- Continuous configuration tools
- Script runners
- Update mechanisms
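The "creation (or destruction)" part is literal: a provisioner is tied to one lifecycle event and fires exactly once for it. A sketch (placeholder AMI):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  # Creation-time hook: runs once, when the resource is created
  provisioner "local-exec" {
    command = "echo 'created ${self.id}'"
  }

  # Destruction-time hook: runs once, when the resource is destroyed
  provisioner "local-exec" {
    when    = destroy
    command = "echo 'destroying instance'"
  }
}
```

Change the command afterwards, and neither hook fires again until the resource itself is recreated.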
The Three Types (Quick Context)
1. local-exec
Runs on your local machine.
Useful for:
- Logging
- Triggering external systems
- Quick integrations
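For example, notifying an external system once a resource exists. The webhook URL here is a made-up placeholder:

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "my-example-log-bucket" # placeholder name

  # Runs on the machine running `terraform apply`, not in AWS
  provisioner "local-exec" {
    command = "curl -s -X POST https://hooks.example.com/notify -d 'bucket=${self.id}'"
  }
}
```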
2. remote-exec
Runs on the instance via SSH.
Useful for:
- Bootstrapping
- Installing packages
- Initial setup
3. file
Copies files to the instance.
Usually paired with remote-exec.
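The usual pairing looks like this: `file` pushes a script up, `remote-exec` runs it. The AMI, user, key pair, and script path are assumptions:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  key_name      = "deployer"              # placeholder key pair

  connection {
    type        = "ssh"
    user        = "ubuntu"                    # depends on the AMI
    private_key = file("~/.ssh/deployer.pem") # placeholder key path
    host        = self.public_ip
  }

  # Copy the bootstrap script up...
  provisioner "file" {
    source      = "scripts/bootstrap.sh"
    destination = "/tmp/bootstrap.sh"
  }

  # ...then execute it over the same SSH connection
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/bootstrap.sh",
      "sudo /tmp/bootstrap.sh",
    ]
  }
}
```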
None of these are inherently bad.
The problem is how they’re used.
Why Senior Engineers Avoid Overusing Provisioners
It’s not about rules. It’s about experience.
1. They Don’t Rerun
Provisioners run only during creation.
If you change the script, Terraform won’t care.
To rerun them, you have to force the resource to be recreated:

```bash
terraform taint aws_instance.example
terraform apply
```

(On Terraform v0.15.2 and later, `terraform apply -replace="aws_instance.example"` is the recommended replacement for the deprecated `taint` command.)

You’re now destroying infrastructure just to rerun a script.
That’s friction — and a signal.
2. They Depend on SSH
remote-exec and file require connectivity.
That introduces:
- Network dependencies
- Timing issues
- Authentication complexity
At scale, this becomes fragile.
3. They Break the Declarative Model
Terraform is designed to describe what should exist.
Provisioners introduce how things should happen.
That shift seems small — but it compounds quickly.
4. They Don’t Scale Cleanly
What works for one instance:
- Doesn’t work the same for 10
- Or 100
- Or across environments
Provisioners don’t give you consistency guarantees.
So When Should You Use Them?
Provisioners are still useful — when used intentionally.
Good use cases:
- Quick bootstrapping in prototypes
- Small automation gaps
- One-time setup tasks
Not for:
- Full configuration management
- Ongoing system changes
- Production-critical workflows
Better Alternatives
Instead of pushing everything into Terraform:
- Use user_data / cloud-init for instance initialization
- Use Packer to bake images
- Use configuration management tools for system setup
- Use SSM for remote execution without SSH
Each tool has a clear responsibility.
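The same bootstrap from a `remote-exec` provisioner usually fits in `user_data` instead. The instance runs it itself on first boot via cloud-init, so Terraform never needs SSH access. A sketch (placeholder AMI, Debian/Ubuntu-style packages assumed):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  # cloud-init executes this on first boot; no connection block needed
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y nginx
    systemctl enable --now nginx
  EOF
}
```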
The Real Shift
The biggest lesson isn’t about provisioners.
It’s about thinking in layers.
Instead of asking:
“How do I make this run in Terraform?”
Ask:
“Where does this responsibility belong?”
- Infrastructure → Terraform
- Instance setup → cloud-init / images
- Configuration → dedicated tools
Final Thought
Provisioners are not the problem.
Misusing them is.
A senior engineer doesn’t avoid tools blindly —
they understand the boundaries where each tool is strongest.
And design systems that respect those boundaries.