Day 9 of my 30-Day Terraform Challenge focused on moving beyond basic Terraform modules into more practical, real-world infrastructure patterns.
Today’s learning centered on three key areas:
- Module gotchas
- Module versioning
- Reusing modules safely across multiple environments
This was one of the most useful Terraform days so far because it introduced concepts that are essential when working in real teams and production environments.
Why This Matters
Terraform modules make infrastructure reusable and easier to manage, but they can also introduce subtle bugs and inconsistencies if not designed carefully.
In real-world DevOps and cloud engineering work, infrastructure should be:
- Reusable
- Predictable
- Versioned
- Safe across environments
That is exactly what today’s work helped me understand.
Module Gotcha #1: File Paths Inside Modules
One of the most common mistakes when working with Terraform modules is referencing files using relative paths without considering where Terraform is being executed from.
This becomes a problem when a module depends on files such as startup scripts, templates, or configuration files. If the file path is not handled correctly, Terraform may fail to locate the file.
The safer and more reliable approach is to always reference files relative to the module itself rather than where Terraform is being run. This makes the module portable and easier to reuse.
This was a very useful reminder that modules should always be written with portability in mind.
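As a sketch of the fix (resource and variable names here are illustrative, not the module's actual code): a bare relative path breaks as soon as Terraform runs from a different directory, while `path.module` always resolves to the directory the module itself lives in:

```hcl
resource "aws_launch_template" "example" {
  name_prefix   = "webserver-"
  image_id      = var.ami_id        # illustrative variable
  instance_type = var.instance_type

  # Fragile: resolved relative to wherever "terraform apply" is run
  # user_data = filebase64("user-data.sh")

  # Portable: resolved relative to the module's own directory
  user_data = filebase64("${path.module}/user-data.sh")
}
```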
Module Gotcha #2: Inline Blocks vs Separate Resources
Another common issue comes from mixing inline resource configuration with separate resource definitions.
A classic example is security group rules: the `aws_security_group` resource lets you define ingress and egress rules inline, but the same rules can also be created as standalone `aws_security_group_rule` resources.
Mixing both approaches in the same module can lead to conflicts, confusing behavior, and harder debugging.
The better practice is to choose one style and remain consistent. Using separate resources tends to be more flexible and makes it easier to extend modules later without editing their internal structure too much.
This is a small design decision, but it has a big impact on maintainability.
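As an illustration of the "separate resources" style (names are hypothetical), the security group itself carries no inline rules, and each rule is its own resource that callers can safely add to later:

```hcl
resource "aws_security_group" "alb" {
  name = "webserver-cluster-alb"
  # No inline ingress/egress blocks here. Mixing inline blocks with
  # separate aws_security_group_rule resources makes Terraform fight
  # over ownership of the rule set.
}

resource "aws_security_group_rule" "allow_http_inbound" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.alb.id
}
```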
Module Gotcha #3: Module Output Dependencies
One subtle but important gotcha is depending on an entire module when only one specific output or resource is needed.
When Terraform treats a whole module as a dependency, it can cause more resources than necessary to be recreated or reevaluated. This makes plans noisier and can lead to avoidable infrastructure changes.
The better design approach is to expose specific outputs and structure modules so dependencies remain as granular as possible.
This is one of those issues that may not seem serious at first, but in larger projects it can create a lot of unnecessary complexity.
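A minimal sketch of what "granular outputs" means in practice (output and resource names are illustrative): the module exposes exactly the attribute another configuration needs, so callers can depend on one value instead of the entire module:

```hcl
# Inside the module: expose only what callers actually need.
output "alb_security_group_id" {
  description = "ID of the ALB security group, for attaching extra rules"
  value       = aws_security_group.alb.id
}
```

A caller can then reference `module.webserver_cluster.alb_security_group_id` directly, and Terraform tracks the dependency at that level of detail.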
Building a Reusable Module
For this challenge, I worked with a reusable Terraform module called webserver-cluster.
The purpose of this module is to provision a complete web server cluster setup in AWS, including the supporting infrastructure needed to serve traffic reliably.
The module provisions resources such as:
- A VPC lookup
- Subnets
- Security groups
- An Application Load Balancer
- A Launch Template
- An Auto Scaling Group
- EC2 instances configured to run a simple Python-based web server
It was exciting to see how much infrastructure could be abstracted into a reusable module that can be called from multiple environments.
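Calling the module from an environment looks roughly like this (the input variable names are assumptions about the interface, not the module's exact code):

```hcl
module "webserver_cluster" {
  source = "../../modules/services/webserver-cluster"

  cluster_name  = "webservers-dev"
  instance_type = "t2.micro"
  min_size      = 2
  max_size      = 4
}
```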
Versioning the Module
One of the most important parts of today’s challenge was learning how to version Terraform modules.
Instead of always referencing a module directly from a local folder or an unpinned GitHub repository, I learned how to use version tags to control exactly which module release an environment should use.
This is a very important practice because infrastructure code changes over time. Without versioning, an environment could suddenly behave differently just because the source module was updated.
To solve this, I created versioned releases for my module:
- v0.0.1 — the initial reusable module release
- v0.0.2 — an updated version with a new customizable input variable
This allowed me to safely introduce improvements while keeping full control over which environments used which version.
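Releases like these are typically created with annotated Git tags (`git tag -a v0.0.1 -m "initial release"` followed by `git push --follow-tags`), and an environment then pins to a tag through the `ref` query parameter in the module source. The repository path below is a placeholder:

```hcl
module "webserver_cluster" {
  # Pinned to an exact tagged release of the module repository
  source = "github.com/<your-account>/terraform-modules//webserver-cluster?ref=v0.0.1"
}
```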
What Changed Between Versions
The key improvement I introduced in the second version of the module was the ability to customize the text displayed by the web server.
This made the module more flexible and more useful across different environments.
For example, development environments can now display a custom message that clearly identifies them as development deployments, while production can remain more stable and unchanged.
This may seem like a small feature, but it demonstrates the real value of versioning: making controlled improvements without breaking existing environments.
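The new input might look something like this (variable name and default are my guess at the shape, not the module's exact code), with the value interpolated into the web server's user-data script:

```hcl
variable "server_text" {
  description = "Text for the web server to return in its HTTP response"
  type        = string
  default     = "Hello, World"
}
```

Because the variable has a default, existing callers keep working unchanged, which is what makes this a backward-compatible v0.0.2 rather than a breaking release.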
Multi-Environment Reuse
This was the part of the challenge that tied everything together.
I configured different environments to intentionally use different module versions:
- Development uses the newer module version
- Production stays pinned to the older stable version
This is a best practice because development is where changes should be tested first. Production should only move to a newer module version after that version has been validated.
This approach helps teams reduce risk, avoid unexpected infrastructure changes, and maintain confidence in production deployments.
It also reflects how real engineering teams manage infrastructure releases in a controlled way.
Why Version Pinning Matters
One of the biggest lessons from today is that not pinning module versions is risky.
If a shared module changes and environments are not pinned to a specific release, then different engineers may end up deploying different infrastructure without realizing it.
That can lead to:
- Inconsistent deployments
- Unexpected behavior
- Hard-to-debug infrastructure issues
- Production instability
Version pinning helps solve this by making infrastructure behavior consistent and predictable.
It gives teams control over when and how changes are introduced.
Challenges I Faced
One of the biggest practical challenges I encountered was accidentally committing Terraform-generated files into Git.
These included:
- `.terraform` directories
- `terraform.tfstate` files
- Provider binaries
This made my repository extremely large and caused push failures.
I fixed this by cleaning up the repository and adding a proper .gitignore file so that only the actual source files were tracked.
This turned out to be a valuable lesson on its own.
Infrastructure repositories should be clean, lightweight, and focused only on reusable source code—not generated artifacts.
Key Takeaways
Today’s challenge taught me several practical lessons:
- Modules should be written to be portable and reusable
- Small design choices inside modules can cause major issues later
- Versioning is essential for safe infrastructure reuse
- Development and production should not always move at the same speed
- Clean Git practices are just as important in Infrastructure as Code as they are in software development
These are the kinds of details that make the difference between writing Terraform that “works” and writing Terraform that is safe, maintainable, and team-ready.
Final Thoughts
Day 9 was one of the most practical and eye-opening parts of my Terraform challenge so far.
I moved beyond simply creating modules and started thinking more like a real infrastructure engineer:
- How do I make this reusable?
- How do I make this safe for teams?
- How do I control change across environments?
Those are the kinds of questions that matter in real production work.
I’m really enjoying how each day of this challenge builds not just technical skill, but also better engineering habits.
Connect With Me
I’m documenting my journey through the 30-Day Terraform Challenge as I continue learning more about:
- Terraform
- AWS
- Infrastructure as Code
- DevOps best practices
If you're also learning cloud or IaC, feel free to connect.