There's a point in every DevOps engineer's journey where things start to click. This week was that moment for me. Five assignments, one Azure subscription, and a whole lot of terminal output later — I now understand why teams reach for Ansible the moment they need to configure more than one server.
This post walks through everything I built this week: setting up a production-ready Ansible workstation, automating a fleet of 4 Azure VMs with ad-hoc commands, deploying a static website with a multi-play playbook, and finally deploying two applications using Terraform + Ansible together — including a production-grade role-based setup.
## Assignment 1: Building a Production-Ready Ansible Workstation
Before touching a single server, real teams standardise their local environment. That means isolated dependencies, consistent editor settings, and automated quality checks that run before every commit.
The first thing I did was create an isolated Python virtual environment:
```bash
python3 -m venv .venv && source .venv/bin/activate
pip install ansible ansible-lint yamllint pre-commit
```
Why a venv? Because installing Ansible globally with `sudo pip` is a trap. Different projects need different Ansible versions, and a global install means one project can silently break another. The venv keeps everything contained and reproducible — anyone can clone the repo, run `pip install -r requirements.txt`, and get the exact same setup.

I then configured VS Code with the Red Hat Ansible extension, set up an `ansible.cfg` with team-standard defaults, generated an ED25519 SSH key, and wired up pre-commit hooks to run `yamllint` automatically before every commit. From that point on, badly formatted YAML couldn't even make it into the repo.
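The pre-commit wiring itself is a single file at the repo root. A minimal sketch of what mine looked like (the `rev` pin is an assumption — use whatever version is current):

```yaml
# .pre-commit-config.yaml -- lints all YAML before every commit
repos:
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1   # assumed pin; check for the latest release
    hooks:
      - id: yamllint
```

After running `pre-commit install` once, the hook fires automatically on every `git commit` and blocks the commit if `yamllint` finds a problem.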
## Assignment 2: Fleet Automation with Ad-Hoc Commands
With the workstation ready, it was time to actually talk to some servers. I provisioned 4 Azure Ubuntu VMs with Terraform — all in a single `main.tf` using a `count` loop:
```hcl
variable "vm_roles" {
  default = ["web1", "web2", "app1", "db1"]
}
```
Each VM got its own public IP, network interface, and NSG association. The SSH key was injected at provisioning time via the `admin_ssh_key` block — no passwords, ever.
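For context, the `count`-driven VM resource looked roughly like this. This is a simplified sketch, not the exact code — the resource names, VM size, and image reference are assumptions:

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  count               = length(var.vm_roles)
  name                = "vm-${var.vm_roles[count.index]}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  size                = "Standard_B1s"
  admin_username      = "azureuser"

  network_interface_ids = [azurerm_network_interface.nic[count.index].id]

  # SSH key injected at provisioning time -- password auth stays disabled
  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_ed25519.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts-gen2"
    version   = "latest"
  }
}
```

One resource block, four VMs — add a role to the `vm_roles` list and the next `terraform apply` grows the fleet.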
After provisioning, I created a custom inventory with proper groups:
```ini
[web]
40.85.254.41
20.104.112.32

[app]
20.48.180.237

[db]
20.48.183.157
```
Then came the ad-hoc commands. This is where Ansible really shines for quick fleet operations:
```bash
# Ping all hosts
ansible all -i inventory.ini -m ping

# Check uptime across the fleet
ansible all -i inventory.ini -m command -a "uptime"

# Install nginx on web servers only
ansible web -i inventory.ini -m apt -a "update_cache=yes name=nginx state=present" --become
```
The `--become` flag is how Ansible escalates to root (via sudo by default) for privileged operations. Without it, the package install would fail with permission errors.
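Host patterns make ad-hoc targeting surprisingly flexible. A couple of variations I could have run against the same inventory (illustrative commands, not ones from the assignment):

```bash
# Union pattern: hit the web AND app groups in one command
ansible 'web:app' -i inventory.ini -m command -a "df -h /"

# Restrict a group to a single host with --limit
ansible web -i inventory.ini --limit 40.85.254.41 \
  -m service -a "name=nginx state=restarted" --become
```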
## Assignment 3: Multi-Play Playbook for Web Deployment
Ad-hoc commands are great for quick tasks, but anything repeatable belongs in a playbook. This assignment introduced multi-play structure — separating install, deploy, and verify into distinct plays.
```yaml
---
- name: Install and Configure Web Server
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

- name: Deploy Static Website Content
  hosts: web
  become: true
  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
  tasks:
    - name: Deploy index.html
      copy:
        src: files/index.html
        dest: /var/www/html/index.html
        owner: www-data
        mode: '0644'
      notify: reload nginx

- name: Verify Deployment
  hosts: localhost
  connection: local
  tasks:
    - name: Check HTTP 200
      uri:
        url: "http://{{ web_ip }}"
        status_code: 200
```
Why split into three plays? Because each play has a different responsibility. Play 1 handles infrastructure-level concerns (is the web server installed?). Play 2 handles application concerns (is the right content deployed?). Play 3 handles verification from the outside, the way a user would actually experience it.
The `copy` module pushes files from the controller to the remote hosts. The file lives on your machine; Ansible handles the transfer. No Git clones needed on the target servers.
## Assignment 4: Mini Finance Site with Terraform + Ansible
This assignment introduced the clean separation that production teams live by: Terraform provisions infrastructure, Ansible configures it.
Terraform created the full Azure stack — resource group, VNet, subnet, NSG with ports 22 and 80, public IP, and a single Ubuntu VM. The Terraform output exposed the public IP, which fed directly into the Ansible inventory.
One thing I learned here — the `git` module in Ansible holds the SSH connection open for the duration of the clone, which can time out on slow connections. My workaround was the `shell` module with a shallow clone:
```yaml
- name: Clone repo
  shell: git clone --depth 1 https://github.com/repo /var/www/html
  args:
    creates: /var/www/html/index.html
```
The `creates` argument makes this idempotent — if the file already exists, the task is skipped.
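Worth noting: the `git` module itself also accepts a `depth` parameter, so the shallow clone can be expressed natively too. A sketch of the alternative (whether it sidesteps the timeout will depend on your connection):

```yaml
- name: Clone repo (git module alternative)
  git:
    repo: https://github.com/repo
    dest: /var/www/html
    depth: 1
```

The trade-off: the `git` module tracks the checked-out revision and reports `changed` accurately, while the `shell` approach only knows whether the marker file exists.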
## Assignment 5: Production-Grade EpicBook with Ansible Roles
This was the most complex assignment — and the most realistic. Instead of piling tasks into a single playbook, everything was organised into roles:
```text
ansible/
├── roles/
│   ├── common/      # system updates, baseline packages, SSH hardening
│   ├── nginx/       # install, Jinja2 config template, site management
│   └── epicbook/    # app directory, repo clone, ownership, reload handler
├── group_vars/
│   └── web.yml      # shared variables across roles
└── site.yml         # role orchestration
```
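The `group_vars/web.yml` file is what lets the roles stay generic. A sketch of what it might contain — the variable names here are illustrative, not the exact ones from my repo:

```yaml
# group_vars/web.yml -- shared by the common, nginx, and epicbook roles
app_name: epicbook
doc_root: /var/www/epicbook       # consumed by the nginx template and the epicbook role
app_user: www-data
nginx_server_name: _              # catch-all; swap in a real hostname later
```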
The `site.yml` becomes beautifully simple:
```yaml
---
- name: Prepare system
  hosts: web
  become: true
  roles:
    - common

- name: Install Nginx
  hosts: web
  become: true
  roles:
    - nginx

- name: Deploy EpicBook
  hosts: web
  become: true
  roles:
    - epicbook
```
The nginx role used a Jinja2 template for the server block — meaning the document root path comes from a variable, not hardcoded into the config. Change the variable, re-run the playbook, and the config updates automatically.
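A server-block template along those lines might look like this — a sketch, with `doc_root` and `nginx_server_name` as assumed variable names rather than the actual ones from the role:

```jinja
# roles/nginx/templates/epicbook.conf.j2 -- document root comes from a variable
server {
    listen 80;
    server_name {{ nginx_server_name }};

    root {{ doc_root }};
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

The role renders it with the `template` module and notifies a reload handler, so nginx only reloads when the rendered config actually changes.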
The real proof of quality was the idempotency check. Running the playbook a second time returned mostly `ok` with zero failures and one intentional `skipped` for the clone task. That's the standard. A playbook that changes things on every run is a liability.
## Key Takeaways
Ansible's strength isn't just automation — it's automation you can reason about. Each task either changes something or it doesn't, and you can see exactly which at a glance. Roles take that further by making your automation reusable across projects. The common role I wrote this week could drop into any future project and just work.
The Terraform + Ansible combination is genuinely powerful. Terraform gives you consistent, reproducible infrastructure. Ansible gives you consistent, reproducible configuration. Together they cover the full lifecycle from "cloud resources don't exist" to "application is running and verified."
All code is available on GitHub: ansible-devops-week12