THARUN REDDY R
Deploying a Multi-Tier Web App on AWS: A Journey Through Challenges and a Costly Lesson

As part of my journey to master cloud computing and DevOps, I recently completed an intermediate-level project: deploying a multi-tier web application on AWS. The goal was to build a secure, scalable architecture with a web server in a public subnet and a database server in a private subnet, using AWS services like EC2, VPC, and security groups. While the project was a success, it came with significant hurdles—both technical and financial. In this blog post, I’ll walk you through the project’s objectives, the numerous issues I faced, how I resolved them, and a costly lesson about AWS resource cleanup that left me with a $27 bill.

Project Objectives

The project had clear goals to test my AWS and Linux skills:

  • Launch EC2 Instances: Set up one web server and one database server using Amazon EC2.
  • Configure Nginx: Use Nginx as a reverse proxy on the web server to serve a PHP-based web application.
  • Automate with Ansible: Provision both servers using Ansible playbooks for consistency (though I ended up skipping this due to challenges).
  • Enhance Security: Create a Virtual Private Cloud (VPC) with public and private subnets, placing the database in the private subnet to protect it from direct internet access.

The final deliverable was a working web application accessible via the web server’s public IP, with the backend database securely tucked away in a private subnet.

Step-by-Step Implementation

Here’s how I built the project, broken down into key phases, along with the issues I faced and how I tackled them.

  1. Setting Up the VPC and Networking
To create a secure architecture, I started by configuring a custom VPC (multi-tier-vpc) with the following pieces (a rough AWS CLI sketch follows the list):
  • Public Subnet (10.0.1.0/24) for the web server.
  • Private Subnet (10.0.2.0/24) for the database server.
  • An Internet Gateway to allow the public subnet to access the internet.
  • Route Tables to manage traffic, with the public subnet routing to the Internet Gateway.
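
I did all of this in the AWS Console, but a roughly equivalent AWS CLI sequence would look something like this (every ID below is a placeholder, not one of my actual resource IDs):

# Sketch only: IDs like vpc-xxxx are placeholders returned by the earlier calls
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24   # private subnet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
aws ec2 create-route-table --vpc-id vpc-xxxx
aws ec2 create-route --route-table-id rtb-public --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-public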

Issue: The database server, in the private subnet, couldn’t access the internet to fetch package updates, causing errors like:

Could not retrieve mirrorlist https://amazonlinux-2-repos-eu-north-1.s3.dualstack.eu-north-1.amazonaws.com/... Timeout was reached

Solution: I added a NAT Gateway in the public subnet and updated the private subnet’s route table to route 0.0.0.0/0 traffic through it. This allowed the database server to download packages while remaining isolated from inbound traffic.
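
This was console work too, but the CLI version of the fix is roughly (placeholder IDs again):

aws ec2 allocate-address --domain vpc                # Elastic IP for the NAT Gateway
aws ec2 create-nat-gateway --subnet-id subnet-public --allocation-id eipalloc-xxxx
# Send the private subnet's outbound traffic through the NAT Gateway
aws ec2 create-route --route-table-id rtb-private --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxx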

  2. Launching EC2 Instances

I launched two EC2 instances using the Amazon Linux 2 AMI (a rough CLI equivalent follows the list):
  • Web Server: Placed in the public subnet (10.0.1.37, public IP: 51.20.138.105), with a security group (web-sg) allowing HTTP (port 80) and SSH (port 22).
  • Database Server: Placed in the private subnet (10.0.2.129), with a security group (db-sg) allowing MariaDB (port 3306) and SSH (port 22) from the web server’s security group.
  • Both used the same SSH key pair (multi-tier-key) for access.
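
A rough CLI sketch of that launch (placeholder IDs and AMI; I clicked through the console, so details like the instance type are illustrative):

# Security groups first; db-sg only trusts traffic originating from web-sg
aws ec2 create-security-group --group-name web-sg --description "web tier" --vpc-id vpc-xxxx
aws ec2 authorize-security-group-ingress --group-id sg-web --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-web --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 create-security-group --group-name db-sg --description "db tier" --vpc-id vpc-xxxx
aws ec2 authorize-security-group-ingress --group-id sg-db --protocol tcp --port 3306 --source-group sg-web
aws ec2 authorize-security-group-ingress --group-id sg-db --protocol tcp --port 22 --source-group sg-web
# Web server in the public subnet (with a public IP), database in the private subnet
aws ec2 run-instances --image-id ami-xxxx --instance-type t3.micro --key-name multi-tier-key --subnet-id subnet-public --security-group-ids sg-web --associate-public-ip-address
aws ec2 run-instances --image-id ami-xxxx --instance-type t3.micro --key-name multi-tier-key --subnet-id subnet-private --security-group-ids sg-db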

Issue: I couldn’t SSH into the database server from the web server, getting:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

Solution: The web server didn’t have the private key. I used scp to copy multi-tier-key.pem to the web server’s ~/.ssh/ directory and set permissions (chmod 400). I also verified the database server’s ~/.ssh/authorized_keys file contained the correct public key, using AWS Systems Manager Session Manager to access it initially.
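
Concretely, the copy looked something like this, run from my local machine:

scp -i multi-tier-key.pem multi-tier-key.pem ec2-user@51.20.138.105:~/.ssh/
ssh -i multi-tier-key.pem ec2-user@51.20.138.105 'chmod 400 ~/.ssh/multi-tier-key.pem'

In hindsight, SSH agent forwarding (ssh -A) would avoid copying the private key onto the web server at all.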

  3. Configuring the Database Server

I intended to install MySQL on the database server, but ran into an issue.

Issue: The command sudo yum install mysql-server -y failed with:
No package mysql-server available.

Solution: Amazon Linux 2 uses MariaDB as a drop-in replacement for MySQL. I installed mariadb-server instead:

sudo yum install mariadb-server -y
sudo systemctl start mariadb
sudo systemctl enable mariadb

I secured MariaDB with mysql_secure_installation, setting a root password and restricting root access to localhost for security.

Issue: The web server couldn’t connect to the database, with telnet 10.0.2.129 3306 showing:

Host '10.0.1.37' is not allowed to connect to this MariaDB server.

Solution: MariaDB’s user permissions didn’t allow connections from the web server’s IP. I created a user (web_user) with access from 10.0.1.37:

CREATE USER 'web_user'@'10.0.1.37' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON my_database.* TO 'web_user'@'10.0.1.37';
FLUSH PRIVILEGES;
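
To sanity-check the grant from the web server's side, something like this works (it assumes the MariaDB client package is installed on the web server):

sudo yum install mariadb -y        # client tools only, not the server
mysql -h 10.0.2.129 -u web_user -p -e "SELECT 1;" my_database
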
  4. Setting Up the Web Server

I installed Nginx and PHP on the web server to serve a simple PHP script (index.php) that connected to the database; a minimal sketch of such a script is below.
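
The script itself isn't reproduced in this post, but a minimal index.php consistent with the setup (and with the success message in the final test) might look like the following; the password is a placeholder, and the mysqli extension needs to be installed:

<?php
// Hypothetical minimal connectivity check; the host, user, and database
// names match the values used elsewhere in this post
$conn = new mysqli('10.0.2.129', 'web_user', 'your_password', 'my_database');
if ($conn->connect_error) {
    die('Connection failed: ' . $conn->connect_error);
}
echo 'Connected successfully to the database!';
$conn->close();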

Issue: Running telnet 10.0.2.129 3306 initially failed with:

-bash: telnet: command not found

Solution: I installed the telnet package:
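
sudo yum install telnet -y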

Issue: Accessing http://51.20.138.105/index.php resulted in a 502 Bad Gateway error.

Solution: This was a multi-faceted issue:

  • PHP-FPM Not Installed: The php-fpm service was missing, causing Nginx to fail when processing PHP files. I installed it:
sudo yum install php-fpm -y
sudo systemctl start php-fpm
sudo systemctl enable php-fpm
  • Nginx Configuration Error: The /etc/nginx/conf.d/php.conf file had a syntax error (10.0.2.129 localhost;) and a misplaced location block. I corrected it to:
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/www.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
  • Socket Mismatch: Nginx was configured for /var/run/php-fpm/php-fpm.sock, but PHP-FPM used /run/php-fpm/www.sock. I updated fastcgi_pass to match after checking /etc/php-fpm.d/www.conf.
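
A quick way to confirm which socket PHP-FPM is actually listening on before touching the Nginx config:

grep '^listen' /etc/php-fpm.d/www.conf      # e.g. listen = /run/php-fpm/www.sock
sudo systemctl restart php-fpm nginx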

Issue: The index.php file wasn't initially on the web server.

Solution: I used scp to upload it from my local machine to /usr/share/nginx/html/ and set permissions:

sudo chown nginx:nginx /usr/share/nginx/html/index.php
sudo chmod 644 /usr/share/nginx/html/index.php
  5. Final Testing

After resolving these issues, I tested the application:
  • Visited http://51.20.138.105/index.php and saw: "Connected successfully to the database!"
  • Verified SSH access to the database server via the web server:
ssh -i ~/.ssh/multi-tier-key.pem ec2-user@10.0.2.129
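
The same check from a terminal, for completeness:

curl http://51.20.138.105/index.php
# Expected: Connected successfully to the database!
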
  6. The Costly Mistake: Leaving Resources Running

After completing the project, I stopped my EC2 instances and left them idle for over two weeks, assuming that since I was using the AWS Free Tier, I wouldn’t incur charges. To my shock, I received a pending AWS bill for $27! It turned out that while the EC2 instances were stopped, other resources like the VPC, NAT Gateway, and associated components (e.g., Elastic IPs) were still active and accruing costs. NAT Gateways, in particular, are not covered by the Free Tier and can be surprisingly expensive.

Solution: I immediately terminated all resources (a rough CLI sketch of the teardown follows the list):

  • EC2 Instances: Deleted both the web and database servers via the EC2 Console.
  • VPC: Removed the multi-tier-vpc, along with subnets, route tables, NAT Gateway, and Internet Gateway.
  • Security Groups and Key Pairs: Ensured no lingering resources remained.
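
For reference, a rough CLI sketch of that teardown (placeholder IDs; the NAT Gateway and its Elastic IP are the hourly-billed pieces, and the VPC can't be deleted until its dependencies are gone):

aws ec2 delete-nat-gateway --nat-gateway-id nat-xxxx
# ...wait for the NAT Gateway to reach "deleted" before releasing its Elastic IP
aws ec2 release-address --allocation-id eipalloc-xxxx
aws ec2 terminate-instances --instance-ids i-web i-db
aws ec2 detach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
aws ec2 delete-internet-gateway --internet-gateway-id igw-xxxx
aws ec2 delete-subnet --subnet-id subnet-public
aws ec2 delete-subnet --subnet-id subnet-private
aws ec2 delete-vpc --vpc-id vpc-xxxx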

This experience taught me the critical importance of fully cleaning up all AWS services after a project is complete, not just stopping instances. I learned this lesson the hard way, but it’s one I won’t forget.

What I Missed

The original plan included using Ansible to automate server provisioning. However, due to the numerous issues (network timeouts, package errors, configuration mismatches), I opted for manual setup. While this meant more hands-on troubleshooting, it deepened my understanding of AWS and Linux. I plan to incorporate Ansible in my next project to streamline automation.

Lessons Learned

This project was a rollercoaster, but it taught me invaluable lessons:

  • AWS Networking is Critical: Understanding VPCs, subnets, NAT Gateways, and security groups is essential for secure cloud architectures.
  • Debugging is a Skill: Logs (/var/log/nginx/error.log, /var/log/php-fpm/error.log) were my best friends for diagnosing issues like socket mismatches.
  • Security Matters: Placing the database in a private subnet and restricting MariaDB access to specific IPs significantly enhanced security.
  • Clean Up Resources: Always terminate all AWS resources (not just EC2 instances) to avoid unexpected charges. My $27 bill was a painful but crucial lesson in cloud cost management.
  • Persistence Pays Off: Each issue felt daunting, but breaking them down and tackling them systematically led to success.

Next Steps

I’m now diving into an Advanced Project: Fully Automated AWS Infrastructure with Terraform & Ansible. This will involve:

  • Using Terraform to provision AWS resources (VPC, EC2, IAM roles).
  • Configuring servers with Ansible for a scalable web app.
  • Adding CloudWatch monitoring, S3 state storage, and unit tests.

The struggles from this project, especially the cost oversight, have prepared me to appreciate automation tools like Terraform and Ansible, which should reduce manual errors and streamline setup. I’ll also be vigilant about cleaning up resources to avoid another surprise bill!

Conclusion

Deploying a multi-tier web app on AWS was challenging but rewarding. From network timeouts to Nginx configuration woes and an unexpected $27 bill, I faced a steep learning curve but emerged with a functional application and a deeper understanding of cloud infrastructure. The financial lesson about resource cleanup was a hard one, but it underscored the importance of diligence in cloud management. If you’re embarking on a similar project, don’t be discouraged by setbacks—each issue is a chance to learn, and always double-check your AWS resources before walking away. Happy cloud computing!
