OK, before I begin, a lot of you may have a question in mind: why did I choose minikube instead of EKS, or just spin up the containers with Docker Compose and be done with it? Well, I thought the very same thing at the beginning. Hell, I even thought of just running my projects in plain containers on EC2 and hosting a static S3 website. Then questions came to my mind: what if a bug in the code (and there are bugs, since I just wanted to host the project and skipped code safety) crashes the app and stops the container? Then I'd have to manually SSH in to start it again, and I won't be home forever to do that. Sure, I could set up a periodic health check with a cron job and restart the container if it stops responding. That would take a bit of scripting, but what if I had something like Kubernetes, which just spins the container back up whenever it goes down? That's where I decided on minikube, and it would help me learn Kubernetes too.
The quest for running minikube successfully in EC2
In my home environment I had minikube installed and running, but I hit a snag: minikube tunnel is what makes the site accessible from other devices on my private network; without it, the app only responded on localhost:port. I thought of running minikube tunnel in the background, but what if it stops suddenly? I wanted something reliable.
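For context, this is roughly what I mean by running the tunnel in the background (a minimal sketch; the log path is just an example, not something from my setup):
# keep the tunnel alive after the shell exits and capture its output
nohup minikube tunnel > /tmp/minikube-tunnel.log 2>&1 &
# if this process dies, the site stops being reachable from other devices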
Then I started thinking about how the communication would flow if this were a full Kubernetes setup: Client --> EC2 ingress port --> ingress routes to service --> service routes to pods. With simple containers it's just Client --> EC2 port --> container port binding.
In the case of minikube with minikube tunnel active, I figured the flow is Client --> EC2 --> minikube LoadBalancer IP (the minikube ip) --> Ingress --> Service --> Pods.
Looking at this, I figured that what tunnel does is bridge the gap between the ingress (running on what I call the minikube IP) and localhost. Now that I had a good guess of how it works, I wanted to test it, and for that I needed a way to route traffic coming from outside to the minikube IP.
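A quick sanity check you can do on the EC2 box itself, before wiring up any proxy, is to hit the ingress directly at the minikube IP (standard minikube and curl commands, nothing project-specific):
# the internal IP of the minikube node (192.168.49.2 with the docker driver in my case)
minikube ip
# the ingress controller listens on port 80 of that IP, so this should return a response from the app
curl -I http://$(minikube ip)/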
While thinking about how to route that traffic, I remembered there is already a term for this: a reverse proxy. I immediately went for nginx and put this configuration in /etc/nginx/nginx.conf:
upstream backend_servers {
    #server 10.110.163.66:8001;
    #server 10.100.252.56:8002;
    server 192.168.49.2:80;
}
server {
    listen 80;
    location / {
        #proxy_pass http://192.168.49.2:80;
        #proxy_pass http://192.168.49.2:8080;
        #proxy_pass http://localhost:8000;
        #proxy_pass http://10.110.163.66:8001;
        #proxy_pass http://192.168.49.2:80;
        # load balancing method
        proxy_pass http://backend_servers;
    }
}
I won't remove the commented-out lines; I'm keeping it raw to show how I tested this by changing the service ports in the YAML files.
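Each time I changed a port, reloading nginx is just the usual two commands:
# check the config for syntax errors, then reload without dropping connections
sudo nginx -t
sudo systemctl reload nginx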
However, this very same configuration was not working once deployed to EC2. I tried to narrow the problem down using the same method of simulating the communication flow in my head and on paper, and even with AI. I came to the conclusion that the traffic routing was probably broken, since even curl localhost was not working. So I went on a quest to learn iptables, found an incredible article here on dev, and tried all sorts of stuff; even though I understood little, it felt like I was getting somewhere, except for the fact that I was going nowhere. Around that time I was also reading about VPCs, more specifically about NAT, and I got the idea that iptables/netfilter-type machinery is what NAT is built on, which is why NAT was once run on an EC2 instance before becoming a fully managed service. Anyway, coming back to the topic: I spent nearly two weeks on this, then decided to try another reverse proxy. I switched to apache2 and voilà, it worked instantly on EC2, so I settled on it.
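Before moving on to apache: for anyone who wants to poke at that same layer, this is roughly the kind of inspection I was doing while chasing the routing problem (read-only commands, no fix implied):
# what is actually listening on the host
sudo ss -tlnp
# NAT table, where docker and minikube add their port-forwarding rules
sudo iptables -t nat -L -n -v
# filter table, to see if the FORWARD chain is dropping traffic
sudo iptables -L -n -v
Anyway, here is the apache config that finally worked: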
/etc/apache2/sites-available/000-default.conf
<VirtualHost *:80>
    ProxyPreserveHost On
    # Reverse proxy to the minikube ingress (minikube ip, port 80)
    ProxyPass / http://192.168.49.2:80/
    ProxyPassReverse / http://192.168.49.2:80/
</VirtualHost>
Before this works, the proxy modules need to be enabled and apache2 restarted:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo systemctl restart apache2
Ingress snag
This one is small, but it took me two days to pin down. When I got my application running in minikube on EC2, I saw two different pods being served from the same CSS path, which caused confusion because I had made the HTML/CSS file structure nearly identical for both apps even though they run in isolated environments. The real problem was that the ingress was routing wrongly. After a day or two I found out that every path has to be explicitly written into the ingress, otherwise no connection is formed for it; the same thing happened to the /api path in my main application. A sketch of what I mean is below.
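Here is a minimal sketch of the idea (the namespace matches the one I create later, but the service names and ports are made up for illustration, not my real manifests): every path the app needs has to appear as its own rule.
# apply a throwaway Ingress that declares each path explicitly
kubectl apply -n sololeveling -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # the class used by the minikube ingress addon
  rules:
  - http:
      paths:
      - path: /                  # main site
        pathType: Prefix
        backend:
          service:
            name: portfolio-service   # hypothetical service name
            port:
              number: 80
      - path: /api               # without this entry, /api never reaches the pod
        pathType: Prefix
        backend:
          service:
            name: api-service         # hypothetical service name
            port:
              number: 8000
EOF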
Soo many times destroying and restarting EC2
I have lost count of how many times I messed things up and destroyed and recreated the EC2 instance. IaC is really something.
My main.tf
module "key_pair" {
source = "terraform-aws-modules/key-pair/aws"
key_name = "project-solo"
create_private_key = true
}
# store
resource "local_file" "private_key_pem" {
content = module.key_pair.private_key_pem
filename = "${path.module}/project-solo.pem"
file_permission = "0600"
}
# id would be aws_instance.my-first-terraform-ec2
resource "aws_instance" "my-first-terraform-ec2" {
ami = "ami-0131b8f4c937c332f" # debian arm64 ami
instance_type = "t4g.medium"
# instance_type = "t4g.small"
subnet_id = aws_default_subnet.default.id
vpc_security_group_ids = [aws_security_group.allow_tcp.id]
key_name = module.key_pair.key_pair_name
iam_instance_profile = "ec2_instance_profile"
user_data = base64encode(<<-EOF
#!/bin/bash
apt-get update
# install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# install minikube
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-arm64
sudo install minikube-linux-arm64 /usr/local/bin/minikube && rm minikube-linux-arm64
# install docker
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
# install docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
sudo apt install apache2 -y
sudo systemctl enable apache2
sudo systemctl start apache2
alias k="kubectl"
EOF
)
tags = {
Name = "Portfolio EC2"
}
# ebs root volume
root_block_device {
delete_on_termination = true
volume_size = 30
volume_type = "gp3"
}
}
# going to use the default vpc id
# The ID here becomes data.aws_vpc.default.id
data "aws_vpc" "default" {
  default = true
}

# default subnet in AZ1, which I chose for my EC2
resource "aws_default_subnet" "default" {
  availability_zone = local.aws_az["indiaAZ1"]
}

# create a security group in the default VPC
resource "aws_security_group" "allow_tcp" {
  name        = "allow_tcp"
  description = "Allow tcp inbound traffic"
  vpc_id      = data.aws_vpc.default.id

  tags = {
    Name = "allow_tcp"
  }
}

# ingress rule for tcp to port 22 from anywhere in the world for ssh
resource "aws_vpc_security_group_ingress_rule" "allow_tcp_22_ipv4" {
  security_group_id = aws_security_group.allow_tcp.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  ip_protocol       = "tcp"
  to_port           = 22
}

# ingress rule for tcp to port 80 from anywhere in the world for the reverse proxy
resource "aws_vpc_security_group_ingress_rule" "allow_tcp_80_ipv4" {
  security_group_id = aws_security_group.allow_tcp.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  ip_protocol       = "tcp"
  to_port           = 80
}

# an egress rule to allow all traffic from the VM out to the internet
resource "aws_vpc_security_group_egress_rule" "allow_all_traffic_ipv4" {
  security_group_id = aws_security_group.allow_tcp.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1" # semantically equivalent to all ports
}
Note: Ignore the VPC CIDR variables; they are left over from practicing VPC stuff with Terraform, and I reused the same vars file for my main.tf.
My vars.tf
variable "aws_regions" {
type = map(string)
default = {
india = "ap-south-1"
}
}
locals {
aws_az = {
indiaAZ1 = "${var.aws_regions["india"]}a"
indiaAZ2 = "${var.aws_regions["india"]}b"
indiaAZ3 = "${var.aws_regions["india"]}c"
}
}
# CIDR array for vpc to be created for individual use
variable "vpc_cidr_individual" {
type = map(string)
default = {
cidr1 = "10.1.0.0/16"
cidr0 = "10.0.0.0/16"
}
description = "CIDR blocks of the vpc for individual use"
}
# CIDR array for subnet
variable "vpc_cidr_subnet" {
type = map(string)
default = {
publicA = "10.0.0.0/24"
publicB = "10.0.1.0/24"
privateA = "10.0.16.0/20"
privateB = "10.0.32.0/20"
peerpublicA = "10.1.0.0/24"
}
}
# environment
variable "environment" {
default = "Shahin"
}
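With these two files, rebuilding the whole box is just the standard Terraform loop, which is what made all that destroying and recreating painless:
terraform init      # once, to pull the providers and the key-pair module
terraform apply     # create (or update) the EC2, security group and key pair
terraform destroy   # tear everything down when I break it beyond repair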
One more thing: when my EC2 starts up I still need to run sudo usermod -aG docker $USER && newgrp docker manually, because for some reason running it in the user data script didn't seem to work. If any of you know the reason, please let me know in the comments below, thank you.
After that I run minikube start --driver=docker && kubectl create namespace static && kubectl create namespace sololeveling && minikube addons enable ingress, put the apache2 config for the reverse proxy in place, and my EC2 is ready for GitHub Actions to SSH in, apply the YAML, and start the application. All that is left is to update the secrets in my Actions workflow with the newly generated .pem file.
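Conceptually, the deploy step the workflow performs is just an SSH plus kubectl apply; the host variable and manifest paths below are placeholders for illustration, not my actual workflow:
# 'admin' is typically the default user on Debian AMIs; EC2_IP, k8s/ and static/ are placeholders
ssh -i project-solo.pem admin@"$EC2_IP" \
  "kubectl apply -f k8s/ -n sololeveling && kubectl apply -f static/ -n static"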