What is this article about?
Well, let's start with one basic truth in every developer's life: databases are a pain to set up on your local machine.
That's why we have Docker, which lets us run a pre-configured database with any DBMS we want in a few minutes.
But still, at a certain point even a Docker container will consume enough resources to hinder our machine's performance.
Hence, the powerful combo of AWS EC2 + Docker + PostgreSQL
In this article, we'll see how to run a dockerized PostgreSQL database on an AWS EC2 instance and how to consume it from our project.
Step 1. Launching an AWS EC2 instance:
Select an Ubuntu instance that is included in the AWS Free Tier.
Then select the "Create new key pair" option to generate the SSH keys that allow you to connect to your EC2 instance.
N.B. You can adapt the Network Settings to limit access to your EC2 instance, or leave them as default.
Then you can simply launch the instance and you'll see it running in your EC2 dashboard.
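If you prefer the AWS CLI over the console, a roughly equivalent launch looks like the sketch below; the AMI ID and key pair name are placeholders you would replace with your own Ubuntu AMI and the key pair created above.
# Sketch: launch a free-tier Ubuntu instance via the AWS CLI (replace the placeholder AMI ID & key name)
aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx --instance-type t2.micro --key-name my-ec2-key --count 1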
Step 2. Connecting to your EC2 instance via SSH keys:
Now that our EC2 instance is up & running, we can simply connect to it by clicking "Connect" on the instance page.
You can open the "SSH client" tab to see & follow the steps to connect to your EC2 instance via the SSH keys you created & downloaded in Step 1.
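For reference, the connection boils down to restricting the key file's permissions and running ssh as the default ubuntu user; the key file name and IP below are placeholders for your own values.
# Make the private key readable only by you, then connect
chmod 400 your-key.pem
ssh -i your-key.pem ubuntu@<instance-public-ip>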
Congrats!! You're successfully connected to your instance via your terminal.
Step 3. Installing Docker in your EC2 instance:
We will install Docker in our EC2 instance using Docker's apt repository.
You can run the following commands in the terminal that's connected to your instance:
# Updating the apt package index
sudo apt-get update
# Installing the packages that allow apt to use the repository over HTTPS
sudo apt-get install ca-certificates curl gnupg lsb-release
# Add Docker’s official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Setting up the repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Updating the apt package index
sudo apt-get update
# Installing the latest version of Docker Engine, the CLI, containerd and the Compose plugin
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
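To check that the installation went through, you can run the classic hello-world image (it just prints a confirmation message and exits):
# Verify that the Docker Engine works
sudo docker run hello-world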
N.B. After setting up Docker, you'll have to run every Docker command as root (e.g., sudo docker ps).
If you don't want that, you can add your user to the docker group; you'll find the steps in the Docker documentation.
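For reference, the post-installation steps from the Docker docs boil down to the three commands below; you may need to log out & back in for the group change to take effect.
# Let your user run docker without sudo
sudo groupadd docker          # the group may already exist
sudo usermod -aG docker $USER
newgrp docker                 # apply the new group membership in the current shell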
Step 4. Running a PostgreSQL DB in a Docker container:
To run a PostgreSQL container in our EC2 instance, you can execute the following Docker command:
sudo docker run --name postgresql -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=post123 -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres:alpine
--name postgresql : name of our container
-e POSTGRES_USER=postgres : name of the user
-e POSTGRES_PASSWORD=post123 : password of the user
-p 5432:5432 : binds port 5432 of the EC2 instance to port 5432 of the container; our PostgreSQL DB is exposed on this port
-v /data:/var/lib/postgresql/data : volume binding to persist & save our data on the EC2 instance
-d : running in detached mode, so that our container keeps running in the background
postgres:alpine : the Postgres Docker image; we chose the Alpine variant due to its small size
After the command has finished executing, you can see your running container by running:
sudo docker ps
In my case, you can see that PostgreSQL is running on port 5432 of the EC2 instance. However, our PostgreSQL DB is still inaccessible to any external client: at this point it is only reachable from inside the instance itself.
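You can already confirm the database is alive from inside the instance, for example by opening psql through the container with the user we created:
# Open a psql session inside the running container
sudo docker exec -it postgresql psql -U postgres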
Which leads us to our next & final step.
Step 5. Exposing your DB server:
Now, in order to make our DB accessible to external clients, we need to change the security group of the EC2 instance. To do that:
- Select your EC2 instance & open the security group attached to it.
- Select the "Inbound rules" tab, then click "Edit inbound rules".
Now you need to add the following rule to your inbound rules.
- Protocol : TCP
- Port : 5432
- Source : 0.0.0.0/0
N.B. 0.0.0.0/0 allows any external client to access your DB; you can limit access by specifying exact IP addresses.
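If you'd rather script it, the same rule can be added with a single AWS CLI call; the security group ID below is a placeholder for your own.
# Allow inbound TCP traffic on port 5432 (restrict the CIDR if you can)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 5432 --cidr 0.0.0.0/0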
Once you've done that, well congrats, your PostgreSQL is fully accessible.
To access it, we will use the PostgreSQL terminal (psql):
psql -h public_ip_address -p 5432 -U postgres
- public_ip_address : you can find it in your EC2 dashboard.
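And to consume the database from your project, most clients & ORMs accept a standard PostgreSQL connection URL built from the same values (shown here with the example credentials from Step 4 and the default postgres database; change the password for anything real):
# Example connection URL for your application's configuration
postgresql://postgres:post123@<public_ip_address>:5432/postgres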
Thank you for reading this article, I hope it helped you.
If you have any questions, leave them in the comments or contact me directly at amedd.me
Top comments (3)
Nice try!! but, what about a docker-compose file??
Thanks! docker-compose for one Postgres DB? I haven't tried it for a single database, but I'll try it for running multiple databases in the EC2 instance.
Here is my docker-compose version github.com/raschmitt/dev-container...