Nonso Echendu

Setting up a VPC Infrastructure For Jenkins, Artifactory, Sonarqube on AWS using Terraform

Hey builders! This article is closely related to a previous one, where I walked through setting up a VPC infrastructure on AWS using the AWS Console UI.

In this article, however, we'll be using Terraform, an IaC (Infrastructure as Code) tool, to automate the setup and configuration of the VPC infrastructure and instances.

Here's a link to this Terraform project's GitHub repo: https://github.com/NonsoEchendu/terraform-for-aws-instances

You see, as DevOps engineers, our job is to automate things and make them run faster and more seamlessly. That's what makes Terraform so efficient: with a single command we can create and configure the whole infrastructure, and just as easily delete it all at once.

Alright, let's dive in...

Prerequisites

Before we get into the Terraform scripts, here are some prerequisites:

  1. Install Terraform

  2. Install AWS CLI

  3. Set up the AWS CLI with an IAM user that has sufficient permissions (e.g., AdministratorAccess), using this command:

    aws configure
    


Then fill in your AWS credentials.
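
The command walks you through four prompts, roughly like this (values elided):

$ aws configure
AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ....................
Default region name [None]: us-east-1
Default output format [None]: json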

Objective

We want to create EC2 instances for Jenkins, Artifactory, and Sonarqube servers. All resources will live in one VPC: the Jenkins server will sit in a public subnet, while Artifactory and Sonarqube will both sit in a private subnet.

By placing Artifactory and Sonarqube in a private subnet, we are not exposing them directly to the public or the internet, thus reducing the risk of unauthorized access or attacks.

Repository Structure

(Image: screenshot of the repository structure.)

AWS Architecture Diagram

This is an architecture diagram of what we want to achieve.

(Image: AWS architecture diagram.)

Terraform Script

Now let's take a look at the Terraform scripts that'll be doing all the work for us.

1. The providers.tf file.

This script defines and configures the AWS provider for Terraform.

provider "aws" {
  region = "us-east-1"
}

The provider "aws" declares that terraform will interact with AWS services by using the AWS provider plugin. It allows Terraform to create and manage resources in AWS, such as EC2 instances, VPCs, etc.

The region = "us-east-1" specifies that all AWS resources in this Terraform configuration will be created in the us-east-1 region.

2. The variables.tf file.

variable "cidr" {
  default = "10.0.0.0/16"
}

This creates a variable named cidr with a default value of 10.0.0.0/16. We'll reference this variable in the main Terraform script.
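
As a side note (an optional refinement, not in the repo's variables.tf), you can make the variable self-documenting with a type and description:

variable "cidr" {
  description = "CIDR block for the main VPC"
  type        = string
  default     = "10.0.0.0/16"
}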

3. The main.tf file.

Now this is a lengthy script, 300+ lines, but we'll be taking it bit by bit.

Again, you can find the whole script in the project repo here.

  • Alright, let's start with the first resource, which creates a VPC.
resource "aws_vpc" "main_vpc" {
  cidr_block = var.cidr
  instance_tenancy = "default"
  tags = {
    Name = "javaVPC"
  }
}


We're giving it the name main_vpc, which is what we'll use to reference the VPC throughout the script when attaching subnets, instances, and the like.

cidr_block = var.cidr. We're assigning the value of cidr_block using the variable we defined earlier in the variables.tf file.

The CIDR block defines the range of private IP addresses available for use within the VPC. For a /16 block, you get 2^(32-16) = 65,536 IP addresses.

Then we give it a tag with the name javaVPC (the 'java' is because the script was originally written to deploy a Java app). This tag helps you identify the VPC in the AWS Console.

  • Next, let's create an internet gateway. This is an important component: it's what allows resources within our VPC, such as EC2 instances, to access the internet.
resource "aws_internet_gateway" "main_igw" {
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "javaVpcInternetGateway"
  }
}


We're giving it the name main_igw and attaching it to the VPC we created earlier.

Without an Internet Gateway, the VPC would remain isolated, and no outbound or inbound traffic to/from the internet would be possible.

  • Next, we'll be creating route tables - a public and a private one. First, the public route table.
resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main_igw.id
  }
  tags = {
    Name = "javaPublicRouteTable"
  }
}


So we're creating a public route table and attaching it to the VPC. There's no Terraform resource specifically for a "public" route table; it's how we configure the route table that makes it public or private.

The cidr_block is set to 0.0.0.0/0 meaning we're creating a route entry for all internet-bound traffic. Then we're also directing the traffic to the internet gateway we created earlier. With this, subnets associated with this route table can communicate with the internet.

  • Next, let's create a public subnet. This will host our Jenkins server and Bastion host (yeah :) I know we haven't mentioned the Bastion yet; we'll get to it), both of which need direct internet access.
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.main_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
  tags = {
    Name = "javaPublicSubnet"
  }
}


Again, we're attaching this subnet to our VPC (all resources created in this project are attached to this VPC).

For the cidr_block, the IP address range 10.0.1.0/24 allocates 256 IP addresses (10.0.1.0 to 10.0.1.255; AWS reserves 5 of these in every subnet).

Then we're placing this subnet in the us-east-1a availability zone. (A single AZ on its own doesn't give fault tolerance; for real high availability you'd spread subnets across multiple AZs.)

map_public_ip_on_launch set to true automatically assigns a public IP address to instances launched in this subnet (Jenkins, Bastion host), as both instances need direct internet access.

We still haven't associated this public subnet with the public route table we created earlier. There's a separate Terraform resource for that.

  • So we create an aws_route_table_association resource, which simply associates our public subnet with the public route table.
resource "aws_route_table_association" "public_rta" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_rt.id
}


Let's move to creating a private route table and subnet.

  • First, though, we need to create two things: an Elastic IP and a NAT gateway. Why? I'll explain.

For security reasons, we don't want our Artifactory and Sonarqube instances to have direct public internet access. But they still need to access the internet to download updates, plugins, or dependencies.

So we'll be creating an Elastic IP which we will assign to a NAT gateway.

The Elastic IP address is used specifically for the NAT Gateway, not for the private instances like Artifactory and Sonarqube.

The purpose of the Elastic IP for the NAT Gateway is to provide a static, public IP address that the NAT Gateway can use to enable internet access for the resources in the private subnet.

In fact, AWS requires a public NAT Gateway to be associated with an Elastic IP at creation. The Elastic IP gives the NAT Gateway a static public IP address that won't change over time, which is important for maintaining reliable internet connectivity for the resources in the private subnet.

The private instances, like Artifactory and Sonarqube, do not need public IP addresses assigned to them directly. They reside in the private subnet and will then access the internet through the NAT Gateway, using the Elastic IP.

The NAT gateway will then be placed in the public subnet which has access to the internet.

If this is a bit confusing, just take another look at the architecture diagram.

Here's the terraform configuration for this:

# Elastic IP for NAT Gateway
resource "aws_eip" "nat_eip" {
  domain = "vpc"
}
# NAT Gateway
resource "aws_nat_gateway" "main_nat" {
  allocation_id = aws_eip.nat_eip.id
  subnet_id     = aws_subnet.public_subnet.id
  tags = {
    Name = "NatGateway"
  }
}
  • Next, we can create our private route table and attach it to our VPC:
resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "javaPrivateRouteTable"
  }
}
  • Then, we need to create a route in the private route table that directs all internet-bound traffic through the NAT gateway.

Basically, this is to allow resources in the private subnet, such as the Artifactory and Sonarqube servers, to access the internet indirectly through the NAT Gateway, without having a direct public IP address.

The aws_route configuration:

resource "aws_route" "private_nat" {
  route_table_id         = aws_route_table.private_rt.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main_nat.id
}


destination_cidr_block = "0.0.0.0/0" sets the destination CIDR block for the route to 0.0.0.0/0, which represents all internet traffic (i.e., any destination outside the VPC).

  • Next, we create a private subnet and associate it with the private route table.
# Private Subnet
resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
  tags = {
    Name = "javaPrivateSubnet"
  }
}
# Private Route Table Association with Private Subnet
resource "aws_route_table_association" "private_rta" {
  subnet_id      = aws_subnet.private_subnet.id
  route_table_id = aws_route_table.private_rt.id
}

Now we're moving to creating the instances.

  1. We start with the Jenkins server
  • But first, let's create a Security Group for the Jenkins instance.
resource "aws_security_group" "jenkins_sg" {
  name   = "jenkins_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "jenkins_sg"
  }
}


Then, we're going to add inbound and outbound rules to this security group.

# Jenkins Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "jenkins_sg_inbound_rule1" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  ip_protocol       = "tcp"
  to_port           = 22
}
# Jenkins Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "jenkins_sg_inbound_rule2" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 8080
  ip_protocol       = "tcp"
  to_port           = 8080
}
# Jenkins Security Group Outbound Rule 1
resource "aws_vpc_security_group_egress_rule" "outbound_rule1" {
  security_group_id = aws_security_group.jenkins_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"
}


Basically, we're setting inbound rules to allow access to ports 22 (for SSH) and 8080 (Jenkins HTTP UI).

cidr_ipv4 is set to 0.0.0.0/0, meaning any IP address is allowed to access these ports.

  • Next, let's create the ec2 instance for Jenkins itself.
resource "aws_instance" "jenkins_server" {
  ami                    = "ami-04b4f1a9cf54c11d0"
  instance_type          = "t2.micro"
  key_name = "new-test-key-pair"
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]
  subnet_id              = aws_subnet.public_subnet.id
  user_data              = filebase64("./jenkins_user_data.sh")
  tags = {
    Name = "JenkinsServer"
  }
}


You can swap in an AMI ID of your choice. The one above is for Ubuntu 24.04 (x86_64).

The key_name argument expects an existing key pair. In this case, I had already created one named new-test-key-pair in my AWS Console.

We also attach this instance to the Jenkins security group we created earlier, and place it in the public subnet.

We're also making use of the user_data argument, which passes a script to the Jenkins EC2 instance at launch. The jenkins_user_data.sh script installs Docker and Docker Compose, then spins up a Jenkins container from a Jenkins Docker image.

To get the user-data scripts, check the project repository under the user-data-scripts directory.
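
For reference, here's a minimal sketch of what a script like jenkins_user_data.sh might look like. The real one is in the repo; this version assumes the Ubuntu AMI and the official jenkins/jenkins:lts image:

#!/bin/bash
# Install Docker (package names vary slightly across Ubuntu releases)
apt-get update -y
apt-get install -y docker.io docker-compose-v2
systemctl enable --now docker

# Run Jenkins in a container, persisting data in a named volume
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts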


And so we're done with the Jenkins instance.

Let's move to the next ec2 instances - Artifactory and Sonarqube.

But first, we have to create another instance.

2. The Bastion Host. The one mentioned earlier.

But why do we need to create this Bastion host?

Well, the Bastion host will serve as a secure gateway in the public subnet that provides SSH access to instances in private subnets. It will act as the single entry point to SSH into our private Artifactory and Sonarqube instances.

  • First, let's create the Bastion Security Group and add inbound and outbound rules:
resource "aws_security_group" "bastion_sg" {
  name   = "bastion_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "bastion_sg"
  }
}
# Bastion Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "bastion_sg_inbound_rule1" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  ip_protocol       = "tcp"
  to_port           = 22
}
# Bastion Security Group Outbound Rule 1
resource "aws_vpc_security_group_egress_rule" "bastion_outbound_rule1" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "tcp"
  from_port         = 22
  to_port           = 22
}
# Bastion Security Group HTTPS Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "bastion_outbound_https" {
  security_group_id = aws_security_group.bastion_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "tcp"
  from_port         = 443
  to_port           = 443
}


With the ingress or inbound rule, we're simply allowing access to port 22 (for SSH).

cidr_ipv4 is set to 0.0.0.0/0, meaning any IP address is allowed to access this port.

Note: Replace "0.0.0.0/0" with a more restrictive CIDR block that only includes trusted IP addresses. For example, your trusted IP range. Allowing 0.0.0.0/0 means any device connected to the internet can attempt to connect to the bastion host using SSH.

We also set some egress (outbound) rules. Together they allow the bastion to send outbound traffic on ports 22 (SSH) and 443 (HTTPS) to any IP address (0.0.0.0/0).

  • Next, we create the Bastion instance:
resource "aws_instance" "bastion_host" {
  ami                    = "ami-04b4f1a9cf54c11d0"
  instance_type          = "t2.micro"
  key_name               = "new-test-key-pair"
  vpc_security_group_ids = [aws_security_group.bastion_sg.id]
  subnet_id              = aws_subnet.public_subnet.id
  tags = {
    Name = "bastion_host"
  }
}


The setup is very similar to the Jenkins instance config, except that here we attach the Bastion Security Group instead.
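
Once everything is up, you'd SSH into a private instance through the bastion with something like the following (a sketch: it assumes the Ubuntu AMI's default ubuntu user, a local copy of the key pair, and placeholder IPs):

# Load the key into your local ssh-agent, then jump through the bastion
ssh-add new-test-key-pair.pem
ssh -J ubuntu@<bastion-public-ip> ubuntu@<artifactory-private-ip>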


3. Now we come to the Artifactory instance.

  • Let's create the security group with its inbound and outbound rules (the aws_security_group "artifactory_sg" resource itself is defined just like the Jenkins and Bastion ones; see the repo):
# Artifactory Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule1" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 22
  ip_protocol                  = "tcp"
  to_port                      = 22
  referenced_security_group_id = aws_security_group.bastion_sg.id
}
# Artifactory Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule2" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 8081
  ip_protocol                  = "tcp"
  to_port                      = 8081
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Artifactory Security Group Inbound Rule 3
resource "aws_vpc_security_group_ingress_rule" "artifactory_sg_inbound_rule3" {
  security_group_id            = aws_security_group.artifactory_sg.id
  from_port                    = 8082
  ip_protocol                  = "tcp"
  to_port                      = 8082
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Artifactory Security Group Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "artifactory_https_outbound_rule" {
  security_group_id = aws_security_group.artifactory_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}
resource "aws_vpc_security_group_egress_rule" "artifactory_http_outbound_rule" {
  security_group_id = aws_security_group.artifactory_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}


The inbound rules we added allow access to port 22 (SSH), port 8081 for the Artifactory UI, and port 8082 for repository-specific services.

There's one argument here, though, that we didn't use in the other security groups: referenced_security_group_id.

What does it do? Instead of allowing access from all IPs (cidr_ipv4), we're restricting SSH (port 22) access to the Bastion security group only, and access to ports 8081 and 8082 to the Jenkins Security Group only.

Meaning that only instances in the Bastion security group can SSH into the Artifactory instance, and only instances in the Jenkins SG can access ports 8081 and 8082.

For the Outbound rules:

Since we'll be installing Docker and updating packages on the Artifactory instance, we set outbound rules for TCP on port 443 (HTTPS) and port 80 (HTTP), allowing the instance to reach package repositories, Docker registries, and other services over HTTP/HTTPS.

  • Now we can create the Artifactory ec2 instance:
resource "aws_instance" "artifactory_server" {
  ami                         = "ami-04b4f1a9cf54c11d0"
  instance_type               = "t2.medium"
  key_name                    = "new-test-key-pair"
  vpc_security_group_ids      = [aws_security_group.artifactory_sg.id]
  subnet_id                   = aws_subnet.private_subnet.id
  associate_public_ip_address = false
  user_data                   = filebase64("user-data-scripts/artifactory_user_data.sh")
  tags = {
    Name = "ArtifactoryServer"
  }
  depends_on = [aws_nat_gateway.main_nat]
}


For this instance, we're still using the same AMI but a different instance type, t2.medium, as Artifactory needs more CPU and RAM.

We're also preventing a public IP address from being assigned to this instance, and placing it in the private subnet we created earlier.

Another important thing I noticed while testing this project: the Artifactory and Sonarqube instances are normally created and running before the NAT gateway becomes "available".

Remember, the NAT gateway is what enables instances in the private subnet to reach the internet. So if the instances boot before the NAT gateway is available, the user-data scripts fail when they try to update Linux packages and install tools like Docker and Docker Compose.

The solution? The depends_on argument. The Artifactory instance depends on the NAT gateway, so Terraform waits for the NAT gateway to be created and reach "available" status before creating the Artifactory instance.


4. Now we can move on to creating the Sonarqube instance and its security group.

  • The Sonarqube Security Group:
resource "aws_security_group" "sonarqube_sg" {
  name   = "sonarqube_sg"
  vpc_id = aws_vpc.main_vpc.id
  tags = {
    Name = "sonarqube_sg"
  }
}
# sonarqube Security Group Inbound Rule 1
resource "aws_vpc_security_group_ingress_rule" "sonarqube_sg_inbound_rule1" {
  security_group_id            = aws_security_group.sonarqube_sg.id
  from_port                    = 22
  ip_protocol                  = "tcp"
  to_port                      = 22
  referenced_security_group_id = aws_security_group.bastion_sg.id
}
# Sonarqube Security Group Inbound Rule 2
resource "aws_vpc_security_group_ingress_rule" "sonarqube_sg_inbound_rule2" {
  security_group_id            = aws_security_group.sonarqube_sg.id
  from_port                    = 9000
  ip_protocol                  = "tcp"
  to_port                      = 9000
  referenced_security_group_id = aws_security_group.jenkins_sg.id
}
# Sonarqube Security Group Outbound Rule 
resource "aws_vpc_security_group_egress_rule" "sonarqube_https_outbound_rule" {
  security_group_id = aws_security_group.sonarqube_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}
resource "aws_vpc_security_group_egress_rule" "sonarqube_http_outbound_rule" {
  security_group_id = aws_security_group.sonarqube_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}


Similar inbound and outbound rules to Artifactory's are used here.

  • And finally, the Sonarqube instance:
resource "aws_instance" "sonarqube_server" {
  ami                         = "ami-04b4f1a9cf54c11d0"
  instance_type               = "t2.medium"
  key_name                    = "new-test-key-pair"
  vpc_security_group_ids      = [aws_security_group.sonarqube_sg.id]
  subnet_id                   = aws_subnet.private_subnet.id
  associate_public_ip_address = false
  user_data                   = filebase64("user-data-scripts/sonarqube_user_data.sh")
  tags = {
    Name = "SonarqubeServer"
  }
  depends_on = [aws_nat_gateway.main_nat]
}
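
One optional addition before running anything (my own suggestion, not part of the walkthrough above) is an outputs.tf, so Terraform prints the addresses you'll need to connect:

output "jenkins_public_ip" {
  value = aws_instance.jenkins_server.public_ip
}

output "bastion_public_ip" {
  value = aws_instance.bastion_host.public_ip
}

output "artifactory_private_ip" {
  value = aws_instance.artifactory_server.private_ip
}

output "sonarqube_private_ip" {
  value = aws_instance.sonarqube_server.private_ip
}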

Running the Terraform Scripts

To run the Terraform configuration, we'll use these commands:

  1. Change to the project root directory, and run:
terraform init

This initializes the Terraform working directory and installs the required provider plugins (AWS, in this case).


  2. Next, run:

terraform plan

This will show you what Terraform will do when you run terraform apply. It also helps you catch syntax errors or config issues before any changes are made.


  3. Finally, run:

terraform apply

This executes the actions in the execution plan (e.g., creating, modifying, or destroying resources).

It will also prompt you to confirm whether Terraform should proceed; type yes to continue.
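
If you're scripting this and want to skip the interactive prompt, Terraform also supports an auto-approve flag (use with care):

terraform apply -auto-approve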

Conclusion


Voila! We've successfully used Terraform to set up an AWS infrastructure for Jenkins, Artifactory, and Sonarqube.

We hosted them all in one VPC, put Jenkins and the Bastion host in a public subnet, and Artifactory and Sonarqube in a separate private subnet.

You can log in to your AWS Console and confirm that all these resources were created and are working.

P.S. With just one command, terraform destroy, you can tear down all these resources at once.

If you've ever manually created AWS resources like the ones in this project, you know how tedious it can be. With Terraform, a few commands create or destroy everything in an instant.

I hope you enjoyed this article as much as I enjoyed writing it.

Please do like, share and leave your comments.

Till the next, happy building!
