
Chris Hunt

Rancher Personal Server Setup

Introduction

By profession, I'm a Software Engineer. Like many others in the same line of work, I've accumulated a few low traffic sites and apps which I've put together for friends and myself. The problem isn't specifically the sites and apps, it's the hosting. I want a simple and cheap hosting solution that I don't have to worry about; something that's easily maintainable and upgradable but also easy to use.

A few years ago, I discovered Rancher, a Docker orchestration stack. It used its own orchestration engine, called Cattle, topped with a user friendly UI that allowed me to host my sites and apps.

Rancher v1 on an EC2 instance served me well for nearly three years, but over that time my server accumulated a number of little hacks and quirks around routing and certificates which were not easily replicable. Rancher v1 has also not been maintained since v2 was released.

It was time to build a new server. I'm not a sysadmin or a network guy so I really want a simple solution that just works.

Requirements

My requirements haven't really changed from my previous setup with Rancher v1. Let's look at those requirements:

  • A cheap, cloud based server
  • A Docker orchestration system with project separation
  • An administration UI
  • Access to ECR
  • Ability to host under 10 low traffic websites and a couple of long running Node apps
  • A way of distributing HTTP requests to containers
  • Certificate generation and management
  • Ability to scale if required

Step up, Rancher v2

The documentation for Rancher v2 promised to solve the routing and certificate hacks with out of the box functionality. Rancher v2 also uses Kubernetes as its orchestration engine, which is better documented than Cattle was. This seemed a good place to start, but the migration docs (https://rancher.com/docs/rancher/v2.x/en/v1.6-migration/) seemed quite a faff. I decided to start from a clean install.

Spoiler alert: Rancher v2 lived up to the billing and gave me exactly what I required. However, there were a few setup hoops to jump through to get there - hence this article, which I hope may help others.

Server setup

I started by setting up a t2.medium EC2 instance using Amazon Linux 2 AMI with 20GB EBS storage and an Elastic IP along with my key pair.
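For reference, the same instance can be launched from the AWS CLI. This is just a sketch: the AMI ID, key pair name and security group name below are placeholders (look up the current Amazon Linux 2 AMI ID for your region first).

# Placeholder values throughout: swap in your region's Amazon Linux 2
# AMI ID, your key pair name and your security group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.medium \
  --key-name my-key-pair \
  --security-groups rancher-sg \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":20}}]'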

Ensure that the Security Group of the instance allows inbound traffic on ports 80, 443, 8080 and 8443. This will allow requests both to the Rancher UI (via 8080 and 8443) and to our hosted sites (via 80 and 443).
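If you prefer the CLI here, the same rules can be added with something like the following sketch (the security group ID is a placeholder):

# Open the four ports to the world; sg-0123456789abcdef0 is a placeholder
for port in 80 443 8080 8443; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done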

Use a suitable key pair to secure your access. This is beyond the scope of this article but full details can be found at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

The next thing is to ensure our server has a fixed IP. By default, an IP is allocated when the server boots up, which means that when we restart our server, it may fire up with a different IP. From the AWS console, in the EC2 Dashboard, select Elastic IPs from the left hand menu. We can allocate a new IP and then associate it with our instance. It's important that we release IP addresses which are not in use, as we are charged for IPs that we have allocated but not associated with an instance. We do not pay for IPs which are associated with an instance.
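The CLI equivalent looks roughly like this (the instance and allocation IDs are placeholders; the real allocation ID is printed by the first command):

# Allocate an Elastic IP, then associate it with our instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0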

Once it's fired up, we can access the instance using our key through the terminal:

ssh -i ~/.ssh/key.pem ec2-user@52.16.31.9

and you're greeted with the following prompt.
Our server

Start by updating the software on the server:

sudo yum update

This may take a minute or so to get everything up to date.

The only software required on the server to run Rancher is Docker. This is a simple case of following the tutorial at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker.
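For reference, on Amazon Linux 2 that tutorial boils down to the following (log out and back in afterwards so the group change takes effect):

# Install Docker, start the daemon and let ec2-user run it without sudo
sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user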

One additional step is to ensure that Docker starts when the server boots using this command...

sudo systemctl enable docker

So far, so good!

Install and access Rancher

Installing Rancher server and SSL certificate

Next we install Rancher and access the UI. Hoorah for online docs. Rancher's single node installation guide covered everything I needed to know - https://rancher.com/docs/rancher/v2.x/en/installation/single-node/

I wanted the Rancher data to be persisted so that, if my container ever had issues, I could rescue that data.

I also wanted to use Let's Encrypt to provide a certificate for my Rancher UI access.

In order to get the certificate, we need to start the server container bound to ports 80 and 443:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest --acme-domain mydomain.com

The container will now fire up and grab an SSL certificate from Let's Encrypt. You should now be able to connect to your server at https://mydomain.com (or whatever domain you've pointed at your IP). You'll be asked to set a password for your admin account and to confirm the URL your Rancher UI is going to run on. Amend that URL to include the port, i.e. https://mydomain.com:8443.

As I was also going to be running the agent on the same node (server) as the Rancher server, I needed to bind the container's ports 80 and 443 to different host ports (8080 and 8443 respectively).

This means we have to do a little juggling with our containers. We can do this as we've already got our SSL certificate. We need to remove our current container and then fire it up on the new ports.

To do this, list the Docker containers with docker ps and then remove the running container with docker rm -fv 637 where 637 is the first few characters of the container ID. See example below.

[ec2-user@ip-172-31-12-141 /]$ docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                      NAMES
6372f4c2ae95        rancher/rancher:latest   "entrypoint.sh --acm…"   18 minutes ago      Up 18 minutes       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   clever_pike
[ec2-user@ip-172-31-12-141 /]$ docker rm -fv 637
637

Fire it up again with the new port mappings:

docker run -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest --acme-domain mydomain.com

Because we've stored our data on a host volume, all of our settings and our certificate have been persisted.

Within a few seconds, I could hit https://mydomain.com:8443 in my browser and Rancher was up and running.
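If you'd rather check from the terminal, Rancher exposes a simple health check endpoint; assuming your DNS points at the server, something like this should answer:

curl -sk https://mydomain.com:8443/ping
# should print: pong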

Adding a Cluster

We have Rancher Server set up. We now need a cluster (which, as we've previously discussed, in this case will be a single node) to run our applications on.

From the main menu, click on Clusters > Add Cluster.

Rancher offers a lot of options to add a cluster from different providers. It will provision the resources for you. We're going to add a cluster from an existing node (server).

Add Cluster options

Adding a cluster has a lot of options, but we'll concentrate on the basics to get up and running. The most basic is to just give the cluster a name. Click Next and we're presented with a Docker command.

We want our agent running with all the roles: etcd, Control Plane and Worker. Check all these boxes.

sudo docker run -d --privileged --restart=unless-stopped \
   --net=host -v /etc/kubernetes:/etc/kubernetes \
   -v /var/run:/var/run rancher/rancher-agent:v2.3.0 \
   --server https://mydomain.com:8443 \
   --token cp6jcp9lcvw8b279brstp92bvfkg8xgv8b6dkkp9xz7n6ktxqsctzq \
   --etcd --controlplane --worker

Copy and paste that command into our server's terminal to fire up our worker. Rancher also starts Kubernetes services behind the scenes. If you want to see what Rancher has set up for us, run docker ps to list the running containers. At the bottom, we can see the Rancher Server with our externally mapped ports; the remaining containers manage our agent.
Rancher containers

Back in the UI, we're informed of the status of the agent coming up. This takes a few minutes as each agent image needs to be downloaded and started.

Up and running cluster

Our cluster info

Take some time to have a look around the UI. Many of the features are self explanatory and a bit of exploring will uncover them.

A few things I'd recommend looking into at this point (though beyond the scope of this article):

  • Core Rancher settings
  • Cluster settings
  • Change your default security provider (I went for Github) and add a user
  • Add namespaces. Our apps will later be placed under a namespace. Namespaces help us separate our sites and apps (see the sketch below).
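As a sketch of that last point: if you download the cluster's kubeconfig from the Rancher UI, namespaces can also be created from the terminal. The namespace name here is just an example:

# Create a namespace for one of our sites and list the result
kubectl create namespace my-site
kubectl get namespaces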

Accessing ECR

I use ECR as a registry for my Docker images so we need to allow access to pull images. This can be set up using an instance profile on EC2 with access to the ECR registry.

From the EC2 console, select the instance and then Actions > Instance Settings > Attach/Replace IAM Role. From here, we can create an IAM role through the screen prompts and attach the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ECRGetImage",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability"
            ],
            "Resource": "*"
        }
    ]
}

This gives our server read access to all of our images on ECR (the ecr:GetAuthorizationToken action lets Docker log in to the registry, and a statement needs a Resource to be valid - here a wildcard covering every repository). With the role attached, Kubernetes manages login to the repository for us.
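To sanity check the role from the instance, you can try logging Docker in to ECR manually. The account ID and region below are placeholders, and get-login-password needs a reasonably recent AWS CLI:

# Placeholders: 123456789012 (account ID) and eu-west-1 (region)
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.eu-west-1.amazonaws.com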

Setting up SSL for sites

The last stage before we start adding our applications is setting up SSL.

This is achieved by adding the Let's Encrypt Certificate Manager application. In the context of Rancher, an application is a preconfigured image which we can launch directly from the Rancher UI.

Open up our cluster and click on Apps from the menu.
Add App menu

Click Launch and select the Let's Encrypt Certificate Manager. We are now presented with several options to get this started. We need to change the issuer from the staging issuer to the production issuer using the Let's Encrypt Cluster Issuer option, and we also need to enter our email address.

Provisioning our first workload

A workload is a containerised application. Both our sites and our apps are workloads.

Now that we've done all of the server setup, deploying a workload is little more than an exercise in completing a UI form. From our cluster, click the Deploy button. We're presented with an intuitive form where most of the options will be familiar to Docker users.
Deploy workload options

Give your workload a name, enter the full ECR image name, complete any other options as required and click the Launch button. Our workload should fire up.
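For the curious, the UI form is doing roughly what you could do yourself with kubectl. A minimal sketch, assuming the hypothetical my-site image and the example namespace from earlier:

# Create a deployment from an ECR image and watch the pod come up
kubectl -n my-site create deployment my-site \
  --image=123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-site:latest
kubectl -n my-site get pods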

If your app doesn't require connecting to the outside world, you're done. Our web app, however, needs a couple more steps.

We want to set up a load balancer to direct the traffic coming in to our server to the correct workload determined by the host name. From the cluster menu, select the Load Balancers tab followed by the Add Ingress button. Add a name for the load balancer and then ensure that it's on the same namespace as the website workload that you set up.

We then need to set up the rule which will direct traffic to our workload. The form looks as below. Enter the host name. To direct all traffic (rather than only a sub path), enter / in the path input box. Select the web app workload that we've just set up and the port that the workload accepts requests on.

Load balancer rules

We now have requests on port 80 directed to our workload. The final step is to ensure that we can also accept secure requests on port 443. The instructions to complete this are on the introduction of the Certificate Manager app.

Cert manager instructions

Back on our Load Balancers tab, we can use the menu for our load balancer and select View/Edit YAML. The first thing we need to add is in the metadata.annotations section.

kubernetes.io/tls-acme: "true"
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/secure-backends: "true"

This is a one-time addition. We then need to add the following for each of the sites which we're setting up:

spec:
  tls:
  - hosts:
    - www.mydomain.com
    - mydomain.com
    secretName: mydomain-crt

Note that we can add multiple domains to a certificate. The certificate will be saved in Kubernetes secrets and the secret name is defined here.

Upon saving the YAML, cert-manager should kick in and find this config and obtain a certificate from Let's Encrypt. Assuming we've pointed our host name at our server, the site should now be available on port 443 too.
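You can also watch this from the terminal. cert-manager creates a Certificate resource for the ingress, so something like the following (namespace and secret name matching the YAML above) shows its progress:

# The certificate should eventually report Ready
kubectl -n my-site get certificate
kubectl -n my-site describe certificate mydomain-crt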

Troubleshooting

I got to the situation above through a little trial and error, and I've noted a few issues which popped up along the way. Here are a few places to find information to help you debug.

Pod logs

Assuming you've only set up one pod for your workload, you can access the log of that pod by selecting the workload. When presented with a list of pods, you can use the dropdown menu to view the log of the specific pod.

View logs

This will hopefully help us determine whether requests are reaching the pods.

If you have more than one pod per workload, it may be worth reducing deployed pods to one so that you know where requests should be headed.
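The same logs are available from the terminal if you prefer. The pod name below is a placeholder; list the pods first to find yours:

kubectl -n my-site get pods
kubectl -n my-site logs my-site-7d4b9c8f6-abcde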

Cert Manager logs

If you can't access your site on port 443 then it's worth checking that your certificate was created correctly. This can be done through the certificate manager logs.

Click on your certificate manager workload within your cluster and then from the pod, select View Logs.
Cert manager logs
In the logs, try searching for your host name. Hopefully there will be a message which will help you. This may be something like port 80 wasn't accessible or that your configuration wasn't correct. The couple of errors that I've had in here were well worded and finding the solution wasn't a problem.

ECR Permissions

On one occasion, I had an issue where a Docker image could not be pulled from ECR. For some reason, the EC2 Profile wasn't present. Restarting the server fixed the issue.

If you want to check the user and role from within the EC2 instance, you can run the following command:

aws sts get-caller-identity

What next?

Walking through the steps above achieves the goals I set out at the start. We have a server with user friendly administration. We can simply set up other services through the UI.

Use Rancher namespaces and projects to separate and organise your workloads. While it's pretty easy to manage a couple of workloads, once you add database apps, logging apps and admin UIs connecting to your workloads, you'll certainly appreciate a way of organising them.

I'd recommend getting familiar with the Secrets functionality in Rancher and how to link secrets into your applications. This will help make your server more secure.

Try scaling the cluster with another node: fire up another server, install Docker, then go through the same process of adding a node as we looked at earlier, but with only the worker role (as sketched below).
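Based on the agent command we ran earlier, registering a worker-only node would look something like this (the token is a placeholder; grab the real command from the cluster's Edit screen in the UI):

sudo docker run -d --privileged --restart=unless-stopped \
   --net=host -v /etc/kubernetes:/etc/kubernetes \
   -v /var/run:/var/run rancher/rancher-agent:v2.3.0 \
   --server https://mydomain.com:8443 \
   --token <registration-token> \
   --worker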

Final points

I pulled all the points above from a number of tutorials, articles and Stack Overflow questions. Putting that together with a decent amount of trial and error, I got my personal server set up in a user friendly and maintainable state. I'm not a "server guy" and don't have in-depth knowledge of a lot of the concepts I've dabbled in, so I welcome feedback and thoughts on improving the process and this article.

Top comments (1)

Anuradha DE Silva

How can we change the Rancher Service port in Rancher HA?