DEV Community

Beppe

The Home Server Journey - 3: An Actually Global "Hello"

Hi again

For starters, I apologize for setting up the wrong expectations: chapter 1 anticipated all that stuff about external IPs and name servers [like it would be immediately necessary], but even in chapter 2 we still only accessed applications from inside the local network

In my defense, I wished to make all the prerequisites clear from the get-go, so that there were no surprises about what you were getting into. Anyway, let's address that today, sooner rather than later, shall we?

Your Virtual Home Receptionist

The reason you can't simply type http://&lt;my WAN IP&gt;:8080 or http://mybeautifuldomain.com:8080 to view the test page you're running is that K3s (and Kubernetes implementations in general) doesn't expose pods by default. And that's the sensible decision, considering that not every application is supposed to be reachable from outside, like a database providing storage for your backend service
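You can see that for yourself: a Service of the default ClusterIP type only gets a cluster-internal address. A quick check, assuming the test namespace and welcome-service from the previous chapter:

```shell
# List services in the test namespace (names assumed from the previous chapter)
kubectl get service -n test
# The CLUSTER-IP column shows an address reachable only from inside the cluster;
# without a LoadBalancer or an ingress in front, there's no route from the outside world
```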

Those access restrictions are enforced by the container runtime using iptables to set network rules. I learned that the hard way when K3s conflicted with the nftables system service I already had running (fixed by disabling the latter). Time to update to newer tools, containerd!

Moreover, while it's obvious where external requests should go when we have a single application, how do we handle many containers exposed to Internet access? For instance, it's not practical, or sometimes not even possible, to demand an explicit port to select different Web services (e.g. mydomain.com:8081 and mydomain.com:8082), as the standard is to use the default ports 80 and 443 for HTTP and HTTPS, respectively

With that requirement, it's common to share a single address (host and port) and use paths (e.g. mydomain.com/app1 and mydomain.com/app2) or subdomains (e.g. app1.mydomain.com and app2.mydomain.com) to address a particular process. To achieve that, messages have to be interpreted and rerouted [to a local port] before reaching their intended target, a task performed by what is known as a reverse proxy

In K8s terminology, the component responsible for that functionality is called Ingress. It's actually a common interface for different implementations named Ingress Controllers, of which the NGINX-based one is usually recommended (if you're not doing anything fancy or advanced), as it's officially maintained

I emphasize usually because K3s is different and comes with a Traefik-based ingress controller by default. Taking that into account, as much as I like NGINX outside the container's world, I'd rather keep things simple and use what's already in place
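You can confirm that the bundled controller is there: on a stock K3s install, Traefik runs in the kube-system namespace, and an IngressClass is registered for it:

```shell
# The Traefik pods deployed by K3s live in kube-system
kubectl get pods -n kube-system | grep traefik
# The registered ingress class name is what manifests reference via ingressClassName
kubectl get ingressclass
```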

It's totally fine to use Ingress-Nginx, though. Just go to the GitHub repository and follow the instructions. I've used it myself before running into problems with upgrades (but you're not that dumb, right?). Be aware that all its components will be created in their own predefined namespace, valid system-wide

Now we may use the provided controller to set our ingress rules:

apiVersion: networking.k8s.io/v1                    
kind: Ingress                                     # Component type
metadata:
  name: proxy                                     # Component name
  namespace: test                                 # Namespace where the component is created (a parameter you may omit to use the default)
status:
  loadBalancer: {}
spec:
  ingressClassName: traefik                       # Type of controller being used    
  rules:                                          # Routing rules
  - host: choppaserver.dynv6.net                  # Expected domain name of request, including subdomain
    http:                                         # For HTTP or HTTPS requests (standard ports)
      paths:                                      # Behavior for different base paths
        - path: /                                 # For all request paths
          pathType: Prefix
          backend:
            service:
              name: welcome-service               # Redirect to this service
              port:
                number: 8080                      # Redirect to this internal service port

Since the ingress now exposes the service to the external network, it's no longer required to set type: LoadBalancer for welcome-service, as done in the previous chapter; removing it reverts the service to the default ClusterIP behavior
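As a sketch of what that means (the service and label names here are assumed from the previous chapter), simply dropping the type field is enough:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: welcome-service        # Service referenced by the ingress backend
  namespace: test
spec:                          # No "type: LoadBalancer" line: defaults to ClusterIP
  selector:
    app: welcome               # Assumed pod label from the previous chapter
  ports:
    - port: 8080               # Port the ingress redirects to
      targetPort: 8080         # Port the container actually listens on
```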

Save it to a file and apply the manifest with kubectl. If everything is correct (including the network instructions from the first article), the test page should be accessible via domain name, or you may get something like this:

Browser security warning

Which is just the browser being picky about non-secure access: using http:// instead of https://, or not having valid certificates (more on that later). If you want to check the contents of the page, just be a pro for now and use curl:

$ curl "http://choppaserver.dynv6.net"                                                                                                                                                                
<!DOCTYPE HTML>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="welcome.txt">welcome.txt</a></li>
</ul>
<hr>
</body>
</html>
$ curl "http://choppaserver.dynv6.net/welcome.txt"                                                                                                                                                     
Hello, world!

(Almost the same... Who needs browsers anyway?)

Papers, please

Seriously, though, SSL certificates are important nowadays [with HTTPS being ubiquitous], and we need to get one. The easiest way is to have them automatically generated by cert-manager. To cut it short, get the latest version of the manifest file from GitHub or simply install it directly from the download link:

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/<desired version>/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

(Quite a few pieces, huh?)

Wait for the manager pods to complete their initialization:

Cert-manager pods
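One way to watch that from the terminal (exact pod names will vary):

```shell
# The three deployments (controller, cainjector, webhook) should all reach Running
kubectl get pods -n cert-manager
# Or block until they report ready, with a timeout as a safety net
kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s
```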

The manifest not only creates many standard K8s resources but also defines new custom ones, like the ClusterIssuer we have to manually add now for each environment (only one in our case):

apiVersion: cert-manager.io/v1      # API service created by cert-manager
kind: ClusterIssuer                 # Custom component type
metadata:
  name: letsencrypt
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your e-mail here>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik            # Ingress controller type

(As always, save the .yaml and apply. A ClusterIssuer is not namespace-scoped and can be used by Certificate resources in any namespace)

Here we're using the ACME protocol with an HTTP-01 solver to get a valid certificate from the Let's Encrypt certificate authority, and I honestly can't explain to you properly what all that means. However, one thing I do know is that this particular solver requires the ability to receive requests on port 80. It's common for ISPs to restrict system ports (below 1024), and if that's your case a DNS-01 solver might be useful. Please consult the documentation for more
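Once the issuer manifest is applied, you can check that registration with Let's Encrypt succeeded:

```shell
# The READY column should turn True once the ACME account is registered
kubectl get clusterissuer letsencrypt
# If it stays False, the Status section usually explains why
kubectl describe clusterissuer letsencrypt
```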

Finally, we can edit our ingress manifest to define the domains requiring a certificate:

apiVersion: networking.k8s.io/v1                    
kind: Ingress                                     # Component type
metadata:
  name: proxy                                     # Component name
  namespace: test                                 # Namespace where the component is created (a parameter you may omit to use the default)
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # Reference the utilized ClusterIssuer name 
    kubernetes.io/ingress.class: traefik          # Legacy annotation for the controller type (yes, again)
status:
  loadBalancer: {}
spec:
  ingressClassName: traefik                       # Ingress controller type
  tls:                                            # Certificate options
  - hosts:                                        # List of valid hosts
      - choppaserver.dynv6.net
    secretName: certificate                       # Secret component that will store the certificates      
  rules:                                          # Routing rules
  - host: choppaserver.dynv6.net                  # Expected domain name of request, including subdomain
    http:                                         # For HTTP or HTTPS requests
      paths:                                      # Behavior for different base paths
        - path: /                                 # For all request paths
          pathType: Prefix
          backend:
            service:
              name: welcome-service               # Redirect to this service
              port:
                number: 8080                      # Redirect to this internal service port

After a while, a secret with the name defined in secretName should appear in the namespace you're using:

Certificate secret
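From the terminal, the same can be verified like this (assuming the test namespace and secret name from the manifests above):

```shell
# cert-manager creates a Certificate resource for the ingress; READY should become True
kubectl get certificate -n test
# The resulting TLS secret stores the key pair under the tls.crt / tls.key entries
kubectl get secret certificate -n test
```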

And the browser should stop complaining about the lack of security in your Web page:

Secure access

Congrats! Now the actual world can hear you say "hello". For more information on setting up cert-manager, see this guide (or this one for Ingress-Nginx)

See you next time
