How to set up nginx Ingress w/ automatically generated LetsEncrypt certificates on Kubernetes

Background: After searching around, I found plenty of tutorials showing how to do this, but none of them worked without modifications; they were outdated or used Helm (which is not needed, and I wanted to use as few tools as possible). So I had to combine multiple sources to get it running. Basically, this is a compilation of the articles I found online that worked for me.

I'm omitting much of the background information and explanation here, as it can be found in the linked tutorials. Instead, this is meant as a brief set of instructions to get everything working.

As a bonus, we'll also set up port forwarding so that any arbitrary non-HTTP/HTTPS service (like a database) becomes accessible from the internet over the same IP/LoadBalancer as the websites.

I'm using the managed Kubernetes offered by DigitalOcean, so if you're new, I'd be happy if you used my referral link here to sign up. You can spin up a k8s cluster starting at $10/month.

Prerequisites

  • Basic understanding of Kubernetes objects/types
  • A k8s cluster, with kubectl set up and ready
  • Access to a DNS provider to create DNS entries pointing to your cluster

Sources

Credits to the sources used in this article (in no particular order):

Create some dummy 'echo' deployments

We want some dummy web services that simply respond to HTTP requests. Create the following two files:

# echo1.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
# echo2.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678

Apply both deployments:

$ kubectl apply -f echo1.yaml
$ kubectl apply -f echo2.yaml

This will create two deployments along with two services, listening on cluster-internal port 80:

$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
echo1        ClusterIP   10.245.164.177   <none>        80/TCP    1h
echo2        ClusterIP   10.245.77.216    <none>        80/TCP    1h
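To verify that the services respond before any Ingress is involved, you can run a quick one-off test pod inside the cluster. This is a minimal sketch, assuming the echo services run in the 'default' namespace; the pod name and busybox image are arbitrary choices:

$ kubectl run echo-test --rm -it --restart=Never --image=busybox -- wget -qO- http://echo1.default.svc.cluster.local
echo1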

Set up ingress-nginx

Getting ingress-nginx up and running only requires two commands:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

service/ingress-nginx created

At this point, nginx-ingress is set up and a Load Balancer will spin up (caution: cloud providers charge fees for every Load Balancer). It can take some minutes to show up. You can check the status with the following command:

$ kubectl get -n ingress-nginx service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.245.209.25   155.241.87.123   80:30493/TCP,443:30210/TCP   1h
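If the EXTERNAL-IP column still shows '<pending>', you can watch the service until your cloud provider has finished provisioning:

$ kubectl get -n ingress-nginx service ingress-nginx -w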

UPDATE:

There's an open issue regarding network routing that causes problems later, when a pod requesting certificates runs into timeouts trying to self-check the requested domain.

Because of this, we have to change the 'externalTrafficPolicy' of the just-created 'ingress-nginx' Service. Create a file 'Service_ingress-nginx.yaml' with the following content:

# Service_ingress-nginx.yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # the cloud-generic manifest ships with externalTrafficPolicy = 'Local',
  # but due to an issue
  # (https://stackoverflow.com/questions/59286126/kubernetes-cluterissuer-challenge-timeouts,
  # https://github.com/danderson/metallb/issues/287)
  # it has to be 'Cluster' for now
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https

When you diff/apply this file, the only difference should be that externalTrafficPolicy is now 'Cluster'. The disadvantage is that you can lose track of the original requesting IP, but for me this does not matter, and it resolves the certificate-requesting issue.

kubectl diff -f Service_ingress-nginx.yaml # show differences
kubectl apply -f Service_ingress-nginx.yaml # activate changes

Note down your external IP, as we'll use it as the single gateway for all services we want to make publicly accessible.

Now go to your DNS provider and create two entries (A records) pointing to your external IP.
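You may want to verify that the new records actually resolve before continuing (the hostnames and IP are placeholders here):

$ dig +short echo1.yourdomain.com
155.241.87.123
$ dig +short echo2.yourdomain.com
155.241.87.123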

Create a file 'echo_ingress.yaml' and adjust it to match the DNS entries just created:

# echo_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"    
spec:
  rules:
  - host: echo1.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Create the ingress by applying it:

$ kubectl apply -f echo_ingress.yaml

The two sites should now be available via HTTP. Go ahead and try visiting them in the browser.
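You can also test from the command line; each site should answer with the text configured in its deployment:

$ curl http://echo1.yourdomain.com
echo1
$ curl http://echo2.yourdomain.com
echo2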

cert-manager

The next step is to install cert-manager, which will later use Issuers to obtain our certificates.

$ kubectl create namespace cert-manager
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml

This installs cert-manager. You can check for running pods:

$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c47f46f57-n9bb6              1/1     Running   0          31s
cert-manager-cainjector-6659d6844d-6zjh5   1/1     Running   0          31s
cert-manager-webhook-547567b88f-9hj5f      1/1     Running   0          31s
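Since the next steps depend on cert-manager's custom resources, you can also verify that the CustomResourceDefinitions were installed; the list should include resources such as certificates, issuers, clusterissuers, orders and challenges:

$ kubectl get crd | grep cert-manager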

To check whether cert-manager runs correctly, we can issue a self-signed certificate for testing. Create a file named 'test-cert-manager.yaml':

# test-cert-manager.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  commonName: example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned

Apply it and check the output:

$ kubectl apply -f test-cert-manager.yaml
$ kubectl describe certificate -n cert-manager-test

The Events section of the describe output should state something like 'Certificate issued successfully'.

If it's OK, delete the test resources:

$ kubectl delete -f test-cert-manager.yaml

With cert-manager up and running, we're missing one final piece: an Issuer (or, in our case, a ClusterIssuer, so that we don't need to specify namespaces and it just works globally) for generating valid certificates.

ClusterIssuers

We'll create two issuers. The first one (staging) is to test whether everything works correctly. If something is wrong, like a bad DNS setting, and we go straight for the production issuer, we could get temporarily rejected (rate-limited) by LetsEncrypt's servers.

Create the following two files and adjust your e-mail address:

# staging_issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: me@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class:  nginx
# prod_issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: me@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class:  nginx

Create the issuers:

$ kubectl create -f staging_issuer.yaml
$ kubectl create -f prod_issuer.yaml
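Shortly after creation, both issuers should register an ACME account with LetsEncrypt. You can verify this by describing them; the status should contain a 'Ready' condition with value 'True':

$ kubectl describe clusterissuer letsencrypt-staging
$ kubectl describe clusterissuer letsencrypt-prod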

Extend the previously created 'echo_ingress.yaml' so that it looks like this:

# echo_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
  - hosts:
    - echo1.yourdomain.com
    - echo2.yourdomain.com
    secretName: letsencrypt-staging
  rules:
  - host: echo1.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Apply the adjustments:

$ kubectl apply -f echo_ingress.yaml

Now, in my case it took several minutes until the certificate got issued. Check the Events section in the output of the following command:

$ kubectl describe certificate letsencrypt-staging

After some time it should state something like 'Certificate issued successfully'. Then you can reload the sites in your browser and inspect the certificates. They should now be issued by LetsEncrypt's fake (staging) authority.
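You can also check the issuer from the command line; '-k' is needed because the staging certificate is intentionally untrusted (a sketch; the exact output format depends on your curl version):

$ curl -kv https://echo1.yourdomain.com 2>&1 | grep issuer
*  issuer: CN=Fake LE Intermediate X1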

If this succeeds, we can finally come to the last step: substituting the issuer with the production one. Adjust 'echo_ingress.yaml' one more time, switching to 'letsencrypt-prod':

# echo_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - echo1.yourdomain.com
    - echo2.yourdomain.com
    secretName: letsencrypt-prod
  rules:
  - host: echo1.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80

Apply the adjustments again:

$ kubectl apply -f echo_ingress.yaml

Now, again possibly after some minutes, your site should be issued a valid certificate:

$ kubectl describe certificate letsencrypt-prod

Congratulations! We finally have our sites running with valid certificates. The nice thing at this point is that the setup scales: from now on it's easy to add or remove sites.
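For example, adding a third site is just another hostname in the 'tls' section plus another rule in the same Ingress (the hostname and service name below are placeholders):

  - host: echo3.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: echo3
          servicePort: 80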

Troubleshooting certificate issuing

If at some point certificates are not issued, it helps to know which components are involved until a cert is successfully issued. In my case, for example, challenges could not be fulfilled. Just follow the guide on the official cert-manager site until the root cause is identified:

https://cert-manager.io/docs/faq/acme/
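In short, cert-manager works through a chain of resources (Certificate → CertificateRequest → Order → Challenge), and describing each one in turn usually reveals where issuance got stuck:

$ kubectl describe certificate <name>
$ kubectl describe certificaterequest <name>
$ kubectl describe order <name>
$ kubectl describe challenge <name>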

Bonus - make other services available outside of the cluster

Say you have some service other than websites, like a database, running inside your cluster, and you want to make it accessible from outside.

One method would be to just attach a 'NodePort' service to it, but that comes with restrictions: only ports in the 30000-32767 range are available by default, and if one of your nodes goes down or gets replaced, the IP used to access the service needs to be adjusted.

Instead, we can just add arbitrary ports (TCP or UDP) to our existing load-balanced nginx-ingress service. Only some small modifications are needed.

Here's an example of opening TCP port 9000 and forwarding it to some service running internally on port 27017.

Previously, we applied the resources from the URLs 'https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml' and 'https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml'.

They contain two resources that are active but need to be adjusted and re-applied. First, create a file 'ingress-nginx-tcp-configmap.yaml':

# ingress-nginx-tcp-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  9000: "default/someservice:27017"

Second, create a file called 'ingress-nginx-service.yaml':

# ingress-nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP


Apply the adjustments:

$ kubectl apply -f ingress-nginx-tcp-configmap.yaml
$ kubectl apply -f ingress-nginx-service.yaml

Your service should now be accessible from outside the cluster, on the same IP as the websites.
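A quick way to check that the port is reachable from your machine (using the external IP noted earlier; netcat only tests the TCP connection, not the service itself):

$ nc -vz 155.241.87.123 9000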

Top comments (12)

Ed

This post is worth 1,000,000 hearts. I solved the issue and actually understood it. There are many flavors of getting cert-manager on track with Kubernetes, on EKS from AWS. Btw, never install cert-manager from GitLab; you will end up deleting an entire cluster just to get rid of it. Ingress works like a champ on GitLab's, nothing else. Once again, thanks bro.

Ed

I solved my production problem, thanks :D. But what about, let's say, staging-api.mysite.com in my staging namespace? It creates the certificate, but in the browser it shows CN=Fake LE Intermediate X1, which is not trusted, and Firefox won't open it. Any idea? I have production and staging namespaces, each with their own copy/pasted ingress with different names (staging in this case).

Chris

Maybe you have just misinterpreted what 'staging' refers to in different contexts:

In the context of LetsEncrypt staging certs:

As far as I know, the LetsEncrypt Staging Authority issues exactly the kind of certificates you mentioned. They are not trusted by browsers and are only meant for initially testing whether issuing certificates works in general. Once that works, you need to switch to the LetsEncrypt production authority.

In the context of your staging API:

It does not mean that your staging environment should use the LetsEncrypt staging authority. Instead, you have to switch it to the production authority as well.

Dinesh Rathee

LetsEncrypt revoked around 3 million certs last night due to a bug they found. Are you impacted by this? Check out:

DevTo
[+] dev.to/dineshrathee12/letsencrypt-...

GitHub
[+] github.com/dineshrathee12/Let-s-En...

LetsEncryptCommunity
[+] community.letsencrypt.org/t/letsen...

Dragos Cirjan

I'm trying to apply the above setup in a Vagrant set of machines running Ubuntu 18.04.

Unfortunately, when trying to
kubectl apply -f Service_ingress-nginx.yaml
everything runs well, but then
vagrant@k8smaster:/vagrant/proxy$ kubectl get --all-namespaces service
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
ingress-nginx ingress-nginx LoadBalancer 10.102.210.119 80:32550/TCP,443:32197/TCP 22m
...

I tried to add in Service_ingress-nginx.yaml:

externalIPs:
  - 192.168.1.231

where the IP above is the machine's external IP.

kubectl get --all-namespaces service will then show an external IP, but I cannot view any of the domains in the browser...

Installing Docker & Kubernetes with this Makefile: github.com/dragoscirjan/configs/bl...

Maybe I'm missing something.
Would be really grateful if you could advise.

Adie Olalekan

This is awesome, Chris. One question please: is this certificate self-renewing?

Chris

Hi Adie, yes cert-manager takes care of that job.

At least that's what the cert-manager repo claims: 'It will ensure certificates are valid and up to date periodically, and attempt to renew certificates at an appropriate time before expiry.'

Though I'll still have to wait some time before being able to really confirm it :-)

Adie Olalekan

Okay, thank you!

Mikko Hirvonen

Thanks a lot! I went through quite a few instructions for Let's Encrypt with Kubernetes, but this was the first one with a successful result. You saved me a lot of time. Thanks!

Chris

Nice, love to hear that!

Sinan Mujan

Thank you so much, this helped me solve my problem with issuing the certificate; your Service_ingress-nginx.yaml file was the key. Awesome article!

Chris

Thank you, I'm glad it helped!