<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lance Nehring</title>
    <description>The latest articles on DEV Community by Lance Nehring (@lance_nehring).</description>
    <link>https://dev.to/lance_nehring</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1903613%2Fdd93d705-3acf-4fce-9682-619fd0501996.jpg</url>
      <title>DEV Community: Lance Nehring</title>
      <link>https://dev.to/lance_nehring</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lance_nehring"/>
    <language>en</language>
    <item>
      <title>Adventures with K0S in AWS</title>
      <dc:creator>Lance Nehring</dc:creator>
      <pubDate>Fri, 09 Aug 2024 14:22:47 +0000</pubDate>
      <link>https://dev.to/lance_nehring/adventures-with-k0s-in-aws-4ho9</link>
      <guid>https://dev.to/lance_nehring/adventures-with-k0s-in-aws-4ho9</guid>
      <description>&lt;p&gt;At the time of this writing (August 2024), K0S is at version v1.30.3.  There's a tremendous about of outdated and incorrect information on the Internet (which impacts AI, if you're into asking AmazonQ or ChatGPT questions), so be aware of the date of this article. My goal is to keep it current - we'll see how that goes.&lt;/p&gt;

&lt;p&gt;This isn't actually a tutorial - the end state is not desirable and the information is too dense. This is more of an "engineering notebook" - akin to what my fellow graybeards may recall from engineering school.&lt;/p&gt;

&lt;p&gt;My plan was to establish a Kubernetes presence on AWS without incurring the costs of Amazon's EKS. I wanted a lightweight, but fully functional, K8S installation that I could stand up and tear down to prove out orchestration and deployment of containerized projects as they come along - such as those for a startup company where attention to cloud cost is paramount.  I'm certainly not against EKS for those situations where the cost is justified, and I have used it heavily in the past.&lt;/p&gt;

&lt;p&gt;Picking through the various smaller K8S projects out there, I've settled on &lt;a href="https://docs.k0sproject.io/stable/" rel="noopener noreferrer"&gt;K0S&lt;/a&gt; since it's supposed to be "The Zero Friction Kubernetes". The features I'm after with this experiment are similar to what I've used with EKS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ability to pull images from ECR&lt;/li&gt;
&lt;li&gt;use the AWS cloud provider functionality to get those AWS-specific things: using tags to annotate subnets, worker nodes, route tables, etc. for use by the K0S installation.&lt;/li&gt;
&lt;li&gt;use the pod identity agent to address pods that require certain privileges within AWS via IAM roles&lt;/li&gt;
&lt;li&gt;use an ingress controller to manage the provisioning and lifecycle of AWS ELBs - namely NLBs and ALBs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;K0S has a tool called "k0sctl" to manage installation, but it requires SSH access to the nodes. I have no other use for SSH and don't need to expand the attack surface, so I won't install it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Establish a VPC for testing
&lt;/h2&gt;

&lt;p&gt;I won't cover the mechanics of creating the VPC, subnets, Internet gateway, route table, security groups, NACLs, etc. I personally use the IaC tool &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; whenever possible.  There is a learning curve to use something like Terraform (and learning HCL), but the benefits are enormous - especially when you need consistency so you don't waste time chasing ghosts resulting from misconfigured infrastructure.&lt;/p&gt;

&lt;p&gt;I'm using a VPC with a class B private CIDR (172.16.0.0/16) in the us-east-1 region, enabled for DNS hostnames and DNS resolution. I created 3 public subnets (each with a 20 bit subnet mask) even though we're only using 1 subnet to start with. The main route table needs a route for 0.0.0.0/0 that goes to the Internet Gateway for the VPC. I didn't create any private subnets in order to reduce the cost and need for any NAT gateways for this experiment.&lt;/p&gt;
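As an illustration of the subnet layout (not part of any IaC - the real subnets come from Terraform), a /20 covers 4096 addresses, which is a step of 16 in the third octet, so three /20s carve out of 172.16.0.0/16 like this:

```shell
# Illustrative only: derive three /20 subnet CIDRs from the 172.16.0.0/16 VPC.
# A /20 spans 4096 addresses, i.e. 16 consecutive values of the third octet.
VPC_PREFIX="172.16"
SUBNET_CIDRS=()
for i in 0 1 2; do
  SUBNET_CIDRS+=("${VPC_PREFIX}.$((i * 16)).0/20")
done
printf '%s\n' "${SUBNET_CIDRS[@]}"
```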

&lt;h2&gt;
  
  
  Create your EC2 instance
&lt;/h2&gt;

&lt;p&gt;Similarly, I won't cover the mechanics of creating an EC2 instance. Terraform comes in really handy here, since you may find yourself repeatedly doing &lt;code&gt;terraform apply&lt;/code&gt; and &lt;code&gt;terraform destroy&lt;/code&gt; as you start and stop your experiments.  I'm using a "t3a.large" node with the latest AL2023 AMI - enough vCPU, memory, and networking to keep us out of harm's way, without costing too much (in case we forget to destroy the instance after testing).  Also, I'm not bothering to set up SSH to get a shell on the instance; I'm using &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt; instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Establish an IAM role for the EC2 instance to use as its instance profile
&lt;/h2&gt;

&lt;p&gt;We're going to do ourselves a giant favor in terms of security and start using IAM roles immediately.  Many articles related to AWS just talk about putting credentials in some "~/.aws/credentials" file. Yes, you can do that, but you immediately create an issue that will fail a security audit, and you actually make your life harder by having to track and secure those credentials. So don't cheat and use your personal IAM access keys, or &lt;a href="https://en.wikipedia.org/wiki/Krampus" rel="noopener noreferrer"&gt;Krampus&lt;/a&gt; will find you.&lt;br&gt;
You can use Terraform for this as well.  Effectively you need an IAM role; I named mine "k0s_instance" and attached these AWS managed policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonEC2ContainerRegistryReadOnly&lt;/li&gt;
&lt;li&gt;AmazonEKS_CNI_Policy  (for when we experiment with the AWS CNI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll also need to create 2 customer managed policies and attach those to the role.   The policy permissions information is from the docs here: &lt;a href="https://cloud-provider-aws.sigs.k8s.io/prerequisites/#iam-policies" rel="noopener noreferrer"&gt;https://cloud-provider-aws.sigs.k8s.io/prerequisites/#iam-policies&lt;/a&gt;.  I named mine "k0s_control_plane_policy" and "k0s_node_policy".&lt;/p&gt;
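For reference, the role also needs a trust policy that lets EC2 assume it as an instance profile. A minimal sketch (the local file name is just illustrative - wire this up however your IaC does; the managed and customer managed policies are attached separately):

```shell
# Minimal sketch: trust policy allowing EC2 to assume the "k0s_instance" role.
cat << 'EOF' > k0s-instance-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```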
&lt;h2&gt;
  
  
  Install k0s
&lt;/h2&gt;

&lt;p&gt;We're going to start at the smallest cluster - a single node. All of the following installation steps are done in a "root" shell on the EC2 instance. This means the control plane and worker artifacts will be running on the same node. There are side effects with node selection and tolerations that we'll run into, but we'll address that later.&lt;br&gt;
&lt;a href="https://docs.k0sproject.io/stable/install/" rel="noopener noreferrer"&gt;https://docs.k0sproject.io/stable/install/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSLf https://get.k0s.sh | sudo sh
k0s sysinfo
mkdir -p /etc/k0s
k0s config create &amp;gt; /etc/k0s/k0s.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Get the ECR credential provider binary
&lt;/h2&gt;

&lt;p&gt;This link can help you determine what releases are available: &lt;a href="https://github.com/kubernetes/cloud-provider-aws/releases" rel="noopener noreferrer"&gt;https://github.com/kubernetes/cloud-provider-aws/releases&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://cloud-provider-aws.sigs.k8s.io/credential_provider/" rel="noopener noreferrer"&gt;AWS credential provider documentation&lt;/a&gt; is very light. You get to create a configuration file and then search the K0S docs for how to manipulate the kubelet arguments to use that file.  This &lt;a href="https://medium.com/@sajjadzaheri/how-to-authenticate-aws-ecr-on-any-kubernetes-cluster-the-right-way-26b6ee190125" rel="noopener noreferrer"&gt;article&lt;/a&gt; is for K3S, but shows that the configuration file can be YAML instead of JSON - something that isn't mentioned in the credential provider docs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RELEASE=v1.30.3
curl -OL https://storage.googleapis.com/k8s-staging-provider-aws/releases/${RELEASE}/linux/amd64/ecr-credential-provider-linux-amd64
mv ecr-credential-provider-linux-amd64 /etc/k0s/ecr-credential-provider
chmod 0755 /etc/k0s/ecr-credential-provider
cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/k0s/custom-credential-providers.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
- name: ecr-credential-provider
  matchImages:
  - "*.dkr.ecr.*.amazonaws.com"
  - "*.dkr.ecr.*.amazonaws.com.cn"
  apiVersion: credentialprovider.kubelet.k8s.io/v1
  defaultCacheDuration: '0'
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Edit the k0s.yaml
&lt;/h2&gt;

&lt;p&gt;Here is where we actually set the kubelet arguments so that the ECR credential provider will work.  The clues for this were from this &lt;a href="https://gist.github.com/schakko/d53deb3e75309ea5577693a21cb3cbc3" rel="noopener noreferrer"&gt;Gist&lt;/a&gt;. Combining that information with this K0S &lt;a href="https://docs.k0sproject.io/stable/worker-node-config/" rel="noopener noreferrer"&gt;doc&lt;/a&gt;, we discover that it is possible to use "--kubelet-extra-args" on the k0s command line to pass extra arguments to the kubelet. &lt;br&gt;
Also, there seems to be no way to get the default K0S CNI of kuberouter to work in AWS. I don't know the root cause - possibly there are CIDR conflicts with what I chose for my VPC CIDR - but it was a simple change to set the "spec.network.provider" value to "calico" in the "/etc/k0s/k0s.yaml" file that we created. Calico worked fine for me without further configuration.&lt;br&gt;
So, for now we're using Calico as the CNI. I feel like I should be able to use the AWS VPC CNI plugin, but that has not yet been successful for me. &lt;em&gt;This may need to be revisited if the AWS Load Balancer Controller requires it.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
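The CNI change itself is a one-line edit. A sketch of it, run here against a trimmed stand-in file (on the node you'd run the same sed against "/etc/k0s/k0s.yaml"):

```shell
# Sketch of the CNI switch: flip spec.network.provider from kuberouter to
# calico. Shown against a minimal stand-in file, not the real k0s.yaml.
cat << 'EOF' > k0s-config-snippet.yaml
spec:
  network:
    provider: kuberouter
EOF
sed -i 's/provider: kuberouter/provider: calico/' k0s-config-snippet.yaml
grep 'provider:' k0s-config-snippet.yaml
```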

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0s install controller --single --enable-cloud-provider --kubelet-extra-args="--image-credential-provider-config=/etc/k0s/custom-credential-providers.yaml --image-credential-provider-bin-dir=/etc/k0s" -c /etc/k0s/k0s.yaml
systemctl daemon-reload
k0s start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will "start" the cluster, which we need to do to get the our kubectl configured that we'll do next.  The single node cluster won't truly start yet - and that's ok for now.   You'll notice Pods in the pending state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0s status
k0s kubectl get pod -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install and configure kubectl
&lt;/h2&gt;

&lt;p&gt;Here we grab a version of kubectl that matches our kubernetes version so that we maximize compatibility.  We use the k0s command to generate a valid config file and put it in the expected place.  Note that this file contains the "keys to the kingdom" as far as the k0s installation is concerned, so treat it appropriately.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k0s version
curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
mkdir -p ~/.kube
k0s kubeconfig admin &amp;gt; ~/.kube/config
chmod 0600 ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install helm and add the stable and cloud-provider-aws repos
&lt;/h2&gt;

&lt;p&gt;We're embracing helm charts for repeatable, stable, versioned installations of everything we can.  Install the latest version of helm and set up a few repos that we intend to use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dnf install -y git
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add stable https://charts.helm.sh/stable
helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
helm repo add eks https://aws.github.io/eks-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install the helm chart for the aws-cloud-controller-manager
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AWS Tags
&lt;/h3&gt;

&lt;p&gt;The documentation for the &lt;a href="https://cloud-provider-aws.sigs.k8s.io/prerequisites/" rel="noopener noreferrer"&gt;AWS Cloud Provider&lt;/a&gt; is rather underwhelming. Especially frustrating is the lack of direct information about tagging AWS resources. There's some information here that can help: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://repost.aws/knowledge-center/eks-vpc-subnet-discovery" rel="noopener noreferrer"&gt;https://repost.aws/knowledge-center/eks-vpc-subnet-discovery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/49802306/aws-integration-on-kubernetes" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/49802306/aws-integration-on-kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.golinuxcloud.com/setup-kubernetes-cluster-on-aws-ec2/" rel="noopener noreferrer"&gt;https://www.golinuxcloud.com/setup-kubernetes-cluster-on-aws-ec2/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using a kubernetes cluster name of "testcluster", the tags we start with are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;caption&gt;Tags for EC2 instance, VPC, subnets&lt;/caption&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubernetes.io/cluster/testcluster&lt;/td&gt;
&lt;td&gt;owned&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;caption&gt;Additional tags for subnets&lt;/caption&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubernetes.io/role/elb&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubernetes.io/role/alb-ingress&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kubernetes.io/role/internal-elb&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
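As a sketch, the subnet tag set above can be captured in a JSON file in the shape that &lt;code&gt;aws ec2 create-tags --tags file://...&lt;/code&gt; accepts (the file name and CLUSTER variable here are illustrative; the cluster ownership tag also goes on the EC2 instance and the VPC, while the role tags go on the subnets only):

```shell
# Illustrative: subnet tags from the tables above as a create-tags JSON list.
CLUSTER="testcluster"
cat << EOF > k0s-subnet-tags.json
[
  { "Key": "kubernetes.io/cluster/${CLUSTER}", "Value": "owned" },
  { "Key": "kubernetes.io/role/elb", "Value": "1" },
  { "Key": "kubernetes.io/role/alb-ingress", "Value": "1" },
  { "Key": "kubernetes.io/role/internal-elb", "Value": "1" }
]
EOF
```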

&lt;h3&gt;
  
  
  AWS Cloud Provider
&lt;/h3&gt;

&lt;p&gt;I found it necessary to edit the node selector and tolerations of the daemonset to get the pod scheduled in this single node deployment. I was also unable to get AWS route tables annotated to the point where the aws-cloud-controller-manager would be happy about configuring cloud routes.  I'm not sure what "cloud routes" are supposed to be, but for now, I've disabled that feature.  There's more on it &lt;a href="https://blog.scottlowe.org/2021/10/12/using-the-external-aws-cloud-provider-for-kubernetes/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
We are doing all this in the custom helm values file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/k0s/accm-values.yaml
---
args:
  - --v=2
  - --cloud-provider=aws
  - --configure-cloud-routes=false
nodeSelector:
  node-role.kubernetes.io/control-plane: "true"
tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
  value: "true"
  effect: NoSchedule
EOF
helm -n kube-system upgrade --install aws-cloud-controller-manager aws-cloud-controller-manager/aws-cloud-controller-manager --values /etc/k0s/accm-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install pod identity agent
&lt;/h2&gt;

&lt;p&gt;Unfortunately, I didn't see a helm chart for the eks-pod-identity-agent hosted on a Helm repo.  So we're forced to clone the git repo and install the helm chart from that work area.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/k0s/epia-values.yaml
---
clusterName: testcluster
env:
  AWS_REGION: us-east-1
EOF
git clone https://github.com/aws/eks-pod-identity-agent.git
cd eks-pod-identity-agent/
helm install eks-pod-identity-agent --namespace kube-system ./charts/eks-pod-identity-agent --values ./charts/eks-pod-identity-agent/values.yaml --values /etc/k0s/epia-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Is it working so far?
&lt;/h2&gt;

&lt;p&gt;Kubectl should be happy with the node and the pods. It can take a few minutes for the pods to reach a "Running" state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get node -o wide
kubectl get pod -A -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From what I've seen, the worker node (our only node, at this point) must have IP addresses assigned, or the &lt;code&gt;kubectl logs&lt;/code&gt; command will fail when you try to inspect logs from the pods/containers.  I found that you can still read the logs in the "/var/log/containers" directory on the EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Here's where things take another complicated twist.  The AWS Cloud Controller Manager contains a legacy AWS load balancer controller capable of managing legacy ELBs and NLBs.  The code for the NLB management shows an older and a newer API... I'm not sure what the switch is between the two, but it may very well be the "--v=2" argument that was passed to the aws-cloud-controller-manager.  Oddly, the newer API for NLBs is not capable of configuring for proxy protocol, whereas the docs suggest that it can - and so does the code for the older API.&lt;br&gt;
It appears that this legacy code in the aws-cloud-controller-manager is basically EOL - you can still use it, but broken things are not getting fixed.  The push seems to be toward a follow-on project, the AWS Load Balancer Controller.  It is absolutely confusing, but I did find an &lt;a href="https://www.doit.com/demystifying-the-kubernetes-aws-load-balancer-controller/" rel="noopener noreferrer"&gt;article&lt;/a&gt; that explains it better.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install the Nginx Ingress Controller
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;https://kubernetes.github.io/ingress-nginx/&lt;/a&gt;&lt;br&gt;
The available customization values can be found with this nifty helm command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're creating a special configuration here based on: &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb" rel="noopener noreferrer"&gt;https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The idea is that the Nginx ingress controller is behind an NLB that accepts HTTPS and HTTP traffic (TCP:443 and TCP:80).  The HTTPS traffic has its SSL terminated at the NLB using the certificate given in the annotation.  The traffic of HTTPS origin is then fed to the nginx controller as HTTP traffic.  The traffic of HTTP origin is sent by the NLB to a "tohttps" port (TCP:2443) at the nginx controller, which merely responds to the client with a code 308 permanent redirect - to force the client to use HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The "$" characters in the http-snippet are escaped in this heredoc to protect them from the shell.&lt;/li&gt;
&lt;li&gt;The shell variable CERT_ARN must be set to whatever certificate ARN you have in your AWS Certificate Manager that you intend to use.&lt;/li&gt;
&lt;li&gt;Since this annotation uses the legacy AWS load balancer controller, only a single certificate ARN can be specified.&lt;/li&gt;
&lt;li&gt;The "proxy-real-ip-cidr" is set to the CIDR of the VPC I'm using.  You can force proxy protocol to work by uncommenting the comments in the heredoc.  The controller will not actually enable proxy protocol on the NLB's target groups, so you'll have to use the AWS console and do that manually. It can work, but it's not a solution for production.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CERT_ARN="xxxxxxxx"
cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/k0s/nic-values.yaml
---
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${CERT_ARN}
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
#      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    targetPorts:
      http: tohttps
      https: http
  config:
#    use-proxy-protocol: "true"
    use-forwarded-headers: "true"
    proxy-real-ip-cidr: "172.16.0.0/16"
    http-snippet: |
      server {
        listen 2443;
        return 308 https://\$host\$request_uri;
      }
  containerPort:
    tohttps: 2443
EOF
helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx --values /etc/k0s/nic-values.yaml -n ingress-nginx --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Simple application for testing
&lt;/h2&gt;

&lt;p&gt;Once the nginx ingress controller is running, we can attempt to test. For simplicity, this application is just a yaml manifest instead of a helm chart (there's likely a better way to do this). You can adjust the ingress host to something other than "web.example.com" - to potentially match that SSL cert that you're using - or not, depending on whether your testing can handle SSL name mismatch errors.&lt;/p&gt;

&lt;p&gt;"simple-web-server-with-ingress.yaml":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  namespace: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.53-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server-service
  namespace: web
spec:
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-server-ingress
  namespace: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-server-service
            port:
              number: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply with &lt;code&gt;kubectl apply -f simple-web-server-with-ingress.yaml&lt;/code&gt;.  It will take a few minutes for the NLB to finish provisioning and pass initial health checks. You can monitor the progress in the AWS EC2 console.  The ingress and service can be seen with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ingress -A -o wide
kubectl get service -A -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can attempt to curl to the NLB's DNS address (same as the address shown by kubectl). I'm giving curl the "-k" option to ignore the SSL cert mismatch, and I'm also setting a "Host" HTTP header, since the ingress is explicitly for "web.example.com".&lt;/p&gt;

&lt;p&gt;So, when I execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k -H 'Host: web.example.com' https://xxxxxxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I get the expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If I try to similarly curl using HTTP instead of HTTPS, as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -k -H 'Host: web.example.com' http://xxxxxxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I get the expected 308 permanent redirect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;308 Permanent Redirect&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;308 Permanent Redirect&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that since I didn't change "web.example.com" to some DNS name that I own, if I tell curl to follow the redirect as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -L -k -H 'Host: web.example.com' http://xxxxxxxxxxxxxxxxxxxx.elb.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I get the expected error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl: (6) Could not resolve host: web.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusions at this point
&lt;/h2&gt;

&lt;p&gt;We've shown that it is possible to get K0S working on a single node in AWS.  Using the nginx ingress controller can work for an NLB, but there are issues that make it undesirable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's using a legacy AWS load balancer controller contained in the AWS cloud controller manager, which means:

&lt;ul&gt;
&lt;li&gt;Risk of that code being removed at some unknown point in the future&lt;/li&gt;
&lt;li&gt;Current documentation doesn't match the actual features&lt;/li&gt;
&lt;li&gt;NLBs are not configurable for TLS SNI or proxy protocol&lt;/li&gt;
&lt;li&gt;No support for ALBs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The nginx ingress controller has a somewhat complicated configuration to accomplish a common HTTP to HTTPS redirect.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Moving forward
&lt;/h2&gt;

&lt;p&gt;In the current &lt;a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.8/deploy/installation/#additional-requirements-for-non-eks-clusters" rel="noopener noreferrer"&gt;AWS Load Balancer Controller&lt;/a&gt; docs we find this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Additional requirements for non-EKS clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure subnets are tagged appropriately for auto-discovery to work&lt;/li&gt;
&lt;li&gt;For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm going to revisit using the amazon-vpc-cni-k8s plugin.  I'm thinking that I missed the kubelet configuration requirements when experimenting before and didn't actually have it installed properly.  It appears that there may be components that require installation directly on the worker node - like with the ECR credential provider.  We'll see - every day is a learning experience.&lt;/p&gt;

&lt;p&gt;Has anyone else tried to use K0S in this way? or have advice/clarifications/questions that I may (or may not) be able to answer?&lt;/p&gt;

</description>
      <category>k0s</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
