<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Buster Styren</title>
    <description>The latest articles on DEV Community by Buster Styren (@styren).</description>
    <link>https://dev.to/styren</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F782494%2Fcc1df545-6cd1-4bd9-97ca-3d67e82ab1da.jpeg</url>
      <title>DEV Community: Buster Styren</title>
      <link>https://dev.to/styren</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/styren"/>
    <language>en</language>
    <item>
      <title>Lock down your Kubernetes services with OAuth2 Proxy</title>
      <dc:creator>Buster Styren</dc:creator>
      <pubDate>Wed, 12 Apr 2023 08:44:41 +0000</pubDate>
      <link>https://dev.to/styren/lock-down-your-kubernetes-services-with-oauth2-proxy-28d9</link>
      <guid>https://dev.to/styren/lock-down-your-kubernetes-services-with-oauth2-proxy-28d9</guid>
      <description>&lt;p&gt;&lt;strong&gt;In this tutorial I will show how you can use Oauth2 Proxy to limit access to your Kubernetes services only to members of your GitHub organization, team or just to yourself.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're like me then you might have a long list of self-hosted third-party services running inside of your Kubernetes clusters, such as Grafana, Sentry.io, Elastic and Jaeger. Some of these services may have their own mechanism for authentication, some can be configured with an external OAuth provider, and some won't have any authentication mechanism at all.&lt;/p&gt;

&lt;p&gt;How can we easily expose our services over the internet with a proper audit trail and without letting any unauthorized users in?&lt;/p&gt;

&lt;p&gt;One surprisingly simple solution is to create a GitHub OAuth app and configure your ingresses to use a proxy that permits access based on which GitHub teams a user is part of.&lt;/p&gt;

&lt;p&gt;That's a lot of words, but I'll show you that setting it up takes just minutes of work, and the result is internal services protected with just two ingress annotations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;nginx.ingress.kubernetes.io/auth-signin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://oauth2.symbiosis.host/oauth2/start?rd=https://$host$uri&lt;/span&gt;
&lt;span class="na"&gt;nginx.ingress.kubernetes.io/auth-url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://oauth2.symbiosis.host/oauth2/auth&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Requirements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;li&gt;Kubectl&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/" rel="noopener noreferrer"&gt;NGINX ingress controller&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cert-manager.io/docs/" rel="noopener noreferrer"&gt;Cert-manager&lt;/a&gt; (to generate TLS certs for OAuth2 Proxy, optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What is OAuth2 Proxy?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/" rel="noopener noreferrer"&gt;OAuth2 Proxy&lt;/a&gt; is a reverse proxy that authenticates users for a whole range of different Oauth identity providers, such as Keycloak, Google, GitHub, OpenID Connect, &lt;a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider" rel="noopener noreferrer"&gt;and more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Different providers have different configurations. What makes this powerful in the case of GitHub is the ability to allow or reject requests based on properties such as GitHub organization membership, team membership, email domain and others.&lt;/p&gt;

&lt;p&gt;A running OAuth2 Proxy exposes endpoints for signing in, signing out, callbacks used by the OAuth provider, and more. These endpoints can in turn be used to configure NGINX Ingress so that requests to a specific ingress only succeed for users who are properly signed in and have the required authority.&lt;/p&gt;
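&lt;p&gt;For reference, these are the main routes a running proxy serves under the &lt;code&gt;/oauth2&lt;/code&gt; prefix (abbreviated from the official docs):&lt;/p&gt;

```text
/oauth2/sign_in    login page
/oauth2/sign_out   clears the session cookie
/oauth2/start      begins the OAuth flow (accepts an ?rd= redirect parameter)
/oauth2/callback   redirect URL for the OAuth provider
/oauth2/auth       returns 202 when authenticated, 401 otherwise (used by auth-url)
```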

&lt;p&gt;However, before we can install and configure OAuth2 Proxy we first have to configure the OAuth provider.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating an OAuth app on GitHub
&lt;/h1&gt;

&lt;p&gt;Sign in to GitHub and browse to &lt;a href="https://github.com/settings/developers" rel="noopener noreferrer"&gt;Settings → Developer settings → OAuth Apps → New OAuth App&lt;/a&gt;, then fill in the name and homepage URL of your application.&lt;/p&gt;

&lt;p&gt;The authorization callback URL is the URL that GitHub will redirect to when the authorization is finished. This URL will not be reachable as we haven't configured the OAuth2 Proxy ingress yet. However, it should have the following structure, with &lt;code&gt;example.com&lt;/code&gt; replaced by your domain name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://example.com/oauth2/callback" rel="noopener noreferrer"&gt;https://example.com/oauth2/callback&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app is created, generate a client secret and save the generated value. We will use this value soon to configure OAuth2 Proxy.&lt;/p&gt;

&lt;h1&gt;
  
  
  Installing OAuth2 Proxy with Helm
&lt;/h1&gt;

&lt;p&gt;I've used Helm to install OAuth2 Proxy and found it to be pretty convenient. Check out the &lt;a href="https://github.com/oauth2-proxy/manifests/tree/main/helm/oauth2-proxy" rel="noopener noreferrer"&gt;Helm chart&lt;/a&gt; for the full list of parameters.&lt;/p&gt;

&lt;p&gt;Below is an example &lt;code&gt;values.yaml&lt;/code&gt; file that permits access to any user that is part of the "example-org" GitHub organization, with TLS certificates generated by cert-manager.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clientID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-client-id&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your GitHub OAuth app client ID&lt;/span&gt;
  &lt;span class="na"&gt;configFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;provider = "github"&lt;/span&gt;
    &lt;span class="s"&gt;scope = "user:email read:org"&lt;/span&gt;
    &lt;span class="s"&gt;github_org = "example-org"  # Replace with your GitHub org name, or github_team=team_name to limit access to a specific team&lt;/span&gt;
    &lt;span class="s"&gt;email_domains = [ "*" ]  # Replace with [ "example.com" ] to limit access to users with specific email domain&lt;/span&gt;
    &lt;span class="s"&gt;cookie_domains = [ "example.com" ]  # Replace with domain names that the proxy is allowed to redirect to after auth, prepend . for wildcards, i.e. ".example.com"&lt;/span&gt;
    &lt;span class="s"&gt;whitelist_domains = [ "example.com" ]  # Same as above&lt;/span&gt;

&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/oauth2&lt;/span&gt;
  &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
  &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;acme.cert-manager.io/http01-edit-in-place&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;cert-manager.io/cluster-issuer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with name of your cert-manager ClusterIssuer&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/tls-acme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with host address for the oauth2-proxy ingress&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt;  &lt;span class="c1"&gt;# Same as above&lt;/span&gt;
    &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oauth2-tls&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure to replace &lt;code&gt;example-client-id&lt;/code&gt;, &lt;code&gt;example-org&lt;/code&gt; and &lt;code&gt;example.com&lt;/code&gt; with your GitHub OAuth app client ID, GitHub organization and domain name.&lt;/p&gt;

&lt;p&gt;You may have to edit the cert-manager annotations based on your own configuration, for example by using the &lt;code&gt;cert-manager.io/issuer&lt;/code&gt; annotation for namespaced certificate issuers.&lt;/p&gt;
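&lt;p&gt;For instance, with a namespaced issuer the annotation block might look like this (a sketch; &lt;code&gt;letsencrypt&lt;/code&gt; is a placeholder issuer name):&lt;/p&gt;

```yaml
annotations:
  acme.cert-manager.io/http01-edit-in-place: "true"
  cert-manager.io/issuer: letsencrypt   # namespaced Issuer instead of cluster-issuer
  kubernetes.io/tls-acme: "true"
```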

&lt;p&gt;We also need to configure a cookie secret, which OAuth2 Proxy uses to encrypt and decrypt user session cookies. The value must be a base64-encoded string of random bytes (32 is a good length); one can be generated with &lt;code&gt;openssl&lt;/code&gt; using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 32 | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; 32 | &lt;span class="nb"&gt;base64&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
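&lt;p&gt;If you don't have &lt;code&gt;openssl&lt;/code&gt; at hand, an equivalent sketch using only the Python standard library generates the same kind of value:&lt;/p&gt;

```python
# Generate an OAuth2 Proxy cookie secret: 32 random bytes, base64-encoded.
# (OAuth2 Proxy accepts 16-, 24- or 32-byte secrets before encoding.)
import base64
import secrets

cookie_secret = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode()
print(cookie_secret)
```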

&lt;p&gt;Finally, we can add the OAuth2 Proxy helm repository and install the chart with our values file, cookie and client secrets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm &lt;span class="nb"&gt;install &lt;/span&gt;oauth2-proxy oauth2-proxy/oauth2-proxy &lt;span class="nt"&gt;--file&lt;/span&gt; values.yaml &lt;span class="nt"&gt;--set&lt;/span&gt; config.cookieSecret&lt;span class="o"&gt;=&lt;/span&gt;example-cookie-secret &lt;span class="nt"&gt;--set&lt;/span&gt; config.clientSecret&lt;span class="o"&gt;=&lt;/span&gt;example-client-secret


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  oauth2-proxy without cert-manager
&lt;/h2&gt;

&lt;p&gt;If you don't have cert-manager installed you can either &lt;a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="noopener noreferrer"&gt;create the TLS certificate manually&lt;/a&gt; or disable TLS altogether:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clientID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-client-id&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your GitHub OAuth app client ID&lt;/span&gt;
  &lt;span class="na"&gt;configFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;github_org = "example-org"  # Replace with your GitHub org name, or github_team=team_name to limit access to a specific team&lt;/span&gt;
    &lt;span class="s"&gt;scope = "user:email read:org"&lt;/span&gt;
    &lt;span class="s"&gt;email_domains = [ "*" ]  # Replace with [ "example.com" ] to limit access to users with specific email domain&lt;/span&gt;
    &lt;span class="s"&gt;provider = "github"&lt;/span&gt;
    &lt;span class="s"&gt;cookie_secure = false&lt;/span&gt;
    &lt;span class="s"&gt;cookie_domains = [ "example.com" ]  # Replace with domain names that the proxy is allowed to redirect to after auth&lt;/span&gt;
    &lt;span class="s"&gt;whitelist_domains = [ "example.com" ]  # Same as above&lt;/span&gt;

&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/oauth2&lt;/span&gt;
  &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
  &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;example.com&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with host address for the oauth2-proxy ingress&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that you need to change the scheme of the callback and auth URLs to &lt;code&gt;http://&lt;/code&gt;, both in the GitHub OAuth configuration and in the annotations below.&lt;/p&gt;
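&lt;p&gt;With TLS disabled, the annotation pair would then use plain HTTP (a sketch, with &lt;code&gt;example.com&lt;/code&gt; as a placeholder domain):&lt;/p&gt;

```yaml
nginx.ingress.kubernetes.io/auth-signin: http://example.com/oauth2/start?rd=http://$host$uri
nginx.ingress.kubernetes.io/auth-url: http://example.com/oauth2/auth
```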

&lt;h1&gt;
  
  
  Configuring our ingress to use OAuth2 Proxy
&lt;/h1&gt;

&lt;p&gt;Now that OAuth2 Proxy is up and running, we can configure our ingress to protect our internal services with GitHub OAuth.&lt;/p&gt;

&lt;p&gt;Add the following annotations to your ingress:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;nginx.ingress.kubernetes.io/auth-signin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://example.com/oauth2/start?rd=https://$host$uri&lt;/span&gt;
&lt;span class="na"&gt;nginx.ingress.kubernetes.io/auth-url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://example.com/oauth2/auth&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure to replace &lt;code&gt;example.com&lt;/code&gt; with your own domain name.&lt;/p&gt;

&lt;p&gt;The first annotation, &lt;code&gt;auth-signin&lt;/code&gt;, redirects unauthenticated requests to the OAuth2 Proxy login page. After the user logs in and authorizes the application, they are redirected back to the originally requested URL thanks to the &lt;code&gt;rd&lt;/code&gt; query parameter, which forwards the redirect URL to the proxy.&lt;/p&gt;

&lt;p&gt;The second annotation, &lt;code&gt;auth-url&lt;/code&gt;, specifies the authentication endpoint that NGINX uses to verify the user's session.&lt;/p&gt;

&lt;p&gt;If you don't have a service to expose yet, use the commands below to launch a protected hello-world application. Make sure to replace &lt;code&gt;example.com&lt;/code&gt; with your domain name.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl create deployment web &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;gcr.io/google-samples/hello-app:1.0
kubectl expose deployment web &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8080
kubectl create ingress web &lt;span class="nt"&gt;--class&lt;/span&gt; nginx &lt;span class="s1"&gt;'--rule=example.com/=web:8080'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--annotation&lt;/span&gt; &lt;span class="s1"&gt;'nginx.ingress.kubernetes.io/auth-signin=https://example.com/oauth2/start?rd=https://$host$uri'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--annotation&lt;/span&gt; &lt;span class="s1"&gt;'nginx.ingress.kubernetes.io/auth-url=https://example.com/oauth2/auth'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Accessing an endpoint protected by OAuth2 Proxy will redirect you to a GitHub OAuth sign-in page.&lt;/p&gt;

&lt;p&gt;Make sure to grant the app access to the organization you are authenticating against, so that OAuth2 Proxy can find it and verify your membership.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6x51kqmqdqxtp9m2rp0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6x51kqmqdqxtp9m2rp0h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If everything is set up correctly you should be redirected to your protected endpoint after signing in.&lt;/p&gt;

&lt;h1&gt;
  
  
  Limitations
&lt;/h1&gt;

&lt;p&gt;Unfortunately, OAuth2 Proxy cannot be configured to allow different GitHub teams or organizations per ingress. If you wish to limit access to ingress1 only to team1 while also limiting access to ingress2 only to team2, you will have to run multiple OAuth2 Proxy instances and GitHub OAuth applications.&lt;/p&gt;

&lt;p&gt;Your services are still exposed on the internet. A security vulnerability in GitHub OAuth, OAuth2 Proxy or your configuration might expose your service to the whole world. Keeping internal services behind a private network is always a good idea.&lt;/p&gt;

&lt;h1&gt;
  
  
  Putting it all together
&lt;/h1&gt;

&lt;p&gt;To conclude, we've set up a GitHub OAuth app and configured OAuth2 Proxy to allow access to layer-7 ingresses only to members of our GitHub organization or team. We can use the aforementioned annotations on any ingress that we wish to restrict to authenticated users.&lt;/p&gt;

&lt;p&gt;OAuth2 Proxy isn't a silver bullet for securing internal services; rather, it is an easy way to secure many different services or UIs without much work. It can also act as a first line of defense, even for services that are only routable on a private network or that offer their own authentication mechanism, like Grafana.&lt;/p&gt;

&lt;p&gt;You can check out OAuth2 Proxy's &lt;a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; for more tips, tricks and config parameters.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>github</category>
    </item>
    <item>
      <title>How to save a fortune with self hosted GitHub runners</title>
      <dc:creator>Buster Styren</dc:creator>
      <pubDate>Mon, 27 Feb 2023 19:05:42 +0000</pubDate>
      <link>https://dev.to/styren/how-to-save-a-fortune-with-self-hosted-github-runners-2m93</link>
      <guid>https://dev.to/styren/how-to-save-a-fortune-with-self-hosted-github-runners-2m93</guid>
      <description>&lt;p&gt;GitHub has made it possible to run GitHub Actions using your own self-hosted runners. Thanks to the &lt;a href="https://github.com/actions/actions-runner-controller" rel="noopener noreferrer"&gt;Actions Runner Controller&lt;/a&gt; it is surprisingly easy to run actions in your Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;In this post we will show how to install Actions Runner Controller into an existing Kubernetes cluster to run customized runners at a fraction of the cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;At Symbiosis we run &lt;strong&gt;a lot&lt;/strong&gt; of tests on each commit, so we've spent considerable time making sure they run quickly and can perform complex integration tests.&lt;/p&gt;

&lt;p&gt;Using GitHub's own runners is therefore not ideal: the average commit would cost us almost a dollar, and sadly we make a lot of small changes.&lt;/p&gt;

&lt;p&gt;We could either pay $40 for 5,000 minutes on a 2-CPU GitHub runner, or pay roughly $2 to rent a 2-CPU, 8 GB Kubernetes node for the same 5,000 minutes and run our actions there instead.&lt;/p&gt;
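&lt;p&gt;The arithmetic behind that comparison, as a sketch (the $0.008/min rate for a 2-CPU Linux runner and the ~$2 node rental are assumptions based on pricing at the time):&lt;/p&gt;

```python
# Rough cost comparison: GitHub-hosted runners vs. a rented Kubernetes node.
# Both rates are illustrative assumptions, not authoritative pricing.
GITHUB_RATE_PER_MIN = 0.008  # USD per minute, 2-CPU Linux GitHub-hosted runner
NODE_COST = 2.00             # USD for ~5000 minutes of a 2-CPU 8 GB node

minutes = 5000
github_cost = GITHUB_RATE_PER_MIN * minutes

print(f"GitHub-hosted: ${github_cost:.2f}")          # $40.00
print(f"Self-hosted node: ${NODE_COST:.2f}")         # $2.00
print(f"Cost ratio: {github_cost / NODE_COST:.0f}x") # 20x
```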

&lt;p&gt;And as we'll see below, we can also customize our runners, gaining even more flexibility than the default GitHub runners by heavily modifying the runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster&lt;/li&gt;
&lt;li&gt;NGINX ingress (or other ingress controller)&lt;/li&gt;
&lt;li&gt;cert-manager (optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing Actions Runner Controller
&lt;/h2&gt;

&lt;p&gt;The Actions Runner Controller (ARC) is the service responsible for monitoring your selected repositories and firing up new runners.&lt;/p&gt;

&lt;p&gt;We will show you how to install it using &lt;code&gt;kubectl&lt;/code&gt; but using helm is &lt;a href="https://github.com/actions/actions-runner-controller/blob/master/docs/installing-arc.md" rel="noopener noreferrer"&gt;just as easy&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f https://github.com/actions-runner-controller/actions-runner-controller/releases/download/v0.25.2/actions-runner-controller.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connecting Actions Runner Controller to GitHub
&lt;/h2&gt;

&lt;p&gt;So we have the controller running. Now we need to authenticate so that commits, PRs, comments and other events can be picked up by the controller and trigger the start of a runner.&lt;/p&gt;

&lt;p&gt;We have two options: either we create a Personal Access Token (PAT) with admin access to our repos, or we &lt;a href="https://github.com/actions/actions-runner-controller#deploying-using-github-app-authentication" rel="noopener noreferrer"&gt;create a GitHub App&lt;/a&gt; and install it into the repos instead.&lt;/p&gt;

&lt;p&gt;For simplicity we will authenticate using a PAT.&lt;/p&gt;

&lt;h3&gt;
  
  
  Personal Access Token (PAT)
&lt;/h3&gt;

&lt;p&gt;Create a token under &lt;a href="https://github.com/settings/tokens/new" rel="noopener noreferrer"&gt;Settings &amp;gt; Developer settings &amp;gt; Personal access token&lt;/a&gt;. Make sure you have admin access to the repos your runners will run on.&lt;/p&gt;

&lt;p&gt;Select the &lt;code&gt;repo (Full control)&lt;/code&gt; permission, and if your runners will run in an organization you need to select the following permissions as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;admin:org (Full control)&lt;/li&gt;
&lt;li&gt;admin:public_key (read:public_key)&lt;/li&gt;
&lt;li&gt;admin:repo_hook (read:repo_hook)&lt;/li&gt;
&lt;li&gt;admin:org_hook (Full control)&lt;/li&gt;
&lt;li&gt;notifications (Full control)&lt;/li&gt;
&lt;li&gt;workflow (Full control)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let's store the token we just created in a secret that our controller can use for authentication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic controller-manager \
    --namespace=actions-runner-system \
    --from-literal=github_token=&amp;lt;YOUR PERSONAL ACCESS TOKEN&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a workflow
&lt;/h2&gt;

&lt;p&gt;Before we move on it is perhaps a good time to create an actual workflow that will eventually trigger our self-hosted runner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Run test on PRs
on:
  pull_request: {}
jobs:
  test:
    name: "Run tests"
    runs-on: [self-hosted]
    steps:
    - name: Checkout repo
      uses: actions/checkout@master
    - name: Run tests
      run: yarn test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow triggers on commits to pull requests and runs &lt;code&gt;yarn test&lt;/code&gt;. Let's put it in &lt;code&gt;.github/workflows/test-workflow.yaml&lt;/code&gt; and push the changes to our repository.&lt;/p&gt;

&lt;p&gt;Notice the &lt;code&gt;runs-on: [self-hosted]&lt;/code&gt; option that will instruct GitHub to select any of your own self-hosted runners. Don't worry, you can be more specific about which type of runner to use. More on that later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webhook &amp;amp; Ingress
&lt;/h2&gt;

&lt;p&gt;Runners can be triggered through either pull-based mechanics, such as polling, or push-based mechanics, such as webhooks. Most triggers come with drawbacks: some spawn too many runners and some too few, which may leave your actions in a slow-moving queue.&lt;/p&gt;

&lt;p&gt;However, the &lt;code&gt;workflowJob&lt;/code&gt; trigger has none of these drawbacks, though it requires us to create an Ingress and configure a GitHub webhook. This step isn't strictly necessary, but we can assure you it's worth the effort.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: actions-runner-controller-github-webhook-server
  namespace: actions-runner-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  tls:
  - hosts:
    - your.domain.com
    secretName: your-tls-secret-name
  rules:
  - http:
      paths:
      - path: /actions-runner-controller-github-webhook-server
        pathType: Prefix
        backend:
          service:
            name: actions-runner-controller-github-webhook-server
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Ingress is configured for NGINX ingress, so make sure to edit it depending on your ingress controller. It also assumes that cert-manager is configured to automatically provision a TLS certificate.&lt;/p&gt;

&lt;p&gt;The next step is to define the webhook in GitHub. Go to &lt;code&gt;Settings &amp;gt; Webhooks &amp;gt; Add webhook&lt;/code&gt; in your target repository.&lt;/p&gt;

&lt;p&gt;First let's set the payload URL to point to the ingress, for example using the details above: &lt;a href="https://your.domain.com/actions-runner-controller-github-webhook-server" rel="noopener noreferrer"&gt;https://your.domain.com/actions-runner-controller-github-webhook-server&lt;/a&gt;. Set content type to &lt;code&gt;json&lt;/code&gt; and enable the &lt;em&gt;Workflow Jobs&lt;/em&gt; permission.&lt;/p&gt;

&lt;p&gt;Once that's done, create the webhook and go to &lt;em&gt;Recent Deliveries&lt;/em&gt; to verify that the ingress can be reached successfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Listening on events
&lt;/h2&gt;

&lt;p&gt;We have our controller running, it's authenticated and we have a workflow. The only thing left is to create the actual runners.&lt;/p&gt;

&lt;p&gt;Now, we could just create a Runner resource and be done with it, but just like a Pod it wouldn't have any replicas or any autoscaling.&lt;/p&gt;

&lt;p&gt;Instead, we create a RunnerDeployment and a HorizontalRunnerAutoscaler. Any Kubernetes user will notice plenty of similarities to regular Deployments and HPAs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: actions-runners
spec:
  minReplicas: 0
  maxReplicas: 5
  scaleTargetRef:
    kind: RunnerDeployment
    name: actions-runners
  scaleUpTriggers:
  - githubEvent:
      workflowJob: {}
    duration: "30m"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying the above manifest will launch a deployment that will scale up to five concurrent runners. Remember to change the manifest to track the repository of choice (and make sure the access token has access to it).&lt;/p&gt;

&lt;p&gt;Voilà! We're now able to create a pull request to verify that the runner is automatically triggered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using labels to identify runners
&lt;/h2&gt;

&lt;p&gt;In a repo with many workflows, for example a monorepo, it may be necessary to run many different runners at once.&lt;/p&gt;

&lt;p&gt;In order to more carefully select which runner to use for a specific workflow we can define custom labels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
      labels:
      - my-label
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this label we're able to select this runner by setting both &lt;code&gt;self-hosted&lt;/code&gt; and &lt;code&gt;my-label&lt;/code&gt; in our workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Run test on PRs
on:
  pull_request: {}
jobs:
  test:
    name: "Run tests"
    runs-on: [self-hosted, my-label]
    steps:
    - name: Checkout repo
      uses: actions/checkout@master
    - name: Run tests
      run: yarn test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Customizing runners with custom volumes
&lt;/h2&gt;

&lt;p&gt;Runners can be configured to pass through volumes from the host system, or to attach a PVC to runners.&lt;/p&gt;

&lt;p&gt;At Symbiosis we use PVCs to expose KVM to our runners, in order to run integration tests with virtualization enabled. We also use PVCs to attach large images that are used to set up a multi-tenant cloud environment for integration testing.&lt;/p&gt;

&lt;p&gt;Custom volumes can also be used for &lt;a href="https://github.com/actions/actions-runner-controller/blob/master/docs/using-custom-volumes.md#docker-image-layers-caching" rel="noopener noreferrer"&gt;layer caching&lt;/a&gt;, to improve the speed of building OCI images.&lt;/p&gt;

&lt;p&gt;The below RunnerDeployment provisions a 10Gi PVC for each runner through an ephemeral volume claim. For volumes that should outlive a runner pod, ARC also offers the &lt;code&gt;RunnerSet&lt;/code&gt; resource, which functions much like a &lt;code&gt;StatefulSet&lt;/code&gt; in that it allocates each runner on a node where its volume can be properly mounted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: actions-runners
spec:
  template:
    spec:
      repository: myorg/myrepo
      volumeMounts:
      - mountPath: /runner/work
        name: pvc
      volumes:
      - name: pvc
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: [ "ReadWriteOnce" ]
              resources:
                requests:
                  storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This PVC can be used to store any data that we need between runs without keeping an unnecessarily large amount of data in the GitHub Actions cache! At the time of writing, 100GiB of storage using GitHub runners would cost $24/mo. With cloud providers like Linode, Symbiosis or Scaleway that cost would be closer to $8/mo.&lt;/p&gt;

&lt;h2&gt;
  
  
  To summarize
&lt;/h2&gt;

&lt;p&gt;Running your own actions runners requires some upfront configuration but comes with a list of benefits such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced costs&lt;/li&gt;
&lt;li&gt;Attaching custom volumes (such as hostPath or PVCs) to your runners&lt;/li&gt;
&lt;li&gt;Customizing images or adding sidecars&lt;/li&gt;
&lt;li&gt;Integrating workflow runs into your existing Kubernetes observability stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, we highly recommend running your own runners to save costs, simplify management and increase flexibility by bringing the runners into the Kubernetes ecosystem.&lt;/p&gt;

&lt;p&gt;Check out Symbiosis &lt;a href="https://symbiosis.host/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Symbiosis: The cloud platform for Kubernetes</title>
      <dc:creator>Buster Styren</dc:creator>
      <pubDate>Tue, 29 Nov 2022 13:15:03 +0000</pubDate>
      <link>https://dev.to/styren/symbiosis-the-cloud-platform-for-kubernetes-50f4</link>
      <guid>https://dev.to/styren/symbiosis-the-cloud-platform-for-kubernetes-50f4</guid>
      <description>&lt;p&gt;As an engineer, you are faced with many decisions when building a Kubernetes environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do I set up GitOps &amp;amp; CI/CD?&lt;/li&gt;
&lt;li&gt;How do I create application delivery pipelines?&lt;/li&gt;
&lt;li&gt;How do I keep costs down?&lt;/li&gt;
&lt;li&gt;How do I quickly boot new prod-like environments?&lt;/li&gt;
&lt;li&gt;How can I set up self-service for engineering teams?&lt;/li&gt;
&lt;li&gt;How do I avoid maintaining docker-compose files for local development or testing?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For years I've been configuring and maintaining dozens of tools on top of EKS to create an efficient and automated setup for day-2 operations and simplify deployment of new services.&lt;/p&gt;

&lt;p&gt;But, maintaining a complicated k8s stack can take considerable effort as infrastructure evolves and changes over time. Moreover, the high costs and sluggishness of EKS or GKE make it harder to use Kubernetes outside of running prod and staging environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://symbiosis.host" rel="noopener noreferrer"&gt;Symbiosis&lt;/a&gt; is a managed Kubernetes service that plugs some of these holes to make it simple for DevOps to manage day-2 operations and for developers to build, test and deploy. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this guide I will take you through how to create a k8s cluster and add your projects to Symbiosis.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Install the CLI with &lt;code&gt;brew install symbiosis-cloud/tap/sym&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then log in with &lt;code&gt;sym login&lt;/code&gt; to authenticate to your Symbiosis team. Make sure you have an account set up already.&lt;/p&gt;

&lt;p&gt;Creating your first k8s cluster is as easy as issuing &lt;code&gt;sym cluster create&lt;/code&gt; -- or use our &lt;a href="https://registry.terraform.io/providers/symbiosis-cloud/symbiosis/latest/docs" rel="noopener noreferrer"&gt;terraform provider&lt;/a&gt; for IaC.&lt;/p&gt;
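&lt;p&gt;For the Terraform route, a cluster definition could look roughly like the sketch below. The resource and attribute names here are assumptions for illustration; check the provider documentation for the exact schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    symbiosis = {
      source = "symbiosis-cloud/symbiosis"
    }
  }
}

# Illustrative cluster resource; name and region are placeholders
resource "symbiosis_cluster" "example" {
  name   = "my-cluster"
  region = "germany-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;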

&lt;p&gt;The kube context is automatically installed, so you can access the cluster with &lt;code&gt;kubectl&lt;/code&gt;, &lt;code&gt;k9s&lt;/code&gt; and other tools that read the kubeconfig file.&lt;/p&gt;
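&lt;p&gt;Put together, the bootstrap flow from the steps above looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the CLI and authenticate to your team
brew install symbiosis-cloud/tap/sym
sym login

# Create a cluster; the kube context is added to your kubeconfig
sym cluster create

# Verify access with any tool that reads the kubeconfig
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;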

&lt;p&gt;&lt;strong&gt;Well done!&lt;/strong&gt; You have a working k8s cluster. If you're new to k8s you might read our guide on how to &lt;a href="https://symbiosis.host/docs/quickstart/deploying-a-container" rel="noopener noreferrer"&gt;deploy a container&lt;/a&gt; to your cluster.&lt;/p&gt;

&lt;p&gt;Let's look into what else Symbiosis can offer!&lt;/p&gt;

&lt;h2&gt;
  
  
  Projects
&lt;/h2&gt;

&lt;p&gt;With Projects you can link your GitHub repositories to automatically create k8s environments for any occasion: development, testing, or a staging environment in preparation for a release.&lt;/p&gt;

&lt;p&gt;The core idea of Projects is to offer the simplicity of PaaS but without any opinionated or limiting abstractions.&lt;/p&gt;

&lt;p&gt;Engineers should be able to easily build, test and release on prod-like environments without breaking a sweat.&lt;/p&gt;

&lt;p&gt;Projects is currently in closed preview.&lt;/p&gt;

&lt;h3&gt;
  
  
  The &lt;code&gt;sym.yaml&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;sym.yaml&lt;/code&gt; file is placed at the root of your repository and defines the structure of your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
  helm:
  - chart: "./charts"
    values:
      vaultToken: "{{.secret.VAULT_TOKEN}}"
      replicas: 2
test:
- image: "backend:latest"
  command: "go test"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Define and customize your Helm charts or Kustomize manifests to have them wired to your project.&lt;/p&gt;

&lt;p&gt;With the file in place we can instantly boot a cluster that runs our project with &lt;code&gt;sym run&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Tests run inside your cluster and can be used to define complex end-to-end or integration tests that span many services.&lt;/p&gt;

&lt;p&gt;Run your test suite in the cluster with &lt;code&gt;sym test&lt;/code&gt;.&lt;/p&gt;
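&lt;p&gt;With the &lt;code&gt;sym.yaml&lt;/code&gt; in place, the day-to-day loop comes down to two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Boot a cluster that runs the project as defined in sym.yaml
sym run

# Execute the test suite inside the cluster
sym test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;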

&lt;h3&gt;
  
  
  CI/CD
&lt;/h3&gt;

&lt;p&gt;Pipelines are automatically created to build, test and ship your projects. In practice this means a preview cluster is created that will run your tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfy2irztipcs6m4akyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfy2irztipcs6m4akyx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The preview environment can also be used to help with PR reviews or used to share progress with your team.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitOps
&lt;/h3&gt;

&lt;p&gt;In the project's settings you can configure a production cluster. Merging a PR will automatically apply any changes to your prod environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d8dyvztls3xh5e44nx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d8dyvztls3xh5e44nx2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Some users may need to use Symbiosis for dev and CI/CD but deploy to a cluster in AWS or Google Cloud. It's on our roadmap to add options to deploy to other providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't lose sleep over k8s bills
&lt;/h2&gt;

&lt;p&gt;Kubernetes shouldn't have to cost a fortune. At Symbiosis we offer compute at less than half the price of AWS or DigitalOcean.&lt;/p&gt;

&lt;p&gt;We're also committed to making sure our platform is efficient. Did you know that EKS nodes reserve 1.6GiB of memory? That means a smaller cluster with 4GB nodes will waste at least 41% of its memory.&lt;/p&gt;

&lt;p&gt;Cheap bandwidth reduces the friction of using multiple clouds and services. We're determined to combat vendor lock-in by charging $5 per TB of traffic, compared to $92 with AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going further
&lt;/h2&gt;

&lt;p&gt;To learn more about Symbiosis I recommend reading our &lt;a href="https://symbiosis.host/docs" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Keep tabs on what we're doing over on &lt;a href="https://twitter.com/symbiosiscloud" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy coding! 🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
