<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Darek Dwornikowski ☁</title>
    <description>The latest articles on DEV Community by Darek Dwornikowski ☁ (@7d1).</description>
    <link>https://dev.to/7d1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F86741%2F7917b8dd-9ba2-41a5-af52-4a565dcc5628.jpg</url>
      <title>DEV Community: Darek Dwornikowski ☁</title>
      <link>https://dev.to/7d1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/7d1"/>
    <language>en</language>
    <item>
      <title>Django with cloud sql auth proxy locally and in Cloud Run</title>
      <dc:creator>Darek Dwornikowski ☁</dc:creator>
      <pubDate>Tue, 20 Sep 2022 07:40:12 +0000</pubDate>
      <link>https://dev.to/7d1/django-with-cloud-sql-auth-proxy-locally-and-in-cloud-run-5am7</link>
      <guid>https://dev.to/7d1/django-with-cloud-sql-auth-proxy-locally-and-in-cloud-run-5am7</guid>
      <description>&lt;p&gt;Google's Cloud SQL Auth Proxy [3] allows you to set up a connection to a GCP SQL server from wherever you are via a secured tunnel. You can then point your application to a local address SQL proxy listens on and use it like local db. At least the DB "feel" like local. &lt;/p&gt;

&lt;p&gt;The same proxy is used in many GCP services like Google Cloud Run and GKE (if you configure it as a sidecar). &lt;/p&gt;

&lt;p&gt;Running the SQL proxy is pretty easy (the example is for the pre-2.0.0 version):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./cloud_sql_proxy -instances=my-gcp-project:europe-north1:mydbinstancename=tcp:127.0.0.1:20000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Where&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;my-gcp-project:europe-north1:mydbinstancename&lt;/code&gt; is the "connection" name of your managed SQL server. &lt;/li&gt;
&lt;li&gt;the &lt;code&gt;tcp:&lt;/code&gt; directive tells the proxy where to listen locally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now the trick is that when you run the proxy on your dev machine, or in Docker Compose for local development, you are fine with TCP. Yet when you use Cloud Run, it expects you to connect to a UNIX socket, typically residing at this path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/cloudsql/my-gcp-project:europe-north1:mydbinstancename/.s.PGSQL.5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
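
&lt;p&gt;For local development, the proxy can also run as a Docker Compose service next to your app. The sketch below is only an illustration: the image tag, service names, and the credentials path are assumptions, not something this setup requires. Note how &lt;code&gt;DB_HOST&lt;/code&gt; and &lt;code&gt;DB_PORT&lt;/code&gt; point at the proxy service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  cloudsql-proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
    command: /cloud_sql_proxy -instances=my-gcp-project:europe-north1:mydbinstancename=tcp:0.0.0.0:20000 -credential_file=/secrets/sa.json
    volumes:
      - ./sa.json:/secrets/sa.json:ro
  web:
    build: .
    environment:
      DB_HOST: cloudsql-proxy
      DB_PORT: "20000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;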



&lt;p&gt;This is quite different from our TCP connection. Does that mean we need a separate Django config for local development and for deployment? Not necessarily. &lt;/p&gt;

&lt;p&gt;If you look at typical Django settings (Postgres in this case), you will notice they seem to expect a TCP connection. However, by convention, when HOST starts with &lt;code&gt;/&lt;/code&gt; the engine treats it as a UNIX socket path. One thing you need to remember is to set PORT to an empty value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;DATABASES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;'default'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;'ENGINE'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'django.db.backends.postgresql_psycopg2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;'NAME'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DB_NAME'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'some db '&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="s"&gt;'USER'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DB_USER'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'my user'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="s"&gt;'PASSWORD'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DB_PASS'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="s"&gt;'HOST'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DB_HOST'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'127.0.0.1'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="s"&gt;'PORT'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DB_PORT'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;'20000'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Cloud Run, all you need to do is set the environment variable &lt;code&gt;DB_HOST&lt;/code&gt; to &lt;code&gt;/cloudsql/my-gcp-project:europe-north1:mydbinstancename&lt;/code&gt; (and &lt;code&gt;DB_PORT&lt;/code&gt; to an empty value).&lt;/p&gt;
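
&lt;p&gt;For example, when deploying, you can attach the Cloud SQL instance and set the variable in one go. This is only a sketch: the service and image names are made up, and remember that &lt;code&gt;DB_PORT&lt;/code&gt; must end up empty, as described above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud run deploy my-django-service \
  --image gcr.io/my-gcp-project/my-django-image \
  --add-cloudsql-instances my-gcp-project:europe-north1:mydbinstancename \
  --set-env-vars DB_HOST=/cloudsql/my-gcp-project:europe-north1:mydbinstancename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;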

&lt;p&gt;This is it. Locally you connect to the database over TCP, and on Cloud Run over a UNIX socket, all just by changing environment variables, exactly as the Twelve-Factor App [1] pattern teaches us. :) &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[1] &lt;a href="https://12factor.net"&gt;https://12factor.net&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;[2] &lt;a href="https://cloud.google.com/sql/docs/postgres/connect-instance-cloud-run"&gt;https://cloud.google.com/sql/docs/postgres/connect-instance-cloud-run&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;[3] &lt;a href="https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#unix-sockets"&gt;https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#unix-sockets&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>django</category>
      <category>googlecloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using go modules from private repositories in Azure DevOps Pipelines</title>
      <dc:creator>Darek Dwornikowski ☁</dc:creator>
      <pubDate>Wed, 11 Dec 2019 09:52:01 +0000</pubDate>
      <link>https://dev.to/7d1/using-go-modules-from-private-repositories-in-azure-devops-pipelines-44dk</link>
      <guid>https://dev.to/7d1/using-go-modules-from-private-repositories-in-azure-devops-pipelines-44dk</guid>
      <description>&lt;p&gt;This post will explain how to use go modules that you keep in private repositories in GitHub. Sometimes you have internal modules that you do not really want to expose to the open source community. There might be several reasons for it, for example you are still working on the solution and it is not ready to see public, or it is maybe a protected intellectual property. You keep your code in a private repo then and locally go get uses your ssh keys to access the repo and download the package to the go mod cache. &lt;/p&gt;

&lt;p&gt;However, CI/CD tooling like Azure DevOps (ADO) does not have access to these private repositories out of the box. It needs to be equipped with an SSH key that it can then use to access GitHub. This post shows how you can configure this end to end. &lt;/p&gt;

&lt;p&gt;TL;DR, what we will do is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generate an SSH key pair&lt;/li&gt;
&lt;li&gt;add the public key to the GitHub repo&lt;/li&gt;
&lt;li&gt;upload the private key to the Azure DevOps secure files&lt;/li&gt;
&lt;li&gt;configure the Azure DevOps pipeline via YAML&lt;/li&gt;
&lt;li&gt;have fun doing it&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Generate an SSH key pair
&lt;/h2&gt;

&lt;p&gt;This is a simple step: let's generate a key pair that will be used to authenticate ADO to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh-keygen -t rsa -b 4096 -C "your@email.com"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When asked, save the key pair under the name &lt;code&gt;mykey&lt;/code&gt;. You will get &lt;code&gt;mykey&lt;/code&gt;, which stores the private key, and &lt;code&gt;mykey.pub&lt;/code&gt; with the public key. &lt;/p&gt;

&lt;h2&gt;
  
  
  Add the public key to the GitHub repo
&lt;/h2&gt;

&lt;p&gt;Assuming your repo is called &lt;code&gt;my-go-module&lt;/code&gt;, navigate to &lt;br&gt;
&lt;code&gt;https://github.com/{your_org}/my-go-module/settings/keys&lt;/code&gt; and upload the contents of &lt;code&gt;mykey.pub&lt;/code&gt; as a deploy key. &lt;/p&gt;
&lt;h2&gt;
  
  
  Upload the private key to the Azure DevOps secure files
&lt;/h2&gt;

&lt;p&gt;Now you need to upload the private key to the ADO secure files. You can find them under Pipelines -&amp;gt; Library -&amp;gt; Secure files. Upload the &lt;code&gt;mykey&lt;/code&gt; private key file there and call it &lt;code&gt;myPrivateKey&lt;/code&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Configure the Azure DevOps pipeline via YAML
&lt;/h2&gt;

&lt;p&gt;Now we have everything in place. The last thing to do is to install the SSH key in the pipeline so that &lt;code&gt;go get&lt;/code&gt; can use it to access GitHub. For that we will use the &lt;code&gt;InstallSSHKey&lt;/code&gt; task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;InstallSSHKey@0&lt;/span&gt;
        &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;knownHostsEntry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;here you put github known host entry&amp;gt;&lt;/span&gt;
          &lt;span class="na"&gt;sshPublicKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;here you put your public key content&amp;gt;&lt;/span&gt; 
          &lt;span class="na"&gt;sshKeySecureFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myPrivateKey&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First run the command below and copy the line that does not start with &lt;code&gt;#&lt;/code&gt;. Paste it into the &lt;code&gt;knownHostsEntry&lt;/code&gt; parameter. This makes sure git will not ask to add github.com to the known_hosts file, because it will already be there.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;➜ ssh-keyscan  github.com
&lt;span class="c"&gt;# github.com:22 SSH-2.0-babeld-95694f5e&lt;/span&gt;
&lt;span class="c"&gt;# github.com:22 SSH-2.0-babeld-95694f5e&lt;/span&gt;
github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ&lt;span class="o"&gt;==&lt;/span&gt;
&lt;span class="c"&gt;# github.com:22 SSH-2.0-babeld-95694f5e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now copy the contents of &lt;code&gt;mykey.pub&lt;/code&gt; into the &lt;code&gt;sshPublicKey&lt;/code&gt; parameter, and finally set &lt;code&gt;sshKeySecureFile&lt;/code&gt; to the secure file name you chose (like &lt;code&gt;myPrivateKey&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;This task configures access to the private repository. There is still one thing to do before you can download the module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;git config --global url."git@github.com:{yourorg}/my-go-module".insteadOf "https://github.com/{yourorg}/my-go-module"&lt;/span&gt;
          &lt;span class="s"&gt;go build&lt;/span&gt;
        &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;binaries'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step is needed so that &lt;code&gt;go get&lt;/code&gt; accesses the module over SSH instead of the default HTTPS. Substitute &lt;code&gt;{yourorg}&lt;/code&gt; with your organization name or username so that it matches the URI of your module. &lt;/p&gt;
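
&lt;p&gt;As a side note, on Go 1.13 or newer you will likely also want to mark the module as private, so that &lt;code&gt;go get&lt;/code&gt; skips the public module proxy and checksum database for it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go env -w GOPRIVATE=github.com/{yourorg}/my-go-module
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;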

</description>
      <category>azure</category>
      <category>devops</category>
      <category>github</category>
      <category>go</category>
    </item>
    <item>
      <title>Resizing persistent volumes in EKS/AKS/GKE</title>
      <dc:creator>Darek Dwornikowski ☁</dc:creator>
      <pubDate>Sun, 25 Aug 2019 16:36:00 +0000</pubDate>
      <link>https://dev.to/7d1/resizing-persistent-volumes-in-eks-aks-gke-3oa7</link>
      <guid>https://dev.to/7d1/resizing-persistent-volumes-in-eks-aks-gke-3oa7</guid>
      <description>&lt;p&gt;Persistent volumes (PVs) in Kubernetes is the way to abstract out block volumes used by pods in the cluster. The way pods request PVs is by creating Persistent Volume Claims (PVCs), which work like resource demand documents. Storage Class is responsible for actual implementation of how the PVs are actually handled, it can be provided by ceph volumes, or in the case of cloud providers, by EBS volumes in AWS, Managed Disks in Azure and Disks in GCP. &lt;/p&gt;

&lt;p&gt;This post is a note to myself from when I was checking how PV resizing works in managed Kubernetes services. Feel free to replicate the commands and see for yourself what happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Elastic Kubernetes Service
&lt;/h2&gt;

&lt;p&gt;Let's first create the cluster quickly with &lt;code&gt;eksctl&lt;/code&gt;. Mind that it takes a significant amount of time. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;eksctl create cluster -n darek -r eu-central-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After a while you should see some nodes running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  ~ kubectl get nodes 
NAME                                              STATUS   ROLES    AGE   VERSION
ip-192-168-26-190.eu-central-1.compute.internal   Ready    &amp;lt;none&amp;gt;   67s   v1.13.7-eks-c57ff8
ip-192-168-55-249.eu-central-1.compute.internal   Ready    &amp;lt;none&amp;gt;   67s   v1.13.7-eks-c57ff8

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see the storage class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;➜  ~ kubectl get sc gp2 -o yaml&lt;/span&gt; 
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubectl.kubernetes.io/last-applied-configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}&lt;/span&gt;
    &lt;span class="na"&gt;storageclass.kubernetes.io/is-default-class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2019-08-24T09:23:35Z"&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gp2&lt;/span&gt;
  &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;281"&lt;/span&gt;
  &lt;span class="na"&gt;selfLink&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/apis/storage.k8s.io/v1/storageclasses/gp2&lt;/span&gt;
  &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;d903b89e-c650-11e9-968c-0273bdd9611a&lt;/span&gt;
&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;fsType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ext4&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gp2&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/aws-ebs&lt;/span&gt;
&lt;span class="na"&gt;reclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete&lt;/span&gt;
&lt;span class="na"&gt;volumeBindingMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Immediate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To resize volumes, the storage class needs to have &lt;code&gt;allowVolumeExpansion&lt;/code&gt; set to true. Fortunately, it can be patched. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl patch sc gp2 -p '{"allowVolumeExpansion": true}'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, create a basic PVC of size 10Gi with the storage class set to &lt;code&gt;gp2&lt;/code&gt;. We will use the same YAML for AKS and GKE later too, so save it as &lt;code&gt;pvc.yml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-volume-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gp2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the PVC. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f pvc.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The PVC should eventually create a PV of size 10Gi, which is reflected in a real EBS volume created in AWS. The name of the EBS volume is encoded in &lt;code&gt;Source/VolumeID&lt;/code&gt;; you can get it with a describe operation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubectl describe pv/pvc-0262b546-c65a-11e9-8f74-06e2726fa54c
Name:              pvc-0262b546-c65a-11e9-8f74-06e2726fa54c
Labels:            failure-domain.beta.kubernetes.io/region=eu-central-1
                   failure-domain.beta.kubernetes.io/zone=eu-central-1b
Annotations:       kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      gp2
Status:            Bound
Claim:             default/test-volume-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/zone in [eu-central-1b]
                   failure-domain.beta.kubernetes.io/region in [eu-central-1]
Message:
Source:
    Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
    VolumeID:   aws://eu-central-1b/vol-056a0bc0115292add
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
Events:         &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command shows the EBS volume name to be &lt;code&gt;vol-056a0bc0115292add&lt;/code&gt;. &lt;br&gt;
Now let's resize the PVC. Change the storage size in &lt;code&gt;pvc.yml&lt;/code&gt; to 12Gi and run &lt;code&gt;kubectl apply&lt;/code&gt; again.&lt;br&gt;
&lt;/p&gt;
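
&lt;p&gt;Alternatively, instead of editing the file, the claim can be patched in place (using the PVC name from &lt;code&gt;pvc.yml&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch pvc test-volume-pvc -p '{"spec":{"resources":{"requests":{"storage":"12Gi"}}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;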

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-0262b546-c65a-11e9-8f74-06e2726fa54c   12Gi       RWO            Delete           Bound    default/test-volume-pvc   gp2                     19h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK, so the PV shows 12Gi. Let's check the size of the actual EBS volume with the AWS CLI. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 describe-volumes --volume-ids vol-056a0bc0115292add --query 'Volumes[0].Size'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It should now show 12.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Keep in mind that in AWS you can resize an EBS volume only once every 6 hours, so the next resize won't work unless you wait. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can delete the cluster with &lt;code&gt;eksctl delete cluster -n darek&lt;/code&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Azure AKS
&lt;/h2&gt;

&lt;p&gt;Let's now check AKS by creating a cluster there. The snippet below creates a resource group called &lt;code&gt;darek&lt;/code&gt; and a cluster called &lt;code&gt;darekEKS&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group create --name darek --location westeurope
az aks create --resource-group darek --name darekEKS --node-count 2 --enable-addons monitoring --generate-ssh-keys
az aks get-credentials --resource-group darek --name darekEKS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default storage class in AKS is called &lt;code&gt;default&lt;/code&gt;; it provisions Azure Managed Disks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubectl get sc
NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   13m
managed-premium     kubernetes.io/azure-disk   13m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see the whole class; just as in EKS, resizing is not enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubectl get storageclasses.storage.k8s.io default -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"},"labels":{"kubernetes.io/cluster-service":"true"},"name":"default","namespace":""},"parameters":{"cachingmode":"ReadOnly","kind":"Managed","storageaccounttype":"Standard_LRS"},"provisioner":"kubernetes.io/azure-disk"}
    storageclass.beta.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2019-08-24T09:31:20Z"
  labels:
    kubernetes.io/cluster-service: "true"
  name: default
  resourceVersion: "459"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/default
  uid: ee21d5ae-c651-11e9-9f0b-aac03b918570
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's patch it: &lt;code&gt;kubectl patch sc default -p '{"allowVolumeExpansion": true}'&lt;/code&gt; &lt;br&gt;
And create the PVC; remember to change the &lt;code&gt;storageClassName&lt;/code&gt; to &lt;code&gt;default&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-volume-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Are the PVC and PV created?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-volume-pvc   Bound    pvc-eb332309-c654-11e9-a0fd-36761452f1dd   10Gi       RWO            default        26s

➜  kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-eb332309-c654-11e9-a0fd-36761452f1dd   10Gi       RWO            Delete           Bound    default/test-volume-pvc   default                 41s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name of the Azure disk can be seen in &lt;code&gt;Source/DiskURI&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ ~ kubectl describe pv
Name:            pvc-eb332309-c654-11e9-a0fd-36761452f1dd
Labels:          &amp;lt;none&amp;gt;
Annotations:     pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/test-volume-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        12Gi
Node Affinity:   &amp;lt;none&amp;gt;
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     kubernetes-dynamic-pvc-eb332309-c654-11e9-a0fd-36761452f1dd
    DiskURI:      /subscriptions/6690b014-bdbd-4496-98ee-f2f255699f70/resourceGroups/MC_darek_darekEKS_westeurope/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-eb332309-c654-11e9-a0fd-36761452f1dd
    Kind:         Managed
    FSType:
    CachingMode:  ReadOnly
    ReadOnly:     false
Events:           &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now change the requested storage size to 12Gi and apply again. We can check the disk size with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; az disk  show --ids /subscriptions/6690b014-bdbd-4496-98ee-f2f255699f70/resourceGroups/MC_darek_darekEKS_westeurope/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-eb332309-c654-11e9-a0fd-36761452f1dd --query 'diskSizeGb'
12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK, it works in AKS too. What is quite weird is that &lt;code&gt;kubectl get pv&lt;/code&gt; shows the new size, but the PVC still shows 10Gi.&lt;/p&gt;

&lt;p&gt;Clean up the resources you have created: &lt;code&gt;az group delete -n darek -y&lt;/code&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Google Kubernetes Engine
&lt;/h2&gt;

&lt;p&gt;I created a &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters"&gt;zonal cluster&lt;/a&gt; using the CLI in my project called &lt;code&gt;turnkey-cooler-31343&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud config set project turnkey-cooler-31343
gcloud container clusters create darek --zone europe-north1-a --cluster-version "1.13.7-gke.19" --machine-type n1-standard-1 --num-nodes 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And generate a kubeconfig:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials darek --zone europe-north1-a --project turnkey-cooler-31343
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In GKE, the default storage class is called &lt;code&gt;standard&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   3m44s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just as in AKS and EKS, volume expansion is not enabled by default, so let's patch the storage class.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl patch sc standard -p '{"allowVolumeExpansion": true}'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And create a PVC using our PVC YAML. Remember to change &lt;code&gt;storageClassName&lt;/code&gt; to &lt;code&gt;standard&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f pvc.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And check the disk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-f79ca9c5-c750-11e9-911b-42010aa601a8   10Gi       RWO            Delete           Bound    default/test-volume-pvc   standard                8m30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check the name of the disk in GCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pv/pvc-f79ca9c5-c750-11e9-911b-42010aa601a8 -o jsonpath='{.spec.gcePersistentDisk.pdName}'
gke-darek-a2e51c42-dyn-pvc-f79ca9c5-c750-11e9-911b-42010aa601a8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the size of the disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  gcloud compute disks describe gke-darek-a2e51c42-dyn-pvc-f79ca9c5-c750-11e9-911b-42010aa601a8 --zone europe-north1-a | grep sizeGb
sizeGb: '10'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's now resize the disk: change &lt;code&gt;pvc.yml&lt;/code&gt; and apply it again, or patch the PVC directly, then check the disk size once more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  gcloud compute disks describe gke-darek-a2e51c42-dyn-pvc-f79ca9c5-c750-11e9-911b-42010aa601a8 --zone europe-north1-a | grep sizeGb
sizeGb: '12'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OK, all fine, but as in AKS and EKS the PVC still shows 10Gi, most likely because the claim's reported capacity is only updated once the file system on the volume has been resized, which happens when a pod actually mounts it. &lt;/p&gt;

&lt;p&gt;Clean the infra: &lt;code&gt;gcloud container clusters delete darek --zone europe-north1-a&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>eks</category>
      <category>gke</category>
    </item>
  </channel>
</rss>
