<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: moeed-k</title>
    <description>The latest articles on DEV Community by moeed-k (@moeedk).</description>
    <link>https://dev.to/moeedk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1029194%2F8d2514f6-1e3a-4c95-8125-de04d25f70a5.jpg</url>
      <title>DEV Community: moeed-k</title>
      <link>https://dev.to/moeedk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/moeedk"/>
    <language>en</language>
    <item>
      <title>Containers, Clusters &amp; K8s: My Experience with DevOps</title>
      <dc:creator>moeed-k</dc:creator>
      <pubDate>Sat, 10 Jun 2023 12:57:37 +0000</pubDate>
      <link>https://dev.to/moeedk/containers-clusters-k8s-my-experience-with-devops-4297</link>
      <guid>https://dev.to/moeedk/containers-clusters-k8s-my-experience-with-devops-4297</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gqS2D11p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vn8fmzoyzcts6hcu4jsa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gqS2D11p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vn8fmzoyzcts6hcu4jsa.png" alt="DevOps" width="591" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you even pronounce Kubernetes?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Hint: coo-ber-net-ees&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post I'll be recounting some of the invaluable lessons I've learned over the past few months. At the start of the final semester of my undergrad, I opted for an introductory course on DevOps. I'd heard the term here and there, and I also had &lt;strong&gt;some&lt;/strong&gt; idea about just how popular the role of a &lt;em&gt;DevOps engineer&lt;/em&gt; had become in recent years. But other than that, I really had no clue about what the role entailed.&lt;/p&gt;

&lt;p&gt;Fast forward 4 months to today and I can proudly say that I'm &lt;em&gt;far less&lt;/em&gt; clueless than when I had started out. Heck, I even managed to deploy my own Kubernetes cluster in a working project. &lt;/p&gt;

&lt;p&gt;In a nutshell, DevOps is all about streamlining the processes involved in software development &amp;amp; deployment to make life easier for everyone (the developer, the software users, and the maintainers). &lt;/p&gt;

&lt;p&gt;Here's some of the stuff I learned about (and will be discussing in this post as well):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Virtualization&lt;/li&gt;
&lt;li&gt;Containers&lt;/li&gt;
&lt;li&gt;Pods&lt;/li&gt;
&lt;li&gt;Dockerfiles&lt;/li&gt;
&lt;li&gt;Building images from Dockerfiles&lt;/li&gt;
&lt;li&gt;Container Clusters&lt;/li&gt;
&lt;li&gt;Achieving High-Availability&lt;/li&gt;
&lt;li&gt;Micro-Services &lt;/li&gt;
&lt;li&gt;Service meshes&lt;/li&gt;
&lt;li&gt;CI/CD &lt;/li&gt;
&lt;li&gt;GitOps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not to mention a whole &lt;strong&gt;plethora&lt;/strong&gt; of tools. Really, there are just so many &lt;em&gt;great&lt;/em&gt; DevOps tools out there that it's equally rewarding and daunting trying to learn about them all. I mean, just take a look at this periodic table of &lt;em&gt;&lt;strong&gt;DevOps tools&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7RcyF7Y0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxrzpndd89ndqp00wmcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7RcyF7Y0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oxrzpndd89ndqp00wmcs.png" alt="Source: digital.ai" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But more important than the tools, is to understand the &lt;em&gt;need&lt;/em&gt; for DevOps in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a container, and why does my cluster have so many of them?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dnLwLeMf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7s76figiqlwfsdsrddeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dnLwLeMf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7s76figiqlwfsdsrddeo.png" alt="Virtual Machines" width="484" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we answer that question, we need to know a bit about &lt;strong&gt;virtualization&lt;/strong&gt;. Traditionally, Virtual Machines were used to run several distinct applications on a single hardware system. In most cases, this involved installing and configuring the VM software, installing the OS, and then finally deploying the application. As you can see, there are many different layers to this process. What if all I wanted to do was deploy a small service that only provides some kind of button-highlighting functionality in a web app? Do I really need to spin up an entire VM? That doesn't sound too efficient.&lt;/p&gt;

&lt;p&gt;This is where containers come in. Compared to regular VMs, containers use a highly streamlined form of OS virtualization. They can be thought of as logically isolated processes that contain &lt;em&gt;everything&lt;/em&gt; needed to run a particular piece of software. All the dependencies, source files and environment configurations are bundled into a neat little &lt;strong&gt;container&lt;/strong&gt;. Moreover, containers are highly portable and function exactly the same no matter where you run them. There's no need to configure the development environment for each machine you choose to deploy a container on. Just get the container up and running, and it will execute the exact same code on every machine.&lt;/p&gt;

&lt;p&gt;Running a container is a three-step process. First, you write something called a &lt;strong&gt;Dockerfile&lt;/strong&gt;, which is basically a schematic defining the structure of your container. Then, using this schematic, you build a &lt;strong&gt;container image&lt;/strong&gt; with the help of an &lt;strong&gt;image builder&lt;/strong&gt;. Finally, you run the image in a &lt;strong&gt;container runtime&lt;/strong&gt;. &lt;/p&gt;
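&lt;p&gt;As a minimal sketch of those three steps (assuming Docker is installed; the image name and file contents are placeholders, not from any particular project):&lt;/p&gt;

```shell
# Step 1: write a Dockerfile describing the container
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EOF

# Step 2: build a container image from the Dockerfile
docker build -t my-tiny-service:v1 .

# Step 3: run the image in a container runtime
docker run -d -p 8080:80 my-tiny-service:v1
```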

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XrbGx9-3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q45kejitn2m4ftv2vcly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XrbGx9-3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q45kejitn2m4ftv2vcly.png" alt="Running a container" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you take one or more containers and bundle them into a wrapper, you get something called a &lt;strong&gt;pod&lt;/strong&gt;. Each pod exists to serve a particular function, something that you can call a &lt;strong&gt;service&lt;/strong&gt;. A machine that runs multiple pods together is called a &lt;strong&gt;node&lt;/strong&gt;, and a &lt;strong&gt;container orchestration&lt;/strong&gt; software like Kubernetes manages a group of several nodes called a &lt;strong&gt;cluster&lt;/strong&gt;. It handles &lt;strong&gt;high-availability&lt;/strong&gt; and &lt;strong&gt;resilience&lt;/strong&gt;, &lt;strong&gt;auto-scales&lt;/strong&gt; up or down based on needs, provides &lt;strong&gt;authorization&lt;/strong&gt;, and does many, many more things.&lt;/p&gt;
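&lt;p&gt;To make this concrete, here's a minimal sketch of a pod wrapping a single container (it assumes a running cluster and a configured kubectl; all names here are placeholders):&lt;/p&gt;

```shell
# Define and apply a single-container pod
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:alpine
    ports:
    - containerPort: 80
EOF

# The orchestrator schedules the pod onto one of the cluster's nodes
kubectl get pod hello-pod -o wide
```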

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lM8LAGJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7edaxkajbyhzoizy8sme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lM8LAGJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7edaxkajbyhzoizy8sme.png" alt="K8s Cluster" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's pretty cool! But there's one glaring problem to address. How do we make each of these little services running on their own pods (&lt;strong&gt;micro-services&lt;/strong&gt; basically) communicate with each other in a safe and efficient manner? For that we use something like &lt;strong&gt;Istio&lt;/strong&gt;. It's a piece of software that injects a separate container called a sidecar-proxy inside a pod (pods are wrappers for multiple containers, remember?), and all communication going to and from the pod passes through the sidecar-proxy first. This sidecar provides many extremely useful features that vastly improve and standardize the communication between services. A whole network of these proxies is called a &lt;strong&gt;service mesh&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BlDMRMSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibn37ix2hb26v81ntebd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BlDMRMSx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibn37ix2hb26v81ntebd.png" alt="Istio Service Mesh" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's a &lt;em&gt;whole lot&lt;/em&gt; more to the world of containers and container orchestration, but this is pretty much the gist of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD, Version Control, Source Control, Single Source of Truth...
&lt;/h2&gt;

&lt;p&gt;A big part of DevOps is creating &lt;strong&gt;CI/CD pipelines&lt;/strong&gt; that make the process of software development &lt;strong&gt;efficient&lt;/strong&gt; and &lt;strong&gt;reliable&lt;/strong&gt;. CI/CD is short for Continuous Integration/Continuous Delivery (or Continuous Deployment), and it basically means creating a set of &lt;strong&gt;automated&lt;/strong&gt; steps that take a developer's code changes and quickly turn them into a running deployment. &lt;/p&gt;

&lt;p&gt;Here's what a basic &lt;strong&gt;CI/CD workflow&lt;/strong&gt; would look like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A developer writes some code and makes a commit to their code repository.&lt;/li&gt;
&lt;li&gt;A tool like &lt;strong&gt;GitHub Actions&lt;/strong&gt; or &lt;strong&gt;Jenkins&lt;/strong&gt; builds a container image from the code and runs some user-defined tests against it.&lt;/li&gt;
&lt;li&gt;If the tests clear, the image is pushed to a &lt;strong&gt;Container Registry&lt;/strong&gt; (think of it like GitHub but for containers).&lt;/li&gt;
&lt;li&gt;The image is pulled from the registry and deployed inside a Kubernetes cluster. &lt;/li&gt;
&lt;/ol&gt;
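&lt;p&gt;As a rough sketch, steps 1-3 above could be expressed in a GitHub Actions workflow like this (the registry, image name, and test script are hypothetical placeholders):&lt;/p&gt;

```yaml
# .github/workflows/ci.yml (hypothetical example)
name: build-test-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build a container image from the committed code
      - run: docker build -t ghcr.io/example/app:${{ github.sha }} .
      # Run user-defined tests against the image
      - run: docker run --rm ghcr.io/example/app:${{ github.sha }} ./run-tests.sh
      # If the tests clear, push the image to a container registry
      - run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push ghcr.io/example/app:${{ github.sha }}
```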

&lt;p&gt;If you use a tool like &lt;strong&gt;Flux&lt;/strong&gt; or &lt;strong&gt;Argo CD&lt;/strong&gt;, you can add one more dimension to this whole process by introducing &lt;strong&gt;GitOps&lt;/strong&gt; to your workflow. Flux, for example, will constantly monitor the configuration of your current cluster and compare it against a &lt;strong&gt;configuration repo&lt;/strong&gt; defined by the developers (this is usually just a git repo too). If it finds any discrepancy between the desired state and the current state, Flux will bring the cluster state in line with the configuration defined in the git repo. This is why it's called &lt;em&gt;Git&lt;/em&gt;Ops: Git acts as the &lt;strong&gt;Single Source of Truth&lt;/strong&gt;.&lt;/p&gt;
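&lt;p&gt;As a sketch of what this looks like in practice with Flux (the repo URL and path are placeholders), you point Flux at a git repo and tell it which path to reconcile into the cluster:&lt;/p&gt;

```yaml
# Hypothetical Flux configuration: watch a git repo and keep the
# cluster in sync with the manifests under ./clusters/production
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/cluster-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./clusters/production
  prune: true
```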

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BBLszQm3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipfo929obk33x1e4v780.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BBLszQm3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipfo929obk33x1e4v780.png" alt="MS Azure CI/CD workflow" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another crucial concept is version/source control, which basically refers to keeping track of changes made to code. Most of us are familiar with version control in one way or another (through git, for example), but there's a different dimension to it that I discovered recently. Rolling new updates out to an already running software platform can be tricky. Tools like Kubernetes provide features like &lt;strong&gt;Deployments&lt;/strong&gt; that help route traffic between different software versions, and Istio does something similar at a more granular level, routing traffic between &lt;strong&gt;canary builds&lt;/strong&gt; of different services in the service mesh.&lt;/p&gt;
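&lt;p&gt;For instance, Istio can split traffic between a stable version and a canary build with a weighted route. Here's a sketch (the service name is a placeholder, and it assumes a DestinationRule defining the v1 and v2 subsets):&lt;/p&gt;

```yaml
# Send 90% of traffic to v1 and 10% to the v2 canary (illustrative)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 90
    - destination:
        host: my-service
        subset: v2
      weight: 10
```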

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HH0ADqfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58ip6852uik0lv3bfsob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HH0ADqfh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58ip6852uik0lv3bfsob.png" alt="Source Control" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Open Source Community
&lt;/h2&gt;

&lt;p&gt;An amazing thing about the world of DevOps is that &lt;em&gt;so much&lt;/em&gt; of all the magic making it possible is &lt;strong&gt;open-source&lt;/strong&gt;. The communities behind each tool are constantly pushing out new updates and improving features by the day. &lt;/p&gt;

&lt;p&gt;During my course, this inspired me to take a deeper look at some of these tools in the hopes of contributing to them. A crucial thing to remember is that you don't have to start out big; in fact, that's near impossible. &lt;strong&gt;Start small&lt;/strong&gt;, but start early. Remember, the quicker you fail at something, the quicker you get to learn and improve. &lt;/p&gt;

&lt;p&gt;In order to get started with open-source contributions, you can begin by scoping out the GitHub repo for the project you'd like to work on. Take a look at the issues list posted under each repo, and see if you can find something that makes sense to you. Usually, you'll find issues labeled with tags such as 'bug' or 'good first issue', and this will help you identify the nature of the problem you want to solve. &lt;/p&gt;

&lt;p&gt;Do be mindful of the guidelines that each project follows. If you can't find them in the documentation or on the project website, you can try looking through previously merged pull-requests and try to mimic their code structure. &lt;/p&gt;

&lt;p&gt;It can take a while to get a grasp of everything and to know where and how to start, but don't give up. Just keep at it and soon enough you'll find yourself making just a &lt;em&gt;tiny bit&lt;/em&gt; of sense of things. After that, write down your solution, make a Pull-Request, and then wait for the code to get approved. Don't lose heart if it doesn't get merged. Use any feedback you get to improve your next contribution. Even something as simple as updating the documentation can be &lt;em&gt;really&lt;/em&gt; useful to the wider community, so don't shy away from the vast world of open-source!&lt;/p&gt;

&lt;p&gt;A wonderful platform that incubates many amazing tools is called the &lt;strong&gt;Cloud Native Computing Foundation&lt;/strong&gt;. They bring together a great many useful tools under one hub, promote development, and are a rich source of information for anyone interested in following open-source development. I highly recommend that you check it out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bXBPnWTy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xx48idbr7bm0oqa9uh9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bXBPnWTy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xx48idbr7bm0oqa9uh9g.png" alt="CNCF" width="405" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These past few months have been a highly educational experience. Even though I have only scratched the surface of the vast world of DevOps, I have already begun to appreciate the complexity of all the processes involved in giving life to so much of modern software. Development isn't &lt;em&gt;just&lt;/em&gt; about coding. It's also about getting that &lt;strong&gt;code to run everywhere, all the time, for everyone&lt;/strong&gt;. My course has come to an end, but my journey with DevOps has just begun.  &lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How to Deploy a PostgreSQL cluster on Kubernetes</title>
      <dc:creator>moeed-k</dc:creator>
      <pubDate>Wed, 10 May 2023 15:45:02 +0000</pubDate>
      <link>https://dev.to/moeedk/how-to-deploy-a-postgresql-cluster-on-kubernetes-1p6h</link>
      <guid>https://dev.to/moeedk/how-to-deploy-a-postgresql-cluster-on-kubernetes-1p6h</guid>
      <description>&lt;p&gt;A PostgreSQL cluster can be thought of as a collection of databases copied across several instances. Each instance can be thought of as an independent node containing all your databases. Normally, we have more than one node per cluster in order to deploy a highly-available database solution. &lt;/p&gt;

&lt;p&gt;A PostgreSQL cluster cannot be directly deployed on Kubernetes because it is a stateful workload with specific requirements around storage, replication and failover. These requirements can be fulfilled by a Kubernetes Operator that uses a Custom Resource Definition (CRD) to help manage and deploy PostgreSQL. &lt;/p&gt;

&lt;p&gt;Today we'll be taking a look at using the Zalando Postgres Operator to deploy a PostgreSQL cluster using K8s. &lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Have Kubernetes installed (for this blog I'll be using minikube).&lt;/li&gt;
&lt;li&gt;Have kubectl installed.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Manual Deployment:
&lt;/h2&gt;

&lt;p&gt;Create an empty directory to start working in. First we'll clone the zalando postgres operator repo from github.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/zalando/postgres-operator.git
cd postgres-operator



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next we'll apply the YAML manifests in the given order:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create -f manifests/configmap.yaml  
kubectl create -f manifests/operator-service-account-rbac.yaml 
kubectl create -f manifests/postgres-operator.yaml  
kubectl create -f manifests/api-service.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next we run a script provided in the repo that helps us set up an acid-minimal-cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

./run_operator_locally.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's it! We now check if the postgres operator is operational.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get pod -l name=postgres-operator


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should show us the pod in a &lt;code&gt;RUNNING&lt;/code&gt; state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the UI
&lt;/h2&gt;

&lt;p&gt;Next we'll create our cluster. But first, we'll deploy the UI so that it makes this process easier to visualize. To deploy the UI, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f ui/manifests/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This can take a while to go into the &lt;code&gt;RUNNING&lt;/code&gt; state, so check with the following command till the pod is up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get pod -l name=postgres-operator-ui


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After both the Operator and UI pods are up, we now port-forward to access the interface from our browser. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl port-forward svc/postgres-operator-ui 8081:80


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Creating a Cluster
&lt;/h2&gt;

&lt;p&gt;Open up your local browser and access &lt;code&gt;localhost:8081&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can configure your cluster any way you like. I'll be going forward with the default configuration. Press the create cluster button when you're done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoilk74e3hoqes4i27g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoilk74e3hoqes4i27g1.png" alt="UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait until all the status checks have cleared. &lt;/p&gt;
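&lt;p&gt;If you'd rather skip the UI, the operator repo also ships an example manifest you can apply straight from the CLI (the file name follows the repo's quickstart; adjust it if the repo layout has changed):&lt;/p&gt;

```shell
# Create a minimal cluster from the repo's example manifest
kubectl create -f manifests/minimal-postgres-manifest.yaml

# Watch the custom resource until its status becomes Running
kubectl get postgresql --watch
```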
&lt;h2&gt;
  
  
  Connecting to the cluster using PSQL
&lt;/h2&gt;

&lt;p&gt;PSQL is a CLI that allows us to query postgres. To use it with our newly deployed cluster, we port-forward the default port 5432 used by Postgres to local port 6432. We also store the name of the master pod of our acid-minimal-cluster in an environment variable.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=acid-minimal-cluster,spilo-role=master -n default)


kubectl port-forward $PGMASTER 6432:5432 -n default


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Set up the following environment variables as well (one defines the SSL mode, and the other reads the password from the K8s secret that was created along with acid-minimal-cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
export PGSSLMODE=allow


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now connect with PSQL to the default &lt;code&gt;postgres&lt;/code&gt; database:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

psql -U postgres -h localhost -p 6432


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We have successfully deployed our K8s PG cluster. You can now run queries as you like. In the next part, we'll take a look at deploying Apache AGE and load balancing it with Pgpool-II.&lt;/p&gt;
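&lt;p&gt;With the port-forward from the previous step still running, a quick sanity check might look like this (the table name is just an example):&lt;/p&gt;

```shell
# Confirm the server version we're talking to
psql -U postgres -h localhost -p 6432 -c 'SELECT version();'

# Create a table, insert a row, and read it back
psql -U postgres -h localhost -p 6432 \
     -c 'CREATE TABLE demo(id int); INSERT INTO demo VALUES (1); SELECT * FROM demo;'
```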

&lt;p&gt;References:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/zalando/postgres-operator/blob/master/docs/quickstart.md" rel="noopener noreferrer"&gt;https://github.com/zalando/postgres-operator/blob/master/docs/quickstart.md&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>apacheage</category>
      <category>postgres</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Improving build times with Docker Layer Caching</title>
      <dc:creator>moeed-k</dc:creator>
      <pubDate>Thu, 23 Mar 2023 08:55:29 +0000</pubDate>
      <link>https://dev.to/moeedk/improving-build-times-with-docker-layer-caching-3bg9</link>
      <guid>https://dev.to/moeedk/improving-build-times-with-docker-layer-caching-3bg9</guid>
      <description>&lt;p&gt;An important concept in container image building is layer-caching. It is important to understand in order to optimize your build times and to streamline your CI/CD workflows.&lt;/p&gt;

&lt;p&gt;In this post, we'll be taking a look at three different aspects of layer-caching:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Optimizing dockerfiles to re-use the maximum number of layers.&lt;/li&gt;
&lt;li&gt;Layer-caching between two different images using &lt;code&gt;--cache-from&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Using BuildKit's inline cache. &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Understanding Layers
&lt;/h2&gt;

&lt;p&gt;Let's take a look at an example of a simple dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pull base image
FROM ubuntu:latest

# install wget
RUN apt-get update &amp;amp;&amp;amp; \
    apt-get -y install wget
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we can observe two layers. The first one pulls the base image, and the second one installs wget. &lt;/p&gt;

&lt;p&gt;With layer caching, our aim is to rebuild as few layers as possible in each build, re-using previously built layers wherever we can. In essence, we're trying to reduce the number of steps our build process has to repeat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Dockerfiles to re-use layers
&lt;/h2&gt;

&lt;p&gt;If we build the simple dockerfile we took as an example above, it will take a few seconds for the base image to be pulled. If we change the second layer to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# install wget and nginx
RUN apt-get update &amp;amp;&amp;amp; \
    apt-get -y install wget &amp;amp;&amp;amp; \
    apt-get -y install nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then re-build, we will see that only the second layer will be run again, and the base image will &lt;em&gt;not&lt;/em&gt; be pulled again. This is because the first layer has been cached and is being re-used.&lt;/p&gt;

&lt;p&gt;However, if we change our first layer, we will be &lt;em&gt;invalidating&lt;/em&gt; all subsequent layers that come after it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pull different image
FROM ubuntu:18.04

# install wget nginx
RUN apt-get update &amp;amp;&amp;amp; \
    apt-get -y install wget &amp;amp;&amp;amp; \
    apt-get -y install nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time, all layers after the first layer will be built again. &lt;/p&gt;

&lt;p&gt;Keeping this in mind, we should always try to follow the two following best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Keep commands that are not likely to change at the start of the dockerfile.&lt;/li&gt;
&lt;li&gt;Commands that are going to change often should ideally be near the end of the dockerfile.&lt;/li&gt;
&lt;/ol&gt;
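&lt;p&gt;A typical example is a Node.js image: the dependency installation rarely changes, so it goes near the top, while the application code changes with every commit, so it is copied last (file names here are illustrative):&lt;/p&gt;

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Rarely changes: these layers stay cached unless the package files change
COPY package*.json ./
RUN npm ci

# Changes often: only this layer and the ones after it get rebuilt
COPY . .
CMD ["node", "server.js"]
```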

&lt;h2&gt;
  
  
  Using layer-caching between Images
&lt;/h2&gt;

&lt;p&gt;If there is an image that has already been built and it shares some of the layers with your own dockerfile, you can use that image as part of the build cache with the help of the &lt;code&gt;--cache-from&lt;/code&gt; flag. &lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IMG="my-image"

# Pull an existing image
docker pull ${IMG}:old-ver

docker build --cache-from ${IMG}:old-ver -t ${IMG}:new-ver .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  BuildKit Inline Cache
&lt;/h2&gt;

&lt;p&gt;The problem with the above approach is that it requires us to pull an image from a remote registry first. With BuildKit's inline cache, cache metadata is embedded into the image itself at build time, so a later build can use a previously pushed image as a cache source without first pulling all of its layers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export DOCKER_BUILDKIT=1

# Build and cache image
$ docker build --build-arg BUILDKIT_INLINE_CACHE=1 .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Set up Apache-AGE for development: Installing and Modifying the Source</title>
      <dc:creator>moeed-k</dc:creator>
      <pubDate>Mon, 20 Feb 2023 19:09:07 +0000</pubDate>
      <link>https://dev.to/moeedk/set-up-apache-age-for-development-installing-and-modifying-the-source-5889</link>
      <guid>https://dev.to/moeedk/set-up-apache-age-for-development-installing-and-modifying-the-source-5889</guid>
      <description>&lt;p&gt;Apache-AGE is an exciting open source graph-database extension for PostgreSQL. It basically turns PGSQL into a multi-model DB. Which is to say, it allows you to enhance your good ol' relational database so that it supports graph DB functionality too. &lt;/p&gt;

&lt;p&gt;Contributing to open source projects can be very rewarding. It is both a great way to improve your coding skills and also to help grow your favorite tools.&lt;/p&gt;

&lt;p&gt;However, it can be daunting if you're a beginner and aren't quite sure where to start. It's always a good idea to hunt down any existing documentation and then work your way through that. You'll eventually exhaust your resources though, and at that point it is best just to start reading through the code itself. Read comments inside the code. Use a debugger. Set breakpoints. Try to understand the flow of execution.&lt;/p&gt;

&lt;p&gt;But even before you get to &lt;strong&gt;that&lt;/strong&gt;, you need to set up an environment for development! You want to be able to make changes in the code and quickly see it get reflected in the running program.&lt;/p&gt;

&lt;p&gt;That's what this post will (hopefully) help you with. We'll go through the process of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Installing Postgres from source&lt;/li&gt;
&lt;li&gt;Installing AGE from source&lt;/li&gt;
&lt;li&gt;Using a debugger with AGE&lt;/li&gt;
&lt;li&gt;Updating the AGE source code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;NOTE: This guide has been written with Ubuntu in mind. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Installing Postgres from Source
&lt;/h2&gt;

&lt;p&gt;The first thing we need to do is get a version of PG that is compatible with AGE. At the time of this writing, AGE supports both PG 11 and 12, so we'll be going with V12. &lt;/p&gt;

&lt;p&gt;Download the source from here:&lt;br&gt;
&lt;a href="https://www.postgresql.org/ftp/source/v12.0/" rel="noopener noreferrer"&gt;https://www.postgresql.org/ftp/source/v12.0/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide assumes that you download the .gz file. Go to your Downloads directory, open up a new terminal (right click inside the directory and select 'open terminal') and decompress the file using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gzip -d postgresql-12.0.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we unpack the tarball:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xf postgresql-12.0.tar 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now &lt;code&gt;cd&lt;/code&gt; into the newly created directory. But before we do anything else, we're going to install two packages that are used by both AGE and PG to lex/parse queries. These are Flex (a lexer generator) and Bison (a parser generator).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install build-essential libreadline-dev zlib1g-dev flex bison
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we're ready to install PG. First we'll run the configure command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./configure --enable-debug --enable-cassert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: If you're wondering more about what the &lt;code&gt;--enable-debug&lt;/code&gt; and &lt;code&gt;--enable-cassert&lt;/code&gt; flags do, you can check it out from the PG docs at: &lt;a href="https://www.postgresql.org/docs/current/install-procedure.html#CONFIGURE-OPTIONS" rel="noopener noreferrer"&gt;PG Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let's run the make commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make all
sudo make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we should have Postgres installed. Let's try running it. We can start the PG server using the pg_ctl utility, but first we need to tell our system where its binaries are located. So we'll add their location (default: /usr/local/pgsql/bin) to our PATH variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PATH=/usr/local/pgsql/bin:$PATH
export PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that if you want to make it permanent (so that the PATH is updated each time you open a new terminal), you should edit your &lt;code&gt;~/.bash_profile&lt;/code&gt; file and append the export command.&lt;/p&gt;
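
&lt;p&gt;As a minimal sketch (assuming a stock bash setup; your shell may read &lt;code&gt;~/.bashrc&lt;/code&gt; instead), the append-and-reload looks like this:&lt;/p&gt;

```shell
# Append the PG binary path to the profile so new terminals pick it up
echo 'export PATH=/usr/local/pgsql/bin:$PATH' >> "$HOME/.bash_profile"
# Reload the profile in the current shell
. "$HOME/.bash_profile"
```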

&lt;p&gt;Now we can initialize our first DB cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;initdb -D $HOME/pgdata
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-D&lt;/code&gt; flag just specifies the location of the DB cluster. I'm choosing to put it inside a folder called pgdata under my $HOME path.&lt;/p&gt;

&lt;p&gt;Time to start the server and test it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pg_ctl -D $HOME/pgdata start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything has been done correctly till this point, our server should have started successfully!&lt;/p&gt;

&lt;p&gt;Let's test it out. We'll use a tool called psql, which is a command line interface to interact with Postgres. For this guide, I'll just connect to the default DB named 'postgres'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is working well so far, it's time to install AGE. Use \q to quit out of psql for now.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Installing AGE from Source
&lt;/h2&gt;

&lt;p&gt;Download or clone AGE from the GitHub repo:&lt;br&gt;
&lt;a href="https://github.com/apache/age/releases" rel="noopener noreferrer"&gt;AGE GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Extract it just like before. Now before we install anything, we'll make a slight change to make debugging easier down the line. &lt;/p&gt;

&lt;p&gt;Open the Makefile and go to the line that starts with 'PG_CPPFLAGS'. At the end of this line, append the &lt;code&gt;-O0&lt;/code&gt; flag. This tells the compiler to keep the optimization level at 0, which makes it easier to step through the code with the debugger.&lt;/p&gt;
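
&lt;p&gt;If you prefer doing it from the command line, here's an illustrative sed one-liner (demonstrated on a sample file here; in the AGE source tree you'd run the sed line against the real Makefile):&lt;/p&gt;

```shell
# Create a one-line sample that mimics the Makefile's PG_CPPFLAGS line
printf 'PG_CPPFLAGS = -I$(srcdir)/src/include\n' > Makefile.sample
# Append -O0 to any line starting with PG_CPPFLAGS
sed -i '/^PG_CPPFLAGS/ s/$/ -O0/' Makefile.sample
cat Makefile.sample
# prints: PG_CPPFLAGS = -I$(srcdir)/src/include -O0
```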

&lt;p&gt;Now to install AGE. Assuming you didn't change anything and kept all the default paths, your PG install dir should be &lt;code&gt;/usr/local/pgsql/&lt;/code&gt;. With that in mind, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo make PG_CONFIG=/usr/local/pgsql/bin/pg_config install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've now installed AGE!&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Using a Debugger with AGE
&lt;/h2&gt;

&lt;p&gt;Now let's go through the process of using a debugger to analyze the AGE code. We'll be using the GNU Debugger, commonly known as GDB (it comes pre-installed with Ubuntu). &lt;/p&gt;

&lt;p&gt;First, let's load up our newly installed AGE extension. Use psql to connect to the 'postgres' DB again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;CREATE EXTENSION&lt;/code&gt; command only has to be run once, but do keep in mind that the &lt;code&gt;LOAD&lt;/code&gt; and &lt;code&gt;SET&lt;/code&gt; commands have to be re-entered every time you log back into psql. &lt;/p&gt;

&lt;p&gt;Using the create_graph command in the ag_catalog namespace, we'll create our first graph called 'people'.&lt;/p&gt;

&lt;p&gt;If you want more information on how to write cypher queries using AGE, you can look up the AGE documentation at: &lt;a href="https://age.apache.org/age-manual/master/index.html" rel="noopener noreferrer"&gt;AGE Docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM ag_catalog.create_graph('people');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's add one person to this graph.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * 
FROM cypher('people', $$
    CREATE (a {name: 'Andres'})
    RETURN a
$$) as (a agtype);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, now time to debug the code. Open a new terminal. We need to attach GDB to the running Postgres backend process. To find the PID of the process, we use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ps -ef | grep postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll get a lot of IDs (since PG runs many background processes), but we only need to look for the PID of the process connected to the default database named 'postgres'.&lt;/p&gt;

&lt;p&gt;In my case, it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9xasly0tuprmkq15or7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9xasly0tuprmkq15or7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here I can see that the PID I'm looking for is 21812.&lt;/p&gt;

&lt;p&gt;Now we start gdb:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gdb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From inside the GDB interface, we attach to the process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;attach 21812
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But even now GDB doesn't know where the source files are stored on our system. So we need to update its search path. In my case, the files are inside my Downloads directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dir /home/moeed/Downloads/age-PG12-v1.1.1-rc1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GDB commands refresher:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;b&lt;/code&gt; sets a breakpoint (b function_name)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;c&lt;/code&gt; continues to the next breakpoint&lt;/li&gt;
&lt;li&gt;&lt;code&gt;n&lt;/code&gt; steps to the next line&lt;/li&gt;
&lt;li&gt;&lt;code&gt;s&lt;/code&gt; steps into a function call&lt;/li&gt;
&lt;li&gt;&lt;code&gt;p&lt;/code&gt; prints a variable (p * for pointers)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bt&lt;/code&gt; shows the call stack&lt;/li&gt;
&lt;li&gt;&lt;code&gt;d&lt;/code&gt; deletes all breakpoints&lt;/li&gt;
&lt;li&gt;&lt;code&gt;list&lt;/code&gt; shows the surrounding source context&lt;/li&gt;
&lt;li&gt;&lt;code&gt;q&lt;/code&gt; quits GDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's set a breakpoint for a function call somewhere. A good point to start would be the AGE parser. If you go through the source code, you'll find a function called 'parse_cypher' inside the cypher_parser.c file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;b parse_cypher
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the breakpoint set, go back to the psql terminal and run the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * 
FROM cypher('people', $$
    MATCH (a)
    RETURN a
$$) as (a agtype);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return the one node we created earlier inside the 'people' graph. But since we've attached GDB to the process, the query should block instead of returning right away.&lt;/p&gt;

&lt;p&gt;Going back to the GDB terminal, we can press c to continue the code until our breakpoint. &lt;/p&gt;

&lt;p&gt;Once the breakpoint is reached, type &lt;code&gt;finish&lt;/code&gt; to run the code until the parse_cypher function returns. You'll see that we exit to line 458 of cypher_analyze.c.&lt;/p&gt;

&lt;p&gt;Type &lt;code&gt;bt&lt;/code&gt; (short for backtrace) to check out the call stack so far.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g4fvmrepwjduracti86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g4fvmrepwjduracti86.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here we can see that we're inside the 'convert_cypher_to_subquery' function call.&lt;/p&gt;

&lt;p&gt;Now run &lt;code&gt;list&lt;/code&gt; to see where we are in the code. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4u8gow9be9dmvygrezj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4u8gow9be9dmvygrezj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's around line 458. We can see that the result of the parse_cypher call is assigned to stmt.&lt;/p&gt;

&lt;p&gt;For now, press &lt;code&gt;c&lt;/code&gt; again in GDB to finish running the query.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Updating the AGE source code
&lt;/h2&gt;

&lt;p&gt;Now let's make some changes to our code. We’ll add another stmt, called stmt2, just below the first. So open up the cypher_analyze.c source file and make the following changes on line 458:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5leygdfj0fbwjmtgss6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5leygdfj0fbwjmtgss6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to the root of the AGE source code directory and run the make command again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo make PG_CONFIG=/usr/local/pgsql/bin/pg_config install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to test whether our changes took effect. &lt;/p&gt;

&lt;p&gt;We'll run the entire process again. Quit out of GDB with &lt;code&gt;q&lt;/code&gt;, quit the psql terminal with &lt;code&gt;\q&lt;/code&gt;, and start afresh. &lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;psql postgres&lt;/code&gt;, then run the LOAD and SET commands like before. &lt;/p&gt;

&lt;p&gt;Find the PID of the Postgres backend process and attach GDB to it, just like before. Set the GDB search path to the source files.&lt;/p&gt;

&lt;p&gt;Set a breakpoint on the parse_cypher function, then run the MATCH query inside the psql terminal once more. Go to the GDB terminal and run the code till the breakpoint using c.  &lt;/p&gt;

&lt;p&gt;The parse_cypher function should be called twice now (once for stmt and once for stmt2). After the first breakpoint, if you press c again, the code will break again upon the second call. You can also use the list command to see the updated code context. If you press &lt;code&gt;c&lt;/code&gt; for the third and final time, the code will run till the query is completely executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oq9ixhf1i5do8hckie9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9oq9ixhf1i5do8hckie9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you were able to learn something new from this post! Thanks for reading :)&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>apacheage</category>
      <category>opensource</category>
      <category>linux</category>
    </item>
    <item>
      <title>Set up Django on a Linux VM with Google Cloud Platform</title>
      <dc:creator>moeed-k</dc:creator>
      <pubDate>Sat, 18 Feb 2023 14:05:42 +0000</pubDate>
      <link>https://dev.to/moeedk/set-up-django-on-a-linux-vm-with-google-cloud-platform-318m</link>
      <guid>https://dev.to/moeedk/set-up-django-on-a-linux-vm-with-google-cloud-platform-318m</guid>
      <description>&lt;p&gt;As a senior in my BSCS program I've been working on my Final Year Project since the last several months. Early on, I decided to develop my project as a client-server web-application, and since the majority of the backend code was in Python, it was a pretty easy choice for me to choose Django as the web framework. Having the backend all be in one language streamlines a lot of things, but that isn't the point of this post.&lt;/p&gt;

&lt;p&gt;So coming to the actual point: up until now I've been hosting Django on my local machine, and while that's worked fine, I'll eventually have to move on to using a &lt;em&gt;real server&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;To this end, I decided to experiment by setting up a basic Django VM using Google Cloud Platform, and this post is going to be a documentation of that process (for both my own future reference and for anyone else who might be interested in doing something similar).&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Creating a Google Cloud Platform Account
&lt;/h2&gt;

&lt;p&gt;For now, I went with the free trial handed out by GCP. While you do need to set up a credit/debit card, you won't be charged anything initially and will be given $300 in credits for the first 90 days.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqcrvpubnue5l6axp0mg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqcrvpubnue5l6axp0mg.png" alt=" " width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Enabling the Google Compute API
&lt;/h2&gt;

&lt;p&gt;Next up we want to enable the Compute Engine API. On the left-side panel hover over the Compute Engine section, and then select &lt;strong&gt;VM instances&lt;/strong&gt;. &lt;br&gt;
You'll be taken to a page that will ask you to enable the Compute API.&lt;br&gt;
&lt;strong&gt;Enable the computation!&lt;/strong&gt;&lt;br&gt;
This might take a minute or two, but you'll be notified by the little bell icon on the top right once it's ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Creating the VM Instance
&lt;/h2&gt;

&lt;p&gt;Now we're ready to create our VM. Once again, click on Compute Engine on your left, select VM instances, and you'll be taken to a page where you can configure your machine. Everything here is pretty straightforward, but there are a few things I want to focus on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e69h31smggyv8oo9g2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e69h31smggyv8oo9g2v.png" alt=" " width="800" height="1177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we want to be sure of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The boot disk is selected as a Linux image (I'll be using Ubuntu).&lt;/li&gt;
&lt;li&gt;The 'Allow HTTPS traffic' box is checked.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While we could always add in HTTPS traffic rules later, checking this box here simplifies the process from the get-go. &lt;/p&gt;

&lt;h2&gt;
  
  
  4. Configuring SSH Keys
&lt;/h2&gt;

&lt;p&gt;Quick refresher: SSH is used to connect securely to a shell running on a remote machine. There are two keys, a public one and a private one. Since I'm working on Windows, I'll be using PuTTY to generate my keys.&lt;/p&gt;

&lt;p&gt;PuTTY download link: &lt;a href="https://www.puttygen.com/" rel="noopener noreferrer"&gt;https://www.puttygen.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open up PuTTYgen after installation, choose an encryption algorithm (I'm going with RSA), decide on a Key Comment, and click generate. You should get something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0uaue6zlx80ncl9a70s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0uaue6zlx80ncl9a70s.png" alt=" " width="800" height="637"&gt;&lt;/a&gt;&lt;br&gt;
Save the private key somewhere on your local PC, and copy the public key generated in the box on top onto your clipboard. &lt;/p&gt;

&lt;p&gt;Now open up the Google Cloud Console again. This time, select Metadata from the left hand-side panel. Go to the SSH keys tab, and add your public key here. All the VMs under the current project will inherit these keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmwtxaeh0b58so6u6cr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmwtxaeh0b58so6u6cr9.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alright, we're done with the setup! All that's left is to connect to our VM remotely. Open up PuTTY (not PuTTYgen) and enter the External IP of your VM instance (if you're not sure what the IP is, you can look it up from the Google Cloud Console). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dnto1y1r40eay13pfbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dnto1y1r40eay13pfbt.png" alt=" " width="705" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the left hand-side panel in PuTTY, expand SSH, expand Auth, select credentials, and select your previously generated private key. Now click open. When asked to 'login as', choose what you wrote down as a Key Comment earlier.&lt;/p&gt;

&lt;p&gt;We've successfully connected to our VM using SSH.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6lhuhpdg9gfidc9cro3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6lhuhpdg9gfidc9cro3.png" alt=" " width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Modifying Firewall rules
&lt;/h2&gt;

&lt;p&gt;Remember what I said about firewall rules earlier when creating the VM instance? Well, we're mostly set up already and only have to add one extra rule for Django. &lt;/p&gt;

&lt;p&gt;From Cloud Console, go to the VPC section and then select 'Firewall' from the left hand-side. Now click on the Create Firewall Rule button. Here are the settings I used:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4yfcn7qbmgtdy8audqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4yfcn7qbmgtdy8audqk.png" alt=" " width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuc6vjtm2bywcckayupm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuc6vjtm2bywcckayupm.png" alt=" " width="800" height="841"&gt;&lt;/a&gt;&lt;br&gt;
In particular, I set the IP range to 0.0.0.0/0 (to allow all public IPs to connect to the VM), and I specified port 8000 as open (since that is what Django uses by default).&lt;/p&gt;
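
&lt;p&gt;For reference, the same rule can also be created from the gcloud CLI. A sketch (the rule name 'allow-django' is an arbitrary placeholder):&lt;/p&gt;

```shell
# Open TCP port 8000 to all source IPs, mirroring the console settings above
gcloud compute firewall-rules create allow-django \
    --direction=INGRESS \
    --allow=tcp:8000 \
    --source-ranges=0.0.0.0/0
```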

&lt;h2&gt;
  
  
  6. Setting up Django
&lt;/h2&gt;

&lt;p&gt;Now all that's left to do is download and install Django on our VM. On the terminal, run the following commands to install it (if you're wondering about Python, it came pre-installed with the Ubuntu image):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install python-django&lt;br&gt;
sudo apt install python-django-common&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;(Note: on recent Ubuntu releases the package is named &lt;code&gt;python3-django&lt;/code&gt; instead.)&lt;/p&gt;

&lt;p&gt;Now we create a new django project called 'mysite' using the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo django-admin startproject mysite&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Just one more thing left to do before we run our server: we have to add the IP of our VM to the ALLOWED_HOSTS list in our Django settings. Go into the inner mysite folder (inside the mysite root directory), open 'settings.py' using nano (command: sudo nano settings.py), and add the external IP of the VM (in single quotes) to the ALLOWED_HOSTS list.&lt;/p&gt;
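
&lt;p&gt;For illustration, the edited line in settings.py would look something like this (the IP below is a documentation placeholder; use your VM's actual external IP):&lt;/p&gt;

```python
# mysite/settings.py (sketch): Django only serves requests whose Host header
# matches an entry in this list
ALLOWED_HOSTS = ['203.0.113.5']  # placeholder; put your VM's external IP here
```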

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nyf011deg6078jpp83m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nyf011deg6078jpp83m.png" alt=" " width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to the root of the 'mysite' directory, and start the server using the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo python manage.py runserver 0.0.0.0:8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now let's confirm that our server is working. From your local machine's browser, enter the external IP and port of your VM instance to connect to the Django server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ytcwz77slbzioemhds3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ytcwz77slbzioemhds3.png" alt=" " width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All done!&lt;/p&gt;

</description>
      <category>react</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
