<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Afzal Ansari</title>
    <description>The latest articles on DEV Community by Afzal Ansari (@afzal442).</description>
    <link>https://dev.to/afzal442</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F658268%2Fae82a7d2-7322-4c11-b855-1e979c23ed5a.jpeg</url>
      <title>DEV Community: Afzal Ansari</title>
      <link>https://dev.to/afzal442</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/afzal442"/>
    <language>en</language>
    <item>
      <title>Building a resilient, scalable AWS Lambda + S3 architecture</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Sat, 17 Jan 2026 11:55:23 +0000</pubDate>
      <link>https://dev.to/afzal442/building-a-resilient-scalable-aws-lambda-s3-architecture-3ldc</link>
      <guid>https://dev.to/afzal442/building-a-resilient-scalable-aws-lambda-s3-architecture-3ldc</guid>
      <description>&lt;p&gt;I’ve built and reviewed several serverless systems where AWS Lambda and Amazon S3 form the backbone—file ingestion pipelines, media processing platforms, and event-driven APIs. Over time, I noticed a recurring challenge: people either see Lambda + S3 as too simple ("just trigger a function on upload") or too abstract when diagrams become overwhelming.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through how I think about designing a complex yet easy-to-understand Lambda + S3 architecture, using real-world patterns and the latest AWS capabilities. I’ll also show you how I draw this architecture in Lucidchart, so I can explain it clearly to both engineers and non-technical stakeholders.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I’m trying to solve
&lt;/h3&gt;

&lt;p&gt;When I design serverless systems, I usually want three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simple entry points for users and clients&lt;/li&gt;
&lt;li&gt;Asynchronous, resilient processing behind the scenes&lt;/li&gt;
&lt;li&gt;Strong cost and operational control as the system scales&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead of thinking in terms of individual AWS services, I divide the architecture into layers. This mental model also maps very well to a Lucidchart diagram.&lt;/p&gt;

&lt;h4&gt;
  
  
  Edge / Client layer
&lt;/h4&gt;

&lt;p&gt;This is where requests originate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web or mobile clients&lt;/li&gt;
&lt;li&gt;CLI tools or third‑party webhooks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of the time, clients never talk to Lambda directly. I prefer to keep a clean boundary.&lt;/p&gt;

&lt;h4&gt;
  
  
  API &amp;amp; ingress layer
&lt;/h4&gt;

&lt;p&gt;Here I typically use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Gateway for REST or HTTP APIs&lt;/li&gt;
&lt;li&gt;Lambda Function URLs for very focused, internal endpoints&lt;/li&gt;
&lt;li&gt;CloudFront + WAF when the API is public and needs protection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer is responsible only for authentication, validation, and routing—not heavy logic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Compute layer (Lambda)
&lt;/h4&gt;

&lt;p&gt;This is where Lambda shines.&lt;/p&gt;

&lt;p&gt;I usually split responsibilities across multiple functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Request Lambdas – fast, synchronous (auth, presigned URLs)&lt;/li&gt;
&lt;li&gt;Processor Lambdas – async workers (image resize, metadata extraction)&lt;/li&gt;
&lt;li&gt;Indexer Lambdas – background tasks (search indexing, embeddings)&lt;/li&gt;
&lt;/ul&gt;
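&lt;p&gt;The processor and indexer functions are usually driven by S3 event notifications rather than direct calls. As a minimal, illustrative sketch (the event fields are standard S3 notification fields, but the handler and its processing step are placeholders, not a production implementation):&lt;/p&gt;

```python
import json
import urllib.parse

def handler(event, context):
    """Processor-Lambda skeleton: extract bucket/key pairs from an
    S3 event notification. The actual processing step is a placeholder."""
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads (spaces arrive as '+')
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        # ... resize the image / extract metadata here ...
        results.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(results)}
```

&lt;p&gt;Request Lambdas, by contrast, stay synchronous and small: validate, hand back a presigned URL, return.&lt;/p&gt;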

&lt;h4&gt;
  
  
  Storage &amp;amp; data plane (S3‑centric)
&lt;/h4&gt;

&lt;p&gt;This is the heart of the system.&lt;/p&gt;

&lt;p&gt;I usually work with multiple buckets, each with a clear purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ingest-bucket – raw uploads&lt;/li&gt;
&lt;li&gt;processed-bucket – derived artifacts&lt;/li&gt;
&lt;li&gt;archive-bucket – long‑term retention&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key S3 features I actively use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct uploads with presigned URLs (clients bypass Lambda)&lt;/li&gt;
&lt;li&gt;S3 Intelligent‑Tiering to control costs automatically&lt;/li&gt;
&lt;li&gt;S3 Object Lambda when I want on‑the‑fly transformations&lt;/li&gt;
&lt;li&gt;S3 Batch Operations + Lambda for large‑scale reprocessing jobs&lt;/li&gt;
&lt;/ul&gt;
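&lt;p&gt;Wiring the ingest bucket to a processor function is a one-time piece of configuration. Here is a sketch of the S3 notification configuration (the function ARN, account ID, and key prefix are hypothetical):&lt;/p&gt;

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "process-raw-uploads",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:processor",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            {"Name": "prefix", "Value": "uploads/"}
          ]
        }
      }
    }
  ]
}
```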

&lt;p&gt;&lt;em&gt;Below is the infra diagram for the above structure:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t8ezwb2rrejjcmdq66a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t8ezwb2rrejjcmdq66a.png" alt=" " width="800" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final thoughts
&lt;/h3&gt;

&lt;p&gt;Lambda and S3 are often introduced as simple services, but together they can power extremely sophisticated systems. The key is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep synchronous paths small&lt;/li&gt;
&lt;li&gt;Push everything else to events&lt;/li&gt;
&lt;li&gt;Let S3 do what it does best: scale and store cheaply&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>s3</category>
    </item>
    <item>
      <title>End to end argo-workflow for CI/CD</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Wed, 14 May 2025 14:24:40 +0000</pubDate>
      <link>https://dev.to/cloudnativeos/end-to-end-argo-workflow-for-continuous-integration-5f8b</link>
      <guid>https://dev.to/cloudnativeos/end-to-end-argo-workflow-for-continuous-integration-5f8b</guid>
      <description>&lt;p&gt;If you're just getting started with GitOps or CI/CD pipelines in Kubernetes, Argo Workflows offers a powerful and Kubernetes-native way to automate your build pipelines.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through a complete Continuous Integration (CI) and Continuous Deployment (CD) pipeline using Argo Workflows that performs the following steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpy9wlr37dq9o79w1tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpy9wlr37dq9o79w1tm.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔁 &lt;strong&gt;The Pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ Clone a GitHub repository&lt;br&gt;
🔧 Build and push a Docker image using BuildKit&lt;br&gt;
🔐 Scan the Docker image for vulnerabilities&lt;br&gt;
🔍 Scan Kubernetes manifests for misconfigurations&lt;br&gt;
⏭ Deploy Kubernetes manifests to your cluster&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s break down each part of the workflow. But before doing so, let’s first understand the basics of my Argo workflow template.&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;Understanding the Argo Workflow Structure&lt;/strong&gt;&lt;br&gt;
Argo Workflows is a Kubernetes-native workflow engine where each CI/CD process is defined as a custom resource (Workflow or WorkflowTemplate). In my case, I used a WorkflowTemplate named ci-build-workflow, which can be triggered manually or automatically through a CronWorkflow.&lt;/p&gt;

&lt;p&gt;The workflow is defined using Argo’s CRD (Custom Resource Definition) and includes metadata such as &lt;code&gt;name&lt;/code&gt; or &lt;code&gt;generateName&lt;/code&gt;; the latter is used as a prefix for naming the pods in which each workflow step (or template) runs.&lt;/p&gt;
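&lt;p&gt;Concretely, the top of such a template looks roughly like this (a sketch; the full version is in the repository linked at the end):&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: ci-build-workflow   # referenced by name from the CronWorkflow
spec:
  entrypoint: main          # the DAG template shown below is the entry point
```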

&lt;p&gt;📦 &lt;strong&gt;Volume Handling&lt;/strong&gt;&lt;br&gt;
Just like regular Kubernetes pods, you can declare volumes in Argo workflows. These volumes—like work or buildkitd—are mounted across different steps, enabling shared storage between containers. This is especially useful for tasks that rely on common directories (like cloning a repo and building from it).&lt;/p&gt;

&lt;p&gt;🔁 &lt;strong&gt;Parameterized Arguments&lt;/strong&gt;&lt;br&gt;
Argo allows the use of parameters in workflows to make them dynamic. In my CI workflow, I’ve defined parameters such as the GitHub repo owner, repo name, Docker image tag, registry name, etc. These parameters can be passed into the workflow during runtime or set statically in a CronWorkflow.&lt;/p&gt;

&lt;p&gt;🧩 &lt;strong&gt;Workflow Entry Point: DAG Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: main
  dag:
    tasks:
      - name: clone-repo
        ...
      - name: build-image
        ...
        depends: clone-repo
      - name: scan-image
        ...
        depends: build-image
      - name: scan-k8s
        ...
        depends: clone-repo &amp;amp;&amp;amp; build-image
      - name: deploy-kubernetes
        ...
        depends: build-image &amp;amp;&amp;amp; scan-k8s &amp;amp;&amp;amp; clone-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline uses a DAG (Directed Acyclic Graph) to run tasks in order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;build-image&lt;/code&gt; runs after cloning&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan-image&lt;/code&gt; runs after the image is built&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan-k8s&lt;/code&gt; runs after both cloning and the image build&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deploy-kubernetes&lt;/code&gt; runs once the repo is cloned, the image is built, and the manifests are scanned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup ensures tasks are run in the right order without unnecessary blocking.&lt;/p&gt;

&lt;p&gt;📂 &lt;strong&gt;Volumes &amp;amp; Storage&lt;/strong&gt;&lt;br&gt;
The workflow uses a shared &lt;code&gt;Persistent Volume Claim&lt;/code&gt; (PVC) to exchange files between steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeClaimTemplates:
- metadata:
    name: work
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 64Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, &lt;code&gt;BuildKit&lt;/code&gt; uses an &lt;code&gt;emptyDir&lt;/code&gt; volume for its internal state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
  - name: buildkitd
    emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can build and push container images to a Docker registry like Docker Hub, we need to authenticate. Argo Workflows uses Kubernetes secrets to securely store and access these credentials during execution.&lt;/p&gt;

&lt;p&gt;In our workflow, the build and push steps rely on credentials to authenticate with Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
- name: docker-creds
  secret:
    secretName: '{{inputs.parameters.docker_secret_name}}'
    items:
    - key: .dockerconfigjson
      path: config.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This volume mount makes your Docker config file available inside the container at runtime, which is required for tools like BuildKit to push images securely.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How to Create the Docker Secret:&lt;/em&gt;&lt;br&gt;
To create the secret from your existing Docker credentials file (&lt;code&gt;~/.docker/config.json&lt;/code&gt;), run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic docker-config \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the workflow, we refer to this secret using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: docker_secret_name
  value: docker-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the secret in place, your workflow can securely authenticate and push the built image to Docker Hub, completing the CI loop without exposing any sensitive information.&lt;/p&gt;

&lt;p&gt;Now, let's walk through each template used in the workflow.&lt;/p&gt;

&lt;p&gt;📦 &lt;strong&gt;Step 1: Cloning the Repository&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: clone-repo
  template: clone
  arguments:
    parameters:
    - name: owner
      value: cloud-hacks
    - name: repo
      value: argocd-io
    - name: ref
      value: main
    - name: clone_path
      value: /work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're using the official &lt;code&gt;alpine/git&lt;/code&gt; image to shallow-clone a GitHub repository. It stores the code in a shared volume (/work) so later steps like build and scan can use it.&lt;/p&gt;
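&lt;p&gt;The underlying &lt;code&gt;clone&lt;/code&gt; template can be sketched like this (same parameter names as above; the exact version lives in the repository linked at the end):&lt;/p&gt;

```yaml
- name: clone
  inputs:
    parameters:
      - name: owner
      - name: repo
      - name: ref
      - name: clone_path
  container:
    image: alpine/git:latest
    command: [sh, -c]
    args:
      - |
        git clone --depth 1 --branch {{inputs.parameters.ref}} \
          https://github.com/{{inputs.parameters.owner}}/{{inputs.parameters.repo}}.git \
          {{inputs.parameters.clone_path}}
    volumeMounts:
      - mountPath: /work
        name: work
```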

&lt;p&gt;🛠️ &lt;strong&gt;Step 2: Building and Pushing the Docker Image&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: build-image
  template: build-image
  arguments:
    parameters:
    - name: image
      value: example/nginx
    - name: path
      value: .
    - name: version
      value: v4
    - name: registry
      value: docker.io
    - name: docker_secret_name
      value: docker-config
    - name: insecure
      value: "false"
  depends: clone-repo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're using the rootless BuildKit container (moby/buildkit) to build the Docker image. It reads the source from /work, builds the image, and is set up to push the image (when &lt;code&gt;push=true&lt;/code&gt; is enabled in &lt;code&gt;--output&lt;/code&gt;). We also provide:&lt;/p&gt;

&lt;p&gt;The Docker image name and version&lt;br&gt;
Docker registry credentials via a Kubernetes secret&lt;br&gt;
Secure or insecure registry flag&lt;br&gt;
Note: In this template, &lt;code&gt;push=false&lt;/code&gt; is set — if you want to push, change it to &lt;code&gt;push=true&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;🐛 &lt;strong&gt;Step 3: Scanning the Image with Trivy&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-image
  template: scan-image
  arguments:
    parameters:
    - name: image
      value: example/nginx:v4
    - name: severity
      value: CRITICAL,HIGH
    - name: exit-code
      value: "0"
  depends: build-image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we use Trivy to scan the Docker image (example/nginx:v4) for known vulnerabilities. This helps catch issues before the image is deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We scan for CRITICAL and HIGH severity vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exit-code: 0&lt;/code&gt; ensures the workflow doesn't fail even if vulnerabilities are found (customize this as needed)&lt;/li&gt;
&lt;li&gt;Trivy pulls the image from Docker Hub, so make sure the build-image step pushes the image first&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛡 &lt;strong&gt;Step 4: Scanning Kubernetes Manifests with Kubescape&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-k8s
  template: scan-k8s
  arguments:
    parameters:
    - name: path
      value: /work/dev
    - name: verbose
      value: "true"
  depends: clone-repo &amp;amp;&amp;amp; build-image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re using &lt;code&gt;kubescape&lt;/code&gt; to scan the Kubernetes YAML files located in &lt;code&gt;/work/dev&lt;/code&gt; for misconfigurations, policy violations, and security issues. It helps ensure that the manifests follow best practices.&lt;/p&gt;
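&lt;p&gt;The &lt;code&gt;scan-k8s&lt;/code&gt; template itself can be sketched as follows (the Kubescape image name is an assumption on my part; check the repository linked at the end for the exact template):&lt;/p&gt;

```yaml
- name: scan-k8s
  inputs:
    parameters:
      - name: path
      - name: verbose
  container:
    image: quay.io/kubescape/kubescape:latest   # assumed image
    command: [kubescape]
    args: ["scan", "{{inputs.parameters.path}}"]
    volumeMounts:
      - mountPath: /work
        name: work
```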

&lt;p&gt;⚙️ &lt;strong&gt;Step 5: Deploy Kubernetes manifests using Kubectl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: deploy-kubernetes
  inputs:
    parameters:
      - name: path
      - name: namespace
  container:
    image: bitnami/kubectl:latest
    command: [sh, -c]
    args:
      - |
        echo "Deploying Kubernetes resources from {{inputs.parameters.path}}..."
        kubectl apply -f {{inputs.parameters.path}} -n {{inputs.parameters.namespace}}
    volumeMounts:
      - mountPath: /work
        name: work

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the official &lt;code&gt;bitnami/kubectl&lt;/code&gt; image. It expects a path (like &lt;code&gt;/work/dev&lt;/code&gt;) where the Kubernetes manifests are located.&lt;br&gt;
The container mounts the shared work volume that was used in the clone step, ensuring that the files are accessible.&lt;br&gt;
Once executed, it runs &lt;code&gt;kubectl apply&lt;/code&gt; on the provided path to deploy the resources.&lt;/p&gt;

&lt;p&gt;🔒 &lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Make sure that your Argo Workflow controller has the correct RBAC permissions to interact with Kubernetes resources (like pods, deployments, services, etc.) in the namespace where you intend to deploy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's what we achieved so far using Argo Workflows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stage             Tool          Purpose
Clone Repo    alpine/git    Fetch source code
Build Image   BuildKit  Build Docker image in a
                                secure way
Scan Image    Trivy         Identify vulnerabilities in           
                         Docker image
Scan Manifests  Kubescape   Catch Kubernetes YAML issues
Deploy Kubernetes  Kubectl      Deploy kubernetes resources 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🕒 &lt;strong&gt;Automating CI with CronWorkflow&lt;/strong&gt;&lt;br&gt;
To make the CI process completely hands-off, I added a CronWorkflow that runs every Tuesday at 9 AM UTC. This means our CI pipeline automatically triggers once a week without needing any manual input.&lt;/p&gt;

&lt;p&gt;This is particularly useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically building and scanning your base images weekly.&lt;/li&gt;
&lt;li&gt;Ensuring your Kubernetes manifests stay compliant.&lt;/li&gt;
&lt;li&gt;Catching vulnerabilities on a routine basis, even if there are no recent code changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what the CronWorkflow spec looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  schedule: "0 9 * * 2"  # Every Tuesday at 9 AM UTC
  timezone: "UTC"
  concurrencyPolicy: "Replace"  # If the previous run is still running, replace it
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  workflowSpec:
    workflowTemplateRef:
      name: ci-build-workflow

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this in place, the entire CI process—from cloning the repo, building and scanning the image, to pushing it—is performed weekly without requiring any developer to trigger the pipeline.&lt;/p&gt;

&lt;p&gt;As with most emerging tools, the Argo dashboard had to start somewhere: it is minimal, but it does what it needs to do. It shows every workflow and its steps, updates automatically, and exposes progress and logs in one place, which makes it very easy to monitor how everything is going.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmero0n6d0sdytx6gws3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmero0n6d0sdytx6gws3q.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1jfqok9w906dqla3y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1jfqok9w906dqla3y0.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Find the complete code and configuration for this setup on GitHub:&lt;br&gt;
GitHub Repository Link CI-build-Workflow: &lt;a href="https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-ci-workflow.yaml" rel="noopener noreferrer"&gt;https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-ci-workflow.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub Repository Link CronJob Example: &lt;a href="https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-cronjob-ci.yaml" rel="noopener noreferrer"&gt;https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-cronjob-ci.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://argo-workflows.readthedocs.io/en/latest/walk-through/dag/" rel="noopener noreferrer"&gt;https://argo-workflows.readthedocs.io/en/latest/walk-through/dag/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🌟 Let’s Connect!&lt;br&gt;
I love sharing insights about DevOps, Kubernetes, and GitOps tools like ArgoCD. If you found this article helpful or have questions, let’s continue the conversation on LinkedIn!&lt;br&gt;
👉 Connect with me on &lt;a href="http://linkedin.com/in/afzalansari07/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>argoworkflow</category>
      <category>argo</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How to Configure Pods to Enable IAM Roles for Service Accounts</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Mon, 13 Jan 2025 18:28:22 +0000</pubDate>
      <link>https://dev.to/afzal442/how-to-configure-pods-to-enable-iam-roles-for-service-accounts-1ikh</link>
      <guid>https://dev.to/afzal442/how-to-configure-pods-to-enable-iam-roles-for-service-accounts-1ikh</guid>
      <description>&lt;p&gt;In this blog, we will dive into configuring Kubernetes Pods to use IAM Roles for Service Accounts (IRSA), enabling applications running in your Pods to securely access AWS services without embedding AWS credentials. This approach is secure, scalable, and aligns well with modern DevOps best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before getting started, ensure you have the following ready:&lt;/p&gt;

&lt;p&gt;An existing Amazon EKS cluster: If you don’t have one, follow the guide in Get started with Amazon EKS.&lt;/p&gt;

&lt;p&gt;IAM OpenID Connect (OIDC) provider configured: Learn to create one or verify its existence by following the Create an IAM OIDC provider for your cluster guide.&lt;/p&gt;

&lt;p&gt;AWS CLI installed: Ensure version 2.12.3 or later or version 1.27.160 or later is installed and configured. Check your version with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws --version | cut -d / -f2 | cut -d ' ' -f1&lt;/code&gt;&lt;br&gt;
Update it if needed, following the Installing AWS CLI guide.&lt;/p&gt;

&lt;p&gt;kubectl installed: Ensure it matches your Kubernetes version (within ±1 minor version). Follow the Set up kubectl and eksctl guide if necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Guide to Enable IAM Roles for Service Accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Create an IAM Policy&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, let's create an IAM policy that defines the permissions required by your application. For example, if your application needs read-only access to an S3 bucket, create a policy file s3-readonly-policy.json with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::my-bucket/*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then create the policy using the AWS CLI:&lt;br&gt;
&lt;code&gt;aws iam create-policy --policy-name S3ReadOnlyPolicy --policy-document file://s3-readonly-policy.json&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Create an IAM Role for the Service Account&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using the IAM OIDC provider, let's create an IAM role that trusts the OIDC provider associated with your cluster.&lt;/p&gt;

&lt;p&gt;First, retrieve the OIDC provider URL:&lt;br&gt;
&lt;code&gt;aws eks describe-cluster --name my-cluster1 --query "cluster.identity.oidc.issuer" --output text&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Then, create a trust policy trust-policy.json for your service account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Federated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::your-account-id:oidc-provider/oidc-provider-url"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"oidc-provider-url:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"system:serviceaccount:namespace:service-account-name"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;your-account-id&lt;/code&gt;, &lt;code&gt;oidc-provider-url&lt;/code&gt;, &lt;code&gt;namespace&lt;/code&gt;, and &lt;code&gt;service-account-name&lt;/code&gt; with the appropriate values.&lt;/p&gt;
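&lt;p&gt;If you set this up often, the substitution is easy to script. A small, hypothetical helper (standard library only) that renders the same trust policy:&lt;/p&gt;

```python
import json

def build_trust_policy(account_id, oidc_provider, namespace, sa_name):
    """Render the IRSA trust policy shown above.

    oidc_provider is the issuer URL with the https:// prefix stripped,
    e.g. "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{sa_name}"
                }
            },
        }],
    }

# Render the policy for the service account used in this post:
policy = build_trust_policy(
    "111122223333", "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
    "default", "s3-access-sa",
)
print(json.dumps(policy, indent=2))
```

&lt;p&gt;Save the printed JSON as &lt;code&gt;trust-policy.json&lt;/code&gt; and pass it to &lt;code&gt;aws iam create-role&lt;/code&gt; as shown below.&lt;/p&gt;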

&lt;p&gt;Let's create the IAM role as follows:&lt;br&gt;
&lt;code&gt;aws iam create-role --role-name S3ReadOnlyRole --assume-role-policy-document file://trust-policy.json&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Attach the policy to the role:&lt;br&gt;
&lt;code&gt;aws iam attach-role-policy --role-name S3ReadOnlyRole --policy-arn arn:aws:iam::your-account-id:policy/S3ReadOnlyPolicy&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Create a Kubernetes Service Account&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a Kubernetes service account and annotate it with the IAM role ARN:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;apiVersion:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;v&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;kind:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ServiceAccount&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;metadata:&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;s&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;-access-sa&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;namespace:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;default&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;annotations:&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;eks.amazonaws.com/role-arn:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;arn:aws:iam::your-account-id:role/S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="err"&gt;ReadOnlyRole&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
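&lt;p&gt;Before applying the manifest to a cluster, it can help to write it to a file and sanity-check the IRSA annotation locally. A minimal sketch (the file name sa.yaml is arbitrary, and the account ID is still a placeholder):&lt;/p&gt;

```shell
# Write the service-account manifest locally (account ID is a placeholder)
# and sanity-check the IRSA annotation before applying it to the cluster.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: ServiceAccount' \
  'metadata:' \
  '  name: s3-access-sa' \
  '  namespace: default' \
  '  annotations:' \
  '    eks.amazonaws.com/role-arn: arn:aws:iam::your-account-id:role/S3ReadOnlyRole' \
  > sa.yaml
# Count matching annotation lines; prints 1 when the annotation is present.
grep -c 'eks.amazonaws.com/role-arn' sa.yaml
# kubectl apply -f sa.yaml   # apply once the placeholders are filled in
```

&lt;p&gt;The grep prints 1 when the role-arn annotation is present; apply the file with kubectl once the placeholders are filled in.&lt;/p&gt;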


&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configure Your Pod to Use the Service Account&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's update your Pod's YAML configuration to use the newly created service account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
spec:
  serviceAccountName: s3-access-sa
  containers:
  - name: app
    image: public.ecr.aws/nginx/nginx:1.12.0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's deploy the Pod now:&lt;br&gt;
&lt;code&gt;kubectl apply -f pod.yaml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Verify the Configuration&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the Pod is running, check the ARN of the IAM role that the Pod is using.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe pod my-app | grep AWS_ROLE_ARN:&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output should look like the following.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;AWS_ROLE_ARN:                 arn:aws:iam::111122223333:role/S3ReadOnlyRole&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ab6jeunfyr94g6hqpjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ab6jeunfyr94g6hqpjo.png" alt="Image description" width="625" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configuring IAM roles for service accounts is a powerful way to grant AWS permissions to Kubernetes workloads securely. This approach eliminates the need for hardcoding credentials and simplifies permissions management. By following these steps, you’ve enabled a secure and scalable method to integrate AWS services with your Kubernetes applications.&lt;/p&gt;

&lt;p&gt;Feel free to reach out in the comments below if you encounter any issues or have questions! 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>Implementing Authorization with OpenFGA | Part 2</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Thu, 05 Dec 2024 18:09:07 +0000</pubDate>
      <link>https://dev.to/afzal442/implementing-authorization-with-openfga-part-2-290l</link>
      <guid>https://dev.to/afzal442/implementing-authorization-with-openfga-part-2-290l</guid>
      <description>&lt;p&gt;If you haven't followed up with my last post, i.e., Part 1, you can definitely grab the necessary concepts for this following post.&lt;/p&gt;

&lt;p&gt;In this blog, I will focus on various aspects, from what you can experiment with to a practical workaround example.&lt;/p&gt;

&lt;p&gt;We will understand the workflow of authorization model using a use case and its implementation on solving the issue of permissions access.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modeling SaaS Project Permissions with OpenFGA
&lt;/h2&gt;

&lt;p&gt;To understand our Fine-Grained Authorization (FGA) model, we will take a SaaS project use case into account. We will model a project organization permission model using OpenFGA. Our goal is to build a service that enables users to develop and collaborate on features efficiently.&lt;/p&gt;

&lt;p&gt;We will implement a subset of the feature permission model using the OpenFGA Go SDK and validate the model through a few access control scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements Recap&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users can be admins or members of services.&lt;/li&gt;
&lt;li&gt;Each role inherits the permissions of the lower level (i.e., admins inherit member access).&lt;/li&gt;
&lt;li&gt;Teams and organizations can have members.&lt;/li&gt;
&lt;li&gt;Organizations can own services.&lt;/li&gt;
&lt;li&gt;Organization admins have admin access to all services under that organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will configure OpenFGA locally, using Docker to run the tool, and then step through the implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up OpenFGA
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to set up OpenFGA, but we will use the Docker setup, as it is quick and easy to reproduce.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running OpenFGA Locally
&lt;/h3&gt;

&lt;p&gt;If you want to run OpenFGA locally as a Docker container, follow these steps:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Install Docker (if not already installed).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Pull the latest OpenFGA Docker image:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull openfga/openfga&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run OpenFGA as a container:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -p 8080:8080 -p 8081:8081 -p 3000:3000 openfga/openfga run&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An HTTP server on port 8080.&lt;/li&gt;
&lt;li&gt;A gRPC server on port 8081.&lt;/li&gt;
&lt;li&gt;The Playground on port 3000.&lt;/li&gt;
&lt;/ul&gt;



&lt;h4&gt;
  
  
  Running OpenFGA with Postgres
&lt;/h4&gt;

&lt;p&gt;To run OpenFGA and Postgres in containers, follow these steps:&lt;/p&gt;

&lt;p&gt;Create a Docker network to simplify communication between containers:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker network create openfga&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start a Postgres container in the created network:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name postgres --network=openfga -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password postgres:14&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will start Postgres in the openfga network.&lt;/p&gt;

&lt;p&gt;Run the database migration to set up necessary tables (Very important, don't miss this step):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --rm --network=openfga openfga/openfga migrate \&lt;br&gt;
    --datastore-engine postgres \&lt;br&gt;
    --datastore-uri "postgres://postgres:password@postgres:5432/postgres?sslmode=disable"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start OpenFGA and connect it to Postgres:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --name openfga --network=openfga -p 3000:3000 -p 8080:8080 -p 8081:8081 openfga/openfga run \&lt;br&gt;
    --datastore-engine postgres \&lt;br&gt;
    --datastore-uri 'postgres://postgres:password@postgres:5432/postgres?sslmode=disable'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This setup ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Postgres database is running.&lt;/li&gt;
&lt;li&gt;The OpenFGA migration process is completed.&lt;/li&gt;
&lt;li&gt;The OpenFGA server is running and connected to Postgres.&lt;/li&gt;
&lt;/ul&gt;



&lt;h3&gt;
  
  
  OpenFGA Playground
&lt;/h3&gt;

&lt;p&gt;The Playground facilitates rapid development by allowing you to visualize and model your application's authorization models and manage relationship tuples with a locally running OpenFGA instance.&lt;/p&gt;

&lt;p&gt;It is enabled on port 3000 by default and accessible at &lt;a href="http://localhost:3000/playground" rel="noopener noreferrer"&gt;http://localhost:3000/playground&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's define the OpenFGA model for our SaaS project and visualise it using the OpenFGA Playground:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model
  schema 1.1

type user

type team
  relations
    define member: [user]

type organization
  relations
    define admin: [user, team#member] # Org admins
    define member: [user, team#member] # Org members
    define owner: [user] # Org owners

type service
  relations
    define admin: [user, organization#admin] # Admins include org admins
    define member: [user, team#member] # Members inherit from team
    define owner: [organization] # Services belong to an org

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How This Model Works
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Roles
organization#admin → service#admin (Org admins inherit service admin access)
admin → member (Admins inherit all member permissions)
Team-Based Access
A team can include multiple users.
Teams can be granted organization#admin or organization#member permissions.
Service Ownership
Services are owned by an organization (service#owner).
Organization admins automatically get admin access to all services.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4veqrp3spozojwbo1aj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4veqrp3spozojwbo1aj.png" alt="Image description" width="800" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Use Cases
&lt;/h3&gt;

&lt;p&gt;1️⃣ User is an Admin of an Organization&lt;br&gt;
If a user is an admin of organization:org-1, they automatically have admin access to all services under that organization.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": "user:alice",
  "relation": "admin",
  "object": "organization:org-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✔ Alice has admin access to all services in org-1.&lt;/p&gt;

&lt;p&gt;2️⃣ A Team Manages Multiple Services&lt;br&gt;
If a team is assigned as an organization#admin, all its members automatically get admin access to services.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": "team:team-devops#member",
  "relation": "admin",
  "object": "organization:org-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✔ All members of team-devops inherit service admin access.&lt;/p&gt;

&lt;p&gt;3️⃣ A User is a Member of a Specific Service&lt;br&gt;
If a user is a service#member, they only have access to that specific service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "user": "user:bob",
  "relation": "member",
  "object": "service:feature-x"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✔ Bob can collaborate on feature-x but does not have admin access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authorization Model with OpenFGA (Using Go SDK)
&lt;/h3&gt;

&lt;p&gt;Let’s implement this using the OpenFGA Go SDK.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the SDK&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;go get github.com/openfga/go-sdk&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialise the client and create the store
With OpenFGA running locally, set the environment variables as follows:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Create the store only once: after the first run, reuse the returned store ID instead of creating a new one on each run&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export FGA_API_URL=http://localhost:8080
export FGA_STORE_ID=01JR3M255RHWEE4KXGHE71H3F3  # example id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Initialize OpenFGA client&lt;/span&gt;
    &lt;span class="n"&gt;fgaClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;NewSdkClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ClientConfiguration&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ApiUrl&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FGA_API_URL"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="c"&gt;// e.g., "https://api.fga.example"&lt;/span&gt;
        &lt;span class="n"&gt;StoreId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"FGA_STORE_ID"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c"&gt;// Required after store creation&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to create OpenFGA client: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Create OpenFGA store&lt;/span&gt;
    &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fgaClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreateStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
        &lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ClientCreateStoreRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Akuity Org"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
        &lt;span class="n"&gt;Execute&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to create OpenFGA store: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Store created:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetId&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Define and Write the Authorization Model
This model defines roles and permissions for organizations, services, teams, and users.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;configureAuthorizationModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fgaClient&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;OpenFgaClient&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;writeAuthorizationModelRequestString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;`{
    "schema_version": "1.1",
    "type_definitions": [
      {
        "type": "user"
      },
      {
        "type": "team",
        "relations": {
          "member": {
            "this": {}
          }
        },
        "metadata": {
          "relations": {
            "member": {
              "directly_related_user_types": [
                { "type": "user" }
              ]
            }
          }
        }
      },
      {
        "type": "organization",
        "relations": {
          "admin": {
            "union": {
              "child": [
                { "this": {} },
                { "computedUserset": { "relation": "member", "userset": "team#member" } }
              ]
            }
          },
          "member": {
            "this": {}
          },
          "owner": {
            "this": {}
          }
        },
        "metadata": {
          "relations": {
            "admin": {
              "directly_related_user_types": [
                { "type": "user" },
                { "type": "team", "relation": "member" }
              ]
            },
            "member": {
              "directly_related_user_types": [
                { "type": "user" }
              ]
            },
            "owner": {
              "directly_related_user_types": [
                { "type": "user" }
              ]
            }
          }
        }
      },
      {
        "type": "service",
        "relations": {
          "owner": {
            "this": {}
          },
          "admin": {
            "union": {
              "child": [
                { "this": {} },
                {
                  "tupleToUserset": {
                    "tupleset": { "relation": "owner" },
                    "computedUserset": { "relation": "admin" }
                  }
                }
              ]
            }
          },
          "member": {
            "union": {
              "child": [
                { "this": {} },
                {
                  "computedUserset": {
                    "relation": "admin",
                    "userset": "service#admin"
                  }
                }
              ]
            }
          }
        },
        "metadata": {
          "relations": {
            "owner": {
              "directly_related_user_types": [
                { "type": "organization" }
              ]
            },
            "admin": {
              "directly_related_user_types": [
                { "type": "user" }
              ]
            },
            "member": {
              "directly_related_user_types": [
                { "type": "user" }
              ]
            }
          }
        }
      }
    ]
  }`&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="n"&gt;ClientWriteAuthorizationModelRequest&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Unmarshal&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;writeAuthorizationModelRequestString&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to parse model JSON: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Write the model to OpenFGA&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fgaClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteAuthorizationModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Execute&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to write authorization model: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Authorization model configured, Model ID:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetAuthorizationModelId&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Assign roles (Set Relationships)
This function assigns users to specific roles in organizations or services.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;assignRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fgaClient&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;OpenFgaClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;relation&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;fgaClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ClientWriteRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Writes&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="n"&gt;ClientTupleKey&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;     &lt;span class="s"&gt;"user:"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Relation&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Execute&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Failed to assign role: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Assigned %s as %s to %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;relation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;assignRole(fgaClient, "james", "organization:akuity", "admin")
assignRole(fgaClient, "alice", "organization:akuity", "member")

Assigned james as admin to organization:akuity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check permissions
This function verifies if a user has a specific permission.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func checkPermission(fgaClient *OpenFgaClient, user, object, relation, modelID string) {
    options := ClientCheckOptions{
        AuthorizationModelId: PtrString(modelID),
    }

    body := ClientCheckRequest{
        User:     "user:" + user,
        Relation: relation,
        Object:   object,
    }

    data, err := fgaClient.Check(context.Background()).
        Body(body).
        Options(options).
        Execute()

    if err != nil {
        log.Fatalf("Failed to check permission: %v", err)
    }

    fmt.Printf("User %s has %s access to %s: %v\n", user, relation, object, data.GetAllowed())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;checkPermission(fgaClient, "alice", "member:org1", "admin", "01JR4FY69VK3G33EFTT6A1372E")

User alice has admin access to organization:org1: 0x1400021e117
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this second part of the series, we took a hands-on approach to implementing fine-grained authorization using OpenFGA in the context of a SaaS project. From setting up OpenFGA with Docker and Postgres to defining an extensible authorization model with the Go SDK, we've covered the foundational steps to get your access control logic up and running.&lt;/p&gt;

&lt;p&gt;By modeling roles like admin and member, introducing hierarchical permission inheritance, and incorporating organizational ownership, we've laid the groundwork for a scalable and maintainable permissions system.&lt;/p&gt;

&lt;p&gt;This model not only supports flexibility but also aligns with the real-world requirements of multi-tenant SaaS platforms: we wrote relationship tuples, validated permissions via API calls, and walked through realistic scenarios to test the access logic.&lt;/p&gt;

&lt;p&gt;In the meantime, feel free to experiment with the Playground, tweak the model, and explore how OpenFGA can be tailored to your specific authorization needs. 🔐&lt;/p&gt;


</description>
      <category>openfga</category>
      <category>authorization</category>
      <category>go</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Enhancing Application Security: Implementing Authorization with OpenFGA</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Wed, 04 Dec 2024 18:55:00 +0000</pubDate>
      <link>https://dev.to/afzal442/enhancing-application-security-implementing-authorization-with-openfga-38mh</link>
      <guid>https://dev.to/afzal442/enhancing-application-security-implementing-authorization-with-openfga-38mh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to OpenFGA
&lt;/h2&gt;

&lt;p&gt;OpenFGA, an open-source authorization system under the Cloud Native Computing Foundation (CNCF), empowers developers to implement robust &lt;strong&gt;authorization&lt;/strong&gt; mechanisms for any application. Inspired by Google’s Zanzibar, OpenFGA adopts a Relationship-Based Access Control (ReBAC) model, enabling developers to seamlessly integrate Role-Based Access Control (RBAC) and extend into Attribute-Based Access Control (ABAC). This system not only supports evolving complexity but also ensures scalability, making it ideal for modern applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Choose OpenFGA for Authorization?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Authorization&lt;/strong&gt; is a cornerstone of application security. It ensures that users can only access resources and perform actions that they are permitted to, enhancing both security and compliance. Implementing authorization using OpenFGA provides:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Centralized and Externalized Authorization

&lt;ul&gt;
&lt;li&gt;Decouple authorization logic from application code.&lt;/li&gt;
&lt;li&gt;Simplify policy management, changes, and audits.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Standardization and Velocity

&lt;ul&gt;
&lt;li&gt;Use a unified authorization system to accelerate development.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Simplified Compliance

&lt;ul&gt;
&lt;li&gt;Centralized decision-making and audit logs help with security 
and compliance requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Ease of Evolution

&lt;ul&gt;
&lt;li&gt;Dynamically adapt authorization policies to evolving 
application needs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Key Features of OpenFGA
&lt;/h2&gt;

&lt;p&gt;OpenFGA’s robust feature set empowers developers to implement fine-grained authorization effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-Environment Support: Manage authorization across production, testing, and development environments.&lt;/li&gt;
&lt;li&gt;ABAC Scenarios Support: Utilize Contextual Tuples and Conditional Relationship Tuples for dynamic access control.&lt;/li&gt;
&lt;li&gt;Extensive SDK Support: Available in Java, .NET, JavaScript, Go, and Python.&lt;/li&gt;
&lt;li&gt;Flexible APIs: Includes both HTTP and gRPC APIs.&lt;/li&gt;
&lt;li&gt;Versatile Deployment Options: Supports Postgres, MySQL, SQLite, and in-memory datastores.&lt;/li&gt;
&lt;li&gt;Developer Tooling: Includes CLI tools, GitHub Actions, VS Code Extensions, and Helm Charts for Kubernetes deployments.&lt;/li&gt;
&lt;li&gt;Monitoring Support: Integrate seamlessly with OpenTelemetry.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding Authorization Concepts with OpenFGA
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Authentication vs. Authorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Authentication: Verifies identity.&lt;br&gt;
Authorization: Determines access permissions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-Grained Authorization (FGA)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Allows specific actions on defined resources, scaling with millions of users and objects.&lt;br&gt;
Example: Google Drive's granular sharing features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Control Models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RBAC: Role-based permissions (e.g., Editors can edit content).&lt;br&gt;
ABAC: Attribute-driven permissions (e.g., Marketing Managers can publish marketing content).&lt;br&gt;
PBAC: Centralized policy management for access control.&lt;br&gt;
ReBAC: Relationship-driven permissions, supporting complex hierarchies like document owners or folder relationships.&lt;/p&gt;

&lt;p&gt;With OpenFGA, developers can create scalable, flexible, and compliant authorization systems while focusing on application logic. Its features make it the go-to choice for modern applications requiring fine-grained, dynamic access controls.&lt;/p&gt;
&lt;h2&gt;
  
  
  Delving into ReBAC with OpenFGA
&lt;/h2&gt;

&lt;p&gt;Let's focus on ReBAC, the model at the core of OpenFGA.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Relationship-Based Access Control &lt;br&gt;
(ReBAC) goes beyond traditional access control models by basing user access decisions on relationships between users, objects, and other entities. This model provides powerful flexibility, enabling applications to define fine-grained, dynamic access policies that naturally reflect real-world relationships.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;How ReBAC Works&lt;/strong&gt;&lt;br&gt;
In ReBAC, access rules are defined based on relationships such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A user's relationship with an object (e.g., a user is an owner of a document).&lt;/li&gt;
&lt;li&gt;An object's relationship with other objects (e.g., a document belongs to a specific folder).&lt;/li&gt;
&lt;li&gt;Conditions that combine user and object relationships (e.g., a user’s manager can view team documents).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With OpenFGA, relationships are stored as object-relation-user tuples, which act as the foundation for making authorization decisions. Applications can call the OpenFGA check endpoint to determine if a user has a specific relationship with an object.&lt;/p&gt;
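&lt;p&gt;As a toy sketch (this is not the OpenFGA engine, just an illustration of the idea), you can picture the tuple store as a set of object-relation-user triples, and a check as a lookup that also follows parent relationships:&lt;/p&gt;

```python
# Toy tuple store: each tuple is an (object, relation, user) triple.
tuples = {
    ("folder:finance", "viewer", "user:alice"),
    ("document:budget", "parent", "folder:finance"),
}

def check(obj, relation, user):
    """True if the tuple exists directly, or is inherited via a 'parent' tuple."""
    if (obj, relation, user) in tuples:
        return True
    for (o, rel, parent) in tuples:
        if o == obj and rel == "parent" and check(parent, relation, user):
            return True
    return False

print(check("document:budget", "viewer", "user:alice"))  # True (via the folder)
```

&lt;p&gt;In real OpenFGA the inheritance rules come from the authorization model rather than being hard-coded as above, but the tuple-plus-traversal shape of the evaluation is the same.&lt;/p&gt;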
&lt;h2&gt;
  
  
  ReBAC in Action: A Real-World Example
&lt;/h2&gt;

&lt;p&gt;Let’s explore a document management system where access to documents is governed by ReBAC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scenario:

&lt;ul&gt;
&lt;li&gt;Users can only view documents if they have access to the parent folder.&lt;/li&gt;
&lt;li&gt;Some users (like managers) can view all documents within their department.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Data Representation in OpenFGA:
OpenFGA models these relationships as tuples:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;folder:finance#viewer@alice&lt;/code&gt; → Alice has viewer access to the "finance" folder.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;document:budget#parent@folder:finance&lt;/code&gt; → The "budget" document belongs to the "finance" folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Check Request Example:
To determine if Alice can view the "budget" document:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   json{
     "tuple_key": {
       "object": "document:budget",
       "relation": "viewer",
       "user": "alice"
     }
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;OpenFGA's Response:&lt;/strong&gt;&lt;br&gt;
The service verifies the relationships:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alice has viewer access to folder:finance.&lt;/li&gt;
&lt;li&gt;The budget document's parent is folder:finance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: &lt;code&gt;true&lt;/code&gt; (Alice can view the "budget" document).&lt;/p&gt;
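&lt;p&gt;In practice, the application issues this check over OpenFGA's HTTP API. Here is a minimal Python sketch: the server URL and store ID are placeholders, real deployments also need authentication, and recent OpenFGA schemas expect a typed user such as &lt;code&gt;user:alice&lt;/code&gt;, so verify the exact request shape against the OpenFGA API docs:&lt;/p&gt;

```python
import json
import urllib.request

OPENFGA_URL = "http://localhost:8080"  # placeholder: your OpenFGA server
STORE_ID = "my-store-id"               # placeholder: your store ID

def build_check_body(obj, relation, user):
    """JSON body for a POST /stores/{store_id}/check call, mirroring the request above."""
    return {"tuple_key": {"object": obj, "relation": relation, "user": user}}

def check(obj, relation, user):
    req = urllib.request.Request(
        f"{OPENFGA_URL}/stores/{STORE_ID}/check",
        data=json.dumps(build_check_body(obj, relation, user)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["allowed"]  # the check response carries an "allowed" flag
```

&lt;p&gt;Calling &lt;code&gt;build_check_body("document:budget", "viewer", "alice")&lt;/code&gt; reproduces the JSON payload shown above.&lt;/p&gt;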

&lt;p&gt;ReBAC with OpenFGA empowers developers to implement flexible, scalable, and secure access control tailored to the needs of modern applications. Start exploring OpenFGA to implement relationship-driven policies in your system today!&lt;/p&gt;

&lt;p&gt;Next, let's explore the configuration language you'll use to define your application's authorization model.&lt;/p&gt;

&lt;p&gt;OpenFGA’s Configuration Language defines the relationships and authorization rules in a system. This configuration acts as the blueprint for determining access rights within your application, enabling the OpenFGA API to enforce those rules dynamically. Let's delve deeper into its structure and expand the example provided.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How the Configuration Language Works&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Schema Definition:

&lt;ul&gt;
&lt;li&gt;Specifies the schema version (e.g., 1.1) to ensure compatibility.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Type Definitions:

&lt;ul&gt;
&lt;li&gt;Defines the object types (e.g., user, folder, document) and their possible relationships.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Relation Definitions:

&lt;ul&gt;
&lt;li&gt;Describes relationships (e.g., viewer, writer) and their inheritance or conditions for validity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's explore a scenario: managing access in a simple document system.&lt;/p&gt;

&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users can view or edit documents.&lt;/li&gt;
&lt;li&gt;Documents can belong to folders, and access can be inherited from the parent folder.&lt;/li&gt;
&lt;li&gt;A user can also be the owner of a document or folder, granting full control.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Model Definition&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;model&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;schema&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;folder&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;relations&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;owner:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;viewer:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;viewer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;editor:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;owner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;editor&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;folder&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;document&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;relations&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;owner:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;viewer:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;viewer&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;editor:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;owner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;editor&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;from&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;define&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;parent_folder:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;folder&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation of the Example&lt;/strong&gt;&lt;br&gt;
Key Elements&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users:

&lt;ul&gt;
&lt;li&gt;Represents individuals in the system.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Folders and Documents:

&lt;ul&gt;
&lt;li&gt;Folders can contain other folders or documents.&lt;/li&gt;
&lt;li&gt;Access rules for documents can be inherited from the parent folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Relations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;owner:

&lt;ul&gt;
&lt;li&gt;Direct ownership grants all permissions for a folder or 
document.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;viewer:

&lt;ul&gt;
&lt;li&gt;A user can view a folder or document if they are:&lt;/li&gt;
&lt;li&gt;Explicitly added as a viewer.&lt;/li&gt;
&lt;li&gt;A viewer of the parent folder (inheritance).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;editor:

&lt;ul&gt;
&lt;li&gt;A user can edit a folder or document if they are:&lt;/li&gt;
&lt;li&gt;Explicitly added as an editor.&lt;/li&gt;
&lt;li&gt;The owner.&lt;/li&gt;
&lt;li&gt;An editor of the parent folder (inheritance).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  How OpenFGA Handles These Scenarios
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;When checking if Bob can view &lt;code&gt;document:budget&lt;/code&gt;,
OpenFGA evaluates:

&lt;ul&gt;
&lt;li&gt;Is Bob explicitly a viewer of document:budget?&lt;/li&gt;
&lt;li&gt;Is Bob a viewer of folder:project (the parent folder)?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;When checking if Alice can edit &lt;code&gt;document:proposal&lt;/code&gt;,
OpenFGA evaluates:

&lt;ul&gt;
&lt;li&gt;Is Alice the owner or editor of document:proposal?&lt;/li&gt;
&lt;li&gt;Does Alice inherit edit permissions from the parent folder?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
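&lt;p&gt;The two evaluations above can be sketched in a few lines of Python. The tuples for Bob, Alice, and &lt;code&gt;folder:project&lt;/code&gt; are hypothetical, and this toy resolver hard-codes the model's viewer/editor rules rather than interpreting the DSL, so treat it as an illustration only:&lt;/p&gt;

```python
# Hypothetical tuples backing the two checks above.
tuples = {
    ("folder:project", "viewer", "user:bob"),           # Bob views the project folder
    ("document:budget", "parent_folder", "folder:project"),
    ("document:proposal", "owner", "user:alice"),       # Alice owns the proposal
}

def has(obj, rel, user):
    return (obj, rel, user) in tuples

def parents(obj):
    return [p for (o, r, p) in tuples if o == obj and r == "parent_folder"]

def can_view(obj, user):
    # viewer: [user] or viewer from parent_folder
    return has(obj, "viewer", user) or any(can_view(p, user) for p in parents(obj))

def can_edit(obj, user):
    # editor: [user] or owner or editor from parent_folder
    return (has(obj, "editor", user) or has(obj, "owner", user)
            or any(can_edit(p, user) for p in parents(obj)))

print(can_view("document:budget", "user:bob"))      # True, inherited from the folder
print(can_edit("document:proposal", "user:alice"))  # True, via ownership
```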

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Authorization is a crucial aspect of application security, ensuring that users can only access resources they are entitled to. OpenFGA, inspired by Google's Zanzibar, provides a powerful, scalable solution to implement fine-grained, relationship-based authorization. By decoupling authorization logic from application code, OpenFGA offers a flexible and centralized approach to managing access control in a way that adapts to evolving requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity: OpenFGA's Configuration Language allows developers to easily define authorization models using intuitive relationships.&lt;/li&gt;
&lt;li&gt;Scalability: OpenFGA’s ability to handle millions of objects and permissions makes it ideal for modern applications.&lt;/li&gt;
&lt;li&gt;Flexibility: Support for Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Relationship-Based Access Control (ReBAC) ensures diverse authorization needs can be met.&lt;/li&gt;
&lt;li&gt;Efficiency: Inherited and conditional relationships streamline access control across hierarchical structures.&lt;/li&gt;
&lt;li&gt;Integration: With SDKs, APIs, and CLI tools, OpenFGA fits seamlessly into existing development workflows.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openfga</category>
      <category>opensource</category>
      <category>api</category>
    </item>
    <item>
      <title>Harness Authentication Capabilities</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Mon, 21 Oct 2024 11:24:32 +0000</pubDate>
      <link>https://dev.to/afzal442/harness-authentication-capabilities-30ec</link>
      <guid>https://dev.to/afzal442/harness-authentication-capabilities-30ec</guid>
      <description>&lt;h2&gt;
  
  
  An Overview of Harness Platform
&lt;/h2&gt;

&lt;p&gt;Harness is an Enterprise DevOps platform that provides a comprehensive set of tools and services to help organizations deliver software faster and more securely. Harness Capabilities authentication process is designed to ensure that only authorized users have access to the platform and its resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Harness Authentication Overview
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Harness&lt;/strong&gt; authentication provides a secure way for users to access and interact with Harness capabilities. It offers various authentication mechanisms to suit a wide range of customer needs. This guide walks you through Harness' authentication capabilities, explains the options available, and helps you decide which method is best for your organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supported Authentication Mechanisms
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Login via a Harness Account or Public OAuth Providers&lt;/strong&gt;: Harness allows users to authenticate using either a Harness account (email and password) or single sign-on with a range of public OAuth 2.0 providers like Google, GitHub, and GitLab. This method is ideal for organizations that want either a Harness-managed user account system or OAuth providers for authentication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanntoswl8tk40eqh9uvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanntoswl8tk40eqh9uvg.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, if a user John (registered with Harness) logs into his Harness account using an OAuth 2.0 provider such as GitHub, he must also be registered with GitHub under the same email address, e.g. &lt;a href="mailto:JohnOAuth20@outlook.com"&gt;JohnOAuth20@outlook.com&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SAML Provider&lt;/strong&gt;: Harness supports Single Sign-On (SSO) via Security Assertion Markup Language (SAML), which allows organizations to integrate with an identity provider (IdP) such as Okta, Azure AD, or Google Workspace. This option is best for organizations that already have a centralized IdP and want to provide a seamless, secure login experience across their applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf80ig37jlm6s9e4k36g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf80ig37jlm6s9e4k36g.png" alt="Image description" width="723" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, if a user John (registered with Harness) logs into his Harness account using a SAML provider such as Okta, he must also be registered with an Okta account under the same email address.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LDAP Provider&lt;/strong&gt;: Harness also supports Lightweight Directory Access Protocol (LDAP), which allows organizations to integrate with internal directory services like Microsoft Active Directory (AD) or OpenLDAP. This method is ideal for organizations with an established LDAP infrastructure that want to manage users centrally through their directory service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  How to Set Up Authentication in Harness:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Setting Up Login via a Harness Account or OAuth Providers&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to Account Settings → Authentication.&lt;/li&gt;
&lt;li&gt;Select either Enable Harness Account Authentication or choose an OAuth Provider (e.g., Google, GitHub, GitLab).&lt;/li&gt;
&lt;li&gt;Enforce password policies: Ensure that users create and use strong passwords that meet complexity requirements, and periodically expire passwords&lt;/li&gt;
&lt;li&gt;Enforce lockout after failed logins: Implement a lockout mechanism that automatically blocks access after a specified number of consecutive failed login attempts.&lt;/li&gt;
&lt;li&gt;Enforce two factor authentication: Implement two-factor authentication (2FA) for all Harness users to add an extra layer of security.&lt;/li&gt;
&lt;li&gt;Follow the provider’s instructions to configure OAuth credentials, such as Client ID and Client Secret.&lt;/li&gt;
&lt;li&gt;Once configured, users can log in via Harness-managed credentials or their OAuth provider’s login flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setting Up SAML Authentication&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to Account Settings → Authentication.&lt;/li&gt;
&lt;li&gt;Select Enable SAML.&lt;/li&gt;
&lt;li&gt;Provide the required SAML metadata from your IdP (such as Okta or Azure AD), including the SAML Metadata in XML format. &lt;/li&gt;
&lt;li&gt;Configure the required SAML attributes (e.g., Entity Id).&lt;/li&gt;
&lt;li&gt;Save the configuration and test logging in via your IdP.&lt;/li&gt;
&lt;li&gt;Note: before enabling SAML, you should first disable any configured public OAuth providers. To use SAML SSO, Harness users must register in Harness and with the SAML provider using the same email address, as in the example above.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Set up vanity URL
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;You can access app.harness.io using your own unique subdomain URL in the following format:
&lt;code&gt;https://{company}.harness.io&lt;/code&gt;, where &lt;code&gt;{company}&lt;/code&gt; is the name of your account.&lt;/li&gt;
&lt;li&gt;To set this up, contact Harness Support.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Restrict email domains
&lt;/h4&gt;

&lt;p&gt;You can allow (whitelist) only certain email domains for use in login credentials, e.g. gmail.com.&lt;/p&gt;

&lt;h4&gt;
  
  
  Set inactive and absolute session timeout
&lt;/h4&gt;

&lt;p&gt;You can set an inactive session timeout (in minutes) for automatic logout when there has been no activity. Similarly, you can set an absolute session timeout (in minutes) that logs a user out regardless of activity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Harness offers a range of authentication options to meet the needs of organizations of all sizes, from small teams leveraging public OAuth providers to large enterprises managing users with SAML or LDAP providers. Choosing the right method depends on your existing infrastructure, security requirements, and user management needs. By following the setup guides, you can ensure that your organization’s authentication is secure, scalable, and easy to manage. That is where Harness stands out and makes a difference.&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/platform/authentication/authentication-overview" rel="noopener noreferrer"&gt;https://developer.harness.io/docs/platform/authentication/authentication-overview&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>oauth</category>
      <category>sso</category>
      <category>saas</category>
    </item>
    <item>
      <title>Optimising Performance: A Deep Dive into Caching Strategies with AWS Services</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Sun, 28 Jan 2024 11:14:36 +0000</pubDate>
      <link>https://dev.to/afzal442/optimising-performance-a-deep-dive-into-caching-strategies-with-aws-services-1gmj</link>
      <guid>https://dev.to/afzal442/optimising-performance-a-deep-dive-into-caching-strategies-with-aws-services-1gmj</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of cloud computing, optimizing performance is a top priority for businesses leveraging AWS. One crucial aspect of achieving this optimization is the implementation of efficient caching strategies. In this blog post, we will explore the benefits, use cases, and scenarios surrounding popular AWS caching services such as &lt;em&gt;DynamoDB Accelerator&lt;/em&gt; (DAX), &lt;em&gt;Amazon ElastiCache&lt;/em&gt;, etc. Additionally, we will delve into how these caching strategies can be integrated with &lt;em&gt;AWS CloudFront&lt;/em&gt; and &lt;em&gt;API Gateway&lt;/em&gt; for enhanced performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Caching Strategies:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Efficient caching is the key to reducing latency and improving the overall responsiveness of applications. By strategically implementing caching strategies, businesses can achieve the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Improved Response Time: Caching allows frequently requested data to be stored closer to the application, significantly reducing the time it takes to retrieve information.&lt;/li&gt;
&lt;li&gt;Cost Optimization: Caching minimizes the need for repeated requests to backend services, resulting in lower compute and data transfer costs.&lt;/li&gt;
&lt;li&gt;Scalability: Caching services enable applications to scale more effectively by offloading the backend infrastructure and distributing the load across multiple nodes.&lt;/li&gt;
&lt;/ol&gt;
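&lt;p&gt;The caching services discussed below all build on some variant of the cache-aside pattern: check a fast cache first, and fall back to the backing store on a miss. A minimal, self-contained sketch of that pattern, where an in-memory dict stands in for DAX or ElastiCache and another dict for the database:&lt;/p&gt;

```python
import time

class CacheAside:
    """Minimal cache-aside: serve from the cache while fresh, else read through."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.cache = {}   # key -> (value, expiry timestamp)
        self.misses = 0

    def get(self, key, load):
        entry = self.cache.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                          # cache hit
        self.misses += 1
        value = load(key)                            # e.g. a DynamoDB or SQL read
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

db = {"user:1": "Alice"}                             # stand-in backing store
cache = CacheAside(ttl_seconds=60)
cache.get("user:1", db.get)                          # miss: reads the store
cache.get("user:1", db.get)                          # hit: served from memory
print(cache.misses)                                  # 1
```

&lt;p&gt;The TTL is the key design knob: a longer TTL means fewer backend reads but staler data, which is exactly the trade-off you tune in DAX and ElastiCache as well.&lt;/p&gt;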

&lt;p&gt;Let's explore some common use cases for caching strategies&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DynamoDB Accelerator (DAX) with CloudFront:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt;&lt;br&gt;
DynamoDB is a highly scalable and managed NoSQL database service, but frequent read operations can impact response times. By integrating DynamoDB Accelerator (DAX) with CloudFront, you can create a powerful combination for enhanced read performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lv49mloduhn3six4idh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lv49mloduhn3six4idh.png" alt="Image description" width="489" height="525"&gt;&lt;/a&gt;&lt;br&gt;
Diagram depicting the integration of DAX with DynamoDB for accelerated read performance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read Acceleration: DAX sits between your application and DynamoDB, caching frequently accessed items. CloudFront then distributes this cached data globally through its content delivery network.&lt;/li&gt;
&lt;li&gt;Reduced Latency: As CloudFront caches and serves data from edge locations, users experience significantly reduced latency when accessing frequently requested DynamoDB items.&lt;/li&gt;
&lt;li&gt;Cost Savings: By minimising the load on DynamoDB, CloudFront helps reduce read capacity units, leading to cost savings.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;ElastiCache with CloudFront:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt;&lt;br&gt;
Amazon ElastiCache is a fully managed, in-memory caching service. When combined with CloudFront, it creates a robust solution for both static and dynamic content delivery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gbjcqt852nbd42uyi6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gbjcqt852nbd42uyi6p.png" alt="Image description" width="800" height="106"&gt;&lt;/a&gt;&lt;br&gt;
Illustration showcasing ElastiCache as a caching layer for frequently executed database queries.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Static Content Caching: CloudFront can be configured to cache and distribute static content globally, reducing latency for end-users.&lt;/li&gt;
&lt;li&gt;Dynamic Content Acceleration: ElastiCache, integrated with CloudFront, serves as a caching layer for frequently executed database queries, reducing the load on backend databases.&lt;/li&gt;
&lt;li&gt;Global Distribution: CloudFront ensures that cached content is available at edge locations worldwide, providing a faster and more responsive experience for users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgueirpwxk6f9xzoufdpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgueirpwxk6f9xzoufdpy.png" alt="Image description" width="599" height="570"&gt;&lt;/a&gt;&lt;br&gt;
Diagram illustrating the integration of CloudFront with ElastiCache for dynamic content caching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Summary:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
In conclusion, adopting effective caching strategies with AWS services is pivotal for achieving optimal performance in cloud-based applications. Whether leveraging DAX or ElastiCache, businesses can significantly enhance response times, reduce costs, and improve scalability. Integrating these caching strategies with AWS CloudFront and API Gateway further amplifies their impact, ensuring a seamless and efficient user experience. As you embark on your journey to optimise performance on AWS, understanding and implementing these caching strategies will undoubtedly be a game-changer for your applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>caching</category>
    </item>
    <item>
      <title>Unlocking the Power of AWS WAF: Safeguarding Your Cloudfront and Load Balancer Services</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Sat, 20 Jan 2024 10:04:05 +0000</pubDate>
      <link>https://dev.to/afzal442/unlocking-the-power-of-aws-waf-safeguarding-your-cloudfront-and-load-balancer-services-3k1c</link>
      <guid>https://dev.to/afzal442/unlocking-the-power-of-aws-waf-safeguarding-your-cloudfront-and-load-balancer-services-3k1c</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Protect your Web App from vulnerabilities by Unleashing AWS WAF for Cloudfront and Load Balancer Services&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of cloud computing, ensuring robust security measures is paramount. Amazon Web Services (AWS) offers a comprehensive solution with its Web Application Firewall (WAF), particularly when integrated seamlessly with Cloudfront and Load Balancer services. This blog explores the benefits, use cases, and scenarios of leveraging AWS WAF for enhanced security in three key setups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2vbnirxt9tcll8ybqf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2vbnirxt9tcll8ybqf1.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s speak about the benefits of AWS WAF:&lt;/strong&gt;&lt;br&gt;
Before diving into specific scenarios, let's highlight the overarching benefits of AWS WAF. This web application firewall provides a layer of protection against common web exploits, such as SQL injection and cross-site scripting (XSS). By seamlessly integrating with AWS services like Cloudfront and Load Balancer, AWS WAF ensures a centralised and effective approach to safeguarding your applications. Some other key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Granular Control: Fine-tune security rules to suit specific application needs.&lt;/li&gt;
&lt;li&gt;Threat Intelligence Integration: Leverage AWS Threat Intelligence feeds for proactive security.&lt;/li&gt;
&lt;li&gt;Automated Protections: Automatically block common threats and respond to emerging attack patterns.&lt;/li&gt;
&lt;li&gt;Scalability: Scale security measures seamlessly with growing application demands.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let’s discuss a few use cases:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Use Case 1:&lt;/strong&gt; EC2 Instance with Network Access Control List (NACL):&lt;/em&gt;&lt;br&gt;
In the first scenario, we examine the traditional setup of an EC2 instance protected by a Network Access Control List (NACL). A NACL offers basic network-level security but does not protect against application-layer attacks from a vulnerable client IP address, where the attack surface is higher. The diagram below illustrates this configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui9u6x5n4wijkas40yq7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui9u6x5n4wijkas40yq7.png" alt="Image description" width="185" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lucidchart architecture diagram is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyjddkoefbkfm64qm1jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyjddkoefbkfm64qm1jr.png" alt="Image description" width="735" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Use Case 2:&lt;/strong&gt; EC2 Instance Followed by Application Load Balancer (ALB) with NACL or WAF:&lt;/em&gt;&lt;br&gt;
Moving to a more scalable architecture, the second scenario involves an EC2 instance behind an Application Load Balancer (ALB), further fortified by NACL or AWS WAF. This setup not only distributes traffic across multiple instances but also provides enhanced security at both the network and application layers. The following diagram visualizes this configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkmp6f9n8jkm0oc497y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkkmp6f9n8jkm0oc497y.png" alt="Image description" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a Lucidchart architecture diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsn6la35e1zn3rrfouet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsn6la35e1zn3rrfouet.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Use Case 3:&lt;/strong&gt; EC2 Instance, ALB, and Cloudfront with WAF:&lt;/em&gt;&lt;br&gt;
For the most robust security and performance, the third scenario combines an EC2 instance, ALB, and Cloudfront, with AWS WAF ensuring protection at every level. Cloudfront, a content delivery network (CDN), accelerates content delivery while AWS WAF safeguards against web exploits and filters out malicious IP addresses. The diagram below showcases this comprehensive setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09jsbw75w6qrwj4yfq62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09jsbw75w6qrwj4yfq62.png" alt="Image description" width="800" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a Lucidchart architecture diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnq0iavkof5cn5byys1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnq0iavkof5cn5byys1x.png" alt="Image description" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;br&gt;
In conclusion, leveraging AWS WAF in conjunction with Cloudfront and Load Balancer services provides a powerful and flexible approach to securing your web applications. Whether opting for a basic EC2 instance with NACL or a sophisticated setup involving ALB and Cloudfront, AWS WAF ensures a robust defense against a variety of cyber threats. As you architect and refine your AWS infrastructure, consider these scenarios to enhance both the performance and security of your applications in the cloud.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>waf</category>
      <category>cloudfront</category>
      <category>loadbalancer</category>
    </item>
    <item>
      <title>My Experience as an LFX Mentee for Jaeger Project</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Mon, 04 Sep 2023 06:52:48 +0000</pubDate>
      <link>https://dev.to/afzal442/my-experience-as-an-lfx-mentee-for-jaeger-project-5fpo</link>
      <guid>https://dev.to/afzal442/my-experience-as-an-lfx-mentee-for-jaeger-project-5fpo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I am thrilled to share my experience as an LFX Mentee for Jaeger, an open-source project that focuses on observability within the cloud native ecosystem. My passion for open source projects and emerging technologies led me to explore the world of DevOps, where monitoring and observability are paramount.&lt;br&gt;
I began this journey by applying for the CNCF LFX Mentorship program. I highlighted my enthusiasm for DevOps and explained how it intertwined with monitoring and observability, aligning perfectly with Jaeger's objectives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NwVJG9au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5t2zx4ucc56e1n9xflke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NwVJG9au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5t2zx4ucc56e1n9xflke.png" alt="Image description" width="630" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being selected as a mentee for Jaeger was a moment of great excitement. It affirmed my commitment to diving deep into the observability ecosystem and experiencing the day-to-day tasks of managing infrastructure in a cloud-native environment.&lt;br&gt;
My mentor and the Jaeger community provided an exceptional onboarding experience. I gained insights into the project's history, architecture, and its role in the CNCF landscape. Understanding the fundamentals was crucial for my contributions.&lt;br&gt;
Throughout my mentorship, I actively contributed to Jaeger, where my main task was to refactor code, migrating from the OpenTracing package to the OpenTelemetry package. This experience allowed me to sharpen my coding skills while understanding the complexities of a real-world, production-ready open-source project. It also gave me a deeper understanding of observability: I learned how Jaeger handles microservices and how observability tools like OpenTelemetry can be integrated seamlessly. This knowledge was invaluable for my DevOps aspirations, and I understood the importance of tracing and monitoring for troubleshooting and optimizing performance.&lt;/p&gt;

&lt;p&gt;This mentorship significantly boosted my confidence as a developer and my understanding of open-source contributions. I honed my skills in coding, debugging, and collaboration, which are all crucial for a career in DevOps.  I now have a clearer path towards my career goals and a network of professionals to learn from and collaborate with in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Process
&lt;/h2&gt;

&lt;p&gt;I discovered the Jaeger project on the CNCF LFX Mentorship page. I thought this was a great opportunity to work on an open source project, which I had been dreaming about. I also had the right technology stack, so I submitted my resume right before the deadline, highlighting my relevant experience, skills, and contributions to the tech community. In my cover letter, I went the extra mile by proposing concrete tasks to be undertaken during the mentorship period. This demonstrated my commitment and readiness to contribute meaningfully to the project. It also showcased my understanding of the project's needs and how my skills could address them. I think this is crucial to giving your proposal a better chance of getting selected.&lt;/p&gt;

&lt;p&gt;After submitting my application, there was a period of anticipation. Waiting for the results can be a nervous but exciting time, as you hope for the opportunity to work on a project you're passionate about. I was lucky enough to receive the good news that I got selected for the mentorship. This was a moment of great joy and validation of my efforts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1m_fpt9Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0w9v72osdo9fclfkgku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1m_fpt9Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0w9v72osdo9fclfkgku.png" alt="Image description" width="683" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Process
&lt;/h2&gt;

&lt;p&gt;The project I applied for was “Upgrade internal use of tracing to OpenTelemetry” under the Jaeger project category, which aimed to upgrade the Jaeger backend to use the OpenTelemetry tracing API and SDK directly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iKFtp6ki--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud5bv6489zr1hf6bjxlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iKFtp6ki--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud5bv6489zr1hf6bjxlz.png" alt="Image description" width="621" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the first two weeks of the project, I got familiar with the workflow and some details of the Jaeger code base. In the next two weeks, I started contributing to a sub-part of the project, introducing a new package built around a wrapper object. This wrapper streamlined the migration from OpenTracing to OpenTelemetry by providing a convenient interface.&lt;/p&gt;
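&lt;p&gt;The mentorship's wrapper package itself isn't reproduced in this post; as an illustrative sketch only, the official OpenTelemetry OpenTracing bridge provides the same kind of migration shim, letting legacy OpenTracing call sites emit OpenTelemetry spans:&lt;/p&gt;

```go
package main

import (
	ot "github.com/opentracing/opentracing-go"

	"go.opentelemetry.io/otel"
	otbridge "go.opentelemetry.io/otel/bridge/opentracing"
)

func main() {
	// An OpenTelemetry tracer from the globally configured provider.
	otelTracer := otel.Tracer("jaeger-migration-example")

	// NewTracerPair returns an OpenTracing-compatible bridge tracer plus a
	// wrapper TracerProvider backed by OpenTelemetry.
	bridgeTracer, wrappedProvider := otbridge.NewTracerPair(otelTracer)

	// Legacy code that still calls the OpenTracing API now flows through
	// the bridge into OpenTelemetry.
	ot.SetGlobalTracer(bridgeTracer)

	// New code uses the wrapped provider directly.
	otel.SetTracerProvider(wrappedProvider)
}
```

&lt;p&gt;With both globals set, old and new instrumentation land in the same OpenTelemetry pipeline, which is what makes an incremental, PR-by-PR migration practical.&lt;/p&gt;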

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mfV7Q3Ic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8tzz96iez9ryap78ccg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mfV7Q3Ic--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8tzz96iez9ryap78ccg.png" alt="Image description" width="621" height="232"&gt;&lt;/a&gt;&lt;br&gt;
This phase of work required a deep understanding of both technologies and the project's architecture. During this time, I studied one of the many blogs written by Yuri about OpenTracing and met with my mentor Albert to get a clear understanding of the blog and some of the code logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JOKsXYLV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vmnluajpaz0or6byghn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JOKsXYLV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vmnluajpaz0or6byghn.png" alt="Image description" width="572" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The tasks related to the HotROD application migration were unfamiliar to me at first. Therefore, I had to ask my mentors for technical support from time to time, and they were always very responsive. I was greatly impressed by my mentors’ extensive knowledge and experience in this field. Under their guidance, I was finally able to put together all the milestones for my task. Later, in order to track my work, I created a tracking checklist in another doc as suggested.&lt;br&gt;
However, during the subsequent coding work, I encountered various problems. The most challenging part was enabling the OpenTelemetry SDK and APIs everywhere the OpenTracing package was used, which I gradually resolved by submitting small PRs following the milestones created for my project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--775pyT_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bl6dek0e76cm5i392psa.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--775pyT_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bl6dek0e76cm5i392psa.jpeg" alt="Image description" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;My project process involved a systematic journey of learning, contributing, and problem-solving. Hence, I was able to demonstrate adaptability by seeking help when needed and effectively collaborating with mentors and the community.&lt;br&gt;
The experience of encountering challenges and overcoming them will undoubtedly contribute to my growth as a developer and open-source contributor. My commitment to documenting my work and reflecting on my experiences showcases a proactive approach to continuous improvement. This project not only contributed to the Jaeger project but also enriched my own skill set and understanding of tracing technologies.&lt;br&gt;
If you’re interested in contributing to the LFX Mentorship projects, start your journey now by getting exposed to the community with small contributions before you apply for mentorship. &lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>lfx</category>
      <category>jaeger</category>
      <category>opentelemetry</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How I debugged through AWS ec2 instance creation when SSH failed to work</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Wed, 12 Jul 2023 11:05:03 +0000</pubDate>
      <link>https://dev.to/afzal442/how-i-debugged-through-aws-ec2-instance-creation-when-ssh-failed-to-work-1pc2</link>
      <guid>https://dev.to/afzal442/how-i-debugged-through-aws-ec2-instance-creation-when-ssh-failed-to-work-1pc2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Creating and connecting to an Amazon Web Services (AWS) EC2 instance is a fundamental skill for anyone working with cloud infrastructure. However, even experienced users can encounter challenges, such as failed SSH connections, when attempting to connect to newly created instances. In this blog post, we will explore the steps I took to debug an SSH failure while connecting to an EC2 instance, specifically when I created an instance without a key pair. By following these troubleshooting techniques, you can quickly identify and resolve similar issues.&lt;/p&gt;

&lt;p&gt;When I started to create an EC2 instance, I came to the step where I needed to select a key pair (login) option. I chose &lt;code&gt;without key-pair&lt;/code&gt; and continued, as shown below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sSa4fQLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1lopmp7zza771cxihy93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sSa4fQLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1lopmp7zza771cxihy93.png" alt="Image description" width="799" height="517"&gt;&lt;/a&gt;&lt;br&gt;
After finishing all the steps, I started the instance and, to connect to it through &lt;code&gt;SSH&lt;/code&gt;, grabbed the command and ran it in the terminal.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xbQpIDCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vl7o7e6qyjhrnkw17let.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xbQpIDCH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vl7o7e6qyjhrnkw17let.png" alt="Image description" width="800" height="497"&gt;&lt;/a&gt;&lt;br&gt;
Next, I tried to set things up as per the instructions given for an SSH connection without a key pair, but failed.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjyQLHE9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p910edf3kfwtlvwlipr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjyQLHE9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p910edf3kfwtlvwlipr7.png" alt="Image description" width="723" height="126"&gt;&lt;/a&gt;&lt;br&gt;
After retrying several times, I realised I was missing something. Then I learned that I can't connect to the instance without a key pair.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iJe-esq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otdtvtf7lx6okc57d2vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iJe-esq7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otdtvtf7lx6okc57d2vh.png" alt="Image description" width="721" height="68"&gt;&lt;/a&gt;&lt;br&gt;
So I chose another option to connect to the instance.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fnqM_xs9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9xufrenm11rzlvwq5uy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fnqM_xs9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9xufrenm11rzlvwq5uy2.png" alt="Image description" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eRfF6j4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2d5i21cfma0o5jxavnis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eRfF6j4s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2d5i21cfma0o5jxavnis.png" alt="Image description" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I was able to connect through EC2 Instance Connect, which opens a new tab with a web-based console.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g7dncJki--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlls0oem5a9jc8yyyad1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g7dncJki--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlls0oem5a9jc8yyyad1.png" alt="Image description" width="768" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to connect to an EC2 instance using SSH and avoid failures like mine, you can follow the steps below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Step 1: Confirm SSH Configuration&lt;br&gt;
&lt;code&gt;The first step in troubleshooting an SSH connection issue is to ensure that the SSH configuration on the EC2 instance is correctly set up.&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pY_3is9t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36kaa9nkrq8rhbynoh8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pY_3is9t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36kaa9nkrq8rhbynoh8z.png" alt="Image description" width="799" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Step 2: Create and Associate a Key Pair&lt;br&gt;
&lt;code&gt;If you created the EC2 instance without a key pair, you won't be able to establish an SSH connection. In this case, you will need to create a new key pair and associate it with the instance:&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PaorY9DQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvr1e7xzobvptcld1tb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PaorY9DQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qvr1e7xzobvptcld1tb4.png" alt="Image description" width="717" height="664"&gt;&lt;/a&gt;&lt;br&gt;
Alternatively, we can proceed as follows:&lt;br&gt;
Generate a New Key Pair: In the EC2 Dashboard, navigate to the "Key Pairs" section and create a new key pair. Save the private key (.pem file) securely on your local machine.&lt;/p&gt;

&lt;p&gt;Associate the Key Pair with the Instance: note that AWS does not let you attach a new key pair to an already-running instance from the console. Instead, add the new public key to &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; on the instance (for example via EC2 Instance Connect), or create an AMI from the instance and launch a replacement with the new key pair selected.&lt;/p&gt;
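&lt;p&gt;As a rough sketch (the key name, AMI user, and IP below are placeholders), the same key-pair workflow can also be done with the AWS CLI:&lt;/p&gt;

```shell
# Create a new key pair and save the private key locally (placeholder name)
aws ec2 create-key-pair \
  --key-name my-debug-key \
  --query 'KeyMaterial' \
  --output text > my-debug-key.pem

# Restrict permissions, or ssh will refuse to use the key
chmod 400 my-debug-key.pem

# Connect once the matching public key is on the instance
# (the user name varies by AMI: ec2-user, ubuntu, admin, ...)
ssh -i my-debug-key.pem ec2-user@203.0.113.10
```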

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Debugging SSH connection issues when creating an AWS EC2 instance can be a frustrating experience, but by following a systematic troubleshooting approach, you can identify and resolve the underlying problems. In this blog post, we explored the steps to confirm SSH configuration, create and associate a key pair, and establish an SSH connection. Remember to double-check security group rules, ensure the instance has a public IP, and verify the permissions and path of the private key. By mastering these techniques, you can effectively troubleshoot SSH connection failures and confidently manage your EC2 instances in AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>ssh</category>
      <category>webconsole</category>
    </item>
    <item>
      <title>Developing a Web App with Gin framework, integrating it with psql DB and my GO HACK Hackathon Experience</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Thu, 26 May 2022 07:58:36 +0000</pubDate>
      <link>https://dev.to/afzal442/developing-a-web-app-with-gin-framework-integrating-it-with-psql-db-and-my-go-hack-hackathon-experience-mpa</link>
      <guid>https://dev.to/afzal442/developing-a-web-app-with-gin-framework-integrating-it-with-psql-db-and-my-go-hack-hackathon-experience-mpa</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Before I deep-dive into the architecture of the app I rebuilt during the hackathon, I would like to share my experience in my own words.&lt;br&gt;
When I first came across the #GO Hack Challenge on &lt;a href="https://gohack.devpost.com/" rel="noopener noreferrer"&gt;devpost&lt;/a&gt;, I was overwhelmed trying to find tracks I could fit into and tutorials to enrich my knowledge and get started. It was a real learning opportunity for anyone who is a newbie and wants to contribute or build something under the hood. &lt;br&gt;
I stepped into the open source projects under the &lt;a href="https://github.com/cs3org" rel="noopener noreferrer"&gt;CERN Foundation&lt;/a&gt;, where I didn’t even know what the ecosystem around the project was, and I was quite curious. Next, I delved into one of the &lt;a href="https://github.com/cs3org/reva" rel="noopener noreferrer"&gt;projects&lt;/a&gt; for a few days, looked into a few &lt;a href="https://github.com/cs3org/reva/issues" rel="noopener noreferrer"&gt;issues&lt;/a&gt;, and made a small contribution by fixing one of the &lt;a href="https://github.com/cs3org/reva/pull/2841" rel="noopener noreferrer"&gt;bugs&lt;/a&gt;. I then moved to another track, where we needed to build an app in Go. I came to know about generic functions, released in Go &amp;gt;= 1.18. I managed to rebuild an app using the Gin-Gonic framework and make use of generic functions in it.&lt;/p&gt;

&lt;p&gt;A monolithic service architecture lets developers build a product more simply and deploy it more easily, which in turn helps them meet the product's needs quickly. It also ultimately improves development productivity. With that, I am going to give you a short demonstration of how to build a web app the monolithic way.&lt;/p&gt;

&lt;p&gt;Well, this looks scary if you have never heard about this architecture. But once you start building things on top of it, you gradually understand its scope, i.e. when and how to use it. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dzx3sly7ihkxwyxlnzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dzx3sly7ihkxwyxlnzt.png" alt="api-conf"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We take the resource management app as an example of building a monolithic service here. The app consists of multiple modules, including add resource, get resource, and login modules. Each module has its own independent business logic, and modules may also depend on one another. For example, the get resource (/resource/get-resc) module depends on the login module. In a monolithic application this kind of dependency is usually accomplished by method calls between modules. Monolithic services generally share storage resources, such as MySQL; in our case, it is PostgreSQL.&lt;/p&gt;

&lt;p&gt;The overall architecture of monolithic services is relatively simple, which is also the advantage of monolithic services.&lt;/p&gt;
&lt;h1&gt;
  
  
  Development
&lt;/h1&gt;

&lt;p&gt;This section describes how to quickly implement the resource management app as a monolithic service based on the Gin framework for Go.&lt;br&gt;
With this framework, we can quickly build, configure and deploy resources with a few commands. We can create and store our APIs and configuration in a centralized repository.&lt;/p&gt;
&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;p&gt;To get started, I assume you have Go (version &amp;gt;= 1.18 for this app) already installed and configured locally. For more details, please follow the official docs. Once the requirements are set up, we will proceed with the next steps to play around with the Gin package for APIs and the GORM package for database access.&lt;/p&gt;

&lt;p&gt;So, let’s install plugins and libraries step by step;&lt;/p&gt;

&lt;p&gt;After you clone the &lt;a href="https://github.com/Cloud-Hacks/go_hacks_resc_mgmnt_app" rel="noopener noreferrer"&gt;repo&lt;/a&gt;, you will have basic templates to work with for the API. Please ignore the Dockerfile for now.&lt;/p&gt;

&lt;p&gt;After that, install the dependencies as follows; this also updates the installed packages.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;go mod download&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;go mod tidy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into main.go file and create another file inside our &lt;code&gt;routes&lt;/code&gt; dir.&lt;/p&gt;
&lt;h4&gt;
  
  
  User and Resource model API definition
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Resource&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt;        &lt;span class="kt"&gt;int&lt;/span&gt;    &lt;span class="s"&gt;`json:"id"`&lt;/span&gt;
    &lt;span class="n"&gt;Title&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"title"`&lt;/span&gt;
    &lt;span class="n"&gt;Category&lt;/span&gt;  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"category"`&lt;/span&gt;
    &lt;span class="n"&gt;Status&lt;/span&gt;    &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"status"`&lt;/span&gt;
    &lt;span class="n"&gt;Types&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"types"`&lt;/span&gt;
    &lt;span class="n"&gt;Content&lt;/span&gt;   &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"content"`&lt;/span&gt;
    &lt;span class="n"&gt;FileLink&lt;/span&gt;  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"file_link"`&lt;/span&gt;
    &lt;span class="n"&gt;CreatedBy&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;    &lt;span class="s"&gt;`json:"created_by"`&lt;/span&gt;
    &lt;span class="n"&gt;CreatedAt&lt;/span&gt; &lt;span class="kt"&gt;int64&lt;/span&gt;  &lt;span class="s"&gt;`json:"created_at"`&lt;/span&gt;
    &lt;span class="n"&gt;UpdatedBy&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;    &lt;span class="s"&gt;`json:"updated_by"`&lt;/span&gt;
    &lt;span class="n"&gt;UpdatedAt&lt;/span&gt; &lt;span class="kt"&gt;int64&lt;/span&gt;  &lt;span class="s"&gt;`json:"updated_at"`&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt;        &lt;span class="kt"&gt;int&lt;/span&gt;    &lt;span class="s"&gt;`json:"id"`&lt;/span&gt;
    &lt;span class="n"&gt;FirstName&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"first_name"`&lt;/span&gt;
    &lt;span class="n"&gt;LastName&lt;/span&gt;  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"last_name"`&lt;/span&gt;
    &lt;span class="n"&gt;Password&lt;/span&gt;  &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"password"`&lt;/span&gt;
    &lt;span class="n"&gt;Email&lt;/span&gt;     &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"email"`&lt;/span&gt;
    &lt;span class="n"&gt;Role&lt;/span&gt;      &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="s"&gt;`json:"role"`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Resource module API definition
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Add Resource details -&amp;gt; resource/add-resource
View Resource details -&amp;gt; resource/get-resc
Login User -&amp;gt; resource/login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnlfs7lcl0g44wcimbo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnlfs7lcl0g44wcimbo9.png" alt="routes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another way to handle HTTP requests is through &lt;code&gt;http.HandleFunc&lt;/code&gt;, which lets us plug in a generic function, as shown below&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4253wf8n2odhyfnydi7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4253wf8n2odhyfnydi7i.png" alt="handlefunc and generic func"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To learn more about generic functions, you can check this &lt;a href="https://www.freecodecamp.org/news/generics-in-golang/" rel="noopener noreferrer"&gt;link&lt;/a&gt; out.&lt;/p&gt;

&lt;p&gt;Now let's dig into the &lt;a href="https://gorm.io/docs/connecting_to_the_database.html#PostgreSQL" rel="noopener noreferrer"&gt;gorm pkg&lt;/a&gt; and &lt;code&gt;psql&lt;/code&gt; a little.&lt;/p&gt;

&lt;p&gt;Since we are focusing on PostgreSQL here, let's look at the SQL queries it manages, as shown below.&lt;br&gt;
To run the program, you need &lt;code&gt;psql&lt;/code&gt; installed on your system:&lt;/p&gt;

&lt;p&gt;On a Linux environment, run the following in a terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install postgresql -&amp;gt; installs the psql

sudo -u postgres psql -&amp;gt; sudo into user name as postgress

psql --version -&amp;gt; checks version

export RESC_DB_DSN='postgres://USER_NAME:YOUR_PWD@localhost/DB_NAME' psql $RESC_DB_DSN

psql --host=localhost --dbname=DB_NAME --username=USER_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fts77dcnelp4b7kkerq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fts77dcnelp4b7kkerq.png" alt="gorm-psql"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That covers creating the database with &lt;code&gt;psql&lt;/code&gt; and using it through the &lt;code&gt;gorm&lt;/code&gt; pkg.&lt;/p&gt;

&lt;p&gt;Let's run the app using &lt;code&gt;go run main.go&lt;/code&gt;; then, using the Postman API client or the &lt;code&gt;curl&lt;/code&gt; command, you can send requests:&lt;br&gt;
&lt;code&gt;curl localhost:8080/resource/get-resource&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m8dvkw0js3hbart7r4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m8dvkw0js3hbart7r4v.png" alt="g-hack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Using Postman
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxae2l1b4pvuuvmy3a2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxae2l1b4pvuuvmy3a2wj.png" alt="using-postman"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What’s next!
&lt;/h1&gt;

&lt;p&gt;I am looking forward to adding more API endpoints, persisting data across HTTP requests, and integrating with a Kubernetes cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;The above walk-through shows that it is straightforward to develop monolithic services in Go. We define the API routes, wire them to handlers, and fill in the business logic, while &lt;code&gt;gorm&lt;/code&gt; handles the database layer. In this article, I demonstrated how to quickly build a monolithic service based on the go-gin and gorm pkgs and generic functions. I hope you find it insightful and adopt these ideas when building your own app.&lt;/p&gt;

</description>
      <category>webapp</category>
      <category>restapi</category>
      <category>go</category>
      <category>psql</category>
    </item>
  </channel>
</rss>
