<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashutosh Singh</title>
    <description>The latest articles on DEV Community by Ashutosh Singh (@ashutosh5786).</description>
    <link>https://dev.to/ashutosh5786</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F723571%2F26743782-3d80-4383-98a3-e886e0cc97d2.jpeg</url>
      <title>DEV Community: Ashutosh Singh</title>
      <link>https://dev.to/ashutosh5786</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashutosh5786"/>
    <language>en</language>
    <item>
      <title>GitHub Actions + AWS Role Chaining: A Security Upgrade Worth Making</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Mon, 22 Dec 2025 23:41:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/github-actions-aws-role-chaining-a-security-upgrade-worth-making-3ibb</link>
      <guid>https://dev.to/aws-builders/github-actions-aws-role-chaining-a-security-upgrade-worth-making-3ibb</guid>
      <description>&lt;p&gt;&lt;strong&gt;A&lt;/strong&gt;s the year is coming to an end, I’ve been spending some time reflecting and intentionally improving my security knowledge.&lt;/p&gt;

&lt;p&gt;Since joining Muzz, I’ve been exposed to systems that operate at scale: multiple AWS accounts, production-critical pipelines, and infrastructure that really cannot afford loose security practices. One of the most valuable things I learned during this journey is AWS role chaining in GitHub Actions.&lt;/p&gt;

&lt;p&gt;It’s one of those setups that looks complex at first, but once it clicks, you realise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“&lt;em&gt;Yeah… this is how CI/CD should work.&lt;/em&gt;”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I thought I’d share what we did, why we did it, and why you should probably do it too.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem with “Simple” AWS Credentials in CI/CD
&lt;/h3&gt;

&lt;p&gt;Traditionally, CI/CD pipelines access AWS by storing credentials like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;as GitHub Secrets.&lt;/p&gt;

&lt;p&gt;This works, but it comes with problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Long-lived credentials sitting in GitHub&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hard to rotate safely&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Over-permissioned “just in case”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No clear trust boundaries between environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Risk increases as the organisation scales&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As teams grow and environments multiply (shared, dev, staging, prod), this approach doesn’t scale securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Modern Answer: GitHub OIDC + IAM Roles
&lt;/h2&gt;

&lt;p&gt;Instead of static secrets, we moved to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub Actions OpenID Connect (OIDC), which I’ve covered in an earlier post&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM Roles with trust policies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Short-lived AWS credentials&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Role chaining for environment isolation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No AWS secrets stored in GitHub&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Credentials are issued only when a job runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access is tightly scoped and time-limited&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Is AWS Role Chaining?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Role chaining means:
&lt;/h3&gt;

&lt;p&gt;GitHub Actions first assumes a base role (usually in a shared AWS account).&lt;br&gt;
From there, it assumes a target role in another account (dev/prod).&lt;br&gt;
Each hop has explicit trust and permissions.&lt;/p&gt;

&lt;p&gt;Think of it like airport security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub gets through the main gate (shared role)&lt;/li&gt;
&lt;li&gt;Then gets escorted to the correct terminal (dev/prod role)&lt;/li&gt;
&lt;li&gt;No free roaming, no shortcuts&lt;/li&gt;
&lt;/ul&gt;
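&lt;p&gt;To make the chain concrete, here’s a minimal workflow sketch, under the assumption that the base role sits in a shared account and the target role in dev/prod; the account IDs, role names, and region are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;permissions:
  id-token: write # required for GitHub OIDC
  contents: read

steps:
  # Hop 1: OIDC token -&amp;gt; base role in the shared account
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::SHARED_ACCOUNT_ID:role/github-base-role
      aws-region: eu-west-2

  # Hop 2: base-role credentials -&amp;gt; target role in dev/prod
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::TARGET_ACCOUNT_ID:role/target-role
      role-chaining: true
      aws-region: eu-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;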
&lt;h2&gt;
  
  
  High-Level Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix5bnolxn1iothpi95oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix5bnolxn1iothpi95oe.png" alt="Diagram showing the permission flow" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  NOTE
&lt;/h2&gt;

&lt;p&gt;Make sure you’re using the official AWS &lt;code&gt;configure-aws-credentials&lt;/code&gt; GitHub Action with role chaining enabled (&lt;code&gt;role-chaining: true&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;If that approach doesn’t work for your setup, you can always fall back to the good old &lt;code&gt;aws sts assume-role&lt;/code&gt; command as a manual alternative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::ACCOUNT_ID:role/target-role
    role-chaining: true
    aws-region: eu-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OR&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts assume-role \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/target-role \
  --role-session-name github-actions-session
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
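&lt;p&gt;One detail the manual route adds: &lt;code&gt;aws sts assume-role&lt;/code&gt; only returns temporary credentials as JSON, so you still have to export them before later AWS calls pick them up. A small illustrative sketch of that mapping (the credential values below are made up, not real output):&lt;/p&gt;

```python
import json

# Illustrative assume-role response; the real one comes from
# `aws sts assume-role ... --output json`. All values here are fake.
response = json.loads("""
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secretExample",
    "SessionToken": "tokenExample",
    "Expiration": "2025-12-23T00:41:17Z"
  }
}
""")

creds = response["Credentials"]
# Map the response fields onto the env vars the AWS CLI/SDKs read.
exports = "\n".join([
    f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}',
    f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}',
    f'export AWS_SESSION_TOKEN={creds["SessionToken"]}',
])
print(exports)
```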



&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Security isn’t something you “add later,” it’s something you design into your workflows from day one.&lt;/p&gt;

&lt;p&gt;Working with GitHub Actions and AWS role chaining this year changed how I think about CI/CD security. It showed me that you don’t need to slow teams down to be secure; you need better defaults.&lt;/p&gt;

&lt;p&gt;As the year wraps up, I’ve been consciously investing more time in understanding why certain security patterns exist, not just how to implement them. AWS role chaining is one of those patterns that quietly improves everything around it, from auditability to confidence in production deployments.&lt;/p&gt;

&lt;p&gt;If you’re still using long-lived AWS keys in CI/CD, this is one of the cleanest upgrades you can make.&lt;/p&gt;

&lt;p&gt;Hopefully, this helps a few fellow builders ship with a bit more confidence and a lot less risk. If you run into any issues, please comment.&lt;/p&gt;

&lt;p&gt;Happy building&lt;/p&gt;

</description>
      <category>aws</category>
      <category>github</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS Vault Integration</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Fri, 19 Dec 2025 15:05:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-vault-integration-4jac</link>
      <guid>https://dev.to/aws-builders/aws-vault-integration-4jac</guid>
      <description>&lt;h2&gt;
  
  
  Securing AWS Access on My Laptop with AWS Vault
&lt;/h2&gt;

&lt;p&gt;Since I joined Muzz, things have been moving fast. Between onboarding, understanding the platform, CI/CD pipelines, Kubernetes, and AWS infrastructure, my days have been pretty packed.&lt;/p&gt;

&lt;p&gt;But with that pace, I’ve also picked up a few really nice practices, and one of them is &lt;strong&gt;AWS Vault&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before this, like many others, I had AWS credentials sitting locally in &lt;code&gt;~/.aws/credentials&lt;/code&gt;. It works, but let’s be honest, it’s not ideal from a security point of view.&lt;/p&gt;

&lt;p&gt;That’s where AWS Vault comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is AWS Vault?
&lt;/h3&gt;

&lt;p&gt;AWS Vault is an open-source tool that helps you securely store and access AWS credentials on your laptop.&lt;/p&gt;

&lt;p&gt;Instead of keeping long-lived AWS access keys in plain-text files, AWS Vault:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stores credentials securely in your OS keychain&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;macOS → Keychain&lt;/li&gt;
&lt;li&gt;Windows → Credential Manager&lt;/li&gt;
&lt;li&gt;Linux → Secret Service&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generates temporary credentials using AWS STS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prompts you for a password / OS unlock whenever you want to access AWS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Works seamlessly with the AWS CLI, SDKs, and even the AWS Console&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In simple terms:&lt;/p&gt;

&lt;p&gt;You no longer store secrets locally; instead, you unlock access only when needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Matters (And Why We Use It)
&lt;/h3&gt;

&lt;p&gt;At Muzz, security is taken seriously, and AWS Vault fits perfectly into that mindset.&lt;/p&gt;

&lt;p&gt;Here’s why it’s a big improvement over traditional setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No plain-text access keys lying around&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The OS encrypts credentials&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses short-lived credentials instead of permanent ones&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Works nicely with IAM roles and MFA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Forces a conscious “unlock” step before AWS access&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every time I want to access AWS resources, AWS Vault asks for my password, a small bit of friction that is a great trade-off for better security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing AWS Vault
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;macOS&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;brew install --cask aws-vault&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Windows&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;choco install aws-vault&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Linux&lt;/strong&gt;&lt;br&gt;
Download the binary from GitHub or install via Homebrew for Linux.&lt;/p&gt;

&lt;p&gt;Verify installation:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws-vault --version&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding AWS Credentials Securely
&lt;/h3&gt;

&lt;p&gt;To add credentials:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws-vault add &amp;lt;profile-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You’ll be asked for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Access Key ID&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Secret Access Key&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;They are encrypted&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are not stored in plain text&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They’re only used to generate temporary session credentials&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using AWS Vault Day-to-Day
&lt;/h3&gt;

&lt;p&gt;Run a single AWS command:&lt;br&gt;
&lt;code&gt;aws-vault exec muzz -- aws s3 ls&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;Just try to connect to EKS or any other AWS resource, and aws-vault will prompt you for the password.&lt;/p&gt;
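&lt;p&gt;Under the hood, aws-vault reads the same profiles the AWS CLI uses from &lt;code&gt;~/.aws/config&lt;/code&gt;, so role assumption and MFA keep working as usual. A hypothetical profile (the names and ARNs here are illustrative, not from my real setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[profile muzz]
region = eu-west-2
mfa_serial = arn:aws:iam::ACCOUNT_ID:mfa/your-user
role_arn = arn:aws:iam::ACCOUNT_ID:role/developer-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;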

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;AWS Vault handles all of this quietly in the background, which makes it great for both security and developer experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Like Most About AWS Vault
&lt;/h3&gt;

&lt;p&gt;Honestly, the best part is the mental shift it enforces:&lt;/p&gt;

&lt;p&gt;“You don’t own AWS credentials, you borrow them temporarily.”&lt;/p&gt;

&lt;p&gt;If you’re working with AWS regularly, especially on a laptop, AWS Vault is a must-have tool.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Real Monitoring Metrics</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Wed, 04 Dec 2024 21:36:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/monitoring-golden-signals-545f</link>
      <guid>https://dev.to/aws-builders/monitoring-golden-signals-545f</guid>
      <description>&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;If you're starting out in DevOps, or have some experience but haven't yet worked with &lt;strong&gt;Monitoring&lt;/strong&gt;, &lt;strong&gt;Logging&lt;/strong&gt;, or &lt;strong&gt;Alerts&lt;/strong&gt;, this post will help you at least with &lt;strong&gt;Monitoring&lt;/strong&gt; (and &lt;strong&gt;alerts&lt;/strong&gt; a little bit).&lt;/p&gt;

&lt;p&gt;Without monitoring, any application crumbles under its sheer vastness: applications need constant care, otherwise things go haywire or just stop working altogether. By monitoring system metrics, we can detect incidents and events. Monitoring can serve various purposes, such as making sure the site is always healthy, servicing customers, and bringing in revenue, or detecting intrusions and threats. Whatever the reason, monitoring matters, with the end goal of keeping the application/site always healthy.&lt;/p&gt;

&lt;p&gt;Since I'm a DevOps guy, I'm not going into security but rather into monitoring to keep the application healthy, so let's start.&lt;/p&gt;

&lt;p&gt;Prerequisite: AWS account, Helm&lt;/p&gt;

&lt;h2&gt;
  
  
  The Golden Signals
&lt;/h2&gt;

&lt;p&gt;Google suggests these 4 major signals, or &lt;strong&gt;Golden Signals&lt;/strong&gt;, that we need to monitor all the time, as they give critical data about the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Latency
&lt;/h3&gt;

&lt;p&gt;The time required for a request to reach the server and for the response to get back to us.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Traffic
&lt;/h3&gt;

&lt;p&gt;Simply speaking, the number of requests per second received by the server.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Errors
&lt;/h3&gt;

&lt;p&gt;As the name suggests, the number of errors: these can be 4xx/5xx responses, timeouts, or any request that took longer to respond than allowed by your SLOs.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Saturation
&lt;/h3&gt;

&lt;p&gt;In layman's terms, how full the system is: any resource that is close to 100% utilization. It is used to monitor the most constrained or limited resources in a system, like CPU, disk space, bandwidth, etc.&lt;/p&gt;

&lt;p&gt;Again, these are very short explanations; if you want to read more about them, &lt;a href="https://sre.google/sre-book/monitoring-distributed-systems/#xref_monitoring_golden-signals" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let's talk about &lt;strong&gt;Tooling&lt;/strong&gt;. I've been using Prometheus and Grafana as my go-to tools for monitoring. &lt;/p&gt;

&lt;p&gt;So let's start with the hands-on.&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP I
&lt;/h4&gt;

&lt;p&gt;I'm doing this on AWS, specifically on EKS. Create the cluster using &lt;code&gt;eksctl&lt;/code&gt;; I like &lt;strong&gt;eksctl&lt;/strong&gt; as it creates everything from the VPC to the node group with minimal input. You can find your suitable settings &lt;a href="https://eksctl.io/usage/creating-and-managing-clusters/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But here's mine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: eu-west-2 # Replace with your preferred AWS region
  version: "1.30" # Replace with your desired Kubernetes version

managedNodeGroups:
  - name: ng-t3a-medium
    instanceType: t3a.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 2
    volumeSize: 20 # Size in GiB
    privateNetworking: true # If you want nodes to use private subnets only
    ssh:
      enableSsm: true # Allows access via AWS Systems Manager Session Manager

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing complicated, just 2 instances and that's all. Remember to update the add-ons installed by &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP II
&lt;/h4&gt;

&lt;p&gt;Now that we have our cluster, remember to fetch the kubeconfig file on your workstation using the command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks update-kubeconfig --region &amp;lt;region-code&amp;gt; --name &amp;lt;cluster-name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now let's use the Helm chart for the kube-prometheus-stack; install it and wait for everything to start up&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install &amp;lt;Name&amp;gt; prometheus-community/kube-prometheus-stack -n &amp;lt;namespace&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  STEP III
&lt;/h4&gt;

&lt;p&gt;Now that we have installed Prometheus and Grafana, let's create an application that we're going to monitor.&lt;/p&gt;

&lt;p&gt;I'm using a very simple Node.js application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const client = require("prom-client");
const winston = require("winston");
const os = require("os");

const app = express();
const PORT = 3000;

// Initialize Prometheus metrics
const httpRequestDuration = new client.Histogram({
  name: "http_request_duration_seconds",
  help: "Duration of HTTP requests in seconds",
  labelNames: ["method", "route", "status_code"],
  buckets: [0.1, 0.5, 1, 2, 5], // Buckets for latency
});

const httpRequestCount = new client.Counter({
  name: "http_requests_total",
  help: "Total number of HTTP requests",
  labelNames: ["method", "route"],
});

const httpErrorCount = new client.Counter({
  name: "http_errors_total",
  help: "Total number of HTTP errors",
  labelNames: ["method", "status_code"],
});

const systemMetrics = new client.Gauge({
  name: "system_resource_usage",
  help: "System CPU and memory usage",
  labelNames: ["resource"],
});

// Logger setup
const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: "app.log" }),
  ],
});

// Middleware to track request duration and traffic
app.use((req, res, next) =&amp;gt; {
  const start = Date.now();
  httpRequestCount.inc({ method: req.method, route: req.path });
  res.on("finish", () =&amp;gt; {
    const duration = (Date.now() - start) / 1000; // Convert to seconds
    httpRequestDuration.observe(
      { method: req.method, route: req.path, status_code: res.statusCode },
      duration
    );

    if (res.statusCode &amp;gt;= 400) {
      httpErrorCount.inc({ method: req.method, status_code: res.statusCode });
    }
  });
  next();
});

// Simple endpoints
app.get("/", (req, res) =&amp;gt; {
  res.send("Welcome to the Golden Signals App!");
});

app.get("/hello", (req, res) =&amp;gt; {
  setTimeout(() =&amp;gt; {
    res.send("Hello, World!");
  }, Math.random() * 1000); // Random delay to simulate latency
});

app.get("/error", (req, res) =&amp;gt; {
  res.status(500).send("Simulated error!");
});

// Expose metrics at /metrics
app.get("/metrics", async (req, res) =&amp;gt; {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});

// Monitor system resources every 5 seconds
setInterval(() =&amp;gt; {
  const memoryUsage = process.memoryUsage().heapUsed / 1024 / 1024; // Convert to MB
  const cpuLoad = os.loadavg()[0]; // 1-minute average
  systemMetrics.set({ resource: "memory" }, memoryUsage);
  systemMetrics.set({ resource: "cpu" }, cpuLoad);

  logger.info({
    memoryUsage: `${memoryUsage.toFixed(2)} MB`,
    cpuLoad: cpuLoad.toFixed(2),
  });
}, 5000);

// Start server
app.listen(PORT, () =&amp;gt; {
  console.log(`Server running on http://localhost:${PORT}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dockerfile for the same&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Node.js image from the Docker Hub
FROM node:21-alpine

# Create and change to the app directory
WORKDIR /usr/src/app

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the local code to the container image
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Run the application
CMD ["node", "app.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now deployment, service and service monitor file for K8s&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: golden-signals-app
  labels:
    app: golden-signals-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: golden-signals-app
  template:
    metadata:
      labels:
        app: golden-signals-app
    spec:
      containers:
        - name: golden-signals-app
          image: ashutosh5786/golden-signal-app:v1
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
            requests:
              memory: "128Mi"
              cpu: "250m"
          env:
            - name: NODE_ENV
              value: "production"
---
apiVersion: v1
kind: Service
metadata:
  name: golden-signals-service
  labels:
    app: golden-signals-app
spec:
  ports:
    - name: metrics-port             # Port name, used in ServiceMonitor
      port: 3000                     # Exposed port
      targetPort: 3000               # Port on the container
  selector:
    app: golden-signals-app
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: golden-signals-app-monitor
  labels:
    release: kubeprometheus # the release name used when installing the Helm chart
spec:
  selector:
    matchLabels:
      app: golden-signals-app
  endpoints:
    - port: metrics-port # Use the port name from the Service
      path: /metrics
      interval: 15s
  namespaceSelector:
    matchNames:
      - default # the namespace in which the app is deployed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, I take it that you understand what a Deployment and a Service are, but you might have noticed the &lt;strong&gt;ServiceMonitor&lt;/strong&gt;: it's the way we configure Prometheus to monitor the pods we are creating.&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP IV
&lt;/h4&gt;

&lt;p&gt;After STEP III, we can head to the Prometheus UI to check whether the application is being monitored. Go to &lt;strong&gt;Status &amp;gt; Targets&lt;/strong&gt;.&lt;br&gt;
If you see the application, good; otherwise go back and recheck everything.&lt;/p&gt;

&lt;p&gt;It should look something like this: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oqmwancl1ffmumhs2ku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oqmwancl1ffmumhs2ku.png" alt="Prometheus Target Screen showing Golden Signal App" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After confirming that Prometheus is scraping the data, let's move to Grafana and create the dashboard, which is my favourite part.&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP V
&lt;/h4&gt;

&lt;p&gt;Let's start creating those dashboards.&lt;/p&gt;

&lt;p&gt;Side Panel &amp;gt; Dashboard &amp;gt; Add Visualization&lt;br&gt;
Select Prometheus as Data Source&lt;/p&gt;

&lt;p&gt;Once you've done this, select the &lt;strong&gt;Code&lt;/strong&gt; option instead of the Builder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ubzp2bei4cdgbvmjj4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ubzp2bei4cdgbvmjj4u.png" alt="Code Option" width="800" height="387"&gt;&lt;/a&gt;&lt;br&gt;
As you can observe in the above screen.&lt;/p&gt;

&lt;p&gt;Starting with Saturation: in my case I have used CPU as my constraint, so I'm using the query below to visualize it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rate(container_cpu_usage_seconds_total{container="golden-signals-app"}[1m])&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here are all the queries I used to build my dashboard for the Golden Signals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency (P95)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))&lt;/code&gt;&lt;br&gt;
For more granular visualization we could use different queries, but for the ease of this practical, I'm going with the easier option.&lt;/p&gt;
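&lt;p&gt;If &lt;code&gt;histogram_quantile&lt;/code&gt; feels like magic, the intuition is simple: find the bucket where the 95th-percentile rank lands, then interpolate linearly inside it. Here's a simplified Python sketch with made-up cumulative bucket counts; it ignores Prometheus details like &lt;code&gt;rate()&lt;/code&gt; and &lt;code&gt;+Inf&lt;/code&gt; bucket handling:&lt;/p&gt;

```python
# Buckets are (upper_bound, cumulative_count) pairs, like the le="..."
# series a Prometheus histogram exposes. The counts below are made up.
def histogram_quantile(q, buckets):
    total = buckets[-1][1]           # last bucket holds the total count
    rank = q * total                 # target observation rank
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation inside the matching bucket.
            frac = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count

buckets = [(0.1, 50), (0.5, 80), (1.0, 95), (2.0, 99), (5.0, 100)]
p95 = histogram_quantile(0.95, buckets)
print(p95)  # the p95 estimate falls in the (0.5, 1.0] bucket
```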

&lt;p&gt;&lt;strong&gt;Error Rate&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sum(rate(http_errors_total[1m])) by (status_code)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traffic (RPS)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sum(rate(http_requests_total[1m]))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After Adding all those queries it should look like this&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6u3fajj17qmyszak72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f6u3fajj17qmyszak72.png" alt="Dashboard on Grafana" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are all ready to monitor those &lt;strong&gt;Golden Signals&lt;/strong&gt; to make sure our applications are always healthy and 100% running.&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;Monitoring is essential to keeping your applications healthy and reliable. By tracking Golden Signals with Prometheus and Grafana, you can catch issues early and ensure smooth operations. This guide gives you a starting point to build a monitoring setup that grows with your application’s needs.&lt;/p&gt;

&lt;p&gt;Thank you for reading. I'd love to know what you have set up for monitoring in your system; if I missed anything, please mention it in the comments.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>google</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building Secure CI/CD with Terraform on AWS</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Fri, 22 Nov 2024 13:46:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-secure-cicd-with-terraform-on-aws-43lf</link>
      <guid>https://dev.to/aws-builders/building-secure-cicd-with-terraform-on-aws-43lf</guid>
      <description>&lt;p&gt;Recently, I was interviewed for a DevOps role, and I was asked, "Do you use Terraform in CI/CD?" I said yes. How do you use Terraform in CI/CD? Put the AWS credentials in the GitHub secret and use it with the aws-cli tool to provision the infrastructure. Sadly, I didn't get the job, but I was compelled to explore a new way, so here I am sharing what I found.&lt;/p&gt;

&lt;p&gt;Prerequisite: AWS &amp;amp; GitHub Account, Terraform&lt;/p&gt;

&lt;p&gt;Let me tell you more about the question. The interviewer wanted to know how I handle permissions, as this is one of the most important things in DevOps. The answer I gave was alright, but it turns out we can do the same task more securely by letting GitHub Actions (or any other CI/CD tool) assume a role from AWS directly, without storing the AWS secret or access key in the vault of our CI/CD tool. Instead, we can provision dynamic credentials by using &lt;strong&gt;AWS STS AssumeRole &amp;amp; OIDC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP I&lt;/strong&gt;&lt;br&gt;
To tackle this challenge securely and efficiently, the first step is to establish trust between AWS and GitHub Actions by setting up an OIDC provider. Let’s dive into how to do that. If you want to read more about OIDC click &lt;a href="https://www.microsoft.com/en-us/security/business/security-101/what-is-openid-connect-oidc" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F153ev6wx52g6fla20ngt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F153ev6wx52g6fla20ngt.png" alt="OIDC" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;code&gt;Add Provider&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Enter these details &lt;/p&gt;

&lt;p&gt;Provider URL: &lt;code&gt;https://token.actions.githubusercontent.com&lt;/code&gt;&lt;br&gt;
Audience: &lt;code&gt;sts.amazonaws.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that we have established the OIDC trust between AWS and GitHub, the next step is to create an IAM role with a custom trust policy. This role will allow GitHub Actions to assume the required permissions dynamically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP II&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's Create the IAM role using a custom trust policy&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajuhohtvr9qqck2zfkhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajuhohtvr9qqck2zfkhm.png" alt="IAM Role" width="800" height="334"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::&amp;lt;AWS_ACCOUNT_ID&amp;gt;:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:your-username/your-repo:ref:refs/heads/branch-name"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: Kindly Replace these placeholders&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;your-username&amp;gt; with your GitHub username.
&amp;lt;your-repo&amp;gt; with your repository's name.
&amp;lt;branch-name&amp;gt; with the branch name (e.g., main or master)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
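&lt;p&gt;As a side note, the substitution can be sketched in Python; the account ID, username, repo, and branch below are hypothetical placeholders, not real values:&lt;/p&gt;

```python
import json

# Hypothetical values -- swap in your own account and repo details.
account_id = "123456789012"
username = "your-username"
repo = "your-repo"
branch = "main"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                # OIDC provider created in STEP I
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    # Scopes the role to one repo and one branch
                    "token.actions.githubusercontent.com:sub": f"repo:{username}/{repo}:ref:refs/heads/{branch}",
                }
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```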



&lt;p&gt;After replacing those placeholders, attach a permissions policy for the services this role will let the GitHub Action provision. For simplicity I'm attaching the PowerUserAccess managed policy, but scope the permissions down to what your workflow actually needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonbdeaoehr4ok851op2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonbdeaoehr4ok851op2c.png" alt="IAM Policy" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP III&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the IAM role in place, it’s time to configure our GitHub Actions workflow. Here, we’ll set up a YAML file to interact with AWS services using the dynamic credentials generated by the assumed role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy to AWS

on:
  push:
    branches:
      - master

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout code
      - name: Checkout repository
        uses: actions/checkout@v3

      # Step 2: Configure AWS credentials using OIDC
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 3: Example AWS command (S3 upload in this case)
      - name: Upload files to S3
        run: |
          aws s3 cp ./12.png s3://${{ secrets.S3_BUCKET_NAME }}/

      # Step 4: (Optional) Additional AWS CLI commands or deployment steps
      - name: Example additional AWS command
        run: |
          aws ec2 describe-instances

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
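&lt;p&gt;To see why the trust policy's conditions matter here, below is a rough sketch (claim values hypothetical) of the StringEquals comparison AWS effectively performs against the OIDC token's aud and sub claims before issuing credentials:&lt;/p&gt;

```python
# Hypothetical decoded OIDC token claims from a GitHub Actions run.
token_claims = {
    "aud": "sts.amazonaws.com",
    "sub": "repo:your-username/your-repo:ref:refs/heads/master",
}

# The StringEquals conditions from the role's trust policy.
conditions = {
    "aud": "sts.amazonaws.com",
    "sub": "repo:your-username/your-repo:ref:refs/heads/master",
}

def may_assume(claims: dict, conds: dict) -> bool:
    """StringEquals semantics: every condition key must match the claim exactly."""
    return all(claims.get(key) == value for key, value in conds.items())

print(may_assume(token_claims, conditions))  # True for this repo and branch
```

A push from any other repository or branch produces a different sub claim, so the exact-match check fails and STS refuses the role.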



&lt;p&gt;There are a couple of things you need to understand.&lt;/p&gt;

&lt;p&gt;The permissions block contains 2 fields, &lt;strong&gt;id-token and contents&lt;/strong&gt;. Setting &lt;code&gt;id-token: write&lt;/code&gt; lets the workflow request an OIDC token, and &lt;code&gt;contents: read&lt;/code&gt; lets it read the repo files. The rest is straightforward.&lt;/p&gt;

&lt;p&gt;After the setup is complete, here are my workflow screenshots.&lt;/p&gt;

&lt;p&gt;The bucket containing the uploaded file from the repo&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7waq57bvsq3rokriyh4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7waq57bvsq3rokriyh4s.png" alt="S3 Bucket" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub Action Logs&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt78a6wbxioztqo0cbvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbt78a6wbxioztqo0cbvb.png" alt="GitHub Action ss" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, your GitHub Actions pipeline is securely configured to interact with AWS services. Next, we’ll integrate Terraform into the pipeline to demonstrate how to provision infrastructure dynamically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP IV&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now the last leg: running the CI/CD with Terraform. For that we need to write the Terraform scripts and modify the Action's YAML file.&lt;/p&gt;

&lt;p&gt;workflow.yml file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform EC2 Provisioning

on:
  push:
    branches:
      - master

permissions:
  id-token: write
  contents: read

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout repository
        uses: actions/checkout@v3

      # Step 2: Configure AWS credentials using OIDC
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 3: Set up Terraform CLI
      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.0

      # Step 4: Initialize Terraform
      - name: Terraform Init
        run: terraform init

      # Step 5: Plan Terraform changes
      - name: Terraform Plan
        run: terraform plan -out=tfplan

      # Step 6: Apply Terraform changes
      - name: Terraform Apply
        run: terraform apply -auto-approve tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now some Terraform scripts&lt;/p&gt;

&lt;p&gt;main.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = var.region
}

resource "aws_instance" "example" {
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = {
    Name = "ExampleInstance"
  }
}

output "instance_id" {
  value = aws_instance.example.id
}

output "public_ip" {
  value = aws_instance.example.public_ip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform.tfvars&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;region        = "eu-west-2"
ami_id        = "ami-0b2ed2e3df8cf9080" # Replace with your preferred AMI ID
instance_type = "t2.micro"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "region" {
  description = "AWS region"
  type        = string
  default     = "eu-west-2"
}

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
}

variable "instance_type" {
  description = "Instance type for the EC2 instance"
  type        = string
  default     = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With these Terraform scripts in place, the pipeline can now provision an EC2 instance as part of the CI/CD process. Let’s run the workflow and see the results in action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxeuudqj4k6u3a6mk9mzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxeuudqj4k6u3a6mk9mzt.png" alt="Ec2 instance" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Checking the GitHub logs&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsaipfqqrype2ga9izw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsaipfqqrype2ga9izw3.png" alt="GitHub Log for action running terraform secripts" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And my friend, now you know how to configure the pipeline without storing static credentials in GitHub Secrets. &lt;/p&gt;

&lt;p&gt;Now that everything is set up, we can observe the results of our secure and dynamic CI/CD pipeline in GitHub Actions logs and AWS resources. This demonstrates the power of OIDC integration in real-world DevOps workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By integrating OIDC (OpenID Connect) with AWS and GitHub Actions, we enhance security and simplify the CI/CD process. Instead of relying on static credentials stored in GitHub secrets, we use dynamically generated, short-lived credentials through AWS STS. &lt;/p&gt;

&lt;p&gt;This approach offers several key benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Improved Security: No sensitive credentials are stored in CI/CD tools, reducing the risk of accidental exposure or misuse.&lt;/li&gt;
&lt;li&gt;Granular Access Control: Using IAM trust policies, access can be tightly scoped to specific repositories, branches, and workflows.&lt;/li&gt;
&lt;li&gt;Automation-Friendly: Dynamic credentials streamline workflows, making them more robust and easier to maintain.&lt;/li&gt;
&lt;li&gt;Reduced Attack Surface: Temporary credentials expire quickly, minimizing the impact of potential leaks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Thank you for reading. If you face any issues, feel free to ask in the comments.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>AWS Lambda Layer</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Sun, 16 Jun 2024 14:18:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-lambda-layer-5g00</link>
      <guid>https://dev.to/aws-builders/aws-lambda-layer-5g00</guid>
<description>&lt;p&gt;This is something I was stuck on while deploying a Lambda function. Even though the code was only a few hundred lines, I had to deploy it as a zip file that unnecessarily included all the Node dependencies. It was quite annoying, and I have a feeling you felt the same way, which is why you landed here.&lt;/p&gt;

&lt;p&gt;Let's get started...&lt;/p&gt;

&lt;p&gt;This is quite simple; layers were made for exactly the problem I described above, i.e. take the dependencies out of the deployment package and focus on the code.&lt;/p&gt;

&lt;p&gt;Official definition:&lt;br&gt;
&lt;code&gt;A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files.&lt;/code&gt;&lt;br&gt;
If you are interested in reading more about it here's the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can start using it now that we know what the lambda layer is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP I&lt;/strong&gt;&lt;br&gt;
Search for Lambda in the service list and go to Layers in the left sidebar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02n8fm79pykgiptuf1qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02n8fm79pykgiptuf1qm.png" alt="Lambda Layer page on AWS" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP II&lt;/strong&gt;&lt;br&gt;
Now we're going to create the package that we'll upload as a layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTICE&lt;/strong&gt;: The folder path matters; if you get it wrong, the layer won't work. Different runtimes expect different paths. I'm using Node.&lt;/p&gt;

&lt;p&gt;My folder structure is like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/layers
|------- /nodejs
            |----- /node_modules
            |----- package.json
            |----- package-lock.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
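&lt;p&gt;If you prefer building the archive in code, here is a small sketch using Python's zipfile module; the folder names match the structure above but are otherwise illustrative:&lt;/p&gt;

```python
import zipfile
from pathlib import Path

def build_layer_zip(layer_dir: str, out_zip: str) -> None:
    """Zip `layer_dir` so archive entries are rooted at nodejs/... as Lambda expects."""
    root = Path(layer_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in root.rglob("*"):
            # Arcname is relative to layer_dir, so entries start with
            # "nodejs/" rather than the local folder name.
            zf.write(path, path.relative_to(root))

# build_layer_zip("layers", "layers.zip")
```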



&lt;p&gt;If you want to check out other runtimes' paths, see the table given &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/packaging-layers.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Make sure any native modules you add are built for Linux, as Lambda functions run on Amazon Linux.&lt;/p&gt;

&lt;p&gt;Now zip the folder as &lt;strong&gt;layers.zip&lt;/strong&gt; (or any name you like).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STEP III&lt;/strong&gt;&lt;br&gt;
Head towards the lambda page. Now we will upload the zip to lambda layers by creating a new layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk959gni0pu7lyhelt4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk959gni0pu7lyhelt4x.png" alt="Lambda layer creation page" width="800" height="751"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the layer anything you want.&lt;br&gt;
Don't forget to add the other details; the architecture and runtime are the two most important.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcpt0k2n8k0b0rky2dm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcpt0k2n8k0b0rky2dm9.png" alt="lambda layer detailed form" width="800" height="803"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you're done, click Create, and that's it.&lt;/p&gt;

&lt;p&gt;Now you can attach this layer to any of your Lambda functions and import the packages as you normally would; no extra integration work is needed. Enjoy the freedom of writing code in the browser and making changes on the go without worrying about packaging or zipping.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you
&lt;/h2&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>lambda</category>
      <category>cicd</category>
    </item>
    <item>
      <title>GitHub Action with EC2 and SSH</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Sat, 03 Feb 2024 19:11:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/github-action-with-ec2-and-ssh-4aja</link>
      <guid>https://dev.to/aws-builders/github-action-with-ec2-and-ssh-4aja</guid>
<description>&lt;p&gt;Sometimes we are stuck with a use case that isn't very efficient&lt;br&gt;
but is essential for the pipeline or CI/CD. There are many reasons this can arise: cost, time &amp;amp; effort, or simply that it's not production, so we can do whatever we want!&lt;/p&gt;

&lt;h3&gt;
  
  
  Here's How
&lt;/h3&gt;

&lt;p&gt;You can connect to any virtual machine, whether it's deployed on AWS, Azure, or GCP; they all have something in common:&lt;br&gt;
&lt;strong&gt;SSH&lt;/strong&gt;. Whether it uses a password or a .pem file, this method will work just fine.&lt;br&gt;
I know this is not what we as DevOps folks prefer, but you have to make something out of the situation you're stuck in, right?!&lt;/p&gt;

&lt;p&gt;For this, I'm going to use GitHub Actions, the SSH Deploy action, and AWS (I mean, why not).&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP I
&lt;/h4&gt;

&lt;p&gt;Set up your authentication; in my case, an ACCESS KEY &amp;amp; SECRET KEY.&lt;br&gt;
Add them to GitHub Secrets. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9vy90t96boql4l653hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9vy90t96boql4l653hb.png" alt="GitHub Secret"&gt;&lt;/a&gt;&lt;br&gt;
PASSWORD is the .pem key for the instance we're going to deploy the application to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTICE&lt;/strong&gt;: After this, set the HOST, USERNAME, and PORT (if you have changed your default SSH port) in the Variables section.&lt;/p&gt;

&lt;p&gt;Here's the GitHub Action &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Deployment to server
on:
    workflow_run:
        workflows: ["Docker Image CI"]
        types: 
            - completed
jobs:

  Deploy:
    name: Deploy
    runs-on: ubuntu-latest
    if: github.event.workflow_run.conclusion == 'success'

    steps:
        - uses: easingthemes/ssh-deploy@v5.0.0
          name: Deploy over SSH
          with:
            SSH_PRIVATE_KEY: ${{ secrets.PASSWORD }}
            REMOTE_HOST: ${{ vars.HOST }}
            REMOTE_USER: ${{ vars.USERNAME }}
            SCRIPT_AFTER: |
                ./script
                # PUT THE COMMANDS HERE



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And voilà! There you go. I like this GitHub Action for its simplicity and ease of use with private keys.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgvf0hchu852ydp12yl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgvf0hchu852ydp12yl5.png" alt="Deployment Done"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;br&gt;
Feel free to reach out for any suggestions.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cicd</category>
      <category>github</category>
      <category>ssh</category>
    </item>
    <item>
      <title>AWS ALB with NGINX INGRESS CONTROLLER</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Mon, 01 May 2023 08:17:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-alb-with-nginx-ingress-controller-1ofd</link>
      <guid>https://dev.to/aws-builders/aws-alb-with-nginx-ingress-controller-1ofd</guid>
<description>&lt;p&gt;If you are wondering how to integrate the AWS Application Load Balancer with NGINX/Istio to get traffic into the cluster, this is it.&lt;/p&gt;

&lt;p&gt;You might ask why this article exists, so let me show you what I'm aiming for when I say AWS ALB/NLB plus an ingress controller.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p92byyrx5w68119ula3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7p92byyrx5w68119ula3.png" alt="Diagram showing 1 LB and and EKS Cluster"&gt;&lt;/a&gt;&lt;br&gt;
I know a bunch of components are missing, but you get the idea: only one ALB/NLB, unless you have some other use case that needs multiple LBs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LB: LoadBalancer; A: Application; N: Network&lt;/strong&gt;&lt;br&gt;
just to avoid any confusion.&lt;/p&gt;

&lt;p&gt;Before moving forward, let's discuss why we use this setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before
&lt;/h2&gt;

&lt;p&gt;If you know Kubernetes (K8s), you know it natively supports three types of networking, which we call &lt;strong&gt;services&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ClusterIP&lt;br&gt;
NodePort&lt;br&gt;
LoadBalancer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can read more about them in the official K8s docs.&lt;br&gt;
We're only interested in the last one, LoadBalancer. When you use a managed cloud service, in our case &lt;strong&gt;EKS&lt;/strong&gt;, this service type directly provisions a load balancer to bring traffic to our service.&lt;br&gt;
The problem is cost: we run many services, and provisioning one LB per service means paying for a lot of load balancers.&lt;/p&gt;

&lt;p&gt;So you see the problem: we can't create a new LB every time we need to let traffic in. The folks behind K8s came up with &lt;strong&gt;INGRESS&lt;/strong&gt;; the whole idea was to save cost and reduce management overhead. At least that's what I think 😁, because it sure does for me!&lt;/p&gt;

&lt;h2&gt;
  
  
  AFTER
&lt;/h2&gt;

&lt;p&gt;Moving on, we can create the setup shown at the start: one LB is sufficient for the whole K8s cluster, and we don't need a separate LB for each service.&lt;/p&gt;

&lt;h2&gt;
  
  
  PROCESS
&lt;/h2&gt;

&lt;p&gt;There are four stages, from creating the cluster to deploying the application and opening it in the browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  STAGE I
&lt;/h3&gt;

&lt;p&gt;Spin up the EKS cluster. You can do it from the AWS console or the CLI. I love the CLI, and the best tool for this is &lt;strong&gt;&lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
To set up this tool you need the AWS CLI configured, and you must have the required permissions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eksctl create cluster -f cluster.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: DevOps-Prod
  region: ap-south-1
  version: "1.26"
managedNodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
    volumeSize: 80
    labels: { role: workers }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Give it 15 minutes or so; once creation completes, the cluster will be up.&lt;/p&gt;

&lt;h3&gt;
  
  
  STAGE II
&lt;/h3&gt;

&lt;p&gt;Now we'll install the AWS Load Balancer Controller.&lt;br&gt;
I'm not going into detail because the AWS docs are great; just follow them. Here's the &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
&lt;strong&gt;TIP: I prefer eksctl.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  STAGE III
&lt;/h3&gt;

&lt;p&gt;Make sure the controller pods are running:&lt;br&gt;
&lt;code&gt;kubectl get po -n kube-system&lt;/code&gt;&lt;br&gt;
(assuming you installed it into the kube-system namespace).&lt;/p&gt;

&lt;p&gt;Now we need to install the NGINX Ingress Controller. Again, I won't go deep; here's the &lt;strong&gt;&lt;a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We need to make a few changes here to achieve our noble goal of having one LB.&lt;br&gt;
&lt;strong&gt;TIP: Install it with Helm.&lt;/strong&gt;&lt;br&gt;
Make sure you change the chart's service type from &lt;strong&gt;LoadBalancer to NodePort&lt;/strong&gt;.&lt;br&gt;
You'll find it in values.yaml, or if you're using a manifest file, look for the Service block. Just make sure you do this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTICE: It's important that you do this otherwise it won't work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After deploying it successfully, check which NodePort was allocated (it's typically in the 30000+ range) and write it down somewhere.&lt;br&gt;
Also check the &lt;code&gt;readiness-probe&lt;/code&gt; of the NGINX controller deployment: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

readinessProbe:
  failureThreshold: 30
  httpGet:
    path: /healthz


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It will look something like this; check it for yourself. Now, on to the last phase.&lt;/p&gt;
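&lt;p&gt;As an aside, the allocated NodePort can also be pulled out of kubectl's JSON output instead of eyeballing it; a minimal sketch, with the service JSON abbreviated and hypothetical:&lt;/p&gt;

```python
import json

# Abbreviated, hypothetical output of:
#   kubectl get svc ingress-nginx-controller -n ingress-nginx -o json
svc_json = """
{
  "spec": {
    "type": "NodePort",
    "ports": [
      {"name": "http", "port": 80, "nodePort": 30080},
      {"name": "https", "port": 443, "nodePort": 30443}
    ]
  }
}
"""

svc = json.loads(svc_json)
# The chart must have been switched from LoadBalancer to NodePort.
assert svc["spec"]["type"] == "NodePort"
# Grab the node port backing the HTTP (port 80) listener.
http_node_port = next(p["nodePort"] for p in svc["spec"]["ports"] if p["port"] == 80)
print(http_node_port)
```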

&lt;h3&gt;
  
  
  STAGE IV
&lt;/h3&gt;

&lt;p&gt;Once everything is deployed and you have the NodePort and the health-check path, you're going to apply the file below.&lt;/p&gt;

&lt;p&gt;You also need a certificate imported into ACM&lt;br&gt;
(AWS Certificate Manager), which gives you the certificate ARN that you need to update in the file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress-connect-nginx
  namespace: kube-system
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: 30000 # Make sure you add the NodePort over here
    alb.ingress.kubernetes.io/healthcheck-path: /healthz # Put the path of readiness probe over here
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-2:122221113322:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # make sure you update your certificate arn over here
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
    # redirect all HTTP to HTTPS
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - http:
      paths:
        - path: /*
          pathType: ImplementationSpecific
          backend:
            service:
              name: ssl-redirect
              port:
                name: use-annotation
        - path: /*
          pathType: ImplementationSpecific
          backend:
            service:
              name: ingress-nginx-controller # Make sure you name the service correctly by checking the name of it nginx ingress controller service nothing else
              port:
                number: 80



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
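&lt;p&gt;Before applying, it can help to sanity-check that the annotations line up with the values gathered in the earlier stages; a small, hypothetical helper:&lt;/p&gt;

```python
def check_ingress_annotations(annotations: dict, node_port: int, probe_path: str) -> list:
    """Return a list of mismatches between the ALB annotations and the
    NodePort / readiness-probe path collected in the earlier stages."""
    problems = []
    if str(annotations.get("alb.ingress.kubernetes.io/healthcheck-port")) != str(node_port):
        problems.append("healthcheck-port does not match the controller's NodePort")
    if annotations.get("alb.ingress.kubernetes.io/healthcheck-path") != probe_path:
        problems.append("healthcheck-path does not match the readiness probe path")
    return problems

# Hypothetical values copied from the manifest above.
annotations = {
    "alb.ingress.kubernetes.io/healthcheck-port": "30000",
    "alb.ingress.kubernetes.io/healthcheck-path": "/healthz",
}
print(check_ingress_annotations(annotations, 30000, "/healthz"))  # [] means consistent
```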

&lt;p&gt;If you open the above file in VS Code or any text editor, you'll find the comments I wrote for you marking where to supply the NodePort, the health-check path, the name of the ingress controller's service, and the certificate ARN. After making all the changes, apply it with the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f ingress.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After running this command you'll see an ALB being provisioned and a target group created to check the instances' health. Once the checks pass and the ALB is fully provisioned, you can hit the ALB's URL and see a &lt;strong&gt;404 NOT FOUND from nginx&lt;/strong&gt;,&lt;br&gt;
which confirms the response is coming from NGINX.&lt;/p&gt;

&lt;h3&gt;
  
  
  ENDING
&lt;/h3&gt;

&lt;p&gt;Now you can deploy your application, expose its service with an Ingress, and hit the URL mentioned in the Ingress file. Before that, remember to update your DNS records too, whether you're using Route 53 or another domain registrar, and that's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUSION
&lt;/h2&gt;

&lt;p&gt;I hope this works for you; if not, please reach out to me, I'm happy to help. Make sure you update the values I commented on, and it will work for you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS XRAY with FASTAPI</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Wed, 12 Apr 2023 05:09:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-xray-with-fastapi-390p</link>
      <guid>https://dev.to/aws-builders/aws-xray-with-fastapi-390p</guid>
<description>&lt;p&gt;Hi guys, I think you stumbled upon this article after a very long search for&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;How to integrate AWS Xray with application written with FASTAPI python?&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;So you are not alone. I searched for a very long time, and this is what I found that works for me. There are some parts I don't fully understand; they're just there 😅.&lt;/p&gt;

&lt;p&gt;First, let's take a quick look at what AWS X-Ray is. AWS X-Ray is a distributed tracing system that allows you to visualize the entire lifecycle of a request across all of your microservices. With X-Ray, you can quickly identify performance bottlenecks, troubleshoot errors, and understand the dependencies between your services.&lt;br&gt;
If you want to read more about AWS X-Ray, here's the &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Let's Start.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First, you need to know where you are going to deploy the application: EC2, EBS, or EKS. I deployed mine on EKS, so I'll use that here, but all of them will work with a little bit of change.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Installation of ADOT Collector&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For those wondering what this is: the AWS Distro for OpenTelemetry. And for those thinking they can get away without installing it, you're wrong, as I was 🥲. I thought so too, but you can't. Here's the &lt;a href="https://aws-otel.github.io/docs/getting-started/collector" rel="noopener noreferrer"&gt;link you can follow to install&lt;/a&gt; it, and here for &lt;a href="https://aws-otel.github.io/docs/getting-started/adot-eks-add-on/installation" rel="noopener noreferrer"&gt;EKS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After you have installed the collector in your preferred way (I prefer a DaemonSet), remember its service name; we have to reference it in the application.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Integration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are some python packages that you need to install&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;opentelemetry-distro==0.36b0
opentelemetry-exporter-otlp==1.15.0
opentelemetry-exporter-otlp-proto-grpc==1.15.0
opentelemetry-exporter-otlp-proto-http==1.15.0
opentelemetry-instrumentation==0.36b0
opentelemetry-instrumentation-asgi==0.36b0
opentelemetry-instrumentation-aws-lambda==0.36b0
opentelemetry-instrumentation-boto3sqs==0.36b0
opentelemetry-instrumentation-botocore==0.36b0
opentelemetry-instrumentation-dbapi==0.36b0
opentelemetry-instrumentation-fastapi==0.36b0
opentelemetry-instrumentation-grpc==0.36b0
opentelemetry-instrumentation-logging==0.36b0
opentelemetry-instrumentation-pymongo==0.36b0
opentelemetry-instrumentation-requests==0.36b0
opentelemetry-instrumentation-sqlalchemy==0.36b0
opentelemetry-instrumentation-sqlite3==0.36b0
opentelemetry-instrumentation-starlette==0.36b0
opentelemetry-instrumentation-urllib==0.36b0
opentelemetry-instrumentation-urllib3==0.36b0
opentelemetry-instrumentation-wsgi==0.36b0
opentelemetry-propagator-aws-xray==1.0.1
opentelemetry-proto==1.15.0
opentelemetry-sdk==1.15.0
opentelemetry-sdk-extension-aws==2.0.1
opentelemetry-semantic-conventions==0.36b0
opentelemetry-util-http==0.36b0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I know, it's a never-ending list, trust me. But without these my application wasn't even starting, so I can't explain every one of them; it works for me, and I hope it works for you too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Code&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# OpenTelemetry Configuration
# Basic packages for your application
# Add imports for OTel components into the application
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor


# Import the AWS X-Ray for OTel Python IDs Generator into the application.
from opentelemetry.sdk.extension.aws.trace import AwsXRayIdGenerator

# Sends generated traces in the OTLP format to an ADOT Collector running on port 4317
otlp_exporter = OTLPSpanExporter(endpoint="http://xray-collector-collector.amazon-cloudwatch:4317")
# Processes traces in batches as opposed to immediately one after the other
span_processor = BatchSpanProcessor(otlp_exporter)
# Configures the Global Tracer Provider
trace.set_tracer_provider(TracerProvider(active_span_processor=span_processor, id_generator=AwsXRayIdGenerator()))



# Using the AWS resource Detectors
import opentelemetry.trace as trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.extension.aws.resource.ec2 import (
    AwsEc2ResourceDetector,
)
from opentelemetry.sdk.resources import get_aggregated_resources

trace.set_tracer_provider(
    TracerProvider(
        resource=get_aggregated_resources(
            [
                AwsEc2ResourceDetector(),
            ]
        ),
    )
)

app = FastAPI()

@app.get("/test")
def test():
    return {"message": "Hello This is testing"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might be wondering why I pasted the whole code: when I was searching, I couldn't make sense of the snippets I found, so I thought I'd share all of it.&lt;/p&gt;

&lt;p&gt;Make sure you set your own endpoint URL in the &lt;strong&gt;otlp_exporter&lt;/strong&gt;,&lt;br&gt;
otherwise it won't send any traces.&lt;/p&gt;
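&lt;p&gt;If you'd rather not hard-code the collector address, the OTLP exporter also honours the standard &lt;code&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/code&gt; environment variable. A minimal sketch of resolving it with a fallback (the helper name and default value are my own, not part of the SDK):&lt;/p&gt;

```python
import os

# Resolve the collector endpoint from the standard OTel environment
# variable, falling back to a hard-coded default (example value).
def collector_endpoint(default="http://xray-collector-collector.amazon-cloudwatch:4317"):
    return os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", default)

endpoint = collector_endpoint()
```

&lt;p&gt;You can then construct the exporter with &lt;code&gt;OTLPSpanExporter(endpoint=collector_endpoint())&lt;/code&gt; and override the address per environment without touching the code.&lt;/p&gt;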

&lt;h3&gt;
  
  
  &lt;strong&gt;Dockerfile&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pull base image
FROM python:3.8

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/code
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"


# Added OpenTelemetry Collector attributes

ENV OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST=".*"
ENV OTEL_RESOURCE_ATTRIBUTES='service.name=aws-xray-test'


# Set work directory
WORKDIR /code

# Install dependencies
COPY requirements.txt /code/

RUN pip install -r requirements.txt  --no-cache-dir

# Copy project
COPY . /code/

ENTRYPOINT ["opentelemetry-instrument","/opt/venv/bin/python", "-m", "uvicorn", "main:app"]
CMD ["--reload", "--host", "0.0.0.0", "--port", "8000"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE: note the ENTRYPOINT and the ENV lines above, and make sure you set them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And, my wonderful reader, if you made it this far, I hope your integration is successful.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In conclusion, integrating AWS X-Ray with your FastAPI application is a simple and effective way to gain visibility into the performance and behavior of your application. With X-Ray, you can quickly identify performance bottlenecks, troubleshoot errors, and understand the dependencies between your services. So why not give it a try today?&lt;/p&gt;

&lt;p&gt;Plus, if you face any issue with the above steps, please reach out to me;&lt;br&gt;
here's my &lt;a href="https://www.linkedin.com/in/ashutoshsingh5786/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>ui</category>
      <category>tooling</category>
      <category>ai</category>
    </item>
    <item>
      <title>FREE VPN with AWS</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Tue, 27 Sep 2022 07:00:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/free-vpn-with-aws-223a</link>
      <guid>https://dev.to/aws-builders/free-vpn-with-aws-223a</guid>
      <description>&lt;h1&gt;
  
  
  So,
&lt;/h1&gt;

&lt;p&gt;You want a free VPN to secure your connection; you don't want a third party sneaking up behind you to see or steal your data. You are not alone, we all want that. The easy way is to buy a premium VPN, and there are tons out there, but they are paid, so let's have our &lt;strong&gt;own VPN&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Without further delay, let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  PREREQUISITES
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS ACCOUNT&lt;br&gt;
LITTLE BIT OF LINUX&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We are going to use the service called &lt;strong&gt;LightSail&lt;/strong&gt; here. You ask why? Because we can run the server for &lt;strong&gt;free&lt;/strong&gt; (3 months), even if you are well past your AWS free tier.&lt;/p&gt;

&lt;p&gt;Let's go to the LightSail console:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgzjfpin9567zexbg3km.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgzjfpin9567zexbg3km.png" alt="First thing you see why you go to LightSail"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create the Instance&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcmha4fal3puravlr1og.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcmha4fal3puravlr1og.png" alt="Instance Create"&gt;&lt;/a&gt;&lt;br&gt;
We can use any OS (Amazon Linux 2, Ubuntu, etc.); we just need to know the package names for it. Here I'm going with Ubuntu 20.04 LTS.&lt;/p&gt;

&lt;p&gt;Choose the plan according to your needs and select it.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finm3wg1zxadkbfq62z0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finm3wg1zxadkbfq62z0z.png" alt="Plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While waiting for the instance to provision, let's go to the &lt;strong&gt;Network&lt;/strong&gt; tab and create a &lt;strong&gt;static IP&lt;/strong&gt; for our VPN.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ngataj4hwm6g7u83p5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ngataj4hwm6g7u83p5u.png" alt="static-ip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Name the IP anything you like and choose the instance to which you want to attach the static IP.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee2bge09nvig0k7dy628.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee2bge09nvig0k7dy628.png" alt="static-ip"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's connect to the instance with SSH. LightSail gives us both web-based and terminal-based SSH; for the terminal we need the &lt;strong&gt;key&lt;/strong&gt;, so download the default key from here&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjlrd8x5xdas9nxthhmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjlrd8x5xdas9nxthhmd.png" alt="ssh key"&gt;&lt;/a&gt;&lt;br&gt;
and let's start executing some Linux commands.&lt;/p&gt;

&lt;h2&gt;
  
  
  PART II
&lt;/h2&gt;

&lt;p&gt;Start the SSH connection &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

ssh ubuntu@&amp;lt;IP&amp;gt; -i &amp;lt;path-to-key&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the Wireguard&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo -i
apt update 
apt install wireguard -y


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyueab8z9q1txdt4ywh1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyueab8z9q1txdt4ywh1n.png" alt="Installation of Wireguard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing it, we need to enable IP forwarding so that clients connected to the instance can still use the internet freely.&lt;br&gt;
Run the following command to do so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

vim /etc/sysctl.d/10-wireguard.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and add the following line &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

net.ipv4.ip_forward=1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After adding the line, execute the following command to apply it immediately (the file itself keeps the setting across reboots):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sysctl -p /etc/sysctl.d/10-wireguard.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxztv6p23kkwusj4sygd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxztv6p23kkwusj4sygd.png" alt="port forwarding enable"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After enabling IP forwarding, let's move to the WireGuard directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd /etc/wireguard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  NOTICE: We are generating keys for the server here. Make sure you never share the private key.
&lt;/h3&gt;

&lt;p&gt;Execute the following commands&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

wg genkey | tee server.key | wg pubkey &amp;gt; server.pub


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfdke37uwbj9xf79gxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnfdke37uwbj9xf79gxa.png" alt="Keys are generated here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;wg &amp;amp; wg-quick are command-line tools for interacting with WireGuard.&lt;br&gt;
We will be using these files in our next step.&lt;/p&gt;
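&lt;p&gt;As a side note, WireGuard keys are just base64-encoded 32-byte Curve25519 values, so you can sanity-check that a key wasn't truncated while copy-pasting. A small sketch (the helper name is mine, not part of WireGuard, and the dummy key below is obviously not a real one):&lt;/p&gt;

```python
import base64

# A valid WireGuard key decodes from base64 to exactly 32 bytes.
def looks_like_wg_key(b64key: str) -> bool:
    try:
        return len(base64.b64decode(b64key, validate=True)) == 32
    except Exception:
        return False

# A well-formed dummy key: 32 zero bytes, base64-encoded.
dummy = base64.b64encode(bytes(32)).decode()
```

&lt;p&gt;Anything shorter (or with stray characters from a bad paste) will fail the check, which saves you a confusing handshake failure later.&lt;/p&gt;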

&lt;h2&gt;
  
  
  Now Let's create the configuration file,
&lt;/h2&gt;

&lt;p&gt;for our VPN here whatever you want to name the configuration file you can name it and it will create a interface with the same name&lt;br&gt;
but it must contain the &lt;strong&gt;.conf&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

vim vpn.conf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add these lines into it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

[Interface]
Address = 10.1.1.1/24
ListenPort = 51820
PrivateKey = &amp;lt;server.key&amp;gt;
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure you paste in the contents of server.key, the file we created earlier.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz96fsjanqfi4aprk68ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz96fsjanqfi4aprk68ac.png" alt="Added the Configuration to vpn.conf"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Enable the VPN
&lt;/h2&gt;

&lt;p&gt;Run the following command to start it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

systemctl enable --now wg-quick@vpn


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Verify it's running successfully &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

systemctl status wg-quick@vpn


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn683rnkwvdsur6nxaxgv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn683rnkwvdsur6nxaxgv.png" alt="Started the VPN"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PART III
&lt;/h2&gt;

&lt;p&gt;So our VPN server is now running, but we still need to give users access. For that we'd need to generate some more files using wg, but I love GUIs, so after some digging I found this amazing dashboard. Thanks to the author, I can do everything from the dashboard alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install the Dashboard
&lt;/h3&gt;

&lt;p&gt;Here's the &lt;a href="https://github.com/donaldzou/WGDashboard" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt;.&lt;br&gt;
Let's clone the repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone -b v3.0.6 https://github.com/donaldzou/WGDashboard.git wgdashboard


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Change the directory &amp;amp; execute some commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd wgdashboard/src
chmod u+x wgd.sh
./wgd.sh install
chmod -R 755 /etc/wireguard
apt install python3-pip -y
pip3 install -r requirements.txt
./wgd.sh start


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focu88oe7dphe1e86ot7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focu88oe7dphe1e86ot7s.png" alt="Execution of the above commands"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzclasgnznyst1ct2xed2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzclasgnznyst1ct2xed2.png" alt="Server started"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure port 10086 is in use by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

netstat -tnlp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flupuh1lyo8g0j7nvh5pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flupuh1lyo8g0j7nvh5pu.png" alt="checking the port"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we are done here&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to LightSail
&lt;/h2&gt;

&lt;p&gt;Let's open these ports:&lt;br&gt;
&lt;strong&gt;51820 UDP&lt;br&gt;
10086 TCP&lt;/strong&gt;&lt;br&gt;
By going into the networking tab&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsokfr3xcyjycxghzremg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsokfr3xcyjycxghzremg.png" alt="port allowing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  PART IV
&lt;/h2&gt;

&lt;h3&gt;
  
  
  We are in the ENDGAME
&lt;/h3&gt;

&lt;p&gt;Open the Dashboard by going to &lt;br&gt;
&lt;code&gt;&lt;br&gt;
public-ip-of-instance:10086&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
In my case &lt;a href="http://3.111.147.192:10086/" rel="noopener noreferrer"&gt;http://3.111.147.192:10086/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw2lrl1m76lsjyvqxiwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw2lrl1m76lsjyvqxiwb.png" alt="Logging Screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Default Creds&lt;br&gt;
&lt;code&gt;&lt;br&gt;
username: admin&lt;br&gt;
password: admin&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
After logging in &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraqpozjsjy4dn10eh7ag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraqpozjsjy4dn10eh7ag.png" alt="Dashboard "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the Setting Page&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyrwge8a0tgka3gnx9jk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyrwge8a0tgka3gnx9jk.png" alt="Setting Page"&gt;&lt;/a&gt;&lt;br&gt;
change the &lt;br&gt;
&lt;code&gt;&lt;br&gt;
Peer Remote Endpoint (This will be change globally, and will be apply to all peer's QR code and configuration file.)&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
From anything like this&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72plgrqu342g6jbiwcz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72plgrqu342g6jbiwcz2.png" alt="ip"&gt;&lt;/a&gt;&lt;br&gt;
to the public IP of your instance, in my case &lt;strong&gt;3.111.147.192&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And then Go to the Configuration Page&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9pqys17sarohygvmyhf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9pqys17sarohygvmyhf.png" alt="Configuration Page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the Blue Add Button on the Lower Right&lt;br&gt;
Add the Username and Download the File by clicking on the small green button.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzij6cx24qvkaxassggy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzij6cx24qvkaxassggy.png" alt="Added the User"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the WireGuard client, add the tunnel by importing the downloaded file, and click&lt;br&gt;
&lt;strong&gt;ACTIVATE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If everything is right, you will be connected to the VPN; check your IP to verify it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cpb2nmemblrcpjyhdnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cpb2nmemblrcpjyhdnz.png" alt="whatsmyIP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  That's How We can have our own VPN
&lt;/h2&gt;

&lt;p&gt;If you have any questions, please feel free to ask in the comments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpn</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>KeyCloak with Nginx Ingress</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Sun, 07 Aug 2022 16:21:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/keycloak-with-nginx-ingress-6fo</link>
      <guid>https://dev.to/aws-builders/keycloak-with-nginx-ingress-6fo</guid>
      <description>&lt;p&gt;Hello there, If you came here I guess you are also tired of finding the solution to Deploy KeyCloak with Ingress(Nginx) in Kubernetes (K8s), I have faced the some issue that are not available very openly, so I'm here to make sure you didn't go through the pain I have gone through 😅 so let's start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Kubernetes Cluster(can create with &lt;a href="https://dev.to/aws-builders/install-kops-with-gossip-dns-over-aws-49op"&gt;KOps&lt;/a&gt;), Ingress Controller (Nginx)&lt;/p&gt;

&lt;h2&gt;
  
  
  Step I
&lt;/h2&gt;

&lt;p&gt;Select which chart you want to use; there are two Helm charts:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://bitnami.com/stack/keycloak/helm" rel="noopener noreferrer"&gt;Bitnami KeyCloak&lt;/a&gt;&lt;br&gt;
&lt;a href="https://artifacthub.io/packages/helm/codecentric/keycloak" rel="noopener noreferrer"&gt;Codecentric KeyCloak&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Feel free to use either of these; you can just google them or click the links provided above. For this example we'll use Bitnami KeyCloak; personally, I think it's easier to deploy with this chart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step II
&lt;/h2&gt;

&lt;p&gt;So I guess you decided to use the Bitnami chart too. There are a few things you need to take care of, otherwise the deployment will fail.&lt;/p&gt;

&lt;h3&gt;
  
  
  NOTICE
&lt;/h3&gt;

&lt;p&gt;Make sure you have set the database password by passing it into values.yaml:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;externalDatabase.password&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;postgresql.auth.password&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These two fields should have the same value; otherwise you'll run into a Postgres error and the pod will go into a &lt;strong&gt;CrashLoopBackOff&lt;/strong&gt;.&lt;br&gt;
And since we are using Nginx as the ingress controller, we are going to enable the ingress:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ingress.enabled&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ingress.hostname&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ingress.pathType&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I hope you are finding these values in values.yaml and overriding them. Now, the most important thing: since we are using an Application Load Balancer in our case (I'll attach a link on how to do that soon),&lt;br&gt;
I have configured it so that traffic in front of the ALB is &lt;strong&gt;HTTPS&lt;/strong&gt; and behind it is &lt;strong&gt;HTTP&lt;/strong&gt;. If you have the same setup,&lt;br&gt;
make sure you have made this &lt;strong&gt;change&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;proxy: edge&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can configure the admin username and its password as well; I hope you'll find those values too.&lt;/p&gt;
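&lt;p&gt;Putting the settings above together, a minimal override file might look like this. The hostname and passwords are placeholders, and the exact keys can vary between chart versions, so double-check them against the chart's own values.yaml:&lt;/p&gt;

```yaml
# values-override.yaml -- placeholder values, adjust to your setup
auth:
  adminUser: admin
  adminPassword: CHANGE_ME

# These two passwords must match (see the note above)
postgresql:
  auth:
    password: SAME_DB_PASSWORD
externalDatabase:
  password: SAME_DB_PASSWORD

# TLS terminates at the ALB in front of the cluster
proxy: edge

ingress:
  enabled: true
  hostname: keycloak.example.com
  pathType: Prefix
```

&lt;p&gt;You would then deploy with something like &lt;code&gt;helm install keycloak bitnami/keycloak -f values-override.yaml&lt;/code&gt;.&lt;/p&gt;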

&lt;p&gt;Now you can deploy the Helm chart with the updated values and wait a bit, as it will take some time; grab a water bottle for yourself 🍾.&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP III
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Confirmation&lt;/strong&gt; that it's running successfully: use kubectl port-forward to proxy the port to your local system and see if it's running. If yes, we can move forward; if not 🥺, please check the configuration you have made, or feel free to ask in the comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  STEP IV
&lt;/h2&gt;

&lt;p&gt;If you already did this step while setting up the ingress, well and good; but if not, you are like me 😊.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;KeyCloak&lt;/strong&gt; needs some headers to work behind a proxy, as mentioned &lt;a href="https://www.keycloak.org/server/reverseproxy" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We need to configure our Nginx ingress controller to pass the headers. After digging for 5 days I found this:&lt;br&gt;
we need to create a ConfigMap containing the following data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: ConfigMap
apiVersion: v1
metadata:
  name: &amp;lt;chart-name-with-which-deployed&amp;gt;-nginx-ingress-controller
  namespace: &amp;lt;namespace-in-which-deployed-nginx-ingress-controller&amp;gt;
data:
  use-forwarded-headers: "true"
  forwarded-for-header: "X-Forwarded-For"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure the name is correct, otherwise it will not work. To verify it's working, check the logs of the pod&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nginx-controller-nginx-ingress-controller&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You'll see something like&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Found the configmap needed to reload backend, reload complete&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;not exactly that, but something like it, and you're done.&lt;/p&gt;

&lt;p&gt;Now go to the hostname associated with KeyCloak and you'll be able to access the admin panel without issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Discuss the Errors You'll See if These Steps Are Not Completed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, if you didn't set the password, you'll lose the connection to Postgres whenever you upgrade the Helm chart: the default password is randomly generated and will change on upgrade, so make sure you have provided one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;, if the headers are not making it through the ingress, you'll not be able to access the admin console; instead you'll be stuck at&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/admin/master/console&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If it's already configured, you'll not face this error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third&lt;/strong&gt;, too many redirects.&lt;/p&gt;

&lt;p&gt;This is caused by &lt;strong&gt;proxy=passthrough&lt;/strong&gt;, which is the default value.&lt;br&gt;
So if your TLS terminates at a load balancer or proxy in front of KeyCloak, you have to use&lt;/p&gt;

&lt;p&gt;&lt;code&gt;proxy: edge&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and it will start working&lt;/p&gt;

&lt;h2&gt;
  
  
  And
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;My friend&lt;/strong&gt;, if you have done all this right, you will be able to see the login screen of the admin console.&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4s8w1fyrst7gez5tji4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4s8w1fyrst7gez5tji4.png" alt="Image description" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Thank you for reading this far; I hope it helped you
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Feel free to ask any questions&lt;/strong&gt; &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aws</category>
      <category>productivity</category>
    </item>
    <item>
      <title>KOps using Gossip DNS</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Mon, 07 Mar 2022 06:37:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/install-kops-with-gossip-dns-over-aws-49op</link>
      <guid>https://dev.to/aws-builders/install-kops-with-gossip-dns-over-aws-49op</guid>
      <description>&lt;h3&gt;
  
  
  KOPS?? you read it right! Kops with Gossip DNS
&lt;/h3&gt;

&lt;p&gt;So I want to apologise for posting this so late. Anyway, without any further delay, let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is KOps?
&lt;/h2&gt;

&lt;p&gt;Let me borrow this line from the official definition:&lt;br&gt;
"it's the kubectl of K8s clusters"&lt;/p&gt;

&lt;p&gt;I like this line very much because it's the simplest way to understand what KOps is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case of KOps
&lt;/h2&gt;

&lt;p&gt;Suppose you want to create a self-managed cluster, but don't want to use kubeadm and manage the infrastructure yourself.&lt;br&gt;
KOps is the answer to all of those queries.&lt;/p&gt;

&lt;p&gt;It creates a production-grade cluster that you can use for development, testing, or just fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  To Create a Cluster You Need 2 Things
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;A cloud account (AWS, GCP, Azure, etc.).&lt;br&gt;
A DNS name/hostname/website, whatever you want to call it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But most of us don't own a domain name, and we don't want to buy one either, so what can we do to solve this?&lt;/p&gt;

&lt;h2&gt;
  
  
  Gossip DNS comes into play.
&lt;/h2&gt;

&lt;p&gt;You can choose any name, but it must end with &lt;code&gt;.k8s.local&lt;/code&gt;, and you're ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Talk about the Cloud Account
&lt;/h2&gt;

&lt;p&gt;Here we need admin access, because we need to create a user and grant it the required permissions; you can check the exact permissions in the official docs &lt;a href="https://kops.sigs.k8s.io/getting_started/aws/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You also need an S3 bucket, so that KOps can save the cluster configuration in the cloud; other members of your team can then use the same configuration to work with the cluster.&lt;br&gt;
&lt;strong&gt;"NOTE: Bucket versioning needs to be enabled &amp;amp; the bucket should be in us-east-1, otherwise you need to do some more steps."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And you're done; just run&lt;br&gt;
&lt;code&gt;kops create cluster --state s3://&amp;lt;your-bucket-name&amp;gt; --name &amp;lt;name-of-your-cluster&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can set these flags as environment variables for easier use.&lt;/p&gt;
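&lt;p&gt;For example (the bucket and cluster names below are hypothetical; &lt;code&gt;KOPS_STATE_STORE&lt;/code&gt; is read by kops itself):&lt;/p&gt;

```shell
# Hypothetical names -- replace with your own bucket and cluster.
# kops reads the state store from KOPS_STATE_STORE, so the --state
# flag can be dropped from every subsequent command.
export KOPS_STATE_STORE=s3://my-kops-state-bucket
export NAME=my-cluster.k8s.local

# The create command then shrinks to:
echo "kops create cluster --name ${NAME}"
```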

&lt;p&gt;After running the command, it will show you the config file in which everything is mentioned.&lt;/p&gt;

&lt;p&gt;You just need to run the same command with &lt;code&gt;--yes&lt;/code&gt;:&lt;br&gt;
&lt;code&gt;kops create cluster --state s3://&amp;lt;your-bucket-name&amp;gt; --name &amp;lt;name-of-your-cluster&amp;gt; --yes&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Saying it will create a production-grade cluster is a little bit off from my point of view; there are 2-3 things you need to do before you can hand it over to someone:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Setting up the metrics server in K8s&lt;br&gt;
Setting a longer expiry date for the config file&lt;br&gt;
Installing the CNI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;NOTE: It's better to specify the CNI at the start, when we are creating the config file for our cluster.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Otherwise it's a mess to configure the CNI, because it needs to match the cluster's CIDR. In my case I had to delete the cluster and create it again, since I didn't have anything critical on it.&lt;br&gt;
So choose it first.&lt;/p&gt;
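&lt;p&gt;A minimal sketch of choosing the CNI up front (calico is just an example choice; the bucket and cluster names are placeholders):&lt;/p&gt;

```shell
# Specify the CNI at creation time via --networking so you never
# have to retrofit it to the cluster CIDR later.
kops create cluster --state s3://my-kops-state-bucket \
  --name my-cluster.k8s.local \
  --networking calico
```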

&lt;p&gt;You can read more about the CNI and the metrics server; just google them with KOps as a keyword and you'll find everything.&lt;/p&gt;

&lt;h3&gt;
  
  
  And Thanks for Reading This Long Article, Guys
&lt;/h3&gt;

&lt;p&gt;You can check out all the command flags in the official docs.&lt;br&gt;
I wrote this article mostly for the &lt;strong&gt;GOSSIP DNS&lt;/strong&gt; part, because I wasn't able to find any article about it.&lt;/p&gt;

&lt;p&gt;Read every bit of the article, as it helps to set everything up, and feel free to ask about any question or error; I'll try my best to solve them.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>SAIL</title>
      <dc:creator>Ashutosh Singh</dc:creator>
      <pubDate>Mon, 03 Jan 2022 14:29:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/sail-2ifg</link>
      <guid>https://dev.to/aws-builders/sail-2ifg</guid>
      <description>&lt;h2&gt;
  
  
  Happy New Year Guys!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  I know it's a bit late but.... you understand it
&lt;/h3&gt;

&lt;p&gt;You have probably guessed today's topic from the cover image or the title: &lt;strong&gt;"LightSail"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So I was just going through Upwork and saw some work related to &lt;strong&gt;"AWS LightSail"&lt;/strong&gt;. I have never worked with LightSail before, only read about it a bit; it's the simplest way of using the cloud... basically the AWS cloud 😁😁.&lt;/p&gt;

&lt;p&gt;From my one day of experience using LightSail, or I should say one hour,&lt;br&gt;
all I can say is that &lt;em&gt;"it's really the simplest way of utilizing the resources of AWS"&lt;/em&gt;. From deploying a virtual machine, an application, a container, or a load balancer, it has it all in a very clear and precise order. They also have very good documentation for the common use cases; I read the one about deploying a WordPress application, and it's really good.&lt;/p&gt;

&lt;h3&gt;
  
  
  So, Let's Move On to the Technical Stuff 🔥
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdh1xqspbfcwu6gsqygf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdh1xqspbfcwu6gsqygf.png" alt="Image description" width="800" height="393"&gt;&lt;/a&gt;&lt;br&gt;
The first screen that you'll see after opening the LightSail service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Truth be told, there's really not much to learn about LightSail, or maybe I'm mistaken; please let me know more about it in the comments.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, why do we have this service?&lt;br&gt;
In my opinion, it's for those who are very new to the cloud and only know some terms like instance/VM, load balancers, databases, etc.&lt;/p&gt;

&lt;p&gt;It has all the services required by someone who is not an expert in the cloud: databases, containers, storage, snapshots (backups). They are right there; just look at the image above and you'll spot them at a glance.&lt;br&gt;
I know this sounds a bit ridiculous, but think from the perspective of a person who barely knows AWS; it's an easy start for them.&lt;br&gt;
You don't have to hunt for services in the service panel like we generally do, and we don't have to configure much; everything is pretty much already configured for us, which is a plus point.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;Well, I have not compared the pricing with other services, because every service is unique in its own way, but you can find out more about &lt;a href="https://aws.amazon.com/lightsail/pricing/"&gt;it&lt;/a&gt;.&lt;br&gt;
You basically have 3 months of free tier, so if you want to check it out, please do, and let me know about your experience.&lt;/p&gt;

&lt;p&gt;Again, there wasn't much to discuss or learn, but I would still suggest giving it a try, so that at least you know about it when someone brings it up.&lt;/p&gt;

&lt;p&gt;You can use it as a staging or development environment if you don't want to get into the trouble of configuring the services, or if you have just started your business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Again, for now I think it's the easiest and simplest way to use AWS resources.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But do keep in mind that there aren't many options available to you when you use it; I can't think of a specific missing one myself, but if you're looking for some particular functionality, do check the official LightSail page.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanks for reading it, I hope it gave you a little bit of insight into LightSail.
&lt;/h2&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
