<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mahesh Upreti</title>
    <description>The latest articles on DEV Community by Mahesh Upreti (@mahupreti).</description>
    <link>https://dev.to/mahupreti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1253929%2Fa7295e4c-46c3-4c47-8c42-a38ea3bb5c5d.jpg</url>
      <title>DEV Community: Mahesh Upreti</title>
      <link>https://dev.to/mahupreti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mahupreti"/>
    <language>en</language>
    <item>
      <title>Automating MFA in an Amazon EC2 Instance: Mastering Secure Access</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sun, 22 Mar 2026 07:14:48 +0000</pubDate>
      <link>https://dev.to/mahupreti/automating-mfa-in-an-amazon-ec2-instance-mastering-secure-access-1maa</link>
      <guid>https://dev.to/mahupreti/automating-mfa-in-an-amazon-ec2-instance-mastering-secure-access-1maa</guid>
      <description>&lt;p&gt;Automating MFA in an Amazon EC2 instance adds a critical security layer to protect your infrastructure. &lt;/p&gt;

&lt;p&gt;By configuring MFA for SSH access, you're ensuring that access requires more than just a private key or password. Users must also present a time-based one-time password (TOTP) generated by an MFA device, such as Google Authenticator or Authy. This means that even if an attacker obtains an SSH key, they cannot access the instance without the MFA token. MFA setup on an EC2 instance helps safeguard your system from unauthorized access and significantly reduces the risk of remote attacks.&lt;/p&gt;

&lt;p&gt;In addition to MFA, SELinux (Security-Enhanced Linux) may be enabled on your Amazon Linux instance. (This tutorial also works if your EC2 instance doesn't have SELinux enabled.) SELinux is a security mechanism that adds a layer of protection by enforcing security policies that govern how processes interact with each other and with the system's files. Automating MFA on an EC2 instance with SELinux enabled, however, requires that the necessary security contexts and policies are configured so the MFA process can work alongside SELinux's protections.&lt;/p&gt;

&lt;h3&gt;
  
  
  In this guide, we will walk through the following steps:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Setting up MFA on the EC2 instance for SSH login.&lt;/li&gt;
&lt;li&gt;Ensuring SELinux is configured so it does not interfere with the MFA process.&lt;/li&gt;
&lt;li&gt;Testing the MFA-enabled SSH login to confirm that access is secure and compliant with both MFA and SELinux policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Components for MFA Setup on an EC2 Instance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An Amazon EC2 instance running Amazon Linux (or a compatible distribution) with SELinux enabled.&lt;/li&gt;
&lt;li&gt;A virtual MFA device (such as Google Authenticator or Authy).&lt;/li&gt;
&lt;li&gt;SSH access configured for the EC2 instance.&lt;/li&gt;
&lt;li&gt;Sudo privileges on the EC2 instance to modify configurations.&lt;/li&gt;
&lt;li&gt;SELinux enabled and configured to permit the changes required for implementing MFA.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why You Need SELinux and MFA
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MFA: Protects against unauthorized access by requiring users to provide not only their SSH key but also a time-sensitive code from a registered MFA device.&lt;/li&gt;
&lt;li&gt;SELinux: SELinux adds a security layer by actively controlling processes and access to files through strict security policies. To enable MFA on your EC2 instance, you must configure SELinux to allow the necessary modifications and ensure it doesn't block authentication-related operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of this guide, you'll have automated MFA for SSH access on your Amazon Linux instance, with SELinux configured to support the changes without breaking the system's security context. This will significantly enhance your instance's overall security, making unauthorized access much more difficult.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Installing Google Authenticator on the EC2 Instance
&lt;/h3&gt;

&lt;p&gt;First, SSH into your EC2 instance. Once connected, switch to the root user (or use sudo for administrative tasks) and install the necessary software packages.&lt;/p&gt;

&lt;p&gt;Install Google Authenticator and qrencode: run the following command to install the Google Authenticator PAM module and qrencode (for generating QR codes): &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo dnf install google-authenticator qrencode -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Initialize Google Authenticator: To enable MFA in AWS EC2, run the google-authenticator command for the user for whom you want to enable MFA (e.g., ec2-user). This command will generate a secret key and provide a QR code to scan with the Google Authenticator app. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;google-authenticator -s ~/.ssh/google_authenticator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;When you set up Google Authenticator on your EC2 instance, the system prompts you with several configuration questions to customize how MFA (Multi-Factor Authentication) operates. These prompts allow you to fine-tune the behavior of the authentication process—helping you strike the right balance between strong security and user convenience. Below is a detailed breakdown of each option and what it means:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you want authentication tokens to be time-based? &lt;br&gt;
Prompt: Do you want authentication tokens to be time-based? (y/n)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google Authenticator and most MFA apps use time-based tokens by default. These tokens expire after a short period, usually 30 seconds, and then generate a new token for the next time window. This time-based approach ensures that even if someone intercepts a token, they cannot reuse it later. Therefore, choosing Yes (y) to enable time-based tokens is recommended because it provides stronger security. Regular rotation of tokens reduces the risk of reuse or interception.&lt;/p&gt;

&lt;p&gt;The system displays a QR code for you to scan, or lets you manually enter the secret key into your authenticator app, where you add your account name with the time-based option enabled. Securely store the secret key, verification codes, and scratch codes generated during this process in case you lose access to the app; remember that each scratch code can be used only once. In this guide, we scan the QR code rather than entering the key manually.&lt;br&gt;
Next, you'll be prompted with several questions to configure MFA: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Do you want me to update your "/home/ec2-user/.ssh/google_authenticator" file? (y/n)&lt;br&gt;
When you configure Google Authenticator for Multi-Factor Authentication (MFA) on your EC2 instance, one of the prompts you will encounter is: &lt;br&gt;
Do you want me to update your "/home/ec2-user/.ssh/google_authenticator" file (y/n)?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The google_authenticator file stores your secret key, OTP settings, and the scratch codes that are crucial for MFA setup and recovery. Answering Yes (y) writes your chosen settings to this file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. Do you want to disallow multiple uses of the same authentication token? &lt;br&gt;
Prompt: Do you want to disallow multiple uses of the same authentication token? (y/n)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you choose Yes (y), each authentication token will be valid for only a single login attempt. Once you use a token to log in successfully, it becomes invalid and cannot be reused, even if someone tries to enter the same token again. Disallowing multiple uses of the same token provides an additional layer of security: if a malicious actor intercepts a token during transmission, they won't be able to replay it for another login attempt, even if they manage to access the server or your session. Recommendation: Yes (y) is recommended because it prevents tokens from being replayed by attackers and forces a fresh token for each login attempt, reducing the chances of successful brute-force attacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c. Do you want to increase the time window for valid tokens (default is 1:30 min)? &lt;br&gt;
Prompt: Do you want to increase the time window for valid tokens (default is 1:30 min)? (y/n)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This option allows you to extend the time window in which a generated authentication token is valid. By default, Google Authenticator tokens are valid for 30 seconds (the "1:30 min" refers to the 1 minute 30 seconds time window, which includes the current 30-second token and the previous and next 30-second tokens to accommodate any time skew between the client and server). Increasing the time window (e.g., to 4 minutes) could be helpful if you're experiencing issues with time synchronization between the client (your phone or device) and the server, especially if you are in a region with unstable or variable network conditions. However, the downside is that extending the window makes it easier for attackers to guess the token, as they have more time to try to authenticate. &lt;/p&gt;

&lt;p&gt;Recommendation: Choosing No (n) is generally the best option. The default 1-minute and 30-second window typically offers a solid balance between security and usability. You should only consider extending this time window if you're experiencing issues with clock drift or synchronization delays between your MFA device and the server.&lt;/p&gt;
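To make the "1:30 min" figure concrete, here is a small sketch of the underlying time-step arithmetic. This is standard TOTP behavior (RFC 6238), not a google-authenticator command:

```shell
# The server accepts the previous, current, and next 30-second time-step
# by default: 3 steps x 30 s = the "1 min 30 s" window described above.
now=$(date +%s)
step=$(( now / 30 ))   # current TOTP time-step (epoch seconds / 30)
for t in $(( step - 1 )) "$step" $(( step + 1 )); do
  echo "a token for time-step $t would be accepted"
done
```

Widening the window simply extends this loop by more steps on each side, which is why it also widens the guessing window for an attacker.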

&lt;p&gt;&lt;strong&gt;d. Do you want to enable rate-limiting for login attempts? &lt;br&gt;
Prompt: Do you want to enable rate-limiting for login attempts? (y/n)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rate-limiting helps prevent brute-force attacks by limiting the number of login attempts a user can make within a specific time frame. If enabled, the system will allow only a certain number of authentication attempts (e.g., 3 attempts) within a defined period (e.g., 30 seconds). After this limit is reached, further attempts will be blocked temporarily. Recommendation: Yes (y) is the recommended option. Enabling rate-limiting is a simple but effective way to reduce the chances of a successful brute-force attack. If an attacker tries to guess the token by repeatedly entering incorrect values, the system will block them after a few failed attempts, making it much harder for them to break into the system.&lt;/p&gt;

&lt;p&gt;These configuration options are all about finding the balance between security and usability. By opting for the recommended settings, you will significantly improve the security of your EC2 instance while minimizing the risk of unauthorized access via brute force or token reuse.&lt;/p&gt;

&lt;p&gt;After these options, the tool will display a QR code in your terminal. You can scan this code with the Google Authenticator app or manually enter the provided secret key.&lt;/p&gt;
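For reference, the answers you give are recorded as option lines in the secret file itself. The sketch below writes an illustrative copy: the secret and scratch codes are made up, and the option syntax reflects what libpam-google-authenticator emits, though it may vary by version:

```shell
# Illustrative only: fake secret and scratch codes, real option syntax.
cat > /tmp/google_authenticator.example <<'EOF'
ABCDEFGHIJKLMNOP234567
" RATE_LIMIT 3 30
" DISALLOW_REUSE
" TOTP_AUTH
12345678
23456789
34567890
EOF
# First line: the base32 TOTP secret. Lines starting with '" ' are options
# (3 attempts per 30 s, no token reuse, time-based mode). The remaining
# lines are one-time emergency scratch codes.
grep '^" ' /tmp/google_authenticator.example
```

Because this file holds the secret, it must stay readable only by its owner; the tool sets those permissions for you.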

&lt;p&gt;Restore the SELinux context for the new configuration: Since we have SELinux enabled, we need to restore the proper security context for the .google_authenticator file: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;restorecon -Rv ~/.ssh/&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Configure PAM (Pluggable Authentication Module) for Automating MFA in an Amazon EC2 Instance
&lt;/h3&gt;

&lt;p&gt;The next step is to modify the PAM configuration to enable Google Authenticator for SSH logins. We’ll update the PAM configuration file (/etc/pam.d/sshd) to require Google Authenticator's second-factor authentication.&lt;/p&gt;

&lt;p&gt;Edit the PAM configuration file: Open the PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/pam.d/sshd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following lines at the bottom of the file to enable Google Authenticator for MFA (each PAM rule goes on its own line): &lt;/p&gt;

&lt;p&gt;&lt;code&gt;auth required pam_google_authenticator.so secret=/home/${USER}/.ssh/google_authenticator&lt;br&gt;
auth required pam_permit.so&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, if you want to allow some users to bypass MFA, add the nullok option to the end of the pam_google_authenticator line: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;auth required pam_google_authenticator.so secret=/home/${USER}/.ssh/google_authenticator nullok&lt;br&gt;
auth required pam_permit.so&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: nullok will allow users who haven't set up MFA to log in without a second-factor prompt.&lt;/p&gt;

&lt;p&gt;Comment out the password authentication line: In the same file (/etc/pam.d/sshd), find the line that references password-auth and comment it out to ensure only SSH key-based authentication and MFA are required: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;#auth substack password-auth&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Update SSH Configuration to Require MFA
&lt;/h3&gt;

&lt;p&gt;Next, we need to update the SSH server configuration (/etc/ssh/sshd_config) to require both an SSH key and a verification code.&lt;/p&gt;

&lt;p&gt;Edit the SSH configuration file: Open the SSH configuration file with a text editor: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/ssh/sshd_config&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Make the following changes:&lt;/p&gt;

&lt;p&gt;Enable keyboard-interactive authentication: uncomment the line (remove the leading #) so that: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;#KbdInteractiveAuthentication yes&lt;/code&gt; becomes &lt;code&gt;KbdInteractiveAuthentication yes&lt;/code&gt;&lt;br&gt;
Enable Challenge-Response Authentication: Uncomment or add the following line: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;ChallengeResponseAuthentication yes&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Enable the Authentication Methods: Add this line at the bottom of the file: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;AuthenticationMethods publickey,keyboard-interactive&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Restart the SSH service: After saving the changes, restart the SSH service for the changes to take effect: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl restart sshd.service&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
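Before or right after the restart, it's worth confirming that all three directives actually made it into the file. The sketch below runs against a throwaway copy so it can run anywhere; on the instance you would point grep at /etc/ssh/sshd_config itself, and `sudo sshd -t` will additionally catch syntax errors before you restart:

```shell
# Throwaway copy of the three directives this step adds (illustrative).
cat > /tmp/sshd_config.excerpt <<'EOF'
KbdInteractiveAuthentication yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
EOF
# On the real instance:
#   grep -E '^(KbdInteractive|ChallengeResponse|AuthenticationMethods)' /etc/ssh/sshd_config
grep -Ec '^(KbdInteractiveAuthentication|ChallengeResponseAuthentication|AuthenticationMethods)' /tmp/sshd_config.excerpt
# -> 3 (all three directives present)
```

Validating before restarting matters because a broken sshd_config can lock you out of the instance entirely.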
&lt;h3&gt;
  
  
  Step 4: Test Automated MFA on the Amazon EC2 Instance
&lt;/h3&gt;

&lt;p&gt;To confirm MFA is working correctly, open a new terminal and attempt to SSH into your EC2 instance. After entering your SSH key passphrase (if required), the system prompts you to provide a verification code from your Google Authenticator app. When you enter the correct code, the system grants access, confirming that MFA is properly set up.&lt;/p&gt;

&lt;p&gt;Example prompt:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Verification code:&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Enter the code shown in your Google Authenticator app. If the code is correct, you are logged in successfully. If authentication fails, check that your system time is synchronized accurately: Google Authenticator uses time-based one-time passwords (TOTP), and even small time differences can cause login problems.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 5: Force Users to Set Up Google Authenticator on First Login
&lt;/h3&gt;

&lt;p&gt;Finally, create a script that prompts users to set up Google Authenticator on their first login if they haven't configured MFA yet. This is the last step in automating MFA on an Amazon EC2 instance.&lt;/p&gt;

&lt;p&gt;Create a script to enforce MFA setup: Create a new file in the /etc/profile.d/ directory: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/profile.d/mfa.sh &lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Paste the following script into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; ~/.ssh/google_authenticator &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s2"&gt;"root"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

&lt;span class="k"&gt;then &lt;/span&gt;google-authenticator &lt;span class="nt"&gt;-s&lt;/span&gt; ~/.ssh/google_authenticator 

restorecon &lt;span class="nt"&gt;-Rv&lt;/span&gt; ~/.ssh/ 

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Save the generated emergency scratch codes and use the secret key or scan the QR code to register your device for multifactor authentication."&lt;/span&gt; 

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Login again using your SSH key pair and the generated One-Time Password on your registered device."&lt;/span&gt; 

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"logout"&lt;/span&gt; 

&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set the correct permissions for the script: Set the script file’s permissions to make it readable for all users:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo chmod o+r /etc/profile.d/mfa.sh&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Update PAM configuration: update the PAM configuration for SSH (/etc/pam.d/sshd) by adding nullok to the pam_google_authenticator line, so users who haven't completed MFA setup can still log in and the profile script can walk them through it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;auth&lt;/span&gt; &lt;span class="n"&gt;required&lt;/span&gt; &lt;span class="n"&gt;pam_google_authenticator&lt;/span&gt;.&lt;span class="n"&gt;so&lt;/span&gt; &lt;span class="n"&gt;secret&lt;/span&gt;=/&lt;span class="n"&gt;home&lt;/span&gt;/${&lt;span class="n"&gt;USER&lt;/span&gt;}/.&lt;span class="n"&gt;ssh&lt;/span&gt;/&lt;span class="n"&gt;google_authenticator&lt;/span&gt; &lt;span class="n"&gt;nullok&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(This is a single configuration line.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove nullok After MFA Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once all users have configured MFA, you can remove the nullok option from the PAM configuration file to ensure that users must use MFA for all future logins.&lt;/p&gt;

&lt;p&gt;Edit the PAM configuration again: &lt;code&gt;sudo nano /etc/pam.d/sshd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Remove nullok from the line that includes pam_google_authenticator.so: &lt;code&gt;auth required pam_google_authenticator.so secret=/home/${USER}/.ssh/google_authenticator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Save and exit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;By following these steps, you've successfully automated Multi-Factor Authentication (MFA) on your Amazon EC2 instance running Amazon Linux 2023. This setup enhances security by requiring users to authenticate with both an SSH key and a time-based OTP generated by the Google Authenticator app.&lt;/p&gt;

&lt;p&gt;Additionally, by configuring SELinux properly, you maintain strict security policies without interfering with MFA functionality. Enforcing MFA setup on a user's first login ensures that every individual accessing your EC2 instance is protected with an extra layer of authentication, further strengthening your instance’s overall security posture.&lt;/p&gt;

&lt;p&gt;If you'd like help with scripts or other automation tasks, feel free to reach out to me at &lt;a href="mailto:maheshupretiofficial@gmail.com"&gt;maheshupretiofficial@gmail.com&lt;/a&gt;. I am open to collaboration and projects.&lt;/p&gt;

</description>
      <category>mfainec2instance</category>
      <category>ec2instancesecurity</category>
      <category>awssecurity</category>
    </item>
    <item>
      <title>RepoChat 🚀 — Explore and talk with any repository effortlessly.</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sat, 07 Feb 2026 16:15:20 +0000</pubDate>
      <link>https://dev.to/mahupreti/repochat-chat-with-any-github-repository-n06</link>
      <guid>https://dev.to/mahupreti/repochat-chat-with-any-github-repository-n06</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Every developer knows the power of the GitHub Copilot CLI for explaining commands and fixing local errors. But what if that same intelligence could understand the entire architecture of a remotely hosted project?&lt;/p&gt;

&lt;p&gt;RepoChat is a command-line tool that acts as a high-performance bridge between your local system and remote repositories. With the GitHub Copilot CLI at its core, it converts the CLI into a global codebase intelligence tool, providing actionable insights across projects.&lt;/p&gt;

&lt;p&gt;🔥 Key Highlights (Driven by Copilot CLI)&lt;br&gt;
🔍 Repository-Wide Intelligence: RepoChat breaks the "active file" barrier. It indexes the entire repo so the Copilot CLI can answer questions about cross-module logic and architecture.&lt;br&gt;
🧠 Intelligent Code Translation: Need a Python service converted to a Node.js middleware? RepoChat leverages Copilot CLI's deep language understanding to translate complex logic while maintaining project-wide context.&lt;br&gt;
💾  Zero-Trust Security: No API keys, no third-party LLM dashboards. By using the official gh copilot extension, user data and credentials remain entirely within the trusted GitHub ecosystem.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;My repository link:  &lt;a href="https://github.com/mahupreti/repochat/tree/master" rel="noopener noreferrer"&gt;https://github.com/mahupreti/repochat/tree/master&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/dpO9aJRUdyI"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;If GitHub Copilot is already installed on your system, you can install the package using the command &lt;code&gt;pip install git+https://github.com/mahupreti/repochat.git&lt;br&gt;
&lt;/code&gt; and start using the RepoChat command-line tool right away.&lt;/p&gt;

&lt;p&gt;If GitHub Copilot is not installed, you can use the Dockerfile provided below, or set everything up on your machine by following the step-by-step commands that come after it.&lt;br&gt;
&lt;/p&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.11-slim

# System deps
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y \
    git \
    wget \
    curl \
    &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

# Install GitHub CLI
RUN mkdir -p /etc/apt/keyrings \
 &amp;amp;&amp;amp; wget -qO /etc/apt/keyrings/githubcli-archive-keyring.gpg https://cli.github.com/packages/githubcli-archive-keyring.gpg \
 &amp;amp;&amp;amp; echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" \
    &amp;gt; /etc/apt/sources.list.d/github-cli.list \
 &amp;amp;&amp;amp; apt-get update \
 &amp;amp;&amp;amp; apt-get install -y gh

# Install Copilot CLI
RUN gh extension install github/gh-copilot

# Install your repo
RUN pip install git+https://github.com/mahupreti/repochat.git

WORKDIR /workspace
CMD ["/bin/bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;docker build -t repochat .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it --rm repochat&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Log in to GitHub
gh auth login

# 2. Initialize Copilot (Downloads the CLI - Press 'Esc' after download)
gh copilot

# 3. (Optional) Install the package directly from the repository
pip install git+https://github.com/mahupreti/repochat.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(Now you are ready to index and chat!)&lt;/strong&gt;&lt;br&gt;
I am running inside the container for test purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjfanobd7tk3cdr3qeq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjfanobd7tk3cdr3qeq0.png" alt="Indexing the repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7k9vdna3s3extuqr1is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7k9vdna3s3extuqr1is.png" alt="Chatting start with repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlq4venjcuh8yablqxb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlq4venjcuh8yablqxb4.png" alt="chat with repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjputz35gh6o1qms8y2sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjputz35gh6o1qms8y2sb.png" alt="chat with repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the same way, you can ask additional questions about the code repository and request operations such as code changes, refactoring, or improvements.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;Building RepoChat was an exploration of what I call "Modular AI." Instead of building a custom RAG backend from scratch, I treated the GitHub Copilot CLI as a secure, high-utility Intelligence API.&lt;/p&gt;

&lt;p&gt;⚡ Frictionless Power&lt;br&gt;
Integrating gh copilot was remarkably simple. It allowed me to focus 100% of my energy on the retrieval logic, knowing that the "answering" part was handled by the best code-focused model in the world.&lt;/p&gt;

&lt;p&gt;🔒 The Trust Factor&lt;br&gt;
In an era of AI privacy concerns, the Copilot CLI is a standout. Users already trust gh auth. By piggybacking on this, RepoChat becomes a tool that even the most security-conscious developers can adopt immediately.&lt;/p&gt;

&lt;p&gt;💡 The Takeaway&lt;br&gt;
The GitHub Copilot CLI is more than just a tool; it's an Enabler. It provides a standardized platform for developers to build specialized, highly intelligent tools without the traditional overhead of AI development. RepoChat is my vision of that future.&lt;/p&gt;

&lt;p&gt;Team Submission: Mahesh Upreti ( Solo )&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>The Corporate Comedian: Turning LinkedIn into LaughIn.</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sun, 26 Jan 2025 16:18:57 +0000</pubDate>
      <link>https://dev.to/mahupreti/the-corporate-comedian-turning-linkedin-into-laughin-46de</link>
      <guid>https://dev.to/mahupreti/the-corporate-comedian-turning-linkedin-into-laughin-46de</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://srv.buysellads.com/ads/long/x/T6EK3TDFTTTTTT6WWB6C5TTTTTTGBRAPKATTTTTTWTFVT7YTTTTTTKPPKJFH4LJNPYYNNSZL2QLCE2DPPQVCEI45GHBT" rel="noopener noreferrer"&gt;Agent.ai&lt;/a&gt; Challenge: Assembly of Agents (&lt;a href="https://dev.to/challenges/agentai"&gt;See Details&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I have built an agent to make people laugh. The agent is a corporate comedian who is also a demotivational speaker. It generates three different pieces of comedy to make you roll on the floor. First, it generates roast material based on the LinkedIn profile you pass in. Then it creates a meme for you. Finally, the demotivational mentor rises and gives you the best unhelpful life advice based on your profile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why did I build this Agent?
&lt;/h2&gt;

&lt;p&gt;As we all know, LinkedIn is a platform for sharing professional insights, searching for jobs, connecting with like-minded people, and having corporate talks. But deep down, we all know we also need some fun on this platform. Coming from the corporate world, I would love to see a roast of all those people, including myself: based on their profiles, what roast can we bring to them? And when they post helpful advice on the platform, I would like to offer some unhelpful life advice based on their profiles and activities on LinkedIn, and make them giggle on their own.&lt;/p&gt;

&lt;p&gt;I would love to see people using it purely for entertainment. The agent generates fresh roasts, memes, and unhelpful advice each time, so users won't get bored and will keep coming back for fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://agent.ai/agent/the-corporate-comedian" rel="noopener noreferrer"&gt;https://agent.ai/agent/the-corporate-comedian&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqqkqngejyy1cf4orbq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqqkqngejyy1cf4orbq8.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ajuchp7cur2j3prozpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ajuchp7cur2j3prozpn.png" alt="Image description" width="800" height="769"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zrn20bo9c60w2bcgk20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zrn20bo9c60w2bcgk20.png" alt="Image description" width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0lvr20zl87jschrf5du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0lvr20zl87jschrf5du.png" alt="Image description" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent.ai Experience
&lt;/h2&gt;

&lt;p&gt;This was my first time creating AI Agents, and I was truly fascinated by what AI can do. I began by experimenting with different actions and building small things, then decided to make something fun. Once I settled on the idea, I designed the agent flow around a few actions. While exploring other agents, I discovered that my agent could invoke them. So, I created my LinkedIn roasting agent and had it invoke another agent to generate memes and unhelpful life advice.&lt;/p&gt;

&lt;p&gt;Since it was my first attempt, I faced some challenges, particularly in wiring up the actions and ensuring smooth communication between them. It was tricky at first, but with time and some experimentation I figured it out and successfully created my agent. It was a rewarding experience. Thanks to the Agent.ai team! &lt;/p&gt;

&lt;p&gt;One drawback of invoking other agents was that it significantly increased the time my agent took to generate its output.&lt;/p&gt;

&lt;p&gt;Team Member: Mahesh Upreti (Solo)&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>agentaichallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>What's Inside? Find your Perfect Search</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sat, 25 Jan 2025 04:22:30 +0000</pubDate>
      <link>https://dev.to/mahupreti/whats-inside-find-your-perfect-search-5egl</link>
      <guid>https://dev.to/mahupreti/whats-inside-find-your-perfect-search-5egl</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://srv.buysellads.com/ads/long/x/T6EK3TDFTTTTTT6WWB6C5TTTTTTGBRAPKATTTTTTWTFVT7YTTTTTTKPPKJFH4LJNPYYNNSZL2QLCE2DPPQVCEI45GHBT" rel="noopener noreferrer"&gt;Agent.ai&lt;/a&gt; Challenge: Productivity-Pro Agent (&lt;a href="https://dev.to/challenges/agentai"&gt;See Details&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built an agent designed to make searching through long-form content—like websites, videos, podcasts, or documents—quick and effortless. Its main goal is to save time and boost productivity by helping users find exactly what they’re looking for within lengthy content, without the hassle of manually going through it. It’s like having a smart assistant that tells you if the information you need is there, so you can focus on what matters most!&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;When you start the agent, you begin with a choice: article format or video format. Once you've picked one, the agent asks for the URL you want to explore. After you share the link, the magic begins:&lt;/p&gt;

&lt;p&gt;"What do you want to search for?"&lt;/p&gt;

&lt;p&gt;Here’s where you shine! Ask about anything you think might be in the URL. If the agent finds something related to your query, it will present the results in a precise, easy-to-read format. But if the agent can’t find what you’re looking for and senses you’re eager to dig deeper, it will suggest its partner agent for a little extra help! &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Did I Build This Agent?
&lt;/h2&gt;

&lt;p&gt;The idea for this agent came from my own experience of wasting time on long-form content. As a DevSecOps engineer, I often watch lengthy DevOps tutorials, AWS re:Invent talks, and other long product-launch videos. But some days I don’t have the time or patience to sit through them; I just want to know whether the specific topic I’m looking for is covered. That’s when I thought, “What if I had a way to quickly check if the content I need is there?” With that idea in mind, I created this agent, "What's Inside," to save time and make content searching smarter and more efficient.&lt;/p&gt;

&lt;p&gt;I imagine people using it to simplify their daily lives by saving time: there is no need to watch an entire long-form video or read a full research paper or document just to find out whether a specific portion is there. &lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;What's Inside: &lt;a href="https://agent.ai/agent/whats-inside" rel="noopener noreferrer"&gt;https://agent.ai/agent/whats-inside&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Video Link
&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/JdrBCKOQRZE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent.ai Experience
&lt;/h2&gt;

&lt;p&gt;This was my second time creating AI Agents, and I was truly fascinated by what AI can do. Using the tools at Agent.ai, I was able to build an AI agent with surprising ease. My experience with the Builder and AI Agents was great overall. I especially loved the variety of Actions provided in the Builder. That said, I’d love to see even more Actions available in the future. &lt;/p&gt;

&lt;p&gt;Team Member: Mahesh Upreti&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>agentaichallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The FOMO Factory: You won't miss a beat!</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sun, 06 Oct 2024 14:14:20 +0000</pubDate>
      <link>https://dev.to/mahupreti/the-fomo-factory-you-wont-miss-a-beat-2pf0</link>
      <guid>https://dev.to/mahupreti/the-fomo-factory-you-wont-miss-a-beat-2pf0</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pinata"&gt;The Pinata Challenge &lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Events are constantly happening in our local area, yet my friends often miss them. To address this, I created "&lt;strong&gt;The FOMO Factory&lt;/strong&gt;." As the name implies, no one has to miss out on what's happening nearby. Users can upload various file types, including images and videos, to this web application, along with the location where the event is happening. The user first picks the event's location with the map feature in the web app, then uploads an image or video of the event in progress. Everyone else can then see the events happening around town on their phones in real time and attend before they end. This project utilizes &lt;strong&gt;Pinata’s&lt;/strong&gt; Files API for smooth file storage and retrieval on the &lt;strong&gt;IPFS&lt;/strong&gt; network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Media Upload&lt;/strong&gt;: Users can upload images and videos for events, giving friends a real-time glimpse of what's happening.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Display&lt;/strong&gt;: All events are showcased with their details and media, along with geographical locations displayed on an interactive map.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IPFS Integration&lt;/strong&gt;: Seamless storage and retrieval of media files using the Pinata API ensure security and permanence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive Design&lt;/strong&gt;: The application is optimized for both desktop and mobile devices, ensuring accessibility for all users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;The website is hosted on Render's free tier, so it may take a few seconds to load at first after a period of inactivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://event-sharing-application-with-pinata.onrender.com/" rel="noopener noreferrer"&gt; Website Demo Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g6qaj5ynlr8poufjsbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g6qaj5ynlr8poufjsbc.png" alt="Main page" width="649" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgeqp8xcq8o2g6mk0m6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgeqp8xcq8o2g6mk0m6o.png" alt="Add Event" width="582" height="860"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsjj61e41ko5rux2o8nn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsjj61e41ko5rux2o8nn.png" alt="View Event Details" width="469" height="749"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/mahupreti" rel="noopener noreferrer"&gt;
        mahupreti
      &lt;/a&gt; / &lt;a href="https://github.com/mahupreti/event-sharing-application-with-pinata" rel="noopener noreferrer"&gt;
        event-sharing-application-with-pinata
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      This simple project was a submission to the Pinata Challenge on dev.to
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;The FOMO Factory&lt;/h1&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Description&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Events are constantly occurring in our local area, yet my friends often overlook them. To address this, I created "&lt;strong&gt;The FOMO Factory&lt;/strong&gt;." As the name implies, no one will miss out on what's happening nearby. Users can upload various file types, including images and videos, to this web application, along with the location where the event is happening. The user provides the location using the map feature in the web app and then uploads an image or video of the event. Nearby users within about 10 miles then get a notification on their phone in real time and can visit the event without missing it. This project utilizes &lt;strong&gt;Pinata’s&lt;/strong&gt; Files API for smooth file storage and retrieval on the &lt;strong&gt;IPFS&lt;/strong&gt; network.&lt;/p&gt;
&lt;p&gt;Pinata cloud IPFS is used to store all the images and videos uploaded by the user. Users are then able to share…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/mahupreti/event-sharing-application-with-pinata" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  More Details
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pinata&lt;/strong&gt; proved invaluable for letting users upload images along with their event locations so that others stay informed. A location alone rarely conveys the excitement of an event, but by uploading images and videos, users can share a real glimpse of local happenings. The &lt;strong&gt;Pinata API facilitated seamless integration&lt;/strong&gt; for storing all this data, using &lt;strong&gt;CIDs&lt;/strong&gt; to later retrieve and showcase events.&lt;/p&gt;

&lt;p&gt;Additionally, we recognized that once an event concludes, its relevance diminishes over time. For instance, an event lasting three days may not feel relatable after a week. To address this, we utilized the &lt;strong&gt;Pinata API&lt;/strong&gt; to &lt;strong&gt;unpin&lt;/strong&gt; images after seven days, ensuring that outdated content is automatically removed. The gateway URL provided by &lt;strong&gt;Pinata also served as a handy&lt;/strong&gt; &lt;strong&gt;CDN&lt;/strong&gt;, allowing users to easily share links directly with friends, which I have integrated into the website.&lt;/p&gt;
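&lt;p&gt;As a rough sketch of the flow described above (the endpoint paths follow Pinata's public pinning API, but the helper names and the PINATA_JWT environment variable are my own illustrative choices, not code from the project):&lt;/p&gt;

```python
# Hedged sketch: pin an event's media file to IPFS via Pinata, and
# unpin it again once the event is stale. Helper names are illustrative.
import os
import requests

PINATA_API = "https://api.pinata.cloud"

def _auth_headers() -> dict:
    # Assumes a Pinata JWT is available in the environment.
    return {"Authorization": "Bearer " + os.environ["PINATA_JWT"]}

def unpin_url(cid: str) -> str:
    # DELETE /pinning/unpin/{CID} removes a pinned file; this is how
    # media for events older than seven days gets cleaned up.
    return PINATA_API + "/pinning/unpin/" + cid

def upload_event_media(path: str) -> str:
    """Pin a local file and return its CID (the IpfsHash field)."""
    with open(path, "rb") as f:
        resp = requests.post(
            PINATA_API + "/pinning/pinFileToIPFS",
            headers=_auth_headers(),
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["IpfsHash"]

def unpin(cid: str) -> None:
    resp = requests.delete(unpin_url(cid), headers=_auth_headers())
    resp.raise_for_status()
```

&lt;p&gt;A scheduled cleanup job could then call unpin for every CID whose event ended more than seven days ago.&lt;/p&gt;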

&lt;p&gt;I am a solo participant.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pinatachallenge</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>AI Chatbot for the personalized feedback and recommendations</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Sun, 23 Jun 2024 03:47:42 +0000</pubDate>
      <link>https://dev.to/mahupreti/ai-chatbot-for-the-personalized-feedback-and-recommendations-5e9e</link>
      <guid>https://dev.to/mahupreti/ai-chatbot-for-the-personalized-feedback-and-recommendations-5e9e</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/twilio"&gt;Twilio Challenge v24.06.12&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I have built a chatbot system using the Twilio WhatsApp API and a Gemini API key. Users can ask for feedback on anything and receive personalized feedback and recommendations directly through WhatsApp.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;The demo can be seen by cloning and running the repo: &lt;a href="https://github.com/mahupreti/twilio-gemini-whatsapp-bot/tree/master" rel="noopener noreferrer"&gt;https://github.com/mahupreti/twilio-gemini-whatsapp-bot/tree/master&lt;/a&gt;&lt;br&gt;
I have included screenshots below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1toxhjm41btc3vw8w5c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1toxhjm41btc3vw8w5c6.png" alt="Image description" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkjerqfkaz245mnf5zrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkjerqfkaz245mnf5zrd.png" alt="Image description" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaky5k692o4idw1tlbo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaky5k692o4idw1tlbo4.png" alt="Image description" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mzeoo0cqmzbsw6kibrm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mzeoo0cqmzbsw6kibrm.png" alt="Image description" width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Twilio and AI
&lt;/h2&gt;

&lt;p&gt;Twilio offers so many products that we can integrate its APIs with third-party APIs and build something better on top of them. In my case, I connected the Gemini API with the Twilio WhatsApp API, so there is no need to visit the AI's official website. The best part of reaching a third-party AI API from WhatsApp is that we can even upload an image or a PDF in WhatsApp and generate insights from it. &lt;/p&gt;
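&lt;p&gt;To make the flow concrete, here is a minimal sketch of the webhook logic: Twilio POSTs the incoming WhatsApp message as form data, the text goes to Gemini, and the reply comes back as TwiML. The function names are illustrative, and the Gemini call is stubbed out; in the real bot a web framework receives Twilio's POST and the stub is replaced by an actual Gemini API call.&lt;/p&gt;

```python
# Hedged sketch of the WhatsApp webhook flow; names are illustrative.
import xml.etree.ElementTree as ET

def ask_gemini(prompt: str) -> str:
    # Placeholder: the real bot forwards the text (or an uploaded
    # image/PDF) to the Gemini API and returns the generated feedback.
    return "Feedback for: " + prompt

def twiml_reply(text: str) -> str:
    # Build the TwiML Response/Message XML that Twilio converts into
    # an outgoing WhatsApp message.
    root = ET.Element("Response")
    ET.SubElement(root, "Message").text = text
    return ET.tostring(root, encoding="unicode")

def handle_incoming(form: dict) -> str:
    # Twilio POSTs the WhatsApp message as form data; "Body" holds the
    # user's text. A web framework (e.g. Flask) would pass it in here.
    return twiml_reply(ask_gemini(form.get("Body", "")))
```

&lt;p&gt;Point the Twilio WhatsApp sandbox's webhook URL at the route serving this handler and the round trip is complete.&lt;/p&gt;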

&lt;h2&gt;
  
  
  Additional Prize Categories
&lt;/h2&gt;

&lt;p&gt;I believe my project falls under the Impactful Innovators category. &lt;/p&gt;

&lt;p&gt;Team Member: Mahesh Upreti&lt;/p&gt;

&lt;p&gt;Thanks for participating!&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>twiliochallenge</category>
      <category>ai</category>
      <category>twilio</category>
    </item>
    <item>
      <title>Gift of Terraform (CDKTF): Building Automated Load Balancer Health Monitoring with CloudWatch Alarms and SNS</title>
      <dc:creator>Mahesh Upreti</dc:creator>
      <pubDate>Mon, 12 Feb 2024 16:49:42 +0000</pubDate>
      <link>https://dev.to/mahupreti/cdktf-and-python-building-automated-load-balancer-health-monitoring-with-sns-alarms-1lnc</link>
      <guid>https://dev.to/mahupreti/cdktf-and-python-building-automated-load-balancer-health-monitoring-with-sns-alarms-1lnc</guid>
      <description>&lt;p&gt;We often prefer to engage with familiar elements in our surroundings. What if I told you that you can utilize your preferred programming language to provision resources from various service providers?&lt;/p&gt;

&lt;p&gt;Cloud Development Kit for Terraform (CDKTF) enables some of the popular programming languages to define and deploy infrastructure. You can check out this installation guide for cdktf &lt;a href="https://developer.hashicorp.com/terraform/tutorials/cdktf/cdktf-install" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/tutorials/cdktf/cdktf-install&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To begin with, create a new directory,&lt;br&gt;
&lt;code&gt;mkdir send_alarm&lt;br&gt;
cd send_alarm&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Inside the directory, run cdktf init, specifying the template for your preferred language and Terraform's AWS provider. We will not be using Terraform Cloud, so you can skip that option for now.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cdktf init --template="python" --providers="aws@~&amp;gt;4.0"&lt;/code&gt;&lt;br&gt;
(Check the latest provider version in the official Terraform documentation.)&lt;/p&gt;

&lt;p&gt;Note: &lt;strong&gt;The init output includes instructions for activating the Python virtual environment.&lt;/strong&gt;&lt;br&gt;
Open the main.py file and paste in the code below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 #!/usr/bin/env python

 from constructs import Construct
 from cdktf import TerraformStack, App, Token

 from imports.aws.provider import AwsProvider
 from imports.aws.cloudwatch_metric_alarm import CloudwatchMetricAlarm
 from imports.aws.sns_topic import SnsTopic
 from imports.aws.data_aws_lb_target_group import DataAwsLbTargetGroup
 from imports.aws.data_aws_lb import DataAwsLb
 from imports.aws.sns_topic_subscription import SnsTopicSubscription

 class MyStack(TerraformStack):
     def __init__(self, scope: Construct, name: str, config: dict):
         super().__init__(scope, name)
         AwsProvider(self, "aws", region="us-east-1", profile="your-credential-profile-name")
         for load_balancer in config["load_balancers"]:
             print(load_balancer) #used for debugging purpose
             lb = DataAwsLb(self, "load_balancer" + "_" + load_balancer["name"],
                            name=load_balancer["load_balancer_name"]
                            )

             lb_tg = DataAwsLbTargetGroup(self, "load_balancer_tg_name" + "_" + load_balancer["name"],
                                          name=load_balancer["load_balancer_tg_name"]
                                          )

             sns = SnsTopic(self, "sns_topic_name" + "_" + load_balancer["name"],
                            name=load_balancer["name"] + "_" + "topic"
                            )

             for email in load_balancer["email_list"]:
                 SnsTopicSubscription(self, "terraform-sns-topic_subscription" + "_" + load_balancer["name"] + email,
                                      protocol="email",
                                      topic_arn=sns.arn,
                                      endpoint=email,
                                      )

             CloudwatchMetricAlarm(self, "alb-healthy-hosts" + "_" + load_balancer["name"],
                                   alarm_actions=[sns.arn],
                                   alarm_description="Number of unhealthy nodes in Target Group",
                                   alarm_name=load_balancer["alarm_name"],
                                   comparison_operator="GreaterThanOrEqualToThreshold",
                                   dimensions={
                                       "LoadBalancer": lb.arn_suffix,
                                       "TargetGroup": lb_tg.arn_suffix
                                   },
                                   evaluation_periods=1,
                                   metric_name="UnHealthyHostCount",
                                   namespace="AWS/ApplicationELB",
                                   # ok_actions=[sns.arn],
                                   period=60,
                                   statistic="SampleCount",
                                   threshold=1
                                   )
 app = App()
 MyStack(app, "MyStack", config={
     "load_balancers": [
         {
             "name": "test-fulfillment1",
             "region": "us-east-1",
             "load_balancer_name": "lb-test-fulfillment",
             "load_balancer_tg_name": "lb-test-fulfillment-tg01",
             "alarm_name": "test-fulfillment-alarm",
             "email_list": ["mygmail1@gmail.com", "mygmail2@gmail.com"]
         },
         {
             "name": "test-fulfillment2",
             "region": "us-east-1",
             "load_balancer_name": "lb-test-fulfillment2",
             "load_balancer_tg_name": "test-fulfillment2-tg01",
             "alarm_name": "test-fulfillment2-alarm",
             "email_list": ["mygmail121@gmail.com"]
         },
     ]
 })
 app.synth()


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Ahhhhhh!!!!!!! The code is so long.....&lt;/strong&gt;&lt;br&gt;
Wait! This code may be lengthy, but we'll break down each component thoroughly to understand its functionality.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#!/usr/bin/env python&lt;br&gt;
from constructs import Construct&lt;br&gt;
from cdktf import TerraformStack, App, Token&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In CDKTF (Cloud Development Kit for Terraform), the line from constructs import Construct is used to import the Construct class from the constructs module.&lt;/p&gt;

&lt;p&gt;The Construct class is a fundamental class in CDKTF that represents a building block of your infrastructure. It serves as the base class for all other constructs and provides common functionality for creating, manipulating, and organizing resources in your CDKTF program.&lt;/p&gt;

&lt;p&gt;By importing Construct, you can utilize the functionality provided by this class to create and manage your infrastructure components in CDKTF.&lt;/p&gt;

&lt;p&gt;In CDKTF (Cloud Development Kit for Terraform), the line from cdktf import TerraformStack, App, Token is used to import specific classes from the cdktf module.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TerraformStack is a class that represents a CDKTF stack, which is a collection of infrastructure resources defined and managed together. It provides methods and properties for defining and deploying your Terraform-based infrastructure.&lt;/li&gt;
&lt;li&gt;App is a class that represents a CDKTF application. It serves as the entry point for your CDKTF program and is responsible for initializing and executing the stack(s) defined in your program.&lt;/li&gt;
&lt;li&gt;Token is a class whose instances represent CDKTF tokens. Tokens are used in CDKTF to represent references to resources and properties within the constructs. They provide a way to express dependencies and relationships between different parts of your infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By importing these classes and objects, you can use them in your CDKTF program to define and manage your infrastructure stacks, create CDKTF applications, and work with tokens for resource references and dependencies.&lt;br&gt;
&lt;code&gt;from imports.aws.cloudwatch_metric_alarm import CloudwatchMetricAlarm&lt;br&gt;
 from imports.aws.sns_topic import SnsTopic&lt;br&gt;
 from imports.aws.data_aws_lb_target_group import DataAwsLbTargetGroup&lt;br&gt;
 from imports.aws.data_aws_lb import DataAwsLb&lt;br&gt;
 from imports.aws.sns_topic_subscription import SnsTopicSubscription&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Each line is for importing the specific resources and data modules from the imports.aws module.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;class MyStack(TerraformStack):&lt;br&gt;
     def __init__(self, scope: Construct, name: str, config: dict):&lt;br&gt;
         super().__init__(scope, name)&lt;br&gt;
         AwsProvider(self, "aws", region="us-east-1", profile="your-credential-profile-name")&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The code provided defines a class named MyStack that extends the TerraformStack class.&lt;/p&gt;

&lt;p&gt;The MyStack class is defined with three parameters in its constructor: scope, name, and config. scope represents the construct scope, name is the name of the stack, and config is a dictionary that can be used to pass additional configuration parameters.&lt;/p&gt;

&lt;p&gt;The super().__init__(scope, name) line calls the constructor of the parent class TerraformStack with the provided scope and name arguments. This ensures that the base TerraformStack class is properly initialized.&lt;/p&gt;

&lt;p&gt;The AwsProvider class is instantiated with the line AwsProvider(self, "aws", region="us-east-1", profile="your-credential-profile-name"). This creates an AWS provider resource within the stack. The self argument refers to the current instance of the MyStack class, the string "aws" is the provider's name, region="us-east-1" specifies the AWS region to use, and profile selects the AWS credentials profile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You have probably configured credentials in the .aws/credentials file, and in most cases your profile will be named default.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let me describe the portion below before moving on.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app = App()&lt;br&gt;
 MyStack(app, "MyStack", config={&lt;br&gt;
     "load_balancers": [&lt;br&gt;
         {&lt;br&gt;
             "name": "loadbalancer1",&lt;br&gt;
             "region": "us-east-1",&lt;br&gt;
             "load_balancer_name": "lb-loadbalancer-1",&lt;br&gt;
             "load_balancer_tg_name": "lb-loadbalancer-tg01",&lt;br&gt;
             "alarm_name": "loadbalancer-1-alarm",&lt;br&gt;
             "email_list": ["test1@gmail.com", "test2@gmail.com"]&lt;br&gt;
         },&lt;br&gt;
         {&lt;br&gt;
             "name": "loadbalancer2",&lt;br&gt;
             "region": "us-east-1",&lt;br&gt;
             "load_balancer_name": "lb-loadbalancer-2",&lt;br&gt;
             "load_balancer_tg_name": "lb-loadbalancer-tg02",&lt;br&gt;
             "alarm_name": "loadbalancer-2-alarm",&lt;br&gt;
             "email_list": ["test1@gmail.com"]&lt;br&gt;
         },&lt;br&gt;
     ]&lt;br&gt;
 })&lt;br&gt;
 app.synth()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The code provided creates an instance of the App class and initializes a MyStack stack within it.&lt;/p&gt;

&lt;p&gt;MyStack(app, "MyStack", config={...}) creates an instance of the MyStack class within the app. It takes three arguments: app (the App instance), "MyStack" (the name of the stack), and config (a dictionary that contains configuration details for the stack).&lt;/p&gt;

&lt;p&gt;app.synth() synthesizes the CDKTF app and generates the Terraform configuration files based on the defined stack and resources.&lt;/p&gt;
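&lt;p&gt;With the stack defined and synthesized, the usual CDKTF commands take it the rest of the way. This is a sketch of the workflow; run the commands from the project directory with your AWS credentials configured:&lt;/p&gt;

```shell
# Render the stack into Terraform JSON under cdktf.out/ (nothing is applied yet)
cdktf synth

# Show the plan and, on confirmation, create the SNS topics,
# email subscriptions, and CloudWatch alarms for every load balancer entry
cdktf deploy

# Tear everything down again when you are finished experimenting
cdktf destroy
```

&lt;p&gt;Each email address in email_list will receive a subscription-confirmation message from SNS that must be accepted before alarm notifications are delivered.&lt;/p&gt;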

&lt;p&gt;&lt;code&gt;for load_balancer in config["load_balancers"]:&lt;br&gt;
             print(load_balancer) #used for debugging purpose&lt;br&gt;
             lb = DataAwsLb(self, "load_balancer" + "_" + load_balancer["name"],&lt;br&gt;
                            name=load_balancer["load_balancer_name"]&lt;br&gt;
                            )&lt;br&gt;
             lb_tg = DataAwsLbTargetGroup(self, "load_balancer_tg_name" + "_" + load_balancer["name"],&lt;br&gt;
                                          name=load_balancer["load_balancer_tg_name"]&lt;br&gt;
                                          )&lt;br&gt;
             sns = SnsTopic(self, "sns_topic_name" + "_" + load_balancer["name"],&lt;br&gt;
                            name=load_balancer["name"] + "_" + "topic"&lt;br&gt;
                            )&lt;br&gt;
             for email in load_balancer["email_list"]:&lt;br&gt;
                 SnsTopicSubscription(self, "terraform-sns-topic_subscription" + "_" + load_balancer["name"] + email,&lt;br&gt;
                                      protocol="email",&lt;br&gt;
                                      topic_arn=sns.arn,&lt;br&gt;
                                      endpoint=email,&lt;br&gt;
                                      )&lt;br&gt;
             CloudwatchMetricAlarm(self, "alb-healthy-hosts" + "_" + load_balancer["name"],&lt;br&gt;
                                   alarm_actions=[sns.arn],&lt;br&gt;
                                   alarm_description="Number of unhealthy nodes in Target Group",&lt;br&gt;
                                   alarm_name=load_balancer["alarm_name"],&lt;br&gt;
                                   comparison_operator="GreaterThanOrEqualToThreshold",&lt;br&gt;
                                   dimensions={&lt;br&gt;
                                       "LoadBalancer": lb.arn_suffix,&lt;br&gt;
                                       "TargetGroup": lb_tg.arn_suffix&lt;br&gt;
                                   },&lt;br&gt;
                                   evaluation_periods=1,&lt;br&gt;
                                   metric_name="UnHealthyHostCount",&lt;br&gt;
                                   namespace="AWS/ApplicationELB",&lt;br&gt;
                                   # ok_actions=[sns.arn],&lt;br&gt;
                                   period=60,&lt;br&gt;
                                   statistic="SampleCount",&lt;br&gt;
                                   threshold=1&lt;br&gt;
                                   )&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The code provided iterates over the list of load balancer configurations in the config["load_balancers"] list and creates corresponding resources for each load balancer. Here's an explanation of what the code does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The for load_balancer in config["load_balancers"] loop iterates over each load balancer configuration in the list. Inside the loop, the code prints the load_balancer dictionary, which represents the current load balancer configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DataAwsLb class is instantiated with the line lb = DataAwsLb(self, "load_balancer" + "_" + load_balancer["name"], name=load_balancer["load_balancer_name"]). This creates a data source for an existing Application Load Balancer (ALB) with the specified name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DataAwsLbTargetGroup class is instantiated with the line lb_tg = DataAwsLbTargetGroup(self, "load_balancer_tg_name" + "_" + load_balancer["name"], name=load_balancer["load_balancer_tg_name"]). This creates a data source for an existing ALB target group with the specified name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SnsTopic class is instantiated with the line sns = SnsTopic(self, "sns_topic_name" + "_" + load_balancer["name"], name=load_balancer["name"] + "_" + "topic"). This creates an Amazon SNS topic resource with the specified name. Inside a nested loop, the code iterates over the load_balancer["email_list"] list and creates an SnsTopicSubscription for each email address, setting up email subscriptions to the SNS topic for receiving notifications.&lt;br&gt;
(I'll cover the SNS topic and other SNS features in upcoming blog posts.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CloudwatchMetricAlarm class is instantiated with the line CloudwatchMetricAlarm(self, "alb-healthy-hosts" + "_" + load_balancer["name"], ...). This creates a CloudWatch metric alarm resource with the specified configuration: alarm actions, alarm description, alarm name, comparison operator, dimensions, evaluation periods, metric name, namespace, period, statistic, and threshold.&lt;br&gt;
(I'll cover CloudWatch alarms and other features in upcoming blog posts.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
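&lt;p&gt;Based on how the loop accesses its keys, the config dictionary it consumes looks roughly like the sketch below. The keys are taken from the code above; every concrete value is a placeholder, not something from an actual deployment:&lt;/p&gt;

```python
# Minimal sketch of the configuration the loop consumes.
# The keys mirror the lookups in the code above; the values are placeholders.
config = {
    "load_balancers": [
        {
            "name": "web",                          # used to build unique construct IDs
            "load_balancer_name": "web-alb",        # existing ALB looked up via DataAwsLb
            "load_balancer_tg_name": "web-alb-tg",  # existing target group via DataAwsLbTargetGroup
            "alarm_name": "web-unhealthy-hosts",    # CloudWatch alarm name
            "email_list": ["ops@example.com", "oncall@example.com"],
        },
    ],
}

# The construct IDs generated inside the loop follow this pattern:
for lb in config["load_balancers"]:
    topic_id = "sns_topic_name" + "_" + lb["name"]
    alarm_id = "alb-healthy-hosts" + "_" + lb["name"]
    print(topic_id, alarm_id)
```

&lt;p&gt;Because each construct ID embeds load_balancer["name"], every entry in the list gets its own non-colliding set of resources.&lt;/p&gt;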

&lt;p&gt;Note: &lt;strong&gt;You have to confirm each topic subscription manually by clicking the confirmation link that SNS sends to the email inbox.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To deploy the code, run:&lt;br&gt;
&lt;code&gt;cdktf deploy your_stack_name&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Executing this code creates multiple resources for each configured load balancer: an SNS topic, SNS topic subscriptions, and a CloudWatch metric alarm.&lt;/p&gt;
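&lt;p&gt;To make the alarm settings concrete, here is a small standalone sketch (plain Python, not CDKTF) of the evaluation rule the alarm above configures: with comparison_operator GreaterThanOrEqualToThreshold, threshold 1, and evaluation_periods 1, a single 60-second period in which UnHealthyHostCount reports at least one unhealthy target is enough to put the alarm into ALARM state and fire the SNS action:&lt;/p&gt;

```python
# Standalone illustration of the alarm's evaluation rule:
# GreaterThanOrEqualToThreshold with threshold=1 over 1 evaluation period.
def alarm_state(datapoints, threshold=1, evaluation_periods=1):
    """Return "ALARM" if the last `evaluation_periods` datapoints
    all breach the threshold, else "OK"."""
    recent = datapoints[-evaluation_periods:]
    breached = all(value >= threshold for value in recent)
    return "ALARM" if breached else "OK"

# One period with an unhealthy target triggers the alarm.
print(alarm_state([0, 0, 1]))  # ALARM
print(alarm_state([0, 0, 0]))  # OK
```

&lt;p&gt;Raising evaluation_periods would require several consecutive bad periods before notifying, which trades alert speed for fewer false alarms on transient health-check failures.&lt;/p&gt;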

&lt;p&gt;&lt;strong&gt;Excellent! This concludes our blog post on setting up CloudWatch metric alarms that notify you through an SNS topic whenever a load balancer target is identified as unhealthy.&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
