<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Faizan</title>
    <description>The latest articles on DEV Community by Faizan (@faizan711).</description>
    <link>https://dev.to/faizan711</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1176000%2F752d54bd-ce00-4580-b5d6-7231f953ad3d.png</url>
      <title>DEV Community: Faizan</title>
      <link>https://dev.to/faizan711</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/faizan711"/>
    <language>en</language>
    <item>
      <title>MCP: The Simple Protocol to Make AI Actually Useful</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Tue, 22 Jul 2025 05:33:14 +0000</pubDate>
      <link>https://dev.to/faizan711/mcp-the-simple-protocol-to-make-ai-actually-useful-1cgh</link>
      <guid>https://dev.to/faizan711/mcp-the-simple-protocol-to-make-ai-actually-useful-1cgh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;MCP is like creating a universal plug for AI tools. Here’s why that matters.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s be honest. AI assistants are smart, but they’re mostly stuck inside a chat window. They can write an email, but they can’t send it. They can plan a trip, but they can’t book it.&lt;/p&gt;

&lt;p&gt;To do real things, they need to connect to other apps and services — what developers call “tools.” And right now, that connection is a mess.&lt;/p&gt;

&lt;p&gt;Every time a developer wants their AI agent to use a new tool, like a weather API or a flight booking system, they have to build a custom bridge. It’s like needing a different, clunky adapter for every single device you own.&lt;/p&gt;

&lt;p&gt;This is the problem that the Model Context Protocol, or MCP, is trying to solve.&lt;/p&gt;




&lt;h2&gt;
  
  
  So, What is MCP?
&lt;/h2&gt;

&lt;p&gt;Think of it this way: Remember the days before USB? (If you are not that old, imagine it for understanding’s sake.) You had a different cable for your keyboard, your mouse, your printer… it was a tangled nightmare. Then USB came along and created one standard plug that just worked for everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCP aims to be the USB for AI.
&lt;/h2&gt;

&lt;p&gt;It’s a set of rules proposed by Anthropic (the company behind Claude) — a standard language — that lets any AI agent talk to any tool without needing a custom-built connector. It’s a simple, universal agreement on how to ask for things and get a response.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works (The Simple Version)
&lt;/h2&gt;

&lt;p&gt;Instead of a developer writing tons of “glue code,” an MCP server handles the conversation. The process is straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Handshake: An AI agent connects to an MCP server and asks, “Hey, what can you do?”&lt;/li&gt;
&lt;li&gt;The Menu: The server sends back a simple list of its available tools and what information each one needs. For example:

&lt;ul&gt;
&lt;li&gt;get_current_weather(location: string)&lt;/li&gt;
&lt;li&gt;book_flight(destination: string, date: string)&lt;/li&gt;
&lt;li&gt;add_to_calendar(event_name: string, time: string)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Request: The AI agent, now knowing the menu, can make a clear request: “I need to use the get_current_weather tool. The location is 'Kolkata'.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Result: The MCP server takes that request, runs the actual code to get the weather, and sends the answer back to the AI agent in a standard format.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI agent doesn’t need to know the messy details of how the weather API works. It just needs to know the standard way to ask. The MCP server handles the rest.&lt;/p&gt;
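
&lt;p&gt;As a rough sketch, the four steps above can be mocked up in a few lines of Python. To be clear, this is not the real MCP wire format (the actual protocol defines a richer message structure); the tool registry, function names, and the fake weather result here are invented purely for illustration:&lt;/p&gt;

```python
# Toy sketch of the handshake / menu / request / result flow described above.
# NOT the real MCP wire format; everything here is illustrative.

def get_current_weather(location: str) -> str:
    # Stand-in for a call to a real weather API.
    return f"Sunny in {location}"

# Step 2, "The Menu": the server's list of tools and the parameters each needs.
TOOLS = {
    "get_current_weather": {"params": ["location"], "fn": get_current_weather},
}

def list_tools() -> list:
    """Step 1, 'The Handshake': answer the agent's 'what can you do?'"""
    return [{"name": name, "params": spec["params"]} for name, spec in TOOLS.items()]

def call_tool(name: str, **kwargs) -> str:
    """Steps 3 and 4: take a standard-format request, run the real code."""
    return TOOLS[name]["fn"](**kwargs)

print(list_tools())
print(call_tool("get_current_weather", location="Kolkata"))
```

&lt;p&gt;The point of the sketch: the agent only ever touches &lt;code&gt;list_tools&lt;/code&gt; and &lt;code&gt;call_tool&lt;/code&gt;; the messy weather-API details stay behind the server boundary.&lt;/p&gt;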




&lt;h2&gt;
  
  
  Why Should You Care?
&lt;/h2&gt;

&lt;p&gt;This might sound like a technical detail, but its impact is huge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you’re a developer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Less grunt work. You can stop writing endless custom API wrappers. Just make your tool available via an MCP server, and any compliant AI can use it.&lt;br&gt;
Wider reach. Build a tool once, and it’s instantly compatible with an entire ecosystem of agents. You can focus on creating a great tool, not on the plumbing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you’re not a developer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;More capable AI. This is how we get AI assistants that can do things. Your AI could seamlessly check your flight status, order your lunch, and update your project management board, all without you lifting a finger.&lt;br&gt;
Things will just work. No more worrying about whether your AI assistant has the right “plugin” or “integration.” If both the agent and the tool speak MCP, they can work together.&lt;/p&gt;




&lt;h2&gt;
  
  
  Is This the Future?
&lt;/h2&gt;

&lt;p&gt;MCP is still a new idea. It needs big players — the companies building the AI models and the companies building the tools — to adopt it.&lt;/p&gt;

&lt;p&gt;But the logic is solid. Standardization is what unlocks real progress. The internet runs on standard protocols like TCP/IP. Our devices connect using standards like USB and Wi-Fi.&lt;/p&gt;

&lt;p&gt;MCP is a bet that the same thing needs to happen for AI agents. By creating a simple, open standard for communication, it could clear the path for a new wave of AI applications that are far more useful and connected to the real world. It’s an idea worth watching.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>mcp</category>
      <category>programming</category>
    </item>
    <item>
      <title>What is Secure Coding?</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Sat, 03 Aug 2024 08:44:44 +0000</pubDate>
      <link>https://dev.to/faizan711/what-is-secure-coding-nfb</link>
      <guid>https://dev.to/faizan711/what-is-secure-coding-nfb</guid>
      <description>&lt;p&gt;Imagine building a house without a sturdy foundation or reliable locks on the doors. It might look impressive from the outside, but inside, it’s vulnerable to collapse and break-ins. Secure coding is akin to constructing that solid foundation and installing those reliable locks. It ensures that your software is robust, resilient, and safe from malicious intrusions.&lt;/p&gt;

&lt;p&gt;Secure coding is the practice of writing software that protects itself against cyber threats and vulnerabilities. It involves implementing security measures during the coding phase rather than after the software is built. This proactive approach minimizes the risk of attacks, safeguards sensitive data, and maintains the integrity and availability of the system.&lt;/p&gt;

&lt;p&gt;Insecure code, on the other hand, is like leaving your house with the doors wide open. It exposes your software to numerous threats, such as data breaches, unauthorized access, and service disruptions. Without secure coding practices, your application becomes an easy target for hackers, jeopardizing both user trust and business operations. By understanding and implementing secure coding best practices, you can harden your software against these risks and ensure a safer digital environment.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the most common security threats, the principles of secure coding, and the best practices that will help you make your software more robust and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Security Threats
&lt;/h2&gt;

&lt;h3&gt;
  
  
  SQL Injection
&lt;/h3&gt;

&lt;p&gt;SQL Injection is like giving someone an invitation to tamper with the blueprints of your house. When a website or application accepts user input and incorporates it directly into an SQL query without proper validation, an attacker can insert malicious SQL code. This code can manipulate the database, allowing the attacker to access, modify, or delete data. For example, instead of entering a username, an attacker might input code that tricks the database into revealing all user passwords.&lt;/p&gt;
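
&lt;p&gt;A minimal sketch of the difference between vulnerable string-building and a parameterized query, using Python’s built-in &lt;code&gt;sqlite3&lt;/code&gt; module (the table and data are made up for illustration):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# VULNERABLE: the payload becomes part of the SQL itself, so the
# always-true condition matches every row in the table.
bad = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: with a ? placeholder, the driver treats the input as a plain
# value, never as SQL, so the payload matches nothing.
good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(bad)   # the injection leaks rows
print(good)  # empty: no user is literally named the payload string
```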

&lt;h3&gt;
  
  
  Cross-site Scripting (XSS)
&lt;/h3&gt;

&lt;p&gt;It is like someone tricking you into running dangerous commands in your own home, like convincing you to open an email that infects your computer. XSS occurs when an attacker injects malicious scripts into web pages viewed by other users. These scripts can hijack user sessions, deface websites, or redirect users to malicious sites. For instance, an attacker could post a harmful script in a comment section, which then runs in the browser of anyone who views that comment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-Site Request Forgery (CSRF)
&lt;/h3&gt;

&lt;p&gt;Cross-Site Request Forgery (CSRF) is like someone tricking you into signing a document without knowing its contents. CSRF attacks occur when an attacker tricks a user into performing actions they didn’t intend to do, like changing their account email or making a transaction. This is often done by exploiting the user’s authenticated session with a website. For example, if you’re logged into your bank account, an attacker could send you a link that, when clicked, transfers money to their account without your knowledge.&lt;/p&gt;
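
&lt;p&gt;The standard defense is a per-session anti-CSRF token that the attacker’s forged request cannot know. A minimal sketch in Python, with illustrative function names:&lt;/p&gt;

```python
import hmac
import secrets

def issue_token() -> str:
    """The server embeds this token in the legitimate form it serves."""
    return secrets.token_hex(16)

def is_valid_request(session_token: str, submitted_token: str) -> bool:
    """Reject any state-changing request whose token does not match.
    compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(session_token, submitted_token)

token = issue_token()
assert is_valid_request(token, token)          # real form submission
assert not is_valid_request(token, "guessed")  # forged cross-site request
```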

&lt;h3&gt;
  
  
  Remote Code Execution (RCE)
&lt;/h3&gt;

&lt;p&gt;Remote Code Execution (RCE) is comparable to letting a stranger remotely control your home appliances. RCE happens when an attacker exploits a vulnerability to run arbitrary code on a target system. This can lead to complete control over the affected system, allowing the attacker to steal data, install malware, or disrupt operations. For example, if an application allows user-uploaded files without proper checks, an attacker could upload and execute a malicious file, taking over the server hosting the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Principles of Secure Coding
&lt;/h2&gt;

&lt;p&gt;Though there are many, here are the three most important ones, which you should always try to follow:&lt;/p&gt;

&lt;h3&gt;
  
  
  Security by Design
&lt;/h3&gt;

&lt;p&gt;Security by Design is like building a house with security features integrated from the ground up. Instead of adding locks and alarms after construction, security measures are considered and implemented during the initial design and development stages. This approach ensures that security is an inherent part of the system, making it more robust and less prone to vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Least Privilege Principle
&lt;/h3&gt;

&lt;p&gt;The Least Privilege Principle is similar to giving each person in your house access only to the rooms they need. In cybersecurity, this means granting users and systems the minimal level of access required to perform their functions. By restricting permissions, you limit the potential damage in case of a security breach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defense in Depth
&lt;/h3&gt;

&lt;p&gt;Defense in Depth is akin to having multiple layers of security around your house: a fence, security cameras, locked doors, and an alarm system. This strategy involves implementing several layers of defense mechanisms to protect against threats. If one layer is compromised, the other layers continue to provide protection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Secure Coding
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Input Validation and Sanitization
&lt;/h3&gt;

&lt;p&gt;Input validation and sanitization are like filtering and cleaning water before it enters your home. Validation ensures that the data is in the expected format and type, while sanitization removes any harmful elements. These practices prevent malicious data from exploiting vulnerabilities in your application. For example, you might validate email addresses to ensure they follow the correct format, and sanitize user inputs to strip out SQL injection attempts.&lt;/p&gt;
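
&lt;p&gt;A small sketch of both practices using only the Python standard library (the regex is a simplified illustration, not a full RFC-compliant email validator, and the allow-list is hypothetical):&lt;/p&gt;

```python
import re

# Simplified email shape check; real validators are far more permissive.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def validate_email(value: str) -> bool:
    """Validation: is the data in the expected shape at all?"""
    return bool(EMAIL_RE.match(value))

def sanitize_name(value: str) -> str:
    """Sanitization: strip everything outside an allow-list of characters."""
    return re.sub(r"[^A-Za-z0-9 _]", "", value).strip()

assert validate_email("user@example.com")
assert not validate_email("not-an-email")
print(sanitize_name("Robert'); DROP TABLE students;--"))
```

&lt;p&gt;Note the design choice: sanitization uses an allow-list (keep only known-safe characters) rather than a deny-list of known-bad ones, which is easy to bypass.&lt;/p&gt;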

&lt;h3&gt;
  
  
  Techniques and Tools
&lt;/h3&gt;

&lt;p&gt;Employing the right techniques and tools is akin to using advanced tools and materials for constructing a safe building. Techniques such as code reviews, static code analysis, and automated testing help identify and mitigate security vulnerabilities. Tools like OWASP ZAP for web application security testing and SonarQube for code quality analysis are invaluable in ensuring secure coding practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication and Authorization
&lt;/h3&gt;

&lt;p&gt;Authentication and authorization are like verifying a guest’s identity before granting them access to specific areas of your house. Authentication confirms that users are who they claim to be, while authorization determines what resources they can access. Secure methods include using multi-factor authentication (MFA) and role-based access control (RBAC) to ensure robust security. For instance, requiring a password and a one-time code sent to a user’s phone for logging in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error Handling and Logging
&lt;/h3&gt;

&lt;p&gt;Proper error handling and secure logging are like having a detailed incident report without revealing sensitive information. Error handling ensures that the application gracefully manages unexpected issues without crashing or exposing sensitive data. Secure logging involves recording activities for audit purposes while protecting sensitive information.&lt;/p&gt;
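
&lt;p&gt;A brief sketch of both ideas in Python, with a hypothetical payment function: the caller gets a generic message, while the server log records the incident with the card number masked:&lt;/p&gt;

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

def charge_card(card_number: str, amount: int) -> str:
    masked = "****" + card_number[-4:]  # never log the full card number
    try:
        if not amount > 0:
            raise ValueError("amount must be positive")
        # ... a real payment-gateway call would go here ...
        log.info("charged %s to card %s", amount, masked)
        return "ok"
    except ValueError:
        # Generic message for the caller; details stay in the server log.
        log.warning("charge rejected for card %s", masked)
        return "payment failed"

print(charge_card("4111111111111111", 0))
```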

&lt;h3&gt;
  
  
  Data Encryption
&lt;/h3&gt;

&lt;p&gt;Data encryption is like storing valuables in a safe. It converts data into a coded form that is unreadable without the decryption key, protecting it from unauthorized access. Implementing encryption for data at rest and in transit ensures that sensitive information remains confidential and secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependency Management
&lt;/h3&gt;

&lt;p&gt;Managing dependencies is like regularly maintaining and updating your home’s safety features. Keeping libraries and dependencies up-to-date ensures that your application is protected against known vulnerabilities. Regularly reviewing and updating these components, and using tools like Dependabot or Snyk, helps in identifying and mitigating risks associated with outdated or insecure dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Adopting secure coding practices is crucial for creating reliable and safe software. By validating and cleaning user inputs, using the right tools, securing authentication and authorization processes, handling errors properly, encrypting data, and keeping dependencies up-to-date, you protect your applications from various threats. As technology advances, staying proactive about security helps ensure your software remains secure and trustworthy in an ever-changing digital world.&lt;/p&gt;




&lt;p&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>coding</category>
      <category>webdev</category>
      <category>security</category>
      <category>development</category>
    </item>
    <item>
      <title>GitHub Actions Explained</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Wed, 24 Jul 2024 12:31:18 +0000</pubDate>
      <link>https://dev.to/faizan711/github-actions-explained-3hkg</link>
      <guid>https://dev.to/faizan711/github-actions-explained-3hkg</guid>
      <description>&lt;p&gt;In the rapidly evolving landscape of software development, the ability to automate testing and deploy applications efficiently stands as a cornerstone of success. GitHub Actions, a powerful feature within the GitHub platform, has emerged as a pivotal tool in this regard. By enabling developers to create custom workflows for continuous integration (CI) and continuous deployment (CD), GitHub Actions simplifies the complexities of the CI/CD pipelines, fostering a culture of continuous improvement. Leveraging GitHub workflows, automation, and the GitHub marketplace’s vast array of pre-built and custom actions, developers can streamline their development processes, ensuring that integration and deployment are as seamless as possible.&lt;/p&gt;

&lt;p&gt;This comprehensive guide aims to unfold the multifaceted capabilities of GitHub Actions, providing readers with a detailed roadmap from the basics to more advanced features.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Need to Know About GitHub Actions
&lt;/h2&gt;

&lt;p&gt;GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that run tests whenever you push a change to your repository, or that deploy merged pull requests to production. GitHub Actions can automate your workflow, allowing you to build, test, and deploy your code right from GitHub, saving you a lot of time and effort.&lt;/p&gt;

&lt;h4&gt;
  
  
  Automate Workflows
&lt;/h4&gt;

&lt;p&gt;Automate, customize and execute your software development workflows right in your repository with GitHub Actions. You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For example, you can use GitHub Actions to automatically build and test your code every time you push a commit to GitHub, ensuring that your code is always up-to-date and working as expected.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-customizable-and-integrative" rel="noopener noreferrer"&gt;&lt;/a&gt;Customizable and Integrative
&lt;/h4&gt;

&lt;p&gt;GitHub Actions are highly customizable. You can create your actions or use actions from the GitHub Marketplace to build workflows that meet your specific needs. GitHub Actions integrates seamlessly with other GitHub features, such as pull requests and issues, making it easy to manage your entire workflow in one place.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-community-and-affordability" rel="noopener noreferrer"&gt;&lt;/a&gt;Community and Affordability
&lt;/h4&gt;

&lt;p&gt;GitHub Actions has a large and active community. You can find many pre-built actions in the GitHub Marketplace, and you can also share your actions with the community. GitHub Actions is free for public repositories, and you get &lt;a href="https://dev.to/n3wt0n/5-top-reasons-to-use-github-actions-for-your-next-project-cga"&gt;2,000 free minutes of build time per month&lt;/a&gt; for private repositories, making it an affordable option for developers of all sizes.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-getting-started" rel="noopener noreferrer"&gt;&lt;/a&gt;Getting Started
&lt;/h4&gt;

&lt;p&gt;GitHub provides starter workflows for a variety of languages and tooling. For instance, you can publish Node.js packages to a registry as part of your continuous integration (CI) workflow, or create a continuous integration (CI) workflow to build and test your PowerShell project. To get started, you can use the user interface of &lt;a href="http://github.com/" rel="noopener noreferrer"&gt;GitHub.com&lt;/a&gt; to add a workflow that demonstrates some of the essential features of GitHub Actions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-building-your-first-workflow" rel="noopener noreferrer"&gt;&lt;/a&gt;Building Your First Workflow
&lt;/h3&gt;

&lt;p&gt;To get started with GitHub Actions, you'll need to set up the environment, create a basic YAML file, and test the workflow. Here's a step-by-step guide:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-setting-up-the-environment" rel="noopener noreferrer"&gt;&lt;/a&gt;Setting Up the Environment
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a new directory named &lt;code&gt;.github/workflows&lt;/code&gt; within your repository. This is where GitHub will look for workflow files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inside the &lt;code&gt;.github/workflows&lt;/code&gt; directory, create a new YAML file with an &lt;code&gt;.yml&lt;/code&gt; or &lt;code&gt;.yaml&lt;/code&gt; extension. You can give it any name you prefer, such as &lt;code&gt;github-actions-demo.yml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-creating-a-basic-yaml-file" rel="noopener noreferrer"&gt;&lt;/a&gt;Creating a Basic YAML File
&lt;/h4&gt;

&lt;p&gt;3. Copy the following YAML code into your newly created file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on: [push]

jobs:
  build:
    name: Hello world
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Write a multi-line message
        run: |
          echo This demo file shows a
          echo very basic and easy-to-understand workflow

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This basic workflow is triggered whenever code is pushed to the repository (&lt;code&gt;on: [push]&lt;/code&gt;). It defines a single job &lt;code&gt;build&lt;/code&gt; that runs on the latest Ubuntu runner (&lt;code&gt;runs-on: ubuntu-latest&lt;/code&gt;). The job consists of two steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The first step checks out the repository code using the &lt;code&gt;actions/checkout@v2&lt;/code&gt; action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second step runs a multi-line Bash script that prints a message to the console.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4. Commit the YAML file to your repository's default branch.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-testing-the-workflow" rel="noopener noreferrer"&gt;&lt;/a&gt;Testing the Workflow
&lt;/h4&gt;

&lt;p&gt;5. After committing the workflow file, navigate to the "Actions" tab in your repository on &lt;a href="http://github.com/" rel="noopener noreferrer"&gt;GitHub.com&lt;/a&gt;. You should see your new workflow listed there.&lt;/p&gt;

&lt;p&gt;6. Click on the workflow to view its execution details. The workflow should run automatically after your commit if everything is set up correctly.&lt;/p&gt;

&lt;p&gt;7. Expand the job run to see the output of each step. You should see the message you defined in the &lt;code&gt;run&lt;/code&gt; step.&lt;/p&gt;

&lt;p&gt;By following these steps, you've successfully created and tested your first GitHub Actions workflow. You can now build upon this foundation to automate more complex tasks, such as running tests, building and deploying applications, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-detailed-breakdown-of-components" rel="noopener noreferrer"&gt;&lt;/a&gt;Detailed Breakdown of Components
&lt;/h3&gt;

&lt;p&gt;A workflow is a configurable automated process that will run one or more jobs. Workflows are defined by &lt;a href="https://docs.github.com/actions/learn-github-actions/understanding-github-actions" rel="noopener noreferrer"&gt;a YAML file&lt;/a&gt; checked into your repository and will run when triggered by an event in your repository, or they can be triggered manually, or at a defined schedule.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-workflows-jobs-and-steps" rel="noopener noreferrer"&gt;&lt;/a&gt;Workflows, Jobs, and Steps
&lt;/h4&gt;

&lt;p&gt;A workflow must contain one or more events that will trigger it, one or more jobs that will execute on a runner machine and run a series of steps, and each step can either run a script or an action. A job is &lt;a href="https://docs.github.com/actions/learn-github-actions/understanding-github-actions" rel="noopener noreferrer"&gt;a set of steps&lt;/a&gt; in a workflow executed on the same runner, where each step is either a shell script or an action. An action is a custom application that performs a complex but frequently repeated task, helping reduce repetitive code in workflow files.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-events-and-triggers" rel="noopener noreferrer"&gt;&lt;/a&gt;Events and Triggers
&lt;/h4&gt;

&lt;p&gt;An event is a specific activity that triggers a workflow run, such as creating a pull request, opening an issue, or pushing a commit. Workflows can also be triggered on a schedule, by posting to a REST API, or manually. There are various types of events, including repository events like creating or modifying issues, pull requests, releases, discussions, and more.&lt;/p&gt;

&lt;p&gt;GitHub provides filters to control when a workflow should run based on activity types, branches, file paths, and other criteria. For example, the &lt;code&gt;branches&lt;/code&gt; filter causes a workflow to run only when a push to a specific branch occurs, and the &lt;code&gt;paths&lt;/code&gt; filter runs a workflow only when matching file paths change.&lt;/p&gt;
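
&lt;p&gt;As an illustrative fragment (the branch, path, and schedule values here are made up), these triggers and filters look like this in a workflow file:&lt;/p&gt;

```yaml
on:
  push:
    branches:
      - main            # run only on pushes to main
    paths:
      - "src/**"        # and only when files under src/ change
  schedule:
    - cron: "0 6 * * 1" # also run every Monday at 06:00 UTC
  workflow_dispatch:    # and allow manual runs from the Actions tab
```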

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-actions-and-runners" rel="noopener noreferrer"&gt;&lt;/a&gt;Actions and Runners
&lt;/h4&gt;

&lt;p&gt;A runner is a server that runs your workflows when triggered, with each runner executing a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS runners, where each workflow run executes in a fresh, newly-provisioned virtual machine.&lt;/p&gt;

&lt;p&gt;GitHub-hosted runners are available for public and private repositories, with &lt;a href="https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners" rel="noopener noreferrer"&gt;different specifications and pricing models&lt;/a&gt;. For private repositories, jobs use the account's allotment of free minutes and are then charged per minute. GitHub also offers larger, managed virtual machines with more resources, static IP addresses, Azure private networking, and advanced features like GPU and ARM-powered runners.&lt;/p&gt;

&lt;p&gt;The communication between GitHub and self-hosted runners works by the runner connecting to GitHub (outbound) and keeping the connection open, allowing events to be pushed to the runner through the established connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-advanced-techniques" rel="noopener noreferrer"&gt;&lt;/a&gt;Advanced Techniques
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-integrating-with-third-party-tools" rel="noopener noreferrer"&gt;&lt;/a&gt;Integrating with Third-Party Tools
&lt;/h4&gt;

&lt;p&gt;When a third-party application requests access to your GitHub data, it will display the developer's contact information and the specific data being requested. Since the application is developed by a third party, GitHub does not have information on how the requested data will be used, so it is important to contact the developer if you have any concerns.&lt;/p&gt;

&lt;p&gt;Applications can have read or write access to your GitHub data. Read access allows an application to view your data, while write access permits it to modify your data. Scopes are named groups of permissions that an application can request to access public and non-public data. For example, if an app requests the &lt;code&gt;user:email&lt;/code&gt; scope, it will have read-only access to your private email addresses. However, currently, you cannot scope source code access to read-only.&lt;/p&gt;

&lt;p&gt;It is recommended to regularly review your authorized integrations and remove any unused applications or tokens to minimize potential risks. Additionally, you should use credentials with minimal required permissions and be mindful that any user with write access to your repository has read access to all configured secrets.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-managing-workflow-secrets" rel="noopener noreferrer"&gt;&lt;/a&gt;Managing Workflow Secrets
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions" rel="noopener noreferrer"&gt;Sensitive values should never be stored as plaintext&lt;/a&gt; in workflow files but rather as secrets. Secrets are encrypted before reaching GitHub, helping minimize risks related to accidental logging. GitHub uses a mechanism to redact secrets from run logs, but this redaction is not guaranteed due to the various transformations secrets can undergo.&lt;/p&gt;

&lt;p&gt;To ensure proper secret redaction and limit associated risks, follow these best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Avoid structured data as secrets: Structured data like JSON or XML can cause redaction failures since redaction relies on finding exact matches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Register all used secrets: If a secret is used to generate another sensitive value, register that value as a secret to ensure redaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit secret handling: Review the source code and actions used in your workflows to ensure secrets are not sent to unintended hosts or printed to logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use minimally scoped credentials: Ensure that the credentials used in workflows have the least required privileges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Audit and rotate secrets: Periodically review and remove unnecessary secrets, and rotate secrets to reduce the window of potential compromise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consider requiring reviews for access: You can use required reviewers to control access to environment secrets.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions" rel="noopener noreferrer"&gt;GitHub-hosted runners take measures to mitigate security risks&lt;/a&gt;, such as providing software bills of materials (SBOMs) and blocking access to malicious sites. However, self-hosted runners can be persistently compromised by untrusted code and should be used cautiously, especially for public repositories. To improve security, you can use the REST API to create ephemeral, just-in-time (JIT) runners that perform at most one job before being automatically removed.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://sololearner.hashnode.dev/github-actions-explained#heading-scaling-with-self-hosted-runners" rel="noopener noreferrer"&gt;&lt;/a&gt;Scaling with Self-Hosted Runners
&lt;/h4&gt;

&lt;p&gt;You can &lt;a href="https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/autoscaling-with-self-hosted-runners" rel="noopener noreferrer"&gt;automatically scale the number of self-hosted runners&lt;/a&gt; in your environment based on webhook events with a particular label. However, GitHub recommends implementing autoscaling with ephemeral self-hosted runners, as persistent runners cannot guarantee that jobs are not assigned during the shutdown.&lt;/p&gt;

&lt;p&gt;To add an ephemeral runner, include the &lt;code&gt;--ephemeral&lt;/code&gt; parameter when registering the runner with &lt;code&gt;config.sh&lt;/code&gt;. The runner is automatically de-registered after processing one job, and you can then automate the process of wiping and recycling the machine.&lt;/p&gt;

&lt;p&gt;You can create an autoscaling environment by using payloads received from the &lt;code&gt;workflow_job&lt;/code&gt; webhook, which contains information about the stages of a workflow job's lifecycle. You can register and delete repository, organization, and enterprise self-hosted runners using the API and appropriate authentication methods.&lt;/p&gt;
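To make that flow concrete, here is a minimal sketch of how a webhook receiver might react to `workflow_job` events. The helper `decideScaling` and the runner label are my own choices, not part of any official SDK; the payload fields (`action`, `workflow_job.labels`) follow GitHub's webhook event schema.

```javascript
// Minimal sketch of an autoscaling decision based on workflow_job
// webhook payloads. decideScaling is a hypothetical helper; the
// payload shape follows GitHub's workflow_job webhook schema.
function decideScaling(payload, label = 'self-hosted') {
  const job = payload.workflow_job;
  if (!job || !job.labels.includes(label)) return null;     // not targeting our runners
  if (payload.action === 'queued') return 'scale-up';       // a job is waiting for a runner
  if (payload.action === 'completed') return 'scale-down';  // an ephemeral runner just finished
  return null; // 'in_progress' and other actions need no change
}
```

On a `scale-up` decision you would then register a fresh ephemeral runner through the REST API; on `scale-down` you would tear the machine down.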

&lt;p&gt;When adding self-hosted runners, consider the level of ownership and management. If a centralized team will own the runners, add them at the highest mutual organization or enterprise level. If each team will manage its runners, add them at the highest level of team ownership, such as the organization level.&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;From navigating the basics and crafting your first workflow to diving into advanced techniques and security best practices, this guide has aimed to equip you with the knowledge to automate, integrate, and optimize your software development processes. GitHub Actions is highly adaptable: it can streamline workflows, foster continuous improvement, and enable a culture of innovation through automation, seamless integration, and community-driven enhancements.&lt;/p&gt;

&lt;h3&gt;FAQs&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. What exactly are GitHub Actions?&lt;/strong&gt;&lt;br&gt;
GitHub Actions is a CI/CD platform that automates your software's build, test, and deployment processes. It enables you to create workflows that automatically build and test every pull request to your repository or deploy merged pull requests to production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How do GitHub Actions differ from other CI/CD tools?&lt;/strong&gt;&lt;br&gt;
While both GitHub Actions and other CI/CD tools like GitLab CI/CD automate software deployment processes, they differ in usability and configuration. GitLab CI/CD has a visual editor for simple workflows and requires YAML for more complex setups. GitHub Actions uses YAML extensively, supported by a large community that provides prebuilt workflows and tools to ease configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. What types of actions can you create with GitHub Actions?&lt;/strong&gt;&lt;br&gt;
In GitHub Actions, you can develop Docker container actions, JavaScript actions, and composite actions. Each action type requires a metadata file that specifies the inputs, outputs, and main entry point of the action. The metadata file must be named &lt;code&gt;action.yml&lt;/code&gt; (or &lt;code&gt;action.yaml&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. What is the relationship between GitHub Actions and workflows?&lt;/strong&gt;&lt;br&gt;
GitHub Actions is a tool that automates workflows within the GitHub platform, where you can also manage code and collaborate on pull requests and issues. A workflow in GitHub Actions refers to an automated sequence of operations set up in your GitHub repository to handle tasks like building, testing, and deploying applications.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I'm always looking to improve my writing and value any suggestions you may have. If you're interested in working together or have any further questions, please don't hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>webdev</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>5 Common Myths About Artificial Intelligence</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Thu, 18 Jul 2024 14:42:06 +0000</pubDate>
      <link>https://dev.to/faizan711/5-common-myths-about-artificial-intelligence-28a9</link>
      <guid>https://dev.to/faizan711/5-common-myths-about-artificial-intelligence-28a9</guid>
      <description>&lt;p&gt;Artificial intelligence (AI) has quickly gained traction in a variety of industries, promising transformative advances and novel solutions. However, amid the excitement and potential, a number of myths and misconceptions about AI have surfaced. These myths can lead to reluctance to implement AI technologies in businesses or unrealistic expectations of AI applications. In this article, we will debunk the five most common myths about artificial intelligence, separating fact from fiction and shedding light on the truth behind AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth 1: AI Will Take Over the World
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzirxn7x8pf5kowv740v8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzirxn7x8pf5kowv740v8.jpg" alt="illustration" width="700" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea that artificial intelligence (AI) will take over the world and create a dystopian future in which intelligent machines rule over humans is one of the most prevalent fears related to AI. Books and films based on science fiction have contributed to this fear. In actuality, though, AI is incapable of independent thought or behaviour. AI systems work within the constraints imposed by their programming and data inputs because they are designed and developed by humans.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They are not sentient, and they are not capable of thinking or acting beyond the parameters of their programming.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Artificial intelligence (AI) is a tool that can be used to improve many facets of our lives and increase human capabilities. It is employed to solve challenging issues, make wise decisions, and optimise procedures. Even though AI has a lot of potential to change society, humans are still in charge of it at all times. AI must be developed and governed responsibly to ensure that it continues to be a useful tool and does not endanger humankind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth 2: AI Will Take All of Our Jobs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5mmeuz0f6d4bhuml6qe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5mmeuz0f6d4bhuml6qe.jpg" alt="Illustration" width="318" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another widespread misconception regarding AI is that it will cause widespread unemployment because it will replace human labour in all industries. Although artificial intelligence (AI) can automate some jobs and procedures, it is not meant to completely replace people. AI technologies, on the other hand, are meant to improve efficiency, supplement human capabilities, and manage tedious or repetitive tasks. Artificial Intelligence frees humans from repetitive tasks so they can concentrate on higher order cognitive functions, creative expression, and difficult problem solving.&lt;/p&gt;

&lt;p&gt;History has shown that technological advancements, including AI, disrupt the employment landscape by making some jobs obsolete and creating new ones. AI will certainly displace certain occupations as its capabilities improve, but it also creates new job opportunities. For example, the development and maintenance of AI systems require skilled professionals such as data scientists and machine learning specialists. Moreover, AI can enhance job productivity and create new roles that we may not even envision yet. Therefore, the long-term impact of AI on the job market is likely to be positive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth 3: AI Is Sentient or Conscious
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcn897elveq73u4vedfq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcn897elveq73u4vedfq.gif" alt="Illustration" width="480" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI is often portrayed in popular culture as sentient beings with consciousness, emotions, and the ability to think and reason like humans. However, this is far from the truth. AI systems, no matter how advanced, are not capable of feeling emotions or having subjective experiences. They are not sentient or conscious entities; they are simply tools that can process information and make decisions based on algorithms and data inputs.&lt;/p&gt;

&lt;p&gt;AI systems, particularly those based on machine learning algorithms, learn patterns from past data and make predictions without being explicitly programmed. They excel at specialized tasks such as natural language processing or image recognition, but their abilities are limited to the specific tasks they are trained for. They lack comprehensive comprehension or self-awareness like humans possess. It is important to understand this distinction to avoid overestimating the capabilities of AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth 4: AI Is a Threat to Humanity
&lt;/h2&gt;

&lt;p&gt;The idea of AI as a threat to humanity has been a prevalent theme in science fiction stories and movies. However, in reality, AI is not inherently dangerous or a threat to humanity. The risks associated with AI come from how it is used and the decisions made by those who control it, rather than from the technology itself. Responsible development and ethical use of AI systems are essential to mitigate any potential risks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fner6c702jdrob7z27b2j.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fner6c702jdrob7z27b2j.gif" alt="Illustration" width="500" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI can be used for both positive and negative purposes, depending on the intentions of its creators and users. It is crucial to ensure that AI is developed and used in a manner that aligns with ethical standards and societal values. Robust regulations and guidelines are necessary to prevent misuse of AI technology and to ensure that it is used for the greater good of humanity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Myth 5: AI Is Magic
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp6zvh902tnyi8atswnd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp6zvh902tnyi8atswnd.gif" alt="Illustration" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI is often perceived as something mysterious and magical, capable of performing tasks that seem beyond human comprehension. However, AI is not magic. It is based on complex algorithms and data processing techniques that can be understood and explained. While the inner workings of advanced AI systems, such as deep learning algorithms, may be complex, they are ultimately based on mathematical principles and statistical models.&lt;/p&gt;

&lt;p&gt;AI is a tool that can be harnessed by humans to solve problems and make informed decisions. It is not a mystical force that operates outside the realm of understanding. By demystifying AI and promoting a deeper understanding of its principles and limitations, we can harness its full potential and use it to drive positive change in various domains.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of AI: Separating Fact from Fiction
&lt;/h2&gt;

&lt;p&gt;As AI continues to evolve and shape our world, it is crucial to separate fact from fiction and dispel the myths surrounding this transformative technology. By understanding the true capabilities and limitations of AI, we can make informed decisions about its implementation and development. AI is not a threat to humanity, but a powerful tool that, when used responsibly, has the potential to enhance our lives, improve productivity, and drive innovation in diverse fields.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>myths</category>
      <category>debunk</category>
    </item>
    <item>
      <title>Sending Emails in Node.js Using Nodemailer</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Fri, 12 Jul 2024 12:01:43 +0000</pubDate>
      <link>https://dev.to/faizan711/sending-emails-in-nodejs-using-nodemailer-474</link>
      <guid>https://dev.to/faizan711/sending-emails-in-nodejs-using-nodemailer-474</guid>
      <description>&lt;p&gt;In Today’s Article, I will explain how to send e-mails from your Node.js server with the use of a library named &lt;strong&gt;“Nodemailer”&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before we begin, note that this article assumes a foundational understanding of creating APIs in a Node.js server, with or without Express.js. If you’re unfamiliar with these concepts, I recommend first exploring resources on creating an Express.js server and APIs; you’ll find ample tutorials, YouTube videos, and articles online to build a strong foundation. Once you’re comfortable with the basics, you can follow along with this article. For those who already know these things, let’s dive right into the heart of the matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Set up your Express.js Server
&lt;/h2&gt;

&lt;p&gt;Open your project directory and run the commands below to set up your Node + Express server. You can use either the npm or yarn package manager; I am using npm here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will initialize your Node.js project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i express cors

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command will install Express and Cors for your server.&lt;/p&gt;

&lt;p&gt;Now create an &lt;em&gt;index.js&lt;/em&gt; file which will be your main file and paste the below code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const cors = require('cors');

const app = express();
const port = 3000;

// Use the cors middleware to enable CORS for all routes
app.use(cors());

// Use the express.json() middleware to parse JSON data from requests
app.use(express.json());

// Define a sample route
app.get('/', (req, res) =&amp;gt; {
  res.send('Hello, Express server is up and running!');
});

// Start the server
app.listen(port, () =&amp;gt; {
  console.log(`Server is running on port ${port}`);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node index.js

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will start your server on port 3000, you can check if your server is running by going to &lt;strong&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing and Using Nodemailer
&lt;/h2&gt;

&lt;p&gt;Now that you have your Express server up and running, it's time to install Nodemailer, which we will use to send emails. Run the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install nodemailer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will install Nodemailer as a dependency in your Node.js project. Now require it in your &lt;em&gt;index.js&lt;/em&gt;, just below the express require, as shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const nodemailer = require('nodemailer')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Testing with a Test Account
&lt;/h3&gt;

&lt;p&gt;Now that you have installed Nodemailer, we will first try sending emails with a test account before using our Gmail account. For that, create an API endpoint in your server and define a few things:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.post('/testingroute', async (req, res) =&amp;gt; {

});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Inside the route, you have to define two things: a &lt;strong&gt;transporter&lt;/strong&gt; and a &lt;strong&gt;test account&lt;/strong&gt;, for which Nodemailer provides functions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This will create a test account to send email
let testAccount = await nodemailer.createTestAccount();

// This is a transporter, it is required to send emails in nodemailer
let transporter = nodemailer.createTransport({
        host: "smtp.ethereal.email",
        port: 587,
        secure: false,
        auth: {
          user: testAccount.user,
          pass: testAccount.pass,
  }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now you have to define the message that will go out as your email. I have given a sample below; feel free to edit it as you like.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let message = {
   from: '"Fred Foo 👻" &amp;lt;foo@example.com&amp;gt;', // sender address
   to: "bar@example.com", // list of receivers
   subject: "Hello ✔", // Subject line
   text: "Hello world?", // plain text body
   html: "&amp;lt;b&amp;gt;Hello world?&amp;lt;/b&amp;gt;",
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now the only thing left is to send the email; this is where the &lt;strong&gt;transporter&lt;/strong&gt; is used. The full API endpoint will look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.post('/testingroute', async (req,res) =&amp;gt; {

    let testAccount = await nodemailer.createTestAccount();

    const transporter = nodemailer.createTransport({
        host: "smtp.ethereal.email",
        port: 587,
        secure: false,
        auth: {
          user: testAccount.user,
          pass: testAccount.pass,
        }
      });

    let message = {
        from: '"Fred Foo 👻" &amp;lt;foo@example.com&amp;gt;', // sender address
        to: "bar@example.com", // list of receivers
        subject: "Hello ✔", // Subject line
        text: "Hello world?", // plain text body
        html: "&amp;lt;b&amp;gt;Hello world?&amp;lt;/b&amp;gt;",
    };

    transporter.sendMail(message).then((info)=&amp;gt; {
        return res.status(201)
        .json({
            message: "you should receive an email!",
            info: info.messageId,
            preview: nodemailer.getTestMessageUrl(info)
        });

    }).catch( error =&amp;gt; {
        return res.status(500).json({error});
    })

});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To test this API endpoint, I am using Postman, but you can use whatever you want. Below is an image for your reference of how to test it using Postman.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbbxc3bfsnyvfm4pbxx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbbxc3bfsnyvfm4pbxx8.png" alt="illustration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to the preview URL returned in your API response, as in the image above, and you will be able to see the mail received by the test account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlkm0cqh8dxbvbhgfqoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlkm0cqh8dxbvbhgfqoe.png" alt="illustration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have seen how Nodemailer works and how to test it using a test account, it's time to plug in our Gmail account and send real emails to users from our Node.js server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sending Emails using Gmail
&lt;/h3&gt;

&lt;p&gt;For sending emails using Gmail, we have to set up a new route; let's call it &lt;code&gt;signup&lt;/code&gt;. Here I will use a library called &lt;strong&gt;Mailgen&lt;/strong&gt; to generate professional-looking emails. You can install it using the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install mailgen

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and require it as we did for Nodemailer,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Mailgen = require('mailgen');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now set up your new &lt;code&gt;signup&lt;/code&gt; route:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.post('/signup', async(req,res) =&amp;gt; {

})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here you have to define a few things before you can send emails, so read this part carefully:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let config = {
    service: 'gmail',
    auth : {
        user: '', //please put a gmail id
        pass: '' //please create an app password for gmail id and put here
    }
}

let transporter = nodemailer.createTransport(config);

let MailGenerator = new Mailgen({
    theme: "default",
    product: {
        name: 'Mailgen',
        link: 'https://mailgen.js/'
    }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;config&lt;/code&gt; is where you put your Gmail address and password. Note that Gmail generally no longer accepts your regular account password for SMTP, so go to your Google account settings, generate an app password, and use that instead. Check out this article to do so: &lt;a href="https://support.google.com/mail/answer/185833?hl=en" rel="noopener noreferrer"&gt;https://support.google.com/mail/answer/185833?hl=en&lt;/a&gt;&lt;/p&gt;
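To avoid hard-coding credentials in your source, a common pattern is to read them from environment variables. This is a sketch; the variable names GMAIL_USER and GMAIL_APP_PASSWORD are my own choices, not anything Nodemailer requires.

```javascript
// Sketch: build the Nodemailer config from environment variables instead
// of hard-coding credentials. GMAIL_USER and GMAIL_APP_PASSWORD are
// hypothetical names; set them in your shell or a .env file.
const config = {
  service: 'gmail',
  auth: {
    user: process.env.GMAIL_USER,          // your Gmail address
    pass: process.env.GMAIL_APP_PASSWORD,  // the app password you generated
  },
};
```

You would then pass this object to `nodemailer.createTransport(config)` exactly as in the snippet above.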

&lt;p&gt;The &lt;code&gt;transporter&lt;/code&gt; is the same as in testing, except that here it uses Gmail's SMTP servers.&lt;/p&gt;

&lt;p&gt;The new thing is &lt;code&gt;MailGenerator&lt;/code&gt;: Mailgen is a library for generating professional-looking email bodies. You can learn more about it &lt;a href="https://www.npmjs.com/package/mailgen" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now create a template response for the email to be sent to the user and pass it to &lt;code&gt;MailGenerator&lt;/code&gt;; the rest is the same as in the testing route.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let response = {
        body: {
            name : "Daily Tuition",
            intro: "Your bill has arrived!",
            table : {
                data : [
                    {
                        item : "Nodemailer Stack Book",
                        description: "A Backend application",
                        price : "$10.99",
                    }
                ]
            },
            outro: "Looking forward to do more business"
        }
    }

let mail = MailGenerator.generate(response)

let message = {
        from: '', // put your Gmail address here
        to: 'yourmail@gmail.com', // give an email id
        subject: "Place Order",
        html: mail
}

transporter.sendMail(message).then(() =&amp;gt; {
        return res.status(201).json({
            msg: "you should receive an email"
        })
}).catch(error =&amp;gt; {
        return res.status(500).json({ error })
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Everything is done. Now it's time to test your API endpoint as we did with the test account. If you have followed along carefully, your users will receive an email like the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2htyr8e7b8c1ku4buec0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2htyr8e7b8c1ku4buec0.png" alt="illustration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we delved into the world of email communication within the Node.js environment using the powerful Nodemailer library. We covered the essentials of setting up Nodemailer, crafting and sending emails, testing with a test account, and configuring our own Gmail account for real delivery.&lt;/p&gt;

&lt;p&gt;As developers, we understand the significance of effective communication in today’s digital landscape. Leveraging Nodemailer empowers us to seamlessly integrate email capabilities into our applications, enabling us to reach users with essential information, notifications, and personalized content.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>node</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>nodemailer</category>
    </item>
    <item>
      <title>How to Deploy your NextJS App on a VPS</title>
      <dc:creator>Faizan</dc:creator>
      <pubDate>Sat, 06 Jul 2024 16:11:07 +0000</pubDate>
      <link>https://dev.to/faizan711/how-to-deploy-your-nextjs-app-on-a-vps-45kf</link>
      <guid>https://dev.to/faizan711/how-to-deploy-your-nextjs-app-on-a-vps-45kf</guid>
      <description>&lt;h1&gt;
  
  
  Deploying Your Next.js Application on a VPS
&lt;/h1&gt;

&lt;p&gt;Deployment is a crucial part of development that most crash courses and tutorials leave out of their curriculum. It is one of the more underrated skills and a must-have to become a well-rounded developer. Today we are going to see how to deploy a Next.js application on a VPS (Virtual Private Server).&lt;/p&gt;

&lt;p&gt;You might be thinking: why not just deploy it on Vercel? It is simple and seamless. While Vercel is a popular option, you might prefer to test on your own VPS, you might not be comfortable with a managed hosting provider, or you might want to avoid Vercel's costs, which can climb quickly as usage grows.&lt;/p&gt;

&lt;p&gt;This article is a complete guide to deploying your Next.js application: setting up Nginx as a reverse proxy, connecting your domain to the Next.js app, getting an SSL certificate using Certbot, and running the app with PM2. So let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Have Beforehand
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your own VPS&lt;/li&gt;
&lt;li&gt;A Next.js application&lt;/li&gt;
&lt;li&gt;Basic knowledge of SSH, Node.js, and basic Linux command-line usage&lt;/li&gt;
&lt;li&gt;Domain for your application (optional, you can check your app running directly on IP also)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Set Up Your VPS
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Connect to VPS using SSH&lt;/strong&gt;: Replace &lt;code&gt;your-vps-ip&lt;/code&gt; with the IP address of your VPS.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@your-vps-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Install Necessary Software&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Update and Upgrade&lt;/strong&gt;: Update the package list and upgrade installed packages to the latest versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt upgrade -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Install Node.js and npm&lt;/strong&gt;: Install Node.js and npm on your VPS. Note that the version in the default apt repositories can be quite old; for a current release, consider installing via NodeSource or nvm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install -y nodejs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Install PM2&lt;/strong&gt;: PM2 is a process manager for Node.js applications. It keeps your app running and restarts it if it crashes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm install -g pm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy Your Next.js Application
&lt;/h2&gt;

&lt;p&gt;Here are three ways to get your application code onto the VPS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Transfer your application files to the VPS using SCP&lt;/strong&gt;: Use the command below from your local machine’s terminal. Replace the paths as needed.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp &lt;span class="nt"&gt;-r&lt;/span&gt; /path/to/your/nextjs-app root@your-vps-ip:/path/to/destination
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clone your application from a GitHub repository&lt;/strong&gt;: If it is a public repo, you can clone it directly over HTTPS. For a private repo, generate an SSH key pair on your VPS, then add the public key to the SSH keys section of your GitHub account so the VPS can clone the repo over SSH.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone &amp;lt;Your repository &lt;span class="nb"&gt;link&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a new Next.js app on the VPS&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /path/to/your/nextjs-app
npx create-next-app@latest myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that the application code is on the VPS (run &lt;code&gt;npm install&lt;/code&gt; inside the app directory so its dependencies are present), it&amp;#39;s time to set up Nginx as a reverse proxy and connect the domain to our application. Then, at last, we will build our app and start it with PM2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up a Reverse Proxy with Nginx
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install Nginx&lt;/strong&gt;: Install Nginx, a web server that can act as a reverse proxy.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nginx &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Adjust the Firewall&lt;/strong&gt;: Allow SSH as well as Nginx traffic on ports 80 and 443. Allow OpenSSH before running &lt;code&gt;ufw enable&lt;/code&gt;, or you can lock yourself out of the server.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo ufw allow OpenSSH
sudo ufw allow 'Nginx HTTP'
sudo ufw allow 'Nginx HTTPS'
sudo ufw enable
sudo ufw status
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The status should show something like below:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
Nginx HTTPS                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)
Nginx HTTPS (v6)           ALLOW       Anywhere (v6)
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check if Nginx has started&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Verify by going to the IP of your VPS &lt;code&gt;http://your_vps_ip_address&lt;/code&gt; on a browser; it should show the default Nginx page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create Nginx Config&lt;/strong&gt;: Create an Nginx configuration file to forward web traffic to your Next.js application.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/nginx/sites-available/nextjs-app.conf
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Paste the below configuration and replace &lt;code&gt;your-domain.com&lt;/code&gt; with your domain.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;your-domain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;'upgrade'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;proxy_cache_bypass&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Now delete the default config file:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/nginx/sites-enabled/default
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Enable the configuration by creating a symbolic link, then test the config for syntax errors and restart Nginx:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo ln -s /etc/nginx/sites-available/nextjs-app.conf /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
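&lt;p&gt;The reverse proxy above passes every request straight through to Next.js. One optional tweak is to let Nginx compress text-based responses on the way to the browser. A sketch of directives you could add inside the same &lt;code&gt;server&lt;/code&gt; block (Ubuntu&amp;#39;s default &lt;code&gt;nginx.conf&lt;/code&gt; may already enable some of this globally):&lt;/p&gt;

```nginx
# Compress common text-based assets before sending them to clients
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
# Skip tiny responses, where compression overhead outweighs the savings
gzip_min_length 1024;
```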

&lt;h2&gt;
  
  
  Secure Your Application with SSL
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Install Certbot with Snapd&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Install Snapd&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install snapd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Ensure you have the latest Snapd version installed&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snap install core; sudo snap refresh core
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Install Certbot with Snapd&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo snap install --classic certbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Create a symlink to ensure Certbot runs&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ln -s /snap/bin/certbot /usr/bin/certbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Obtain SSL Certificate with Certbot&lt;/strong&gt;: Certbot will obtain the certificate, update your Nginx configuration to serve it, and offer to redirect HTTP to HTTPS. Renewal is automatic; you can verify it works with &lt;code&gt;sudo certbot renew --dry-run&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo certbot --nginx -d your_domain_name.com -d www.your_domain_name.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Build Your Application and Start with PM2
&lt;/h2&gt;

&lt;p&gt;Now that we have our app on the VPS, the domain connected with it, and SSL to secure it, it's finally time to build our Next.js app and get it running.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Go to your app and run the build command&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /path/to/your/nextjs-app
npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start the application using PM2&lt;/strong&gt;: Run this from inside your app directory so &lt;code&gt;npm start&lt;/code&gt; picks up the right &lt;code&gt;package.json&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;pm2 start npm &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"myapp"&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; start
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Save the PM2 process list and set up PM2 to start on boot&lt;/strong&gt;: &lt;code&gt;pm2 startup&lt;/code&gt; prints a command tailored to your system; copy and run it to register the boot hook.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;pm2 save
&lt;span class="nb"&gt;sudo &lt;/span&gt;pm2 startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Now go to your domain, and you will see your Next.js app running&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://your-domain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying your Next.js application on a VPS might seem challenging at first, but by following these steps, you can set up your server, install the necessary software, transfer your application, and secure it with HTTPS. This process gives you more control over your application and can lead to better performance and cost savings. Happy deploying!&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/@zay_kyoy/deploy-nextjs-app-on-vps-20aedfbed3d1" rel="noopener noreferrer"&gt;Deploy NextJS App on VPS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Epilogue
&lt;/h2&gt;

&lt;p&gt;Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at &lt;a href="mailto:fa1319673@gmail.com"&gt;fa1319673@gmail.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>webdev</category>
      <category>vps</category>
      <category>ssl</category>
    </item>
  </channel>
</rss>
