<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ram Gopal Srikar Katakam</title>
    <description>The latest articles on DEV Community by Ram Gopal Srikar Katakam (@ramgopalsrikar).</description>
    <link>https://dev.to/ramgopalsrikar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F604679%2F18fb0804-5244-42b2-98f4-fbd4c10f6662.jpeg</url>
      <title>DEV Community: Ram Gopal Srikar Katakam</title>
      <link>https://dev.to/ramgopalsrikar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ramgopalsrikar"/>
    <language>en</language>
    <item>
      <title>Optimizing Development with VS Code Copilot or Cursor: A Practical Guide</title>
      <dc:creator>Ram Gopal Srikar Katakam</dc:creator>
      <pubDate>Wed, 06 Aug 2025 15:22:52 +0000</pubDate>
      <link>https://dev.to/ramgopalsrikar/optimizing-development-with-vs-code-copilot-or-cursor-a-practical-guide-4d38</link>
      <guid>https://dev.to/ramgopalsrikar/optimizing-development-with-vs-code-copilot-or-cursor-a-practical-guide-4d38</guid>
      <description>&lt;p&gt;As a developer juggling multiple side projects, I've learned firsthand how to harness VS Code Copilot and i assume it would similar for Cursor to streamline workflows and avoid common pitfalls. In this guide, I'll share actionable strategies to maximize their potential, drawing from my experiences and mistakes so you can avoid repeating them. By setting up your environment thoughtfully and maintaining disciplined practices, you can supercharge your development process. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Tailor Your Copilot Instructions to Your Project
&lt;/h2&gt;

&lt;p&gt;One of the most common mistakes is using generic or copied templates for your Copilot instructions. Instead, craft instructions specific to your project’s structure and goals.&lt;/p&gt;

&lt;p&gt;For example, in one of my projects, I maintained a single repository for both frontend and backend code but created separate Product Requirement Documents (PRDs) for each. This setup gave the AI clear context for both components, making it easier to generate relevant code and suggestions. The result? Faster development and fewer misaligned outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Spend time writing detailed Copilot instructions that reflect your project’s architecture. Specify whether you’re working with a monorepo, microservices, or separate frontend/backend stacks to ensure the AI understands your context.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Treat Your PRD as the Source of Truth
&lt;/h2&gt;

&lt;p&gt;Ever wondered why your coding agent suddenly seems to have become dumb, with responses that no longer align with your expectations? Then this section is for you. Your PRD is the backbone of your project, and letting the AI manage it without oversight can lead to issues like &lt;strong&gt;context poisoning&lt;/strong&gt; (hallucinated or incorrect information creeping into the context) or &lt;strong&gt;context distraction&lt;/strong&gt; (irrelevant details derailing the model). These are critical failure modes outlined in &lt;a href="https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html?ref=blog.langchain.com" rel="noopener noreferrer"&gt;this insightful blog&lt;/a&gt;, which I highly recommend reading. Believe me, I spent hours, even days, just cleaning up my PRD and tasks to remove unnecessary information that might have been degrading model performance. None of the tasks below are meant to be done manually (i.e., you editing the file): let the agent handle them while you supervise the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manually review and update your PRD&lt;/strong&gt; to ensure it’s free of conflicting or repetitive content. Use the agent to question things, e.g. "do you think this section is required?", and based on its response decide whether to keep or remove the section. &lt;/li&gt;
&lt;li&gt;Avoid letting the AI overwrite critical sections without your approval.&lt;/li&gt;
&lt;li&gt;Regularly check for context engineering issues, such as outdated dependencies or ambiguous requirements, which can degrade model performance.&lt;/li&gt;
&lt;li&gt;Keep separate PRDs for frontend and backend if you maintain a single repo for both codebases.&lt;/li&gt;
&lt;li&gt;Define a proper directory structure with filenames, and keep metadata about each file and its purpose. I have seen scenarios where the agent created a new file even though one already existed for that exact purpose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By keeping your PRD clean and focused, you ensure the AI has a reliable foundation to work from.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Embrace Continuous Learning and PRD Evolution
&lt;/h2&gt;

&lt;p&gt;A PRD isn’t a “set it and forget it” document. As you work with AI tools, you’ll notice occasional missteps—code that doesn’t align with your expectations or repeated errors. Use these as opportunities to refine your PRD.&lt;/p&gt;

&lt;p&gt;For instance, if the AI generates incorrect API calls, document the correct behavior in your PRD. You can even ask the AI to update the PRD for you with a prompt like: &lt;em&gt;“Update the PRD to include this API endpoint correction to prevent future errors.”&lt;/em&gt; This iterative process ensures the AI learns from its mistakes and improves over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt;: Treat your PRD as a living document. Each time the AI deviates from your desired outcome, update the PRD to include specific guidance, reducing the chance of repeated errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Set Up Model Context Protocol (MCP) in VS Code
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol (MCP) is a game-changer for AI-assisted development. It allows your AI to interact with external tools, APIs, or data sources, making it far more powerful than relying on its internal knowledge alone. Setting up MCP in VS Code is straightforward and can save you hours of manual verification.&lt;/p&gt;

&lt;p&gt;For example, if you’re unsure whether an API endpoint is correct (especially if the AI might hallucinate it), you can configure an MCP server to fetch real-time data from the web. Instead of manually checking endpoints in a browser, you can ask the AI: &lt;em&gt;“Verify this API endpoint by fetching its documentation.”&lt;/em&gt; This is particularly useful for staying up-to-date with rapidly changing APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to set it up&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow the &lt;a href="https://code.visualstudio.com/docs/copilot/chat/mcp-servers" rel="noopener noreferrer"&gt;VS Code MCP setup guide&lt;/a&gt; to configure MCP in your workspace.&lt;/li&gt;
&lt;li&gt;Explore the &lt;a href="https://github.com/modelcontextprotocol/servers" rel="noopener noreferrer"&gt;MCP server repository&lt;/a&gt; for a list of available servers, such as those for GitHub, file systems, or web scraping.&lt;/li&gt;
&lt;li&gt;Add relevant MCP servers to your &lt;code&gt;.vscode/mcp.json&lt;/code&gt; file. For instance, to use a web-fetching server:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"puppeteer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-puppeteer"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup allows your AI to fetch and verify external data, reducing errors and speeding up development. The server used here is &lt;a href="https://github.com/modelcontextprotocol/servers-archived/tree/main/src/puppeteer" rel="noopener noreferrer"&gt;puppeteer&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Don’t Blindly Trust AI—Do Your Own Research
&lt;/h2&gt;

&lt;p&gt;While Copilot and Cursor often provide solid suggestions, they’re not infallible. Over-relying on AI can lead to suboptimal architectural decisions or missed opportunities to leverage better tools and frameworks.&lt;/p&gt;

&lt;p&gt;During the architecture phase, conduct your own research to validate the AI’s suggestions. For example, if the AI recommends a specific library, cross-check its documentation or explore alternatives. This ensures you’re not locked into a suboptimal solution just because the AI suggested it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: In one project, while developing a workflow, the AI kept suggesting implementing all the LLM APIs directly, and the workflow was getting complex. The entire time it didn't suggest any alternatives, even when asked. But after studying LangChain, I mentioned it to the AI, and its response was, "you're absolutely right, we should implement this framework!" So always do your own research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: When the AI’s suggestions feel off, prompt it with alternative approaches: “What if we used [alternative tool] instead? Compare the two.” This encourages the AI to rethink its recommendations.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Automate Subflows with Reusable Prompts
&lt;/h2&gt;

&lt;p&gt;One of the most powerful features of VS Code Copilot is the ability to create reusable prompts to automate repetitive tasks. By setting up prompts, you can streamline subflows like creating branches, running tests, committing code, and raising pull requests.&lt;/p&gt;

&lt;p&gt;For example, I created a prompt that triggers when I start a new task:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a new branch based on the task name.&lt;/li&gt;
&lt;li&gt;Executes the task from the &lt;code&gt;tasks.md&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Commits changes with a standardized message.&lt;/li&gt;
&lt;li&gt;Pushes the branch to the remote repository and raises a pull request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to set it up&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In VS Code, create a prompt file with the &lt;strong&gt;Chat: New Prompt File&lt;/strong&gt; command in the Command Palette. This command creates a &lt;code&gt;.prompt.md&lt;/code&gt; file in the &lt;code&gt;.github/prompts&lt;/code&gt; folder at the root of your workspace. For more information, visit &lt;a href="https://code.visualstudio.com/docs/copilot/copilot-tips-and-tricks#_reusable-prompts" rel="noopener noreferrer"&gt;Reusable Prompts&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trigger the prompt in agent mode by typing &lt;code&gt;#newTask&lt;/code&gt; in the chat input.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This automation saves time and ensures consistency across your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following these strategies—tailoring Copilot instructions, maintaining a robust PRD, embracing continuous learning, setting up MCP, conducting your own research, and automating subflows—you can maximize the power of AI tools like VS Code Copilot or Cursor. These practices have helped me avoid common pitfalls like context poisoning and over-reliance on AI, and I hope they’ll do the same for you.&lt;/p&gt;

&lt;p&gt;For further reading, check out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html?ref=blog.langchain.com" rel="noopener noreferrer"&gt;How Contexts Fail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.visualstudio.com/docs/copilot/chat/mcp-servers" rel="noopener noreferrer"&gt;VS Code MCP Setup Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/modelcontextprotocol/servers" rel="noopener noreferrer"&gt;MCP Server Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy coding, and may your side hustles thrive with AI by your side!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>vscode</category>
      <category>cursor</category>
    </item>
    <item>
      <title>Build your own YouTube mp3 downloader ...</title>
      <dc:creator>Ram Gopal Srikar Katakam</dc:creator>
      <pubDate>Wed, 31 Mar 2021 20:59:12 +0000</pubDate>
      <link>https://dev.to/ramgopalsrikar/build-your-own-youtube-mp3-downloader-1gnh</link>
      <guid>https://dev.to/ramgopalsrikar/build-your-own-youtube-mp3-downloader-1gnh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hello everyone,&lt;br&gt;
                Recently I had the opportunity to work on a music application, where I developed a REST API to download the audio file given the video ID of the source YouTube video. In this blog, I will cover my experience building the project: common pitfalls, design decisions I had to make, and the maximum performance I could achieve given the constraints. So let's dive into the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The youtube-dl open-source library
&lt;/h2&gt;

&lt;p&gt;The initial challenge when downloading content from YouTube is finding an open-source library. Fortunately, I found the &lt;a href="https://github.com/ytdl-org/youtube-dl" rel="noopener noreferrer"&gt;youtube-dl open-source library&lt;/a&gt; on GitHub. After going through the library and experimenting with its Python SDK, I was comfortable working with it. The interesting part is that the SDK gives an option to directly download the best-quality mp3 file, which makes local execution easy and undemanding. The problems arose when I tried to implement it in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda and its limitations
&lt;/h2&gt;

&lt;p&gt;Once my code worked locally, I packaged my Lambda function file and its dependent libraries into a zip file, uploaded it to Lambda configured with a Python environment, and tried to download the file into Lambda's temporary storage. To my surprise, the function produced an error: mp4-to-mp3 conversion requires FFmpeg, a native Linux binary, and since we don't have access to the operating system of Lambda, this looked like a dead end for conversion. Later I learned about Lambda layers: you can initialize a layer on top of the function containing the FFmpeg library. It was a great introduction to Lambda layers, and I implemented it.&lt;/p&gt;

&lt;p&gt;The Lambda function started running and gave the appropriate output, but the heavy load of downloading a file and then converting it to mp3 pushed Lambda to its limits, especially for large video files, making the implementation unreliable. Also, since the /tmp directory is limited to 512 MB, handling a video larger than that led to errors. To overcome these limitations, I decided on the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It's time to Decouple everything&lt;/li&gt;
&lt;li&gt;Avoiding lambda /tmp repository&lt;/li&gt;
&lt;li&gt;Elastic transcoder to the rescue!&lt;/li&gt;
&lt;li&gt;What does my architecture now look like?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  1. It's time to Decouple everything
&lt;/h2&gt;

&lt;p&gt;Given the computing and storage limitations of Lambda discussed above, I broke the process into multiple small processes, decoupling the application. The first small process handles the API Gateway request and sends back an S3 pre-signed URL if the file is present, or a "file is processing" response otherwise. Dedicating a Lambda function just to handling API Gateway requests helps us meet the requirement of responding within 29 seconds of a request being made through API Gateway; otherwise it produces an error message. Any additional load on this Lambda could jeopardize that time constraint. &lt;/p&gt;

&lt;p&gt;So the functionality handled by this Lambda is to check in DynamoDB whether the requested file has been processed, and to send a response based on that status.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Avoiding lambda /tmp repository
&lt;/h2&gt;

&lt;p&gt;The way youtube-dl works is that it first downloads the mp4 file to the path specified in its SDK, and once the download completes it starts the audio conversion. Here we decouple the processes as discussed above, creating one Lambda function for transcoding and another for downloading. Since we are using two different Lambda functions, it doesn't make sense to download directly to the /tmp directory; instead, we need to divert the incoming data to a more elastic storage, which in our case is S3.&lt;/p&gt;

&lt;p&gt;There are a few implementations I found using Node.js; a popular one, from cloudonaut.io, is &lt;a href="https://cloudonaut.io/download-youtube-videos-with-aws-lambda-and-store-them-on-s3/" rel="noopener noreferrer"&gt;Download youtube videos using lambda and store them in s3&lt;/a&gt;. But since most of the implementations were in Node.js, it was difficult to port them to Python. With help from Stack Overflow, I was able to establish the dedicated connection; the Python version of the code can be found in my GitHub repository under &lt;a href="https://github.com/RamGopalSrikar/mp3-youtube-download/blob/master/youtube-downloader/lambda_function.py" rel="noopener noreferrer"&gt;youtube downloader lambda function&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Elastic transcoder to the rescue!
&lt;/h2&gt;

&lt;p&gt;Transcoding media files from mp4 to the desired mp3 quality puts a strain on Lambda, especially for long media files; in our case we plan to transcode media up to 4 hours long. Hence the AWS service Elastic Transcoder comes to the rescue!&lt;/p&gt;

&lt;p&gt;While configuring Elastic Transcoder, we set up a pipeline with source and target S3 buckets, selecting preset ID "1351620000001-300010", which is 320 kbps audio quality. Once everything is configured, we write a Lambda function that passes the S3 mp4 objects into the pipeline, and make sure that function is triggered by an S3 event notification for the .mp4 format. &lt;/p&gt;

&lt;p&gt;The above setup ensures that transcoding is performed immediately after the mp4 file is uploaded to S3. Next, we initialize another Lambda function that performs operations such as updating the record in DynamoDB and deleting the intermediate mp4 file. To activate this function we use a similar S3 event notification, but this time triggered by a .mp3 file being uploaded.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. What does my architecture now look like?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnlinsajrplg224nqjx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnlinsajrplg224nqjx2.png" alt="Screenshot 2021-03-27 at 6.26.36 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/RamGopalSrikar/mp3-youtube-download" rel="noopener noreferrer"&gt;GitHub code&lt;/a&gt; consists of the source code for the 4 Lambda functions we used. Soon I'll be adding a CloudFormation template to make it easy to deploy.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>restapi</category>
    </item>
    <item>
      <title>Cloud Resume Challenge</title>
      <dc:creator>Ram Gopal Srikar Katakam</dc:creator>
      <pubDate>Sun, 28 Mar 2021 03:15:48 +0000</pubDate>
      <link>https://dev.to/ramgopalsrikar/cloud-resume-challenge-4dae</link>
      <guid>https://dev.to/ramgopalsrikar/cloud-resume-challenge-4dae</guid>
      <description>&lt;h2&gt;
  
  
  About me!
&lt;/h2&gt;

&lt;p&gt;Hello everyone, I'm a recent grad student with a diverse background ranging from hardware design to software development. For the past year I have been working on the cloud, and it has been an amazing experience building projects on the AWS platform and earning the AWS Cloud Practitioner and Solutions Architect Associate certifications. A friend referred me to the Cloud Resume Challenge, and I must say it has been an intriguing experience and a great learning curve in my cloud exploration. I'm currently looking for full-time opportunities as a cloud developer/engineer or DevOps engineer. &lt;/p&gt;

&lt;p&gt;As for the challenge, I'll describe the learning curve I went through while completing each individual task. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;HTML, CSS, javascript&lt;/li&gt;
&lt;li&gt;S3 static website hosting&lt;/li&gt;
&lt;li&gt;Enabling Cloud Front and DNS configuration&lt;/li&gt;
&lt;li&gt;Lambda fronted by API gateway and DynamoDB configuration&lt;/li&gt;
&lt;li&gt;Infrastructure as Code- CloudFormation&lt;/li&gt;
&lt;li&gt;Source Control (GitHub) and CI/CD front &amp;amp; backend (Github Actions)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I'm aware that I have changed the flow a bit compared to what's described in the challenge, because my previous experience developing applications has influenced it.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. HTML, CSS, Javascript Design
&lt;/h2&gt;

&lt;p&gt;Since I am not a heavy frontend developer, I took the approach of using a free template available online, which cut down the time spent developing the frontend while making the website look good on screen sizes from phone to computer. Knowing what the later tasks required, I implemented a count.js file and connected it to the main website, which is currently left blank.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. S3 static website hosting
&lt;/h2&gt;

&lt;p&gt;This was my first time configuring an S3 bucket for static website hosting, where I learned some interesting details, such as that the bucket name must match the domain name used for hosting the website. During this exploration I uploaded my resume and tried to access the website using the object URL available once the files are uploaded. Of course, I was denied access since the files were not public, so I temporarily made them public for a test run.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Enabling CloudFront and DNS configuration
&lt;/h2&gt;

&lt;p&gt;Having studied CloudFront and Origin Access Identity, it was a good experience configuring them for my S3 bucket. After some time spent going through the configuration parameters, I successfully integrated CloudFront in front of S3, and immediately started noticing a globe icon on my resume website, with traffic now encrypted in transit over HTTPS. One problem I faced with CloudFront was handling invalidations: every time I updated my resume on S3, I didn't see the updated version on CloudFront, apparently due to CloudFront caching. This gave me the opportunity to explore invalidations and to be careful to invalidate the cache every time I make an update.&lt;/p&gt;

&lt;p&gt;As for the DNS configuration, I bought a custom domain, ramgopalsrikark.online, from GoDaddy, and changed the NS (Name Server) records to the ones I found in AWS Route 53, so traffic to GoDaddy is redirected to Route 53. To connect to CloudFront, I added an A record pointing to the distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Lambda fronted by API gateway and DynamoDB configuration
&lt;/h2&gt;

&lt;p&gt;From my previous project experience working with Lambda and DynamoDB, those parts didn't take much time. It was, however, my first time working with API Gateway, and I had fun experimenting with REST API design, using a GET method along with Lambda proxy integration to get the response. One interesting insight I observed while configuring API Gateway: with Lambda proxy integration, the body of the JSON response from Lambda must be a string, whereas without Lambda proxy you can output JSON or any data format of your choice in the body. After integration with the serverless backend, the API was working and returning values, but when I integrated it with the code in count.js I got an "access denied" error. That's where I learned about CORS (Cross-Origin Resource Sharing) and enabled it to resolve the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Infrastructure as Code- CloudFormation
&lt;/h2&gt;

&lt;p&gt;CloudFormation was a completely new technology to me, so I spent two days learning the basics of CloudFormation templates and how to write them in YAML. I encountered a few problems, especially while configuring API Gateway, but it was informative, and I would love to explore the technology in detail soon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_vtGN-o7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n003gjrtfxd7jbq0esjz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_vtGN-o7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n003gjrtfxd7jbq0esjz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After completing the YAML file, I used the designer console to view the diagram above. &lt;/p&gt;

&lt;h2&gt;
  
  
  6. Source Control (GitHub) and CI/CD front &amp;amp; backend (Github Actions)
&lt;/h2&gt;

&lt;p&gt;Probably the most exciting part of the project was working with GitHub and GitHub Actions for source control &amp;amp; CI/CD integration. Configuring the main.yml workflow file to automate deployment of the frontend and backend was the fun part. My only regret while working with these tools was not knowing about them earlier; the whole process would have been much easier to develop. In any case, I will use these skills in future project deployments.&lt;/p&gt;

&lt;p&gt;On the whole, it was a great learning experience working on &lt;a href="https://cloudresumechallenge.dev/instructions/"&gt;the Cloud Resume Challenge&lt;/a&gt;. The &lt;a href="https://github.com/RamGopalSrikar/portfolio-website"&gt;front-end code&lt;/a&gt; and the &lt;a href="https://github.com/RamGopalSrikar/portfolio-website-backened"&gt;back-end code&lt;/a&gt; are available on GitHub. The finished webpage is my &lt;a href="https://ramgopalsrikark.online"&gt;portfolio website&lt;/a&gt;; feel free to share your opinion of it.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudresumechallenge</category>
      <category>cloudskills</category>
    </item>
  </channel>
</rss>
