<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pooja Bhavani</title>
    <description>The latest articles on DEV Community by Pooja Bhavani (@pooja_bhavani).</description>
    <link>https://dev.to/pooja_bhavani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3069253%2Fa95879c1-a591-4483-b27d-7b439340a37c.png</url>
      <title>DEV Community: Pooja Bhavani</title>
      <link>https://dev.to/pooja_bhavani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pooja_bhavani"/>
    <language>en</language>
    <item>
      <title>The Only Production Dashboard That Refuses to Serve Coffee</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Fri, 03 Apr 2026 07:53:08 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/the-only-production-dashboard-that-refuses-to-serve-coffee-57ad</link>
      <guid>https://dev.to/pooja_bhavani/the-only-production-dashboard-that-refuses-to-serve-coffee-57ad</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4qrs7mcyrvu1slucqr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4qrs7mcyrvu1slucqr7.png" alt=" " width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;You know what the cloud-native ecosystem was missing? &lt;strong&gt;Production-grade observability tooling for tea.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Introducing &lt;strong&gt;HTCPCP/2.0 Enterprise&lt;/strong&gt; — a fully interactive Kubernetes-style monitoring dashboard for your imaginary tea brewing cluster. It has real-time log streaming, pod health indicators, temperature telemetry, request traces, autoscaling metrics, and a Deploy to Production button that &lt;strong&gt;always&lt;/strong&gt; returns HTTP 418.&lt;/p&gt;

&lt;p&gt;Always. That's not a bug. That's RFC compliance.&lt;/p&gt;

&lt;p&gt;In 1998, &lt;strong&gt;Larry Masinter&lt;/strong&gt; published &lt;a href="https://www.rfc-editor.org/rfc/rfc2324" rel="noopener noreferrer"&gt;RFC 2324&lt;/a&gt; — the &lt;em&gt;Hyper Text Coffee Pot Control Protocol&lt;/em&gt; (HTCPCP). It defined a new internet protocol for controlling coffee pots over a network, and inside it, a new HTTP status code: &lt;strong&gt;418 I'm a Teapot&lt;/strong&gt;, returned whenever a teapot is asked to brew coffee. The RFC was an April Fools' joke. Status 418 was a joke inside the joke.&lt;/p&gt;

&lt;p&gt;Then the internet loved it so much it made it real. Node.js has it. Go has it. Python has it. So I built a full enterprise dashboard to honour it properly.&lt;/p&gt;
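&lt;p&gt;If you haven't met 418 in code, a handler honouring RFC 2324 can be sketched in a few lines (illustrative only; &lt;code&gt;brewStatus&lt;/code&gt; is a made-up helper, not the dashboard's actual code):&lt;/p&gt;

```javascript
// Sketch of an RFC 2324-honouring brew handler.
// RFC 2324 gives coffee the media type message/coffeepot;
// a teapot must refuse it with 418. RFC 7168 (HTCPCP-TEA)
// later added message/teapot for tea.
function brewStatus(contentType) {
  if (contentType === 'message/coffeepot') {
    return 418; // I'm a teapot. Always. That's the whole point.
  }
  if (contentType === 'message/teapot') {
    return 200; // Tea is acceptable.
  }
  return 400; // Unknown beverage media type.
}

console.log(brewStatus('message/coffeepot')); // 418
```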

&lt;p&gt;With Kubernetes pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pooja-bhavani.github.io/htcpcp-enterprise/" rel="noopener noreferrer"&gt;https://pooja-bhavani.github.io/htcpcp-enterprise/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/pooja-bhavani/htcpcp-enterprise" rel="noopener noreferrer"&gt;https://github.com/pooja-bhavani/htcpcp-enterprise&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I started with the most important architectural decision: &lt;strong&gt;what would an enterprise cloud dashboard look like if it took RFC 2324 seriously?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer is: exactly like this. A dark-themed ops console. Kubernetes pod terminology. Horizontal pod autoscaling for chamomile (bedtime approaching in the EU region). An ingress controller blocking espresso machines on port 5000. A CI/CD pipeline that throws &lt;code&gt;CoffeeAttemptException&lt;/code&gt; at the brew step.&lt;/p&gt;

&lt;p&gt;The UI was built mobile-first (lies — it was built for widescreen monitors because that's where ops dashboards live), then I added a CRT scanline effect because nothing says "production system" like mild visual artifacts.&lt;/p&gt;

&lt;p&gt;The hardest part was writing the fake log messages. Each one needed to sound exactly like a real Kubernetes log entry while being completely about tea. I'm proud of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;HPA: scaling chamomile replicas to 3 (bedtime approaching in EU region)&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That one goes hard.&lt;/p&gt;
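&lt;p&gt;The stream behind it is nothing fancy (a sketch of the idea; the templates and names here are illustrative, not the project's source):&lt;/p&gt;

```javascript
// A sketch of the fake log stream: pick a template at random,
// prefix a timestamp, emit. (Illustrative, not the real code.)
const templates = [
  () => 'HPA: scaling chamomile replicas to 3 (bedtime approaching in EU region)',
  () => `kettle-pod-${Math.floor(Math.random() * 4)} Readiness probe passed: water at 96C`,
  () => 'ingress: blocked espresso-machine on port 5000 (policy: teapots-only)',
];

function nextLogLine() {
  const pick = Math.floor(Math.random() * templates.length);
  const ts = new Date().toISOString();
  return `${ts} ${templates[pick]()}`;
}
```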

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt; — This entire project exists because of RFC 2324. Larry Masinter gave the internet its most beloved useless status code, and this dashboard is a monument to that gift. Every 418, every tea pod, every &lt;code&gt;I'm a Teapot&lt;/code&gt; modal — it's all for you, Larry.&lt;/p&gt;

&lt;p&gt;Also submitting for &lt;strong&gt;Community Favorite&lt;/strong&gt;, because I believe in the people who will recognise the &lt;code&gt;4m18s&lt;/code&gt; steep time and feel seen.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built with: HTML, CSS, vanilla JS, RFC 2324, and a deep respect for intentionally useless internet standards.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;No coffee was brewed during the making of this project. Several attempts were made. All returned 418.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Resurrecting the Internet's Past: Building a Modern Gopher Browser with Kiro</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Fri, 05 Dec 2025 17:54:58 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/resurrecting-the-internets-past-building-a-modern-gopher-browser-with-kiro-20if</link>
      <guid>https://dev.to/pooja_bhavani/resurrecting-the-internets-past-building-a-modern-gopher-browser-with-kiro-20if</guid>
      <description>&lt;p&gt;How I brought a 1991 protocol back to life in 6 hours using spec-driven development&lt;/p&gt;

&lt;p&gt;&lt;a href="https://modern-gopher.netlify.app/" rel="noopener noreferrer"&gt;https://modern-gopher.netlify.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv55davbjgia4n8np291.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv55davbjgia4n8np291.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've also added a screen recording for reference.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;When I saw the Kiroween Hackathon's "Resurrection" category, I knew exactly what I wanted to build: a modern interface for the Gopher protocol. For those unfamiliar, Gopher was the dominant way to navigate the internet before the World Wide Web took over in the mid-90s. It's a text-based protocol from 1991 that's simpler, faster, and still has active servers running today—but accessing them requires special clients or command-line tools.&lt;/p&gt;

&lt;p&gt;My goal? Make Gopherspace accessible to everyone through a beautiful, retro-futuristic web application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Gopher?
&lt;/h2&gt;

&lt;p&gt;Gopher represents a fascinating "what if" in internet history. It was cleaner, more organized, and arguably more user-friendly than early HTTP. But it lost the protocol wars to the World Wide Web. Today, there's a small but dedicated community keeping Gopherspace alive, hosting everything from personal blogs to Wikipedia mirrors.&lt;/p&gt;

&lt;p&gt;The problem? There's no modern, user-friendly way to browse it. That's where my project comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Kiro: Spec-Driven Development
&lt;/h2&gt;

&lt;p&gt;Rather than diving straight into code, I used Kiro's spec-driven development workflow. This was a game-changer. Here's how it worked:&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Requirements
&lt;/h2&gt;

&lt;p&gt;I started by defining what the browser needed to do. Using EARS (Easy Approach to Requirements Syntax), I created 8 major user stories with 40 acceptance criteria. For example:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Story:&lt;/strong&gt; As a user, I want to connect to Gopher servers and view their content, so that I can explore Gopherspace through a modern interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acceptance Criteria:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;WHEN a user enters a Gopher URL THEN the Web Interface SHALL establish a TCP connection to the specified server and port&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WHEN the Web Interface receives a Gopher menu response THEN the Web Interface SHALL parse the menu according to RFC 1436 format&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structured approach forced me to think through every feature before writing a single line of code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: Design
&lt;/h2&gt;

&lt;p&gt;Next, Kiro helped me create a comprehensive design document. The most interesting part? Correctness properties—formal statements about what the system should do that can be tested with property-based testing.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Property 2: Gopher menu round-trip consistency&lt;/strong&gt;&lt;br&gt;
For any valid Gopher menu structure, serializing to RFC 1436 format then parsing should produce an equivalent menu structure with all fields preserved.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I ended up with 19 of these properties. They're like unit tests on steroids—instead of testing specific examples, they verify behavior across infinite inputs.&lt;/p&gt;
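&lt;p&gt;A hand-rolled version of that round-trip property looks roughly like this (an illustrative sketch, not the project's actual generators or test code):&lt;/p&gt;

```javascript
// Property check: serialize a random menu to RFC 1436 lines,
// parse it back, and require structural equality.
function serialize(items) {
  return items
    .map((i) => [i.type + i.display, i.selector, i.host, i.port].join('\t'))
    .join('\r\n');
}

function parse(text) {
  return text.split('\r\n').map((line) => {
    const [head, selector, host, port] = line.split('\t');
    return { type: head[0], display: head.slice(1), selector, host, port };
  });
}

function randomMenu() {
  const n = 1 + Math.floor(Math.random() * 5);
  return Array.from({ length: n }, () => ({
    type: '01h7'[Math.floor(Math.random() * 4)],
    display: 'item ' + Math.random().toString(36).slice(2, 7),
    selector: '/doc' + Math.floor(Math.random() * 100),
    host: 'gopher.example.com',
    port: '70',
  }));
}

// Run the property over many random inputs instead of a few
// hand-picked examples.
for (const _ of Array.from({ length: 200 })) {
  const menu = randomMenu();
  const back = parse(serialize(menu));
  if (JSON.stringify(back) !== JSON.stringify(menu)) {
    throw new Error('round-trip property violated');
  }
}
```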

&lt;h2&gt;
  
  
  Phase 3: Implementation Plan
&lt;/h2&gt;

&lt;p&gt;Kiro generated a task list with 14 major tasks, each traceable back to specific requirements. This became my roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up project structure&lt;/li&gt;
&lt;li&gt;Implement Gopher protocol handler&lt;/li&gt;
&lt;li&gt;Build React frontend&lt;/li&gt;
&lt;li&gt;Add bookmarks and history&lt;/li&gt;
&lt;li&gt;Create retro UI theme&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And so on...&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 4: Building
&lt;/h2&gt;

&lt;p&gt;With the spec complete, implementation was surprisingly smooth. I knew exactly what to build and why. No scope creep, no "should I add this?" moments—just focused execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Backend:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js + Express&lt;/li&gt;
&lt;li&gt;Raw TCP socket connections (Gopher uses TCP, not HTTP!)&lt;/li&gt;
&lt;li&gt;Custom RFC 1436 parser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frontend:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React 18 + TypeScript&lt;/li&gt;
&lt;li&gt;Vite for blazing-fast builds&lt;/li&gt;
&lt;li&gt;Pure CSS for that retro aesthetic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Special Sauce:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web Audio API for retro computer sounds&lt;/li&gt;
&lt;li&gt;Service Workers for PWA/offline support&lt;/li&gt;
&lt;li&gt;LocalStorage for bookmarks and history&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Full Gopher Protocol Support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The browser handles all major Gopher item types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text files (type 0)&lt;/li&gt;
&lt;li&gt;Directories (type 1)&lt;/li&gt;
&lt;li&gt;Search servers (type 7)&lt;/li&gt;
&lt;li&gt;Binary files (types 9, g)&lt;/li&gt;
&lt;li&gt;HTML links (type h)&lt;/li&gt;
&lt;li&gt;And more!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Three Retro Terminal Themes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I added a theme switcher with three authentic terminal color schemes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Green phosphor (classic terminal)&lt;/li&gt;
&lt;li&gt;Amber (warm 80s vibes)&lt;/li&gt;
&lt;li&gt;White (high contrast monochrome)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every UI element—buttons, text, borders, even the loading spinner—adapts to the selected theme. It's like having three different vintage computers in one app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Authentic Retro Sounds&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Boot-up sound when you first open the app (like an old PC powering on)&lt;/li&gt;
&lt;li&gt;Modem sound when connecting to servers (that classic dial-up nostalgia)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were created using the Web Audio API—no audio files needed!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Modern Conveniences&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Despite the retro aesthetic, it has modern features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bookmark management with localStorage&lt;/li&gt;
&lt;li&gt;Navigation history (last 50 sites)&lt;/li&gt;
&lt;li&gt;Back button navigation&lt;/li&gt;
&lt;li&gt;Search functionality&lt;/li&gt;
&lt;li&gt;PWA support (install it like a native app!)&lt;/li&gt;
&lt;li&gt;Responsive design (works on mobile)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. The UI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The interface honors Gopher's history while feeling modern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CRT-style scanlines&lt;/li&gt;
&lt;li&gt;Glow effects on interactive elements&lt;/li&gt;
&lt;li&gt;Smooth animations&lt;/li&gt;
&lt;li&gt;ASCII art logo&lt;/li&gt;
&lt;li&gt;Monospace fonts throughout&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Spec-Driven Development Works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was skeptical at first—90 minutes of planning before writing code? But it paid off massively. I never got stuck wondering "what should I build next?" The spec was my north star.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Correctness Properties Are Powerful&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Property-based testing catches bugs that unit tests miss. For example, my "menu round-trip" property found an edge case with empty selectors that I never would have thought to test manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Constraints Breed Creativity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Gopher protocol is incredibly simple compared to HTTP. This constraint forced me to focus on UX and aesthetics rather than complex features. Sometimes less is more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Retro Can Be Modern&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The retro aesthetic isn't just nostalgia—it's functional. The high-contrast colors, simple layouts, and clear typography make the app highly usable. Plus, it's just fun!&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The project is open source and ready to run:&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/pooja-bhavani/Kiroween-Challenge" rel="noopener noreferrer"&gt;https://github.com/pooja-bhavani/Kiroween-Challenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try these Gopher URLs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gopher://gopher.floodgap.com&lt;/code&gt; - Most popular Gopher server&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gopher://gopherpedia.com&lt;/code&gt; - Wikipedia on Gopher!&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gopher://gopher.quux.org&lt;/code&gt; - Classic Gopher content&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This project isn't just about Gopher—it's about preserving internet history and making it accessible. The early internet had a charm and simplicity we've lost. Projects like this remind us that "dead" technologies often have valuable lessons to teach.&lt;/p&gt;

&lt;p&gt;Plus, it's a testament to what you can build in a few hours with the right tools and approach. Spec-driven development with Kiro turned what could have been a chaotic hackathon scramble into a structured, enjoyable building experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;I'm considering adding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gopher+ protocol support&lt;/li&gt;
&lt;li&gt;Custom theme creator&lt;/li&gt;
&lt;li&gt;Browser extension version&lt;/li&gt;
&lt;li&gt;Gopher server hosting capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But for now, I'm just excited to share this with the world and see people explore Gopherspace again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building the Gopher Browser taught me that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Old technologies deserve modern interfaces&lt;/li&gt;
&lt;li&gt;Spec-driven development is worth the upfront investment&lt;/li&gt;
&lt;li&gt;Retro aesthetics can be both beautiful and functional&lt;/li&gt;
&lt;li&gt;You can build something meaningful in just 6 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're interested in internet history, retro computing, or just want to see what the web looked like before the web, give it a try. And if you're building something complex, consider using specs—your future self will thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Deep Dive (For the Curious)
&lt;/h2&gt;

&lt;h2&gt;
  
  
  How Gopher Works
&lt;/h2&gt;

&lt;p&gt;Gopher is beautifully simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Client connects to server via TCP (usually port 70)&lt;/li&gt;
&lt;li&gt;Client sends: &lt;code&gt;selector\r\n&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Server responds with either a menu (tab-delimited lines) or plain text content&lt;/li&gt;
&lt;li&gt;Connection closes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it! No headers, no cookies, no JavaScript. Just pure content.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Parsing Challenge
&lt;/h2&gt;

&lt;p&gt;Gopher menus are tab-delimited:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Type + Display + TAB + Selector + TAB + Host + TAB + Port&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;0About this server	/about.txt	gopher.example.com	70
1Documents	/docs	gopher.example.com	70&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;My parser had to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different line endings (CRLF vs LF)&lt;/li&gt;
&lt;li&gt;Missing fields&lt;/li&gt;
&lt;li&gt;Invalid item types&lt;/li&gt;
&lt;li&gt;Empty selectors&lt;/li&gt;
&lt;li&gt;Special characters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The round-trip property test caught several edge cases I missed!&lt;/p&gt;
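&lt;p&gt;A defensive parser for those cases can be sketched like this (illustrative; the function and field names are mine, not the project's):&lt;/p&gt;

```javascript
// Defensive RFC 1436 menu parser (illustrative sketch).
// Handles CRLF vs LF, the trailing "." terminator line,
// missing fields, and falls back to an info line ("i") type.
function parseGopherMenu(raw) {
  return raw
    .split(/\r?\n/)                    // tolerate both line endings
    .filter((line) => line !== '')     // drop blank lines
    .filter((line) => line !== '.')    // drop the menu terminator
    .map((line) => {
      const [head = '', selector = '', host = '', port = '70'] =
        line.split('\t');              // default any missing fields
      return {
        type: head[0] || 'i',          // default to an info line
        display: head.slice(1),
        selector,
        host,
        port,
      };
    });
}
```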

&lt;h2&gt;
  
  
  The Audio Challenge
&lt;/h2&gt;

&lt;p&gt;Creating retro sounds without audio files was fun. The boot-up sound uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low frequency beep (100 Hz)&lt;/li&gt;
&lt;li&gt;Rising tone (100 Hz → 800 Hz)&lt;/li&gt;
&lt;li&gt;High frequency beep (1200 Hz)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All synthesized in real-time with the Web Audio API!&lt;/p&gt;
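&lt;p&gt;Synthesizing that sweep takes only an oscillator and a gain node (a sketch with illustrative timings and gain; the project's actual envelope may differ):&lt;/p&gt;

```javascript
// Sketch of the boot-up sweep with the Web Audio API.
// Browser-only; the guard keeps it inert under Node.
function playBootSound() {
  if (typeof AudioContext === 'undefined') return;
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = 'square';                                               // chunky retro timbre
  osc.frequency.setValueAtTime(100, ctx.currentTime);                // low beep
  osc.frequency.linearRampToValueAtTime(800, ctx.currentTime + 0.4); // rising tone
  osc.frequency.setValueAtTime(1200, ctx.currentTime + 0.5);         // high beep
  gain.gain.setValueAtTime(0.2, ctx.currentTime);                    // keep it polite
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.7);
}
```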

&lt;h2&gt;
  
  
  The Theme System
&lt;/h2&gt;

&lt;p&gt;CSS variables made theme switching trivial:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;.app.theme-green {
  --primary-color: #00ff00;
  --bg-color: #000;
}

.app.theme-amber {
  --primary-color: #ffb000;
  --bg-color: #000;
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every component uses &lt;code&gt;var(--primary-color)&lt;/code&gt; instead of hardcoded colors. Change the class, change the theme!&lt;/p&gt;
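&lt;p&gt;The switching logic itself is a few lines of vanilla JS (a sketch; the element selector and storage key names here are made up):&lt;/p&gt;

```javascript
// Theme switching sketch: swap the container class and remember
// the choice. Guarded so it is inert outside a browser.
function themeClass(name) {
  return 'app theme-' + name; // e.g. "app theme-amber"
}

function applyTheme(name) {
  if (typeof document === 'undefined') return;
  document.querySelector('.app').className = themeClass(name);
  localStorage.setItem('gopher-theme', name); // restore on next visit
}
```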

&lt;p&gt;GitHub: &lt;a href="https://github.com/pooja-bhavani/Kiroween-Challenge" rel="noopener noreferrer"&gt;https://github.com/pooja-bhavani/Kiroween-Challenge&lt;/a&gt;&lt;br&gt;
Built with: Kiro, React, Node.js, TypeScript, and lots of ☕&lt;/p&gt;

</description>
      <category>kiro</category>
      <category>gopher</category>
    </item>
    <item>
      <title>Understanding AWS Storage &amp; Recovery Services: A Complete Guide</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Tue, 18 Nov 2025 19:42:05 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/understanding-aws-storage-recovery-services-a-complete-guide-89f</link>
      <guid>https://dev.to/pooja_bhavani/understanding-aws-storage-recovery-services-a-complete-guide-89f</guid>
      <description>&lt;p&gt;&lt;em&gt;When operating in the cloud, data storage and recovery are among the most crucial components. Amazon Web Services (AWS) offers a comprehensive array of services designed to meet various storage requirements, ranging from basic object storage to fully managed file systems, hybrid storage, and disaster recovery solutions. In this blog, we’ll explore key AWS services like AMI, S3, EBS, EFS, AWS Backup their purpose, use cases, and when to choose which.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hwy0aec6cmun9l8bx3m.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hwy0aec6cmun9l8bx3m.gif" alt=" " width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Machine Image (AMI)
&lt;/h2&gt;

&lt;p&gt;Let’s start with what an AMI is.&lt;/p&gt;

&lt;p&gt;An Amazon Machine Image (AMI) is a pre-configured template used to create your EC2 instances. It contains the operating system, application server, and applications required to launch an instance. Instead of manually configuring each instance, you can use an AMI to launch a server with all the necessary software and configurations already set up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of AMIs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS-provided AMIs (Amazon Linux, Ubuntu, Windows)&lt;/li&gt;
&lt;li&gt;Marketplace AMIs (3rd-party software)&lt;/li&gt;
&lt;li&gt;Custom AMIs (your own configured servers)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Amazon Elastic Block Store (Amazon EBS)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwmylgflxb96laj2k4r9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwmylgflxb96laj2k4r9.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Block Store (Amazon EBS) provides scalable, high-performance block storage that can be attached to Amazon EC2 instances; data is divided into blocks and stored. It is designed for low-latency, high-performance workloads where applications need fast read/write access to blocks of data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features of EBS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Block-Level Storage&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data is divided into blocks and stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The application can read/write specific blocks directly, making it ideal for databases and file systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Persistence&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Here the data persists even if you stop or terminate the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Snapshots (point-in-time backups) can be taken and stored in S3.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EC2 Attachment&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each EBS volume can be attached to an EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A volume attaches to one EC2 instance at a time (but you can detach it and attach it elsewhere).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Durability &amp;amp; Availability&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Data is automatically replicated within an Availability Zone (AZ).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It ensures high durability and protection against hardware failures.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use cases of EBS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Databases&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;MySQL, PostgreSQL, MongoDB, Oracle, etc. need block storage for fast transactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EBS provides low latency and high throughput.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Boot Volumes&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;EC2 instances typically boot from an EBS root volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can choose pre-built AMIs with EBS as the root storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What’s the difference between an EBS root volume and an additional EBS data volume?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root Volume&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The root volume is essential for launching and running the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default, the root volume is deleted when you terminate the instance, so treat it as temporary storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Root volumes can be either instance store (ephemeral, data lost on termination) or EBS-backed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Root volumes hold the operating system, boot files, and essential system components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The root volume size is determined at launch by the AMI’s configuration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EBS (Elastic Block Store)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An EBS data volume persists even if you terminate the instance (though you can choose to delete it on termination).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EBS volumes provide extra storage for applications, data, and other files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EBS volumes persist when the instance is stopped or restarted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Capacity can be scaled up when needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are used for application data, databases, and other files that need persistent storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They offer more flexibility for scaling storage capacity and adjusting performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  EFS (Elastic File System)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8cpwf5s31awy5s9b9dj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8cpwf5s31awy5s9b9dj.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon EFS (Elastic File System) is a serverless, scalable, and cloud-based file storage service for AWS compute services and on-premises resources. It provides a shared file system that can be mounted simultaneously by multiple EC2 instances or containers, making it ideal for workloads that require concurrent access to the same data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications that need to share files across multiple compute instances.&lt;/li&gt;
&lt;li&gt;A common storage space for shared web content.&lt;/li&gt;
&lt;li&gt;Shared storage for training and running machine learning models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Features of EFS:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elastic and Scalable:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automatically grows and shrinks as you add or remove files.&lt;/p&gt;

&lt;p&gt;No need to provision storage in advance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed Service:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fully managed by AWS — no hardware or infrastructure to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared Access:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multiple instances or containers can access the same file system concurrently using the NFS protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Availability and Durability:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data is stored across multiple Availability Zones (AZs) for redundancy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance Modes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;General Purpose: for most applications (low-latency access).&lt;/li&gt;
&lt;li&gt;Max I/O: for large-scale, highly parallel workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Storage Classes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard: for frequently accessed files.&lt;/li&gt;
&lt;li&gt;Infrequent Access (EFS IA): cost-effective for rarely accessed files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Integration with AWS Services:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Works with EC2, ECS, Lambda, and other services requiring shared file storage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Amazon Simple Storage Service (Amazon S3)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs78ww53kftp8jjszxob6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs78ww53kftp8jjszxob6.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon S3 is a public cloud storage service in AWS that offers scalability, high availability, security, and strong performance. It provides object-based storage, where data is stored inside S3 buckets in distinct units called objects, rather than as traditional files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each bucket must be created in a specific region.&lt;/li&gt;
&lt;li&gt;Bucket names must be globally unique across all regions and accounts.&lt;/li&gt;
&lt;li&gt;A bucket can hold an unlimited amount of data; the maximum size of a single object is 5 TB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup and Storage – Store and manage critical data securely.&lt;/li&gt;
&lt;li&gt;Data Lakes and Analytics – Manage, analyze, and protect large amounts of data for cloud-native and mobile applications.&lt;/li&gt;
&lt;li&gt;Disaster Recovery – Replicate data across regions so it stays available if a region goes down.&lt;/li&gt;
&lt;li&gt;Archiving – Archive and retrieve infrequently accessed data using cost-effective storage classes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Types of S3 Storage Classes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;S3 Standard:&lt;/p&gt;

&lt;p&gt;This is the general-purpose storage for frequently accessed data it offers high durability, availability, and performance with low latency and high throughput. It is suitable for websites, content distribution, big data analytics, mobile and cloud applications.&lt;/p&gt;

&lt;p&gt;S3 Intelligent-Tiering:&lt;/p&gt;

&lt;p&gt;This class automatically moves objects between frequent and infrequent access tiers. It moves objects between frequent, infrequent, and archive instant access tiers based on monitoring, without performance impact or retrieval charges.&lt;/p&gt;

&lt;p&gt;Cost: Optimizes cost automatically, minimal monitoring fee.&lt;/p&gt;

&lt;p&gt;S3 Standard-Infrequent Access (S3 Standard-IA):&lt;/p&gt;

&lt;p&gt;Designed for data accessed less frequently but requires rapid access when needed. It offers lower storage costs than S3 Standard but higher retrieval costs. Used for backups, disaster recovery, long-term storage.&lt;/p&gt;

&lt;p&gt;Cost: Lower storage cost than Standard, but retrieval has a fee.&lt;/p&gt;

&lt;p&gt;S3 One Zone-Infrequent Access (S3 One Zone-IA):&lt;/p&gt;

&lt;p&gt;Similar to S3 Standard-IA but stores data in a single Availability Zone instead of multiple zones, but offers lower costs. Use Case – Secondary backups or easily re-creatable data.&lt;/p&gt;

&lt;p&gt;Cost: Cheaper than Standard-IA, but lower redundancy.&lt;/p&gt;

&lt;p&gt;S3 Glacier Instant Retrieval:&lt;/p&gt;

&lt;p&gt;An archive storage class for data requiring immediate access, such as medical images or news media assets. It provides low-cost storage with millisecond retrieval times.&lt;/p&gt;

&lt;p&gt;S3 Glacier Flexible Retrieval:&lt;/p&gt;

&lt;p&gt;A low-cost archive storage class for long-term data archiving, like backups and disaster recovery. Retrieval times range from minutes to hours depending on retrieval option.&lt;/p&gt;

&lt;p&gt;S3 Glacier Deep Archive:&lt;/p&gt;

&lt;p&gt;The lowest-cost storage class for long-term archiving, suitable for data accessed once or twice a year, such as compliance archives. Retrieval times are typically within 12 hours.&lt;/p&gt;

&lt;p&gt;Cost: Lowest-cost storage for long-term retention.&lt;/p&gt;
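&lt;p&gt;As a quick sketch, the storage class can be chosen per object at upload time with the AWS CLI (the bucket and key names below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upload directly into Standard-IA instead of S3 Standard
aws s3 cp backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz --storage-class STANDARD_IA

# Check the storage class of an existing object
aws s3api head-object --bucket my-example-bucket --key backups/backup.tar.gz --query StorageClass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;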




&lt;h2&gt;
  
  
  AWS Backup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F036xmwqz1n4gzity9pm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F036xmwqz1n4gzity9pm4.png" alt=" " width="768" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Backup is a fully managed AWS service that lets you centrally back up and restore your data across AWS services. It simplifies protecting your applications and data by automating backup scheduling, retention management, and compliance monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of AWS Backup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Centralized Backup Management:&lt;/p&gt;

&lt;p&gt;Manage backups for multiple AWS services (like EBS, RDS, DynamoDB, EFS, FSx) from a single place.&lt;/p&gt;

&lt;p&gt;Automated Backup Scheduling:&lt;/p&gt;

&lt;p&gt;Define backup plans and policies to automatically take backups at specified times.&lt;/p&gt;

&lt;p&gt;Retention and Lifecycle Management:&lt;/p&gt;

&lt;p&gt;Set rules for how long backups should be retained and automatically transition older backups to cheaper storage classes.&lt;/p&gt;

&lt;p&gt;Cross-Region and Cross-Account Backups:&lt;/p&gt;

&lt;p&gt;Copy backups to other AWS regions or accounts for disaster recovery and compliance purposes.&lt;/p&gt;

&lt;p&gt;Data Protection and Compliance:&lt;/p&gt;

&lt;p&gt;Helps meet regulatory requirements by providing auditing, logging, and encryption for backups.&lt;/p&gt;

&lt;p&gt;On-Demand Backups:&lt;/p&gt;

&lt;p&gt;Create manual backups whenever needed, in addition to scheduled backups.&lt;/p&gt;
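&lt;p&gt;An on-demand backup can also be started from the AWS CLI. A minimal sketch, where the vault name, resource ARN, and IAM role ARN are placeholders you would replace with your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start a one-off backup of an EBS volume into the Default vault
aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:ec2:us-east-1:123456789012:volume/vol-0abcd1234 \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;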

</description>
      <category>aws</category>
      <category>s3bucket</category>
      <category>elasticblockstore</category>
      <category>awsbackup</category>
    </item>
    <item>
      <title>Empowering AI Conversations Using Redis 8 as a Real-Time Brain</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Sat, 02 Aug 2025 09:17:08 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/empowering-ai-conversations-using-redis-8-as-a-real-time-brain-4bmi</link>
      <guid>https://dev.to/pooja_bhavani/empowering-ai-conversations-using-redis-8-as-a-real-time-brain-4bmi</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/redis-2025-07-23"&gt;Redis AI Challenge&lt;/a&gt;: Real-Time AI Innovators&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;DevPrompt AI is a smart, lightweight AI assistant designed for developers and tech learners. It's a web-based tool where users can ask any question — technical or non-technical — and get instant responses powered by OpenAI's large language models.&lt;/p&gt;

&lt;p&gt;Built using FastAPI + Redis + OpenAI API — this setup enables smart caching and real-time answers to your questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/dvlvkkue3/video/upload/v1754125729/Screen_Recording_2025-07-31_at_10.47.40_PM_g0iibh.mov" rel="noopener noreferrer"&gt;https://res.cloudinary.com/dvlvkkue3/video/upload/v1754125729/Screen_Recording_2025-07-31_at_10.47.40_PM_g0iibh.mov&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Redis 8
&lt;/h2&gt;

&lt;p&gt;It uses Redis 8 as the real-time caching layer to reduce API latency and cost by caching frequently asked queries and instantly serving them. This improves the user experience significantly by delivering results faster — especially for common queries.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user sends a question to the FastAPI backend&lt;/li&gt;
&lt;li&gt;The app checks Redis first (fastest!)&lt;/li&gt;
&lt;li&gt;If not found, it fetches the answer from OpenAI and caches it for future requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And everything is now testable via Swagger Docs for easy dev/test!&lt;/p&gt;
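&lt;p&gt;The cache-aside flow above can be sketched with redis-cli (the key name, value, and TTL are illustrative; the app derives the key from the question text):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Check the cache first; (nil) means a miss
redis-cli GET "answer:what-is-fastapi"

# 2. On a miss, call the LLM, then cache the response with an expiry
redis-cli SET "answer:what-is-fastapi" "FastAPI is a Python web framework..." EX 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;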

&lt;p&gt;If you're building with OpenAI + Redis, be sure to check your SDK version compatibility! &lt;/p&gt;

&lt;p&gt;Thanks for this exciting opportunity!&lt;/p&gt;

</description>
      <category>redischallenge</category>
      <category>devchallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>StyleHub – Building a Fashion Storefront with Bolt-Powered Speed</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Mon, 21 Jul 2025 05:17:30 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/stylehub-building-a-fashion-storefront-with-bolt-powered-speed-53ge</link>
      <guid>https://dev.to/pooja_bhavani/stylehub-building-a-fashion-storefront-with-bolt-powered-speed-53ge</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wlh"&gt;World's Largest Hackathon Writing Challenge&lt;/a&gt;: Building with Bolt.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;During the World's Largest Hackathon, I challenged myself to create a full-fledged e-commerce platform with a sleek design, responsive layout, and modern developer tooling. The result? &lt;strong&gt;StyleHub&lt;/strong&gt; — an online fashion store with a futuristic vibe, powered by React and styled with Tailwind CSS.&lt;/p&gt;

&lt;p&gt;This project helped me explore new UI design patterns, refine my frontend skills, and experience what it feels like to ship a modern digital storefront in a fast-paced environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;StyleHub&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;An e-commerce web app with the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully responsive homepage with hero banner and featured categories&lt;/li&gt;
&lt;li&gt;Navigation bar with routing to Home, Shop, Categories, Blog, Contact, and Login&lt;/li&gt;
&lt;li&gt;Visually engaging product thumbnails using high-resolution images&lt;/li&gt;
&lt;li&gt;Deployed on Netlify for global reach&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔗 Live Site&lt;br&gt;
(&lt;a href="https://stylehub-dev.netlify.app" rel="noopener noreferrer"&gt;https://stylehub-dev.netlify.app&lt;/a&gt;)  &lt;/p&gt;

&lt;p&gt;GitHub Repo&lt;br&gt;
(&lt;a href="https://github.com/pooja-bhavani/StyleHub" rel="noopener noreferrer"&gt;https://github.com/pooja-bhavani/StyleHub&lt;/a&gt;)&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡️ The Bolt Factor
&lt;/h2&gt;

&lt;p&gt;I used AI suggestions from tools like GitHub Copilot to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refine my component structure&lt;/li&gt;
&lt;li&gt;Implement responsive utility classes faster&lt;/li&gt;
&lt;li&gt;Optimize the navigation bar and image gallery layouts&lt;/li&gt;
&lt;li&gt;Debug tricky UI issues in minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools boosted my productivity and let me focus on creativity and UX.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity and speed in UI go a long way — Tailwind + component-driven design helped tremendously.&lt;/li&gt;
&lt;li&gt;Previewing UI changes in real-time while deploying on Netlify was a game-changer.&lt;/li&gt;
&lt;li&gt;The power of developer tools (like Bolt/Copilot) makes solo-hackathon projects feel collaborative.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;StyleHub is currently frontend-only, but I'm planning to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add product filtering, sorting, and search&lt;/li&gt;
&lt;li&gt;Connect it to a backend with user auth and product management&lt;/li&gt;
&lt;li&gt;Add cart and payment integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks to the hackathon community and DEV for this opportunity! &lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>wlhchallenge</category>
      <category>bolt</category>
      <category>ai</category>
    </item>
    <item>
      <title>Docker Series: Learn Docker from Scratch</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Mon, 14 Jul 2025 09:09:16 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/docker-series-learn-docker-from-scratch-57ad</link>
      <guid>https://dev.to/pooja_bhavani/docker-series-learn-docker-from-scratch-57ad</guid>
      <description>&lt;p&gt;This guide includes Docker installation, creating and optimizing Dockerfiles, pulling images, managing containers, working with volumes and networks, and troubleshooting commands — with real world examples using our base project — EasyShop&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Installation for Each Platform
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Linux (Ubuntu/Debian)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install docker.io -y
sudo systemctl enable docker
sudo systemctl start docker
docker --version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Mac &amp;amp; Windows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download Docker Desktop from the official docs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/get-started/" rel="noopener noreferrer"&gt;https://www.docker.com/get-started/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Docker Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65aawtpox7lzql6fkele.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65aawtpox7lzql6fkele.png" alt=" " width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Client&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Docker Client (the Docker CLI, or Command Line Interface) is how you communicate with the Docker daemon. This is where you run commands like docker run, docker pull, and docker build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Host&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is a physical or virtual machine that runs the Docker Engine. It is the main environment that provides the necessary resources (CPU, memory, storage, networking) to execute Docker containers.&lt;/p&gt;

&lt;p&gt;What is dockerd?&lt;br&gt;
dockerd is the Docker daemon which runs in the background. It listens for Docker API requests and handles tasks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building images&lt;/li&gt;
&lt;li&gt;Running and stopping containers&lt;/li&gt;
&lt;li&gt;Managing networks and volumes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check its status using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DockerHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is a cloud-based public registry where you can store your Docker images.&lt;/p&gt;

&lt;p&gt;You can push your images to Docker Hub and pull them when needed for development or deployment.&lt;/p&gt;

&lt;p&gt;Docker Hub integrates seamlessly with Continuous Integration and Continuous Delivery (CI/CD) pipelines and allows for automated builds and deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Pulling Docker Images
&lt;/h2&gt;

&lt;p&gt;These commands pull Docker images from artifact registries like Docker Hub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull nginx
docker pull mongo
docker pull &amp;lt;your dockerhub imagename&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Checking Stats of running Containers
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stats
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Images and Container Management
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Images&lt;/strong&gt;&lt;br&gt;
What are docker Images?&lt;/p&gt;

&lt;p&gt;Images are executable packages that contain everything needed to create containers: application code, dependencies (e.g. Node.js, Python), configuration (e.g. environment variables), and software packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container&lt;/strong&gt;&lt;br&gt;
What is container ?&lt;/p&gt;

&lt;p&gt;A container is a lightweight, standalone unit of software that includes everything required to run an application including code, runtime, and dependencies ensuring it runs quickly and reliably across different computing environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commands:&lt;/strong&gt;&lt;br&gt;
To get the list of Docker Images&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the list of only running containers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the list of all running and stopped containers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To Stop the container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker stop &amp;lt;container_id or container name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To Remove Container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm &amp;lt;container_id or container name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove the image&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi &amp;lt;image_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Troubleshooting Commands
&lt;/h2&gt;

&lt;p&gt;To Check container logs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker logs &amp;lt;container_id or container name&amp;gt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To Inspect containers/images&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect &amp;lt;container_id or image_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enter a running container's shell&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;container_id&amp;gt; /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check disk usage and clean up&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker system df
docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. Builder Cache &amp;amp; Clean Images
&lt;/h2&gt;

&lt;p&gt;Each instruction in the Dockerfile adds a layer to the image. The intermediate layers are generated during the build process. When we rebuild the image without making any changes to the Dockerfile or source code, Docker can reuse the cached layers to speed up the build process.&lt;/p&gt;

&lt;p&gt;To clean cache:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker builder prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To avoid caching during build&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker build --no-cache -t easyshop .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Dockerfile Instruction Explanation with Layer-by-Layer Breakdown &amp;amp; Caching Tips
&lt;/h2&gt;

&lt;p&gt;FROM node:18&lt;/p&gt;

&lt;p&gt;Here we define the base image, which includes everything needed to run the app (e.g. node:18 ships with Node.js pre-installed). This layer is cached unless you change the Node version. Always pin a specific version like node:18 to ensure consistency and to benefit from Docker’s layer caching.&lt;/p&gt;

&lt;p&gt;WORKDIR /app&lt;/p&gt;

&lt;p&gt;This sets the working directory inside the container to /app. All subsequent commands will run from this location. It doesn’t change often, so it's always cached and adds to Dockerfile readability and structure.&lt;/p&gt;

&lt;p&gt;COPY package*.json ./&lt;/p&gt;

&lt;p&gt;This step copies only package.json and package-lock.json. It’s one of the most important steps for caching because dependencies don’t change as often as your application code. Keeping this separate helps Docker cache the npm install layer effectively.&lt;/p&gt;

&lt;p&gt;RUN npm install&lt;/p&gt;

&lt;p&gt;This installs all the dependencies listed in the package files. If the previous layer is unchanged, Docker reuses this cached layer and skips re-installing, which significantly speeds up builds.&lt;/p&gt;

&lt;p&gt;COPY . .&lt;/p&gt;

&lt;p&gt;Now that dependencies are installed, we copy the rest of the app files from the host system into the container. With this ordering, if you change only your app code, Docker won’t repeat the installation step; only this layer gets rebuilt. That’s the whole idea behind leveraging Docker’s layer caching.&lt;/p&gt;

&lt;p&gt;EXPOSE 5000&lt;/p&gt;

&lt;p&gt;This exposes port 5000 so others know which port the app is running on. It doesn’t affect caching or container behaviour directly, but is useful for documentation and when using Docker Compose.&lt;/p&gt;

&lt;p&gt;CMD ["node", "index.js"]&lt;/p&gt;

&lt;p&gt;This is the default command that runs when the container starts and you can override this command. It usually remains unchanged and gets cached normally.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Volumes — Persistent Storage
&lt;/h2&gt;

&lt;p&gt;What are Volumes? Why Use Docker Volumes?&lt;/p&gt;

&lt;p&gt;Volumes provide a way to persist data outside the container's filesystem, so it survives container restarts and deletions and can be backed up or shared. They also allow multiple containers to share the same data.&lt;/p&gt;

&lt;p&gt;The default path for volumes on the host is /var/lib/docker/volumes/&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commands:&lt;/strong&gt;&lt;br&gt;
To create volume&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create &amp;lt;volume name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run a container with data mounted at /app/data inside it, using either the named volume created above or a bind-mounted host directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mount the named volume
docker run -v &amp;lt;volume name&amp;gt;:/app/data &amp;lt;image name&amp;gt;

# Or bind-mount a host directory
docker run -v $(pwd)/data:/app/data &amp;lt;image name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
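&lt;p&gt;To confirm where a named volume lives on the host (under /var/lib/docker/volumes/), you can list and inspect it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List all volumes, then show details (including the Mountpoint) for one of them
docker volume ls
docker volume inspect &amp;lt;volume name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;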



&lt;h2&gt;
  
  
  10. Docker Networks
&lt;/h2&gt;

&lt;p&gt;What are Docker Networks?&lt;/p&gt;

&lt;p&gt;Docker provides built-in networking features that enable secure and efficient communication between containers and with the host machine. Networks also provide isolation between containers, allowing you to control which containers can communicate with each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Networks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bridge Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the default network. Docker creates a virtual bridge on the host to connect containers, and you map container ports to host ports (e.g. with -p) to reach them from outside.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Host Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This network runs the container directly on the host's network stack, so the container uses the host's ports. With this network, no port mapping is needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Bridge Network (User-Defined Network)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This means creating your own isolated network and assigning it to containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overlay Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enables communication between containers running on different hosts; it is used in clustered environments such as Docker Swarm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commands:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create Custom Bridge Network&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create &amp;lt;network name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the containers on same network&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name &amp;lt;container name&amp;gt; --network &amp;lt;network name&amp;gt; &amp;lt;image name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
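&lt;p&gt;Containers on the same user-defined network can reach each other by container name. A quick way to verify this (the container, network, and image names below are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create appnet
docker run -d --name web --network appnet nginx

# Ping the "web" container by name from a throwaway container on the same network
docker run --rm --network appnet busybox ping -c 2 web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;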



&lt;h2&gt;
  
  
  Dockerfile Creation and Pushing Images to Docker Hub
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile — EasyShop Backend (Node.js + Express)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 5000

CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the Dockerfile and create an image from it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker build -t your-image-name:tag .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7gn2a4gw1hk77i49cw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7gn2a4gw1hk77i49cw3.png" alt=" " width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DockerHub
&lt;/h2&gt;

&lt;p&gt;To push images to Docker Hub, first log in&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login -u &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will prompt for your Docker Hub password (or a personal access token).&lt;/p&gt;

&lt;p&gt;To tag the image with your Docker Hub username before pushing it to Docker Hub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag local-image-name:tag dockerhub-username/repository-name:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Eg:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker tag easyshop-backend:latest poojabhavani08/easyshop-backend:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
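&lt;p&gt;After tagging, push the image to Docker Hub using the same repository name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push dockerhub-username/repository-name:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;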



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzovsth943gz3yg3niec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzovsth943gz3yg3niec.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y88igfcukabkz3m6nxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y88igfcukabkz3m6nxy.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  EasyShop — Base Demo App
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;EasyShop is our demo e-commerce app built with:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend:&lt;/strong&gt; React&lt;br&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js/Express&lt;br&gt;
&lt;strong&gt;Database:&lt;/strong&gt; MongoDB&lt;/p&gt;

&lt;p&gt;The Repository URL:&lt;/p&gt;

&lt;p&gt;Clone this Repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/iemafzalhassan/easyshop--demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use Docker to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerize each part&lt;/li&gt;
&lt;li&gt;Connect them using Docker networks&lt;/li&gt;
&lt;li&gt;Persist data with Docker volumes&lt;/li&gt;
&lt;li&gt;Practice cleanup, image builds, and container orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks all. Good luck out there!&lt;/p&gt;

&lt;p&gt;Follow for more such amazing content :)&lt;/p&gt;

&lt;p&gt;Happy Learning 😊&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>🖥️ Remote Equipment Request System – Build a Mini Admin Dashboard with HTML, CSS &amp; JS</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Tue, 08 Jul 2025 19:11:38 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/remote-equipment-request-system-build-a-mini-admin-dashboard-with-html-css-js-2b1</link>
      <guid>https://dev.to/pooja_bhavani/remote-equipment-request-system-build-a-mini-admin-dashboard-with-html-css-js-2b1</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/frontend/axero"&gt;Frontend Challenge: Office Edition sponsored by Axero, Holistic Webdev: Office Space&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqutb25m14rqltq7l66ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqutb25m14rqltq7l66ty.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I created a Remote Equipment Request System using only HTML, CSS, and JavaScript. &lt;/p&gt;

&lt;p&gt;This project is a web-based tool where employees can submit requests for work-related items (like laptops, chairs, monitors), and admins can review, approve, or reject those requests — all within a sleek browser interface.&lt;/p&gt;

&lt;p&gt;It also helped me:&lt;/p&gt;

&lt;p&gt;Practice DOM manipulation&lt;/p&gt;

&lt;p&gt;Use event listeners effectively&lt;/p&gt;

&lt;p&gt;Learn how to simulate data flow in a web app&lt;/p&gt;

&lt;h2&gt;
  
  
  Live Demo:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.tourl"&gt;&lt;/a&gt;&lt;a href="https://dev-pooja.netlify.app/" rel="noopener noreferrer"&gt;https://dev-pooja.netlify.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repo:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/pooja-bhavani/Dev-Challenge" rel="noopener noreferrer"&gt;https://github.com/pooja-bhavani/Dev-Challenge&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Journey
&lt;/h2&gt;

&lt;p&gt;In remote-first workplaces, handling equipment logistics can become chaotic. This mini dashboard simulates how a simple internal tool could make life easier for both employees and IT/admins — all without a complex backend. &lt;/p&gt;

&lt;p&gt;This was a solo project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I grant Axero a worldwide, royalty-free license to display this project for promotional or marketing purposes, with credit. Full ownership remains with me.&lt;/p&gt;

&lt;p&gt;Thanks to Axero and the DEV team for this cool challenge! &lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>frontendchallenge</category>
      <category>css</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Automating Daily DevOps Job Search &amp; Email Alerts</title>
      <dc:creator>Pooja Bhavani</dc:creator>
      <pubDate>Sat, 05 Jul 2025 06:43:29 +0000</pubDate>
      <link>https://dev.to/pooja_bhavani/automating-daily-devops-job-search-email-alerts-2o</link>
      <guid>https://dev.to/pooja_bhavani/automating-daily-devops-job-search-email-alerts-2o</guid>
      <description>&lt;p&gt;This is a submission for the&lt;a href="https://dev.tourl"&gt; Runner H "AI Agent Prompting" Challenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever spent countless hours manually checking LinkedIn or Indeed for the latest DevOps job postings? That’s exactly what inspired me to build an automated job-search assistant using Runner H, powered by Google Workspace and intelligent prompt engineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjevc0vqaldjgsk40fw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjevc0vqaldjgsk40fw1.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47jvmya1q49zqg6bsqq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47jvmya1q49zqg6bsqq3.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrv556xl97joeupldcod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrv556xl97joeupldcod.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe17pl4f4zbvmi27o9evm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe17pl4f4zbvmi27o9evm.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxulrxqzobf4m0dgqq2df.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxulrxqzobf4m0dgqq2df.png" alt=" " width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvhtjm6y3nx59bvtxofa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvhtjm6y3nx59bvtxofa.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Used Runner H&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used Runner H to build an autonomous AI agent that automates the process of finding and emailing the top DevOps job listings daily. Here's how I designed the workflow:&lt;/p&gt;

&lt;p&gt;• Job Search via Web Scraping&lt;/p&gt;

&lt;p&gt;The agent scrapes job portals like LinkedIn and Indeed using specific keywords related to DevOps roles (e.g., DevOps Engineer, SRE, Cloud Engineer).&lt;/p&gt;

&lt;p&gt;• Filtering and Formatting&lt;/p&gt;

&lt;p&gt;It selects the top 5 most relevant jobs based on recency and relevance. These are then formatted into a short, readable summary including:&lt;/p&gt;

&lt;p&gt;• Job title&lt;br&gt;
• Company name&lt;br&gt;
• Location&lt;br&gt;
• Application link&lt;/p&gt;

&lt;p&gt;• Automated Email Composition&lt;/p&gt;

&lt;p&gt;Using the Google Workspace integration, the agent composes an email containing these job summaries and sends it to the user’s inbox.&lt;/p&gt;

&lt;p&gt;• No Manual Involvement&lt;/p&gt;

&lt;p&gt;The entire flow runs automatically every day: no coding, no switching tabs, and no manual searching required.&lt;/p&gt;
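&lt;p&gt;The four stages above can be sketched as a small Python pipeline. This is only an illustration with canned data and hypothetical function names; Runner H itself is driven by natural-language prompts rather than code:&lt;/p&gt;

```python
# Illustrative sketch of the agent's four stages; function names are my own
# stand-ins, since Runner H is prompt-driven, not code-driven.

def scrape_job_portals(keywords):
    # Stand-in for the LinkedIn/Indeed scraping step; returns canned data here.
    return [
        {"title": "DevOps Engineer", "company": "TechCorp",
         "location": "Remote", "posted_hours_ago": 3},
        {"title": "Java Developer", "company": "OtherCo",
         "location": "On-site", "posted_hours_ago": 30},
    ]

def filter_and_rank(jobs):
    # Keep postings from the last 24 hours, newest first, top 5 only.
    fresh = [j for j in jobs if 24 >= j["posted_hours_ago"]]
    fresh.sort(key=lambda j: j["posted_hours_ago"])
    return fresh[:5]

def format_summary(jobs):
    # One line per job: title, company, location.
    return "\n".join(
        f"{i}. {j['title']} at {j['company']} ({j['location']})"
        for i, j in enumerate(jobs, 1)
    )

summary = format_summary(filter_and_rank(scrape_job_portals(["DevOps"])))
print(summary)
```

&lt;p&gt;With the canned data, only the fresh DevOps posting survives the 24-hour filter; the stale listing is dropped before the summary is built.&lt;/p&gt;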

&lt;p&gt;Let me walk you through how I created an agent that:&lt;/p&gt;

&lt;p&gt;• Analyzes my resume&lt;br&gt;
• Searches job portals daily&lt;br&gt;
• Picks the top 5 most relevant jobs&lt;br&gt;
• Emails me a beautifully formatted summary every morning&lt;/p&gt;

&lt;h2&gt;What Inspired This Project?&lt;/h2&gt;

&lt;p&gt;I was participating in the Runner H Prompt Engineering Challenge, which encourages creators to build helpful automation agents using prompt chaining and productivity tools.&lt;/p&gt;

&lt;p&gt;My use case: Automate my DevOps job search workflow so I don’t miss fresh openings every morning.&lt;/p&gt;

&lt;h2&gt;Step-by-Step Breakdown&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Collecting User Input&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step was to gather user details:&lt;/p&gt;

&lt;p&gt;• Full Name&lt;br&gt;
• Email Address&lt;br&gt;
• Preferred job roles and locations&lt;br&gt;
• Work preference (Remote/On-site/Hybrid)&lt;br&gt;
• Resume file (PDF)&lt;/p&gt;

&lt;p&gt;Runner H prompts the user for this input at the beginning of the automation.&lt;/p&gt;
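&lt;p&gt;The profile gathered in this step maps naturally onto a small record. A minimal sketch (the field names are my own, not Runner H's schema):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class JobSeekerProfile:
    # Fields mirror the inputs Runner H prompts for at the start of the run.
    full_name: str
    email: str
    preferred_roles: list
    preferred_locations: list
    work_preference: str   # "Remote", "On-site", or "Hybrid"
    resume_path: str       # path to the uploaded PDF

profile = JobSeekerProfile(
    full_name="Pooja Bhavani",
    email="poojabhavani@gmail.com",
    preferred_roles=["DevOps Engineer", "Site Reliability Engineer"],
    preferred_locations=["Remote", "Bangalore"],
    work_preference="Remote",
    resume_path="resume.pdf",
)
```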

&lt;p&gt;&lt;strong&gt;Step 2: Analyzing the Resume&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the resume is uploaded, Runner H uses its language understanding to extract a structured profile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "name": "Pooja Bhavani",
  "keywords": ["DevOps", "Docker", "AWS", "Kubernetes", "CI/CD", "Terraform"],
  "roles": ["DevOps Engineer", "Site Reliability Engineer"],
  "locations": ["Remote", "Bangalore"],
  "email": "poojabhavani@gmail.com"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This data becomes the foundation of the job search.&lt;/p&gt;
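&lt;p&gt;Outside of Runner H, the same extraction could be approximated with a plain keyword scan over the resume text. This is a naive sketch with an illustrative skill list; Runner H relies on language understanding rather than a fixed dictionary:&lt;/p&gt;

```python
# Naive keyword scan over extracted resume text; only an approximation of
# what a language model does with the uploaded PDF.

KNOWN_SKILLS = ["DevOps", "Docker", "AWS", "Kubernetes", "CI/CD", "Terraform"]

def extract_keywords(resume_text):
    # Collect every known skill mentioned anywhere in the text.
    lowered = resume_text.lower()
    return [skill for skill in KNOWN_SKILLS if skill.lower() in lowered]

sample_text = "3 years running Docker and Kubernetes workloads on AWS with Terraform."
keywords = extract_keywords(sample_text)
print(keywords)  # ['Docker', 'AWS', 'Kubernetes', 'Terraform']
```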

&lt;p&gt;&lt;strong&gt;Step 3: Searching Job Platforms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runner H searches jobs from:&lt;/p&gt;

&lt;p&gt;• LinkedIn Jobs&lt;br&gt;
• Indeed&lt;/p&gt;

&lt;p&gt;It applies filters for:&lt;/p&gt;

&lt;p&gt;• Job postings within the last 24 hours&lt;br&gt;
• 60%+ keyword match&lt;br&gt;
• Matching job title &amp;amp; location&lt;/p&gt;

&lt;p&gt;The output looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "title": "DevOps Engineer",
    "company": "TechCorp",
    "location": "Remote",
    "link": "https://www.linkedin.com/jobs/view/123456"
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
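&lt;p&gt;The 60%+ keyword-match filter can be expressed as a simple ratio. This is my own scoring sketch; Runner H's internal relevance logic isn't exposed:&lt;/p&gt;

```python
def keyword_match_ratio(job_description, keywords):
    # Fraction of the candidate's resume keywords that appear in the posting.
    lowered = job_description.lower()
    hits = sum(1 for k in keywords if k.lower() in lowered)
    return hits / len(keywords)

resume_keywords = ["DevOps", "Docker", "AWS", "Kubernetes", "CI/CD", "Terraform"]
posting = ("DevOps Engineer needed: Docker, Kubernetes, AWS, "
           "Terraform and CI/CD pipelines.")

score = keyword_match_ratio(posting, resume_keywords)
passes_filter = score >= 0.6  # the 60% threshold from the filter above
print(score)  # 1.0 for this posting
```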



&lt;p&gt;&lt;strong&gt;Step 4: Emailing the Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runner H formats the job results into an email and sends it at 9:00 AM IST daily.&lt;/p&gt;

&lt;p&gt;✉️ Email Template:&lt;/p&gt;

&lt;p&gt;Subject: Top 5 Job Matches for You Today&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Hi {{user_name}},

Here are your top 5 DevOps job matches for {{today}}:

1. **{{job1_title}} – {{job1_company}}**
   📍 Location: {{job1_location}}  
   🔗 [Apply Now]({{job1_link}})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... and so on.&lt;/p&gt;

&lt;p&gt;🌐 Platforms Searched: LinkedIn, Indeed&lt;br&gt;&lt;br&gt;
📅 Run Date: {{today}}&lt;/p&gt;

&lt;p&gt;Good luck with your job search!  &lt;/p&gt;

&lt;h2&gt;Why This Works&lt;/h2&gt;

&lt;p&gt;This automation:&lt;/p&gt;

&lt;p&gt;• Saves 30+ minutes a day of manually checking job boards&lt;br&gt;
• Surfaces relevant jobs before other applicants spot them&lt;br&gt;
• Lets me focus on interview preparation, not searching&lt;/p&gt;

&lt;h2&gt;Final Thoughts&lt;/h2&gt;

&lt;p&gt;I used Runner H as a no-code/low-code platform to stitch this all together using natural language.&lt;/p&gt;

&lt;p&gt;It’s a great use case for:&lt;/p&gt;

&lt;p&gt;• Job seekers&lt;br&gt;
• Career coaches&lt;br&gt;
• Recruiting teams&lt;/p&gt;

&lt;p&gt;If you’d like to try this for yourself, DM me or comment below 💬&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>runnerhchallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
