<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wassim Soltani</title>
    <description>The latest articles on DEV Community by Wassim Soltani (@wsoltani).</description>
    <link>https://dev.to/wsoltani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png</url>
      <title>DEV Community: Wassim Soltani</title>
      <link>https://dev.to/wsoltani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wsoltani"/>
    <language>en</language>
    <item>
      <title>An Interview is a Conversation You Can Lead. Here's How.</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Sun, 14 Sep 2025 20:00:03 +0000</pubDate>
      <link>https://dev.to/wsoltani/an-interview-is-a-conversation-you-can-lead-heres-how-1i1h</link>
      <guid>https://dev.to/wsoltani/an-interview-is-a-conversation-you-can-lead-heres-how-1i1h</guid>
      <description>&lt;p&gt;Funny how we spend years learning to code, but no one really teaches us how to handle an interview. I've had a lot of people reach out for advice recently, from juniors to experienced engineers, and the pattern is always the same: they're worried about being put on the spot.&lt;/p&gt;

&lt;p&gt;The fear of getting hit with a trick question or not knowing an answer is real. But having been on both sides of the table, as an interviewee and as the one doing the hiring for my company, Blueblood, I've learned that a successful interview isn't about having a perfect score.&lt;/p&gt;

&lt;p&gt;It's a structured conversation, and you can be the one to guide it. I've put together my playbook for how to prepare, stay in control, and actually nail it.&lt;/p&gt;

&lt;h3&gt;The Game Plan&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Do Your Homework (No, Really)&lt;/li&gt;
&lt;li&gt;Prepare Your "Cheat Sheet"&lt;/li&gt;
&lt;li&gt;Talk About the Money&lt;/li&gt;
&lt;li&gt;It's Your Conversation&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;a id="section-1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3onr4crgswaa9a3skzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3onr4crgswaa9a3skzc.png" alt="Cat doing homework" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 1: Do Your Homework (No, Really)&lt;/h3&gt;

&lt;p&gt;The single biggest mistake developers make is shotgun-blasting applications and then scrambling to figure out what a company does five minutes before the call (if at all). The goal of research is to determine if you are genuinely excited about the opportunity and to distinguish yourself from the hundreds of other applicants.&lt;/p&gt;

&lt;p&gt;Your research should start the moment you decide to apply. Here's your checklist:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Official Website (The Source of Truth):&lt;/strong&gt; This is your first stop.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Values/Mission Page:&lt;/strong&gt; Do their values resonate with you?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Products Page:&lt;/strong&gt; What are they building? Do you find it interesting?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blog/News Section:&lt;/strong&gt; What are their recent announcements or technical challenges?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. LinkedIn &amp;amp; Glassdoor (The Unfiltered View):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Company Profiles:&lt;/strong&gt; Get a feel for their size, recent posts, and company culture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reviews &amp;amp; Salary Data:&lt;/strong&gt; Read employee reviews and check for self-reported salary ranges for similar roles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. The Final Google Search (The Catch-All):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for news articles, press releases, or any other public information that might have slipped through the cracks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The outcome of this phase should be a simple document of raw notes. This document is your material for the next step.&lt;/p&gt;




&lt;p&gt;&lt;a id="section-2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimvihow1anlk5840kme5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimvihow1anlk5840kme5.png" alt="Cat with a cheat sheet" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 2: Prepare Your "Cheat Sheet"&lt;/h3&gt;

&lt;p&gt;Your research document is a messy brain dump of raw information. You don't want to be frantically scrolling through that during the call. The goal of this step is to condense that research into a clean, one-page "cheat sheet" that you can glance at to keep you focused and confident.&lt;/p&gt;

&lt;p&gt;This shouldn't be treated as a script. It's a set of talking points organized into a few essential sections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Your Tailored Introduction (The 30-Second Pitch)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first question is often "Tell me about yourself." This isn't an invitation to read your resume out loud. They already have it. It's your chance to deliver a concise, powerful pitch that proves you're the right person for &lt;em&gt;this specific job&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A good way to structure it is in 3-4 sentences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Who you are:&lt;/strong&gt; "I'm a fullstack engineer specializing in..."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Alignment:&lt;/strong&gt; "...with experience in React and TypeScript, which I saw was a key requirement for this role."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value Alignment:&lt;/strong&gt; "...I was really drawn to your company's mission to [mention something from your research], and I'm looking for a role where I can build products that have a real impact."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. "Why I'm a Good Fit" (Your Talking Points)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's a helpful way to organize your thoughts. Try a simple two-column list. On the left, list the top 3-4 requirements from the job description. On the right, map one of your projects or skills directly to that requirement.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company Needs&lt;/th&gt;
&lt;th&gt;My Experience&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Experience with React&lt;/td&gt;
&lt;td&gt;Built a full admin dashboard with React, TypeScript, and Zustand.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI/ML Interest&lt;/td&gt;
&lt;td&gt;Developed a multi-tenant AI chatbot on the Cloudflare stack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cares about UX/Design&lt;/td&gt;
&lt;td&gt;My portfolio and projects use a clean, modern design system (shadcn/ui).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This helps you have specific, evidence-backed answers ready when they ask about your skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Salary &amp;amp; Logistics (The Practicalities)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's always a good idea to have your numbers ready. It shows you're a professional who knows their worth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Their Range:&lt;/strong&gt; &lt;code&gt;£60,000 - £100,000&lt;/code&gt; (from your research)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Desired Range:&lt;/strong&gt; &lt;code&gt;£80,000 - £90,000&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Walk-Away Number:&lt;/strong&gt; &lt;code&gt;£75,000&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Question for them:&lt;/strong&gt; "What is the envisioned salary range for this position?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, this is a normal part of the process. It's not greedy to talk about money; it's why you're there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. "My Questions for Them" (Show Your Interest)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Always have questions prepared. It shows you're engaged and genuinely interested, and it's your best chance to find out if the company is the right fit for you.&lt;/p&gt;

&lt;p&gt;Here are a few good starting points and what they help you achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;"What are the biggest technical challenges the team is currently facing?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Why you're asking this:&lt;/em&gt; It shows you're a problem-solver, not a ticket-closer. You're already thinking about how you can provide value.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;What it helps you learn:&lt;/em&gt; It gives you a real preview of the job's day-to-day challenges and helps you decide if they're problems you'd be excited to work on.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;"Could you tell me more about the day-to-day responsibilities of this role?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Why you're asking this:&lt;/em&gt; This signals that you're thinking practically about how you would fit into the team and its workflow.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;What it helps you learn:&lt;/em&gt; It cuts through the job description jargon. You'll find out if the job is mostly new features, bug fixes, or maintenance, and what the team's agile process is really like.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;"What does the typical onboarding process look like for a new engineer?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Why you're asking this:&lt;/em&gt; This is a very smart question for a junior. It shows you're serious about getting up to speed and contributing effectively.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;What it helps you learn:&lt;/em&gt; The answer reveals how much a company invests in its new hires. A good answer will mention mentorship, documentation, and a structured plan for your first few weeks. A vague answer can be a red flag.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;With a one-page cheat sheet like this, you'll walk into the interview feeling prepared, confident, and ready to guide the conversation.&lt;/p&gt;




&lt;p&gt;&lt;a id="section-3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrzd1jkdkhckc5rxpsok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrzd1jkdkhckc5rxpsok.png" alt="Cat with bling" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Step 3: Talk About the Money&lt;/h3&gt;

&lt;p&gt;Sooner or later, the salary question will come up. Don't treat it like a taboo. This is a professional conversation about your livelihood, and companies expect it. Getting shy or closed off shows a lack of confidence.&lt;/p&gt;

&lt;p&gt;Your research from Step 1 is your anchor here. Here’s how to handle it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If the job post included a range:&lt;/strong&gt; A great response is, "The range you posted aligns with my research and my expectations for the role."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If the post did &lt;em&gt;not&lt;/em&gt; include a range:&lt;/strong&gt; The best move is to let them provide the first number. You can ask confidently and politely: "I'm still learning about the full scope of the role, so I'd be interested to hear what the envisioned salary range for this position is."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;If you must give a number first:&lt;/strong&gt; Don't give a single number. Based on your research of similar roles, provide a confident range. For example: "Based on my research for similar roles in this market, I'm targeting a range of $X to $Y."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember, this is an initial check to make sure you're in the same ballpark. As long as your number is reasonable and based on research, it won't disqualify you. The final offer comes later.&lt;/p&gt;




&lt;p&gt;&lt;a id="section-4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Final Thoughts: It's Your Conversation&lt;/h3&gt;

&lt;p&gt;At the end of the day, remember this: an interview isn't a test you pass or fail. It's a two-way conversation to determine if there is a mutual fit. They are evaluating you, but you are also evaluating them.&lt;/p&gt;

&lt;p&gt;This playbook isn't a script to memorize; it's about giving you the structure to walk in prepared and confident. Don't just hope you know the right answers. Be ready to show your value and to lead the conversation.&lt;/p&gt;

&lt;p&gt;Be curious, be confident, and be yourself. The right opportunity will recognize your value. Good luck!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc91tox0wjftlbx4rj1q5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc91tox0wjftlbx4rj1q5.png" alt="Confident cat" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>career</category>
      <category>interview</category>
    </item>
    <item>
      <title>I Built a Game to Understand Fly.io's Orchestrator: flyd Operator Sim!</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Tue, 03 Jun 2025 01:02:19 +0000</pubDate>
      <link>https://dev.to/wsoltani/i-built-a-game-to-understand-flyios-orchestrator-flyd-operator-sim-35ca</link>
      <guid>https://dev.to/wsoltani/i-built-a-game-to-understand-flyios-orchestrator-flyd-operator-sim-35ca</guid>
      <description>&lt;p&gt;I've recently been deep diving into Fly.io's infrastructure, particularly their &lt;code&gt;flyd&lt;/code&gt; orchestration server and the &lt;code&gt;superfly/fsm&lt;/code&gt; library that powers its stateful operations. To truly grasp the operational challenges, I built an interactive simulation game: &lt;strong&gt;flyd Operator Sim&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Play it here: &lt;strong&gt;&lt;a href="https://flydsim.wsoltani.com/" rel="noopener noreferrer"&gt;https://flydsim.wsoltani.com/&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
Repo: &lt;strong&gt;&lt;a href="https://github.com/wSoltani/flyd-operator-sim" rel="noopener noreferrer"&gt;https://github.com/wSoltani/flyd-operator-sim&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzefs7cfuxsr1gpawnifx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzefs7cfuxsr1gpawnifx.png" alt="flyd Operator Sim Cover" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;🤔 Why Build a Simulation?&lt;/h3&gt;

&lt;p&gt;Fly.io's platform is impressive. Reading their insightful &lt;strong&gt;&lt;a href="https://fly.io/blog/" rel="noopener noreferrer"&gt;blog posts&lt;/a&gt;&lt;/strong&gt; and their public &lt;strong&gt;&lt;a href="https://fly.io/infra-log/" rel="noopener noreferrer"&gt;infra-log&lt;/a&gt;&lt;/strong&gt; revealed the complexities of &lt;code&gt;flyd&lt;/code&gt;. The &lt;code&gt;superfly/fsm&lt;/code&gt; library also highlighted their focus on robust state management.&lt;/p&gt;

&lt;p&gt;I wanted to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What kind of incidents can actually occur on a worker node running flyd?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How does an operator diagnose and respond to these issues?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What's the impact of different actions on system health and application uptime?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do Finite State Machines (FSMs) play a role in managing complex operations like machine migrations, even if it's abstracted away from the operator in a crisis?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building a sim felt like the best way to learn.&lt;/p&gt;




&lt;h3&gt;✨ Introducing: flyd Operator Sim!&lt;/h3&gt;

&lt;p&gt;In &lt;code&gt;flyd Operator Sim&lt;/code&gt;, you're an on-call engineer for a Fly.io region. Your goal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitor&lt;/strong&gt; worker health (CPU, memory, &lt;code&gt;flyd&lt;/code&gt; status).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Respond&lt;/strong&gt; to incidents like &lt;code&gt;flyd&lt;/code&gt; stalls, &lt;code&gt;containerd&lt;/code&gt; sync issues, network partitions, and storage corruption (many inspired by the infra-log).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Act&lt;/strong&gt; using tools like &lt;code&gt;flyd&lt;/code&gt; restarts, worker drains, log inspection, and (risky!) FSM overrides.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain Uptime&lt;/strong&gt; over a simulated period.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90v3eoytftk4ljzunidh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90v3eoytftk4ljzunidh.png" alt="flyd errors" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Game Objective &amp;amp; Progression:&lt;/strong&gt;&lt;br&gt;
Your main goal is to &lt;strong&gt;maintain high application uptime across your workers for 7 simulated days.&lt;/strong&gt; Each day lasts about 5 minutes in real time. To make things more interesting, you start with one worker, and an additional worker is added each day, up to a maximum of four, increasing your responsibilities and potential points of failure!&lt;/p&gt;




&lt;h3&gt;🎓 What I Learned&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Orchestration is Complex:&lt;/strong&gt; Simulating even a part of it showed me the immense challenge of managing global infrastructure.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;State Management is Crucial (and complicated):&lt;/strong&gt; The game reinforced how vital accurate state is for &lt;code&gt;flyd&lt;/code&gt; and why a solid FSM library like &lt;code&gt;superfly/fsm&lt;/code&gt; is essential, especially seeing potential &lt;code&gt;containerd&lt;/code&gt; desync issues.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Observability is Non-Negotiable:&lt;/strong&gt; Good metrics and logs (which the game simulates access to) are critical for diagnosing issues, a theme evident in Fly.io's own infra-log.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Operational Trade-offs:&lt;/strong&gt; The sim touches on the pressure of quick fixes versus safer, slower solutions.&lt;/li&gt;
&lt;/ol&gt;
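The `superfly/fsm` library itself is a Go package, so purely as a hypothetical illustration of the underlying idea (not its actual API), here is a minimal finite state machine in Python where every transition must be explicitly allowed before state changes, much like the guarded steps of a machine migration:

```python
# Hypothetical sketch of an explicit-transition FSM for a machine
# migration. State names and transitions are illustrative only.
class MigrationFSM:
    # Each state maps to the set of states it may legally move to.
    TRANSITIONS = {
        "created": {"snapshotting"},
        "snapshotting": {"transferring", "failed"},
        "transferring": {"booting", "failed"},
        "booting": {"done", "failed"},
    }

    def __init__(self):
        self.state = "created"
        self.history = [self.state]

    def advance(self, next_state):
        allowed = self.TRANSITIONS.get(self.state, set())
        if next_state not in allowed:
            raise ValueError(f"illegal transition {self.state} -> {next_state}")
        self.state = next_state
        self.history.append(next_state)

fsm = MigrationFSM()
for step in ["snapshotting", "transferring", "booting", "done"]:
    fsm.advance(step)
print(fsm.history)
```

The payoff of this pattern is that a desync (say, `containerd` reporting a state the FSM never authorized) surfaces as an immediate, diagnosable error instead of silent corruption.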




&lt;h3&gt;🤓 Tech Stack&lt;/h3&gt;

&lt;p&gt;Built with: Next.js, TypeScript, Tailwind CSS, Radix UI (shadcn), and React Context.&lt;/p&gt;




&lt;h3&gt;💭 Try It &amp;amp; Share Your Thoughts!&lt;/h3&gt;

&lt;p&gt;This was a personal learning project, but I hope others find it useful or fun.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Play: &lt;strong&gt;&lt;a href="https://flydsim.wsoltani.com/" rel="noopener noreferrer"&gt;https://flydsim.wsoltani.com/&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Repo: &lt;strong&gt;&lt;a href="https://github.com/wSoltani/flyd-operator-sim" rel="noopener noreferrer"&gt;https://github.com/wSoltani/flyd-operator-sim&lt;/a&gt;&lt;/strong&gt; (give it a 🌟!)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What incidents should I add next? How can it be a better learning tool? Let me know!&lt;/p&gt;

&lt;p&gt;Thanks for reading 💖&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q5a18rzxxycfwejo42y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q5a18rzxxycfwejo42y.png" alt="flyd mastery badge" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>gamedev</category>
      <category>learning</category>
      <category>fly</category>
    </item>
    <item>
      <title>The Tiny Cat Guide to AI #4: LLM Evaluation – Is Your AI on Catnip? (Spotting Hallucinations)</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Fri, 23 May 2025 21:08:32 +0000</pubDate>
      <link>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2</link>
      <guid>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2</guid>
      <description>&lt;p&gt;Welcome back to &lt;strong&gt;The Tiny Cat Guide to AI&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;We've journeyed through &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-1-prompt-engineering-directing-the-ai-ballet-238c"&gt;Prompt Engineering&lt;/a&gt;, dived into &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g"&gt;Generative AI&lt;/a&gt;, and equipped our AI cats with a library for &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph"&gt;RAG&lt;/a&gt;. But what happens when our well-intentioned AI, even with good data, confidently tells us something... that's completely bonkers? 🤪&lt;/p&gt;

&lt;p&gt;This brings us to a crucial aspect of building trustworthy AI: &lt;strong&gt;LLM Evaluation&lt;/strong&gt; and tackling those pesky &lt;strong&gt;"Hallucinations"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To make these concepts more approachable, our trusty feline guides are back! This time, they're helping us understand how we measure AI responses and try to trick them into revealing their weaknesses:&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/wasoltani/embed/jEPNJYd?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what's the deal with AI "hallucinations" and evaluation, as seen by our cloud of cats?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine our smart RAG cat (from Post #3) gives an answer. But did it truly understand the facts, or did it just invent something plausible-sounding from the "fluffy cloud" of data it was trained on? These inventions are "hallucinations." LLM Evaluation is how we meticulously check our AI's work, ensuring its answers are truthful and helpful, not just creative fluff. 🧐&lt;/p&gt;

&lt;p&gt;As our visual story shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If one cat says "meow," did the others understand? We might count matching "meows" (like &lt;strong&gt;BLEU&lt;/strong&gt; for precision).&lt;/li&gt;
&lt;li&gt;What if a cat says "muwr?" Is that still close enough? (Like &lt;strong&gt;ROUGE&lt;/strong&gt; for capturing the gist).&lt;/li&gt;
&lt;li&gt;And what if we try to trick a cat by saying "woof woof!" – will any cats bark back? That's like &lt;strong&gt;Red Teaming&lt;/strong&gt; – trying to find unexpected or wrong responses!&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Why is this so critical?&lt;/strong&gt;&lt;br&gt;
We need to be able to &lt;em&gt;trust&lt;/em&gt; our AI. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My &lt;strong&gt;&lt;a href="https://wsoltani.github.io/ai-chat/" rel="noopener noreferrer"&gt;portfolio chatbot&lt;/a&gt;&lt;/strong&gt; must accurately represent my experience from its RAG sources, not invent project details.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://quickplan.blueblood.tech/" rel="noopener noreferrer"&gt;Quickplan&lt;/a&gt;&lt;/strong&gt;, my AI tool for processing meeting notes, needs to generate reliable summaries and action plans, not flights of fancy based on misinterpretations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mistakes here undermine their value and our trust in the system.&lt;/p&gt;

&lt;p&gt;Evaluating LLMs means tackling challenges like defining what "good" actually means (it's often subjective), looking beyond simple accuracy towards relevance and overall coherence, and dealing with the "black box" nature of some complex LLM behaviors.&lt;/p&gt;




&lt;p&gt;Here are some key insights to guide AI towards truthfulness and robust performance:&lt;/p&gt;

&lt;p&gt;🧠 &lt;strong&gt;Define Success &amp;amp; Measure It:&lt;/strong&gt; Before you even start building, decide what a good output looks like for your specific use case. Is it factual accuracy? Successful task completion? User satisfaction? Use appropriate metrics, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BLEU (Bilingual Evaluation Understudy):&lt;/strong&gt; This checks the precision of AI-generated text against human-written examples by counting matching sequences of words (n-grams). It's often used to evaluate the quality of machine translation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROUGE (Recall-Oriented Understudy for Gisting Evaluation):&lt;/strong&gt; This evaluates how well AI-generated summaries capture key information from human-written references by looking at overlapping word sequences, focusing on recall (i.e., how much of the important stuff was covered).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
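To make these two ideas concrete, here is a deliberately simplified sketch: unigram-only, single reference. Real BLEU also combines higher-order n-grams with a brevity penalty, and ROUGE comes in several variants, so treat this as the intuition rather than the metric:

```python
# Simplified unigram precision (BLEU's core idea) and unigram recall
# (ROUGE-1's core idea) against a single human reference.
from collections import Counter

def overlap(candidate, reference):
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Clipped matches: a candidate word counts at most as often
    # as it appears in the reference.
    matches = sum(min(cand[w], ref[w]) for w in cand)
    precision = matches / max(sum(cand.values()), 1)  # BLEU-flavoured
    recall = matches / max(sum(ref.values()), 1)      # ROUGE-flavoured
    return precision, recall

p, r = overlap("the cat says meow", "the tiny cat says meow")
print(round(p, 2), round(r, 2))  # prints 1.0 0.8
```

Every candidate word appears in the reference (precision 1.0), but the candidate missed "tiny", so it only covers 80% of the reference (recall 0.8) — the same meow-matching our cats were doing.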

&lt;p&gt;Alongside these automated scores, &lt;em&gt;thorough human review is almost always indispensable&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;🛡️ &lt;strong&gt;Stress-Test with "Red Teaming":&lt;/strong&gt; Don't just test for expected behavior; proactively try to make your AI fail or hallucinate. This technique, known as "Red Teaming" or adversarial testing, involves crafting inputs specifically designed to expose weaknesses, biases, or edge cases in your AI's understanding and response generation.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Keep Grounding Strong (RAG Revisited):&lt;/strong&gt; As we discussed in the RAG post, providing solid, factual context is a primary defense against hallucinations. Beyond that, explicitly instruct your LLM to cite its sources for claims or to clearly state when it doesn't have enough information to answer.&lt;/p&gt;

&lt;p&gt;🔄 &lt;strong&gt;Iterate with Human Feedback:&lt;/strong&gt; There's often no substitute for human judgment. Collect feedback on AI outputs, review them carefully, and use these insights to refine your prompts, the data the AI uses, or even to fine-tune the models themselves. This continuous improvement cycle is crucial for building robust and trustworthy AI.&lt;/p&gt;

&lt;p&gt;📡 &lt;strong&gt;Monitor, Monitor, Monitor:&lt;/strong&gt; AI performance isn't static; it can drift over time as data changes, user interactions evolve, or underlying models are updated. Continuously watch your live systems for any unexpected behavior or emerging patterns of hallucinations so you can address them quickly.&lt;/p&gt;




&lt;p&gt;Tackling hallucinations and robustly evaluating LLMs are key to building AI we can truly rely on. It's about moving beyond just getting an AI to work, to ensuring it works &lt;em&gt;well, truthfully, and reliably&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What are your go-to methods for checking AI outputs, or what's the wildest AI hallucination you've ever encountered in your projects? Share your stories and insights in the comments! 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>llm</category>
      <category>learning</category>
    </item>
    <item>
      <title>The Tiny Cat Guide to AI #3: RAG – Tiny Librarians</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Thu, 22 May 2025 17:41:05 +0000</pubDate>
      <link>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph</link>
      <guid>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph</guid>
      <description>&lt;p&gt;Welcome back to &lt;strong&gt;The Tiny Cat Guide to AI&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;In our journey so far, we've explored Prompt Engineering – &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-1-prompt-engineering-directing-the-ai-ballet-238c"&gt;Directing the AI Ballet&lt;/a&gt; and peeked inside Generative AI – &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g"&gt;What's Inside the Magic Box of Cats?&lt;/a&gt;. Now, let's tackle a common challenge: how do we get AI to give answers that are not just smart, but also deeply informed by specific, relevant documents it wasn't originally trained on?&lt;/p&gt;

&lt;p&gt;The answer often lies in a powerful technique called &lt;strong&gt;Retrieval Augmented Generation (RAG)&lt;/strong&gt;! 💡&lt;/p&gt;

&lt;p&gt;To illustrate how RAG works, I've summoned our feline friends once more – this time as diligent tiny librarians:&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/wasoltani/embed/OPVLLYO?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what's RAG all about, as told by our tiny cat librarians?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine your AI has access to a giant library filled with specific knowledge (like all the world's tiny cat facts!).&lt;/p&gt;

&lt;p&gt;When you ask a question (say, about "fluffy kittens"), instead of just relying on its general, pre-existing knowledge, the AI first dispatches tons of tiny librarian cats. These cats zoom through the shelves, find the most relevant scrolls of information related to "fluffy kittens," and bring them back.&lt;/p&gt;

&lt;p&gt;Then, a super smart "reader" cat (our LLM) carefully reads these specific scrolls and uses that freshly retrieved information to answer your question accurately and contextually.&lt;/p&gt;

&lt;p&gt;This "retrieve first, then augment the answer" approach is the heart of RAG. It helps the AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stay factual and reduce "hallucinations."&lt;/li&gt;
&lt;li&gt;Use up-to-date information it wasn't originally trained on.&lt;/li&gt;
&lt;li&gt;Access private or domain-specific knowledge (⚠️ always be careful with data privacy).&lt;/li&gt;
&lt;/ul&gt;
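Here is a toy sketch of that retrieve-then-augment loop. The naive word-overlap scoring stands in for the vector/embedding search a real RAG pipeline would use, and the assembled prompt would then be sent to the LLM (the "reader cat"):

```python
# Toy RAG: score documents by word overlap with the query, keep the
# best ones, and build a grounded prompt from them. Documents and
# scoring are illustrative placeholders only.
documents = [
    "Fluffy kittens need gentle brushing every week.",
    "Tiny cats sleep up to sixteen hours a day.",
    "Fluffy kittens are born with blue eyes.",
]

def retrieve(query, docs, top_k=2):
    q_words = set(query.lower().split())
    scored = [(len(q_words.intersection(d.lower().split())), d) for d in docs]
    scored.sort(reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("fluffy kittens", documents))
```

Asking about "fluffy kittens" pulls in only the two kitten scrolls; the unrelated sleep fact never reaches the model, which is exactly the grounding effect that keeps answers factual.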




&lt;p&gt;Building AI features often means implementing RAG. For instance, I've applied it to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;My portfolio chatbot:&lt;/strong&gt; It uses a RAG setup (leveraging Cloudflare AutoRAG) to sift through documents detailing my professional background, skills, and projects to answer your questions. You can even &lt;a href="https://wsoltani.github.io/ai-chat/" rel="noopener noreferrer"&gt;chat with it here&lt;/a&gt; to see it in action!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI enhancements for an E-commerce platform:&lt;/strong&gt; RAG leverages embedded product details for semantic search, powering a more helpful Q&amp;amp;A chatbot and product recommendations that reflect a deeper understanding of user queries.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Working with RAG has definitely had its share of "aha!" moments and tricky bits:&lt;/p&gt;

&lt;p&gt;😵‍💫 Ensuring Retrieval Relevance: Making sure the "librarian cats" fetch the exact right scrolls is crucial. Irrelevant documents lead to poor answers.&lt;/p&gt;

&lt;p&gt;🧠 Context Window Constraints: Fitting all the crucial info from retrieved scrolls into the AI's limited working memory (the "reader cat's" attention span) can be a puzzle.&lt;/p&gt;

&lt;p&gt;⚖️ Synthesizing, Not Just Repeating: Guiding the AI to weave the retrieved info into a coherent answer, rather than just copying chunks verbatim, requires careful prompting.&lt;/p&gt;

&lt;p&gt;🖼️ Knowledge Base Management: Keeping the "library" (source documents) fresh, well-organized, and accurately indexed is an ongoing task.&lt;/p&gt;




&lt;p&gt;So, how can we guide our AI to make the most of RAG and help our tiny librarians be more effective? Here are some deeper insights that have helped me:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Curate Your Knowledge Base:&lt;/strong&gt; Quality and structure are paramount for your "library." Clear, well-written, and logically organized source documents make a huge difference. A consistent voice can also help. While some tools automatically convert files, starting with clean Markdown, for example, often leads to better results.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Smart Document Design &amp;amp; Chunking:&lt;/strong&gt; Think about how your information is structured. Logically separated, focused documents or sections often lead to better automated "chunking" (breaking documents into digestible pieces for the AI). Aim for chunks small enough for retrieval precision but large enough to retain meaningful context.&lt;/p&gt;
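&lt;p&gt;A minimal chunker makes the size/overlap trade-off concrete. One caveat: this sketch measures chunks in words for simplicity, while real pipelines usually count model tokens; the overlap keeps context that would otherwise be severed at a chunk boundary.&lt;/p&gt;

```python
# Fixed-size chunking with overlap. `size` and `overlap` are measured in
# words here (a simplification; production chunkers count tokens).

def chunk_words(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        # Stop once the remaining tail is fully covered by the previous chunk.
        for i in range(0, max(len(words) - overlap, 1), step)
    ]
```

&lt;p&gt;Tuning &lt;code&gt;size&lt;/code&gt; down improves retrieval precision; tuning it up preserves more surrounding context per chunk.&lt;/p&gt;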

&lt;p&gt;✅ &lt;strong&gt;Effective Retrieval Strategy:&lt;/strong&gt; This is often about more than just keyword matching. Implementing semantic search – understanding the meaning and intent behind a user's query – allows the AI to find the most conceptually relevant chunks, even if the exact wording differs.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Clear Prompting for Context Use:&lt;/strong&gt; Once the relevant information is retrieved, you need to explicitly guide the LLM on how to use it. Do you want it to summarize, extract specific facts, answer a question based only on the provided text, or synthesize information from multiple sources?&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Iterate &amp;amp; Evaluate:&lt;/strong&gt; Building a good RAG system is rarely a one-shot deal. Test it rigorously. When you get an unexpected answer, examine the retrieved chunks to understand why. This will help you refine your documents, your chunking strategy, your retrieval mechanism, or your prompts.&lt;/p&gt;




&lt;p&gt;RAG is a game-changer for creating more reliable, tailored, and context-aware AI applications. It beautifully combines the broad knowledge of large language models with the precision of specific, targeted information.&lt;/p&gt;

&lt;p&gt;What are your go-to RAG techniques, or any particular "librarian cat" headaches you've encountered while trying to fetch the right info? Share your experiences below! 👇&lt;/p&gt;

&lt;p&gt;Does that cloud look like a bunch of tiny paws to you, or am I hallucinating?&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2" class="crayons-story__hidden-navigation-link"&gt;The Tiny Cat Guide to AI #4: LLM Evaluation – Is Your AI on Catnip? (Spotting Hallucinations)&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/wsoltani" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" alt="wsoltani profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/wsoltani" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Wassim Soltani
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Wassim Soltani
                
              
              &lt;div id="story-author-preview-content-2519815" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/wsoltani" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Wassim Soltani&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;May 23 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2" id="article-link-2519815"&gt;
          The Tiny Cat Guide to AI #4: LLM Evaluation – Is Your AI on Catnip? (Spotting Hallucinations)
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/machinelearning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;machinelearning&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/llm"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;llm&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/learning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;learning&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;5&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-4-llm-evaluation-is-your-ai-on-catnip-spotting-hallucinations-2bb2#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            3 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>ai</category>
      <category>rag</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>The Tiny Cat Guide to AI #2: Generative AI – What's Inside the Magic Box?</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Wed, 21 May 2025 19:23:01 +0000</pubDate>
      <link>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g</link>
      <guid>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g</guid>
      <description>&lt;p&gt;Welcome back to &lt;strong&gt;The Tiny Cat Guide to AI&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-1-prompt-engineering-directing-the-ai-ballet-238c"&gt;previous post&lt;/a&gt; on Prompt Engineering, we explored how to give clear instructions to our creative AI felines. 😹&lt;/p&gt;

&lt;p&gt;Now, let's dive deeper and peek inside the "engine room". What exactly is Generative AI? How does it power these amazing capabilities? What makes these models tick?&lt;/p&gt;

&lt;p&gt;Generative AI is &lt;em&gt;essentially&lt;/em&gt; a system &lt;strong&gt;combining&lt;/strong&gt; countless learned patterns to create something &lt;strong&gt;entirely new&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To help visualize the fundamental concept of how it works, I’ve put together another visual story. This time, it involves a rather surprising (and overflowing) box of tiny cats!&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/wasoltani/embed/MYYNoBa?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;sup&gt;Liked this carousel? Check out how to make one &lt;a href="https://dev.to/wsoltani/no-more-static-posts-i-built-an-accessible-high-performance-carousel-for-devto-1pl3"&gt;here&lt;/a&gt;!&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;As our cat-filled box illustrates, you can think of Generative AI as that brand new, super-talented cat that emerges. It can meow happily, purr with contentment, and knows all the best sunbeam spots because it has, in a way, learned from the combined knowledge and experiences of all the other tiny cats it was trained on.&lt;/p&gt;

&lt;p&gt;But &lt;em&gt;beyond&lt;/em&gt; this high-level concept, working hands-on with these models reveals crucial mechanics that we, as developers and enthusiasts, need to grasp.&lt;/p&gt;



&lt;p&gt;My experience building features focused specifically on content generation, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My &lt;strong&gt;AI note-processing tool&lt;/strong&gt;, &lt;a href="https://quickplan.blueblood.tech/" rel="noopener noreferrer"&gt;Quickplan&lt;/a&gt; - generating summaries, action plans, and expanded points from raw text.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://qmims.vercel.app/" rel="noopener noreferrer"&gt;qmims&lt;/a&gt;, my &lt;strong&gt;AI-powered CLI tool&lt;/strong&gt; using Amazon Q to generate and refine project READMEs and documentation files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI enhancements&lt;/strong&gt; for my E-commerce platform - generating creative product descriptions and related marketing copy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...has really highlighted &lt;em&gt;several&lt;/em&gt; key technical aspects &lt;strong&gt;essential&lt;/strong&gt; for understanding and effectively working with Generative AI:&lt;/p&gt;
&lt;h3&gt;
  
  
  💡 &lt;strong&gt;Context Window &amp;amp; Tokens:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Think of this as the AI's short-term working memory. It can only "see" and process a limited amount of text (or data) at any given moment.&lt;/p&gt;

&lt;p&gt;This limit is measured in "tokens" – which are roughly words or parts of words. Providing concise, highly relevant information within this window is critical for the AI to generate coherent and contextually appropriate output.&lt;/p&gt;

&lt;p&gt;If crucial information falls outside this window, the AI effectively forgets it for that specific interaction.&lt;/p&gt;
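&lt;p&gt;In practice this means budgeting tokens before you send a request. The sketch below uses the rough "about 4 characters per token" heuristic — real tokenizers are BPE-based and split text differently, so treat this as a planning estimate, and the 8000-token limit as a placeholder for your model's actual window.&lt;/p&gt;

```python
# Guard against overflowing the context window: estimate token cost and
# keep retrieved chunks only while the budget lasts. The chars/4 heuristic
# is an approximation; use the model's real tokenizer for exact counts.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_budget(prompt: str, chunks: list[str], limit: int = 8000) -> list[str]:
    """Return the prefix of `chunks` that fits alongside the prompt."""
    kept, used = [], estimate_tokens(prompt)
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > limit:
            break
        kept.append(chunk)
        used += cost
    return kept
```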
&lt;h3&gt;
  
  
  🌡️ &lt;strong&gt;Temperature:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This setting is your control knob for randomness in the AI's output.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;low temperature&lt;/strong&gt; (e.g., 0.1-0.3) makes the AI more predictable, focused, and deterministic – great for tasks requiring factual accuracy, like summaries or straightforward Q&amp;amp;A.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;high temperature&lt;/strong&gt; (e.g., 0.7-1.0+) encourages more creative, diverse, and sometimes unexpected results, which can be useful for brainstorming, varied content generation, or artistic applications.&lt;/p&gt;

&lt;p&gt;Finding &lt;em&gt;the sweet spot&lt;/em&gt; for your specific use case is key.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️ &lt;strong&gt;Function Calling &amp;amp; Tools:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Modern Large Language Models (LLMs) aren't just isolated brains; they can be empowered with "tools".&lt;/p&gt;

&lt;p&gt;This is often enabled via a mechanism called "function calling". It allows the LLM to pause its generation, call an external API or a predefined function in your code (to fetch real-time data, query a database, perform calculations, interact with other services), and then use the result from that tool to inform and complete its final response.&lt;/p&gt;

&lt;p&gt;This dramatically &lt;strong&gt;expands their capabilities&lt;/strong&gt; beyond text generation.&lt;/p&gt;
&lt;h3&gt;
  
  
  🌍 &lt;strong&gt;Grounding (e.g., RAG):&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A well-known challenge with AI is its tendency to "hallucinate" – invent plausible-sounding but false or irrelevant information.&lt;/p&gt;

&lt;p&gt;Techniques like &lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; are vital for combating this. RAG involves 'grounding' the AI by providing it with specific, up-to-date, and factual documents relevant to the user's prompt before it generates a response.&lt;/p&gt;

&lt;p&gt;The AI is then instructed to base its answer primarily on this provided context, which dramatically &lt;strong&gt;improves factual accuracy&lt;/strong&gt; and relevance.&lt;/p&gt;



&lt;p&gt;Understanding these mechanics is essential for moving beyond basic prompting. It allows us to more reliably harness the true power of Generative AI for building sophisticated and useful applications.&lt;/p&gt;

&lt;p&gt;What's one technical aspect of LLMs you've found particularly crucial or challenging to work with in your own projects? Let's share some insights in the comments below! 👇&lt;/p&gt;

&lt;p&gt;Wanna go to the library with me?&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph" class="crayons-story__hidden-navigation-link"&gt;The Tiny Cat Guide to AI #3: RAG – Tiny Librarians&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/wsoltani" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" alt="wsoltani profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/wsoltani" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Wassim Soltani
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Wassim Soltani
                
              
              &lt;div id="story-author-preview-content-2515743" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/wsoltani" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Wassim Soltani&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;May 22 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph" id="article-link-2515743"&gt;
          The Tiny Cat Guide to AI #3: RAG – Tiny Librarians
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/rag"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;rag&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/machinelearning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;machinelearning&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/learning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;learning&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;5&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-3-rag-tiny-librarians-2kph#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            4 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>The Tiny Cat Guide to AI #1: Prompt Engineering – Directing the AI Ballet</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Tue, 20 May 2025 15:47:49 +0000</pubDate>
      <link>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-1-prompt-engineering-directing-the-ai-ballet-238c</link>
      <guid>https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-1-prompt-engineering-directing-the-ai-ballet-238c</guid>
      <description>&lt;p&gt;Ever feel like getting an AI to do exactly what you want is like trying to organize a ballet for super creative, slightly chaotic tiny cats? Yeah, me too! 😹&lt;/p&gt;

&lt;p&gt;To break down how we can better direct these chaotic tiny cats, I've visualized the core ideas of &lt;strong&gt;Prompt Engineering&lt;/strong&gt; using some feline friends. You can see this little story in action in the carousel embedded right below!&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/wasoltani/embed/yyyWvjm?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;sup&gt;Liked this carousel? Check out how to make one &lt;a href="https://dev.to/wsoltani/no-more-static-posts-i-built-an-accessible-high-performance-carousel-for-devto-1pl3"&gt;here&lt;/a&gt;!&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;It turns out, just vaguely telling an AI (or a cat) to "do something cool" doesn't quite cut it. But give it clear, detailed instructions? That's when the magic happens! ✨&lt;/p&gt;



&lt;p&gt;Prompt engineering has become absolutely essential in my day-to-day work. Building AI features has definitely thrown some curveballs my way, whether I'm working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AI feedback system for educational content&lt;/li&gt;
&lt;li&gt;A browser extension using OCR/AI for data extraction&lt;/li&gt;
&lt;li&gt;My own AI note-processing tool, &lt;a href="https://quickplan.blueblood.tech" rel="noopener noreferrer"&gt;Quickplan&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AI enhancements (like semantic search &amp;amp; content generation) for my E-commerce platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of the common challenges I've wrestled with (and where good prompting is key) include:&lt;/p&gt;

&lt;p&gt;🤯 Getting AI to output reliably structured data (like specific JSON or Markdown formats for Quickplan) instead of just... a wall of text.&lt;/p&gt;

&lt;p&gt;🧠 Ensuring AI truly understands the context – like applying specific pedagogical guidelines for educational feedback versus generating creative product descriptions.&lt;/p&gt;

&lt;p&gt;⚖️ Balancing helpful AI creativity with the need for factual accuracy, especially when extracting data or generating e-commerce content.&lt;/p&gt;

&lt;p&gt;🖼️ The dreaded "text hallucinations" when requesting diagrams or images! Asking for a simple chart and getting... an artistic interpretation of letters. 😖&lt;/p&gt;

&lt;p&gt;🎭 Maintaining a consistent tone and style for the AI's output across multiple interactions or for different branding needs.&lt;/p&gt;



&lt;p&gt;So, how do we give our AI cats better stage directions for their ballet? Here’s what has consistently helped me craft more effective prompts:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Be Hyper-Specific:&lt;/strong&gt; Don't just ask for "a summary." Define the desired length, target audience, tone (e.g., formal, casual, witty), and explicitly list any key points that &lt;em&gt;must&lt;/em&gt; be included or excluded. Details transform vague requests into actionable instructions.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Provide Rich Context:&lt;/strong&gt; Tell the AI the 'why' and the 'who'. What's the ultimate goal of the meeting notes for &lt;a href="https://quickplan.blueblood.tech" rel="noopener noreferrer"&gt;Quickplan&lt;/a&gt;? Who is the end-user of the feedback? Providing this background helps the AI make better inferences and tailor its response more appropriately.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Set Clear Boundaries &amp;amp; Constraints:&lt;/strong&gt; Explicitly state what the AI &lt;em&gt;should not&lt;/em&gt; do. For example: "Avoid technical jargon," "Focus only on the financial aspects discussed," or "Generate NO text elements inside the requested image."&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Define the Output Format Explicitly:&lt;/strong&gt; If you need structured data, ask for it directly. Specify JSON with a particular schema, Markdown with headers, a numbered list, step-by-step instructions, or even a table format. The clearer your definition, the higher the chance of getting a usable output.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Iterate, Iterate, Iterate:&lt;/strong&gt; Your first prompt is rarely your masterpiece. Treat prompting as an iterative process. Test your prompt, analyze the output, identify where the AI stumbles or misunderstands, then tweak your instructions and repeat.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Know Your Model:&lt;/strong&gt; Different AI models (e.g., GPT-4, Claude, Llama) have distinct strengths, weaknesses, knowledge cut-offs, and stylistic quirks. Understanding the specific model you're working with can help you tailor your prompts for optimal performance.&lt;/p&gt;

&lt;p&gt;Prompt engineering really is about clear communication – it's how we effectively guide these incredibly powerful (and creative!) AI brains. Hopefully, the cat ballet in the carousel helps illustrate these points!&lt;/p&gt;



&lt;p&gt;What are your go-to prompt engineering tricks, biggest headaches, or favorite cat-herding analogies for AI? Share your thoughts and experiences, I'd love to hear them! 👇&lt;/p&gt;

&lt;p&gt;What's that? A box full of tiny cats?!&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g" class="crayons-story__hidden-navigation-link"&gt;The Tiny Cat Guide to AI #2: Generative AI – What's Inside the Magic Box?&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/wsoltani" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" alt="wsoltani profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/wsoltani" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Wassim Soltani
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Wassim Soltani
                
              
              &lt;div id="story-author-preview-content-2512081" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/wsoltani" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3123779%2F1609a94f-311d-4cb2-b9ea-65f84399d11e.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Wassim Soltani&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;May 21 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g" id="article-link-2512081"&gt;
          The Tiny Cat Guide to AI #2: Generative AI – What's Inside the Magic Box?
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/llm"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;llm&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/machinelearning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;machinelearning&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/learning"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;learning&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;5&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/wsoltani/the-tiny-cat-guide-to-ai-2-generative-ai-whats-inside-the-magic-box-p0g#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            3 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


&lt;p&gt;&lt;sup&gt;Created by &lt;a href="https://wsoltani.github.io/" rel="noopener noreferrer"&gt;Wassim Soltani&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>machinelearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>No More Static Posts! I Built an Accessible, High-Performance Carousel for DEV.to</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Mon, 19 May 2025 23:30:01 +0000</pubDate>
      <link>https://dev.to/wsoltani/no-more-static-posts-i-built-an-accessible-high-performance-carousel-for-devto-1pl3</link>
      <guid>https://dev.to/wsoltani/no-more-static-posts-i-built-an-accessible-high-performance-carousel-for-devto-1pl3</guid>
      <description>&lt;p&gt;Hey everyone! 👋&lt;/p&gt;

&lt;p&gt;Like many of you, I love using visual storytelling in my technical content. On &lt;a href="https://www.linkedin.com/in/wsoltani" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, I've been using image carousels to break down complex topics. They're engaging and make information so much more digestible!&lt;/p&gt;

&lt;p&gt;But when I wanted to bring that same energy to my DEV.to posts, I hit a snag: native carousel support isn't really a thing here. Since DEV &lt;em&gt;does&lt;/em&gt; support &lt;strong&gt;CodePen&lt;/strong&gt; embeds, I decided to take matters into my own hands and build my own solution: a &lt;strong&gt;Dynamic Image Carousel&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;iframe height="600" src="https://codepen.io/wasoltani/embed/myyLGgm?height=600&amp;amp;default-tab=result&amp;amp;embed-version=2"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I'm thrilled to share it with everyone. The best part? &lt;strong&gt;It's designed to be easily adapted for your own use!&lt;/strong&gt; You can simply fork my Pen, change the image source, and you're good to go (with a few considerations, of course).&lt;/p&gt;




&lt;h3&gt;
  
  
  How to Use This Carousel in Your DEV.to Posts
&lt;/h3&gt;

&lt;p&gt;Ready to add some visual flair to your articles? Here’s how to get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Prepare Your Images:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Create your series of images. For best results and a good square aspect ratio, I recommend a size of &lt;strong&gt;900x900 pixels&lt;/strong&gt; per image.&lt;/li&gt;
&lt;li&gt;Name them sequentially: &lt;code&gt;1.png&lt;/code&gt; (or &lt;code&gt;.jpg&lt;/code&gt;, &lt;code&gt;.jpeg&lt;/code&gt;, &lt;code&gt;.webp&lt;/code&gt;), &lt;code&gt;2.png&lt;/code&gt;, &lt;code&gt;3.png&lt;/code&gt;, and so on.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Host Your Images:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Upload your images to an image hosting service that provides direct URLs (ImageKit, AWS S3, or even GitHub Pages if serving raw files).&lt;/li&gt;
&lt;li&gt;You'll need a common base URL for the folder where these sequentially named images are stored.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fork &amp;amp; Configure on CodePen:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to my &lt;a href="https://codepen.io/wasoltani/pen/myyLGgm" rel="noopener noreferrer"&gt;&lt;strong&gt;DEV Carousel CodePen&lt;/strong&gt;&lt;/a&gt; and click the &lt;strong&gt;"Fork"&lt;/strong&gt; button (usually in the bottom right of the editor view) to create your own copy.&lt;/li&gt;
&lt;li&gt;In the JavaScript panel of your forked Pen, find and update the &lt;code&gt;IMAGE_BASE_URL&lt;/code&gt; constant with the URL of &lt;em&gt;your&lt;/em&gt; image folder:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;IMAGE_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://your-hosting-service.com/your-image-folder/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Test Your Fork:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure your carousel loads &lt;em&gt;your&lt;/em&gt; images correctly within your forked CodePen. Save your Pen.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Embed in DEV.to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once your forked CodePen is saved and working with your images, copy the URL of your Pen.&lt;/li&gt;
&lt;li&gt;In your DEV.to markdown editor, use the CodePen Liquid tag like this, replacing the example URL with the URL of your forked Pen:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;{% codepen https://codepen.io/YOUR_USERNAME/pen/YOUR_PEN_SLUG_HASH %}&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And voilà! ✨&lt;/p&gt;




&lt;h3&gt;
  
  
  So, What's Under the Hood? (The Techy Bits)
&lt;/h3&gt;

&lt;p&gt;For those interested in the mechanics, I focused on building this with vanilla JavaScript, HTML, and CSS, emphasizing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Image Loading:&lt;/strong&gt; It doesn't need a predefined list of image URLs. It discovers images by trying sequential numbers (&lt;code&gt;1.png&lt;/code&gt;, &lt;code&gt;2.png&lt;/code&gt;, etc.) with various formats from your &lt;code&gt;IMAGE_BASE_URL&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Optimizations:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallel Image Discovery:&lt;/strong&gt; Uses &lt;code&gt;Promise.all&lt;/code&gt; (with a &lt;code&gt;Promise.race&lt;/code&gt; for timeout) to check for the existence of multiple potential images and their formats simultaneously, making the initial discovery faster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Existence Caching:&lt;/strong&gt; Once an image's existence (or non-existence) at a specific URL is confirmed, this status is cached in a &lt;code&gt;Map&lt;/code&gt; to avoid redundant &lt;code&gt;new Image()&lt;/code&gt; checks for the same URL during the discovery phase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Preloading:&lt;/strong&gt; When you navigate, it attempts to preload the next and previous slides' images to make transitions feel smoother.&lt;/li&gt;
&lt;li&gt;Uses CSS &lt;code&gt;will-change&lt;/code&gt; for transform animations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Accessibility (WCAG 2.1 AA in mind):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Full keyboard navigation.&lt;/li&gt;
&lt;li&gt;ARIA roles and attributes (&lt;code&gt;aria-roledescription&lt;/code&gt;, &lt;code&gt;aria-label&lt;/code&gt;, &lt;code&gt;aria-selected&lt;/code&gt;, &lt;code&gt;sr-only&lt;/code&gt; live region for slide changes).&lt;/li&gt;
&lt;li&gt;Visible focus indicators and High Contrast Mode considerations in CSS.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Responsive &amp;amp; Interactive:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Adapts to different screen sizes (CSS &lt;code&gt;aspect-ratio&lt;/code&gt; with fallbacks).&lt;/li&gt;
&lt;li&gt;Touch swipe and mouse drag support for navigation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Error Handling &amp;amp; Fallbacks:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Gracefully handles images that fail to load by showing a fallback SVG.&lt;/li&gt;
&lt;li&gt;Includes a loading indicator during initial image discovery.&lt;/li&gt;
&lt;li&gt;Provides a &lt;code&gt;&amp;lt;noscript&amp;gt;&lt;/code&gt; message.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
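
&lt;p&gt;To make the discovery strategy concrete, here is a minimal sketch of the idea in vanilla JavaScript (names are illustrative, not the Pen's actual code). The URL-existence check is injected as a callback so the logic can be followed outside a browser, where it would wrap &lt;code&gt;new Image()&lt;/code&gt; with onload/onerror handlers:&lt;/p&gt;

```javascript
// A minimal sketch of the parallel discovery idea (illustrative names,
// not the Pen's actual code). The URL-existence check is injected as a
// callback; in the browser it would wrap `new Image()`.
const existenceCache = new Map(); // url -> boolean, avoids repeat checks

async function urlExists(url, checkUrl, timeoutMs) {
  if (existenceCache.has(url)) return existenceCache.get(url);
  // Race the check against a timeout so one slow host cannot stall discovery.
  const timeout = new Promise((resolve) => setTimeout(() => resolve(false), timeoutMs));
  const exists = await Promise.race([checkUrl(url), timeout]);
  existenceCache.set(url, exists);
  return exists;
}

// Probe slots 1..maxSlides in parallel; each slot tries several formats.
async function discoverImages(baseUrl, checkUrl, maxSlides = 20, timeoutMs = 3000) {
  const formats = ["png", "jpg", "jpeg", "webp"];
  const slots = await Promise.all(
    Array.from({ length: maxSlides }, async (_, i) => {
      for (const ext of formats) {
        const url = `${baseUrl}${i + 1}.${ext}`;
        if (await urlExists(url, checkUrl, timeoutMs)) return url;
      }
      return null; // nothing found for this slide number
    })
  );
  // Keep only the contiguous leading run so slide order stays sequential.
  const found = [];
  for (const url of slots) {
    if (url === null) break;
    found.push(url);
  }
  return found;
}
```

&lt;p&gt;Only the contiguous leading run of confirmed images is kept, so slide order stays sequential even when a later probe fails or times out.&lt;/p&gt;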

&lt;p&gt;You can dive into the HTML, CSS, and JavaScript directly in the &lt;a href="https://codepen.io/wasoltani/pen/myyLGgm" rel="noopener noreferrer"&gt;CodePen&lt;/a&gt; to see all the details!&lt;/p&gt;




&lt;h3&gt;
  
  
  What's Next?
&lt;/h3&gt;

&lt;p&gt;I built this carousel because I wanted a better way to share visual stories here on DEV.to, and I hope it can be useful to some of you too!&lt;/p&gt;

&lt;p&gt;If you found this interesting or use the carousel, I'd love to hear about it in the comments. Also, consider dropping a ❤️ on CodePen!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be sure to follow me here on DEV.to!&lt;/strong&gt; I'm planning to share more of my projects and insights on AI, web development, and building cool tools. Stay tuned for some interesting posts!&lt;/p&gt;

&lt;p&gt;Happy coding! 👨‍💻&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>tutorial</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Iterative, AI-Powered Documentation with Templates &amp; Inline Edits! - Quack The Code</title>
      <dc:creator>Wassim Soltani</dc:creator>
      <pubDate>Sat, 10 May 2025 17:32:31 +0000</pubDate>
      <link>https://dev.to/wsoltani/qmims-iterative-ai-powered-readmes-with-templates-in-file-edits-m7b</link>
      <guid>https://dev.to/wsoltani/qmims-iterative-ai-powered-readmes-with-templates-in-file-edits-m7b</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aws-amazon-q-v2025-04-30"&gt;Amazon Q Developer "Quack The Code" Challenge&lt;/a&gt;: Crushing the Command Line&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;&lt;code&gt;qmims&lt;/code&gt;&lt;/strong&gt; (short for "Q, Make It Make Sense!"), a powerful command-line interface (CLI) tool designed to automate and revolutionize the often tedious process of creating and maintaining &lt;code&gt;README.md&lt;/code&gt; files for software projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem:&lt;/strong&gt; Every developer knows the importance of a good README, but keeping it comprehensive, up-to-date, and well-written is a constant challenge. It's easy for documentation to become an afterthought, leading to outdated or incomplete project information that hinders collaboration and user adoption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution:&lt;/strong&gt; &lt;code&gt;qmims&lt;/code&gt; tackles this problem by leveraging the AI capabilities of &lt;strong&gt;Amazon Q Developer CLI&lt;/strong&gt;. It acts as an intelligent orchestrator, using Amazon Q to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Analyze your project's codebase, structure, and dependencies.&lt;/li&gt;
&lt;li&gt;  Automatically generate complete README files from scratch.&lt;/li&gt;
&lt;li&gt;  Guide README creation using built-in or custom templates.&lt;/li&gt;
&lt;li&gt;  Perform precise, AI-driven edits to existing READMEs based on natural language instructions embedded directly within your Markdown files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With &lt;code&gt;qmims&lt;/code&gt;, developers can significantly reduce the time spent on documentation, ensure consistency, and produce high-quality READMEs that truly "make the project make sense" for everyone. It turns README maintenance from a chore into an AI-assisted collaborative process.&lt;/p&gt;

&lt;p&gt;One of the coolest parts? The &lt;code&gt;README.md&lt;/code&gt; for &lt;code&gt;qmims&lt;/code&gt; and its documentation website were generated using &lt;code&gt;qmims&lt;/code&gt; itself, powered by Amazon Q!&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;While &lt;strong&gt;&lt;a href="https://qmims.vercel.app/" rel="noopener noreferrer"&gt;the documentation website&lt;/a&gt;&lt;/strong&gt; provides comprehensive details, here's a visual walkthrough of qmims in action:&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's break it down!
&lt;/h3&gt;

&lt;p&gt;First, we see the &lt;code&gt;qmims&lt;/code&gt; project with a minimal &lt;code&gt;README.md&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5z2qlkvnm2yix5ozrh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5z2qlkvnm2yix5ozrh4.png" alt="The project's  raw `README.md` endraw  before  raw `qmims` endraw  (with minimal content)" width="790" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then run &lt;code&gt;qmims generate&lt;/code&gt; (which defaults to &lt;code&gt;auto&lt;/code&gt; mode). &lt;code&gt;qmims&lt;/code&gt; checks for the Amazon Q CLI and prompts if it needs to overwrite an existing file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nyej29inekvujjr14at.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nyej29inekvujjr14at.png" alt="Terminal showing qmims generate command, prerequisite check, and overwrite prompt" width="752" height="721"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once confirmed, &lt;code&gt;qmims&lt;/code&gt; starts the &lt;code&gt;q chat&lt;/code&gt; session, and Amazon Q begins its analysis of the project. You can see Q detailing its steps, like reading the directory structure and &lt;code&gt;package.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd9anbg9mwoydgw6nmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujd9anbg9mwoydgw6nmr.png" alt="Terminal output showing Amazon Q analyzing project files (fs_read tool)" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After analysis, Amazon Q outlines the comprehensive README structure it has generated based on the project's content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8xwm96ofyho2jpt8rrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8xwm96ofyho2jpt8rrg.png" alt="Terminal output from Amazon Q detailing the sections of the README it has generated" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here's the &lt;code&gt;README.md&lt;/code&gt; file automatically generated and populated by &lt;code&gt;qmims&lt;/code&gt; using Amazon Q!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdozv0tdzgqizet4lr916.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdozv0tdzgqizet4lr916.png" alt="VS Code view showing the fully generated README.md with multiple sections and content" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's showcase the &lt;code&gt;edit&lt;/code&gt; command. We add a simple instruction comment to the end of our &lt;code&gt;README.md&lt;/code&gt; file to add a "Made with Love" section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9yihqip87nv2lbancsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9yihqip87nv2lbancsm.png" alt="VS Code view showing an embedded instruction comment added to the README.md file" width="578" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We run &lt;code&gt;qmims edit&lt;/code&gt;. &lt;code&gt;qmims&lt;/code&gt; starts &lt;code&gt;q chat&lt;/code&gt;, which finds and processes our embedded instruction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xi3w06ng5bd1fcles7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xi3w06ng5bd1fcles7r.png" alt="Terminal output showing qmims edit command initiating and Amazon Q processing the instruction" width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q then modifies the &lt;code&gt;README.md&lt;/code&gt; file directly to incorporate the requested change, adding the new "Made with Love" section. It also confirms what it did in the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrhx54w3pk54r0eagf4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrhx54w3pk54r0eagf4x.png" alt="VS Code view showing the README.md updated with the new " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These screenshots walk us through how &lt;code&gt;qmims&lt;/code&gt; leverages Amazon Q Developer CLI to both generate comprehensive READMEs from scratch and make precise, instruction-driven edits, streamlining the entire documentation process. For full details and advanced usage, please visit &lt;a href="https://qmims.vercel.app/" rel="noopener noreferrer"&gt;the documentation website&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;

&lt;p&gt;You can explore the full source code for &lt;code&gt;qmims&lt;/code&gt; from the public GitHub repository:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/wSoltani/qmims" rel="noopener noreferrer"&gt;https://github.com/wSoltani/qmims&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;del&gt;Due to a temporary 24-hour restriction on npm after an initial publishing attempt, the package &lt;code&gt;qmims&lt;/code&gt; might not be immediately installable via &lt;code&gt;npm install -g qmims&lt;/code&gt;. However, the tool is fully functional and can be easily run from the source code available in the repository. The npm package will be published as soon as the restriction is lifted (about 20 hours after this post).&lt;/del&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; the package is now &lt;a href="https://www.npmjs.com/package/qmims" rel="noopener noreferrer"&gt;live on npm&lt;/a&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Amazon Q Developer
&lt;/h2&gt;

&lt;p&gt;Amazon Q Developer, specifically the &lt;strong&gt;Amazon Q Developer CLI (&lt;code&gt;q chat&lt;/code&gt;)&lt;/strong&gt;, is the core AI engine that powers &lt;code&gt;qmims&lt;/code&gt;. My tool acts as an intelligent orchestrator for &lt;code&gt;q chat&lt;/code&gt; to perform sophisticated documentation tasks. Here's how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Project Analysis &amp;amp; Content Generation (&lt;code&gt;auto&lt;/code&gt; mode):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;qmims&lt;/code&gt; invokes &lt;code&gt;q chat&lt;/code&gt; within the context of the user's project directory.&lt;/li&gt;
&lt;li&gt;  It sends high-level prompts like, "Analyze this project and generate a comprehensive README covering typical sections."&lt;/li&gt;
&lt;li&gt;  Amazon Q then uses its understanding of code, file structures, and common project patterns to generate relevant content for sections like Overview, Tech Stack, Installation, and Usage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Template-Driven Generation (&lt;code&gt;template&lt;/code&gt; mode):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;qmims&lt;/code&gt; provides &lt;code&gt;q chat&lt;/code&gt; with a Markdown template structure (either built-in or user-defined).&lt;/li&gt;
&lt;li&gt;  It then prompts Q to fill in the content for each section defined in the template, based on its analysis of the project. This allows for structured yet AI-generated documentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Instruction-Based Editing &amp;amp; Generation (&lt;code&gt;instruct&lt;/code&gt; mode with &lt;code&gt;qmims edit&lt;/code&gt; or &lt;code&gt;qmims generate --mode instruct&lt;/code&gt;):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  This is where Amazon Q's &lt;strong&gt;agentic capabilities&lt;/strong&gt; truly shine.&lt;/li&gt;
&lt;li&gt;  Users embed natural language instructions within HTML comments (e.g., &lt;code&gt;&amp;lt;!-- qmims: Rewrite this section for clarity. --&amp;gt;&lt;/code&gt;) in their Markdown files.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;qmims&lt;/code&gt; parses these instructions and crafts specific prompts for &lt;code&gt;q chat&lt;/code&gt;, providing the instruction and relevant file context.&lt;/li&gt;
&lt;li&gt;  Crucially, &lt;code&gt;qmims&lt;/code&gt; then instructs &lt;code&gt;q chat&lt;/code&gt; to &lt;strong&gt;directly propose and (with user permission) edit the Markdown file.&lt;/strong&gt; For example, Q might say, "I will replace lines 25-30 with the following improved text. Proceed? (y/N)".&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;qmims&lt;/code&gt; manages this interactive dialogue, relaying Q's proposed changes and the user's approval back to &lt;code&gt;q chat&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Context Management:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;qmims&lt;/code&gt; ensures &lt;code&gt;q chat&lt;/code&gt; is always operating in the correct project directory.&lt;/li&gt;
&lt;li&gt;  It can also dynamically instruct &lt;code&gt;q chat&lt;/code&gt; to consider specific files (e.g., &lt;code&gt;package.json&lt;/code&gt;, or the &lt;code&gt;README.md&lt;/code&gt; being edited) when necessary to improve the relevance of Q's output.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
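
&lt;p&gt;The instruct-mode flow starts by scanning the Markdown file for embedded comments. Here is a hypothetical sketch of that parsing step (the function name and regex are my own, not &lt;code&gt;qmims&lt;/code&gt; internals):&lt;/p&gt;

```javascript
// Hypothetical sketch of the instruct-mode parsing step (my names, not
// qmims internals): find embedded "qmims:" HTML comments in a Markdown
// file and record each instruction with its line number, so a targeted
// prompt can be built for `q chat`.
function extractInstructions(markdown) {
  // Comment delimiters assembled from pieces so this listing does not
  // itself contain a literal HTML comment.
  const OPEN = String.fromCharCode(60) + "!--";  // HTML comment opener
  const CLOSE = "--" + String.fromCharCode(62);  // HTML comment closer
  const pattern = new RegExp(OPEN + "\\s*qmims:\\s*([\\s\\S]*?)\\s*" + CLOSE, "g");
  const instructions = [];
  let match;
  while ((match = pattern.exec(markdown)) !== null) {
    // 1-based line number of the comment, useful for precise edit prompts.
    const line = markdown.slice(0, match.index).split("\n").length;
    instructions.push({ line, instruction: match[1] });
  }
  return instructions;
}
```

&lt;p&gt;Each extracted instruction, paired with its line number and surrounding file context, can then be turned into a focused prompt for &lt;code&gt;q chat&lt;/code&gt;.&lt;/p&gt;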

&lt;p&gt;&lt;strong&gt;Tips &amp;amp; Insights from Using Amazon Q Developer CLI:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Agentic Power is Real:&lt;/strong&gt; The ability of &lt;code&gt;q chat&lt;/code&gt; to understand a task, propose file modifications, and then execute them (with permission) is incredibly powerful. This was key to the &lt;code&gt;instruct&lt;/code&gt; mode in &lt;code&gt;qmims&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Context is King:&lt;/strong&gt; The more relevant context &lt;code&gt;q chat&lt;/code&gt; has, either from the project directory or specific files added to its context (using &lt;code&gt;/context add&lt;/code&gt;), the better and more accurate its output. The &lt;code&gt;Context&lt;/code&gt; section in the Amazon Q CLI documentation was very helpful.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Prompt Engineering for CLI:&lt;/strong&gt; Crafting effective natural language prompts that work well when sent programmatically to &lt;code&gt;q chat&lt;/code&gt; required some iteration. Keeping prompts clear, concise, and action-oriented yielded the best results.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Handling Interactivity:&lt;/strong&gt; Programmatically managing the interactive prompts from &lt;code&gt;q chat&lt;/code&gt; (especially permission requests for file edits) by capturing &lt;code&gt;stdout&lt;/code&gt; and writing to &lt;code&gt;stdin&lt;/code&gt; of the child process was a core challenge but essential for &lt;code&gt;qmims&lt;/code&gt;'s workflow.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Iterative Refinement:&lt;/strong&gt; For complex documentation, instructing Q to generate or edit in smaller, iterative steps often works better than one massive prompt. The &lt;code&gt;instruct&lt;/code&gt; mode in &lt;code&gt;qmims&lt;/code&gt; is designed for this.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;No Need for an AWS Account (Builder ID):&lt;/strong&gt; The fact that developers can use the full power of Amazon Q Developer CLI with just a free AWS Builder ID is a fantastic enabler for tools like &lt;code&gt;qmims&lt;/code&gt; and for this challenge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building &lt;code&gt;qmims&lt;/code&gt; was a fun exploration of how an AI assistant like Amazon Q Developer can be orchestrated through its CLI to create a genuinely useful automation tool that goes beyond simple code generation. The process of using &lt;code&gt;qmims&lt;/code&gt; to help generate its own documentation and part of its website's content was a particularly rewarding experience, showcasing its practical utility firsthand.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>awschallenge</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
