<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: fock1e</title>
    <description>The latest articles on DEV Community by fock1e (@fock1e).</description>
    <link>https://dev.to/fock1e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864423%2F707aeb64-9273-49d4-a744-60fa7d3c5ad1.jpeg</url>
      <title>DEV Community: fock1e</title>
      <link>https://dev.to/fock1e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fock1e"/>
    <language>en</language>
    <item>
      <title>Why I Built Scenar.io - An AI-Powered DevOps Interview Practice Tool</title>
      <dc:creator>fock1e</dc:creator>
      <pubDate>Tue, 07 Apr 2026 12:24:32 +0000</pubDate>
      <link>https://dev.to/fock1e/why-i-built-scenario-an-ai-powered-devops-interview-practice-tool-2ha4</link>
      <guid>https://dev.to/fock1e/why-i-built-scenario-an-ai-powered-devops-interview-practice-tool-2ha4</guid>
      <description>&lt;h1&gt;Why I Built Scenar.io&lt;/h1&gt;

&lt;h2&gt;How It Started&lt;/h2&gt;

&lt;p&gt;I was prepping for a Google SRE interview and struggling with the debugging portion. Not the knowledge - I knew the commands, and I'd fixed real incidents at work. The problem was practicing under interview conditions: thinking out loud, explaining your reasoning, having someone challenge your approach.&lt;/p&gt;

&lt;p&gt;I started using Claude in the terminal to simulate it. I'd describe a scenario, ask it to act as a broken server, and practice talking through my debugging process. After a few weeks I realized I was spending more time setting up the prompts than actually practicing. I had this whole system - hidden server states, clue tracking, difficulty levels - and it hit me: this should just be a tool.&lt;/p&gt;

&lt;p&gt;I looked at what already existed. SadServers makes you type exact commands into a real terminal. LeetCode is for coding, not ops. Flashcards test recall, not problem-solving. Nothing let you practice the way interviews actually work - talking through your approach, getting realistic output, having someone push back on your reasoning.&lt;/p&gt;

&lt;p&gt;So I turned my Claude practice sessions into a proper product.&lt;/p&gt;

&lt;h2&gt;What Scenar.io Actually Is&lt;/h2&gt;

&lt;p&gt;Scenar.io is an interactive debugging practice tool where an AI plays the role of both the server you're investigating and the interviewer watching you work. You describe what you'd do in natural language - "I'd check if nginx is running" - and the AI returns realistic command output based on a hidden server state, then asks follow-up questions like a real interviewer would.&lt;/p&gt;

&lt;p&gt;The key distinction: this is a conversation, not a terminal. You say "let me check what's listening on port 80" instead of typing &lt;code&gt;ss -tlnp | grep :80&lt;/code&gt;. The AI understands your intent and shows you what you'd actually see.&lt;/p&gt;

&lt;h2&gt;A Quick Example&lt;/h2&gt;

&lt;p&gt;Here's what a session looks like with the "Nginx Won't Start" scenario:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interviewer:&lt;/strong&gt; You're on-call and receive an alert: "Website is down!" The web server (web-server-01) is reporting that nginx won't start. How would you begin investigating?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You:&lt;/strong&gt; First I'd check the status of the nginx service to see what error it's throwing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interviewer:&lt;/strong&gt;&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ systemctl status nginx
nginx.service - A high performance web server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
   Active: failed (Result: exit-code) since Wed 2024-01-17 14:30:22 UTC
  Process: 4521 ExecStart=/usr/sbin/nginx (code=exited, status=1/FAILURE)

Jan 17 14:30:22 web-server-01 nginx[4521]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Interesting - the bind failed. What does that tell you, and what would you check next?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From there you work through the problem: find that Apache is squatting on port 80, stop it, start nginx, verify the fix. The AI tracks your progress through stages - root cause identification, short-term fix, long-term prevention - just like a real interview rubric.&lt;/p&gt;

&lt;h2&gt;Three Ways to Practice&lt;/h2&gt;

&lt;p&gt;Beyond the 18 built-in debugging scenarios (disk full, OOM killer, DNS failures, container crashloops, compromised binaries, and more), there are two other modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Verbal interviews&lt;/strong&gt; - Conceptual Q&amp;amp;A on Linux, networking, containers, security, and system design. The AI scores your answers on accuracy, completeness, and communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sandbox mode&lt;/strong&gt; - Open-ended exploration of simulated servers with no specific bug to find. Practice poking around a Kubernetes node or auditing a web stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You pick your interviewer style too: supportive mentor (easy), neutral professional (medium), or Socratic challenger (hard) who makes you justify every decision.&lt;/p&gt;

&lt;h2&gt;The Tech Behind It&lt;/h2&gt;

&lt;p&gt;For the Dev.to crowd - the stack is Svelte 5 on the frontend with a Bun + Hono backend, Turso (libSQL) for the database with Drizzle ORM, and Claude Sonnet 4.5 via OpenRouter for the AI. Deployed on Fly.io with GitHub Actions.&lt;/p&gt;

&lt;p&gt;The interesting technical bit is how the AI simulation works. Each scenario has a &lt;code&gt;hidden_state&lt;/code&gt; - a JSON blob describing the full server state (running processes, disk usage, service statuses, log files, network connections). The AI receives this state along with the user's command and returns realistic output that's consistent with the hidden state. A hallucination detection layer compares the AI's output against the state to catch fabricated data.&lt;/p&gt;
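&lt;p&gt;A minimal sketch of what that consistency check could look like - the state shape, names, and the single regex here are my own illustration, not Scenar.io's actual schema or detector:&lt;/p&gt;

```typescript
// Rough sketch of a hidden-state consistency check. The interface, the
// example state, and the regex are illustrative assumptions only.
interface HiddenState {
  processes: string[];               // running process names
  ports: { [port: number]: string }; // listening port to owning process
}

const state: HiddenState = {
  processes: ["apache2", "sshd"],
  ports: { 80: "apache2", 22: "sshd" },
};

// If the simulated output claims a bind failed because an address is in
// use, the hidden state must actually have something on that port.
function portClaimsConsistent(output: string, st: HiddenState): boolean {
  const claimed: number[] = [];
  const re = /:(\d+) failed .*Address already in use/g;
  let m = re.exec(output);
  while (m !== null) {
    claimed.push(Number(m[1]));
    m = re.exec(output);
  }
  return claimed.every((port) => st.ports[port] !== undefined);
}
```

&lt;p&gt;A real detector would cover more claim types (process lists, disk usage, log contents), but the shape is the same: parse factual claims out of the simulated output and diff them against the hidden state.&lt;/p&gt;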

&lt;p&gt;The AI prompt has a dual-role structure: first act as a "server simulator" that must produce command output, then act as an "interviewer" that asks follow-up questions. This prevents the common failure mode where the AI skips the output and just says "Good thinking, what else would you check?"&lt;/p&gt;
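&lt;p&gt;In prompt terms, that dual-role structure might look something like this - the wording is hypothetical, since the real prompt isn't public:&lt;/p&gt;

```typescript
// Hypothetical sketch of a dual-role system prompt builder; the exact
// instructions Scenar.io ships are assumptions, not the real prompt.
function buildSystemPrompt(hiddenStateJson: string): string {
  return [
    "You play two roles, in order, on every turn.",
    "ROLE 1 - SERVER SIMULATOR: the candidate describes an action in natural",
    "language. Map it to the closest shell command and print realistic output",
    "that is consistent with this hidden state (never reveal the state itself):",
    hiddenStateJson,
    "ROLE 2 - INTERVIEWER: after the output, ask one follow-up question about",
    "what the candidate just learned.",
    "Never skip ROLE 1. A reply with no command output is invalid.",
  ].join("\n");
}
```

&lt;p&gt;Putting both roles in one prompt, simulator first, is what forces the model to commit to concrete output before it starts coaching.&lt;/p&gt;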

&lt;h2&gt;What's Free, What's Not&lt;/h2&gt;

&lt;p&gt;The free tier gives you 5 debugging sessions, 3 verbal interviews, and 2 sandbox sessions per month. Enough to get a feel for it and practice regularly.&lt;/p&gt;

&lt;p&gt;Pro is $9/month for unlimited everything, custom scenario generation (describe any topic and the AI builds a scenario for you), and all difficulty modes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're reading this early: the first 100 subscribers get Pro for $5/month with code &lt;code&gt;M3OTEYOQ&lt;/code&gt;.&lt;/strong&gt; That price locks in permanently.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;The whole thing is live at &lt;a href="https://scenar.site" rel="noopener noreferrer"&gt;scenar.site&lt;/a&gt;. Sign in with GitHub, pick a scenario, and start debugging. No credit card needed for the free tier.&lt;/p&gt;

&lt;p&gt;I built this because I needed it. If you're prepping for DevOps or SRE interviews - or you just want to sharpen your debugging instincts - I'd genuinely appreciate you giving it a shot and telling me what you think.&lt;/p&gt;

&lt;p&gt;What scenarios would you want to see? What would make this more useful for your prep? I'm one engineer building this, and feedback directly shapes what gets built next.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>interview</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
