<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mustafa İlhan</title>
    <description>The latest articles on DEV Community by Mustafa İlhan (@thisismustafailhan).</description>
    <link>https://dev.to/thisismustafailhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F251372%2Fa2e7789b-35e2-4e7b-b033-f0d1b9df56db.png</url>
      <title>DEV Community: Mustafa İlhan</title>
      <link>https://dev.to/thisismustafailhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thisismustafailhan"/>
    <language>en</language>
    <item>
      <title>Beyond the Prompt: Why "Harness Engineering" is the Real Successor to Prompt Engineering</title>
      <dc:creator>Mustafa İlhan</dc:creator>
      <pubDate>Sun, 29 Mar 2026 10:05:39 +0000</pubDate>
      <link>https://dev.to/thisismustafailhan/beyond-the-prompt-why-harness-engineering-is-the-real-successor-to-prompt-engineering-348</link>
      <guid>https://dev.to/thisismustafailhan/beyond-the-prompt-why-harness-engineering-is-the-real-successor-to-prompt-engineering-348</guid>
      <description>&lt;p&gt;If you’ve spent any time building with LLMs lately, you’ve likely hit the "ceiling of fragility." You craft the perfect prompt, and it works 80% of the time. But in production, that 20% failure rate is a nightmare.&lt;/p&gt;

&lt;p&gt;Most people try to solve this with &lt;strong&gt;Prompt Engineering&lt;/strong&gt; (words) or &lt;strong&gt;Context Engineering&lt;/strong&gt; (data). But the frontier—led by teams at OpenAI and companies like Harness.io—is moving toward &lt;strong&gt;Harness Engineering&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Technical Hierarchy: Prompt vs. Context vs. Harness
&lt;/h3&gt;

&lt;p&gt;To understand why this works, you have to see where it sits in the stack:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Mechanism&lt;/th&gt;
&lt;th&gt;The Goal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prompt Engineering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;The Message&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Natural language instructions, few-shot examples.&lt;/td&gt;
&lt;td&gt;Guiding the model's immediate response.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Engineering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;The Memory&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RAG, vector DBs, dynamic token management.&lt;/td&gt;
&lt;td&gt;Providing the right "knowledge" at the right time.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Harness Engineering&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;The Environment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deterministic guardrails, linters, sandboxes, and loops.&lt;/td&gt;
&lt;td&gt;Ensuring the agent &lt;em&gt;physically cannot&lt;/em&gt; commit a failure.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Why Harness Engineering Works: The "On the Loop" Principle
&lt;/h3&gt;

&lt;p&gt;In traditional development, you are &lt;strong&gt;"In the Loop."&lt;/strong&gt; You see a bug, you fix the code. &lt;br&gt;
In Harness Engineering, you stay &lt;strong&gt;"On the Loop."&lt;/strong&gt; If the agent makes a mistake, you don't fix the code; you &lt;strong&gt;fix the environment.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Deterministic Constraints (The "Brakes")
&lt;/h4&gt;

&lt;p&gt;LLMs are probabilistic; they are "spiky" in their intelligence. A harness wraps that chaos in deterministic code. For example, instead of &lt;em&gt;asking&lt;/em&gt; an AI not to break a dependency, you implement a custom linter that fails the CI build if it does. The harness turns a "suggestion" into a "physical law" of the repository.&lt;/p&gt;
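&lt;p&gt;As a minimal sketch (my own illustration, not from any particular toolchain; the file names and the rule itself are hypothetical), such a guardrail can be a small script that CI runs against the agent's change set:&lt;/p&gt;

```python
# Hypothetical CI guard (a "deterministic brake"): fail the build if an
# agent's change set touches files that only humans are allowed to edit.
PROTECTED_FILES = {"requirements.lock", "package-lock.json"}

def check_changed_files(changed_files):
    """Return the protected files a change set touches; CI fails if any."""
    return sorted(PROTECTED_FILES.intersection(changed_files))

if __name__ == "__main__":
    violations = check_changed_files(["src/app.py", "tests/test_app.py"])
    if violations:
        print("BLOCKED: protected files modified:", violations)
        raise SystemExit(1)
    print("dependency guard passed")
```

&lt;p&gt;The point is not this specific rule; it's that the check is code, so the agent cannot talk its way past it.&lt;/p&gt;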

&lt;h4&gt;
  
  
  2. The Verification Loop (Self-Healing)
&lt;/h4&gt;

&lt;p&gt;A core component of the harness is the &lt;strong&gt;Write-Test-Fix&lt;/strong&gt; cycle. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Agent&lt;/strong&gt; generates code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Harness&lt;/strong&gt; automatically executes that code in a sandbox.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Harness&lt;/strong&gt; captures the standard error (stderr) and feeds it back to the agent.
This moves the agent from "guessing" to "navigating" toward a passing test.&lt;/li&gt;
&lt;/ul&gt;
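&lt;p&gt;The loop above can be sketched in a few lines. This is a hedged illustration, not any particular harness's implementation: the "sandbox" here is just a subprocess, and the agent call is a stand-in function:&lt;/p&gt;

```python
import os
import subprocess
import sys
import tempfile

def run_in_sandbox(code_str, timeout=10):
    """Execute candidate code in a subprocess; return (exit code, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code_str)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True,
            timeout=timeout,
        )
        return result.returncode, result.stderr
    finally:
        os.unlink(path)

def write_test_fix(generate, max_rounds=3):
    """Harness loop: generate, execute, feed stderr back until it passes."""
    feedback = ""
    for _ in range(max_rounds):
        code = generate(feedback)   # stand-in for the actual agent call
        returncode, stderr = run_in_sandbox(code)
        if returncode == 0:
            return code             # verified: the code actually ran
        feedback = stderr           # the error becomes the next input
    return None                     # give up after max_rounds
```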

&lt;h4&gt;
  
  
  3. Machine-Readable Truth (&lt;code&gt;AGENTS.md&lt;/code&gt;)
&lt;/h4&gt;

&lt;p&gt;OpenAI’s team found that "tacit knowledge" (the stuff in your head or Slack) is the enemy of AI. A harness requires converting all tribal knowledge into a machine-readable format—like &lt;code&gt;AGENTS.md&lt;/code&gt; or structural tests—that the agent can query as a "Source of Truth."&lt;/p&gt;




&lt;h3&gt;
  
  
  The Pros and Cons
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reliability:&lt;/strong&gt; You move from "it usually works" to "it is verified to work."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; One engineer can manage 10+ agents because they are managing the &lt;em&gt;system&lt;/em&gt;, not the &lt;em&gt;output&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-Proof:&lt;/strong&gt; As models get smarter (GPT-4 to GPT-5), your harness stays valid. You’re just putting a bigger engine in the same well-built car.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Initial Overhead:&lt;/strong&gt; You have to build the linters, the sandboxes, and the documentation first. It feels "slower" at the start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rigidity:&lt;/strong&gt; A good harness limits what an AI can do. If you need a "creative" hallucination, a harness will kill it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Debt:&lt;/strong&gt; If your harness isn't well-maintained, the AI will get stuck in loops trying to follow outdated rules.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  The Shift in Your Job Description
&lt;/h3&gt;

&lt;p&gt;We are moving from being &lt;strong&gt;Code Authors&lt;/strong&gt; to &lt;strong&gt;Capability Architects.&lt;/strong&gt; When you sit down to work now, your first question shouldn't be &lt;em&gt;"How do I write this function?"&lt;/em&gt; It should be &lt;em&gt;"How do I build a harness so that an agent can write this function—and a thousand others like it—without me ever touching the keyboard?"&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>How I Actually Use Prompt Router to Write Better Code (Not Just Faster)</title>
      <dc:creator>Mustafa İlhan</dc:creator>
      <pubDate>Fri, 27 Mar 2026 16:54:16 +0000</pubDate>
      <link>https://dev.to/thisismustafailhan/how-i-actually-use-prompt-router-to-write-better-code-not-just-faster-4k1e</link>
      <guid>https://dev.to/thisismustafailhan/how-i-actually-use-prompt-router-to-write-better-code-not-just-faster-4k1e</guid>
      <description>&lt;p&gt;I originally thought tools like &lt;a href="https://prompt-router.pages.dev/dev" rel="noopener noreferrer"&gt;Prompt Router&lt;/a&gt; were about “automatically picking the best AI model.”&lt;/p&gt;

&lt;p&gt;That’s true—but it’s not why I kept using it.&lt;/p&gt;

&lt;p&gt;What made it stick for me was something much more practical:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I can &lt;strong&gt;store prompts, refine them, and run them across multiple models to compare results&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That changed how I work way more than automatic routing ever did.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Problem I Had
&lt;/h2&gt;

&lt;p&gt;When coding with AI, I kept running into this loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write a prompt&lt;/li&gt;
&lt;li&gt;Get a decent answer&lt;/li&gt;
&lt;li&gt;Wonder: &lt;em&gt;“Is this actually the best I can get?”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Copy-paste the same prompt into another model&lt;/li&gt;
&lt;li&gt;Lose track of which version was better&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It was messy. And honestly, I wasn’t improving my prompts—I was just guessing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Do Now (My Actual Workflow)
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://prompt-router.pages.dev/dev" rel="noopener noreferrer"&gt;Prompt Router&lt;/a&gt;, I treat prompts like reusable assets.&lt;/p&gt;

&lt;p&gt;Instead of this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;random prompt → random result → forgotten
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I do this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;strong&gt;prompt template&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Store it in the prompt library&lt;/li&gt;
&lt;li&gt;Run it across &lt;strong&gt;multiple models&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Compare outputs side-by-side&lt;/li&gt;
&lt;li&gt;Refine the prompt&lt;/li&gt;
&lt;li&gt;Repeat&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It feels much closer to how we treat code:&lt;br&gt;
→ iterate&lt;br&gt;
→ test&lt;br&gt;
→ improve&lt;/p&gt;
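&lt;p&gt;To make the idea concrete, here is a rough sketch of what "prompts as reusable assets" looks like in code. The template names and model callables are hypothetical stand-ins, not Prompt Router's actual API:&lt;/p&gt;

```python
# Hypothetical sketch of a personal prompt library: store a template
# once, fill it per task, and fan the same prompt out to several models.
LIBRARY = {
    "refactor": ("Refactor this function into smaller functions, "
                 "improve naming, and make it easier to test:\n{code}"),
    "debug": "Explain this bug with a root cause and a fix:\n{error}",
}

def render(template_name, **fields):
    """Fill a stored template with task-specific fields."""
    return LIBRARY[template_name].format(**fields)

def fan_out(prompt, models):
    """Send one prompt to every model; returns outputs keyed by model name."""
    return {name: call(prompt) for name, call in models.items()}
```

&lt;p&gt;Comparing the fan-out results side by side is the "test" step; editing the stored template is the "improve" step.&lt;/p&gt;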


&lt;h2&gt;
  
  
  Concrete Example #1: Refactoring a Function
&lt;/h2&gt;

&lt;p&gt;Let’s say I start with a basic prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Refactor this function to be more readable"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ll save it as a template and run it across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;one fast model&lt;/li&gt;
&lt;li&gt;one strong reasoning model&lt;/li&gt;
&lt;li&gt;one model that’s good at code style&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What I notice:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Model A → fast but superficial&lt;/li&gt;
&lt;li&gt;Model B → deeper structure changes&lt;/li&gt;
&lt;li&gt;Model C → best naming + readability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now I refine the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Refactor this function into smaller functions, improve naming, and make it easier to test"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run again → compare again.&lt;/p&gt;

&lt;p&gt;👉 The key:&lt;br&gt;
I’m not just getting answers—I’m &lt;strong&gt;learning what a good prompt looks like&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  Concrete Example #2: Debugging
&lt;/h2&gt;

&lt;p&gt;Prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Why is this async queue processing jobs twice?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it across models shows something interesting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One model → generic advice&lt;/li&gt;
&lt;li&gt;Another → points to race conditions&lt;/li&gt;
&lt;li&gt;Another → suggests idempotency + retry logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, each answer is incomplete.&lt;/p&gt;

&lt;p&gt;But together?&lt;/p&gt;

&lt;p&gt;→ I get a &lt;strong&gt;much clearer picture of the problem space&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is something I &lt;em&gt;never&lt;/em&gt; got when using just one model.&lt;/p&gt;




&lt;h2&gt;
  
  
  Concrete Example #3: Designing Systems
&lt;/h2&gt;

&lt;p&gt;When I work on architecture prompts like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Design a scalable event-driven system for handling payments"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Comparing models is incredibly valuable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One focuses on infrastructure (Kafka, queues)&lt;/li&gt;
&lt;li&gt;One focuses on data consistency&lt;/li&gt;
&lt;li&gt;One focuses on edge cases and failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of picking “the best answer,” I:&lt;br&gt;
→ combine the best parts&lt;/p&gt;

&lt;p&gt;It’s like having multiple senior engineers giving input.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Prompt Library Is the Underrated Feature
&lt;/h2&gt;

&lt;p&gt;This part surprised me.&lt;/p&gt;

&lt;p&gt;Saving prompts as templates means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I reuse high-quality prompts&lt;/li&gt;
&lt;li&gt;I stop rewriting the same instructions&lt;/li&gt;
&lt;li&gt;I build a personal “prompt toolkit” over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, I now have templates like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Refactor for readability + testability”&lt;/li&gt;
&lt;li&gt;“Explain bug with root cause + fix”&lt;/li&gt;
&lt;li&gt;“Generate production-ready code with edge cases”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And they keep getting better.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Biggest Shift: Prompting Became Iteration
&lt;/h2&gt;

&lt;p&gt;Before:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompting felt like guessing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompting feels like &lt;strong&gt;engineering&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test prompts&lt;/li&gt;
&lt;li&gt;Compare outputs&lt;/li&gt;
&lt;li&gt;Refine systematically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s basically:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prompt development instead of prompt writing&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Where Routing Still Helps (Quietly)
&lt;/h2&gt;

&lt;p&gt;The routing part still matters—but more in the background.&lt;/p&gt;

&lt;p&gt;While I’m comparing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple prompts don’t waste powerful models&lt;/li&gt;
&lt;li&gt;Complex prompts automatically hit stronger ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better results&lt;/li&gt;
&lt;li&gt;Lower cost&lt;/li&gt;
&lt;li&gt;Without thinking about it&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Real Value (For Me)
&lt;/h2&gt;

&lt;p&gt;Prompt Router didn’t just improve my outputs.&lt;/p&gt;

&lt;p&gt;It changed how I &lt;em&gt;think&lt;/em&gt; about using AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompts are reusable assets&lt;/li&gt;
&lt;li&gt;Models are tools to compare, not commit to&lt;/li&gt;
&lt;li&gt;Iteration beats guessing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you’re only using one model, you’re missing something important:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI isn’t about getting &lt;em&gt;an&lt;/em&gt; answer.&lt;br&gt;
It’s about exploring the space of possible answers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prompt Router makes that exploration easy.&lt;/p&gt;

&lt;p&gt;And once you start comparing, refining, and reusing prompts…&lt;/p&gt;

&lt;p&gt;…it’s really hard to go back.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Prompt Router: Write One Prompt. Open Every AI.</title>
      <dc:creator>Mustafa İlhan</dc:creator>
      <pubDate>Sat, 21 Mar 2026 12:42:27 +0000</pubDate>
      <link>https://dev.to/thisismustafailhan/prompt-router-write-one-prompt-open-every-ai-30od</link>
      <guid>https://dev.to/thisismustafailhan/prompt-router-write-one-prompt-open-every-ai-30od</guid>
      <description>&lt;h2&gt;
  
  
  The Problem: AI Comparison is Tedious
&lt;/h2&gt;

&lt;p&gt;If you use AI tools seriously — for writing, coding, research, marketing, or just exploring — you've hit the same wall. You write a prompt in ChatGPT. You get a decent response. Then you wonder: &lt;em&gt;what would Claude say? Or Gemini? Or Perplexity?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So you open a new tab. Paste the prompt. Wait. Open another tab. Paste again. And again. And again.&lt;/p&gt;

&lt;p&gt;By the time you've queried three different models, you've wasted two minutes on logistics that should take two seconds. The comparison itself — the interesting part — barely gets any time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Copying and pasting across AI tools isn't a workflow. It's friction.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's the problem Prompt Router solves.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Prompt Router?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prompt Router&lt;/strong&gt; is a free, single-page tool that lets you write your prompt once, then open it instantly in any of the major AI chat tools — prefilled and ready to go.&lt;/p&gt;

&lt;p&gt;Try it now at &lt;a href="https://prompt-router.pages.dev" rel="noopener noreferrer"&gt;prompt-router.pages.dev&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;No account. No login. No backend. Just you, your prompt, and every AI at your fingertips.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frix4vfz3taisjww5x1yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frix4vfz3taisjww5x1yl.png" alt=" " width="800" height="848"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Supported providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT&lt;/strong&gt; — OpenAI's flagship model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt; — Anthropic's model, great for nuanced writing and analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt; — Google's multimodal AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot&lt;/strong&gt; — Microsoft's AI, integrated with the web&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Perplexity&lt;/strong&gt; — AI-powered search and research&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek&lt;/strong&gt; — fast-rising open-weight model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grok&lt;/strong&gt; — xAI's model, with access to real-time X/Twitter data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Le Chat&lt;/strong&gt; — Mistral AI's conversational interface&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The experience is intentionally minimal:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Land on the page&lt;/strong&gt; — there's nothing to sign up for or configure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pick a quick template&lt;/strong&gt; (optional) — ELI5, Compare, Brainstorm, Cold Email, Debug Code, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edit or write your prompt&lt;/strong&gt; in the large textarea.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click any provider button&lt;/strong&gt; — the prompt opens in a new tab, prefilled and ready.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Or copy to clipboard&lt;/strong&gt; with one click (keyboard shortcut: &lt;code&gt;⌘ + Enter&lt;/code&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. Comparing five AI models now takes about 30 seconds.&lt;/p&gt;




&lt;h2&gt;
  
  
  Design Decisions Worth Mentioning
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Zero friction by default
&lt;/h3&gt;

&lt;p&gt;There's no onboarding flow. No modal asking for your email. No tutorial overlay. You land, you write, you send. Every second of unnecessary UI is a second stolen from actual thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Templates that actually matter
&lt;/h3&gt;

&lt;p&gt;The quick templates aren't generic filler. They're eight of the most commonly used prompt patterns across AI workflows: explain, compare, rewrite, summarize, brainstorm, debug, cold outreach, and decision analysis. Each template is designed to be &lt;em&gt;edited, not just used&lt;/em&gt; — they include placeholders like &lt;code&gt;[concept]&lt;/code&gt; and &lt;code&gt;[paste code]&lt;/code&gt; that make the expected customization obvious.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dark and light mode, automatically
&lt;/h3&gt;

&lt;p&gt;The interface reads your system preference and adapts. Dark mode uses a near-black background with warm gold accents. Light mode switches to a warm parchment palette. Both are designed to be easy on the eyes during long sessions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keyboard shortcuts for power users
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;⌘ + Enter&lt;/code&gt; copies your prompt instantly. &lt;code&gt;Escape&lt;/code&gt; clears the textarea when it's focused. These are small touches, but they matter when you're in flow and don't want to reach for the mouse.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Is It For?
&lt;/h2&gt;

&lt;p&gt;Prompt Router is built for people who think carefully about AI outputs — not just people who use AI occasionally.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt engineers&lt;/strong&gt; who want to A/B test the same prompt across models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developers&lt;/strong&gt; evaluating which model handles a specific coding task best&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writers and marketers&lt;/strong&gt; comparing tone and creativity across ChatGPT, Claude, and Gemini&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Researchers&lt;/strong&gt; cross-referencing answers from Perplexity and DeepSeek&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Product managers&lt;/strong&gt; using AI for competitive analysis, user interviews, and spec writing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Curious people&lt;/strong&gt; who just want to see how differently models interpret the same question&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you've ever caught yourself copying a prompt into a third AI tool, this tool was built for you.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;Prompt Router is a single HTML file. No framework, no build step, no backend, no dependencies beyond two Google Fonts. It runs entirely in your browser.&lt;/p&gt;

&lt;p&gt;A few technical notes for the curious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;URL prefilling&lt;/strong&gt; works by appending your prompt as a query parameter to each provider's URL. Most major AI chat tools support this — though it can change without notice, and a few may not auto-submit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nothing is stored&lt;/strong&gt; — the page has no analytics, no localStorage, no server. Your prompts never leave your browser.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Popup blockers&lt;/strong&gt; can interfere with opening multiple tabs at once. If a tab doesn't open, the fallback is a one-click copy to clipboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PWA-ready&lt;/strong&gt; — you can install Prompt Router on your phone or desktop as a standalone app via your browser's "Add to Home Screen" option.&lt;/li&gt;
&lt;/ul&gt;
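&lt;p&gt;For the curious, the URL-prefilling idea can be sketched like this. The endpoints and query-parameter names below are illustrative assumptions for demonstration; each provider uses its own, and they can change without notice:&lt;/p&gt;

```python
from urllib.parse import quote

# Illustrative sketch of URL prefilling: append the URL-encoded prompt
# as a query parameter. Endpoints and parameter names are assumptions.
PROVIDERS = {
    "perplexity": "https://www.perplexity.ai/search?q=",
    "chatgpt": "https://chatgpt.com/?q=",
}

def prefill_url(provider, prompt):
    """Build a provider URL with the prompt encoded as a query parameter."""
    return PROVIDERS[provider] + quote(prompt)
```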




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Prompt Router is live and free at &lt;a href="https://prompt-router.pages.dev" rel="noopener noreferrer"&gt;prompt-router.pages.dev&lt;/a&gt;. No account required, no time limit, no catch.&lt;/p&gt;

&lt;p&gt;The next time you're about to open a second tab and manually paste your prompt, remember: &lt;em&gt;there's a better way.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>promptengineering</category>
      <category>ai</category>
      <category>tooling</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Supported Languages on Serverless</title>
      <dc:creator>Mustafa İlhan</dc:creator>
      <pubDate>Mon, 02 Mar 2020 15:11:17 +0000</pubDate>
      <link>https://dev.to/thisismustafailhan/supported-languages-on-serverless-3ob5</link>
      <guid>https://dev.to/thisismustafailhan/supported-languages-on-serverless-3ob5</guid>
      <description>&lt;h1&gt;
  
  
  Serverless Architecture
&lt;/h1&gt;

&lt;p&gt;Serverless architecture lets you focus on developing your application without worrying about the details of the infrastructure, which saves you time during development.&lt;/p&gt;

&lt;p&gt;You write your functions and handlers, then connect events to those handlers. That's it. This approach is also called Function as a Service (FaaS).&lt;/p&gt;
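&lt;p&gt;For example, a minimal AWS Lambda-style handler in Python looks roughly like this. The event shape here is illustrative (an HTTP-style event with a JSON body, as a service such as API Gateway might deliver it); each provider defines its own:&lt;/p&gt;

```python
import json

# Minimal AWS Lambda-style handler sketch (Python runtime). The platform
# invokes this entry point whenever a connected event fires.
def handler(event, context=None):
    """Parse a JSON body from an HTTP-style event and return a response."""
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```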

&lt;h1&gt;
  
  
  Serverless Infrastructure Providers
&lt;/h1&gt;

&lt;p&gt;Several cloud providers offer a serverless service. The major ones are Amazon Web Services (AWS), Azure, Google Cloud, Firebase, Cloudflare, IBM Cloud, and Alibaba Cloud. They differ in supported languages, CPU, memory, and so on. This article focuses on the programming languages these cloud providers support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F85lbhkh17nsccshauqn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F85lbhkh17nsccshauqn1.png" alt="Supported Languages on Serverless" width="667" height="792"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Serverless Languages
&lt;/h1&gt;

&lt;p&gt;In the chart, the left-hand side lists the available programming languages and the right-hand side lists the cloud providers. A link between a language and a cloud provider means that language is supported by that provider. As you can see, not every language is supported by every cloud provider, although AWS and IBM Cloud offer a Runtime API and Docker-based runtimes that can support virtually any language on top of them. JavaScript is the only language supported by every cloud provider.&lt;/p&gt;

&lt;p&gt;Because a serverless application doesn't depend on the underlying infrastructure, you can switch between cloud providers relatively easily. The most important factor, then, is the programming language you develop your application in.&lt;/p&gt;

&lt;p&gt;The most recent list of supported languages is available at &lt;a href="https://serverless-lang.web.app/" rel="noopener noreferrer"&gt;https://serverless-lang.web.app/&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>azure</category>
    </item>
  </channel>
</rss>
