<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AytuncYildizli</title>
    <description>The latest articles on DEV Community by AytuncYildizli (@aytuncyildizli).</description>
    <link>https://dev.to/aytuncyildizli</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766191%2F92d4a2fa-5dfb-4962-8dd8-a925049dbf9c.jpeg</url>
      <title>DEV Community: AytuncYildizli</title>
      <link>https://dev.to/aytuncyildizli</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aytuncyildizli"/>
    <language>en</language>
    <item>
      <title>I reverse-engineered X's open-source algorithm into a Chrome extension that predicts your reach before you post</title>
      <dc:creator>AytuncYildizli</dc:creator>
      <pubDate>Fri, 03 Apr 2026 21:04:40 +0000</pubDate>
      <link>https://dev.to/aytuncyildizli/i-reverse-engineered-xs-open-source-algorithm-into-a-chrome-extension-that-predicts-your-reach-5hmd</link>
      <guid>https://dev.to/aytuncyildizli/i-reverse-engineered-xs-open-source-algorithm-into-a-chrome-extension-that-predicts-your-reach-5hmd</guid>
      <description>&lt;p&gt;I kept writing tweets, posting them, and getting 200 views. Same effort, wildly different outcomes. So I went to &lt;code&gt;twitter/the-algorithm&lt;/code&gt; on GitHub to find out why.&lt;/p&gt;

&lt;p&gt;Turns out X published exactly how they rank content. Replies are worth 27x a like. Your own reply to your own tweet? 150x. Bookmarks? 20x. External links? -50% reach. It's all in the source code.&lt;/p&gt;

&lt;p&gt;I extracted 36 scoring rules from the algorithm and built a Chrome extension that grades your tweets in real time as you type.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;You open X.com. Start typing a tweet. A small overlay appears:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Score: 72/100&lt;/strong&gt; (updating live as you type)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Predicted reach: ~14,200 people&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remove the link → 21,600&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Add an image → 19,600&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Both → 34,400&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. Know your reach before you post.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 36 rules
&lt;/h2&gt;

&lt;p&gt;Every tweet is scored across 5 categories:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Rules&lt;/th&gt;
&lt;th&gt;What it checks&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Hook&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;Opening strength, open loops, contrarian claims, story openers, pattern interrupts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structure&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Length, hashtag/emoji spam, thread length, line breaks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engagement&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;CTA presence, bookmark-worthy formats&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Penalties&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Engagement bait, AI slop, hedging, external links, combative tone, grammar&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bonuses&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;First-person voice, media, questions, sentiment, readability, surprise&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't vibes. They're derived from the actual algorithm weights:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reply         → 27x a like    (twitter/the-algorithm)
Self-reply    → 150x a like   (twitter/the-algorithm)
Bookmark      → 20x a like    (twitter/the-algorithm)
Media         → 2x Earlybird boost
External link → -30% to -50% reach
3+ hashtags   → ~40% engagement drop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
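&lt;p&gt;As a sketch, here's how those weights could be encoded and applied — a minimal illustration, where the action names and the weight table are hypothetical, not the extension's actual code:&lt;/p&gt;

```javascript
// Published engagement weights as a lookup table (illustrative names).
const ACTION_WEIGHTS = {
  like: 1,
  reply: 27,
  selfReply: 150,
  bookmark: 20,
};

// Weighted engagement score for one tweet's interaction counts.
function engagementScore(counts) {
  return Object.entries(counts).reduce(
    (sum, [action, n]) => sum + n * (ACTION_WEIGHTS[action] ?? 0),
    0
  );
}

// 10 likes + 2 replies + 1 self-reply + 3 bookmarks
console.log(engagementScore({ like: 10, reply: 2, selfReply: 1, bookmark: 3 }));
// → 10*1 + 2*27 + 1*150 + 3*20 = 274
```

&lt;p&gt;Replies dominate: two replies already outweigh fifty likes.&lt;/p&gt;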



&lt;h2&gt;
  
  
  The reach prediction formula
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;predictedReach&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;baseReach&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;contentMultiplier    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;so&lt;/span&gt; &lt;span class="nx"&gt;score&lt;/span&gt; &lt;span class="mi"&gt;75&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;timeMultiplier       &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;peak&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1.25&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;off&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;peak&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.85&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;trendMultiplier      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;matches&lt;/span&gt; &lt;span class="nx"&gt;trending&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.15&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;mediaMultiplier      &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.38&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;linkPenalty          &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;external&lt;/span&gt; &lt;span class="nx"&gt;link&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.55&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;healthMultiplier     &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;account&lt;/span&gt; &lt;span class="nx"&gt;health&lt;/span&gt; &lt;span class="mf"&gt;0.6&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.3&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;calibrationFactor    &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;auto&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;corrects&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;your&lt;/span&gt; &lt;span class="nx"&gt;own&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
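&lt;p&gt;In plain JavaScript, that chain of multipliers could look like this — a sketch using the values listed above; the function and option names are illustrative:&lt;/p&gt;

```javascript
// Hypothetical sketch of the prediction pipeline described above.
// Multiplier values come from the formula; names are not the extension's API.
function predictReach(baseReach, opts) {
  const contentMultiplier = opts.score / 50;                // score 75 → 1.5x
  const timeMultiplier = opts.peakHours ? 1.25 : 0.85;
  const trendMultiplier = opts.matchesTrend ? 1.15 : 1.0;
  const mediaMultiplier = opts.hasMedia ? 1.38 : 1.0;
  const linkPenalty = opts.hasExternalLink ? 0.55 : 1.0;
  return Math.round(
    baseReach *
      contentMultiplier *
      timeMultiplier *
      trendMultiplier *
      mediaMultiplier *
      linkPenalty *
      opts.healthMultiplier *    // 0.6–1.3 from account health
      opts.calibrationFactor     // learned from your own history
  );
}

console.log(
  predictReach(10000, {
    score: 75, peakHours: true, matchesTrend: false,
    hasMedia: true, hasExternalLink: false,
    healthMultiplier: 1.0, calibrationFactor: 1.0,
  })
);
// → 10000 * 1.5 * 1.25 * 1.38 = 25875
```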



&lt;p&gt;The calibration factor is the interesting part. After you post, ReachOS fetches your real metrics at 15-minute intervals, compares predicted vs actual, and adjusts the model. It gets more accurate the more you use it.&lt;/p&gt;
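&lt;p&gt;One simple way to implement that correction — a sketch of the idea, not necessarily what ReachOS actually ships — is an exponential moving average over the actual/predicted ratio:&lt;/p&gt;

```javascript
// Nudge the calibration factor toward the observed actual/predicted
// ratio. alpha controls how fast old predictions are forgotten.
function updateCalibration(factor, predicted, actual, alpha = 0.2) {
  const ratio = actual / predicted; // 1.0 means the prediction was exact
  return factor * (1 - alpha) + factor * ratio * alpha;
}

let factor = 1.0;
factor = updateCalibration(factor, 14200, 9000); // over-predicted → shrink
console.log(factor.toFixed(3)); // → "0.927"
```

&lt;p&gt;Each post moves the factor a little, so a consistently over-optimistic model gets dragged down toward your real numbers.&lt;/p&gt;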

&lt;h2&gt;
  
  
  X-Ray mode
&lt;/h2&gt;

&lt;p&gt;Toggle it on and every tweet in your timeline gets a color-coded score pill. Red through purple. Scroll your feed and immediately see which tweets the algorithm would push.&lt;/p&gt;
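&lt;p&gt;The pill itself is just a threshold lookup on the score — something like this, where the intermediate colors and cutoffs are a guess from the red-through-purple description:&lt;/p&gt;

```javascript
// Map a 0–100 score to a pill color (thresholds are illustrative).
function pillColor(score) {
  if (score >= 85) return "purple";
  if (score >= 70) return "green";
  if (score >= 50) return "yellow";
  if (score >= 30) return "orange";
  return "red";
}

console.log(pillColor(72)); // → "green"
```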

&lt;p&gt;I use this more than the composer scoring. You start to internalize the patterns fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI features (optional, BYOK)
&lt;/h2&gt;

&lt;p&gt;The core scoring works entirely client-side with zero API calls. But if you bring your own Anthropic API key, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Slop detection&lt;/strong&gt; — 28 weighted patterns that flag AI-sounding language, plus Claude verification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hook analysis&lt;/strong&gt; — 6-dimension assessment of your opening line&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-optimize&lt;/strong&gt; — 5 rounds of iterative rewriting, keeps the best version&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-reply generator&lt;/strong&gt; — Creates a reply to your own tweet (that 150x algorithm boost)&lt;/li&gt;
&lt;/ul&gt;
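&lt;p&gt;To give a flavor of the slop detection, here's a minimal sketch of weighted pattern matching — three illustrative patterns out of the 28, with made-up weights:&lt;/p&gt;

```javascript
// Each pattern carries a weight; the total is the tweet's slop score.
// Patterns and weights here are illustrative, not the shipped list.
const SLOP_PATTERNS = [
  { pattern: /\bdelve into\b/i, weight: 3 },
  { pattern: /\bit'?s worth noting\b/i, weight: 2 },
  { pattern: /\bgame.changer\b/i, weight: 2 },
];

function slopScore(text) {
  return SLOP_PATTERNS.reduce(
    (sum, { pattern, weight }) => sum + (pattern.test(text) ? weight : 0),
    0
  );
}

console.log(slopScore("Let's delve into this game-changer")); // → 5
```

&lt;p&gt;The optional Claude pass then double-checks flagged tweets, so a single false-positive phrase doesn't tank your score.&lt;/p&gt;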

&lt;p&gt;No keys required for the base experience. No account needed. No data leaves your browser unless you opt in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The extension watches the X.com DOM for the tweet composer. On every keystroke (debounced), the rules engine runs locally and updates the overlay. After 2 seconds of idle, it optionally calls the API for AI-powered suggestions.&lt;/p&gt;
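&lt;p&gt;That keystroke handling can be sketched like this — the 2-second idle delay matches the description, while the 150&amp;nbsp;ms debounce and the function names (&lt;code&gt;scoreTweet&lt;/code&gt;, &lt;code&gt;fetchAiSuggestions&lt;/code&gt;) are assumptions:&lt;/p&gt;

```javascript
// Two timers per keystroke: a short debounce for the local rules engine,
// and a longer idle timer before the optional remote AI call.
function makeComposerHandler(scoreTweet, fetchAiSuggestions) {
  let debounceTimer, idleTimer;
  return function onKeystroke(text) {
    clearTimeout(debounceTimer);
    clearTimeout(idleTimer);
    debounceTimer = setTimeout(() => scoreTweet(text), 150);      // local, cheap
    idleTimer = setTimeout(() => fetchAiSuggestions(text), 2000); // remote, opt-in
  };
}
```

&lt;p&gt;Because both timers reset on every keystroke, the API is only ever called once you actually stop typing.&lt;/p&gt;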

&lt;p&gt;The API is a Next.js app deployable on Vercel with a Neon PostgreSQL database. Four cron jobs handle metric fetching, weight learning, batch optimization, and forecast calibration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Chrome Extension
  ├── Composer Detector (DOM watch)
  ├── Rules Engine (36 rules, instant, client-side)
  ├── Score Overlay (React)
  └── X-Ray Mode (timeline pills)
        │
        ▼ (optional, after 2s idle)
Next.js API (Vercel)
  ├── /analyze (AI delta scoring)
  ├── /suggest (hook/CTA rewrites)
  ├── /account-health (X profile scoring)
  └── Cron jobs (metrics, learning, calibration)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Quick start
&lt;/h2&gt;

&lt;p&gt;No server needed for basic usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/AytuncYildizli/reach-optimizer.git
&lt;span class="nb"&gt;cd &lt;/span&gt;reach-optimizer
pnpm &lt;span class="nb"&gt;install
&lt;/span&gt;pnpm &lt;span class="nt"&gt;--filter&lt;/span&gt; @reach/extension build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Load the &lt;code&gt;apps/extension/dist/&lt;/code&gt; folder as an unpacked extension. Go to X.com and start typing.&lt;/p&gt;

&lt;p&gt;For the full experience with AI and tracking, copy &lt;code&gt;.env.example&lt;/code&gt;, add your keys, and run &lt;code&gt;pnpm dev&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned building this
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The algorithm is surprisingly transparent.&lt;/strong&gt; X published the weights. Most people just never read them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Links are reach killers.&lt;/strong&gt; Everyone knows this intuitively, but seeing "-50% reach" quantified changes behavior fast. Put links in replies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-replies are broken good.&lt;/strong&gt; 150x a like is insane. The extension generates self-replies for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI detection is the new spam filter.&lt;/strong&gt; Negative sentiment toward AI-sounding content is a real penalty. The slop detector catches phrases like "delve into", "it's worth noting", "game-changer" before you post them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Calibration matters more than the model.&lt;/strong&gt; The base formula is rough. But after ~50 tweets of calibration data, predictions get within 20% of actual reach.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/AytuncYildizli/reach-optimizer" rel="noopener noreferrer"&gt;AytuncYildizli/reach-optimizer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Star it if it's useful. Issues and PRs welcome. The rules engine is in &lt;code&gt;packages/rules-engine&lt;/code&gt; if you want to add rules or adjust weights.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built this because I was mass-deleting draft tweets that flopped. Now I delete them before posting.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>twitter</category>
      <category>opensource</category>
      <category>javascript</category>
    </item>
    <item>
      <title>I stopped writing prompts. Here's what I use instead.</title>
      <dc:creator>AytuncYildizli</dc:creator>
      <pubDate>Wed, 11 Feb 2026 10:57:56 +0000</pubDate>
      <link>https://dev.to/aytuncyildizli/i-stopped-writing-prompts-heres-what-i-use-instead-325i</link>
      <guid>https://dev.to/aytuncyildizli/i-stopped-writing-prompts-heres-what-i-use-instead-325i</guid>
      <description>&lt;p&gt;markdown&lt;br&gt;
Every prompt I wrote was garbage. &lt;/p&gt;

&lt;p&gt;Not because I don't know prompt engineering — I do. I just couldn't be bothered to write &lt;code&gt;&amp;lt;role&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;constraints&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;success_criteria&amp;gt;&lt;/code&gt; every single time. So I'd type "build me a dashboard" and wonder why Claude gave me something I had to rewrite.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem is friction, not knowledge
&lt;/h2&gt;

&lt;p&gt;You know a good prompt needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A role definition&lt;/li&gt;
&lt;li&gt;Clear task description
&lt;/li&gt;
&lt;li&gt;Explicit constraints&lt;/li&gt;
&lt;li&gt;Success criteria&lt;/li&gt;
&lt;li&gt;Context about your project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But writing all that for every task? Nobody does it consistently. So we all default to lazy prompts and get lazy outputs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/AytuncYildizli/reprompter" rel="noopener noreferrer"&gt;RePrompter&lt;/a&gt; is a skill file — not a SaaS, not an app, not a VS Code extension. It's a 1000-line &lt;code&gt;SKILL.md&lt;/code&gt; that teaches your LLM to interview you before generating a prompt.&lt;/p&gt;

&lt;p&gt;Here's what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0umlq0m4di76hg28yfes.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0umlq0m4di76hg28yfes.gif" alt="RePrompter demo" width="787" height="806"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You type your messy prompt&lt;/strong&gt; — "uhh build me a real-time analytics dashboard, needs charts and stuff, maybe websockets"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It asks 4 smart questions&lt;/strong&gt; — not generic fluff. If you mention "tracking", it asks tracking questions. If you mention "API", it asks API questions. Clickable options, not free text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It detects complexity&lt;/strong&gt; — single file change? Quick mode, no interview. Frontend + backend + tests? Auto-suggests team mode with parallel agents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It generates a structured prompt&lt;/strong&gt; — XML-tagged output with &lt;code&gt;&amp;lt;role&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;task&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;constraints&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;success_criteria&amp;gt;&lt;/code&gt;. Ready to execute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It scores quality&lt;/strong&gt; — before vs after, on 6 dimensions. Typical improvement: 1.6/10 → 9.0/10.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The team mode is where it gets interesting
&lt;/h2&gt;

&lt;p&gt;When RePrompter detects your task spans multiple systems, it doesn't just write one prompt. It generates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;team coordination brief&lt;/strong&gt; with handoff rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-agent sub-prompts&lt;/strong&gt; with scoped responsibilities
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared contracts&lt;/strong&gt; so agents don't drift&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One messy sentence → 3 agents working in parallel with coordination rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quality scoring
&lt;/h2&gt;

&lt;p&gt;Every transformation is scored on 6 dimensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Clarity&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specificity&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structure&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Constraints&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verifiability&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decomposition&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most rough prompts score 1-3. RePrompter typically outputs 8-9+.&lt;/p&gt;
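&lt;p&gt;The weighted total is straightforward — a sketch using the table's weights, assuming each dimension is scored 0–10:&lt;/p&gt;

```javascript
// Weights from the table above; they sum to 1.0.
const WEIGHTS = {
  clarity: 0.20, specificity: 0.20, structure: 0.15,
  constraints: 0.15, verifiability: 0.15, decomposition: 0.15,
};

// Weighted average of per-dimension scores (0–10 each).
function qualityScore(scores) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, w]) => sum + w * (scores[dim] ?? 0),
    0
  );
}

console.log(qualityScore({
  clarity: 9, specificity: 9, structure: 9,
  constraints: 9, verifiability: 9, decomposition: 9,
}).toFixed(1)); // → "9.0"
```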

&lt;h2&gt;
  
  
  Installation (30 seconds)
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mkdir -p skills/reprompter
curl -sL https://github.com/AytuncYildizli/reprompter/archive/main.tar.gz | \
  tar xz --strip-components=1 -C skills/reprompter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Works with Claude Code (auto-discovers &lt;code&gt;SKILL.md&lt;/code&gt;), OpenClaw, or any LLM. Zero dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it's NOT
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Not a SaaS with a monthly fee&lt;/li&gt;
&lt;li&gt;Not a Chrome extension&lt;/li&gt;
&lt;li&gt;Not a prompt library you copy-paste from&lt;/li&gt;
&lt;li&gt;Not model-specific — works with Claude, GPT, Gemini, anything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's a behavioral spec that makes your LLM do the boring work of prompt engineering for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;⭐ &lt;a href="https://github.com/AytuncYildizli/reprompter" rel="noopener noreferrer"&gt;github.com/AytuncYildizli/reprompter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. PRs welcome — someone already submitted a bug fix on day one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What's the laziest prompt you've ever written that actually worked? I'm curious.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
