<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Şahin Uygutalp</title>
    <description>The latest articles on DEV Community by Şahin Uygutalp (@ebuodinde).</description>
    <link>https://dev.to/ebuodinde</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3845270%2F359d825a-2062-4513-b062-99363ef6ee37.jpeg</url>
      <title>DEV Community: Şahin Uygutalp</title>
      <link>https://dev.to/ebuodinde</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ebuodinde"/>
    <language>en</language>
    <item>
      <title>How I Detect AI-Generated Text Without Calling an LLM</title>
      <dc:creator>Şahin Uygutalp</dc:creator>
      <pubDate>Sat, 28 Mar 2026 11:01:24 +0000</pubDate>
      <link>https://dev.to/ebuodinde/how-i-detect-ai-generated-text-without-calling-an-llm-395h</link>
      <guid>https://dev.to/ebuodinde/how-i-detect-ai-generated-text-without-calling-an-llm-395h</guid>
      <description>&lt;p&gt;Most AI detection tools make the same mistake: they use an LLM to detect an LLM.&lt;/p&gt;

&lt;p&gt;That's expensive, slow, and ironic. You're spending money on the exact technology you're trying to filter out.&lt;/p&gt;

&lt;p&gt;For PR-Sentry — a GitHub Action that protects open source maintainers from AI-generated PR spam — I needed something different. Detection had to be free, fast, and impossible to rate-limit. Here's how I built it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The core insight: AI text has a statistical fingerprint
&lt;/h2&gt;

&lt;p&gt;Human writing is messy. Sentence lengths vary. Word choice is idiosyncratic. Structure is inconsistent.&lt;/p&gt;

&lt;p&gt;AI writing is suspiciously uniform. It favors certain words, certain patterns, certain rhythms. Not because it's programmed to — but because its training rewards this style.&lt;/p&gt;

&lt;p&gt;This uniformity is detectable without a model. You just need the right signals.&lt;/p&gt;




&lt;h2&gt;
  
  
  Signal 1: Buzzword density
&lt;/h2&gt;

&lt;p&gt;AI models consistently overuse a specific vocabulary. Not randomly — these words appear because they score well in RLHF training. "Robust", "seamless", "leverage", "utilize", "comprehensive", "innovative", "streamline".&lt;/p&gt;

&lt;p&gt;I built a weighted buzzword list and compute density per 100 words (the snippet below drops the weights for simplicity). A human developer writing a PR description might use one of these. An AI will use four.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;buzzword_density&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;words&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;hits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;words&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;BUZZWORDS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;hits&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;words&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Signal 2: Passive voice ratio
&lt;/h2&gt;

&lt;p&gt;AI consistently overuses passive constructions. "The function is called", "the error is handled", "the issue was fixed". Human developers write more directly.&lt;/p&gt;

&lt;p&gt;Passive voice detection is simple with a few regex patterns against auxiliary verb + past participle constructions. Not perfect — but in combination with other signals, it adds real weight.&lt;/p&gt;
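&lt;p&gt;Here's the shape of that check (an illustrative sketch, not PR-Sentry's actual pattern list):&lt;/p&gt;

```python
import re

# A rough sketch: a "to be" auxiliary followed by a word that looks like a
# past participle. It misses irregular participles ("was built") and
# false-positives on adjectives ("is tired"), which is acceptable for one
# weighted signal among several.
PASSIVE_RE = re.compile(
    r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b", re.IGNORECASE
)

def passive_voice_ratio(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    passive = sum(1 for s in sentences if PASSIVE_RE.search(s))
    return passive / len(sentences)
```

&lt;p&gt;"The function is called. The error is handled." scores 1.0; direct, active prose stays near zero.&lt;/p&gt;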




&lt;h2&gt;
  
  
  Signal 3: Sentence length uniformity
&lt;/h2&gt;

&lt;p&gt;This is the subtlest signal, and the hardest one to fake. Human writing has high variance in sentence length. Some sentences are short. Others run longer because the thought requires it, building toward a point that the writer is trying to make clearly before moving on.&lt;/p&gt;

&lt;p&gt;AI writing has low variance. Everything converges toward a medium length. Calculate the standard deviation of sentence lengths — if it's suspiciously low, something is off.&lt;/p&gt;
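&lt;p&gt;A sketch of that computation (the normalization here is illustrative; the production formula may differ):&lt;/p&gt;

```python
import re
import statistics

def sentence_length_uniformity(text):
    # Word count per sentence; a punctuation split is crude but cheap.
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) > 1:
        # Coefficient of variation (stdev / mean), inverted so that
        # identical-length sentences score 1.0 and varied prose scores lower.
        cv = statistics.pstdev(lengths) / statistics.mean(lengths)
        return 1.0 / (1.0 + cv)
    return 0.0
```

&lt;p&gt;Two same-length sentences score a perfect 1.0; a one-word sentence next to an eleven-word one drops to roughly 0.55.&lt;/p&gt;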




&lt;h2&gt;
  
  
  Signal 4: Repetition score
&lt;/h2&gt;

&lt;p&gt;AI frequently restates the same point in different words. "This fixes the bug. The issue has been resolved. The problem no longer occurs." Count unique trigrams vs total trigrams. High repetition = low uniqueness = suspicious.&lt;/p&gt;
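&lt;p&gt;The whole signal fits in a few lines (a sketch of the idea, not the exact production code):&lt;/p&gt;

```python
def repetition_score(text):
    # 0.0 means every trigram is unique; values near 1.0 mean the same
    # phrases keep coming back in different sentences.
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    return 1.0 - len(set(trigrams)) / len(trigrams)
```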




&lt;h2&gt;
  
  
  Combining the signals
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;slop_score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;buzzword_density&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;passive_voice_ratio&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;sentence_length_uniformity&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;repetition_score&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;is_slop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;slop_score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The weights aren't arbitrary — they reflect how reliably each signal distinguishes AI from human writing in my test corpus. Buzzword density and repetition are the strongest predictors. Sentence uniformity catches cases where the other two don't fire.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bonus: Shannon entropy for secret detection
&lt;/h2&gt;

&lt;p&gt;This isn't about AI detection — but it's the same "statistics over syntax" philosophy applied to security.&lt;/p&gt;

&lt;p&gt;Regex patterns for API keys have a fundamental problem: they only catch known formats. A new service launches, uses a different key format, and your patterns miss it.&lt;/p&gt;

&lt;p&gt;Shannon entropy catches anything. Real secrets — API keys, tokens, passwords — are high-entropy strings. Human-readable text is not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;collections&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;shannon_entropy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;counts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Counter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;length&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;counts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Flag anything above 4.5 bits per character
&lt;/span&gt;&lt;span class="n"&gt;is_suspicious&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;shannon_entropy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;4.5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;/code&gt; → entropy: ≈4.7 → flagged.&lt;br&gt;&lt;br&gt;
&lt;code&gt;hello world&lt;/code&gt; → entropy: ≈2.8 → clean.&lt;/p&gt;

&lt;p&gt;One caveat: per-character entropy is capped at log2 of the string length, so a short token like a 20-character AWS key ID tops out around 4.3 and can never cross the 4.5 threshold. That's what the regex patterns are for; entropy earns its keep on longer secrets.&lt;/p&gt;




&lt;h2&gt;
  
  
  Does it work?
&lt;/h2&gt;

&lt;p&gt;In testing against a corpus of real PRs and AI-generated PRs, the slop detector catches roughly 70% of AI-generated descriptions with a false positive rate under 5%. It's not perfect — a careful human could write to fool it, and a well-prompted AI can avoid the obvious buzzwords.&lt;/p&gt;

&lt;p&gt;But that's fine. The goal isn't perfection. The goal is to avoid calling Claude on every PR that starts with "This PR implements a robust solution to seamlessly address the issue."&lt;/p&gt;

&lt;p&gt;The LLM runs on the hard cases. The heuristics handle the obvious ones for free.&lt;/p&gt;
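&lt;p&gt;The routing is a simple two-tier gate. A hypothetical sketch (function names are illustrative, not PR-Sentry's actual API):&lt;/p&gt;

```python
SLOP_THRESHOLD = 60  # same cutoff as the slop_score check above

def triage(pr_text, slop_score, llm_review):
    # Tier 1: the free statistical gate. Obvious slop never reaches an API.
    score = slop_score(pr_text)
    if score >= SLOP_THRESHOLD:
        return {"verdict": "flagged", "reason": "slop score %d" % score}
    # Tier 2: only the remaining, harder cases spend LLM tokens.
    return llm_review(pr_text)
```

&lt;p&gt;Passing the LLM reviewer in as a callback keeps the cheap path completely API-free.&lt;/p&gt;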




&lt;h2&gt;
  
  
  The full implementation is in PR-Sentry
&lt;/h2&gt;

&lt;p&gt;If you want to see the complete code — including the full buzzword list, the passive voice patterns, and how this integrates with the GitHub Actions workflow — it's all open source:&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/Ebuodinde/PR_SENTRY" rel="noopener noreferrer"&gt;github.com/Ebuodinde/PR_SENTRY&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What signals would you add? I'm especially curious whether anyone has tried perplexity scoring at inference time — the compute cost might be worth it for high-value repos.&lt;/p&gt;

</description>
      <category>python</category>
      <category>opensource</category>
      <category>ai</category>
      <category>security</category>
    </item>
    <item>
      <title>I Built a GitHub Action to Stop AI-Generated PRs Before They Reach My Queue</title>
      <dc:creator>Şahin Uygutalp</dc:creator>
      <pubDate>Thu, 26 Mar 2026 21:18:13 +0000</pubDate>
      <link>https://dev.to/ebuodinde/i-built-a-github-action-to-stop-ai-generated-prs-before-they-reach-my-queue-40bj</link>
      <guid>https://dev.to/ebuodinde/i-built-a-github-action-to-stop-ai-generated-prs-before-they-reach-my-queue-40bj</guid>
      <description>&lt;p&gt;Last year, Daniel Stenberg — the author of curl — shut down his project's bug bounty program.&lt;/p&gt;

&lt;p&gt;The reason? &lt;strong&gt;20% of the incoming reports were AI-generated garbage.&lt;/strong&gt; Not just low-quality — worthless. Hallucinated vulnerabilities, copy-pasted exploit templates, fabricated CVEs. His team was spending more time triaging noise than fixing real bugs.&lt;/p&gt;

&lt;p&gt;This is the asymmetry nobody talks about: AI can generate 500 lines of plausible-looking code in two seconds. Reviewing it still takes a human &lt;em&gt;hours&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And it's breaking open source.&lt;/p&gt;




&lt;h2&gt;
  
  
  The industry's fix made things worse
&lt;/h2&gt;

&lt;p&gt;When the "AI PR flood" problem became obvious, the market responded with AI code review bots — CodeRabbit, Copilot review, and friends.&lt;/p&gt;

&lt;p&gt;Here's the problem: they review code the way an anxious intern would. They flood your PR timeline with comments about variable naming, whitespace, missing docstrings. They are glorified linters with a chat interface.&lt;/p&gt;

&lt;p&gt;Maintainers went from dealing with &lt;em&gt;one&lt;/em&gt; source of noise (AI-generated PRs) to dealing with &lt;em&gt;two&lt;/em&gt; (AI-generated PRs + AI-generated review comments).&lt;/p&gt;

&lt;p&gt;I call this double review fatigue. And it's what made me build something different.&lt;/p&gt;




&lt;h2&gt;
  
  
  A different approach: Zero-Nitpick
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;&lt;a href="https://github.com/Ebuodinde/PR_SENTRY" rel="noopener noreferrer"&gt;PR-Sentry&lt;/a&gt;&lt;/strong&gt; — a GitHub Action with one core rule:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Never comment on style. Only report things that break in production.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That means: security vulnerabilities, runtime crashes, memory leaks, race conditions. Nothing else.&lt;/p&gt;

&lt;p&gt;Here's how it works under the hood:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Statistical slop detection (no LLM needed)
&lt;/h3&gt;

&lt;p&gt;Before calling any API, PR-Sentry runs a local analysis on the PR description and diff. It calculates a "slop score" based on buzzword density (&lt;code&gt;robust&lt;/code&gt;, &lt;code&gt;seamless&lt;/code&gt;, &lt;code&gt;leverage&lt;/code&gt;, &lt;code&gt;synergy&lt;/code&gt;...), passive voice ratio, sentence length patterns, and repetition score.&lt;/p&gt;

&lt;p&gt;If the PR scores 60 or higher, it's flagged as AI slop — without burning a single API token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;slop_score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;buzzword_density&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;passive_voice_ratio&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;sentence_length_avg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;
    &lt;span class="n"&gt;repetition_score&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;is_slop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;slop_score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Security scanning with entropy analysis
&lt;/h3&gt;

&lt;p&gt;The diff parser checks for 50+ security patterns before any LLM touches the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;PATTERNS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;aws_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AKIA[0-9A-Z]{16}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;github_pat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gh[pousr]_[A-Za-z0-9_]{36,}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sk-[A-Za-z0-9]{48}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sql_inject&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT.*FROM.*WHERE.*=.*\$&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;xss&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;        &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;innerHTML\s*=|document\.write\(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# ... 45 more
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;High-entropy strings (Shannon entropy &amp;gt; 4.5) are also flagged to catch accidentally committed secrets.&lt;/p&gt;
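&lt;p&gt;The entropy check itself is tiny. A minimal version (the shape of the technique, not the repo's exact code):&lt;/p&gt;

```python
import math
from collections import Counter

def shannon_entropy(s):
    # Bits per character: random key material scores high,
    # human-readable text stays well below the 4.5 cutoff.
    if not s:
        return 0.0
    length = len(s)
    return -sum((c / length) * math.log2(c / length)
                for c in Counter(s).values())
```

&lt;p&gt;AWS's documentation example secret key scores about 4.7 bits per character; ordinary English prose stays under the threshold.&lt;/p&gt;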

&lt;h3&gt;
  
  
  3. Constrained AI review
&lt;/h3&gt;

&lt;p&gt;Only PRs that pass the slop filter &lt;em&gt;and&lt;/em&gt; show signs of potential runtime issues reach the LLM. And the system prompt is strict:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"You are a zero-nitpick code reviewer. Report ONLY: runtime crashes, memory leaks, race conditions, security vulnerabilities. If the code is logically sound, say nothing."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One concise comment. Or silence. Never noise.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup takes 2 minutes
&lt;/h2&gt;

&lt;p&gt;Add this to &lt;code&gt;.github/workflows/pr-sentry.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PR-Sentry Review&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;synchronize&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;review&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fetch-depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run PR-Sentry&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ebuodinde/PR_SENTRY@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;github-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;anthropic-api-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.ANTHROPIC_API_KEY }}&lt;/span&gt;
          &lt;span class="c1"&gt;# Optional: switch providers&lt;/span&gt;
          &lt;span class="c1"&gt;# provider: openai  # or deepseek&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then add &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt; to your repo secrets. Done.&lt;/p&gt;

&lt;p&gt;No database. No external server. No lock-in — it supports Anthropic, OpenAI, and DeepSeek out of the box.&lt;/p&gt;




&lt;h2&gt;
  
  
  It also works locally via MCP
&lt;/h2&gt;

&lt;p&gt;If you use Cursor or Claude Code, PR-Sentry ships an MCP server. You can run the slop detector and security scanner against your diff &lt;em&gt;before pushing&lt;/em&gt;, directly from your IDE.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;The tool is at v3.0.0 with 262 passing tests. What I want to improve next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smarter language-aware slop detection (Python idioms vs JS patterns)&lt;/li&gt;
&lt;li&gt;VS Code extension for local pre-push checks&lt;/li&gt;
&lt;li&gt;Feedback loop: learning from maintainer decisions over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you maintain an open source project and review fatigue is real for you — give it a try. Remove it anytime; it's just a YAML file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/Ebuodinde/PR_SENTRY" rel="noopener noreferrer"&gt;→ github.com/Ebuodinde/PR_SENTRY&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you noticed an uptick in AI-generated PRs in your repos? Curious how others are handling it — drop a comment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>github</category>
      <category>security</category>
    </item>
  </channel>
</rss>
