<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yuvraj Angad Singh</title>
    <description>The latest articles on DEV Community by Yuvraj Angad Singh (@yuvrajangadsingh).</description>
    <link>https://dev.to/yuvrajangadsingh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3757408%2Fbeedae41-8e6f-4c97-a05c-19c16dfebfa6.jpeg</url>
      <title>DEV Community: Yuvraj Angad Singh</title>
      <link>https://dev.to/yuvrajangadsingh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuvrajangadsingh"/>
    <language>en</language>
    <item>
      <title>I Scanned 31 AI-Built Repos. Each Tool Leaves Behind a Different Mess.</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Wed, 08 Apr 2026 08:26:21 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/i-scanned-31-ai-built-repos-each-tool-leaves-behind-a-different-mess-4k3</link>
      <guid>https://dev.to/yuvrajangadsingh/i-scanned-31-ai-built-repos-each-tool-leaves-behind-a-different-mess-4k3</guid>
      <description>&lt;p&gt;46% of every issue I found was the same thing: deep nesting. AI models keep stuffing logic into the same function instead of breaking it apart. That pattern showed up in every tool I tested.&lt;/p&gt;

&lt;p&gt;I scanned 31 public JS/TS repos with &lt;a href="https://github.com/yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;vibecheck&lt;/a&gt;, a linter with 34 rules for AI-specific code smells. 10 repos from Cursor, 11 from Lovable, 10 from Bolt.new. Only public repos with real application code, no scaffolds or starters.&lt;/p&gt;

&lt;p&gt;This is not a scientific benchmark. But it's a real sample of real repos people shipped. I manually reviewed every error-level finding and noted which patterns were real issues vs template boilerplate. The &lt;a href="https://github.com/yuvrajangadsingh/vibecheck/tree/main/article" rel="noopener noreferrer"&gt;full repo list and raw scan data&lt;/a&gt; are public if you want to verify.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Repos&lt;/th&gt;
&lt;th&gt;Issues&lt;/th&gt;
&lt;th&gt;Files&lt;/th&gt;
&lt;th&gt;Issues/file&lt;/th&gt;
&lt;th&gt;Errors&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;9,534&lt;/td&gt;
&lt;td&gt;1,499&lt;/td&gt;
&lt;td&gt;6.36&lt;/td&gt;
&lt;td&gt;77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lovable&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;4,832&lt;/td&gt;
&lt;td&gt;1,886&lt;/td&gt;
&lt;td&gt;2.56&lt;/td&gt;
&lt;td&gt;27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bolt&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;329&lt;/td&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;1.55&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;14,695 issues across 3,597 files. Cursor had the most issues by far, but also the biggest repos. Bolt looked cleanest by volume but had the smallest codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  What broke most often
&lt;/h2&gt;

&lt;p&gt;Three patterns dominated everything else:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep nesting&lt;/strong&gt;: ~46% of all issues. The most AI-shaped pattern in the dataset. Models keep appending branches inside the existing function because that keeps all the relevant context in one place. Humans usually extract functions earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Console.log pollution&lt;/strong&gt;: hundreds of hits across every tool. No AI coding tool cleans up debug logging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;God functions&lt;/strong&gt;: the largest was 3,579 lines in a Cursor repo (easy-kanban's AppContent). Lovable's biggest was 1,810 lines. Bolt's was 820 lines.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's what nested AI code looks like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newChunkCount&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentProgress&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;99.9&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[PackageQueue] Progress 100%, no new chunks...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reset&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emptyRetryCount&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[PackageQueue] No new chunks. Retry: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emptyRetryCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emptyRetryCount&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxEmptyRetries&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[PackageQueue] PKG installation failed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nested control flow plus debug logging left in production. This showed up everywhere.&lt;/p&gt;
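&lt;p&gt;The fix is usually mechanical: invert each condition into a guard clause and return early. A minimal sketch of the same shape in Python (hypothetical names, not code from any scanned repo):&lt;/p&gt;

```python
# Nested version: each branch wraps the next, so depth grows with every edit.
def drain_nested(new_chunks, queue_size, progress, retries, max_retries):
    if new_chunks == 0 and queue_size > 0:
        if progress >= 99.9:
            return "reset"
        retries += 1
        if retries >= max_retries:
            return "failed"
    return "waiting"

# Flattened version: guard clauses, one exit per condition, no nesting.
def drain_flat(new_chunks, queue_size, progress, retries, max_retries):
    if new_chunks != 0 or queue_size == 0:
        return "waiting"
    if progress >= 99.9:
        return "reset"
    if retries + 1 >= max_retries:
        return "failed"
    return "waiting"
```

&lt;p&gt;Same behavior, but the deepest branch sits one level down instead of three.&lt;/p&gt;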

&lt;h2&gt;
  
  
  Each tool had a distinct pattern
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cursor&lt;/strong&gt; had the highest issue density (6.36/file) and 77 error-level findings. All SQL injection hits came from Cursor repos. It also had the most &lt;code&gt;as any&lt;/code&gt; usage (490 in one repo alone). My read: Cursor users were building bigger, more ambitious apps, and the mess scaled with them. This is inference, not proof. Bigger projects have more surface area.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lovable&lt;/strong&gt; had &lt;code&gt;innerHTML&lt;/code&gt; or &lt;code&gt;dangerouslySetInnerHTML&lt;/code&gt; in 100% of repos (11 out of 11). That's a confirmed pattern at scale. It also produced the only &lt;code&gt;eval&lt;/code&gt;-class calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;code&lt;/span&gt; &lt;span class="na"&gt;dangerouslySetInnerHTML&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;__html&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;highlightedCode&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;consoleProxyScript&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some of these are template-level patterns (syntax highlighting, chart CSS) that aren't exploitable in isolation. But they're the kind of code that drifts toward real XSS if someone later feeds user input into the same path. Worth reviewing, not worth panicking about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bolt&lt;/strong&gt; had the lowest issues per file (1.55) but was the only tool that shipped a hardcoded database credential and an error info leak in the same repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;5.75.154.79&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Jk5h...redacted...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bolt looked cleaner by volume but not necessarily safer. Small repo size helped its totals. The second batch had zero security findings, though, so the first batch's hardcoded credential might be an outlier.&lt;/p&gt;
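&lt;p&gt;Both findings have boring fixes: load secrets from the environment and fail fast when they're missing, and log error detail server-side while returning a generic message. A sketch in Python (the scanned repos are Node, but the pattern translates directly):&lt;/p&gt;

```python
import logging
import os

def db_config():
    # Fail fast if the secret isn't set, instead of hardcoding it.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "user": os.environ.get("DB_USER", "postgres"),
        "password": password,
    }

def handle_error(exc):
    # Log the detail for operators; never echo exc back to the client.
    logging.error("request failed: %s", exc)
    return {"status": 500, "body": {"error": "Internal server error"}}
```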

&lt;h2&gt;
  
  
  What actually mattered
&lt;/h2&gt;

&lt;p&gt;The tools did not fail in the same way. But the overlap mattered more than the differences.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;None of them naturally refactor.&lt;/li&gt;
&lt;li&gt;None of them clean up logs.&lt;/li&gt;
&lt;li&gt;None of them ask "should this really be one 3,500-line function?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scope and review mattered more than tool choice. Small apps were cleaner. Lovable had the single cleanest repo in the entire set (humanise-ai, 0.48 issues/file). Cursor had the dirtiest (easy-kanban, 14.04 issues/file). One of the cleanest Bolt repos was actually a hybrid built with Bolt + Cursor + Cline.&lt;/p&gt;

&lt;p&gt;The bad outcome was never "AI touched the repo." The bad outcome was "AI wrote it, nobody looked at it after."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vibecheck flagged patterns, not confirmed vulnerabilities.&lt;/strong&gt; Some of these findings are real bugs. Some are smells that might never cause a problem. A linter doesn't know intent. It flags what it sees. I doubled the sample from 15 to 31 repos, and the same patterns held.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;If you're shipping AI-generated code, run a check before you push:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @yuvrajangadsingh/vibecheck &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;34 rules. JS/TS and Python. Also runs as a &lt;a href="https://github.com/marketplace/actions/vibecheck-ai-slop" rel="noopener noreferrer"&gt;GitHub Action&lt;/a&gt;, &lt;a href="https://marketplace.visualstudio.com/items?itemName=yuvrajangadsingh.vibecheck-linter" rel="noopener noreferrer"&gt;VS Code extension&lt;/a&gt;, and &lt;a href="https://github.com/yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt; for AI coding agents.&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>I Scanned a 1K-Star Cursor Project. AI Code Doesn't Look Like AI Code Anymore.</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Sun, 29 Mar 2026 14:06:35 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/i-scanned-a-1k-star-cursor-project-ai-code-doesnt-look-like-ai-code-anymore-1lck</link>
      <guid>https://dev.to/yuvrajangadsingh/i-scanned-a-1k-star-cursor-project-ai-code-doesnt-look-like-ai-code-anymore-1lck</guid>
      <description>&lt;p&gt;There's a common belief that AI-generated code is easy to spot. Obvious comments, step-by-step numbered instructions, hedging language like "might need to adjust this later."&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;vibecheck&lt;/a&gt;, a static analysis tool that detects these patterns. I ran it against &lt;a href="https://github.com/ryokun6/ryos" rel="noopener noreferrer"&gt;ryOS&lt;/a&gt;, a 1,100-star web-based macOS clone built entirely with Cursor by Ryo Lu (Head of Design at Cursor). If any project would have AI fingerprints, this one would.&lt;/p&gt;

&lt;p&gt;The results surprised me.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero comment-level AI tells
&lt;/h2&gt;

&lt;p&gt;None. No "// Initialize the state variable" above a useState. No "// Step 1: Fetch the data." No narrator comments, no hedging, no placeholder stubs. The code reads clean line by line.&lt;/p&gt;

&lt;p&gt;AI-generated code has evolved past the obvious tells. The models learned to stop over-explaining. If you're still looking for bad comments as your AI detector, you're looking at last year's problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The smell moved to architecture
&lt;/h2&gt;

&lt;p&gt;vibecheck found &lt;strong&gt;4,523 issues&lt;/strong&gt; across 378 files. Here's where the signal actually is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;God functions.&lt;/strong&gt; &lt;code&gt;MacDock&lt;/code&gt; is a single 2,003-line React component. &lt;code&gt;useIpodLogic&lt;/code&gt; is an 1,891-line hook. &lt;code&gt;useKaraokeLogic&lt;/code&gt; is 1,798 lines. A human writing incrementally would extract sub-hooks, split components, refactor. AI keeps stuffing logic into the same function because it doesn't have the "this is getting too big" instinct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep nesting.&lt;/strong&gt; 13 levels deep in &lt;code&gt;ChatMessages.tsx&lt;/code&gt;. Callback hell meets JSX spaghetti. When you ask AI to "add a conditional render for loading states" three times in a row, each one nests inside the previous one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Swallowed errors.&lt;/strong&gt; 35 empty catch blocks, many in sequence. &lt;code&gt;infiniteMacHandler.ts&lt;/code&gt; has 10+ empty catches in a row (lines 790, 797, 804, 827...). This happens when AI wraps every async call in try/catch but has no error handling strategy.&lt;/p&gt;
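&lt;p&gt;The Python flavor of the same smell is &lt;code&gt;except: pass&lt;/code&gt;. The minimum viable alternative isn't a grand error strategy, just making the failure visible (a generic sketch, not code from ryOS):&lt;/p&gt;

```python
import json
import logging

def load_state_swallowed(raw):
    # AI-typical: any failure silently becomes "no state" and the bug vanishes.
    try:
        return json.loads(raw)
    except Exception:
        pass
    return None

def load_state_visible(raw):
    # Same fallback, but the failure is logged with its cause.
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        logging.warning("state blob is corrupt, starting fresh: %s", exc)
        return None
```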

&lt;p&gt;&lt;strong&gt;Console.log pollution.&lt;/strong&gt; 671 console.log statements in production code. AI adds them for debugging and never removes them because no one asks it to clean up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error info leaks.&lt;/strong&gt; 11 API endpoints that send &lt;code&gt;error.message&lt;/code&gt; directly in HTTP responses. Internal details (stack traces, DB errors) exposed to clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;Code review catches line-level problems. A reviewer reads 50 lines of a function and it looks fine. Clean variable names, proper TypeScript, no weird patterns.&lt;/p&gt;

&lt;p&gt;But zoom out and the function is 2,000 lines long. The reviewer never sees the full picture because the diff only shows the 30 lines that changed. The AI-generated architecture accumulates invisibly.&lt;/p&gt;

&lt;p&gt;This is the new AI code smell: &lt;strong&gt;code that passes review but fails at scale.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What to look for
&lt;/h2&gt;

&lt;p&gt;If you're reviewing AI-assisted code, stop looking for bad comments and start looking for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Function length.&lt;/strong&gt; Anything over 200 lines is suspicious. Over 500 is almost certainly AI-accumulated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nesting depth.&lt;/strong&gt; 5+ levels means the function is doing too many things.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Empty catch blocks.&lt;/strong&gt; AI loves try/catch. It hates error handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Console pollution.&lt;/strong&gt; Count the console.logs. AI never cleans up after itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeated patterns.&lt;/strong&gt; 10 empty catches in a row? That's a loop, not a developer.&lt;/li&gt;
&lt;/ol&gt;
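&lt;p&gt;Most of these checks are cheap enough to approximate in a few lines. A rough heuristic counter in Python (nothing like vibecheck's real rules, just the idea):&lt;/p&gt;

```python
import re

def quick_smells(source):
    """Count a few AI-shaped smells in a JS/TS source string."""
    lines = source.splitlines()
    console_logs = sum(1 for line in lines if "console.log" in line)
    # Approximate nesting depth by running brace balance across lines.
    depth, max_depth = 0, 0
    for line in lines:
        depth += line.count("{") - line.count("}")
        max_depth = max(max_depth, depth)
    empty_catches = len(re.findall(r"catch\s*(\([^)]*\))?\s*\{\s*\}", source))
    return {
        "console_logs": console_logs,
        "max_nesting": max_depth,
        "empty_catches": empty_catches,
    }
```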

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @yuvrajangadsingh/vibecheck ./your-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;vibecheck catches 32 patterns across JS/TS and Python. Works as a CLI, GitHub Action, VS Code extension, and pre-commit hook. All offline, no API calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;npm&lt;/a&gt; | &lt;a href="https://marketplace.visualstudio.com/items?itemName=yuvrajangadsingh.vibecheck-linter" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Your AI tools don't know what your brand looks like</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Thu, 26 Mar 2026 07:49:42 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/your-ai-tools-dont-know-what-your-brand-looks-like-3ajo</link>
      <guid>https://dev.to/yuvrajangadsingh/your-ai-tools-dont-know-what-your-brand-looks-like-3ajo</guid>
      <description>&lt;p&gt;Every AI coding agent generates the same UI. Gray backgrounds, blue buttons, Inter font, 8px border radius. It doesn't matter if you're building for a fintech startup or a surf shop. The output looks identical because the agent has zero context about your brand.&lt;/p&gt;

&lt;p&gt;Google noticed this too. When they redesigned Stitch in March 2026, they introduced a file format called DESIGN.md. It's a markdown file that encodes your design system (colors, typography, spacing, component styles) in a format that LLMs can read. Drop it in your project root and tools like Claude Code, Cursor, Gemini CLI, and Stitch itself will use it to generate UI that actually matches your brand.&lt;/p&gt;

&lt;p&gt;The problem is, nobody wants to write one from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a DESIGN.md looks like
&lt;/h2&gt;

&lt;p&gt;It has 5 sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visual Theme &amp;amp; Atmosphere&lt;/strong&gt; - mood, shape language, depth&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Color Palette &amp;amp; Roles&lt;/strong&gt; - every color with a semantic role&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Typography Rules&lt;/strong&gt; - font families, size scale, weights&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component Stylings&lt;/strong&gt; - buttons, cards, inputs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layout Principles&lt;/strong&gt; - spacing scale, base grid unit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's a snippet from Stripe's:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## 2. Color Palette &amp;amp; Roles
- **White** (`#FFFFFF`) — Page background
- **Dark Blue** (`#533AFD`) — Accent background
- **Cyan** (`#00D66F`) — Accent background
- **Dark Muted Blue** (`#64748D`) — Secondary text

## 3. Typography Rules
**Primary font:** sohne-var
- Headings: 26px, 32px, 48px, 56px
- Body / UI: 14px, 16px, 18px, 22px
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Writing this by hand means opening DevTools, inspecting every element, noting down colors and fonts, figuring out the spacing scale. For a complex site that's hours of work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extracting it automatically
&lt;/h2&gt;

&lt;p&gt;I built a CLI called brandmd that does this in one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx brandmd https://stripe.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It launches a headless browser, renders the page, scrolls through it to trigger lazy-loaded content, dismisses cookie banners, then extracts computed styles from every visible element. Colors get clustered (so you don't end up with 50 shades of the same gray), fonts and spacing values get grouped into scales, and everything gets templated into the DESIGN.md format.&lt;/p&gt;
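&lt;p&gt;The clustering step is what keeps the palette usable. The rough idea (my sketch, not brandmd's actual algorithm) is to keep a color only if its RGB distance to every already-kept color clears a threshold:&lt;/p&gt;

```python
def hex_to_rgb(color):
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

def cluster_colors(hex_colors, threshold=30.0):
    """Greedy dedup: keep a color only if it's far from every kept one."""
    kept = []
    for color in hex_colors:
        r, g, b = hex_to_rgb(color)
        def dist(other):
            r2, g2, b2 = hex_to_rgb(other)
            return ((r - r2) ** 2 + (g - g2) ** 2 + (b - b2) ** 2) ** 0.5
        if all(dist(k) > threshold for k in kept):
            kept.append(color)
    return kept
```

&lt;p&gt;Near-duplicate grays collapse into one entry instead of fifty.&lt;/p&gt;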

&lt;p&gt;No LLM calls, no API keys. Runs entirely on your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it catches
&lt;/h2&gt;

&lt;p&gt;I ran it on a few sites to test:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linear&lt;/strong&gt; - picked up Inter Variable as the primary font, Berkeley Mono as secondary, the indigo accent (#5E6AD2), and identified a 4px base grid from the spacing values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stripe&lt;/strong&gt; - found sohne-var (their custom font), the purple (#533AFD) and green (#00D66F) accent colors, and 4 distinct shadow styles for depth.&lt;/p&gt;

&lt;p&gt;The output isn't perfect. It can't read your Figma tokens or understand why you chose a specific color for error states. But it gives you a solid starting point that's 90% there, and you can tweak the remaining 10% in a few minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Extract from your current site&lt;/span&gt;
npx brandmd https://yoursite.com &lt;span class="nt"&gt;-o&lt;/span&gt; DESIGN.md

&lt;span class="c"&gt;# Drop it in your project root&lt;/span&gt;
&lt;span class="nb"&gt;mv &lt;/span&gt;DESIGN.md ./

&lt;span class="c"&gt;# Now your AI tools generate on-brand UI&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code, Cursor, and Gemini CLI all read markdown files from your project root automatically. No configuration needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx brandmd https://linear.app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source: &lt;a href="https://github.com/yuvrajangadsingh/brandmd" rel="noopener noreferrer"&gt;github.com/yuvrajangadsingh/brandmd&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Works on any public URL. If you have a site and you use AI coding tools, try it on your own URL. You'll probably be surprised at how much design information is hiding in your computed styles.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>design</category>
      <category>cli</category>
    </item>
    <item>
      <title>Embeddings shouldn't need a notebook</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Wed, 25 Mar 2026 14:14:09 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/embeddings-shouldnt-need-a-notebook-4hjb</link>
      <guid>https://dev.to/yuvrajangadsingh/embeddings-shouldnt-need-a-notebook-4hjb</guid>
      <description>&lt;p&gt;I kept running into the same annoyance whenever I needed embeddings. The retrieval part of a RAG pipeline was hard enough already. Generating vectors should've been the easy part.&lt;/p&gt;

&lt;p&gt;But every time I needed to embed something, the workflow looked like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a notebook or write a throwaway script&lt;/li&gt;
&lt;li&gt;Import the SDK, set up the client&lt;/li&gt;
&lt;li&gt;Figure out the right model name (was it &lt;code&gt;text-embedding-004&lt;/code&gt; or &lt;code&gt;text-embedding-3-small&lt;/code&gt;?)&lt;/li&gt;
&lt;li&gt;Write the call, handle the response format&lt;/li&gt;
&lt;li&gt;Copy the vector out of the output&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For text that's annoying. For images or audio it's worse. Different SDKs, different input formats, different response shapes.&lt;/p&gt;

&lt;p&gt;I kept thinking: I can &lt;code&gt;curl&lt;/code&gt; an API in seconds. I can &lt;code&gt;jq&lt;/code&gt; a JSON response without writing a script. Why can't I just embed something from the terminal?&lt;/p&gt;

&lt;h2&gt;
  
  
  The tool I wanted
&lt;/h2&gt;

&lt;p&gt;Something like httpie but for embeddings. Type a command, get a vector back.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vemb text &lt;span class="s2"&gt;"hello world"&lt;/span&gt;
&lt;span class="c"&gt;# {"model": "gemini-embedding-2-preview", "dimensions": 3072, "values": [0.0123, -0.0456, ...]}&lt;/span&gt;

vemb text &lt;span class="s2"&gt;"hello world"&lt;/span&gt; &lt;span class="nt"&gt;--compact&lt;/span&gt;
&lt;span class="c"&gt;# [0.0123, -0.0456, 0.0789, ...]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Embed an image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vemb image photo.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Embed a PDF:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vemb pdf report.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare two files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vemb similar photo1.jpg photo2.jpg
&lt;span class="c"&gt;# 0.8734&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No notebooks, no scripts, no boilerplate. Just the vector.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Gemini Embedding 2
&lt;/h2&gt;

&lt;p&gt;I looked at OpenAI's embedding models first. Their embeddings endpoint is text-only. If you want to embed images, you're stitching together separate models and separate vector spaces. No clean way to compare text against images with a single embedding call.&lt;/p&gt;

&lt;p&gt;Google released &lt;a href="https://blog.google/technology/google-deepmind/gemini-embedding-model/" rel="noopener noreferrer"&gt;Gemini Embedding 2&lt;/a&gt; (public preview, March 2026). One model that handles text, images, audio, video, and PDFs natively. Same vector space for everything. You can embed a photo and a text description and compare them directly with cosine similarity.&lt;/p&gt;

&lt;p&gt;That's what made the CLI possible. One model, one API, all input types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building it
&lt;/h2&gt;

&lt;p&gt;The whole thing is ~400 lines of Python. Two files: &lt;code&gt;embed.py&lt;/code&gt; (core logic) and &lt;code&gt;cli.py&lt;/code&gt; (Click commands).&lt;/p&gt;

&lt;p&gt;The interesting parts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto-detection&lt;/strong&gt;: &lt;code&gt;vemb embed&lt;/code&gt; guesses the file type from the extension. JPEGs, PNGs, MP3s, WAVs, MP4s, PDFs all work with the same command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch mode&lt;/strong&gt;: &lt;code&gt;vemb embed *.jpg --jsonl&lt;/code&gt; embeds every file and outputs one JSON object per line.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Directory search&lt;/strong&gt;: &lt;code&gt;vemb search ./photos/ "dark moody sunset"&lt;/code&gt; embeds the query, embeds every file in the directory (with caching), and ranks by cosine similarity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# search a folder of images by text description&lt;/span&gt;
vemb search ./photos/ &lt;span class="s2"&gt;"dark moody sunset"&lt;/span&gt; &lt;span class="nt"&gt;--top&lt;/span&gt; 5

0.8234    ./photos/sunset-beach.png
0.7891    ./photos/evening-skyline.png
0.7654    ./photos/golden-hour.png
0.6123    ./photos/cloudy-morning.png
0.5987    ./photos/overcast-street.png
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
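&lt;p&gt;Under the hood that ranking is plain cosine similarity between the query vector and each cached file vector. It fits in a few stdlib lines (a sketch, independent of vemb):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # Dot product over the product of magnitudes; 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank(query_vec, file_vecs, top=5):
    """file_vecs: mapping of path to embedding vector. Returns best matches."""
    scored = sorted(
        file_vecs.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return scored[:top]
```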



&lt;h2&gt;
  
  
  Example use cases
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Retrieval experiments&lt;/strong&gt;: embed a few chunks, check similarity scores, tune the chunking. No notebook needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image search&lt;/strong&gt;: I keep a folder of reference mockups. &lt;code&gt;vemb search ./mockups/ "login screen"&lt;/code&gt; finds the right ones instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checking if two files are semantically close&lt;/strong&gt;: &lt;code&gt;vemb similar draft-v1.pdf draft-v2.pdf&lt;/code&gt; tells me how much the content actually changed between versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-modal search&lt;/strong&gt;: embed a text query against a folder of images and get ranked results. One model, one vector space means text and images are directly comparable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pipx &lt;span class="nb"&gt;install &lt;/span&gt;vemb
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_key   &lt;span class="c"&gt;# free at aistudio.google.com/apikey&lt;/span&gt;
vemb text &lt;span class="s2"&gt;"hello world"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source and docs: &lt;a href="https://github.com/yuvrajangadsingh/vemb" rel="noopener noreferrer"&gt;github.com/yuvrajangadsingh/vemb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The API key is free, and the model runs on the free tier. If you're building anything with embeddings and you're tired of opening notebooks for a one-line operation, try it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>cli</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Your AI Wrote the Code. Who's Checking It?</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Sat, 21 Mar 2026 11:01:49 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/your-ai-wrote-the-code-whos-checking-it-2d2e</link>
      <guid>https://dev.to/yuvrajangadsingh/your-ai-wrote-the-code-whos-checking-it-2d2e</guid>
      <description>&lt;p&gt;I review a lot of PRs at work. Over the last year, I started noticing patterns in AI-generated code that kept showing up. Same five or six things, every time.&lt;/p&gt;

&lt;p&gt;Empty catch blocks. &lt;code&gt;as any&lt;/code&gt; sprinkled everywhere. Comments that just restate what the code does. Hardcoded API keys. &lt;code&gt;except: pass&lt;/code&gt; in Python. The code works, it passes tests, but it's the kind of stuff you'd flag in a review and ask someone to fix.&lt;/p&gt;

&lt;p&gt;The numbers back this up. CodeRabbit found AI-generated PRs have &lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;1.7x more issues&lt;/a&gt; than human PRs. Veracode says &lt;a href="https://www.helpnetsecurity.com/2025/08/07/create-ai-code-security-risks/" rel="noopener noreferrer"&gt;45% of AI code samples&lt;/a&gt; contain security vulnerabilities.&lt;/p&gt;

&lt;p&gt;ESLint catches syntax issues. But nobody's catching the behavioral patterns that AI tools leave behind. So I built one.&lt;/p&gt;

&lt;h2&gt;
  
  
  vibecheck
&lt;/h2&gt;

&lt;p&gt;24 rules across JS/TS and Python. Zero config. Runs offline. Regex-based, so it's fast.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @yuvrajangadsingh/vibecheck &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  src/api/routes.ts
    12:5    error  no-hardcoded-secrets   Hardcoded secret detected
    45:3    error  no-empty-catch         Empty catch block swallows errors
    89:1    warn   no-console-pollution   console.log left in production code

  src/utils/db.ts
    34:5    error  no-sql-concat          SQL query built with string concatenation

  4 problems (3 errors, 1 warning)
  2 files with issues out of 47 scanned (0.8s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No API keys, no cloud, no LLM calls. It's regex rules that match the patterns AI tools tend to produce.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it actually catches
&lt;/h2&gt;

&lt;p&gt;Here's the stuff I kept flagging in code reviews:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The silent failure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your API call fails and nobody ever knows. vibecheck flags &lt;code&gt;no-bare-except&lt;/code&gt; and &lt;code&gt;no-pass-except&lt;/code&gt;.&lt;/p&gt;
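A check like this doesn't need an AST. A minimal regex sketch in the spirit of `no-bare-except` (the pattern and helper name are illustrative, not vibecheck's actual source):

```python
import re

# Illustrative: match a lone `except:` with nothing after the colon.
BARE_EXCEPT = re.compile(r"^\s*except\s*:\s*$", re.MULTILINE)

def find_bare_excepts(source: str) -> list[int]:
    """Return 1-based line numbers of bare `except:` clauses."""
    return [source[: m.start()].count("\n") + 1
            for m in BARE_EXCEPT.finditer(source)]

snippet = "try:\n    response = api.fetch(user_id)\nexcept:\n    pass\n"
print(find_bare_excepts(snippet))  # → [3]
```

Note the pattern skips `except ValueError:` entirely, which is exactly the behavior you want from this rule.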

&lt;p&gt;&lt;strong&gt;The "I'll type it later":&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AI tools love &lt;code&gt;as any&lt;/code&gt;. It shuts up the type checker but defeats the entire point of TypeScript. vibecheck flags &lt;code&gt;no-ts-any&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The useless comment:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// initialize the counter&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;counter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your linter doesn't care about this. vibecheck does. &lt;code&gt;no-obvious-comments&lt;/code&gt; catches comments that just repeat what the code already says.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hardcoded key:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sk-proj-abc123def456&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one's obvious but it keeps happening. &lt;code&gt;no-hardcoded-secrets&lt;/code&gt; matches common API key patterns.&lt;/p&gt;
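For a sense of how a rule like this works, here's an illustrative pattern in the spirit of `no-hardcoded-secrets`. The real rule covers far more providers; this toy version only knows two common key shapes and will miss plenty:

```python
import re

# Toy secret matcher: quoted OpenAI-style (sk-...) or AWS-style (AKIA...) keys.
SECRET = re.compile(r"""["'](sk-[A-Za-z0-9-]{10,}|AKIA[0-9A-Z]{16})["']""")

assert SECRET.search('const API_KEY = "sk-proj-abc123def456";')
assert SECRET.search('aws_key = "AKIAABCDEFGHIJKLMNOP"')
assert not SECRET.search('const url = "https://example.com";')
```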

&lt;h2&gt;
  
  
  Diff mode
&lt;/h2&gt;

&lt;p&gt;This is the part I use most. Instead of scanning your entire codebase, scan only the lines you just changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vibecheck &lt;span class="nt"&gt;--staged&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop it in a pre-commit hook and it only checks what you're about to commit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .git/hooks/pre-commit&lt;/span&gt;
npx @yuvrajangadsingh/vibecheck &lt;span class="nt"&gt;--staged&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In CI, it runs on PR diffs so you're not drowning in warnings from old code.&lt;/p&gt;
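The mechanics behind diff mode are simple: parse the staged diff's hunk headers and only report findings on those lines. A hypothetical sketch of that filtering step (the helper is mine, not vibecheck's actual code):

```python
import re
import subprocess

# Unified-diff hunk header, e.g. "@@ -0,0 +1,3 @@" — capture the new-file range.
HUNK = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", re.MULTILINE)

def staged_lines(path: str) -> set[int]:
    """Return the 1-based line numbers of `path` that are staged as added/changed."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0", "--", path],
        capture_output=True, text=True,
    ).stdout
    lines: set[int] = set()
    for m in HUNK.finditer(diff):
        start, count = int(m.group(1)), int(m.group(2) or 1)
        lines.update(range(start, start + count))
    return lines
```

Any finding whose line number isn't in that set gets dropped, which is why old code stays quiet.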

&lt;h2&gt;
  
  
  "Isn't this just ESLint?"
&lt;/h2&gt;

&lt;p&gt;No. ESLint catches syntax and style. vibecheck catches the patterns that come from how AI tools generate code. Your linter won't flag a catch block that only does &lt;code&gt;console.error(err)&lt;/code&gt; without rethrowing. It won't flag &lt;code&gt;# type: ignore&lt;/code&gt; without a specific error code. It won't flag a function that's 120 lines long because the AI didn't know when to stop.&lt;/p&gt;

&lt;p&gt;They're complementary. Run both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# no install needed&lt;/span&gt;
npx @yuvrajangadsingh/vibecheck &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# or install globally&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @yuvrajangadsingh/vibecheck

&lt;span class="c"&gt;# standalone binary (no Node required)&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://github.com/yuvrajangadsingh/vibecheck/releases/latest/download/vibecheck-darwin-arm64 &lt;span class="nt"&gt;-o&lt;/span&gt; vibecheck
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x vibecheck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub: &lt;a href="https://github.com/yuvrajangadsingh/vibecheck" rel="noopener noreferrer"&gt;github.com/yuvrajangadsingh/vibecheck&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open to feedback on what rules to add next. If you keep flagging the same thing in AI-generated PRs, I probably want to hear about it.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your GitHub Profile Is Lying About You</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Tue, 10 Mar 2026 12:30:00 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/your-github-profile-is-lying-about-you-34a9</link>
      <guid>https://dev.to/yuvrajangadsingh/your-github-profile-is-lying-about-you-34a9</guid>
      <description>&lt;p&gt;I shipped 888 commits last year. My GitHub profile showed 0.&lt;/p&gt;

&lt;p&gt;Not because I wasn't coding. I was writing code every single day, reviewing PRs, closing issues, shipping features. But all of it went to private repos. And GitHub doesn't count private repo activity on your public profile.&lt;/p&gt;

&lt;p&gt;For almost 4 years, my contribution graph was a graveyard of gray squares.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;Recruiters spend about 6 seconds on a GitHub profile. Gray squares = "doesn't code." Green squares = "active developer." It's shallow, but it's real. I've had recruiters literally ask me why my profile looked inactive.&lt;/p&gt;

&lt;p&gt;GitHub does have a "show private contributions" toggle in settings. But all it does is show anonymous green squares. No repo names, no PRs, no context. And if your company uses a separate org account (like mine does), those contributions don't show up on your personal profile at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;I wrote a bash CLI called &lt;a href="https://github.com/yuvrajangadsingh/greens" rel="noopener noreferrer"&gt;greens&lt;/a&gt; that mirrors your private work activity to a public repo without exposing any code.&lt;/p&gt;

&lt;p&gt;Here's what it does:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scans your work repos locally (never modifies them)&lt;/li&gt;
&lt;li&gt;Extracts commit timestamps for your email&lt;/li&gt;
&lt;li&gt;Creates empty commits with matching timestamps in a public mirror repo&lt;/li&gt;
&lt;li&gt;Pushes to GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. No code leaves your machine. The mirror repo contains empty commits with only timestamps.&lt;/p&gt;

&lt;p&gt;If you have &lt;code&gt;gh&lt;/code&gt; CLI set up, it also picks up PRs, reviews, and issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;yuvrajangadsingh/greens/greens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;greens&lt;/code&gt; and the setup wizard walks you through configuration. Takes about 2 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works under the hood
&lt;/h2&gt;

&lt;p&gt;The key insight is that GitHub's contribution graph only cares about commit timestamps, not content. So greens creates a bare cache of each repo (no working tree, no blobs), extracts dates where your email authored a commit, and creates empty commits with those exact timestamps in the mirror.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Work Repos → Bare Cache → Public Mirror
  (untouched)     (no blobs)    (empty commits with your dates)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your source repos can be on GitHub, GitLab, Bitbucket, or self-hosted. greens scans the local clone, not the remote.&lt;/p&gt;
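The replay step can be sketched with plain git. The repo path, email, and mirror location below are assumptions for illustration, not greens' actual defaults:

```shell
# Bare partial clone: no working tree, no blobs, just commit metadata.
git clone --bare --filter=blob:none git@work:org/app.git /tmp/app-cache.git

# Replay each of your commit timestamps as an empty, backdated commit.
git -C /tmp/app-cache.git log --all --author="you@work.com" --format="%aI" |
while read -r ts; do
  GIT_AUTHOR_DATE="$ts" GIT_COMMITTER_DATE="$ts" \
    git -C "$HOME/mirror" commit --allow-empty -m "work activity"
done

git -C "$HOME/mirror" push origin main
```

The contribution graph reads the author date, so the backdated empty commits land on the right squares.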

&lt;h2&gt;
  
  
  My results
&lt;/h2&gt;

&lt;p&gt;After setting it up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;11 repos tracked&lt;/li&gt;
&lt;li&gt;888 commits mirrored&lt;/li&gt;
&lt;li&gt;158 active days visible on my graph&lt;/li&gt;
&lt;li&gt;Auto-syncs daily via launchd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before: dead profile that made me look like I stopped coding in 2022.&lt;br&gt;
After: an accurate picture of my actual work.&lt;/p&gt;
&lt;h2&gt;
  
  
  "Isn't this gaming the system?"
&lt;/h2&gt;

&lt;p&gt;Fair question. I'm not faking open source contributions. The mirror repo clearly says what it is. I'm just making private work volume visible on a platform that ignores it by default.&lt;/p&gt;

&lt;p&gt;If your company has a policy against this, check before using it. But most devs I've talked to have the same frustration: their profile doesn't reflect their actual output.&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up automation
&lt;/h2&gt;

&lt;p&gt;Once greens is working, automate it. On macOS, a launchd plist that runs &lt;code&gt;greens&lt;/code&gt; daily at midnight keeps everything in sync without thinking about it. On Linux, a cron job does the same thing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# cron example&lt;/span&gt;
0 0 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/local/bin/greens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;yuvrajangadsingh/greens/greens
greens
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or clone manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/yuvrajangadsingh/greens.git
&lt;span class="nb"&gt;cd &lt;/span&gt;greens &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash setup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub repo: &lt;a href="https://github.com/yuvrajangadsingh/greens" rel="noopener noreferrer"&gt;github.com/yuvrajangadsingh/greens&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Star it if you find it useful. Issues and PRs welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Yuvraj, building AI systems at &lt;a href="https://www.meetaugust.ai" rel="noopener noreferrer"&gt;August&lt;/a&gt;. I write about dev tools and open source when I'm not debugging LLM pipelines at 2 AM.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>opensource</category>
      <category>cli</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your GitHub Graph is Lying About How Much You Work. Here's How I Fixed Mine.</title>
      <dc:creator>Yuvraj Angad Singh</dc:creator>
      <pubDate>Fri, 06 Feb 2026 21:49:00 +0000</pubDate>
      <link>https://dev.to/yuvrajangadsingh/your-github-graph-is-lying-about-how-much-you-work-heres-how-i-fixed-mine-3hn1</link>
      <guid>https://dev.to/yuvrajangadsingh/your-github-graph-is-lying-about-how-much-you-work-heres-how-i-fixed-mine-3hn1</guid>
      <description>&lt;p&gt;I work 50+ hours a week writing code. My GitHub profile says I'm barely active.&lt;/p&gt;

&lt;p&gt;Sound familiar? If you work on private repos — enterprise GitHub, Bitbucket, GitLab, self-hosted — none of that shows on your contribution graph. Recruiters see empty squares. Your profile looks dead.&lt;/p&gt;

&lt;p&gt;I got tired of it and built &lt;strong&gt;contrib-mirror&lt;/strong&gt; — a CLI that mirrors your real commit timestamps to a public repo. No code exposed. No fake commits. Just your actual work activity, finally visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap yuvrajangadsingh/contrib-mirror
brew &lt;span class="nb"&gt;install &lt;/span&gt;contrib-mirror
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or use the one-liner:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/yuvrajangadsingh/private-work-contributions-mirror/main/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The setup wizard auto-detects your repos, emails, and org:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;contrib-mirror &lt;span class="nt"&gt;--setup&lt;/span&gt;   &lt;span class="c"&gt;# configure once&lt;/span&gt;
contrib-mirror           &lt;span class="c"&gt;# sync anytime&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Discovers repos in your work directory&lt;/li&gt;
&lt;li&gt;Creates bare caches (no source code touched)&lt;/li&gt;
&lt;li&gt;Extracts commit timestamps matching your email&lt;/li&gt;
&lt;li&gt;Optionally pulls PR/review/issue timestamps via GitHub API&lt;/li&gt;
&lt;li&gt;Creates empty commits with matching dates in a public mirror repo&lt;/li&gt;
&lt;li&gt;Pushes to GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Zero code is ever exposed.&lt;/strong&gt; The mirror contains only empty commits with timestamps.&lt;/p&gt;
&lt;h2&gt;
  
  
  It Also Tracks PRs and Reviews
&lt;/h2&gt;

&lt;p&gt;GitHub counts more than commits — PRs opened, reviews submitted, and issues created all count toward your graph. With &lt;code&gt;gh&lt;/code&gt; CLI authenticated, contrib-mirror picks those up too.&lt;/p&gt;
&lt;h2&gt;
  
  
  Set It and Forget It
&lt;/h2&gt;

&lt;p&gt;Add a cron job or launchd agent and it syncs daily at midnight. Your graph stays green without you thinking about it.&lt;/p&gt;

&lt;p&gt;The setup wizard handles scheduling too — just pick launchd or cron when prompted.&lt;/p&gt;



&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/yuvrajangadsingh" rel="noopener noreferrer"&gt;
        yuvrajangadsingh
      &lt;/a&gt; / &lt;a href="https://github.com/yuvrajangadsingh/greens" rel="noopener noreferrer"&gt;
        greens
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Your work is real. Your contribution graph should show it.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;greens&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Your work is real. Your contribution graph should show it.&lt;/p&gt;

&lt;p&gt;If you commit to private/org repos all day but your GitHub profile looks empty, greens fixes that. It mirrors commit timestamps (and optionally PRs, reviews, issues) to a public repo without exposing any code.&lt;/p&gt;

&lt;p&gt;
  &lt;a rel="noopener noreferrer" href="https://github.com/yuvrajangadsingh/greens/assets/demo.svg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fyuvrajangadsingh%2Fgreens%2Fassets%2Fdemo.svg" alt="greens demo" width="600"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Install&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;brew install yuvrajangadsingh/greens/greens&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then just run &lt;code&gt;greens&lt;/code&gt;. Setup wizard runs on first use.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;Manual install (without Homebrew)&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/yuvrajangadsingh/greens.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; greens
bash setup.sh&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it does&lt;/h2&gt;
&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Scans your work repos (never modifies them)&lt;/li&gt;
&lt;li&gt;Extracts commit timestamps for your email(s) across all branches&lt;/li&gt;
&lt;li&gt;Optionally fetches PR/review/issue timestamps via GitHub API&lt;/li&gt;
&lt;li&gt;Creates empty commits with matching timestamps in a mirror repo&lt;/li&gt;
&lt;li&gt;Pushes to your public mirror&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No code is exposed. The mirror contains empty commits with only timestamps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Works with any git remote.&lt;/strong&gt; Your source repos can be on GitHub, GitLab, Bitbucket, or self-hosted. greens scans the local clone, not the remote. The mirror…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/yuvrajangadsingh/greens" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;





&lt;p&gt;Give it a star if this is useful to you.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>cli</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
