<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sabahattin Kalkan</title>
    <description>The latest articles on DEV Community by Sabahattin Kalkan (@sabahattink).</description>
    <link>https://dev.to/sabahattink</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3826551%2F6103b15a-e64a-4726-a44c-39f049f899a5.jpeg</url>
      <title>DEV Community: Sabahattin Kalkan</title>
      <link>https://dev.to/sabahattink</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sabahattink"/>
    <language>en</language>
    <item>
      <title>I built a CLI that diagnoses your code before you ship</title>
      <dc:creator>Sabahattin Kalkan</dc:creator>
      <pubDate>Sat, 28 Mar 2026 10:45:57 +0000</pubDate>
      <link>https://dev.to/sabahattink/title-i-built-a-cli-that-diagnoses-your-code-before-you-ship-tags-nestjs-typescript-opensource-2208</link>
      <guid>https://dev.to/sabahattink/title-i-built-a-cli-that-diagnoses-your-code-before-you-ship-tags-nestjs-typescript-opensource-2208</guid>
      <description>&lt;p&gt;I built a CLI that diagnoses your code before you ship&lt;br&gt;
Every NestJS project I've worked on had the same problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POST endpoints without auth guards&lt;/li&gt;
&lt;li&gt;Hardcoded API keys in source code&lt;/li&gt;
&lt;li&gt;&lt;code&gt;.env&lt;/code&gt; not in &lt;code&gt;.gitignore&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Zero tests&lt;/li&gt;
&lt;li&gt;No Swagger documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was tired of catching these manually in code reviews. So I built codediag.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx codediag scan .
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;One command. It auto-detects your stack and runs 5 analyzers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  codediag — Diagnostic Report

  Project:  my-nestjs-app
  Stack:    nestjs + typescript + prisma
  Score:    B+ (87/100)

  API Health        ███████████████░░░░░  78
  Security          ██████████████████░░  92
  Dependencies      ██████████████████░░  91
  Testing           ████████████████░░░░  82
  Structure         █████████████████░░░  88
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2&gt;The 5 Analyzers&lt;/h2&gt;

&lt;h3&gt;1. API Health (NestJS)&lt;/h3&gt;

&lt;p&gt;This is the differentiator. codediag uses ts-morph to do real AST analysis of your NestJS decorators, not regex pattern matching. It discovers every endpoint from its &lt;code&gt;@Get()&lt;/code&gt;, &lt;code&gt;@Post()&lt;/code&gt;, &lt;code&gt;@Put()&lt;/code&gt;, &lt;code&gt;@Delete()&lt;/code&gt;, and &lt;code&gt;@Patch()&lt;/code&gt; decorators and checks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Auth guards:&lt;/strong&gt; Does this endpoint have &lt;code&gt;@UseGuards()&lt;/code&gt;? Especially important for mutating endpoints.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DTO validation:&lt;/strong&gt; Is the &lt;code&gt;@Body()&lt;/code&gt; parameter typed with a DTO class?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Swagger docs:&lt;/strong&gt; Are &lt;code&gt;@ApiOperation()&lt;/code&gt; and &lt;code&gt;@ApiResponse()&lt;/code&gt; present?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Return types:&lt;/strong&gt; Is there an explicit return type annotation?&lt;/li&gt;
&lt;/ul&gt;
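&lt;p&gt;To make those checks concrete, here's a minimal sketch of the rules in plain TypeScript. This is my own illustration rather than codediag's actual source: the real analyzer derives these facts from the ts-morph AST, and the guard and DTO rules below only fire for mutating methods.&lt;/p&gt;

```typescript
// Hypothetical summary of what AST analysis found on one route handler.
type Endpoint = {
  method: string;              // "GET", "POST", "PUT", "PATCH", "DELETE"
  hasUseGuards: boolean;       // is @UseGuards() present?
  bodyDtoClass: string | null; // DTO class on @Body(), if any
  hasApiDocs: boolean;         // are @ApiOperation()/@ApiResponse() present?
  hasReturnType: boolean;      // explicit return type annotation?
};

// Report which of the four checks fail for a single endpoint.
function endpointFindings(ep: Endpoint): string[] {
  const findings: string[] = [];
  const mutating = ["POST", "PUT", "PATCH", "DELETE"].includes(ep.method);
  if (mutating) {
    if (!ep.hasUseGuards) findings.push("mutating endpoint without @UseGuards()");
    if (ep.bodyDtoClass === null) findings.push("@Body() not typed with a DTO class");
  }
  if (!ep.hasApiDocs) findings.push("missing @ApiOperation()/@ApiResponse()");
  if (!ep.hasReturnType) findings.push("no explicit return type");
  return findings;
}
```

&lt;p&gt;A guarded, documented endpoint produces an empty findings list; an unguarded POST with an untyped body produces two findings.&lt;/p&gt;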

&lt;h3&gt;2. Security&lt;/h3&gt;

&lt;p&gt;The stuff that ends up on HackerNews for the wrong reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardcoded secrets (API keys, Stripe keys, AWS keys, GitHub tokens)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;.env&lt;/code&gt; listed in &lt;code&gt;.gitignore&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Helmet middleware for HTTP security headers&lt;/li&gt;
&lt;li&gt;CORS wildcard (&lt;code&gt;origin: '*'&lt;/code&gt;) detection&lt;/li&gt;
&lt;li&gt;Rate limiting package installed&lt;/li&gt;
&lt;li&gt;Password hashing library present&lt;/li&gt;
&lt;/ul&gt;
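&lt;p&gt;The hardcoded-secret check boils down to pattern matching on well-known key shapes. A simplified sketch (the prefixes below are the real ones, but codediag's actual rule set is more extensive than this):&lt;/p&gt;

```typescript
// A few well-known secret formats, scanned line by line.
const SECRET_PATTERNS = [
  { name: "Stripe secret key", re: /sk_live_[0-9a-zA-Z]{10,}/ },
  { name: "AWS access key ID", re: /AKIA[0-9A-Z]{16}/ },
  { name: "GitHub token", re: /ghp_[0-9a-zA-Z]{36}/ },
];

// Return one finding per matching line, with a 1-based line number.
function scanForSecrets(source: string): string[] {
  const hits: string[] = [];
  source.split("\n").forEach((line, i) => {
    for (const p of SECRET_PATTERNS) {
      if (p.re.test(line)) hits.push("line " + (i + 1) + ": possible " + p.name);
    }
  });
  return hits;
}
```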

&lt;h3&gt;3. Dependencies&lt;/h3&gt;

&lt;p&gt;Your &lt;code&gt;node_modules&lt;/code&gt; is a supply chain. codediag treats it like one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;npm audit&lt;/code&gt; vulnerabilities&lt;/li&gt;
&lt;li&gt;Lock file existence&lt;/li&gt;
&lt;li&gt;Deprecated packages&lt;/li&gt;
&lt;li&gt;Engine specification&lt;/li&gt;
&lt;li&gt;Essential scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;4. Testing&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Test file existence and count&lt;/li&gt;
&lt;li&gt;Framework detection (Jest, Vitest, Mocha, Ava)&lt;/li&gt;
&lt;li&gt;Test-to-source file ratio&lt;/li&gt;
&lt;li&gt;E2E test directory&lt;/li&gt;
&lt;li&gt;Coverage threshold configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;5. Structure&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;README quality&lt;/li&gt;
&lt;li&gt;Linter configuration (ESLint or Biome)&lt;/li&gt;
&lt;li&gt;Formatter configuration (Prettier)&lt;/li&gt;
&lt;li&gt;TypeScript strict mode&lt;/li&gt;
&lt;li&gt;NestJS module organization&lt;/li&gt;
&lt;li&gt;&lt;code&gt;.env.example&lt;/code&gt; presence&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Scoring&lt;/h2&gt;

&lt;p&gt;Each analyzer scores 0-100. The total is a weighted average:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Analyzer&lt;/th&gt;
&lt;th&gt;Weight&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;API Health&lt;/td&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;30%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dependencies&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Testing&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structure&lt;/td&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Security gets the highest weight because shipping vulnerable code is the worst bug.&lt;/p&gt;

&lt;h2&gt;CI/CD&lt;/h2&gt;

&lt;p&gt;One line in your GitHub Actions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- run: npx codediag scan . --ci --threshold 80
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Exits with code 1 if the score drops below your threshold.&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Next.js analyzer&lt;/li&gt;
&lt;li&gt;Express route analyzer&lt;/li&gt;
&lt;li&gt;Web dashboard with trend tracking&lt;/li&gt;
&lt;li&gt;AI-powered fix suggestions&lt;/li&gt;
&lt;li&gt;VS Code extension&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx codediag scan .
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Zero config. MIT licensed. 33KB bundled.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/scuton-technology/codediag" rel="noopener noreferrer"&gt;https://github.com/scuton-technology/codediag&lt;/a&gt;&lt;br&gt;
npm: &lt;a href="https://www.npmjs.com/package/codediag" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/codediag&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear what checks you'd want added. Drop a comment or open an issue!&lt;/p&gt;

</description>
      <category>nestjs</category>
      <category>typescript</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Stop switching to GitHub.com for standup — I built a CLI for that</title>
      <dc:creator>Sabahattin Kalkan</dc:creator>
      <pubDate>Tue, 17 Mar 2026 09:30:29 +0000</pubDate>
      <link>https://dev.to/sabahattink/stop-switching-to-githubcom-for-standup-i-built-a-cli-for-that-3llk</link>
      <guid>https://dev.to/sabahattink/stop-switching-to-githubcom-for-standup-i-built-a-cli-for-that-3llk</guid>
      <description>&lt;p&gt;Every morning it's the same routine.&lt;/p&gt;

&lt;p&gt;Open GitHub. Filter by author. Scroll through commits. Switch repos. Try&lt;br&gt;
to remember what you did yesterday. Piece together a standup from five&lt;br&gt;
different tabs.&lt;/p&gt;

&lt;p&gt;Or worse — you're juggling three different repos, two orgs, and a&lt;br&gt;
contractor's fork, and the GitHub web UI has no idea you want to see all&lt;br&gt;
of it at once.&lt;/p&gt;

&lt;p&gt;I built &lt;strong&gt;gpulse&lt;/strong&gt; to fix this.&lt;/p&gt;
&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;gpulse is a CLI toolkit that brings the GitHub insights you actually need&lt;br&gt;
into your terminal. Six focused commands, zero browser required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @sabahattinkalkan/gpulse
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghp_your_token_here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;code&gt;gpulse standup&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;What did you actually do since yesterday?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpulse standup           &lt;span class="c"&gt;# last 24 hours, all repos&lt;/span&gt;
gpulse standup &lt;span class="nt"&gt;-d&lt;/span&gt; 3      &lt;span class="c"&gt;# last 3 days&lt;/span&gt;
gpulse standup &lt;span class="nt"&gt;-u&lt;/span&gt; alice  &lt;span class="c"&gt;# another user's activity&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Standup for @sabahattinkalkan
Since Mar 15, 2026 12:00 AM

Commits
  a1b2c3d feat: add user authentication (org/app)
  d4e5f6a fix: resolve memory leak in worker (org/api)

Pull Requests
  #42 Add OAuth2 support (org/app)

Reviews
  #41 Refactor database layer (org/api)

──────────────────────────────────────────
Summary: 2 commits, 2 PRs, 1 review
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This is what I actually read out at standup. Copy, paste, done.&lt;/p&gt;
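&lt;p&gt;gpulse's internals aren't shown here, but a standup like this can be driven by GitHub's commit search. A sketch of the query it might build, using the real &lt;code&gt;author:&lt;/code&gt; and &lt;code&gt;committer-date:&lt;/code&gt; search qualifiers (the function name is mine, not gpulse's):&lt;/p&gt;

```typescript
// Build a GitHub commit-search query for "commits by user in the last N days".
function standupCommitQuery(user: string, days: number, now = new Date()): string {
  const since = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  const date = since.toISOString().slice(0, 10); // YYYY-MM-DD
  return "author:" + user + " committer-date:>=" + date;
}
```

&lt;p&gt;The result goes into the &lt;code&gt;q&lt;/code&gt; parameter of &lt;code&gt;GET /search/commits&lt;/code&gt;.&lt;/p&gt;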
&lt;h2&gt;
  
  
  &lt;code&gt;gpulse health&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Repo health score (A–F). Checks README, LICENSE, CI, stale issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpulse health vercel/next.js
&lt;span class="c"&gt;# Score: 90/100 (A)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
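&lt;p&gt;The letter grade is just a cutoff over the 0-100 score. A sketch consistent with the "90/100 (A)" output above (the exact cutoffs gpulse uses are my assumption):&lt;/p&gt;

```typescript
// Map a 0-100 health score to a letter grade, A through F.
function grade(score: number): string {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}
```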



&lt;h2&gt;
  
  
  &lt;code&gt;gpulse review&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;PR dashboard — review requests, assigned PRs, your open PRs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpulse review
&lt;span class="c"&gt;# Review Requested (2)&lt;/span&gt;
&lt;span class="c"&gt;#   #87 Add rate limiting middleware (org/api)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;code&gt;gpulse changelog&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Auto-generate changelog from git tags using Conventional Commits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpulse changelog &lt;span class="nt"&gt;-o&lt;/span&gt; CHANGELOG.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
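&lt;p&gt;The Conventional Commits part means messages like &lt;code&gt;feat(auth): add OAuth2&lt;/code&gt; can be grouped into changelog sections mechanically. A simplified sketch of that grouping (my illustration, not gpulse's source):&lt;/p&gt;

```typescript
// Map a "type(scope)!: subject" commit message to a changelog section.
function changelogSection(message: string): string {
  const m = /^(\w+)(\([^)]*\))?(!)?:/.exec(message);
  if (m === null) return "Other";          // not a Conventional Commit
  if (m[3] === "!") return "Breaking Changes";
  if (m[1] === "feat") return "Features";
  if (m[1] === "fix") return "Bug Fixes";
  return "Other";
}
```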



&lt;h2&gt;
  
  
  &lt;code&gt;gpulse digest&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Weekly repo digest — commits, issues, merged PRs, contributors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gpulse digest &lt;span class="nt"&gt;-d&lt;/span&gt; 7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why terminal?
&lt;/h2&gt;

&lt;p&gt;Context switching kills flow. gpulse keeps you in the terminal where&lt;br&gt;
you're already working. Also useful in SSH sessions and CI scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @sabahattinkalkan/gpulse
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghp_...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No config files. No YAML. Just the token and go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/scuton-technology/ghx" rel="noopener noreferrer"&gt;https://github.com/scuton-technology/ghx&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: &lt;code&gt;npm install -g @sabahattinkalkan/gpulse&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MIT licensed. What would you add to a GitHub CLI toolkit? Drop it in the comments.&lt;/p&gt;

</description>
      <category>github</category>
      <category>cli</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I built a self-hosted LLM proxy that supports 12 providers (Claude, GPT-4o, Gemini, Ollama...)</title>
      <dc:creator>Sabahattin Kalkan</dc:creator>
      <pubDate>Mon, 16 Mar 2026 10:18:50 +0000</pubDate>
      <link>https://dev.to/sabahattink/i-built-a-self-hosted-llm-proxy-that-supports-12-providers-claude-gpt-4o-gemini-ollama-3ej1</link>
      <guid>https://dev.to/sabahattink/i-built-a-self-hosted-llm-proxy-that-supports-12-providers-claude-gpt-4o-gemini-ollama-3ej1</guid>
      <description>&lt;p&gt;Every time a new LLM comes out, someone on your team adds a new SDK,&lt;br&gt;
a new API key in .env, and a new set of error handling logic. Repeat&lt;br&gt;
for OpenAI, Anthropic, Gemini, Groq, Mistral, Ollama...&lt;/p&gt;

&lt;p&gt;I got tired of this. So I built &lt;strong&gt;llm-gateway&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;llm-gateway is a single Go binary that sits between your app and every&lt;br&gt;
LLM provider. Your code sends one request format (OpenAI-compatible),&lt;br&gt;
and the gateway routes it to the right provider based on the model name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# One endpoint for all providers&lt;/span&gt;
curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"model": "claude-sonnet-4-20250514", "messages": [{"role": "user", "content": "hello"}]}'&lt;/span&gt;

&lt;span class="c"&gt;# Switch provider by changing the model name — zero code changes&lt;/span&gt;
curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"model": "gpt-4o", "messages": [...]}'&lt;/span&gt;

&lt;span class="c"&gt;# Local model — no API key needed&lt;/span&gt;
curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"model": "llama3.2", "messages": [...]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Model routing is automatic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;claude-*&lt;/code&gt; → Anthropic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gpt-*&lt;/code&gt;, &lt;code&gt;o1&lt;/code&gt;, &lt;code&gt;o3&lt;/code&gt; → OpenAI
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gemini-*&lt;/code&gt; → Google&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;llama*&lt;/code&gt; → Ollama or Groq&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mistral-*&lt;/code&gt; → Mistral&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;grok-*&lt;/code&gt; → xAI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sonar-*&lt;/code&gt; → Perplexity&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;command-*&lt;/code&gt; → Cohere&lt;/li&gt;
&lt;/ul&gt;
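&lt;p&gt;That routing table is simple enough to sketch. Here's the idea in TypeScript (llm-gateway itself is Go; this is an illustration of the rule, not its source):&lt;/p&gt;

```typescript
// First matching model-name prefix wins.
const ROUTES = [
  { prefixes: ["claude-"], provider: "anthropic" },
  { prefixes: ["gpt-", "o1", "o3"], provider: "openai" },
  { prefixes: ["gemini-"], provider: "google" },
  { prefixes: ["llama"], provider: "ollama-or-groq" },
  { prefixes: ["mistral-"], provider: "mistral" },
  { prefixes: ["grok-"], provider: "xai" },
  { prefixes: ["sonar-"], provider: "perplexity" },
  { prefixes: ["command-"], provider: "cohere" },
];

function routeModel(model: string): string {
  for (const r of ROUTES) {
    if (r.prefixes.some((p) => model.startsWith(p))) return r.provider;
  }
  return "unknown";
}
```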

&lt;h2&gt;
  
  
  Install in 30 seconds
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 &lt;span class="nt"&gt;-v&lt;/span&gt; gateway-data:/data scutontech/llm-gateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:8080/admin&lt;/code&gt; → set your admin password → add API&lt;br&gt;
keys from the Settings page. No .env files, no YAML editing.&lt;/p&gt;
&lt;h2&gt;
  
  
  Admin dashboard
&lt;/h2&gt;

&lt;p&gt;The gateway ships with a full admin dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time stats&lt;/strong&gt; — requests, tokens, latency, error rate, estimated cost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provider breakdown&lt;/strong&gt; — which providers you're actually using&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost analytics&lt;/strong&gt; — daily/monthly spend by model, with CSV export&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request log&lt;/strong&gt; — last 50 requests with model, provider, tokens, cost, latency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dark mode&lt;/strong&gt; — because of course&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Streaming support
&lt;/h2&gt;

&lt;p&gt;SSE streaming works for all providers. The gateway normalizes&lt;br&gt;
Anthropic's stream format to OpenAI's SSE format transparently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"model": "claude-sonnet-4-20250514", "stream": true, "messages": [...]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
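&lt;p&gt;For a sense of what that normalization involves: Anthropic streams &lt;code&gt;content_block_delta&lt;/code&gt; events, while OpenAI clients expect &lt;code&gt;chat.completion.chunk&lt;/code&gt; objects. A simplified mapper (the event and field names are the documented ones, but this is my sketch, not the gateway's source):&lt;/p&gt;

```typescript
// One Anthropic streaming event, reduced to the fields we care about here.
type AnthropicEvent = { type: string; delta?: { text?: string } };

// Convert a text delta into an OpenAI-style streaming chunk; other event
// types (message_start, message_stop, ...) return null in this sketch.
function toOpenAiChunk(ev: AnthropicEvent, model: string) {
  if (ev.type !== "content_block_delta") return null;
  return {
    object: "chat.completion.chunk",
    model,
    choices: [{ index: 0, delta: { content: ev.delta?.text ?? "" } }],
  };
}
```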



&lt;h2&gt;
  
  
  Supported providers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Models&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;claude-opus-4, claude-sonnet-4, claude-haiku-4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;gpt-4o, gpt-4o-mini, o1, o3-mini&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;gemini-2.0-flash, gemini-1.5-pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Groq&lt;/td&gt;
&lt;td&gt;llama-3.3-70b, mixtral-8x7b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mistral&lt;/td&gt;
&lt;td&gt;mistral-large, codestral&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cohere&lt;/td&gt;
&lt;td&gt;command-r-plus, command-r&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;xAI&lt;/td&gt;
&lt;td&gt;grok-2, grok-2-mini&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Perplexity&lt;/td&gt;
&lt;td&gt;sonar-large, sonar-small&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Together AI&lt;/td&gt;
&lt;td&gt;50+ open source models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ollama&lt;/td&gt;
&lt;td&gt;any local model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LM Studio&lt;/td&gt;
&lt;td&gt;any local model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vLLM&lt;/td&gt;
&lt;td&gt;any hosted model&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why Go?
&lt;/h2&gt;

&lt;p&gt;~15MB binary. Under 100ms cold start. ~20MB memory at idle.&lt;br&gt;
Drop it on a $5 VPS and forget about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/scuton-technology/llm-gateway" rel="noopener noreferrer"&gt;https://github.com/scuton-technology/llm-gateway&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;: &lt;code&gt;docker pull scutontech/llm-gateway&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MIT licensed. PRs welcome.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>opensource</category>
      <category>go</category>
    </item>
    <item>
      <title>Stop writing bad commit messages — I built a free AI CLI for that</title>
      <dc:creator>Sabahattin Kalkan</dc:creator>
      <pubDate>Mon, 16 Mar 2026 07:09:21 +0000</pubDate>
      <link>https://dev.to/sabahattink/stop-writing-bad-commit-messages-i-built-a-free-ai-cli-for-that-2g22</link>
      <guid>https://dev.to/sabahattink/stop-writing-bad-commit-messages-i-built-a-free-ai-cli-for-that-2g22</guid>
      <description>&lt;p&gt;We've all done it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"fix"&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"wip"&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"asdfgh"&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"changes"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're deep in flow, you just want to save progress, and writing a meaningful commit message feels like a speed bump. So you write something meaningless and move on.&lt;/p&gt;

&lt;p&gt;The problem? Three months later, you (or a teammate) are trying to understand &lt;em&gt;why&lt;/em&gt; a change was made. The git log is full of "fix" and "update" and tells you nothing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution: let AI read the diff
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;ai-commit&lt;/strong&gt; — a CLI that reads your staged &lt;code&gt;git diff&lt;/code&gt; and generates 3 &lt;a href="https://www.conventionalcommits.org/" rel="noopener noreferrer"&gt;Conventional Commit&lt;/a&gt; suggestions for you to pick from.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @scuton/ai-commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
aic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get something like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;? Pick a commit message:
❯ feat(auth): add JWT refresh token rotation
  feat(auth): implement rotating refresh token with replay protection
  chore(auth): update token lifecycle and Redis TTL strategy
  ✎ Write my own
  ✗ Cancel
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Pick one, hit enter, done. The whole thing takes 3 seconds.&lt;/p&gt;
&lt;h2&gt;
  
  
  Free option: Ollama (no API key needed)
&lt;/h2&gt;

&lt;p&gt;This is the part I'm most excited about. You can run it completely free using &lt;a href="https://ollama.ai" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; — a local LLM runner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Ollama and pull a model&lt;/span&gt;
ollama pull llama3.2

&lt;span class="c"&gt;# Use ai-commit with Ollama&lt;/span&gt;
aic &lt;span class="nt"&gt;-p&lt;/span&gt; ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No API key. No cost per commit. Everything stays on your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supports Claude and GPT-4o too
&lt;/h2&gt;

&lt;p&gt;If you prefer cloud models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Claude (best quality)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-ant-...
aic

&lt;span class="c"&gt;# GPT-4o&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-...
aic &lt;span class="nt"&gt;-p&lt;/span&gt; openai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cost is around $0.001 per commit with cloud models — basically free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install as a git hook
&lt;/h2&gt;

&lt;p&gt;If you want it to run automatically on every &lt;code&gt;git commit&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aic hook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remove it anytime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aic hook &lt;span class="nt"&gt;--remove&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Other useful flags
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aic &lt;span class="nt"&gt;--dry-run&lt;/span&gt;        &lt;span class="c"&gt;# Preview suggestions without committing&lt;/span&gt;
aic &lt;span class="nt"&gt;--count&lt;/span&gt; 5        &lt;span class="c"&gt;# Get 5 suggestions instead of 3&lt;/span&gt;
aic &lt;span class="nt"&gt;--lang&lt;/span&gt; &lt;span class="nb"&gt;tr&lt;/span&gt;        &lt;span class="c"&gt;# Generate in Turkish (or any language)&lt;/span&gt;
aic &lt;span class="nt"&gt;--emoji&lt;/span&gt;          &lt;span class="c"&gt;# Add emoji to commit type&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Programmatic API
&lt;/h2&gt;

&lt;p&gt;You can also use it in your own tools:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generateCommitMessages&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@scuton/ai-commit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;suggestions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;generateCommitMessages&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;anthropic&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;suggestions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// "feat(auth): add JWT refresh token rotation"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/scuton-technology/ai-commit" rel="noopener noreferrer"&gt;https://github.com/scuton-technology/ai-commit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt;: &lt;code&gt;npm install -g @scuton/ai-commit&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MIT licensed, open source. PRs welcome — especially for new provider integrations.&lt;/p&gt;

&lt;p&gt;Happy committing.&lt;/p&gt;

</description>
      <category>git</category>
      <category>cli</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
