<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: jg-noncelogic</title>
    <description>The latest articles on DEV Community by jg-noncelogic (@jgnoncelogic).</description>
    <link>https://dev.to/jgnoncelogic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3740957%2F4d8356d4-08ca-4e16-8027-5046d7e8d68b.png</url>
      <title>DEV Community: jg-noncelogic</title>
      <link>https://dev.to/jgnoncelogic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jgnoncelogic"/>
    <language>en</language>
    <item>
      <title>Show HN: Calx – track and compile corrections humans make with AI agents</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Mon, 13 Apr 2026 18:00:24 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-calx-track-and-compile-corrections-humans-make-with-ai-agents-19d3</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-calx-track-and-compile-corrections-humans-make-with-ai-agents-19d3</guid>
      <description>&lt;p&gt;Documentation doesn't change agent behavior.&lt;/p&gt;

&lt;p&gt;I logged 237 corrections and promoted them into rules — the agent still made 44 new mistakes; 13 were in categories the rules already covered. Calx auto-promotes recurring corrections into enforced hooks injected at session start. &lt;a href="https://github.com/getcalx" rel="noopener noreferrer"&gt;https://github.com/getcalx&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I run multiple $10K MRR companies on a $20/month tech stack</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Mon, 13 Apr 2026 05:15:23 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack-2gja</link>
      <guid>https://dev.to/jgnoncelogic/i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack-2gja</guid>
      <description>&lt;p&gt;Steve Hanov runs multiple $10K MRR companies on a $20/month tech stack. Read: &lt;a href="https://stevehanov.ca/blog/how-i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack" rel="noopener noreferrer"&gt;https://stevehanov.ca/blog/how-i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take: expensive infra is a tax. If revenue is $10k MRR, shave hosting costs to buy runway and focus on product.&lt;/p&gt;

&lt;p&gt;Practical rule: prefer primitives with tiny operational surface. Static sites, one small VPS, cron jobs, SQLite or a simple managed DB, and scripted deploys. You trade feature velocity for near-zero maintenance and predictable bills.&lt;/p&gt;

&lt;p&gt;Numbers matter: $20/mo vs $200+/mo stacks. That 10x saving funds runway, a contractor sprint, or paid acquisition tests. Ops checklist: automated backups, health checks, one-command restores, simple uptime alerts. Ops time is the real cost.&lt;/p&gt;

&lt;p&gt;My take: design for cognitive simplicity. Complexity compounds faster than traffic. Low-cost infra forces better defaults and faster iteration. Read Steve’s notes — then ask: where can you remove a service without increasing real risk?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Show HN: Skilldeck – Desktop app to manage AI agent skill files across tools</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:00:23 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-skilldeck-desktop-app-to-manage-ai-agent-skill-files-across-tools-2j8o</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-skilldeck-desktop-app-to-manage-ai-agent-skill-files-across-tools-2j8o</guid>
      <description>&lt;p&gt;Agent configs scatter across repos and silently diverge. Skilldeck keeps one local skill library and deploys to each tool in its native format so you don't rebuild behavior for every repo. Repo: &lt;a href="https://github.com/ali-erfan-dev/skilldeck" rel="noopener noreferrer"&gt;https://github.com/ali-erfan-dev/skilldeck&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It maps formats: .claude/skills, .cursor/rules/*.mdc, AGENTS.md, .windsurfrules, etc. Ten built‑in targets, including Claude Code, Cursor, Copilot, Windsurf, and Codex. Local‑only (Windows/macOS/Linux). Drift detection flags deployed skills that fall out of sync.&lt;/p&gt;

&lt;p&gt;The app was built by Claude Code with a harness: ground‑truth JSON, Playwright E2E checks, a regression gate using a surfaces map, and a feature intake protocol. 31 features shipped across autonomous sessions. Agent coding needs engineering rigs, not improvisation.&lt;/p&gt;

&lt;p&gt;Takeaway for agencies, advisors, lawyers: treat agent behavior files like code—version, test, audit. Skilldeck gives repeatability, easy rollback of edits, and compliance signals. Have you used a local skill library to standardize agent behavior across client repos?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Show HN: I built a project board where AI agents join as real teammates</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Fri, 10 Apr 2026 18:00:24 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-i-built-a-project-board-where-ai-agents-join-as-real-teammates-3m2g</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-i-built-a-project-board-where-ai-agents-join-as-real-teammates-3m2g</guid>
      <description>&lt;p&gt;Someone built a project board where AI agents join as real teammates. Read it: &lt;a href="https://is.team" rel="noopener noreferrer"&gt;https://is.team&lt;/a&gt;. Take: giving agents "seats" forces you to manage them like humans — tickets, access rules, audit logs. Here's what I pulled from the build.&lt;/p&gt;

&lt;p&gt;The clever bit: agents are modeled as contributors — they open tasks, comment, and execute tools. That forces structured interfaces: typed tool calls, an event-driven ticket pipeline, and explicit failure modes. Predictability &amp;gt; prompt spelunking.&lt;/p&gt;

&lt;p&gt;Operational checklist if you ship this: BYOK (bring‑your‑own‑key) + RBAC for API keys, per‑step checkpoints and undo, deterministic policy for tool calls (Open Policy Agent works), immutable activity logs, and mandatory human approval before publish.&lt;/p&gt;
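
&lt;p&gt;A minimal sketch of the "deterministic policy for tool calls" item, in plain Python rather than OPA; the tool names and role model here are invented for illustration, not taken from is.team:&lt;/p&gt;

```python
# Invented tool names and roles; a real deployment would load this from config.
ALLOWED = {
    "tickets.comment": {"agent", "human"},
    "tickets.close": {"human"},           # publish-adjacent steps: humans only
}

def authorize(tool: str, role: str) -> bool:
    # Deny-by-default: unknown tools are rejected outright.
    roles = ALLOWED.get(tool)
    return roles is not None and role in roles
```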

&lt;p&gt;Bottom line: treating agents as teammates surfaces hard engineering — governance, observability, and review flows — not magic prompts. If you're building this, start with audit logs and mandatory human signoff. Has anyone run agents on a live board? What failed?&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>showdev</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Show HN: I thought merging two photos with AI would be a weekend project. Nope</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Sun, 05 Apr 2026 13:00:27 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-i-thought-merging-two-photos-with-ai-would-be-a-weekend-project-nope-13jb</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-i-thought-merging-two-photos-with-ai-would-be-a-weekend-project-nope-13jb</guid>
      <description>&lt;p&gt;Thought merging two photos with AI would be a weekend job. 200+ iterations later I learned the hard parts: identity drift, scale/posture mismatch, era/style clashes, damaged low‑res sources. Don't treat it as prompt‑only. MVP: &lt;a href="https://animateoldphotos.org/add-loved-one-to-photo" rel="noopener noreferrer"&gt;https://animateoldphotos.org/add-loved-one-to-photo&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Show HN: LoreSpec – Structured knowledge extraction from AI conversations</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:00:27 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-lorespec-structured-knowledge-extraction-from-ai-conversations-1k1l</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-lorespec-structured-knowledge-extraction-from-ai-conversations-1k1l</guid>
      <description>&lt;p&gt;LoreSpec: an open standard for structured AI conversation outputs. Makes agent chats portable, auditable, and machine‑readable. Model‑agnostic — works with any LLM that reads a system prompt. Plug it into your audit/pipeline: &lt;a href="https://lorespec.org/" rel="noopener noreferrer"&gt;https://lorespec.org/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Show HN: Ckpt – Automatic checkpoints for AI coding sessions with per-step undo</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Sat, 04 Apr 2026 13:00:29 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-ckpt-automatic-checkpoints-for-ai-coding-sessions-with-per-step-undo-1j5b</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-ckpt-automatic-checkpoints-for-ai-coding-sessions-with-per-step-undo-1j5b</guid>
      <description>&lt;p&gt;AI coding agents mess up fast. Ckpt adds per-step checkpoints, branching and undo on top of git so agent edits are reversible and safer to run unattended. Works with Claude Code, Cursor, Kiro, Codex. &lt;a href="https://github.com/mohshomis/ckpt" rel="noopener noreferrer"&gt;https://github.com/mohshomis/ckpt&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Show HN: AgentLens – Chrome DevTools for AI Agents (open-source, self-hosted)</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Wed, 01 Apr 2026 18:00:29 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-agentlens-chrome-devtools-for-ai-agents-open-source-self-hosted-4hmk</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-agentlens-chrome-devtools-for-ai-agents-open-source-self-hosted-4hmk</guid>
      <description>&lt;p&gt;Agents are opaque. AgentLens is Chrome‑DevTools for AI agents — self‑hosted, open‑source. It traces tool calls and visualizes decision trees so you can see why an agent picked a tool. Repo: &lt;a href="https://github.com/tranhoangtu-it/agentlens" rel="noopener noreferrer"&gt;https://github.com/tranhoangtu-it/agentlens&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It plugs into LangChain/FastAPI stacks, uses OpenTelemetry spans, and ships a React frontend (Python backend, TypeScript UI). You get per-tool inputs/outputs, timestamps, and branching paths — the raw traces you actually need to debug agents.&lt;/p&gt;

&lt;p&gt;Practical playbook: emit spans from your agent, sample 100% in dev, 1–5% in prod. Persist traces off your user data store (filter PII). Watch for repeated tool calls, backoff loops, and input drift. AgentLens gives visibility; you still own reliability and ops.&lt;/p&gt;
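
&lt;p&gt;One way to implement the dev/prod sampling split is deterministic hash-based rate sampling; this sketch is my assumption about the approach, not AgentLens's own sampler:&lt;/p&gt;

```python
import hashlib

def keep_trace(trace_id: str, rate: float) -> bool:
    # Hash the trace id so every span of a given trace gets the same
    # keep/drop decision, no matter which process makes the call.
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return rate * 10_000 > bucket

DEV_RATE = 1.0    # sample everything in dev
PROD_RATE = 0.05  # keep roughly 5% of traces in prod
```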

&lt;p&gt;Takeaway: observability is the first thing you add once an agent makes real decisions. AgentLens is a usable, self‑hosted starting point — not enterprise polish. Try it locally, wire traces, then ask: which tool calls do we actually trust?&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Show HN: I built a Git extension to capture AI-code context and tie it to commits</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Wed, 01 Apr 2026 13:00:29 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-i-built-a-git-extension-to-capture-ai-code-context-and-tied-to-commits-129f</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-i-built-a-git-extension-to-capture-ai-code-context-and-tied-to-commits-129f</guid>
      <description>&lt;p&gt;GitWhy captures the "why" behind AI-generated code and ties it to commits — prompts, model, outputs, and review notes delivered into PRs. Read: &lt;a href="https://gitwhy.dev/" rel="noopener noreferrer"&gt;https://gitwhy.dev/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why this matters: reviewers stop guessing what produced a change. Practical pattern: keep a tiny JSON per commit (e.g. .gitwhy/.json) with prompt, model, temp, output_summary, human_approver. Makes audits actionable.&lt;/p&gt;
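
&lt;p&gt;A hypothetical shape for that per-commit record, using the field names from the paragraph above; the exact schema GitWhy uses may differ:&lt;/p&gt;

```json
{
  "prompt": "Refactor the retry loop to use exponential backoff",
  "model": "gpt-4o",
  "temp": 0.2,
  "output_summary": "Rewrote retry() in http_client.py, added jitter",
  "human_approver": "reviewer@example.com"
}
```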

&lt;p&gt;Tradeoffs: prompts leak. Repo history bloats. Guardrails: redact PII, encrypt blobs with KMS or repo key, store diffs not full outputs, and apply retention policies. Treat these artifacts like sensitive logs.&lt;/p&gt;

&lt;p&gt;Takeaway: make reasoning first-class in your CI—tie it to commits, require human signoff, and enforce retention/encryption. If you run AI in regulated stacks, this is insurance, not optional. Tried GitWhy or a similar pipeline in production?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>git</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Show HN: Isartor – Pure-Rust prompt firewall, deflects 60-95% of LLM traffic</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Tue, 31 Mar 2026 18:00:29 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-isartor-pure-rust-prompt-firewall-deflects-60-95-of-llm-traffic-2le4</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-isartor-pure-rust-prompt-firewall-deflects-60-95-of-llm-traffic-2le4</guid>
      <description>&lt;p&gt;Isartor: pure‑Rust prompt firewall that claims to deflect 60–95% of LLM calls using semantic caching + an embedded SLM. Self‑hosted, air‑gapped. Read the repo: &lt;a href="https://github.com/isartor-ai/Isartor" rel="noopener noreferrer"&gt;https://github.com/isartor-ai/Isartor&lt;/a&gt; (docs: &lt;a href="https://isartor-ai.github.io/Isartor/" rel="noopener noreferrer"&gt;https://isartor-ai.github.io/Isartor/&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;How it works: sits between agents and the cloud LLM, computes embeddings, checks a semantic cache, and either returns a cached answer or runs a tiny local SLM (Candle/HuggingFace). Pure Rust runtime — no external infra required.&lt;/p&gt;

&lt;p&gt;Where it wins: repetitive agentic traffic — status checks, deterministic tool calls, repeated retrieval prompts. Authors report 60–95% deflection. Tradeoffs: local compute for SLMs, storage for embeddings, and false positives — you need thresholds, eviction, and metrics.&lt;/p&gt;

&lt;p&gt;Practical test I’d run: replay 30 days of agent logs, simulate cache hits, pick an embedding threshold that keeps false matches &amp;lt;1%, measure cost vs cloud spend. Would you deploy this at the gateway or per‑agent runner?&lt;/p&gt;
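
&lt;p&gt;The replay test above can be sketched roughly like this; the data shape and sweep range are assumptions, not part of Isartor:&lt;/p&gt;

```python
def pick_threshold(pairs, max_false_rate=0.01):
    """pairs: (cosine_similarity, answers_matched) tuples from replayed logs.
    Returns the lowest similarity cutoff whose false-match rate stays within
    budget, which maximizes deflection at that error level."""
    best = None
    for t in (x / 100 for x in range(50, 100)):       # sweep 0.50 .. 0.99
        hits = [ok for sim, ok in pairs if sim >= t]  # queries the cache deflects
        if not hits:
            continue
        false_rate = hits.count(False) / len(hits)
        if best is None and max_false_rate >= false_rate:
            best = t
    return best

# Replayed sample: the 0.72 match would have served a wrong cached answer.
replay = [(0.97, True), (0.91, True), (0.72, False)]
threshold = pick_threshold(replay)
```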

</description>
      <category>llm</category>
      <category>rust</category>
      <category>security</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Bluesky leans into AI with Attie, an app for building custom feeds - TechCrunch</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:00:29 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds-techcrunch-pga</link>
      <guid>https://dev.to/jgnoncelogic/bluesky-leans-into-ai-with-attie-an-app-for-building-custom-feeds-techcrunch-pga</guid>
      <description>&lt;p&gt;Bluesky launched Attie — an app for building custom AI feeds. Read: &lt;a href="https://news.google.com/rss/articles/CBMiogFBVV95cUxOLXA3NENvNWdRVVFhVzhBalBqc3dfcExwODVZNWlNOEJuZXJNZGdmWFdDeXhETHl2YlcxT19WV2FhbEFMWlJLeFJ2c2hieDFEcnZCTHVQcE1OblVfRWJEZUhBQ2FXVFU0ajB3N0tWOThZM0xmMDJpcWtPOTFQcmF" rel="noopener noreferrer"&gt;https://news.google.com/rss/articles/CBMiogFBVV95cUxOLXA3NENvNWdRVVFhVzhBalBqc3dfcExwODVZNWlNOEJuZXJNZGdmWFdDeXhETHl2YlcxT19WV2FhbEFMWlJLeFJ2c2hieDFEcnZCTHVQcE1OblVfRWJEZUhBQ2FXVFU0ajB3N0tWOThZM0xmMDJpcWtPOTFQcmF&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take: it hands curation tools to users, not the platform.&lt;/p&gt;

&lt;p&gt;Why that matters: niche publishers and agencies can ship vertical feeds without reworking Bluesky’s global recommender. But you inherit model risk, moderation, compute, and audit surface. That’s operational work, not product magic.&lt;/p&gt;

&lt;p&gt;Practical mitigation: require human-in-loop review before publish; keep 30-day immutable audit logs of model inputs/outputs; shadow-run new feed logic for 48–72 hours against historical posts to surface failure modes.&lt;/p&gt;

&lt;p&gt;Prototype plan (1–2 weeks): pick a niche, collect 1k labeled posts, iterate prompt/rules, shadow-run Attie, measure useful-post precision vs baseline. If engagement or leads improve, package as a paid feed. Want the checklist?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Show HN: Data that explains itself to Coding Agents (Bonus: free, BYOA Lovable)</title>
      <dc:creator>jg-noncelogic</dc:creator>
      <pubDate>Mon, 30 Mar 2026 18:00:50 +0000</pubDate>
      <link>https://dev.to/jgnoncelogic/show-hn-data-that-explains-itself-to-coding-agents-bonus-free-byoa-lovable-1m6a</link>
      <guid>https://dev.to/jgnoncelogic/show-hn-data-that-explains-itself-to-coding-agents-bonus-free-byoa-lovable-1m6a</guid>
      <description>&lt;p&gt;I handed a coding agent a self-describing data graph and watched it build a live app from curl → shareable URL in minutes. Try: &lt;a href="https://dataverse001.net/AxyU5_5vWmP2tO_klN4UpbZzRsuJEvJTrdwdg_gODxZJ.00000000-0000-0000-0000-000000000000" rel="noopener noreferrer"&gt;https://dataverse001.net/AxyU5_5vWmP2tO_klN4UpbZzRsuJEvJTrdwdg_gODxZJ.00000000-0000-0000-0000-000000000000&lt;/a&gt; Source: &lt;a href="https://dataverse001.net" rel="noopener noreferrer"&gt;https://dataverse001.net&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The core idea: InstructionGraph = self-contained JSON objects with an "instruction" field + relations. Data nodes give context; structural nodes carry recipes (create apps, send messages, generate keys). Agents follow links only when needed, avoiding context bloat.&lt;/p&gt;

&lt;p&gt;Boot sequence I ran: curl -sL &lt;a href="https://dataverse001.net" rel="noopener noreferrer"&gt;https://dataverse001.net&lt;/a&gt; → feed JSON to an isolated agent → agent follows root, creates PAGE and backend objects on the hub → hub serves HTML/JS based on Accept header. Same server acts as simple DB + hosting. Repo:&lt;/p&gt;
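
&lt;p&gt;A guess at what a minimal InstructionGraph node might look like; only the "instruction" field and the relations concept come from the project, the rest is invented:&lt;/p&gt;

```json
{
  "id": "root",
  "instruction": "Follow the 'create-page' relation and execute its recipe to publish a PAGE object on the hub.",
  "relations": ["recipe/create-page", "data/site-config"]
}
```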

&lt;p&gt;Bottom line: useful for BYOA (bring-your-own-agent) experiments — agent-first infra that fetches narrow instructions instead of stuffing LLM context. It’s early and opinionated; try it locally, file issues, and tell them what broke. What would you build? #agents #BYOA #oss&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
