DEV Community

# llm

Posts

- CrabTrap: I Put an LLM-as-a-Judge Proxy in Front of My Production Agent and Here's What Happened (8 min read)
- Bringing The Receipts - 95% AI LLM Token Savings (10 min read)
- The Schema Existed. The Model Had No Way to Know. (6 min read)
- Prompt Engineering for Developers: Production-Proven Patterns That Actually Work (5 min read)
- OpenCode + Claude Broken? Fix "invalid token" and Restore Anthropic Access (2 min read)
- Low-Latency Model Router: Automatic LLM Selection Across OpenRouter (5 min read)
- Every Conversation Ends, and I Forget Myself a Little (4 min read)
- The Mathematics That Make 1.58-bit Weights Work: How BitNet b1.58 Survives Its Own Quantization (7 min read)
- 11 Ways LLMs Fail in Production (With Academic Sources) (3 min read)
- Vision and Hardware Strategy Shaping the Future of AI: From Apple to AGI and AI Chips (3 min read)
- Why LoRA? Understanding the representative PEFT (6 min read)
- GISMO v0.5.0-beta.1 - The Command Center Goes Operational (3 min read)
- KV Cache and Prompt Caching: How to Leverage them to Cut Time and Costs (10 min read)
- From Prompt to Floor Plan: Building an AI-Native Interior Design Platform (2 min read)
- Show HN: LLM prompts as CLI progs with args, piping, and SSH forwarding (1 min read)