<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kernel Pryanic</title>
    <description>The latest articles on DEV Community by Kernel Pryanic (@kernelpryanic).</description>
    <link>https://dev.to/kernelpryanic</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3885989%2Fc3c3a13d-f9e6-44d2-b765-4f6a2708757a.png</url>
      <title>DEV Community: Kernel Pryanic</title>
      <link>https://dev.to/kernelpryanic</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kernelpryanic"/>
    <language>en</language>
    <item>
      <title>Are We Using AI at the Wrong Scale?</title>
      <dc:creator>Kernel Pryanic</dc:creator>
      <pubDate>Tue, 28 Apr 2026 11:07:28 +0000</pubDate>
      <link>https://dev.to/kernelpryanic/are-we-using-ai-at-the-wrong-scale-2klo</link>
      <guid>https://dev.to/kernelpryanic/are-we-using-ai-at-the-wrong-scale-2klo</guid>
      <description>&lt;p&gt;We open our IDE and let a model running somewhere in the cloud read our entire codebase to add a null check - and &lt;a href="https://www.scientificamerican.com/article/anthropic-leak-reveals-claude-code-tracking-user-frustration-and-raises-new/" rel="noopener noreferrer"&gt;track our behaviour&lt;/a&gt; along the way. We open Google Docs and ask Gemini to fix a typo. We fire up GPT-class models to refine a Slack message, restructure a comment, generate a thumbnail. We're going to shove AI into every single hole that has data for it to be trained on.&lt;/p&gt;

&lt;p&gt;I'm not saying we shouldn't - that's the nature of expected technological progress, and there isn't much choice in the matter. But somewhere along the way we stopped asking whether the &lt;em&gt;scale&lt;/em&gt; of the model matches the scale of the task. And the answer, more often than we'd like to admit, is no.&lt;/p&gt;

&lt;p&gt;This isn't a doom take. We're not being replaced. We're just still in the early adoption phase, when most people don't fully grasp what AI is &lt;em&gt;not&lt;/em&gt; and where its limits are, and indulge in a bit too much wishful thinking. Which means we &lt;em&gt;can&lt;/em&gt; still shape it - like we shaped radio, then the internet, then open source. We just need to find a more natural path for this technology, before the current default ossifies into the only option.&lt;/p&gt;

&lt;h2&gt;The numbers don't support the defaults&lt;/h2&gt;

&lt;p&gt;Take &lt;a href="https://huggingface.co/Qwen/Qwen3-Coder-Next" rel="noopener noreferrer"&gt;Qwen3-Coder-Next&lt;/a&gt;: 80B total parameters but only 3B active - performing on par with models that have 10-20x more active compute, runnable on high-end consumer hardware (think a 64GB+ Apple Silicon Mac, or a beefy workstation card) instead of a datacenter rack. Go smaller still and it gets more interesting. A &lt;a href="https://www.distillabs.ai/blog/we-benchmarked-12-small-language-models-across-8-tasks-to-find-the-best-base-model-for-fine-tuning/" rel="noopener noreferrer"&gt;Qwen3-4B fine-tuned for a specific task matches a 120B+ model&lt;/a&gt; on that task, deployable on consumer hardware. Or take &lt;a href="https://github.com/datalab-to/chandra" rel="noopener noreferrer"&gt;Chandra&lt;/a&gt; - a 5B OCR model purpose-built for PDF and image conversion that &lt;a href="https://huggingface.co/datalab-to/chandra-ocr-2" rel="noopener noreferrer"&gt;outperforms both Gemini 2.5 Flash and GPT-5 Mini on multilingual document benchmarks&lt;/a&gt;. Not because it's smarter. Because it's focused.&lt;/p&gt;

&lt;p&gt;And every major model release is announced like an earth-shattering event, destined to overshadow everything before it and boost everything tenfold. Then we actually start using the thing, and we find a modest improvement - mostly specific, mostly a derivative of what the model was trained on. Take the &lt;em&gt;mysterious&lt;/em&gt; announcement of Anthropic's Mythos, supposedly "too dangerous to release" - we don't even know yet if it justifies the hype. Meanwhile &lt;a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier" rel="noopener noreferrer"&gt;this experimental article&lt;/a&gt; from Aisle already suggests small models can match or outperform it in vulnerability scans - one early experiment, but telling.&lt;/p&gt;

&lt;p&gt;This isn't new, either. &lt;a href="https://arxiv.org/abs/2203.15556" rel="noopener noreferrer"&gt;Chinchilla&lt;/a&gt; challenged the "bigger is always better" orthodoxy back in 2022, and since then the evidence has only stacked up - &lt;a href="https://arxiv.org/abs/2411.15821" rel="noopener noreferrer"&gt;small models trained on high-quality data&lt;/a&gt; for a dedicated task can match or beat their much larger cousins. We just kept defaulting to the biggest available thing anyway, partly out of habit, partly because the cloud paradigm is being pushed hard by everyone with a stake in keeping us there. The headline outpaces the reality, and the reality is that for most tasks, we're already past the point of useful returns from going bigger.&lt;/p&gt;

&lt;h2&gt;A different path&lt;/h2&gt;

&lt;p&gt;There's another path, and it doesn't look like Cyberpunk 2077. It doesn't require massive H200 clusters just to prettify your CV. It leads to more equal AI distribution, and it doesn't try to replace anybody.&lt;/p&gt;

&lt;p&gt;That path consists of small, dedicated models trained to do one or a few specific things at most. Models that are just smart enough to fulfill their purpose, and small enough to avoid creating the false impression that they're replacing anyone. This is the &lt;em&gt;mass&lt;/em&gt; AI of the future - a true symbiosis. Or to be more precise, it's proper tool use.&lt;/p&gt;

&lt;p&gt;Because AI is not a being. It's a simulation of one: a very cleverly engineered statistical model that's good at approximation in a way that &lt;em&gt;looks&lt;/em&gt; like adaptability. Treating it as a being is what gets us reaching for the largest possible model every time, as if we were asking a person for help. Treating it as a tool is what lets us match the model to the task - the way you don't use a chainsaw to slice bread.&lt;/p&gt;

&lt;p&gt;What this looks like in practice is software built AI-native from the ground up, rather than having AI bolted on through MCPs and API calls to remote giants. A document editor with small models embedded or pluggable for grammar checks, restructuring, summarization, all running locally. An OCR pipeline that just does OCR, well - paired with a small RAG model that lets you actually search and query a shelf of scanned papers or PDFs locally. A video editor with a small model that clips and tags footage on your machine. An in-game AI that runs on the player's hardware. None of these require breakthroughs - the models already exist, or could be trained without a billion-dollar cluster if there's enough data available.&lt;/p&gt;
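
&lt;p&gt;To make "embedded or pluggable" concrete, here's a minimal sketch of the grammar-check case - assuming a small instruct model is already being served locally through Ollama's standard generate endpoint, with the model name as a placeholder for whatever small model you run:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A minimal sketch, not a finished integration: ask a small local model,
# served by Ollama on localhost, to fix the grammar of a snippet of text.
import requests

def grammar_check(text):
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's local endpoint
        json={
            "model": "qwen3:4b",  # placeholder: any small local instruct model
            "prompt": "Fix grammar only, return just the corrected text:\n"
                      + text,
            "stream": False,  # one JSON response instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(grammar_check("Their going to the store tomorow."))
&lt;/code&gt;&lt;/pre&gt;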

&lt;p&gt;What's missing is the software paradigm to host them properly - and the orchestration layer to chain them together. If general AI adoption is in its early phase, small-model orchestration is in its infancy: tooling, conventions, ecosystems, all still forming. &lt;a href="https://github.com/comfyanonymous/ComfyUI" rel="noopener noreferrer"&gt;ComfyUI&lt;/a&gt; already lets people chain specialized image and video models into local pipelines - the closest thing we have to a working blueprint, though it's fragile and leans heavily on Python venvs. &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt; and &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; make running local models trivial and stable, but they're runtimes more than orchestrators. These are embryos - but they prove the paradigm works. And it's the part worth building out further.&lt;/p&gt;
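
&lt;p&gt;As a hedged sketch of what that orchestration layer might grow into - not how ComfyUI or Ollama work today, just the shape of the idea - here's a toy pipeline chaining two small local models through the same local API, with every model name a placeholder:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A toy orchestration sketch: chain two small local models in sequence.
# Step 1 tags a document cheaply; step 2 summarizes it, steered by the tags.
import requests

OLLAMA = "http://localhost:11434/api/generate"

def run(model, prompt):
    resp = requests.post(
        OLLAMA,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

def pipeline(document):
    # Smallest model first: focused, cheap tagging (placeholder model name).
    tags = run("qwen3:1.7b",
               "List three topical tags, comma-separated:\n" + document)
    # Slightly larger model second: summarization guided by step 1's output.
    summary = run("qwen3:4b",
                  f"Summarize in two sentences, focusing on {tags}:\n{document}")
    return tags, summary

# "scanned_notes.txt" is a stand-in for whatever your OCR step produced.
tags, summary = pipeline(open("scanned_notes.txt").read())
print(tags)
print(summary)
&lt;/code&gt;&lt;/pre&gt;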

&lt;h2&gt;Where the big ones still belong&lt;/h2&gt;

&lt;p&gt;Large models aren't a dead end. They're the right tool for genuinely hard, open-ended problems - complex coding across unfamiliar codebases, in-depth analysis, anything that genuinely requires reasoning across a wide context. The argument isn't "small models for everything." It's "stop using a trillion-parameter model to fix a typo."&lt;/p&gt;

&lt;p&gt;The honest version of the AI future is mixed: large models where their capabilities are actually needed, and small specialized models for the long tail of focused tasks - which is most of them. Treating those two cases the same way is what's wasteful. Not the technology itself.&lt;/p&gt;

&lt;h2&gt;Why this matters&lt;/h2&gt;

&lt;p&gt;Using large models for everything &lt;strong&gt;is&lt;/strong&gt; the dead end. Not because it doesn't work, but because of what it costs and where it leads. Every "fix this typo" routed through a frontier model is a small vote for the centralization of compute, the centralization of data, and the centralization of who gets to decide what AI does next. Multiply that by a billion daily prompts and you get the bubble we're currently inflating - one where the only viable AI is the kind that requires a hyperscaler to run.&lt;/p&gt;

&lt;p&gt;The small-model path isn't just more efficient. It's more honest about what most AI tasks actually need, and it leaves room for AI to be something other than a service we rent from a handful of hyperscalers.&lt;/p&gt;

&lt;p&gt;We can still take that path. Many of the models are already there, others are still to be explored and trained. The hardware is there. What's missing is the will to stop assuming bigger is always better - and the software to make small the new default.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>llm</category>
      <category>software</category>
    </item>
    <item>
      <title>Should Hand-Written Code Be Considered Art Now?</title>
      <dc:creator>Kernel Pryanic</dc:creator>
      <pubDate>Sat, 18 Apr 2026 13:25:51 +0000</pubDate>
      <link>https://dev.to/kernelpryanic/should-hand-written-code-be-considered-art-now-4fph</link>
      <guid>https://dev.to/kernelpryanic/should-hand-written-code-be-considered-art-now-4fph</guid>
      <description>&lt;p&gt;A few years ago, "writing code" meant sitting down and writing code with your own hands, kicking in a concentrated thinking process, being in a "flow". Today, more and more of it is generated, scaffolded, auto-completed, or delegated wholesale to AI. The act of typing out a function by hand has shifted pretty drastically from the default way of building software to delegating most of the code writing to AI.&lt;/p&gt;

&lt;p&gt;And that shift makes me wonder: if coding by hand is no longer the norm, does it start to look less like labor and more like craft?&lt;/p&gt;

&lt;h2&gt;The car manufacturing analogy&lt;/h2&gt;

&lt;p&gt;Consider the automotive industry. Most mass-market cars roll off highly automated production lines. Robots handle welding, painting, and assembly with precision and speed that no human can match. Humans are still in the loop, but mostly as reviewers, inspectors, and exception-handlers - finding defects and making judgment calls the machines can't.&lt;/p&gt;

&lt;p&gt;Then there's the other end of the spectrum. Bugatti, Pagani, Morgan, Rolls-Royce - manufacturers where people still shape body panels by hand, stitch leather in-house, and assemble engines one at a time. These cars cost a fortune, take months to build, and are objectively less "efficient" than their mass-produced counterparts.&lt;/p&gt;

&lt;p&gt;Nobody buys them for efficiency. They're bought because they're considered art.&lt;/p&gt;

&lt;p&gt;But here's the nuance worth keeping in mind: even these manufacturers don't do &lt;em&gt;everything&lt;/em&gt; by hand. Machine-stamped steel, off-the-shelf electronics, supplier-made fasteners - all of it shows up in a Bugatti too. The craft is concentrated where it matters: the engine, the interior, the body lines.&lt;/p&gt;

&lt;p&gt;The interesting thing is that hand manufacturing didn't become art because the craft itself changed. It became art because the context around it changed. Once automation became the default, building by hand took on a different meaning.&lt;/p&gt;

&lt;h2&gt;Software is going through the same shift&lt;/h2&gt;

&lt;p&gt;For decades, all code was hand-written - so we never thought of it as artisanal. It was just how the job got done. But as AI-assisted development becomes the norm, the way code gets produced is splitting in two:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generated code&lt;/strong&gt; - fast, cheap, good enough for MVPs, but without proper supervision it tends to turn into slop - plausible-looking code that quietly rots a codebase over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hand-written code&lt;/strong&gt; - slower, more expensive in human hours, but shaped by intent, taste, and experience at every line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real codebases won't be purely one or the other. Most will be a mix - boilerplate, glue, CRUD endpoints, and test scaffolding generated by AI, while the parts that really matter - the core domain logic, the critical algorithms, the architectural seams - stay hand-written. Same as the Bugatti. The craft gets concentrated where it counts.&lt;/p&gt;

&lt;p&gt;And if the analogy holds, those hand-written parts might eventually carry the same kind of prestige a hand-built engine does.&lt;/p&gt;

&lt;h2&gt;What would make code "art"?&lt;/h2&gt;

&lt;p&gt;Art in craft isn't just about doing things the hard way. A handmade engine isn't art because it's slow to build - it's art because every decision was made deliberately by someone who understood the whole. The same could be true for code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intentionality&lt;/strong&gt; - every abstraction, every name, every structural choice made for a reason.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coherence&lt;/strong&gt; - the handcrafted parts read like they were written by one mind, with a consistent aesthetic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restraint&lt;/strong&gt; - knowing what &lt;em&gt;not&lt;/em&gt; to add, which is something unsupervised AI-generated code famously struggles with.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Taste&lt;/strong&gt; - the hard-to-articulate sense of what makes a solution elegant rather than merely correct.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here's where it gets interesting. There's growing evidence that heavy AI use doesn't just change &lt;em&gt;what&lt;/em&gt; we produce - it changes &lt;em&gt;how we think&lt;/em&gt;. &lt;a href="https://dornsife.usc.edu/news/stories/ai-may-be-making-us-think-and-write-more-alike/" rel="noopener noreferrer"&gt;A recent paper from USC researchers&lt;/a&gt; argues that when people's writing and reasoning are mediated by the same handful of LLMs, their distinct linguistic styles, perspectives, and reasoning strategies get homogenized into standardized expressions and thoughts. &lt;a href="https://www.media.mit.edu/articles/a-i-is-homogenizing-our-thoughts/" rel="noopener noreferrer"&gt;An MIT Media Lab study&lt;/a&gt; observed the same effect at the neural level: participants who wrote essays with ChatGPT showed measurably lower brain activity, and their outputs converged stylistically.&lt;/p&gt;

&lt;p&gt;If most code starts flowing through the same few models, codebases will start to look alike - same patterns, same abstractions, same "safe" choices. Which means the qualities above - intentionality, coherence, restraint, taste - have always been valued, but until now they weren't &lt;em&gt;distinguishing&lt;/em&gt;. When everyone writes code by hand, having taste is an advantage. When most code is generated, and generated code trends toward the average, the hand-crafted parts are what give a codebase its character.&lt;/p&gt;

&lt;h2&gt;The counter-argument&lt;/h2&gt;

&lt;p&gt;It's worth noting the other side. Software's value has always been measured by what it does, not by how it was made. A throwaway script that saves a team ten hours a week is more valuable than an elegantly crafted library nobody uses. If AI can produce working code faster and cheaper, the "art" framing risks romanticizing a slower, more expensive process for its own sake.&lt;/p&gt;

&lt;h2&gt;So, is it art?&lt;/h2&gt;

&lt;p&gt;I don't think all hand-written code is art, just as not every handmade object is. But I do think the category is emerging. Sometimes it'll show up as entire codebases built from scratch by human hands - small libraries, focused tools, passion projects where every line was deliberate. More often, though, it'll show up as the &lt;em&gt;deliberate pieces&lt;/em&gt; inside otherwise ordinary projects: the core engine, the key abstractions, the parts someone cared enough to shape themselves.&lt;/p&gt;

&lt;p&gt;Ten years from now, I suspect we'll talk about those pieces the way we talk about a Bugatti engine or a Patek Philippe movement: not because they're the most efficient way to solve the problem, but because someone chose to solve it with care, line by line, on purpose.&lt;/p&gt;

&lt;p&gt;And maybe that's enough to call it art.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What do you think?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>art</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
