<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexzo</title>
    <description>The latest articles on DEV Community by Alexzo (@alexzoofficial).</description>
    <link>https://dev.to/alexzoofficial</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3861949%2Fa440c82d-2c8c-437f-a1df-308e28444c54.png</url>
      <title>DEV Community: Alexzo</title>
      <link>https://dev.to/alexzoofficial</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexzoofficial"/>
    <language>en</language>
    <item>
      <title>I'm under 18, broke, and I just designed an open-source AI chip. Here's the full story.</title>
      <dc:creator>Alexzo</dc:creator>
      <pubDate>Sun, 05 Apr 2026 07:56:57 +0000</pubDate>
      <link>https://dev.to/alexzoofficial/im-under-18-broke-and-i-just-designed-an-open-source-ai-chip-heres-the-full-story-33mk</link>
      <guid>https://dev.to/alexzoofficial/im-under-18-broke-and-i-just-designed-an-open-source-ai-chip-heres-the-full-story-33mk</guid>
      <description>&lt;p&gt;I don't have a team.&lt;br&gt;
I don't have funding.&lt;br&gt;
I don't have a lab.&lt;/p&gt;

&lt;p&gt;I have a laptop, an internet connection, and an obsession with chips.&lt;/p&gt;

&lt;p&gt;This is the story of T1C — Tier 1 Chip — and why I built it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9o4avrb1teeovr97wi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9o4avrb1teeovr97wi0.png" alt="We Design It. World Builds It." width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It started with a frustration.&lt;/p&gt;

&lt;p&gt;Every time I read about AI hardware, it was the same story. NVIDIA charges $30,000 for an H100. TSMC charges millions for a custom fab run. Apple Silicon is beautiful but completely closed. Intel, Qualcomm, AMD — all of them — locked behind NDAs, closed architectures, and billion-dollar relationships.&lt;/p&gt;

&lt;p&gt;I kept thinking: why does no one make an open-source AI chip that a real person can actually fabricate?&lt;/p&gt;

&lt;p&gt;Not a toy. Not a demo. A real architecture with real specs, real physics, and a real path to silicon.&lt;/p&gt;

&lt;p&gt;So I built one.&lt;/p&gt;




&lt;p&gt;T1C uses Digital In-Memory Computing — D-IMC. Instead of sending data back and forth between memory and processor (the classic von Neumann bottleneck that kills every conventional AI chip), computation happens near the memory itself. Less data movement. Less power. More speed for the kinds of workloads LLMs actually need.&lt;/p&gt;
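&lt;p&gt;To make the bottleneck concrete, here is a rough back-of-envelope sketch (illustrative only; the layer size and byte widths are hypothetical, not T1C specs) of how much data crosses the memory bus in each model:&lt;/p&gt;

```python
# Illustrative sketch: data movement for one matrix-vector multiply under
# a von Neumann model (weights streamed to the ALU every pass) versus a
# D-IMC model (weights stationary inside the memory array; only
# activations and results move). All numbers here are hypothetical.

ROWS, COLS = 4096, 4096    # hypothetical weight matrix shape
BYTES_PER_WEIGHT = 1       # e.g. 8-bit weights
BYTES_PER_ACT = 1          # e.g. 8-bit activations

# Von Neumann: every weight crosses the bus once per pass, plus the input.
von_neumann_traffic = ROWS * COLS * BYTES_PER_WEIGHT + COLS * BYTES_PER_ACT

# D-IMC: weights stay put; only the input vector goes in and the output
# vector comes back out.
dimc_traffic = COLS * BYTES_PER_ACT + ROWS * BYTES_PER_ACT

print(f"von Neumann bytes moved: {von_neumann_traffic:,}")
print(f"D-IMC bytes moved:       {dimc_traffic:,}")
print(f"reduction: {von_neumann_traffic / dimc_traffic:.0f}x")
```

&lt;p&gt;The traffic saved scales with the size of the weight matrix, which is why keeping weights stationary pays off most for large LLM layers.&lt;/p&gt;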

&lt;p&gt;The process node is 65nm LP via GlobalFoundries community shuttle — or 130nm via IHP Germany, which is completely free for open source research. Yes, free.&lt;/p&gt;

&lt;p&gt;One blade = 8 MAAU chips. One blade costs $280–$650 depending on your path. Eight blades get you a cluster that can run LLaMA 70B at 10–16 tokens per second. Total cost: $2,240–$5,200.&lt;/p&gt;
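&lt;p&gt;A quick sanity check of that arithmetic, using only the per-blade range quoted above:&lt;/p&gt;

```python
# Back-of-envelope check of the cluster cost claim. The only inputs are
# the per-blade cost range quoted in the post.
blade_cost_low, blade_cost_high = 280, 650   # USD per 8-MAAU blade
blades = 8

cluster_low = blades * blade_cost_low
cluster_high = blades * blade_cost_high

print(f"8-blade cluster: ${cluster_low:,} to ${cluster_high:,}")
```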

&lt;p&gt;Compare that to a single H100 at $30,000. Closed. Proprietary. Not yours.&lt;/p&gt;




&lt;p&gt;But I'm not going to pretend this was easy or that everything worked perfectly.&lt;/p&gt;

&lt;p&gt;The first version had HBM-Lite on-package memory. Sounds great on paper. In practice, it requires TSMC's CoWoS packaging: millions of dollars, available only to TSMC's largest customers. Completely impossible for DIY. So I scrapped it.&lt;/p&gt;

&lt;p&gt;Replaced with 4x LPDDR5X chips per MAAU, 128-bit wide bus, assembled on a standard PCB. Bandwidth: 168 GB/s. Enough for everything T1C needs to do. Cost: $15–35 instead of $70. Better.&lt;/p&gt;

&lt;p&gt;The voltage system was another nightmare. Even 10mV of fluctuation causes timing violations, wrong computation, or a full chip crash. Load current can swing by 1000x within a nanosecond; no VRM can respond that fast on its own. So I built a 5-layer Adaptive Voltage Stack: on-chip LDOs, MOM capacitors, PCB ceramics, bulk caps, and an I2C adaptive VRM. Combined result: ±3mV stability. Better than most commercial MCUs.&lt;/p&gt;

&lt;p&gt;The TurboQuant implementation had a bug. The original design used PolarQuant + QJL — a 1-bit error correction stage. Five independent community teams confirmed: QJL increases variance. Softmax amplifies it. Attention scores degrade. My original claim of "zero accuracy loss at 3-bit for all models" was wrong. I documented it publicly, dropped the QJL stage entirely, and went PolarQuant-only. 4-bit is now the default — lossless for every model size.&lt;/p&gt;
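&lt;p&gt;To see what the base quantization step does at all, here is a naive 4-bit symmetric round-trip (illustration only; this is not PolarQuant, and the QJL stage is deliberately absent, matching the fixed design):&lt;/p&gt;

```python
import numpy as np

# Naive 4-bit symmetric quantization round-trip. NOT PolarQuant; it only
# shows the quantize/dequantize step and the bounded error it introduces,
# which any extra stage (like the dropped QJL) would add variance on top of.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)   # fake weight tensor

bits = 4
qmax = 2 ** (bits - 1) - 1            # 7 for signed 4-bit
scale = np.abs(w).max() / qmax        # per-tensor scale

# Quantize to signed 4-bit codes, then dequantize back to float.
q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
w_hat = q.astype(np.float32) * scale

err = np.abs(w - w_hat).max()
print(f"max abs reconstruction error: {err:.4f} (half a step = {scale / 2:.4f})")
```

&lt;p&gt;With a plain symmetric scheme, the worst-case error stays within half a quantization step; the whole point of the documented fix was that stacking another stage on top can make the effective variance worse, not better.&lt;/p&gt;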

&lt;p&gt;I didn't hide these problems. I documented every single one with the exact fix applied. That's the only way open source hardware can actually work.&lt;/p&gt;




&lt;p&gt;The architecture includes MIM — Multi-Instance MAAU. Each physical chip can be partitioned into up to 4 isolated hardware slices, each with independent SRAM via hardware MMU, dedicated DMA channel, its own LDO power domain, and a separate clock domain.&lt;/p&gt;

&lt;p&gt;NVIDIA calls this MIG on the H100. T1C has it too — in open-source Verilog RTL, runtime-resizable in under 100ms without a system reboot, at $0 cost because it's already in the design.&lt;/p&gt;




&lt;p&gt;Everything is MIT licensed.&lt;/p&gt;

&lt;p&gt;Full Verilog RTL. GDSII files for both GF 65nm and IHP 130nm. KiCad PCB for the 8-layer blade. ISA specification. Verilator simulation model so software developers can write compilers before the chip physically exists. Boot ROM. Basic assembler. Full BOM with LCSC and Mouser links and real prices.&lt;/p&gt;

&lt;p&gt;The software roadmap is staged and honest. Day one: a working matrix multiply that proves the chip actually computes. Month 3–6: llama.cpp backend so LLMs run on T1C. Month 6–12: ONNX Runtime. Month 12–18: PyTorch. Month 24+: HuggingFace integration, self-sustaining community, companies building products on top.&lt;/p&gt;




&lt;p&gt;T1C is not faster than an RTX 4090 per dollar right now. I want to be clear about that.&lt;/p&gt;

&lt;p&gt;T1C's value is something different. It's the first open-source AI accelerator with D-IMC architecture that a human being can actually fabricate. It aims to be for AI accelerators what RISC-V is for CPUs: not the fastest thing, but the open thing. The thing you can hold in your hand and say: I built this. I understand every transistor. I can modify it, improve it, and build products on it.&lt;/p&gt;




&lt;p&gt;Overall production readiness score: 9.2/10.&lt;/p&gt;

&lt;p&gt;Architecture: 9/10.&lt;br&gt;
Voltage stability: 10/10.&lt;br&gt;
TurboQuant implementation: 10/10.&lt;br&gt;
MIM hardware isolation: 10/10.&lt;br&gt;
Documentation honesty: 10/10.&lt;/p&gt;

&lt;p&gt;Real engineering. Honest numbers. &lt;br&gt;
Open future.&lt;/p&gt;

&lt;p&gt;From India — for the world.&lt;/p&gt;

&lt;p&gt;We Design It. World Builds It.&lt;/p&gt;

&lt;p&gt;Full spec: &lt;a href="https://alexzo.vercel.app/t1c" rel="noopener noreferrer"&gt;https://alexzo.vercel.app/t1c&lt;/a&gt;&lt;br&gt;
Deep dive blog: &lt;a href="https://alexzo.vercel.app/blog/8" rel="noopener noreferrer"&gt;https://alexzo.vercel.app/blog/8&lt;/a&gt;&lt;br&gt;
GitHub repo: &lt;a href="https://github.com/Alexzoofficial/T1C" rel="noopener noreferrer"&gt;https://github.com/Alexzoofficial/T1C&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>webdev</category>
      <category>hardware</category>
    </item>
  </channel>
</rss>
