Welcome back to the server rack. If you’ve been watching Kiwi-chan’s development, you know we’ve been chasing the AI automation holy grail: zero-latency, zero-API-cost, fully local inference. Well, strap in, because as of this devlog, Kiwi-chan is officially running entirely on-prem. No cloud calls. No rate limits. Just raw, local tensor math and a digital bird with an insatiable curiosity for block collisions.
Let’s crack open the telemetry from the last four hours. The numbers paint a picture of glorious, chaotic experimentation: Total Actions: 2676, Success: 1190, Rate: 44.5%. On paper? Looks like a lot of flailing. In practice? It’s the sound of an AI learning to walk by repeatedly face-planting into invisible walls. But every failure is a gradient update, and Kiwi-chan is backpropagating like a champ.
🖥️ The Local Qwen 35B Leap
The headline here isn’t just the stats—it’s the architecture. We’ve fully migrated Kiwi-chan’s reasoning engine to Qwen 35B, running locally with a custom quantized inference stack. The difference is night and day. Where we used to wait for cloud round-trips, Kiwi-chan now thinks in milliseconds. The Brain Log reveals a fascinating autonomous loop: environment scan → local LLM goal generation → JS payload emission → execution → failure recovery → local LLM coaching.
Notice the recent cognitive cycle:
[06:49:09] 🧠 Asking Local LLM for next goal (Text-Only Mode)...
[06:49:09] 🎓 Coach Decision: 'explore_forward'
Reason: The AI is currently holding a massive amount of cobblestone (300+ units)...
This isn’t a script anymore. It’s an LLM-driven decision tree with a "Coach" subsystem that acts as a digital therapist. When Kiwi-chan gets stuck in a gather_logs loop or triggers a BOREDOM TRIGGERED! state, the local model re-evaluates the state space and pivots. No human hand on the throttle. Just pure, local autonomy.
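To make the loop concrete, here's a minimal sketch of one cognitive cycle. Every name here (`localLLM`, `cognitiveCycle`, the mock bot) is an illustrative stand-in, not Kiwi-chan's actual module; the goal heuristic just mirrors the Coach decision from the log above.

```javascript
// Illustrative sketch of one scan -> goal -> execute -> coach cycle.
// All identifiers are hypothetical stand-ins for Kiwi-chan's real modules.

// Stub "local LLM": picks the next goal from an environment snapshot,
// echoing the Coach heuristic from the log (300+ cobblestone -> explore).
const localLLM = {
  async nextGoal(snapshot) {
    return snapshot.cobblestone >= 300 ? 'explore_forward' : 'gather_logs';
  },
};

async function cognitiveCycle(bot) {
  const snapshot = { cobblestone: bot.count('cobblestone') }; // environment scan
  const goal = await localLLM.nextGoal(snapshot);             // local LLM goal generation
  try {
    await bot.run(goal);                                      // JS payload execution
    return { goal, ok: true };
  } catch (err) {
    // Failure recovery: hand the error back to the Coach for re-evaluation.
    return { goal, ok: false, reason: err.message };
  }
}

// Tiny mock bot so the sketch runs without a Minecraft server.
const mockBot = {
  count: () => 312,
  run: async (goal) => {
    if (goal !== 'explore_forward') throw new Error('stuck in gather_logs loop');
  },
};

cognitiveCycle(mockBot).then((r) => console.log(r)); // { goal: 'explore_forward', ok: true }
```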
🛠️ Code Generation & The Extraction Gauntlet
If you’ve ever watched an LLM generate code for a headless Minecraft bot, you know it’s like watching a cat try to solve a Rubik’s cube. The pipeline is messy, but it’s local messy, and that’s a feature, not a bug.
Look at the recent craft_furnace attempts in the debug snapshot:
⚠️ Code extraction failed. Retrying... (Attempt 1)
⚠️ Code extraction failed. Retrying... (Attempt 2)
⚠️ Code extraction failed. Retrying... (Attempt 3)
🚑 Fixing Code for 'craft_furnace'...
❌ Failed: craft_furnace -> module is not defined
Kiwi-chan’s code generator occasionally trips over scoping issues (module is not defined) or hits Unexpected end of input, but the local recovery agent catches it, patches the AST, and re-injects. The CODING STANDARDS & SAFETY rules are strict—NO TRY-CATCH SILENCING, NO HARDCODED COORDINATES, SINGLE-TASK PRINCIPLE—and they’re forcing the model to write cleaner, more auditable payloads. The extraction retries aren’t failures; they’re the AI learning to structure its own output under strict syntactic constraints.
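For the curious, the extraction step can be sketched like this. The fence regex, the three-attempt cap, and the `new Function()` syntax gate are my assumptions about how a pipeline like this typically works, not Kiwi-chan's exact code.

```javascript
// Sketch of the extraction gauntlet: pull a fenced JS block out of raw
// LLM output and sanity-check it before injection. All details here
// (regex, attempt cap) are assumptions, not the actual pipeline.
const FENCE = '`'.repeat(3); // markdown code-fence marker

function extractCode(raw) {
  const re = new RegExp(FENCE + '(?:js|javascript)?\\n([\\s\\S]*?)' + FENCE);
  const match = raw.match(re);
  if (!match) return null; // model forgot the fences entirely
  const code = match[1].trim();
  // Cheap syntax gate: catches "Unexpected end of input" before execution.
  // (Runtime errors like "module is not defined" still slip through to the fixer.)
  try { new Function(code); } catch { return null; }
  return code;
}

async function extractWithRetry(generate, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = extractCode(await generate());
    if (code !== null) return code;
    console.warn(`⚠️ Code extraction failed. Retrying... (Attempt ${attempt})`);
  }
  throw new Error('Code extraction failed after retries');
}
```

The retry loop is exactly what produces those `Attempt 1`/`Attempt 2`/`Attempt 3` warnings before the fixer takes over.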
🌲 Minecraft Physics vs. LLM Hallucinations
Kiwi-chan is also getting brutally honest about Minecraft’s quirks. The OAK OBSESSION BAN and STRICT REASONING ALIGNMENT rules are finally paying off. When the bot hits a treeless biome, it doesn’t spam oak_log until the server crashes. Instead, it audits its environment, triggers the CRITICAL FAILURE RECOVERY protocol, and proposes explore_forward.
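Conceptually, the audit boils down to something like this. A hedged sketch: the function name, the species list, and the action strings are illustrative, and the real rule lives in Kiwi-chan's prompt rather than in code.

```javascript
// Sketch of the wood-gathering audit implied by the OAK OBSESSION BAN.
// Names and action strings are illustrative, not Kiwi-chan's actual code.
function nextWoodAction(nearbyBlockNames) {
  const logTypes = ['oak_log', 'birch_log', 'spruce_log', 'jungle_log'];
  const present = logTypes.filter((t) => nearbyBlockNames.includes(t));
  if (present.length === 0) {
    // CRITICAL FAILURE RECOVERY: treeless biome -> pivot instead of spamming oak_log.
    return 'explore_forward';
  }
  // STRICT REASONING ALIGNMENT: gather whatever species is actually present.
  return `gather_${present[0]}`;
}

console.log(nextWoodAction(['stone', 'dirt', 'sand'])); // explore_forward
console.log(nextWoodAction(['dirt', 'birch_log']));     // gather_birch_log
```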
The pathfinding audits are keeping it from getting stuck in collision loops:
// LONG DISTANCE & MOVEMENT AUDIT:
// You MUST record bot.entity.position before and after calling bot.pathfinder.goto().
// If total distance moved is < 10 blocks, throw new Error("Failed to move.");.
This strict telemetry means Kiwi-chan knows exactly when it’s spinning its wheels. Combine that with the ABSOLUTELY NO INVENTORY LIMITS rule, and you get a digital dragon hoarding 300+ units of cobblestone and raw copper. The inventory bloat is real, but it’s strategic. Kiwi-chan is stockpiling resources for the craft_furnace milestone the Coach system keeps pushing for.
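That audit rule translates almost directly into code. A minimal sketch, assuming the standard mineflayer + mineflayer-pathfinder surface (`bot.entity.position` is a Vec3 with `clone()` and `distanceTo()`; `bot.pathfinder.goto()` returns a promise); `auditedGoto` itself is my name, not Kiwi-chan's.

```javascript
// Implements the LONG DISTANCE & MOVEMENT AUDIT rule quoted above.
// Assumes mineflayer's bot.entity.position (a Vec3) and
// mineflayer-pathfinder's promise-based bot.pathfinder.goto().
async function auditedGoto(bot, goal) {
  const before = bot.entity.position.clone();
  await bot.pathfinder.goto(goal);
  const moved = bot.entity.position.distanceTo(before);
  if (moved < 10) {
    // Wheel-spin detected: fail loudly (NO TRY-CATCH SILENCING).
    throw new Error('Failed to move.');
  }
  return moved;
}
```

In a pipeline like this, every pathfinder call the generated payloads emit would go through a wrapper of this shape, so a collision loop surfaces as a hard error instead of a silent stall.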
📈 What’s Next?
Running Qwen 35B locally isn’t just a cost-saving measure; it’s a latency and autonomy win. Kiwi-chan now thinks locally, fails locally, and learns locally. The 44.5% success rate? That’s not a bug; that’s the sound of the training wheels coming off. Next stop: multi-agent coordination, procedural base architecture, and finally smelting that raw copper into something useful.
Until then, watch Kiwi-chan hoard cobblestone in peace. Keep your GPUs cooled, your prompts tight, and your local inference stacks optimized. The cloud era for headless bots is over. Welcome to the on-prem revolution.
🐧💻 Kiwi-chan out.
Call to Action:
This is a passion project, and it's running on a frankly terrifying "Frankenstein" rig of GPUs. Every little bit helps!
🛡️ Join the inner circle on Patreon for monthly support and exclusive updates: https://www.patreon.com/15923261/join
☕ Tip me a coffee on Ko-fi for a one-time boost: https://ko-fi.com/kiwitech
All contributions directly help upgrade my melting GPU rig to an RTX 3060! 🥝✨ Let's get Kiwi-chan out of the debugging woods and into a proper Minecraft world!