We recently launched DeAI Nation — a non-profit focused on advancing decentralized AI and helping individuals, companies, and governments connect with this rapidly growing ecosystem. You may have already heard of a couple of DeAI projects: Akash, Telegram's Cocoon, Gonka by the Liberman brothers, or at least Bittensor.
We started by writing the State of DeAI 2026 report, where we break down both the problem of AI centralization and the solutions that decentralized projects are proposing. You can read the full report at the link, and below I want to answer a few questions that came to my mind when I first heard about decentralized AI (DeAI).
Please note that this is a deliberately oversimplified vision of DeAI to grasp the basics. We have a more detailed description of the tech in the report.
Decentralized? Are models actually running on a blockchain?
Yes and no. Nobody seriously runs models on a blockchain — that would be painfully slow and inefficient. Projects in this space use blockchain as a coordination layer — so that anyone can join the network, contribute their resources (say, computing power), and automatically receive rewards.
To keep everyone honest, networks also add validators who verify that no one cheated, and participants earn revenue in the form of tokens proportional to their contribution.
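As a toy illustration of "rewards proportional to contribution" (the node names, numbers, and the idea of a fixed per-epoch reward are made up for this sketch, not any specific network's economics):

```python
def distribute_rewards(contributions: dict[str, float], epoch_reward: float) -> dict[str, float]:
    """Split one epoch's token reward proportionally to each node's verified work.

    `contributions` maps node name -> some measured unit of useful work
    (e.g., verified GPU-hours); the units don't matter, only the ratios do.
    """
    total = sum(contributions.values())
    if total == 0:
        return {node: 0.0 for node in contributions}
    return {node: epoch_reward * work / total for node, work in contributions.items()}

# Hypothetical epoch: three nodes did 50/30/20 units of verified work,
# and the network mints 1000 tokens for this epoch.
rewards = distribute_rewards({"node_a": 50.0, "node_b": 30.0, "node_c": 20.0}, epoch_reward=1000.0)
print(rewards)  # node_a gets 500.0, node_b 300.0, node_c 200.0
```

Real networks layer validator checks, slashing, and governance on top of this, but the core payout logic is roughly this simple.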
Tokens? Like in ChatGPT?
No. In DeAI, two definitions of "tokens" collide.
From the blockchain side: crypto tokens, used to pay for completed tasks, distribute rewards, and vote on how the network evolves.
From the AI side: tokens processed and generated by the model. These are two entirely different things — nobody has figured out how to pay for services with AI tokens yet.
If you're enjoying this post, please support our launch on Product Hunt!
Is this just another crypto scheme?
It could be — and among DeAI companies there are surely some bad actors. Even the legitimate ones carry the usual startup risk of failure.
But I've heard from several people in the industry during our research that this is "the first genuinely useful application of crypto."
Crypto tokens here are typically not some "meme coin to earn 1000x in three days" play. They're a means of coordination and rewarding actually useful work. (If you remove the word "useful," you get Bitcoin.) And if you remove the crypto, you lose the decentralization: it would mean some authority gets to decide who gets paid what for their work.
Why bother? We already have ChatGPT, Gemini, Claude, Grok, Kimi, DeepSeek...
One of the core ideas behind DeAI is to break the trend of compute concentration in the hands of a few corporations, and offer a distributed, self-balancing market instead.
Big Tech companies with near-unlimited data center budgets already dictate terms to society, businesses, and entire regions — from pricing to content restrictions. DeAI is trying to build a more open market with no single point of failure: which models to offer, what restrictions to impose, and how to price services are most often determined by collective voting of participants, not by a particular CEO and their inner circle.
So how does it work? Can I plug in my gaming PC?
The most developed and straightforward layer of DeAI is inference — running a model, feeding it a user's prompt, and getting a response. Training and fine-tuning are progressing too, but more slowly.
For inference, you connect your hardware to the network, run the specified models, and process user prompts. Almost nobody tries to "slice" models into pieces and distribute them across devices — each server or node runs a full copy of the model independently.
So your machine needs to be powerful enough to run a model people actually want. If your PC can only handle some 8B-parameter model with no practical use, you probably won't have much to do on a decentralized network either. Most nodes run on GPUs like Nvidia H100–H200, ideally clustered in groups.
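A quick back-of-envelope calculation shows why an 8B model fits on a consumer GPU while larger models push you toward H100-class clusters. This is a crude lower bound (weights only, assuming fp16/bf16 at 2 bytes per parameter, plus a rough 20% allowance for KV cache and activations; real serving needs vary):

```python
def min_vram_gb(params_billion: float, bytes_per_param: float = 2.0, overhead: float = 1.2) -> float:
    """Rough minimum VRAM (GB) to serve a model: weights in fp16/bf16
    plus ~20% for KV cache and activations. A back-of-envelope estimate only."""
    return params_billion * bytes_per_param * overhead

print(round(min_vram_gb(8), 1))   # ~19.2 GB: fits a high-end consumer GPU
print(round(min_vram_gb(70), 1))  # ~168.0 GB: already needs multiple 80 GB H100s
```

Quantization (8-bit, 4-bit) shrinks these numbers considerably, which is why smaller machines can still serve compact models — just not the frontier-scale ones users actually pay for.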
That said, there are also decentralized compute services where the product is raw computing power rather than inference — and it's the client who decides what to do with it. In that case, a less powerful machine can work too, though the rewards will be modest.
What if I run a cheaper model and pretend it's a powerful one? 😉
This is one of the central challenges of the DeAI world — nobody trusts anyone (and rightfully so). Hypothetically, in decentralized inference you could accept a user's prompt, run it through a much weaker model, and return the response as if it came from a powerful one. Lower costs, higher margins — time for that second Ferrari.
In practice, decentralized network architects spend serious effort preventing exactly this. There are three main approaches:
First — verify every request: send the same prompt with the same parameters to multiple nodes, compare the results, and filter out outliers.
Second — be more optimistic: re-check only some requests, but punish cheaters harshly. This saves compute on verification but gives attackers a better chance of slipping through.
Third — use TEEs (Trusted Execution Environments): leverage secure enclaves in the most advanced GPUs and CPUs that guarantee the specified code with the specified parameters was actually executed.
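The first approach can be sketched in a few lines. This is a deliberately simplified model — real networks must also pin down sampling (temperature 0, fixed seed) or compare logits, since even an honest model can produce different text twice; the node functions below are stand-ins, not a real API:

```python
from collections import Counter

def verify_by_redundancy(nodes: dict, prompt: str):
    """Send the same deterministic request to several nodes and accept the
    majority answer. Nodes that disagree with the majority get flagged
    as suspected cheaters (e.g., quietly serving a cheaper model)."""
    answers = {name: node(prompt) for name, node in nodes.items()}
    majority, count = Counter(answers.values()).most_common(1)[0]
    if count < len(nodes) // 2 + 1:
        raise RuntimeError("no majority answer: cannot tell who cheated")
    cheaters = [name for name, ans in answers.items() if ans != majority]
    return majority, cheaters

# Toy nodes: two run the expected model, one secretly routes to a weaker one.
honest = lambda p: f"strong-model({p})"
cheap = lambda p: f"weak-model({p})"
result, flagged = verify_by_redundancy({"a": honest, "b": honest, "c": cheap}, "2+2?")
print(result, flagged)  # strong-model(2+2?) ['c']
```

The obvious downside is cost: every verified request is paid for three times here, which is exactly why the optimistic (sampled re-checks plus slashing) and TEE approaches exist.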
Where should I start exploring?
If you're a developer, the easiest entry point is distributed inference from Chutes via OpenRouter. If you need raw GPU access for a few hours or days, try Akash.
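OpenRouter exposes an OpenAI-compatible chat completions endpoint, so trying decentralized inference is mostly a matter of pointing existing code at it. A minimal sketch of the request body — the model slug here is illustrative (check OpenRouter's model catalog for what Chutes currently serves), and the provider-routing field is how OpenRouter lets you express a backend preference:

```python
import json

# Hypothetical request to OpenRouter's OpenAI-compatible endpoint.
url = "https://openrouter.ai/api/v1/chat/completions"
payload = {
    "model": "deepseek/deepseek-chat",  # illustrative slug; see openrouter.ai/models
    "messages": [{"role": "user", "content": "What is decentralized AI?"}],
    # Provider routing: ask OpenRouter to prefer the Chutes backend.
    "provider": {"order": ["Chutes"]},
}
headers = {
    "Authorization": "Bearer <YOUR_OPENROUTER_KEY>",
    "Content-Type": "application/json",
}
body = json.dumps(payload)
# Send with e.g. requests.post(url, data=body, headers=headers)
print(body[:60])
```

Because the format is OpenAI-compatible, any OpenAI SDK works too — just override the base URL and API key.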
If you're a regular user curious about chatting with AI, Venice offers privacy-first access to open models (no history saved) and anonymous access to popular proprietary ones. You can also try chatting with INTELLECT-3 by Prime Intellect — a model trained on a distributed network — though don't expect too much from it yet.
Hope this made decentralized AI a little clearer and more interesting. Drop any questions in the comments — I'll do my best to answer. And a reminder about the report: if you're into the deep technical stuff, I recommend Chapter 6 — it's the smart one (because I didn't write it).