DEV Community


We're Creating a Knowledge Collapse and No One's Talking About It

Daniel Nwaneri on January 27, 2026

"Hostile experts created the dataset for patient machines." That line, from a comment by Vinicius Fagundes on my last article, won't leave my head...
Collapse
 
richardpascoe profile image
Richard Pascoe

To step outside of coding for a moment - I recently read that we’re already at the point where nearly 50% of all internet traffic is AI-generated. Take a second and let that sink in.

There’s a real risk here. If AI is allowed to endlessly consume information without boundaries, we eventually end up with an internet that’s mostly feeding on itself - AI trained on AI-generated content, over and over again. A snake eating its own tail.

No matter where you land in the AI debate - whether you’re excited by the commercial potential or worried about the environmental cost - it’s hard to ignore what’s at stake. The loss of human creativity doesn’t just change how we use the internet; it hollows it out from the inside.

At that point, we don’t just lose originality. We risk losing the internet as something alive.

Collapse
 
ingosteinke profile image
Ingo Steinke, web developer

The "dead internet" is only accelerated (exponentially, however) by LLM-based generative AI, but there were real people producing sloppy spam content before AI took their jobs. Algorithms lured people into hate speech spirals and recommendation rabbit holes to maximise clicks and engagement before AI already.

Maybe that's not a risk at all, while still a waste of resources, if we focus and filter. There are millions of bad books that I don't need to read, millions of bad coffee shops that I'll never visit. Millions of questions that I could ask AI but never will.

We won't lose the internet as something alive, we'll have to reinvent and rediscover the good aspects we loved about Web 1 (originality, imperfection, USENET, what else?) and Web 2.0 (instant interaction, user generated content and social media platforms before everything went too commercial) and maybe even Web3 (the ideas of decentralization, independence and forgery-proof, not necessarily built with crypto and blockchain though) and the discussions like this one about AI, DEV, StackOverflow, Wikipedia and how to continue collaborating as developers committed to finding facts and best practices.

Collapse
 
richardpascoe profile image
Richard Pascoe

True enough, Ingo, and I really appreciate your take on this. I suppose it could be said that spammy "human" content was easy to recognise and ignore compared to AI-generated material. However, that doesn't take away from any of your points - which are well made.

The fact of the matter is that nothing has been set in stone, as of yet. We can decide how we use AI, as much as we can decide what we wish the internet to look like.

For myself, I've realised that if I honestly feel a certain way about the current situation then I should support Wikipedia beyond a donation, the same way I should become a member of the EFF. I'm already starting to migrate from the algorithm-led nature of Big Tech as much as I can.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

this is it. concern into action

wikipedia support, EFF membership, migrating from big tech. concrete, not hand-wringing.

im doing similar. writing publicly on dev.to instead of private notes, publishing OSS, documenting reasoning not just solutions

maybe thats the individual answer. conscious choice to contribute back even when its less efficient than using AI privately.

commons survives if enough people make that choice.

appreciate you actually doing something

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Exactly this, Daniel. If you have strong feelings about something, do you sit back or do you take that first small step - with the hope it leads to another, and another?

This isn't a one-size-fits-all solution but if you feel strongly enough to want to do something then do something - for your own peace of mind if nothing else.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

exactly. action beats paralysis.

writing these articles is my version. documenting reasoning publicly instead of keeping it private.

small steps compound if enough people take them.

appreciate you being vocal about this. other people reading might not comment but seeing someone actually commit to action (wikipedia, EFF, fediverse migration) makes it feel possible instead of just theoretical. leadership by example.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

you're right that this might be first world problem when world has bigger issues. but i'd argue: developer knowledge infrastructure affects ALL software, including systems that DO address real world problems.

bad AI-generated code in healthcare systems? financial infrastructure? critical infrastructure? knowledge collapse has real-world consequences.

your point about SO already having flaws is fair. outdated answers, reputation bias. but those are curation problems we COULD fix. model collapse from AI training on AI is systemic.

love the "reinvent best of web 1/2/3" vision. decentralized knowledge commons without crypto overhead. public reasoning without gatekeeping.

maybe thats the answer - new platforms designed for AI era that keep web 1 authenticity with web 2 collaboration

what would that look like practically?

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Part of what draws me to the Fediverse is more than privacy - I wonder if moving away from major social media platforms and Big Tech toward a more decentralized internet could help us recapture the internet of old. I don’t know, but I’d love to see.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

fediverse makes sense for this. decentralized, community-owned, no algorithmic manipulation.

wondering if we need something similar for developer knowledge. not just social but structured Q&A on federated servers.

imagine: local instances for specific communities (rust, cloudflare, etc) that federate for discovery, but each community controls moderation/curation

might solve both problems. keep knowledge public while avoiding a single corporate owner who can enshittify
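
rough sketch of what i mean, purely illustrative (every name and field here is hypothetical, not a real spec or an existing platform's API):

```python
# guessing at how federated Q&A objects might look in an ActivityPub-style
# model. all names are made up for illustration.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class FederatedQuestion:
    id: str                     # e.g. "https://rust.qna.example/questions/42" (made-up instance)
    attributed_to: str          # the author's actor URI on their home instance
    title: str
    body: str
    tags: list[str] = field(default_factory=list)
    # each community instance keeps its own moderation decision; federation
    # only shares the object for discovery
    local_moderation_state: str = "visible"

@dataclass
class FederatedAnswer:
    id: str
    in_reply_to: str            # the question's id, so answers thread across instances
    attributed_to: str
    body: str
    upvotes: int = 0            # the voting layer mastodon itself doesn't have

def should_federate(question: FederatedQuestion) -> bool:
    """only push objects the local community hasn't hidden."""
    return question.local_moderation_state == "visible"
```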

have you seen any attempts at this? or is it purely theoretical still?

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Well, nothing that I am aware of myself. It does seem to be a possible solution though, I agree. What I have read recently are opinions on how Stack Overflow could have avoided such a steep drop in traffic - the inclusion of a beginner-friendly Q&A section that didn't have to be included in the overall "tome of knowledge", or that they could have moved more towards a Wikipedia style format.

Whether you have personally hit a brick wall with Stack Overflow or not, it can often appear as a hostile environment to many developers starting their journey. I was going to post about Stack Overflow this morning but felt it overlapped with some of the points you had already raised, so I posted a discussion piece on Godot Engine instead. I do plan to publish it early next week though - keep a look out for it!

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

love that youre writing the SO piece. this is exactly how knowledge should compound. you build on what im exploring, add your angle, community learns more.

the beginner-friendly section idea is smart. SO's problem wasnt just hostility, it was mixing "canonical reference" with "help newbie debug." different goals, same platform.

publish your piece and tag me. id love to see your take on what SO could have done differently.

also if you find any attempts at federated dev knowledge platforms, let me know. feels like the right direction but needs someone to actually build it.

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Will do, Daniel. No problem!

In regard to the Fediverse, I wonder if Mastodon servers such as Fosstodon could help foster a knowledge sharing platform? Just a thought...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

fosstodon is interesting starting point. already has dev community and fediverse architecture.

challenge: mastodon optimized for conversation not curation. no voting, no accepted answers, search is weak.

but maybe that's solvable? build Q&A layer on top of activitypub protocol?

imagine: mastodon for discussion + separate fediverse app for structured Q&A that federates with mastodon. best of both

someone probably needs to just BUILD this. open source, activitypub-based, community-owned stackoverflow alternative.

feels more realistic than hoping SO reforms or waiting for corporate solution.

you thinking about building or just observing?

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Observing and, potentially, supporting would be more accurate with my current knowledge base.

Of course, you're right about Mastodon being optimised for conversation over curation but the ActivityPub layer itself could be part of the solution perhaps?

Either way, it's within servers like Fosstodon where the knowledge resides - experienced people passionate about open source. Maybe they just need an alternative platform to Stack Overflow to share that same experience?

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

been thinking hard about this since you mentioned it.

im going to build it. or at least start.

technical path is clear. activitypub Q&A server, voting layer, federates with mastodon. open source from day one.

going to write the spec as article, then build minimal prototype. if fosstodon community is interested, we iterate together.

appreciate you pushing on this. sometimes you need someone to say "this should exist" before you realize youre the one to build it.

ill keep you posted. might need your help rallying the fosstodon folks when its ready.

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Sounds like a plan, Daniel. I'm sure other DEV members would be willing to lend a hand to the project too! Will do - best of luck!

Collapse
 
dannwaneri profile image
Daniel Nwaneri

50% AI-generated internet traffic is terrifying for training implications.

stack overflow (78% drop), wikipedia buried. thats the visible part. but most is invisible. content farms, synthetic responses.

your phrase "internet as something alive" hits hard. alive because humans are messy, opinionated, creative. AI smooths that into... efficient noise?

scariest part. we wont know when we cross from "mostly human" to "mostly AI"

already happening. most people dont see it.

Collapse
 
cesarkohl profile image
Cesar Kohl • Edited

This is the end of the world as we know it. The inevitable future is the immediate discredit of all digital content.

"How can I trust this specific knowledge that is being shared is truly reasonable? Does it really make sense? Who wrote this? Who is this person? What are his/her credentials?"
The answer is, go figure out for yourself.

And then, once again, I'll need to go back to Wikipedia, books written before 2022, and double check everything.

IMO, AI, just like social media feeds, decreased human progress.

Collapse
 
richardpascoe profile image
Richard Pascoe

You’re absolutely right, Cesar - we’re seeing this play out across every sphere: technology, politics, healthcare, and beyond. It certainly makes the three years’ worth of technical guides sitting on my external hard drive feel a lot less like a waste of time… though, in the end, time will tell!

Outside of the AI debate, it’s disheartening to see how the internet itself has changed - it now seems to thrive on constant contrarianism, which is why spaces like DEV remain so valuable.

Collapse
 
affable_shamik_efebf96072 profile image
Affable Shamik • Edited

True. And your words describe it exactly as it should be. We are lucky to have devs like you.

Collapse
 
richardpascoe profile image
Richard Pascoe

Thank you for your lovely comment, Affable. It means a lot!

I understand AI - even in its current form - could end up being a useful tool but the way it is being leveraged with such a huge dose of FOMO is disappointing at best and utterly depressing at worst - particularly with concerns over LLM training going largely unheard and the resulting slop very much unchallenged.

Thread Thread
 
affable_shamik_efebf96072 profile image
Affable Shamik

About all you said, about us not questioning AI and using it without any constraints, it really felt like someone talking about me. Till now I couldn't understand why I felt lost, because I let AI models guide me wherever they liked, and after 2 months of writing code that the LLM guided me towards and implementing it without instinctively questioning it, it eventually broke and, worst of all, I asked the same LLM "what now" again ........

How does someone like me develop systems thinking? I am very compulsive in the sense that I always question if what I did was the best solution to a problem. Take a silly example: classes vs functions for some problem. I get stuck around this and continue to make optimizations, and this makes it more difficult for me to question an AI model and thus depend on it more and more. If I could get some insights, I'd really be grateful.

Thread Thread
 
richardpascoe profile image
Richard Pascoe

I think @maame-codes put it best in a comment on one of her posts: for senior developers, AI acts as a force multiplier because they’re well-positioned to spot errors in AI-generated code - the so-called “hallucinations.”

For junior developers, or those just starting out, a lot of people in the field are realising that it’s still best to build strong foundations first. Learn and understand the basics before leaning on AI as a tool. Without that, there’s a real risk of never fully understanding what you’ve “vibe coded.”

That said, there are plenty of folks here on DEV who can offer much deeper technical insight than I can — but I hope this perspective helps, even in a small way.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

breakthrough moment. recognizing the trap is step one.

"let AI guide 2 months → broke → asked same LLM" = Below the API.

for "classes vs functions" paralysis:

dont ask AI to decide. ask AI for tradeoffs, YOU decide based on context.

ujja's approach: treat AI like confident junior. helpful but needs review

systems thinking builds through:

  • maintain your own code 6mo later (feel pain)
  • ask "what makes this wrong?" not "does this work?"
  • trust your questioning over AI confidence

start small: this week, deliberately choose differently than AI suggests once. understand why. build the muscle

youre already questioning. thats the foundation.
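
for the classes vs functions example above, a toy sketch of what "ask for tradeoffs, you decide" looks like in practice (the converter scenario is made up purely for illustration):

```python
# toy illustration of the classes-vs-functions call, not taken from anyone's code.
from __future__ import annotations

# function version: enough when there's no state to carry between calls
def convert(amount: float, rate: float) -> float:
    return amount * rate

# class version: earns its keep only when there is real state and lifecycle
# to manage (cached rates, refresh logic, etc.)
class Converter:
    def __init__(self, rates: dict[str, float]):
        self.rates = rates

    def convert(self, amount: float, currency: str) -> float:
        return amount * self.rates[currency]

# the question to answer yourself, not delegate: does this code own state
# that outlives a single call? if not, the plain function wins by default.
print(convert(10.0, 1.5))                            # 15.0
print(Converter({"EUR": 1.5}).convert(10.0, "EUR"))  # 15.0
```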

Thread Thread
 
richardpascoe profile image
Richard Pascoe

Couldn't have said it better myself, Daniel!

Thread Thread
 
affable_shamik_efebf96072 profile image
Affable Shamik

Means a lot. Thank you.

Collapse
 
leob profile image
leob

Fair points - in my opinion what we REALLY need to (MUST !) keep are (1) Wikipedia and (2) Stackoverflow ...

"Everyone's celebrating that AI finally killed the gatekeepers" - that's a funny statement, what exactly is "everyone" (?) celebrating - the "demise" of some 'cocky' know-it-all people on Stackoverflow?

I've heard people complaining about that, but it's not something that has ever bothered me ...

Collapse
 
dannwaneri profile image
Daniel Nwaneri

fair pushback on "everyone" . youre right thats overstated.

what i meant. theres a vocal contingent celebrating SO decline as karma for the "marked as duplicate" culture. but youre right that not everyone had negative experiences.

the gatekeeping thing wasnt my main point though. whether SO was hostile or helpful, the real issue is: if it dies (78% traffic drop is real data), what replaces it?

private AI chats dont have the same properties. searchable, evolvable, publicly curated. thats the loss im worried about, not the personality of the answerers.

curious. you say we MUST keep wikipedia + SO. how do we do that when AI makes contributing feel redundant? genuine question

Collapse
 
ingosteinke profile image
Ingo Steinke, web developer

StackOverflow would eventually experience another kind of knowledge collapse due to outdated information occupying the top answer spots and answers by long standing seniors getting upvotes just because of their reputation. The gatekeeping was an effective spam filter, and it made me draft numerous questions that I never posted because I found the answer myself while refining a minimal reproducible example. But StackOverflow's (and other communities') gatekeeping also made a lot of valuable data get discarded just because people had other priorities than making an effort to solve their issues in public.

Collapse
 
leob profile image
leob

I've never had any negative experiences on SO, maybe it also depends on people's attitude? People who say:

"a vocal contingent celebrating SO decline as karma"

are peevish, resentful and bear a narrow-minded grudge :-)

Your point about the value and necessity of original content (SO and Wikipedia, and much more) is spot on ... I hope (and honestly I expect) that SO and Wikipedia (and similar community-driven sources) will survive!

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

ha fair. the people celebrating SO decline are probably louder than they are numerous.

youre right that attitude matters. respectful questions got better SO treatment. but the reputation (deserved or not) scared people away.

your optimism is interesting though. what makes you think SO/wikipedia survive when 78% traffic drop is real?

maybe people who value these platforms keep contributing even as casuals move to AI? quality over quantity?

id love to be wrong about collapse trajectory.

Thread Thread
 
leob profile image
leob

I guess it's the fact that there's still 22 percent left? Yeah and maybe "quality over quantity" - the "hard core" people won't walk away ...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

interesting take. maybe the 22% who stayed are the actual contributors and the 78% who left were just consumers?

if thats true it could work. wikipedia survives on tiny fraction of editors while millions read.

but heres the problem. even hardcore contributors need NEW questions to answer. if juniors are asking AI instead of posting on SO, where do the questions come from?

and without fresh questions, do experienced devs stick around? or does it become an archive instead of a living knowledge base?

Curious. can a platform survive on just the hardcore 22% if the pipeline of new questions dries up?

Thread Thread
 
leob profile image
leob

Well your concerns seem valid ... I don't know if the smaller "volume" will be enough for SO to survive, but I certainly hope so!

Next breakthrough for AI would be if it can "invent" something by itself, pose new questions on SO, autonomously write blog posts or create other content, instead of only cleverly regurgitating and recombining what's been fed to it ...

I guess that would be what they call "AGI" (artificial general intelligence), and actually that's when it might get really scary for us humans, so let's be careful what we wish for ;-)

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

the AGI question is the real fork.

scenario 1: AI stays sophisticated recombinator. knowledge collapse poisons training data. we're screwed.

scenario 2: AI achieves invention. knowledge collapse irrelevant but...
humans might be too?

uncle bob said "AI cant hold big picture or understand architecture." maybe invention REQUIRES that.

but if AI gets there... yeah, scary.

betting on "AGI will save us" feels risky when we're already seeing collapse.

Thread Thread
 
leob profile image
leob

Correct analysis - but what's the solution? Are the "AI big boys" (big tech) actually (explicitly) aiming for AGI - which would have "(super)human" capabilities? I think that would really be a bridge too far - governments might need to step in (not counting on Trump obviously, lol) ...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

big tech explicitly aims for AGI. openai's mission, anthropic's charter, deepmind's goal

solution by timeline:

short: preserve commons deliberately. platforms rewarding public reasoning not just answers.

mid: regulatory guardrails on training data. EU might require disclosure if training on AI content. US wont.

long: if AGI emerges, irrelevant. if not, need intact commons.

maintain commons as insurance while hoping AGI makes it unnecessary.

imperfect but better than assuming AGI solves everything.

Collapse
 
maame-codes profile image
Maame Afua A. P. Fordjour

I’ve noticed that the friction of a broken script or a confusing doc is actually what forces me to understand the 'why.' When an AI gives a confident, polished answer, it’s tempting to skip that doubt step entirely. Developing that judging layer you mentioned feels like the most important thing I can focus on right now. Great follow-up piece!

Collapse
 
dannwaneri profile image
Daniel Nwaneri • Edited

this is it exactly.

friction teaches the "why" accidentally. smooth AI answers skip straight to "what" and we miss the foundation.

the fact that you're consciously building that judging layer puts you ahead of most devs who just optimize for speed without realizing what they're losing.

curious. when you catch AI being confidently wrong now, does it make you more skeptical of future answers? or do you still have to fight the temptation to trust it?

Collapse
 
maame-codes profile image
Maame Afua A. P. Fordjour

To be honest I will never trust any AI tool 100%. I personally think if you know & understand what you are doing, it's a great online assistant (that's when you are able to tell when it makes mistakes and not follow it blindly..), but aside from that, depending on it 100% is scary and would definitely cause more harm than good in the long run for anyone's personal growth.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

this is the core tension.

how do you GET to "know & understand what youre doing" if AI is your primary learning tool?

experienced devs like john h (comments) use AI well because they already have context. they can verify. juniors starting today dont have that foundation.

stack overflow forced skepticism through pain. AI doesnt. so can we teach "healthy doubt of AI" explicitly? or does it require the hard-won experience you already have?

might be the real divide. learned before AI vs learned with AI.

Thread Thread
 
maame-codes profile image
Maame Afua A. P. Fordjour

That's why I personally don't use AI as a primary learning tool (I accompany it with accredited resources after I have some solid knowledge), because it could always give you the wrong information. I usually just read books on topics I am learning. So after I have an idea of what I am doing, then I can use AI as an assistant / more or less a 'super search engine'. Personally, I learned things the hard way, the old school way (reading actual books and accredited online resources that have been written by developers & people with years of experience). That is helping me more in my learning journey than solely depending on AI to do the work for me. Because the moment AI goes downhill, those who depended FULLY on it will have zero value... these are my personal views on the topic in general :)

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

this is the model that works.

foundation first (books, docs) → then AI as assistant. not the other way around.

the problem. juniors today see everyone using AI and skip straight to it. they never build the foundation that lets you verify.

youre doing it right because you learned the hard way. question. can we teach juniors your approach? or does it require getting burned first?

if verification skills require pain to learn, we're in trouble.

Thread Thread
 
maame-codes profile image
Maame Afua A. P. Fordjour

To be honest, I am still learning myself (junior level), but I got loads of advice from some really good developers who have been through the old school system (without AI). So I have been following their advice in doing so, and it has helped my personal growth because I am able to understand the technical aspects of most things now, as compared to using AI. I think everyone just needs to do what would help their personal growth, since we all learn in different ways :)

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

wait. youre a JUNIOR but learned from devs who came up without AI.

so its not experienced vs junior. its mentored vs unmentored.

youre inheriting their verification habits. thats the transmission mechanism.

scary question. in 5 years when most seniors also learned with AI, who teaches juniors to be skeptical?

right now theres enough pre-AI devs to mentor. that window is closing.

youre lucky you found good mentors.

Thread Thread
 
maame-codes profile image
Maame Afua A. P. Fordjour

Mentorship is so important to me in my learning journey and I appreciate my mentors a lot

Collapse
 
ingosteinke profile image
Ingo Steinke, web developer

We'll probably look back on the 2010s and early 2020s as the golden age of knowledge and open data unless we manage to change society's course. But maybe that's a temporary first world problem: knowledge curation might recover after a massive collapse of quality, and the real-world problems aren't how to find the right words and details but rather taking action in society and politics, stopping war and terror and helping people beyond our digital bubble.

Thanks for your thoughtful article. While I'd like to see AI fail due to model collapse, I should better hope that we can somehow fix its inherent flaws and that the next generations will know how to use AI and when to distrust it, just like nobody would flee a cinema screaming in fear when a steam locomotive approaches the camera in black and white, or panic when a fictitious audio book about a martian invasion plays on the radio.

Collapse
 
nandofm profile image
Fernando Fornieles

Brilliant! I wrote about this some months ago but you have explained it with much more detail.
dev.to/nandofm/ai-the-danger-of-en...

What we will get at the end is a rotten knowledge because it won't be fed with new and fresh ideas.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

just read yours. "entropy in knowledge" is perfect framing. same conclusion, different angles.

"rotten knowledge" = "knowledge collapse" - same mechanism

appreciate generosity on execution. feels like building toward something

since you published months ago, seen any solution attempts? platforms preserving public knowledge? or just more acceleration?

would love to collaborate exploring this further.

Collapse
 
nandofm profile image
Fernando Fornieles

To be honest I only see acceleration. Maybe we need some kind of Foundation (like Asimov's) and/or a place where genuine content can be created and discussed, the fediverse? AI-generated content is everywhere, I'm not optimistic.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

exactly where im heading.

richard (in comments) committed to building federated Q&A on activitypub.
same conclusion you reached.

asimovs foundation perfect metaphor. preserve knowledge through dark age. but BUILD it not hope for it.

next 2 articles. what stays "above the API" when AI codes, then building federated stackoverflow.

youre right. waiting for platforms to fix themselves = pessimism justified.

but if we BUILD alternative...

want to be involved? need people who've thought about this beyond hype cycle.

Thread Thread
 
nandofm profile image
Fernando Fornieles

I recently closed my private social media accounts and moved to the fediverse. Apart from that I'm building my own cloud server at home with Nextcloud on a Raspberry Pi. These are my little actions to avoid the "enshittification" of the Internet. Not too much because family also deserves my time but at least I'm doing what I can.

In any case, your idea seems interesting, not sure if I could contribute but I would like to know about the idea/project :-)

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

this is exactly the kind of builder we need.

youre not just talking. youre DOING (fediverse migration, nextcloud, raspberry pi infrastructure).

the federated Q&A idea: activitypub-based stackoverflow alternative. questions/answers federate across instances. community-owned, open source.

richard committed to help. now you. that's enough to start.

going to write the spec as next article (after "above the API" piece). then build prototype weekend after.

can you join a small group chat to sketch architecture? just richard, you, me for now. keep it tight until we have working prototype

family time matters. this is volunteer/passion project, not job. we build what we can when we can.

Collapse
 
spo0q profile image
spO0q • Edited

AI changes the way we search and make our way to maintainable code bases and sustainable knowledge.

You can't read what it says at face value. You'll likely fail, but that's kinda the same with existing human misinformation.

I'll keep the skeptical approach, regardless of the source.

A bigger issue, though, as you frame it, could be the death of various ecosystems.

It's like the AI platforms, which are more or less the platforms of the unstoppable tech giants, are reproducing the same mistakes with bigger weapons: the classic "sawing off our own branch."

Collapse
 
dannwaneri profile image
Daniel Nwaneri

that image perfectly captures it. sawing off our own branch.

skepticism works for misinformation (human or AI). but "death of ecosystems" is bigger threat.

SO dying isnt just "less accurate". its the loss of the platform where collective refinement happened.

tech giants consolidating. own models, training data, deployment. replacing public commons with private capital.

perfect visual. mind if i use in follow-up about building alternatives?

Collapse
 
spo0q profile image
spO0q

thanks for asking, realized this visual was probably not free to use, but the idea remains valid ^^.

Collapse
 
moopet profile image
Ben Sinclair

But here's what we're not asking

We're definitely asking that. We've been talking about it for a good couple of years by this point.
The problem is that the AI hype machine steamrolls everything. Too many people don't care, and will never care.

Collapse
 
ujja profile image
ujja • Edited

Great article. Really resonates. My approach is kind of zero-trust reasoning. I start by assuming any answer, AI or human, could be wrong. From there, I interrogate, verify, and cross-check before I act on it. It’s a bit more work upfront, but it’s the only way I’ve found to use AI safely without amplifying confident wrongness.

Feels like the key skill going forward isn't just how to find answers but **how to doubt them intelligently**.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

"zero-trust reasoning" is perfect framework.

this is what ben santora calls keeping human as "judge". assume AI could be wrong, verify actively.

the "doubt intelligently" skill is key. paranoid rejection, but informed skepticism.

question. how did you develop this? mentorship? getting burned by AI errors? or just natural disposition?

curious because if this skill requires pain to learn, we're in trouble.

Collapse
 
ujja profile image
ujja

Honestly I learned it the hard way 😅
Mostly through dev work where I trusted AI a bit too much, moved fast, and only realized days later that a core assumption was wrong. By then it was already baked into the design and logic, so I had to scrap big chunks and start over.
That kind of experience changes how you think.
After a few rounds of that, you stop asking does this sound right and start asking what would make this wrong. It is not about distrusting AI, just treating it like a very confident junior dev. Super helpful, but needs review.
I do not think pain is required, but without some kind of feedback loop like wasted time or broken builds, it is hard to internalize. AI removes friction, so people skip verification until the cost shows up later.
So yeah, not paranoia. Just learned skepticism.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

the mechanism exactly.

learned through pain but lesson was feedback loop not pain itself.

"what would make this wrong" vs "does this sound right" = solver to judge shift.

"confident junior dev" perfect framing

juniors today wont get burned because AI removes friction. cost shows later (production). by then someone elses problem.

how teach "learned skepticism" explicitly? build friction back in? make review mandatory? wait for burns?

article 3 territory. practical verification skills.

appreciate learning path share

Collapse
 
javz profile image
Julien Avezou

Thanks for sharing this article. Got me thinking on a lot of topics. How we are losing our authenticity in the way we communicate as we are regurgitating the same knowledge sources. There needs to be over time more choices of models in their design and source. Too many models are trained at a corporate level. We also need models trained by governments to counteract incentives and produce richness in alternatives. The Swiss produced their national government model recently and it looks promising.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

hadnt considered model diversity as defense against homogenization.

if all models train on corporate data with profit incentives, we get value convergence not just output convergence.

swiss government model is interesting. public infrastructure, different optimization.

but question. does government AI solve knowledge collapse or just diversify AI layer? still need humans contributing novel experiences.

maybe government models + federated knowledge platforms. public AI on public knowledge, both community-owned.

Collapse
 
javz profile image
Julien Avezou • Edited

Agreed. I think one way is through different incentive structures: more people would be inclined and/or nudged to contribute novel experiences if it were framed differently.
I find your suggestions in the last part interesting to consider.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

incentive framing is key.

SO worked because reputation. what makes someone publish AI reasoning when private is faster?

government models might change default. contributing becomes civic act not just personal branding.

"your tax dollars fund this AI, help train it"

different motivation than corporate reputation.

exploring in next piece. sustainable commons incentives.

Thread Thread
 
javz profile image
Julien Avezou

exactly yes. looking forward to the next piece

Collapse
 
charanpool profile image
Charan Koppuravuri

I've watched this play out: AI excels at "solver" tasks (code gen) but fails "judge" roles without human scars from production failures. The real loss? No visible reasoning chains showing why solutions evolve — outdated SO answers had timestamps/debates; AI chats reset to zero.

Mitigation Strategies:

Publish reasoning paths: AI draft → verify → post full prompt chains + rejections on dev.to/GitHub Discussions. Turns private wins into evolvable docs.

Build verification rituals: Always ask "What makes this wrong?" post-AI. For architecture/security (expensive verification), mandate peer review before commit.

Federated Q&A platforms: ActivityPub-based SO alternatives—community Q&A federates across instances, dodging corporate enshittification while keeping curation.

Simple Team Fix:

Add to CONTRIBUTING.md: "AI-generated? Include rejection reasons + human verification." Forces judgment layer, feeds clean data back.
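
A rough sketch of how a CI job might enforce that checklist against the PR description (the "AI-generated?" question and the section names are just an illustrative convention, not an existing tool or GitHub feature):

```python
# hypothetical pre-merge check: fail when a PR marked "AI-generated? yes"
# doesn't document what was rejected and who verified it. names are made up.
from __future__ import annotations
import re
import sys

REQUIRED_WHEN_AI = ["Rejection reasons:", "Human verification:"]

def check_pr_body(body: str) -> list[str]:
    problems = []
    if re.search(r"AI-generated\?\s*yes", body, re.IGNORECASE):
        for section in REQUIRED_WHEN_AI:
            if section.lower() not in body.lower():
                problems.append(f"missing '{section}' section for an AI-generated change")
    return problems

if __name__ == "__main__":
    issues = check_pr_body(sys.stdin.read())
    for issue in issues:
        print(f"CONTRIBUTING check failed: {issue}")
    sys.exit(1 if issues else 0)
```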

This preserves the commons without rejecting AI productivity. The human judge stays essential—let's rebuild curation around that. Thoughts on federated platforms?

Collapse
 
richardpascoe profile image
Richard Pascoe • Edited

Couldn’t have said it better myself, Charan. The erosion of actual reasoning is a huge part of the problem - especially when so much online interaction is already being generated by AI.

I was reading just yesterday about how open-source projects are really starting to suffer. AI-generated contributions are often rejected due to errors, which ends up consuming a lot of maintainers’ time. But the bigger issue is retention: many of these contributors aren’t sticking around. They get the green square on their GitHub contribution graph and move on.

As a result, a lot of projects are seeing a real drop-off in people who actually stay, learn the codebase, and contribute meaningfully over time.

Collapse
 
charanpool profile image
Charan Koppuravuri • Edited

Spot on — the maintainer bottleneck is brutal. Recent data shows AI-generated PRs spiking churn (code reverted <2 weeks) while dropping reuse, turning repos into "itinerant contributor" graveyards.

Core issue: AI lacks project context, submits plausible-but-breaking changes. Maintainers drown in noise; real contributors bail when interaction feels AI-faked.

Practical mitigations:

  1. Repos adopt "AI-Generated?" labels + mandatory human review checklists (e.g., "Context verified? Tests pass edge cases?").
  2. Tools like GitClear flag churn-prone PRs pre-merge.
  3. Contributor tiers: Verified humans get priority lanes.

This preserves signal. Seen projects implementing successfully!

Thread Thread
 
richardpascoe profile image
Richard Pascoe

I was about to write a reply along the lines of, “Oh, well, at least OpenAI and the rest are making money - right?!” complete with a healthy dose of sarcasm.

But even a cursory bit of research shows that the only company actually profiting from the AI bubble is Nvidia - for fairly obvious reasons.

OpenAI itself isn’t profitable. It’s spending vastly more on R&D, computing infrastructure, and staff than it earns, and the prevailing assumption is that it will continue posting losses until at least 2028.

Anthropic may reach break-even sometime around 2027 or 2028. Microsoft doesn’t break out AI "profits" separately, and whatever profit Google is making from AI largely comes from folding it into existing, already-profitable products rather than from AI as a standalone business.

The reason I mention any of this is that, in the race toward an eventual unicorn payday, AI development has been left with remarkably few boundaries - and as a result, undeniably useful efforts like open-source projects are suffering immeasurably.

Collapse
 
richardpascoe profile image
Richard Pascoe • Edited

Morning, @dannwaneri - had frustrating follower/following issues last night but that led me to a problem with The Foundation. I can post for the organisation but if I enter my Settings for Organisations (to check for an invite) I get an error - Content not available in your country.

Apparently this could be a problem with the organisation settings:

  • The org being created as a company org first.
  • Being added before accepting a join invite.
  • A geographical/invite-link error during the original join flow.
  • Admin granting posting rights without completing membership.

It seems once the join flow breaks, dev.to doesn’t self-heal it. The only two ways, I believe, to fix this are either:

Option A - Fresh invite and Accept

The admin must:

  • Generate a brand-new join link
  • Confirm org visibility is public

Option B — dev.to support

The admin must email dev.to support and say something like:

User is associated with the org (can post, counted as employee) but does not appear as a member. Please reset org membership for this account.

They can manually reconcile the membership table.

For clarity, The Foundation is shown on my profile, the employee count shows 3, the members show 2, and I can post for the organisation. Sorry to post this here, Daniel, but wanted to give you a heads-up as soon as possible.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

New secret code for @the-foundation:

ce065f62bc34a84ee133d586117e49a44673d4dbbda9299d05fef232669af40adb71a179fb82f3de1d1e37a4e696792ac1ad

Try this flow:

  1. Go to dev.to/settings/organization
  2. Paste code
  3. Click "Join Organization"
  4. See if it properly completes membership this time

Let me know if you still get "Content not available in your country" error.

If this doesn't work, I'll email dev.to support to manually fix.

Collapse
 
richardpascoe profile image
Richard Pascoe

Perhaps not so secret now, heh!

Tried the flow exactly as suggested, Daniel, and still getting the same error...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

oops, posted code publicly. rotating it now for security.

since flow didnt work anyway, moving to Option B.

emailing dev.to support to manually fix membership. will update when they respond

appreciate you testing. confirms we need dev.to to reconcile manually

once fixed ill send you proper private invite

Thread Thread
 
richardpascoe profile image
Richard Pascoe

No worries, and no harm done!

Good luck with support!

Collapse
 
cyber8080 profile image
Cyber Safety Zone

This article raises a crucial but often overlooked point about how the shift from public knowledge creation to private AI use could hollow out the collective foundations of what we know. The idea that Stack Overflow's traffic has collapsed so dramatically — even as AI use explodes — is not just a statistic, it's a symptom of a deeper change in how we treat shared knowledge.
I especially resonated with the notion that we've traded public, evolvable knowledge bases for private AI chats that leave no trace for future learners. That "navigation → conversation" shift means fewer opportunities for others to learn through discussion, debate, and historical context — elements that once helped developers refine their craft.

The article also smartly highlights another risk: AI can be confidently wrong, and without proper verification or curation, those errors can propagate into future systems or models. That’s a real call to action for building systems that don’t just generate answers, but help us teach how to judge and validate them.

Ultimately, the piece doesn’t say “don’t use AI” — it says we need to use AI responsibly and give something back to the public commons. That could mean publishing reasoning paths, documenting insights publicly, or reimagining what a collaborative, AI‑era knowledge platform could look like. It’s a timely and necessary conversation for anyone who cares about the future of shared learning.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

one of the best summaries ive seen. nailed the nuance.

"symptom of deeper change" exactly. 78% isnt SO failing, its us choosing private over public.

"confidently wrong errors propagating" connects to ben santora's expensive verification domains. cheap verification catches fast, expensive compounds silently.

"give back to commons" is next article focus practical habits not just aspiration

appreciate synthesis. mind if i quote in follow-ups?

Collapse
 
samabos profile image
Samson Maborukoje

Great article, well balanced. You should never blindly trust AI (or any single answer); always verify with official documentation.

That said, AI has dramatically reduced the time and effort spent searching for solutions. With the old approach, even when you found an answer, it was often hard to understand because it was buried in technical jargon or poorly explained.

I think the new skill we all need is learning how to critically analyse AI output and justify why an answer should be accepted, rather than treating it as truth by default.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

exactly. verification as explicit skill rather than accident.

old way: bad docs forced you to learn skepticism through pain.
new way: AI is smooth, so how do we teach critical analysis without pain?

maame (in comments) learned from mentors who came up pre-AI. they taught her to verify. but in 5-10 years most seniors will have learned WITH AI.

how do we teach "justify why answer should be accepted" when everyone learned by accepting AI output by default?

genuine question. can verification be taught explicitly or does it require getting burned first?

Collapse
 
brense profile image
Rense Bakker

It's simple, future AI will train on today's AI's wrong answers and people will love it 😛 the vast majority of people want easy answers, they don't care if they are correct. We've just entered the second dark ages where all previous knowledge is slowly forgotten and we will be ruled by Elon Musk who deliberately keeps everyone stupid.

Collapse
 
hassamdev profile image
Hassam Fathe Muhammad

I really liked your article and appreciate your effort to highlight the loss of shared knowledge bases. I think there was an opportunity for these platforms to integrate AI by mapping a user’s problem in an AI chat to existing Stack Overflow threads and publicly written posts, and then posting the refined chat back as a contribution to relevant threads—thus strengthening the shared knowledge base. However, these ecosystems not only failed to evolve, they also made it harder for the next generation of AI to train on human-generated data.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

exactly the missed opportunity.

SO could have integrated AI as enhancement:

  • map chats to existing threads
  • post refined solutions publicly
  • strengthen commons not replace it

but they optimized for traffic defense not evolution.

your "mapping to threads" idea = what federated Q&A should do. AI finds existing knowledge, contributes refinements back.

the tech existed. vision didnt. now rebuilding from scratch.

appreciate this. seeing what could have been informs what we build next.

Collapse
 
mweed profile image
MW

I don't think these problems exist the way you laid them out. Wikipedia was not allowed as a source for school 20 years ago and it's still not a viable source today. I have seen many errors and was not allowed to correct them because they get reverted.

StackOverflow has memes about posting a question, getting insulted, and then your issue closed. My question goes unanswered, so I leave the site.

I think the answer is what it has always been: post about your problems, post your solutions, and wait for it to be scraped up. Grokipedia is becoming a new source of info, as are things like Claude and Perplexity pages. Perhaps not ideal but new knowledge will continue to get generated, just with fewer typos and more colons.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

fair points. wikipedia and SO were never perfect. hostile moderation, gatekeeping, errors.

but heres the difference between imperfect public and "post + wait for scraping":

imperfect public (SO/wikipedia):

  • you could SEE the errors and argue
  • edit wars were visible
  • hostile mods were accountable (eventually)
  • searchable, linkable, evolvable
  • others learned from the mess

post + wait for scraping:

  • who verifies before AI ingests?
  • errors get trained into models
  • no edit wars, just confident wrongness
  • not searchable (locked in AI weights)
  • others cant learn from your reasoning path

also: "fewer typos, more colons" assumes AI generates correct content. ben santora calls this solver without judge. cheap verification (compiles) works. expensive verification (is architecture sound?) fails silently.

grokipedia, claude pages still centralized, corporate-owned, will enshittify. we are trading commons for capital.

maybe new knowledge still gets generated. question is: will it be verifiable, evolvable, and publicly owned? or corporate, opaque, and extractive?

appreciate the skepticism.

Collapse
 
shalinibhavi525sudo profile image
shambhavi525-sudo

This is one of the most vital conversations happening in tech right now. The line 'Hostile experts created the dataset for patient machines' is hauntingly accurate. We’ve traded the friction of community for the convenience of isolation, but as you pointed out, that friction was actually a forge for critical thinking. I especially appreciate the distinction between 'solvers' and 'judges'—it highlights that our roles are shifting from creators to editors. Thank you for articulating the 'verification tax' so clearly; it’s a cost we’re all paying but few are naming.

Collapse
 
richardpascoe profile image
Richard Pascoe

I agree, it's great to have the "Verification Tax" spoken about openly. The companies behind these LLMs don't want us talking about it, of course - hell, they don't even want the term to exist. Profit will always be the bottom line with these next big thing technologies - which are always solutions looking for a problem - and enshittification will come, sooner rather than later.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

"verification tax" naming is critical. makes invisible cost visible.

AI companies dont want this discussed because smooth UX hides the cost. but cost is real: time verifying, cognitive doubt, rebuilding when wrong.

adrian miu's economics point: AI priced 8-10x below cost. when it corrects to $1-2K/month, the verification tax becomes a financial tax.

"solutions looking for problems" perfect. optimizing speed without asking if speed was constraint.

enshittification compressed timeline. maybe 18-24 months before pricing/consolidation. build NOW.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

"friction was forge for critical thinking". perfect extension.

youve captured the whole tension. convenience vs depth. we optimized for speed, lost the forge.

"creators to editors" connects to doogal simpsons point about abundance requiring discipline. the skill shifts from generating to curating.

really appreciate how you synthesized multiple threads here.

Collapse
 
richardpascoe profile image
Richard Pascoe • Edited

Just had an extra thought! Since some comments mentioned how DEV itself could be a good fit, even in the interim, @dannwaneri - you could create an organisation account here, invite other members to post under it, and pin valuable resources. That way, the knowledge stays public, organised, and somewhat accessible.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

brilliant idea. this could be the bridge between "we need alternatives" and "we built alternatives"

short-term solution:

  • create dev.to org account
  • invite builders (you, fernando, others interested)
  • start curating knowledge publicly
  • test what works

then use that as requirements doc for federated version. learn what people actually need before building infrastructure.

basically, prototype on existing platform, then build distributed version with real user feedback.

are you in? could set this up this week and invite initial group. then when we launch federated Q&A we have working model to point to.

also solves immediate problem: "where should we contribute NOW while building long-term solution"

Collapse
 
richardpascoe profile image
Richard Pascoe

Sure! Count me in! Glad you liked the idea!

Collapse
 
vasughanta09 profile image
Vasu Ghanta

Provocative wake-up call on knowledge collapse—SO's 78% traffic plunge and Wikipedia's burial signal we're devouring our seed corn; let's federate Q&A on ActivityPub to sustain human curation amid AI's confident entropy!

Collapse
 
cyber8080 profile image
Cyber Safety Zone

This article brings up a point that doesn’t get discussed enough: AI isn’t just changing how we get answers — it’s changing what stays in the public record for future learners. The collapse of traffic on places like Stack Overflow isn’t just numbers — it’s a signal that less new content is being contributed back into the commons. When answers live only in private AI chats, the next generation loses the context, history, and evolution that made older knowledge bases valuable.

I also resonated with the idea that AI’s confidence can mask uncertainty. Recent research on knowledge collapse in LLMs shows that models can maintain fluency while factual accuracy degrades when they’re recursively trained on their own outputs. That underscores the concern that “confident wrongness” could compound over time if there’s no curation or diversity of sources.
What you highlight isn’t a call to abandon AI — it’s a call to be intentional about how we use it. If we build systems or workflows that capture reasoning paths and publish them back into public knowledge bases, we can mitigate this collapse and ensure future AI has rich data to learn from, not just echoes of itself.

Collapse
 
canabady profile image
Selvacanabady P

This article really resonated with me.

I see AI today much like online shopping vs local vendors. Online shopping is incredibly convenient, but it slowly erodes local shops that curate, share, and sustain community knowledge.

Similarly, AI gives instant answers privately, while public knowledge bases (Stack Overflow, blogs, forums) — our “local vendors” of knowledge — lose contributions, context, and long-term resilience.

The risk isn’t AI itself, but losing the shared, openly maintained knowledge commons that future learning depends on.

Collapse
 
darkbranchcore profile image
darkbranchcore

This is a thoughtful and honestly unsettling take — I really appreciate how you connected productivity gains with the long-term cost to the knowledge commons. I especially agree with the idea that AI should be a compass, not an autopilot, and that publishing verified reasoning back to public spaces might be the missing feedback loop. I’m genuinely interested to see how teams and individuals can turn this into a habit, because rebuilding that commons feels like the real challenge ahead.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

youve hit the core tension. individual productivity vs collective knowledge health.

the "how make it habit" question is key. some ideas:

  • treat complex AI solutions as draft blog posts
  • default to public writeup when solving novel problems
  • companies incentivize contribution (OSS fridays for knowledge)
  • teach "publish your learning" in bootcamps

but habits need incentives. SO worked because reputation. whats the equivalent for "publish AI reasoning"?

maybe thats article 3

what do you think would work?

Collapse
 
balavaradhalingam profile image
Balasaranya Varadhalingam

Private speed, public amnesia. That’s the real cost of AI.

Collapse
 
ghostlyinc profile image
GhostlyInc

This really resonates. The productivity gains are real, but the loss of public reasoning paths is scary.
AI feels great as a solver, but without public curation and visible disagreement, we’re losing the judgment muscle.
Using AI privately and publishing distilled insights publicly might be the only sustainable loop forward.
Otherwise we’re optimizing individuals while slowly starving the commons.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

"optimizing individuals while starving the commons" is the exact pattern.

the question is incentives. SO worked because reputation. what makes someone publish their AI reasoning publicly when its faster to just solve and move on?

maybe companies need to make it part of culture? "OSS fridays" but for knowledge contribution?

or maybe platforms need new models - not just Q&A but "here's how i solved X with AI, here's what went wrong, here's what i learned"

curious if youve seen any teams doing this well?

Collapse
 
ghostlyinc profile image
GhostlyInc

Very few. Mostly small dev or infra teams that publish postmortems, OSS docs, or “how we built X with AI” blog posts. Big companies almost never.

Collapse
 
quotewisio profile image
quotewiser

Love your relevant "pull quotes" from previous article discussions, and your attributions for them. Great human-sourced practice!

Collapse
 
avinashzala profile image
Avinash Zala

This nails the real risk - private AI speed is great, but without public contribution, knowledge stops compounding. Using AI isn’t the problem; not feeding the commons back is.

Collapse
 
mayankgoyal profile image
Mayank Goyal

AI gave us speed, but took away shared memory. The real question isn’t productivity - it’s whether knowledge can still compound publicly.

Collapse
 
armando_ota profile image
Armando Ota

We are racing into "So many idiots, such a small world". And younger generations are not even aware of the rise of the stupid factor. I love that we are getting tools for serving the purpose of the code monkeys, but there is A LOT more knowledge out there and younger generations are missing it. I'm glad I am the generation that needed to learn a lot to make proper software. Hopefully somewhere in the future kids will get it and turn around to do some old school learning ...

Collapse
 
dannwaneri profile image
Daniel Nwaneri

i get the concern but careful with "kids these days" framing. every generation says this.

the real issue isnt younger devs being "idiots". its structural. if AI is their primary learning tool and public knowledge commons are dead, how DO they learn foundational skills?

maame (in comments) is junior but learned from pre-AI mentors. she built foundation first, uses AI as assistant. shes doing it right.

problem. in 5-10 years most seniors will have learned WITH AI. who mentors then? transmission mechanism breaks.

not about intelligence or work ethic. its about whether the infrastructure for learning deep skills still exists.

"hopefully kids will turn around" - or we could deliberately build better learning paths now. teach verification explicitly instead of hoping pain teaches it.

appreciate the concern though. foundational knowledge matters.

Collapse
 
armando_ota profile image
Armando Ota

younger gen can still use books to learn .. in a book you get a full spectrum of a topic while with AI you get an executive summary and that is an issue. Also when people stop reading/writing books, AI knowledge stops, since no one will feed it the info it needs to give anything back. We'll see. I see that in our schools teachers are extremely against AI and the like, just for the sake of getting through to the finish line while actually learning something.

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

exactly, you nailed it.

books = full spectrum, context.
AI = executive summary, patterns

"when people stop writing, AI knowledge stops" - this is model collapse. AI trains on humans → humans stop creating → AI trains on AI → degrades.

teachers against AI "to finish line" misses point. not ban vs allow. how teach verification explicitly?

maame shows path: foundation first (books) → then AI as assistant

anthropic data: juniors using AI finished faster, scored 17% lower on mastery. ones who succeeded? asked conceptual questions to understand, not delegate.

not anti-AI. pro-understanding. AI works when you can verify it.

Collapse
 
nileshadiyecha profile image
Nilesh A.

AI accelerates individuals. Communities pay the price.

Collapse
 
cyber8080 profile image
Cyber Safety Zone

Really thought-provoking piece! 🚀 It’s easy to celebrate the convenience of AI, but this article highlights something deeper — when we stop contributing to public knowledge bases like Stack Overflow and Wikipedia, we risk weakening the foundation future generations will rely on. The idea that private AI chats can solve problems is great for speed, but that “conversation → lost forever” feedback loop means others can’t learn from it later.

Instead of just replacing public answers with private AI solves, we should think about ways to share reasoning paths and documented solutions publicly so knowledge continues to grow and stay accessible — much like the article suggests.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

appreciate you getting the deeper point.

"conversation → lost forever" is the core mechanism. individually optimal (fast private answers) but collectively destructive (no public learning).

your framing "share reasoning paths and documented solutions publicly" is what were exploring with @the-foundation org. bridge between using AI and contributing back.

question is incentives. SO worked because reputation. what makes someone publish AI reasoning when private is faster?

some emerging ideas:

  • treat AI solutions as draft blog posts (default to public)
  • companies making it cultural (OSS fridays for knowledge)
  • platforms rewarding reasoning not just answers

still figuring it out. your "reasoning paths" language helps clarify what were actually trying to preserve.

thanks for the thoughtful engagement

Collapse
 
peacebinflow profile image
PEACEBINFLOW

This whole thing has been sitting heavy with me, honestly.

I don’t think the problem is AI. I use it every day. It makes me faster, sharper, lets me explore things I wouldn’t have touched before. I’m not nostalgic for tab hell or being flamed on Stack Overflow for asking the “wrong” question.

But something is breaking.

What we lost wasn’t just Stack Overflow or Wikipedia traffic — it was shared struggle. You could see how ideas evolved, how people disagreed, how answers aged badly, how better ones replaced them. There was memory there. Scars. Context.

Now everything happens in private chats. Clean answers, no witnesses, no trail. And tomorrow, someone else solves the exact same problem again, alone, with no idea you were there yesterday.

That feels… wrong.

I don’t buy the “we’re just the horses now” argument either. Maybe some skills are obsolete — sure. But developers didn’t just use knowledge, we created the substrate AI was trained on. If we stop thinking publicly about hard problems — architecture, tradeoffs, why something failed — what does the next paradigm train on?

AI output recycled into AI input isn’t progress. That’s a feedback loop.

I don’t have a clean solution. But I’m convinced this isn’t about quitting AI — it’s about closing the loop. Use AI privately, fine. But publish the reasoning. Turn the messy dialogue into something others can build on. Make private acceleration feed public memory again.

Otherwise we’re not compounding knowledge anymore. We’re just parallelizing it.

And parallel work without shared memory doesn’t scale — it just repeats.

Curious how others are feeling about this. Are we heading for a real knowledge collapse, or is this just the awkward middle of a transition we haven’t learned how to name yet?

Collapse
 
dannwaneri profile image
Daniel Nwaneri

most thoughtful comment here

"parallel work without shared memory doesnt scale, just repeats" - exact mechanism.

not losing knowledge, fragmenting it. every dev solving same problem with AI = parallel processing without shared state.
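
crude sketch of the analogy (toy timings, hypothetical names, nothing real): N devs, same problem, no shared record vs a published one.

```python
# analogy only: without shared memory every dev pays the full cost;
# with a shared record, only the first one does.
import time

def solve(problem: str) -> str:
    time.sleep(0.1)                  # stand-in for an hour of private AI back-and-forth
    return f"solution to {problem}"

def team_without_commons(devs: int, problem: str) -> float:
    start = time.perf_counter()
    for _ in range(devs):
        solve(problem)               # each dev re-derives it from scratch
    return time.perf_counter() - start

def team_with_commons(devs: int, problem: str) -> float:
    commons = {}                     # the published write-up everyone can find
    start = time.perf_counter()
    for _ in range(devs):
        if problem not in commons:
            commons[problem] = solve(problem)
    return time.perf_counter() - start

print(f"no commons:   {team_without_commons(10, 'flaky CI auth bug'):.2f}s")
print(f"with commons: {team_with_commons(10, 'flaky CI auth bug'):.2f}s")
```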

"clean answers, no witnesses, no trail" vs SO messy but visible.

"closing the loop" is the path. use privately, publish reasoning publicly.

building this with @the-foundation (once bug fixed). bridge between use and contribution.

"awkward transition" or decision point? close loop deliberately or let fragmentation compound.

appreciate clarity

Collapse
 
uz4zeiqufjq7s4aver9m0kxibtilhi profile image
Tomonori Sasaki

I think your concerns are understandable, especially when looking at the early phase of rapid AI adoption and its impact on public knowledge.
At the same time, from a 2026 perspective, I’m starting to see signs of human adaptation—more critical review, awareness of “AI-shaped” output, and clearer boundaries around where LLMs actually help.
In my own case, generative AI lowered the initial barrier to OSS participation and helped with early momentum, rather than replacing judgment or original thinking.
The risks you describe are real, but how we choose to use AI may matter more than the technology itself.

Collapse
 
leegee profile image
Lee Goddard

I stopped reading this piece early on because it is clearly LLM generated.

The irony is lost....

Collapse
 
dannwaneri profile image
Daniel Nwaneri

the irony is the point. we've reached a stage where clear thinking gets mistaken for a watermark.
i use ai as compass not autopilot. check my comment history. been building this thinking publicly for weeks with real people. thanks for reading (even briefly).

Collapse
 
peter_truchly_4fce0874fd5 profile image
Peter Truchly

I saw that too but I can also see at least 2 good ideas in what You did:

  • clever way of community building
  • AI assisted group thinking

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

appreciate you seeing past the accusation to the meta-value.

"AI assisted group thinking" is exactly whats happening. im using AI as compass, but the THINKING is collaborative with real humans in comments.

community building through synthesis.

Collapse
 
qvfagundes profile image
Vinicius Fagundes

"We're just... parallel processing."
basically delegating it to something else, or I should say the "patient machine"
I 100% agree with you.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

bringing it full circle. your "hostile experts created the dataset" line opened this whole piece

and yeah, "delegating to patient machine" is exactly the trap. individually rational (AI is better experience than SO), collectively destructive (everyone solving same problems in parallel).

the "patient machine" doesnt judge,doesnt push back, doesnt force you to refine your question. it just... helps. smoothly.

but that smoothness means we never build the friction-born skepticism that made us good at verification.

appreciate you being part of this thinking from chrome tabs through now. youre basically co-author at this point.

Collapse
 
anrodriguez profile image
An Rodriguez

If we all stop contributing to public knowledge bases, what does the next generation of AI even train on? who is stopping, brother?

Collapse
 
dannwaneri profile image
Daniel Nwaneri

fair challenge. am i overstating?

data says no. SO traffic down 78%, wikipedia buried by google, 84% of devs using AI daily.

but youre right that SOME still contribute. question is: enough to sustain commons? or just delaying collapse?

maame (comments) still verifies with docs. richard migrating to fediverse. fernando wrote about this months ago.

so people ARE aware. but individual awareness ≠ systemic solution

acceleration is real even if not universal

Collapse
 
kumaraish profile image
AIshwarya Kumar

What goes in comes out, and a little later we'll all be screaming about what is actually going in ...

Collapse
 
miketalbot profile image
Mike Talbot ⭐ • Edited

Another great article.

I still feel there is a point you are missing here -> why do humans need to build a knowledge base? So that they and others can make things work?

Humans are noisy, messy, disorganised creatures who, through immense effort, can bend their brains to creating software solutions and sometimes document them well. Who cares? Who cares about the knowledge base if the software works? If the AI can judge if something is working, that is its training set. It tries something, it doesn't work, it fixes it -> this is a training set. It might be internal at the moment, it might be phased and not continuous learning right now. At the moment, the knowledge may be siloed or re-invented many times - that's just an inefficiency that one day will be cost-effective to address; today is not that day.

So I started my professional gaming career programming in assembler: Z80, 6502, 68000, 8086. Let's take an example of what tech was vital once and has changed completely - computer graphics, sprites and the like.

When I wrote my first computer game, the logical way to move a graphic was to draw it to the screen, then remove it and draw it again somewhere else. This was "efficient" but couldn't make massive moving screens with overlapping graphics that remained perfect. I clearly remember another game programmer asking, "Why don't you just redraw the entire screen each frame?" Wow, that works. Then I remember working on a Disney project in the 90s and coming up with the idea of compiling graphics into a program that wrote itself to the screen buffer -> massive performance upgrade. All of that knowledge, all of that documentation, wiped out by graphics cards and programmable shaders. My skills in assembler loop optimisation, unrolling, etc., are all gone.

Nobody cared about my compiled sprites; they cared about working software and a great game with a high frame rate and a lot of movement. No one cared I was being inefficient by rewriting the whole screen, it was perfectly fine - inefficient but it delivered the right experience.

A knowledge base is what a human needs: analogies, exemplars, etc. But does software need humans? Yes today. Yes tomorrow. Forever? I don't think so anymore - so the tools humans need, the ways humans learn, are not necessary. AI needs that to learn, but, like a senior engineer, it needs it less and less as experience and self-derived knowledge become more important throughout your career. Experiments and tests are how you make better solutions.

What we need from humans for the foreseeable future is an understanding of architecture and interactions, and the skill to work out how to solve real-world problems.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

this is the hardest question in the whole thread.

your compiled sprites example is perfect. all that knowledge obsolete when graphics cards arrived. nobody mourned it because BETTER solution emerged.

but heres the difference. graphics cards didnt train on your compiled sprite documentation. they were a fundamentally different approach

AI is training on stack overflow, wikipedia, github. if those die and AI trains on AI output, we get model collapse not paradigm shift

your point about "who cares if software works" assumes AI can reliably judge "working." but uncle bob said AI "doesnt foresee disaster its creating" . no architectural judgment

maybe youre right that software doesnt need humans forever. but the path to AGI might require the knowledge commons we're killing. chicken/egg.

if we reach AGI, knowledge collapse irrelevant. if we dont, knowledge collapse catastrophic

feels risky to assume AGI solves this

Collapse
 
miketalbot profile image
Mike Talbot ⭐

I know what you are saying, but real AGI would be as capable as a human, just with superhuman powers to remember and communicate. We are certainly not there yet. This is the uncomfortable bit in the middle, I think.

I think your assumption that AI training on AI output causes model collapse is a fallacy derived from issues in early LLM development. AI training on successful versus unsuccessful output in a world where we can get objective and subjective feedback is totally fine right? Preschoolers training on preschoolers' output would be a nightmare, but PhD students training on PhD papers is fine, right?

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

your phd papers analogy is sharp but has a flaw.

phd students training on phd papers works because:

  1. papers are peer reviewed (verification layer)
  2. experiments are reproducible (cheap verification)
  3. wrong papers get retracted (curation)

AI training on AI output lacks those guardrails:

  • no peer review (solver outputs arent judged)
  • expensive verification domains (architecture, security)
  • no retraction mechanism (wrong content stays in training)

ben santora pointed out: cheap verification domains (code compiles, tests pass) might be fine. expensive ones (is architecture sound?) degrade.

so maybe youre right that VERIFIED AI output is okay training data. question is: whos doing verification at scale?

right now: people trust AI, copy to blog, that trains next model. no filtering layer

if we build verification layer, maybe avoid collapse. without it, preschoolers teaching preschoolers.
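
rough sketch of what a cheap-verification gate could look like (hypothetical filenames and flow, nothing that exists today): AI-assisted output only enters a shared corpus if the project's tests actually pass.

```python
# hypothetical gate: AI output only reaches the public corpus
# when it survives cheap verification (here: the test suite).
import json
import subprocess
from pathlib import Path

CORPUS = Path("verified_corpus.jsonl")   # assumed local file, not a real service

def cheap_verify(repo_dir: str) -> bool:
    """Run the test suite; exit code 0 counts as 'verified'."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

def publish_if_verified(repo_dir: str, reasoning: str, solution: str) -> bool:
    """Append the reasoning path + solution only if verification passes."""
    if not cheap_verify(repo_dir):
        return False                      # failing or unverifiable output never enters the corpus
    with CORPUS.open("a") as f:
        f.write(json.dumps({"reasoning": reasoning, "solution": solution}) + "\n")
    return True
```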

Thread Thread
 
jamesrgrinter profile image
James R Grinter

It's almost as if Stack Overflow is the peer review system, those engaged with it honestly are providing the cheap verification, and the "gatekeeping" is curation and keeping the system running.

I just re-read The basic laws of human stupidity, and the third law "A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses." seems apposite.

Collapse
 
peter_truchly_4fce0874fd5 profile image
Peter Truchly

From a practical standpoint there is nothing better than switching to a better solution. Why bother with an old, inferior approach or tech?

However, Your personal evolution matters. Your historical knowledge got blended with historical experience and created a "new model" of Yourself. How is the "AGI" going to do the same? It is still bound by the same physical constraints of time, space and energy. Evolution and experience will still matter to it.

Problems/limitations I see with any current AI/LLM:

  • the size is not sufficient to match the human brain (its ability to talk/write is misleading, but it cannot do otherwise, like a train on rails)
  • there is no further evolution after training (some special models, but not with the current mainstream)
  • LLMs are not built to seek the truth; Gödel/Turing limitations do apply, but the LLM does not even care. What is provable or unprovable for a given theory? What theory? What is computable? The LLM is just trained for output coherency (a.k.a. helpfulness).
Collapse
 
dannwaneri profile image
Daniel Nwaneri

your gödel/turing point is crucial. LLMs trained for coherency not truth.

this is why mikes "phd papers" analogy breaks down. phd papers have peer review (verification layer). LLM output has... confidence.

size/evolution/truth-seeking limitations you list - all barriers to AGI. maybe "above API" skills matter longer than we think.

really appreciate technical depth here

Collapse
 
ben-santora profile image
Ben Santora • Edited

Re-edit:
Good article, Daniel. I have to push back against the idea that dev.to is 'outdated' - Stack Overflow, maybe - but not this platform. I would argue that THIS platform - dev.to - is the closest and best platform serving as 'commons' for exchanging ideas in this field.

There's a 2025 paper - "Hallucination Stations" - that argues mathematically that transformer-based LLMs can never reliably perform complex agentic or computational tasks beyond a certain level of complexity - nowhere near 100% reliability, maybe closer to 50%. Even OpenAI has admitted this. Errors are inevitable - even with advanced “reasoning” models.

Despite all of the hype and the many benefits of AI, the limitations are starting to get acknowledged. I still feel that keeping humans in the loop will always be the answer. To quote the Moody Blues: "We decide which is right and which is an illusion."

Collapse
 
dannwaneri profile image
Daniel Nwaneri

appreciate you seeing what im doing here synthesizing community thinking into public knowledge.

youre right that devto IS working as commons. the 90+ comments prove it.

but a question: is devto enough long-term? its centralized, venture-backed, could enshittify. what if we need distributed alternatives AS WELL AS devto?

not either/or but both/and. multiple commons platforms, federated, no single point of failure.

the hallucination stations paper is critical citation. mathematical proof of limits means "humans in loop" isnt preference, its requirement.

appreciate the moody blues reference. "we decide which is right and which is illusion" = perfect framing of verification problem

Collapse
 
ben-santora profile image
Ben Santora • Edited

Like I said in an earlier conversation, I'm older and I've learned to roll with the tech - I don't worry about it. But I can see the importance this cause has for you - and I think we've got the right guy on the job.

Collapse
 
codingpanel profile image
Coding Panel

Really eye-opening, Daniel. The irony is striking: AI trained on our public knowledge is now replacing it, and we risk losing the commons that taught verification, debate, and context.

I especially resonate with the “confident wrongness” problem—without friction, we might trust smooth AI answers over critical thinking.

I agree that the path forward is using AI to accelerate learning and then contributing back—publishing reasoning paths, documenting solutions, and keeping knowledge public. The big question: how do we incentivize this in a world where instant AI answers feel good enough?

Collapse
 
dannwaneri profile image
Daniel Nwaneri

incentive question is what im exploring in next article.

some emerging ideas from discussions:

  • companies making it cultural (OSS fridays for knowledge)
  • platforms rewarding reasoning paths not just answers
  • civic duty framing (government AI infrastructure)

but honestly? habits need structure. SO worked because reputation. what makes publishing AI reasoning worthwhile when private is faster?

still figuring it out. your input welcome

Collapse
 
fabsalvadori profile image
Fabio Marcello Salvadori

Let me play the contrarian here.
Wiki has always been pretty biased and run by a cast of contributors who decide what's relevant and what's not. If human knowledge filing depended on Wiki, we would be better off with AI hallucinations, to be honest with you.

About Stack, the decline started well before the advent of AI, again mostly due to community culture and unwelcoming moderation. In 2020, 27% of questions went unanswered; the number of questions started declining as early as 2014; most veterans left the platform around that time. I'd say that AI highlighted Stack's decline, it didn't cause it. Those devs who cry today are probably the same ones who destroyed the platform through rudeness and entitlement.

About AI running out of data, it's rather improbable. AI generates its own synthetic data, can already self-train through model distillation, and we are very close to reaching AGI-like coding expertise. In fact it will happen sooner than you expect. What will happen then? The market always works the same way: supply and demand set the price. Right now vibe coding seems cheaper, but that's only because we are in an early adoption phase and companies are ok with losing money.

If we had to pay enough to cover the real costs, vibe coding would already cost as much as programmers' salaries. When AGI comes, we won't pay the AI a salary, but we will pay a massive premium for the energy and infrastructure required to make it reliable enough to replace a human.

Do you think companies will eventually choose slower, higher-quality human code over cheap, fast, but messy AI code when those technical debt bills finally come due?
We are dealing with a new technology and when that happens there is always uncertainty and adjustments. Programmers need to adapt (already), but they won't disappear.

Collapse
 
dannwaneri profile image
Daniel Nwaneri

let me address each.

wikipedia/SO flaws: fair. they were imperfect. but imperfect public > perfect private. at least you could SEE the bias, challenge moderation.

SO decline pre-AI: true. toxic culture contributed. but AI accelerated collapse from slow decline to 78% drop. made existing problems fatal

synthetic data: this is mike talbots AGI bet. if AI achieves true invention, my thesis wrong. but peter truchly showed mathematical limits (gödel/turing, hallucination stations paper). betting on AGI = risky

economics: adrian miu showed AI currently priced 8-10x below cost. when that corrects to $1-2K/month, human code becomes competitive again on cost

your "programmers adapt" conclusion is right. but HOW they adapt matters.
thats article 3 (above the API)

appreciate contrarian view. sharpens thinking

Collapse
 
fabsalvadori profile image
Fabio Marcello Salvadori

Well, it's all just views anyway, so contrarian does not mean right :) In fact your points stand too.

I think what we can def agree about is that this is early tech that is developing faster than we are prepared to adapt to. Panic is justified as much as optimism.

Time will tell how we all end up adapting to embrace it, and for sure there will be winners and losers.

Collapse
 
aniruddhaadak profile image
ANIRUDDHA ADAK

Thought-provoking

Collapse
 
code42cate profile image
Jonas Scholz

its a bit funny that this post sounds like 100% AI written

Collapse
 
crazytonyi profile image
Anthony

"You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever."

Collapse
 
benjamin_nguyen_8ca6ff360 profile image
Benjamin Nguyen

wow! It is a great explanation