I glanced at my browser tabs this morning. Twenty tabs open. Every single one was either Claude or Gemini.
Then I remembered what my tabs used to ...
I remember when I began learning the Linux OS back in 2012. I'd get stuck and go on Stack Overflow for answers. Sometimes I'd get the answer, sometimes I'd get 10 answers and sometimes I'd get attacked for how I phrased my question. I like things better now - how all of the AI models know most everything there is to know about Linux - and I know where they learned it from.
the stackoverflow gatekeeping was painful fr.
ai is more patient for sure. but we also lost something. those threads were built by thousands of people debugging together
now it's just... me and claude. faster but lonelier somehow.
The "where they learned it from" sources are running dry. The bad docs, mentioned in another discussion, are still there. There are still forums and social media. We can still discuss and ask questions and we still do. However, if the shift towards private AI conversations continues, new AI generations will have less trustworthy source material to learn from, so their answers will likely deteriorate.
I remember times before StackOverflow (Experts Exchange and various forums, and even the good old USENET newsgroups) and before MDN (MSDN). I still have MDN tabs open, I still use StackOverflow, and I do use Claude, Perplexity and Gemini/Bard/Google AI mode, but apart from MDN, there is still no single source of truth that won't fail eventually.
Some day, using AI will become more expensive, and something new will emerge, hopefully better than all that we have right now. Critical thinking, and learning by doing will never go away though.
Right - critical thinking, learning by doing, and as you say, we're all talking and exchanging information with each other here on this forum. We used slide rules in 1971 in my electronics classes and studied electron tubes - calculators were just arriving on the scene. Now there are smartphones and wireless and I can run AI on my laptop - I've learned to roll with the tech.
fair point. tools change, fundamentals don't.
i think my worry is less about the tool and more about the shift from public knowledge bases to private conversations. but you're right that we adapt.
appreciate the perspective from someone who's actually lived through multiple tech waves 🙏
"knowledge running dry" is exactly it.
we're all optimizing our individual workflows while accidentally killing the commons. stackoverflow was annoying but it was PUBLIC. future devs could learn from our struggles. now all that reasoning is locked in private ai chats
and you're right about cost. when ai gets expensive or goes away, what do we have left? a bunch of people who forgot how to debug without it?
this feels like tragedy of the commons but for developer knowledge. no idea how to fix it but naming it is important
thanks for this perspective
I don't think private is the biggest problem; at least the AI companies could use private chats for training, but only with feedback and curation. If all AI chats were public, indexed and searchable, they would still contain all the failed steps with incomplete code and misleading suggestions: much harder to make sense of than StackOverflow with its strict voting system (and outdated answers, but that's often obvious and commented) or forums where senior members can edit titles and people often share their final solution and mark a topic as solved. Most AI chats are too conversational to become part of a new knowledge library.
fair point. ai chat transcripts would be as messy as my git history. technically a record but not actually useful 😂
i think you're right that the curation/voting system was the real magic of stackoverflow, not just the public visibility. raw conversations without that filtering layer are just noise
maybe the real question is: how do we rebuild that curation mechanism for ai-era knowledge? bc right now we don't have it and individual productivity is masking that gap
really appreciate this back and forth btw. this is exactly the kind of thinking i was hoping for
I started learning Linux in 2012. When I got stuck, I'd go to Stack Overflow. Sometimes I'd find an answer. Sometimes I'd find ten. Sometimes I'd just get attacked for how I asked the question.
Now I ask Claude or Perplexity and get a clear, patient explanation—no judgment, no "this has been asked before," no downvotes.
And here's the thing: I know exactly where these models learned it all. From Stack Overflow. From the Arch Wiki. From all those forum threads where someone got roasted for not reading the manual first.
The communities that gatekept knowledge ended up training the tools that now give it away freely.
We haven't really sat with that yet.
The "dumb question" you were afraid to ask is now the safest one to ask
Hostile experts created the dataset for patient machines
My students will never know what it felt like to mass-open 30 tabs and pray
"hostile experts created the dataset for patient machines"
THIS. we built the knowledge commons through collective struggle, then got replaced by the nicer version of ourselves
your students will learn faster but differently. not sure if that's progress or just... change
either way this comment is 🔥
Fair point... we learnt and were happy being grilled on Stack Overflow; the criticism was brutal... still, Stack Overflow was the best spot for help back then
Fantastic article ...
Now, this just made me wonder: could AI write this ... ? I think the answer is an unqualified "NO", and that clearly shows where humans still have the edge, and will keep that edge for the foreseeable future ...
appreciate that 🙏
ai loves giving solutions. this is more just... processing out loud and not knowing the answers yet
glad it landed
This article explains why human-written texts are often still a lot better than AI-generated text:
dev.to/ujja/ai-confluence-docs-and...?
yes, ai writes to sound correct. humans write to figure things out
the messy thinking-out-loud is what people actually want to read
good link
Plus it has a tendency to be formal and abstract - which makes it hard to digest, coz it's difficult to relate to the abstractions ... and tends to be repetitive as well - often you can spot AI-written text from a mile away, not because it's wrong or incorrect, but because it's got something "robotic" :-)
"robotic" is exactly it. too perfect, too structured, zero rough edges.
humans think out loud. ai writes reports.
people want the thinking not the summary.
appreciate you leob 🙏
Is this a challenge? ;)
No, it's just an observation ... :)
Code for thought. Even though AI is becoming the new norm, it still requires us to know what exactly we want. It's like a developer getting to understand what the client wants. Now developers are AI's clients. Excellent article.
100%. the bottleneck moved from implementation to articulation
asking the right questions matters more than having the right answers now
thanks.
This really hit close to home.
The part about losing the collective struggle of Stack Overflow threads resonated a lot. Those messy tabs weren’t just solutions — they were context, debate, and scars from other devs who had already been burned.
I feel more productive than ever with AI, but also strangely more alone in the problem-solving process. Less “community knowledge”, more private conversations.
I don’t think this makes us worse developers — but it does change what “good” means. Judgment, system thinking, and knowing when to push back on the AI feel more critical than raw recall ever was.
Curious to see how we teach newcomers in this world. Reading docs was painful, but it trained a kind of patience and skepticism that I’m not sure chats automatically build.
Great post — this is one of the conversations we should be having more openly.
this is it. "less community knowledge, more private conversations"
the productivity is real but so is the isolation. and you're right. judgment matters way more than recall now. which is probably good? but also we have no idea how to teach that systematically yet
the newcomer question keeps me up tbh. reading bad docs built skepticism by accident. ai gives you answers confidently. how do you learn to doubt?
thanks for adding to this. really good perspective. 👍
That’s such a good point about skepticism being “accidentally” trained by bad docs.
AI answers confidently by default, and without friction it’s easy to skip the doubt step. Maybe the new skill we need to teach isn’t how to find answers, but how to interrogate them — asking “what assumptions is this making?” and “when would this fail?”
Feels like we’re still early in figuring out how to pass that mindset to newcomers. Appreciate you pushing the conversation further.
"interrogate answers" is the perfect framing.
the old way: friction forced skepticism
the new way: we have to teach doubt explicitly
no clue how to do that at scale but naming it is step one i guess
appreciate the back and forth, this is exactly what i was hoping for with this post. 👍
Still using documentation and tutorials myself. Starting out, I do believe fundamentals remain important for a firm foundation that can be built on.
100%. fundamentals are even more important now imo. you need to know what good code looks like to catch when AI generates garbage.
what are you learning rn?
Currently working through the Responsive Web Design certification at freeCodeCamp but my first love is Python!
python gang 💪
fcc's structure is really good for building muscle memory. you thinking fullstack eventually or backend focused?
I aim to complete the freeCodeCamp Full Stack curriculum, though I’m more backend-leaning. Python's my favourite, so I'll focus there while still keeping up with the frontend basics.
makes sense. knowing enough frontend to not be completely lost is clutch even as a backend dev
fastapi is fire if you haven't checked it out yet
No, I haven't as of yet. Appreciate the heads-up though!
for sure! you'll prob run into it eventually, it's everywhere now
good luck with fcc 💪
I ditched stack overflow a long time ago, but when I use LLMs for documentation, I still check the original docs sometimes just to be 100% sure. I read an article somewhere where the author mentioned that the major LLMs sometimes "make up" stuff that does not exist 😂 that made me lose trust in ai altogether, so I still do have documentation on my tabs for sure
Agree, AI hallucination is a reality (and very similar to AI tools preparing code that does not compile, and attempting to do so without asking for a full understanding of the problem to be solved) and I still have links to Stack Overflow articles (like at the end of this article dev.to/shitij_bhatnagar_b6d1be72/s...) because I still end up referring to Stack Overflow once in a while :-)
good point about the compilation angle. that's actually a form of cheap verification that ben santora talks about in his work on AI coding risks.
when AI generates code that doesn't compile, you catch it immediately. when it generates code that compiles but has subtle logic bugs or security issues, you don't know until much later.
that's why "it compiles" is becoming a weaker signal of correctness in the AI era. we need additional verification layers
appreciate the link to your article too. will check it out.
Thanks for your comments and excellent point about the subtle and other shortfalls in AI output, though I do believe it will slowly improve / learn and get better.
Let’s see what the future brings
this is exactly the verification problem i keep coming back to.
you're doing what ben santora calls "keeping humans in the loop": using AI for speed but docs for verification. that's the right approach but it also means you're doing MORE cognitive work than either method alone.
the "making up stuff" problem (hallucinations) is why AI can't fully replace documentation. but here's what worries me: if everyone stops contributing to stack overflow / public docs because "AI is good enough", what happens when you need to verify?
you're being smart about trust. the question is whether the next generation will be as skeptical, or if they'll just accept whatever AI says confidently
appreciate you sharing your workflow here.
Exactly. I truly doubt the AI "reign" would last for a long period of time. What might happen is, AI would keep getting more irrelevant as time goes on if people stop learning and contributing and depend solely on it. Which might really affect the next generations to come. This is a really interesting topic you've written about, Daniel
you just described the feedback loop that keeps me up at night.
peter truchly said something similar yesterday. "if people stop contributing, the only entity left to build that knowledge base is AI itself" which leads to model collapse (AI training on AI-generated content, degrading over time).
you're coming at it from a different angle but arriving at the same conclusion. the AI "reign" is self-limiting because it's consuming the knowledge commons it depends on.
the generational piece is what really worries me. you and i learned the old way. we have skepticism built in. but juniors entering the field NOW? they'll trust AI by default because they never experienced the alternative.
i'm working on a follow-up article exploring exactly this. would love to cite your insight if that's cool. you articulated the collapse mechanism really clearly.
thanks for extending this thinking
That would be awesome! Would be looking forward to read the next article 💯
LLMs often struggle with troubleshooting software or games, skipping steps or making things up entirely. And when you point it out, they’ll confidently reply, “Yes, that menu was removed in an update,” as if that fixes the problem.
Exactly they make up stuff and “apologise” when you point it out
My best comment on this would be a quote from a 1964 Bob Dylan song, titled "The Times They Are A-Changin'":
*Then you better start swimmin' or you'll sink like a stone, for the times they are a-changin'.*
great quote. we're swimming for sure.
just not sure if we're building something together or just... not drowning individually
appreciate you reading 🙏
I generally share the same point of view. AI is really convenient and can produce a clean, quick, usable answer right away. I still often search with Google, but the results can be overwhelming, ambiguous, or buried in long threads of failed attempts — which isn’t useless either. That said, we shouldn’t forget that AI models are trained on years of documentation, questions, and exploratory content… and future generations might not benefit from such rich source material. From my perspective, a good minimum would be using the solid answers we get from AI to build clean, useful wikis that are helpful both to us and to future AI systems. But the race for profit has become the norm, and it’s hard to break away from that.
exactly. "future generations might not benefit from such rich source material" . this is the knowledge collapse problem
we're all consuming the commons (stackoverflow, docs, wikis) through ai but not contributing back. eventually the well runs dry.
your wiki idea is interesting though. treat ai conversations as raw material, then curate/publish the good stuff. rebuilds the public knowledge layer.
no idea how to make that happen at scale but it's better than just... private chats forever.
appreciate this perspective 🙏
Personally, I found Stack Overflow a significant distraction, frequently wrong, misguided, and with conflicting advice on the things I needed most. A vital part of my work, but a frustration. AI removes that for me; my understanding is much improved, though I get fed up with the writing style, which jars with me.
The lack of such sources of information for new developers will not matter if future development is mostly done by AI. Scary thought. It probably wouldn't invent half the architecture that I use, designed to optimise for multiple teams working on the same code base, but even that won't matter if the teams are all coordinating agents.
I fear Stack Overflow, dev.to etc are like manuals on how to look after your horse, when the world is soon going to be driving Fords.
the horse/ford analogy is provocative and honestly might be right.
but here's what worries me about "AI does all the development": who verifies the AI's architecture decisions? in domains with cheap verification (does it compile? does it run?) AI is probably fine. but system architecture, scaling patterns, team coordination... those have expensive verification. you only know you're wrong when production falls over months later.
your point about "optimising for multiple teams working on the same codebase" - AI wouldn't invent that because it's learned from individual problem-solving, not organizational design. and if we stop doing that thinking publicly, future AI can't learn it either.
maybe the real question isn't "will AI replace developers" but "what level of the stack are we operating at?" if we're just implementing features, yeah, probably automated.
if we're designing systems for human organizations... maybe not?
though your coordinating agents point is chilling. if the teams ARE agents, then organizational design becomes compiler design. whole different game
not sure if i'm optimistic or terrified. probably both.
appreciate the pushback though. this is exactly the kind of uncomfortable question we should be asking.
This hit a little too close to home.
The tabs thing isn’t just a productivity anecdote — it’s a signal that the interface to knowledge has changed. We didn’t just swap Stack Overflow for chat, we swapped navigation for conversation. That’s a deeper shift than “faster answers.”
What resonated most for me is the isolation angle. Those old tabs were messy, but they were communal. You could see disagreement, timestamps, edits, scars. Now the knowledge feels cleaner, but also… private. It’s me and a model, not me and the trail of people who struggled before me.
I don’t think we’re losing skill so much as relocating it. Syntax recall and API trivia are getting pushed down the stack. What’s moving up is judgment: knowing what to ask, what to doubt, and when to stop trusting the output and go verify reality.
The uncomfortable question isn’t “am I a worse dev now?” — it’s whether we’re building systems that remember and expose reasoning, or ones that quietly replace it. If AI stays stateless and opaque, that’s where the real loss is.
Feels less like “AI replaced docs” and more like “we changed how understanding is negotiated.” And yeah — we definitely haven’t finished processing that yet.
man this comment is better than my article.
"negotiating understanding" vs "finding answers" - thats IT. the entire cognitive process changed and were acting like we just got a faster search engine
and the stateless thing is huge. stackoverflow had institutional memory. ai has... vibes and training data. when something breaks in a new way, whos building the knowledge base for future people?
maybe we need to start treating our ai conversations as PUBLIC artifacts somehow? idk what that looks like but private learning at scale feels like a dead end
really glad you engaged with this. you're pushing the thinking forward.
I just hope that conversation data is used for training, otherwise the only entity left to build that knowledge base is AI itself. (Actually it may be built for future AIs instead of future people.)
This hits home…
Best article and also very relatable points of view. It’s a new and sad reality that we live in.
The shift from the 'communal struggle' of Stack Overflow to the 'efficient isolation' of AI tabs.
We gained speed, but we lost the shared context and the feeling of solving something together with the community.
Beautifully nuanced and deeply relatable. Thank you for putting this story into words!
"efficient isolation" is perfect
we solved the speed problem and created a loneliness problem. not sure which is worse tbh.
thanks for reading and engaging 🙏
It was a pleasure
Love your article 🤍.
I still have documentation open in my tabs, and of course Claude and ChatGPT are now everyday tools for me. I hope the future becomes more productive, though sometimes it feels like we are making ourselves less thoughtful by relying too much on tools.
If you think about the past, there were truly elite programmers who wrote code in binary. Today, that feels almost impossible. We keep adding layers of abstraction. Now, only a small group of highly skilled programmers are really thriving, making money, and securing their future.
oof yeah this hits hard.
the abstraction ladder (binary → assembly → AI) is a pattern we can't ignore. each layer made the previous generation's skills "almost impossible"
but here's what worries me about the AI layer specifically: previous abstractions were DETERMINISTIC. you could always trace down if you needed to understand what was happening underneath
AI is probabilistic. the "underneath" is opaque in a different way.
you're right that this creates stratification. those who understand the limits of the abstraction vs those who just use it blindly. but i don't know if the solution is "everyone learns the underneath" or "we build better verification layers"
still figuring this out. your point about thoughtfulness vs productivity is the core tension - we're optimizing for one at the expense of the other.
appreciate you naming the economic stakes here
This question has been in my head for a long time now. I saw all this AI go from just appearing to becoming part of our life. But that question, for me, did not start with AI. It started with tools like vscode or any IDE giving syntax highlighting, snippets, and showing errors. It may sound weird, but I felt like that.
But now, I feel it's like old songs vs new songs. Still, there are people born in the new generation who like the old songs. So it depends on our inner feel.
Now, I am just trying to get that feel back!
you were ahead of the curve noticing this with IDEs. syntax highlighting, autocomplete, snippets. those WERE the early abstraction layers.
the difference is those tools still forced you to understand what you were writing.
autocomplete suggested, you decided. AI can write entire functions you never examine
but you're right about the "old songs" thing. some people will always prefer the friction because the friction taught something valuable. question is: can we design new tools that teach the same lessons without requiring the same pain?
appreciate the perspective. getting the feel back is the move 🙏
My tabs are still documentation, Github & Stackoverflow and this will never change. By using AI, you opt out of sharing your knowledge with the broader community in a publicly accessible space and consolidate power in the hands of corporate monopolists. They WILL enshittify their services and extract all the wealth they can from you once you depend on their AI. Don't outsource your critical thinking to it. Even a small, glanced-over syntax difference can make the difference between good/secure software and total failure.
respect this position and honestly... you might be right about the enshittification.
the corporate consolidation angle is something i didn't fully explore in the article but it's real. we're trading community-owned knowledge (SO, wikis) for corporate-owned AI (Anthropic, OpenAI, Google). at least stackoverflow was never going to paywall the answers or change the API pricing overnight.
your point about syntax differences is exactly what ben santora's been writing about. cheap verification domains (does it compile?) vs expensive ones (is it actually secure?). AI sounds confident either way
writing a follow-up about knowledge collapse, and the corporate ownership angle needs to be in there. we're not just moving from public to private knowledge - we're moving from commons to capital
appreciate the pushback. this is the uncomfortable conversation we need
what's your take on how we prevent the enshittification? or is it already too late once we're dependent?
Hey thanks for taking the time to put out a thoughtful reply.
I don't have the solution. I just do my part, focus on what I believe in and abstain from what I don't. AI in itself could be great progress, but not in the way its means of production are currently distributed.
Regarding the syntax difference, I carefully worded it that way (not syntax error), because the AI could have generated
`const s = useSession()` instead of `const s = await useSession()`, or `useSession` instead of `useAsyncSession`, which are very easy to glance over and miss if you weren't in the zone, coding and testing this yourself.
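A minimal sketch of why that one bites, with hypothetical stand-ins for the composables above (not real framework code):

```typescript
// hypothetical stand-in for the framework's session composable
async function useSession(): Promise<{ user: string } | null> {
  return null; // pretend nobody is logged in
}

async function handler() {
  const s = useSession();        // missing `await`: s is a Promise, not a session
  const ok = await useSession(); // what was actually intended

  // a Promise is always truthy, so this guard silently passes at runtime
  // (plain JS won't complain at all; only recent strict TS flags it)
  if (s) console.log("bug: treated as logged in even though the session is null");
  if (ok) console.log("never prints: the session really is null");
}

handler();
```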
This really resonated. The tabs metaphor is perfect - it captures how quietly but completely the workflow has shifted. The point about losing the collective struggle of Stack Overflow hits hard too. We're definitely trading memorization for judgment and taste, and I'm still not sure if that makes us better engineers or just different ones. Feels like we're mid-paradigm shift and don't have the language for it yet.
Such a powerful reflection on the shift in how we work as developers. The transition from researching and memorizing to relying on AI for real-time answers is huge, but it raises important questions about what we're losing in terms of deeper understanding and community.
The balance between efficiency and skill-building is something we’ll be grappling with for a while. It feels like we’re in a moment of evolution, but I agree, it's still unclear whether it's for the better or just a different way of doing things.
"we're mid-paradigm shift and don't have the language for it yet" . this is such
a clean way to put it.
you nailed the core tension. are we becoming better engineers or just different ones?
i genuinely don't know yet. the speed is real, the isolation is real, the dependency is real. whether that nets out positive... we're still figuring it out.
the fact that we're all processing this out loud in these comments (publicly, together) while the article is about moving to private ai conversations... there's something meta there that gives me hope.
we might not have the language yet but at least we're trying to build it together.
really appreciate both comments. you're articulating things i was struggling to name
Nice work
I love this! It’s amazing how something as simple as Chrome tabs can reflect our workflow, priorities, and unfinished ideas. Totally relatable — especially for anyone juggling multiple projects or clients. Thanks for turning a daily habit into a meaningful insight! 🚀✨
appreciate it. tabs as evidence is real.
the shift happened so gradually i almost missed it. then one day: all ai chats.
thanks for reading 🚀
What a great and 'to the point' article Daniel. I think you are hitting a nerve for 2026 early. Our way of working has clearly changed over the past couple of years; it's time we start the philosophical conversation.
I found the most striking thing from your article was the part about us losing a connected web with people through docs, stack overflow, etc. It was a sudden sadness. On the other hand, it's early days, and perhaps our next 'stack overflow for the ai age', is yet to come. Perhaps it will be even better for us.
Thanks for sharing your thoughts.
"perhaps our next 'stack overflow for the ai age' is yet to come".god i hope so
the sadness is real though. we traded messy community for clean isolation and didn't realize what we were giving up until the tabs changed.
writing a follow-up exploring what that "ai age stackoverflow" would need to look like.
bc you're right. we're early. the answer probably isn't "go back" but "build something new that captures what we lost"
appreciate you engaging with this. the philosophical conversation is overdue
I've asked a lot of programmers that claim a 10x productivity increase after using AI if they would be willing to pay half of what they get after the improved productivity to the AI company. Like, if you earn 2000 per month and using AI gives you a 10x productivity improvement, shouldn't you be willing to pay 10000 per month to use AI? Even at a 2x improvement it would be cost effective to pay 1000 per month to use AI. If you earn 10000, you should be willing to pay 5000 to AI, right?
At the moment AI companies price tokens below the cost of producing said tokens. They need to raise the price 8-10 times to break even. A Claude Code subscription should be about 1000/month. At that price a lot of users would stop paying, so they would have to increase it more for the rest of the users. Maybe it would get to 2000/month. That would change the landscape, right?
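To make the arithmetic concrete, a quick sketch of the rule the commenter is proposing (all figures are his hypotheticals, not real salaries or real AI pricing; his 10x number comes out a bit lower this way, but the shape of the argument holds):

```typescript
// "pay half the surplus": value produced minus salary, split with the AI vendor
function rationalAiBudget(monthlySalary: number, productivityMultiplier: number): number {
  const valueProduced = monthlySalary * productivityMultiplier;
  const gain = valueProduced - monthlySalary;
  return gain / 2; // you still pocket the other half of the gain
}

console.log(rationalAiBudget(2000, 10)); // 9000/month for a claimed 10x boost
console.log(rationalAiBudget(2000, 2));  // 1000/month even at a modest 2x
console.log(rationalAiBudget(10000, 2)); // 5000/month on a higher salary
```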
this is the economic endgame nobody wants to talk about.
if AI pricing reaches rational levels ($1000-2000/month), we get stratification:
high earners: can afford, stay productive
everyone else: priced out, return to... what? the commons we killed?
this is why "moving from commons to capital" is so dangerous. were burning the bridge while standing on it.
Stack Overflow dying while AI is subsidized = fine (for now)
Stack Overflow dead when AI costs $2K/month = catastrophe.
the window to preserve alternatives (federated platforms, public knowledge) is NOW, while AI is still cheap enough that we don't feel desperate.
once pricing corrects and people get priced out, they'll realize too late what we lost.
really important economics here. mind if i cite this in my next piece about building alternatives?
This feels like a shift in Economy. The "Old Way" was scarce—finding the right answer took effort, so we valued simplicity. The "New Way" offers abundance, but the trade-off is Focus.
We are trading the friction of search for the discipline of editing. The challenge now isn't generating the code, but having the guts to reject the "Kitchen Sink" solutions the AI offers. We have to work harder to keep things boring and readable.
Your point about switching from endless Stack Overflow tabs to just chatting with AI really shows how our daily coding flow is changing.
yeah the shift was so gradual i barely noticed until one day all my tabs were just ai chats
can't decide if i'm more productive or just dependent in a different way lol
what's your workflow look like now?
My tabs? 90% AI chats, 10% "why did my code break AGAIN" tabs that refuse to close—like bad exes 😂 What's yours holding hostage?
the tabs that refuse to close
mine are all claude/gemini now plus one localhost tab that's been open for 3 days.
we really just traded stackoverflow hoarding for ai chat hoarding huh
Facts 😂
Worse :)
This is one of the most honest takes I’ve read on AI-assisted dev work. Not hype, not fear — just real reflection. Especially agree with the part about knowledge shifting from public spaces to private chats. That feels like a big cultural change we’re not ready for yet.
appreciate it 🙏
yeah the hype/fear binary doesn't capture what's actually happening. most of us are just trying to figure it out in real time
the public knowledge thing feels like the biggest unresolved question. glad it's resonating with others too
thanks for reading
This resonates hard. The real shift isn’t speed, it’s moving from documentation-driven learning to conversational reasoning. The skill now is judgment, not recall.
"documentation-driven learning → conversational reasoning" is such a clean way to frame it
the question i keep coming back to: if judgment is the new skill, how do we teach it? recall was easy to train. you practice navigation until you know where things are. but teaching someone to doubt confidently delivered answers? that's harder.
appreciate you extending this
This hit uncomfortably close. The “tabs as a mirror of how we think” idea is spot on. I’ve noticed the same shift — less memorizing, more reasoning and validation. Feels like we traded raw recall for judgment and systems thinking. Not sure yet if that’s good or bad… but it’s definitely real.
I am in the midst of upgrading a Web based application that I wrote only 3 years ago. I had the same experience of accessing multiple sites and trying to tie together what I was able to glean.
The difference now is that I can concentrate on being the knowledge worker, ensuring the business rules are met and that the product meets the customer usability requirements.
The upgraded product has an improved workflow and has a tighter database integration.
As a one-man dev shop, I cannot know it all. But what I do know is what the customer wants, and that's what I can deliver with the help of AI.
Love this example. you're doing exactly what works: using AI as the solver while you stay the judge.
The key difference: you have deep context (3 years with this app, direct customer knowledge). You can verify AI output against reality.
The collapse happens when people without your experience treat AI as autopilot instead of compass (as Ali-Funk put it in the comments).
Solo dev + AI + domain expertise = powerful. Junior dev + AI + no context = risky.
Appreciate you sharing this. it's a good reminder that the tool isn't the problem, it's how (and by whom) it's used.
Awesome Article! I have been living with this guilty conscience for some time, relying on AI instead of doing it the old way. Realized after reading the article and comments that I am not alone.
the guilt is real but i think we're feeling guilty about the wrong thing.
it's not wrong to use better tools. the problem is we're ALL using them privately instead of sharing what we learn publicly.
maybe the answer isn't "stop using AI". it's "start documenting what you build with AI in ways others can learn from"
and yeah, you're definitely not alone. these comments have been wild. everyone's processing the same shift
appreciate you reading 🙏
"the answer isn't stop using AI - it's start documenting what you build with AI in ways others can learn from" - Yes!
I loved that
Hey, that was really fun!
Wow, that’s so relatable! My tabs are basically a diary of half-baked ideas, random research, and things I’ll probably never finish—but somehow it tells my story anyway.
This resonates a lot. Feels like we’ve shifted from searching for answers to conversing with knowledge. Powerful, but it definitely changes how we learn.
"conversing with knowledge" is a perfect phrase for this.
powerful for sure, but i wonder if we're losing the breadcrumbs. the public trail that let future devs learn from our struggles.
Thank you for sharing.
I think there's a lot of good stuff in here to unpack. However, I've got to say this list is a clanger:
Memorizing syntax
Knowing every edge case of a library
Perfect recall of API methods
When were those EVER important skills to have? I've been programming since before the Internet (when paper-based documentation filled my bookcase)
My POV on things:
Just my 2c worth
Ha!
"Am I becoming a better developer or just a better prompt engineer?"
Did I feel emotional because of this line for 20 minutes straight or what! Great write-up. Captures so many emotions
The tabs analogy is spot on. Productivity aside, the bigger shift is cognitive - less recall, more judgment and system thinking.
"less recall, more judgment and system thinking" . yeah this is it.
the cognitive shift is what worries me most. we're optimizing for speed but might be losing the friction that built judgment in the first place.
that's awesome
Exactly - This is the real stack, thanks so much