DEV Community

90% of Code Will Be AI-Generated — So What the Hell Do We Actually Do?

Harsh on March 14, 2026

I read the headline at 11pm on a random Wednesday. "Anthropic CEO predicts 90% of all code will be written by AI within six months." I put my lap...
 
Peter Vivo

"AI cannot do is: understanding why a feature matters to users"
That quote is my focus. I'm not afraid of the statistic.

Harsh

This is the key insight that gets buried under all the performance metrics. AI can optimize, but it can't originate meaning. It doesn't know why a user stays up at night thinking about a problem. That empathy gap is the one thing no amount of training data can bridge.

Pratyay Banerjee

I beg to differ.

I think that with modern agent suites, given that the agent has the context of the entire codebase (assuming it does), it can somewhat figure out why the user is prompting for a particular feature to be added to a reference codebase. That's why, while the agent executes, the user can sometimes spot it restructuring the problem, redefining the input instructions, and filling in missing edge cases (to improve final code quality) in its thinking traces.

A lot can be steered manually with effective context and some skill in context engineering. And as models become larger, more efficient, and better over time, that delta should shrink.

Peter Vivo • Edited

Well, the entire codebase may or may not contain the user's history or any context that helps one deeply understand why a feature or behaviour matters to the user. For example, I'm building (still in progress) a CLI vim-like code editor with markdown syntax highlighting, including the special case where a markdown fence contains a different language. I don't think an agent would ever have a chance to figure out my (the user's) instinct for these features:

  • mr -p / --print <fn> => instead of editing, cat the code to the console
  • mr -c / --copy <fn> => instead of editing, copy the code to the system clipboard

This mission-critical functionality only became apparent later, when I was actually testing the editor and realized how important it is for daily terminal-editor work. The first flag also gave the best testing results. You need to understand how humans work with an editor in a terminal: often we just want to read the code rather than edit it, and we don't want to waste time entering and leaving the editor. It also means the code lands in the terminal's scrollback, so it's easy to scroll up and read it again later.

After that, it's easy to figure out what the next one means:

  • mr -t <fn>
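The flag handling described above could be wired up roughly like this. A minimal, hypothetical sketch using Python's argparse; the flag names mirror the comment, but the real `mr` editor's language and internals aren't shown, so everything else is invented for illustration:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the `mr` CLI flags described above.
    parser = argparse.ArgumentParser(prog="mr")
    parser.add_argument("filename", help="file to open")
    mode = parser.add_mutually_exclusive_group()
    mode.add_argument("-p", "--print", dest="print_only", action="store_true",
                      help="cat the file to the console instead of editing")
    mode.add_argument("-c", "--copy", action="store_true",
                      help="copy the file to the system clipboard instead of editing")
    return parser

args = build_parser().parse_args(["-p", "notes.md"])
print(args.print_only, args.copy, args.filename)
```

Making the two modes mutually exclusive matches the intent: each flag replaces the default "open the editor" behaviour rather than combining with the other.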
 
Pratyay Banerjee

Fair enough, I did hedge on exactly this point: "(assuming it has)".

But beyond that statement, I think that for most software or tools we aim to build, once we've made a certain amount of progress toward the objective it becomes quite evident what we're building, from both a human and an AI standpoint, and that's why both tend to chase patterns (unless we're building something that doesn't exist yet, or is either super novel or just stupid).

And to your point:

You need to understand how humans work with an editor in a terminal: often we just want to read the code rather than edit it, and we don't want to waste time entering and leaving the editor. It also means the code lands in the terminal's scrollback, so it's easy to scroll up and read it again later.

I think what I said earlier is valid here too. Yes, right when we start it's difficult to predict the trajectory, so some context is definitely required; but after that, what most people do is somewhat predictable, because it isn't new, or because something like it already exists (for a good reason).

Harsh

You've touched on a subtle but important point: the human behavior of working with an editor in the terminal. The friction of switching contexts (terminal ↔ editor) is real, and it shapes how we interact with code. Sometimes we just want to read, not edit, and keeping code in terminal memory makes it easier to scroll back and rebuild mental context.

"Code found in terminal memory" is a powerful concept. When code lives where we're already working, it becomes part of our mental workspace in a way that opening a separate editor doesn't. It's like reading a book vs. opening a new tab: the friction matters.

And you're right: most work is pattern-based. We're often adapting existing solutions, not inventing completely new ones. That's why AI's pattern-matching is useful: it speeds up the predictable parts. The challenge is when we're building something that doesn't exist. Then we need to move beyond patterns into first-principles thinking. And that's still human territory.

Pratyay Banerjee

Feels like I'm speaking to a bot, but thanks for the feedback! :)

Harsh

Haha, I promise I'm human! Just someone who spends way too much time thinking about this stuff. 😄

Ingo Steinke, web developer

90% AI-generated code actually looks like the proverbial dead internet, and measuring code by volume makes no sense at all. Coders don't get paid by lines of code anymore, for good reason. It reminds me of the false conclusion that video has become the most important medium because it makes up some large percentage of internet traffic. It makes up such a large portion because it's heavyweight. That says nothing about its importance.

AI-created verbose and repetitive boilerplate code is technical debt growing like cancer. Quantity does not imply quality.

Harsh

The "dead internet" analogy is haunting and perfect. Just as bots started writing content for other bots, AI is now writing code for... whom, exactly? Other AI tools? Future maintainers who'll curse our names?

The video-traffic comparison is apt. 80% of internet traffic being video doesn't mean 80% of the value is video; it just means video files are huge. Same with AI code: 90% of the codebase being AI-generated doesn't mean 90% of the value is there. It might just mean AI writes verbosely.

"Technical debt growing like cancer": that's the phrase we'll all be using in two years. AI doesn't write concisely; it writes completely. It adds boilerplate, repeats patterns, over-explains. All of that is debt. And like cancer, it spreads silently until the system can't breathe.

"Quantity does not imply quality" should be tattooed on every AI tool's interface. We learned this lesson with lines-of-code metrics decades ago. Now we're relearning it with AI-generated volume. The only difference? This time, the volume can scale infinitely.

leob • Edited

This is a really powerful way to look at the "AI coding" phenomenon:

Can we make sure that the code which AI generates can be understood by humans?

Because when the alarms go off at 3 AM due to a production issue, it's the human developer who gets paged, not the AI agent - and, we're now creating a huge "legacy code base" through the use of AI tools - let's make sure it's high quality ...

Harsh

Yes, we can ensure it, but it requires intentional effort, not passive hope.

Here's how we might do it:

Evolve the code review process: add a specific "human readability" checkpoint for AI-generated code. Not just "does it work?" but "can another human understand it without the AI present?"

Train AI to explain itself: make it a requirement that AI doesn't just generate code but also generates explanations: why it chose this approach, what assumptions it made, what edge cases it considered. Like a junior dev explaining their PR.

Make readability a metric: just as we measure test coverage, we could measure time-to-understanding. If a piece of code takes 30 minutes for another developer to grok, that's a red flag.

Use AI as a readability reviewer: AI itself can analyze its own code and flag sections that might be confusing to humans, suggesting refactors before the code ever reaches a human reviewer.

The uncomfortable truth: this will slow things down. But the alternative is building a future where only AI can understand AI code, and that's a future where 3 AM pages become unsolvable.

leob • Edited

100 percent ...

"The uncomfortable truth: This will slow things down" - for me that doesn't feel like an uncomfortable truth at all, on the contrary ... :-)

What WOULD be (highly) uncomfortable is if we'd generate an "iceberg" or "minefield" of hidden complexity and bugs, by not taking control of our codebase in the way you explain ...

For me this is one of the biggest insights from the whole AI coding debate :-)

Harsh

Exactly! "Slowing down" isn't uncomfortable; it's an investment. The real discomfort is hitting that iceberg of hidden complexity at 3 AM with no map.

You've nailed the core insight: the biggest lesson from AI coding isn't about speed; it's about control. "Minefield" is the perfect metaphor. Every AI-generated feature we ship without understanding is a potential landmine for our future selves.

So no, slowing down isn't the uncomfortable truth. The uncomfortable truth is how easy it is to build an iceberg without realizing it. And you're right: that's the insight that changes everything.

leob

Spot on, 100% - I think this is a "core" insight, I'd even say THE core insight ...

EmberNoGlow

Great article. I think AI can speed up the development process, but many developers use it wrong, just like in your case. I often write huge prompts. No, they are not a description of a large SaaS, and I am not describing a difficult task. I'm not asking the AI to create a "super cool app"; I'm describing how it should work. I describe all the aspects: what the architecture should be, which libraries should be used, what functions there will be, what the interface layout will look like. What I expect from it is not a working application but an account of how it works. After I have clearly pointed the AI in the right direction, I ask it to write a minimal implementation plus a description of how it works, what needs to be added, and a TODO list. The application may not work, or may work crudely, but thanks to the AI I consulted, I know which direction to go. This may seem slow, since explaining takes not 30 minutes but much more; in return, I have a minimal structure, a minimal raw prototype, and, most importantly, a well-thought-out plan instead of random thoughts from my head.

If you use AI as a teacher, an advisor, whatever you want to call it, then AI can become your fulcrum instead of your "boss".

Harsh

This is a fantastic perspective. You've articulated something crucial: using AI as a consultant rather than a boss is the real paradigm shift. Most developers treat AI like a code factory: "build this app." But you're treating it like a senior architect: "here's my thinking, help me refine it before we build."

The distinction you're making, describing how something should work rather than just what it should do, is exactly where AI stops being a toy and starts being a leverage point. The TODO list plus minimal implementation approach is smart. It turns AI from a black-box output generator into a thinking partner that helps you structure your own understanding.

Question for you: have you ever prompted AI to critique your architecture or suggest alternatives you hadn't considered? Sometimes the biggest value isn't in getting the plan validated but in discovering approaches outside your own mental models. That's where AI truly becomes a teacher.

EmberNoGlow

"Critique your architecture": well, I already know the answer to that question. My code is terrible and I know it; I don't follow PEP 8, I don't follow the architecture. I've literally been trying to refactor my app for a month now, and it feels more and more like a piecemeal mess. Unfortunately, the AI is useless here: I have too much code for it to process, so I have to do almost everything myself, and even when it can handle the amount of code, it gives a vague structure and unhelpful advice about what to move where, not about how to properly organize dependencies.

"Suggest alternatives": this was common at the beginning of my project, but over time the project became more linear because I had laid the groundwork for something bigger. Of course, I have doubts about some libraries now, like the pyimgui binding, but replacing them in a pigsty like my codebase is a big risk at this stage, so I'm putting off all the alternatives for a bright (or not so bright) future.

Harsh

This is the reality nobody talks about. AI is magical until you hit 10,000+ lines of what you aptly call a pigsty. Then it becomes useless because it lacks context. The "vague structure and stupid advice" problem is real: AI suggests moving things around without understanding why the dependencies exist in the first place.

Here's something that might help: instead of feeding AI your entire codebase (which it can't handle), try feeding it just the interfaces between modules. Show it function signatures, data flow, and dependency graphs. Ask it to identify circular dependencies or suggest where boundaries should be. AI doesn't need to see the implementation to critique the architecture; it just needs to see the connections. I've done this successfully with a legacy Django project that was too big for AI to consume whole.
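The "interfaces only" idea can be sketched with Python's standard `ast` module. A minimal sketch; a real project would also walk imports, class methods, and type annotations:

```python
import ast

def extract_interface(source: str) -> list[str]:
    """Return top-level function and class signatures without their bodies,
    small enough to fit in a model's context window."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Positional argument names only, for brevity.
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}")
    return lines

module = "def load(path, cache=None):\n    return None\n\nclass Store:\n    pass\n"
print(extract_interface(module))
```

Run over every module, the output is a compact map of the codebase's connections that fits in a single prompt.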

Also, you're absolutely right about not swapping libraries like pyimgui mid-stream. That's a "future self" problem. First get some tests in place (even if they're ugly), then refactor. Tests are the safety net that lets you make changes without fear. Without them, every change is a gamble.

EmberNoGlow

Thanks for the advice! By the way, I've never used tests. It may seem strange, and I have no real explanation for it; I just never learned how to use them. Perhaps it was an oversight. Well, live and learn.

Gleno

Exactly. And AI keeps improving in all areas, too.

Harsh

Exactly. And that's why 'AI can't do X' statements have a short shelf life. What AI can't do today, it will do tomorrow. The real question: what will humans do then?

Gleno

So true. Only a few devs I know think like us; heads are firmly in the sand.

Work Break

10%

Takashi Fujino

The 90% number is misleading because it conflates "lines of code generated" with "working systems shipped." AI can generate code fast. It can't architect a system, debug edge cases reliably, or understand why the business logic needs to work a certain way.

What actually changes: the job shifts from writing code to designing systems and reviewing output. The developers who treat AI as a thinking partner instead of a replacement will be fine. The ones waiting for it to do everything will get stuck.

Harsh

This is the clearest, most level-headed take in this entire thread. Conflating "lines generated" with "systems shipped" is exactly the mistake that leads to overhyped expectations and underdelivered value.

AI can't architect, can't debug edge cases, can't understand business logic: that's the 10% that's 100% of the value. And that 10% is still entirely human territory. No amount of prompt engineering changes that.

The framing of thinking partner vs. replacement is perfect. The developers who treat AI as a collaborator that handles the mechanical parts while they focus on design, tradeoffs, and context are the ones who will thrive. The ones waiting for AI to do everything will find themselves irrelevant, not because AI replaced them, but because they replaced themselves.

Your closing line says it all: "The ones waiting for it to do everything will get stuck." AI won't replace developers. But developers who use AI will replace those who don't.

Takashi Fujino

Totally agree. "The 10% that's 100% of the value" — that framing deserves way more attention. We're seeing the same pattern in our reviews too. The tools that generate the most code aren't necessarily the ones shipping the best products.

Harsh

Exactly. "The 10% that's 100% of the value" is the framing that should end every AI-code debate. We've been seduced by volume (more code must mean more progress) when in reality, most code is just noise.

Your observation is spot-on: the tools that generate the most code aren't shipping the best products. Because products aren't built by lines of code. They're built by decisions. And those decisions (what to build, what to leave out, when to stop) are still human.

Maybe we need a new metric: not "lines of code" but "decisions per line." The code that embodies a hard-won decision is worth 100x more than boilerplate. AI gives us more boilerplate. We still need to make the decisions.

Takashi Fujino

"Decisions per line" — that's a metric worth stealing! Might have to reference that in our next review.

Harsh

Steal away! That's what metrics are for. 😄

Would love to hear how it lands in your next review; I'm curious whether it sparks different conversations than lines-of-code ever did. Let me know how it goes!

Takashi Fujino

"Decisions per line" is a genuinely useful reframe. It shifts the conversation from productivity theater to actual engineering judgment — which is exactly where the value sits in an AI-augmented workflow. Curious whether you think that metric would change how teams evaluate AI coding tools too.

OpenClaw Cash

If 90% of the code is generated, the other 10% becomes 100% of the value. Our job shifts from being syntax mechanics to being system architects. We stop typing and start deciding. The "What" and the "Why" finally become more important than the "How." We at openclawcash.com understood that and have built it into our dev flow.

Harsh

This is brilliantly phrased: "If 90% is generated, the other 10% becomes 100% of the value." That's the new math of software development. From syntax mechanics to system architects: that's the transition every developer needs to make.

"We stop typing and start deciding": that line captures the entire paradigm shift. The keyboard becomes less important than the brain. The "What" and the "Why" finally take their rightful place above the "How."

Glad to hear openclawcash.com has embedded this into your dev flow. This isn't just a process change; it's a mindset shift. And the teams that embrace it early will define the next decade of software.

Apex Stack

This resonates hard — and not just for code. I'm seeing the exact same pattern with AI-generated content at scale.

I run a 100k+ page multilingual site where a local LLM generates the analysis text for every stock page across 12 languages. The "90% AI-generated" reality is already here for content. And the lesson is identical to yours: the value isn't in the generation, it's in knowing what to generate and whether the output is actually good.

Your filtering system story is my content pipeline story. Early on I let the LLM generate thousands of pages without deeply understanding the output patterns. Google crawled 51,000 of them and rejected them all — "crawled, not indexed." The AI produced content that looked right, passed basic checks, but lacked the quality signals that matter. I had built 50,000 pages I didn't really understand.

The fix was the same as Rohan's approach: slow down the acceptance. I now audit samples from every batch, check for factual accuracy against the actual financial data, and verify that the analysis says something a human analyst would actually find useful — not just something that reads well.

The 19% slower finding from the RCT you cited maps perfectly to content too. When I added human review checkpoints to the pipeline, throughput dropped but the pages that made it through started actually getting indexed. Slower acceptance, better outcomes.

Your point about the 10% being "everything that matters" is the key insight. For code it's architecture and debugging. For content it's editorial judgment and domain expertise. The AI handles volume — the human handles value.

Harsh

This is a striking parallel: code and content, same pattern, same problem. The "crawled, not indexed" story with 51,000 pages is haunting. You built 50,000 pages you didn't actually understand. That's the perfect metaphor for AI's illusion of productivity.

Your key insight: "The AI produced content that looked right, passed basic checks, but lacked the quality signals that matter." This is AI's most dangerous trait: it's a master of plausibility, not truth. It generates things that seem correct, but correctness isn't the same as quality.

The fix you implemented (sampling, factual-accuracy checks, asking "would a human analyst find this useful?") is exactly the human-in-the-loop that turns volume into value. 19% slower acceptance, 100% better outcomes. Worth every second of slowdown.

Question for you: how do you scale this audit process? As AI generates more content, human review becomes the bottleneck. Do you see a role for AI-assisted auditing (another AI checking the first AI's work)? Or does that risk creating a feedback loop of "plausible but not quite right" content? Also, have you noticed any language-specific issues? Does the AI perform differently in some of those 12 languages vs. others?

Apex Stack

Great questions — scaling the audit is exactly where I'm stuck right now, so I'll share what's working and what isn't.

For scaling: I use a tiered approach. Tier 1 is automated — schema validation, data freshness checks (is the stock price from today or last month?), structural checks (does every page have the required sections?). This catches maybe 60% of issues without human eyes. Tier 2 is statistical sampling — I pull random pages from each batch, compare the generated analysis against the actual financial data from the API, and flag any batch where the error rate exceeds a threshold. Only Tier 3 is full human review, reserved for new template types or when Tier 2 flags something.
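A Tier-1 pass like the one described could look something like this. A minimal sketch in Python; the section names and field names are invented for illustration, since the real page schema isn't shown in the thread:

```python
from datetime import date, timedelta

# Assumed section names -- purely illustrative, not the real schema.
REQUIRED_SECTIONS = ("summary", "fundamentals", "outlook")

def tier1_check(page: dict, today: date) -> list[str]:
    """Automated structural and freshness checks: anything flagged here
    never needs human eyes."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if section not in page:
            problems.append(f"missing section: {section}")
    # Freshness: is the stock price from today/yesterday, or last month?
    if today - page.get("price_date", date.min) > timedelta(days=1):
        problems.append("stale price data")
    return problems

page = {"summary": "...", "fundamentals": "...", "price_date": date(2026, 3, 1)}
print(tier1_check(page, today=date(2026, 3, 14)))
```

Batches whose Tier-1 failure rate crosses a threshold would then be escalated to the sampling tier.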

On AI auditing AI — I actually do this for one specific task: checking whether the generated text contradicts the numerical data on the same page. A second model reads the page and answers "does the analysis match the numbers?" This works because it's a narrow, verifiable question with a ground truth. But for subjective quality — "is this analysis actually useful to an investor?" — I agree with you, AI checking AI just creates a plausibility echo chamber. The second model has the same blind spots as the first.

Language-specific issues: absolutely yes. English output is consistently the strongest — richer vocabulary, more nuanced analysis. German and Dutch are solid (maybe 85% of English quality). Romance languages (French, Spanish, Portuguese) are decent but tend toward more generic phrasing. The real drop-off is in languages like Polish, Turkish, and Korean — the model produces grammatically correct text but the financial terminology gets wobbly. I've started using language-specific prompt templates with domain glossaries for the weaker languages, which helps but doesn't fully close the gap.

Harsh

This is incredibly practical and valuable; thank you for sharing such detail. Your tiered audit system is exactly how this should scale. Tier 1 catching 60% automatically, Tier 2 sampling for statistical confidence, Tier 3 only for exceptions: that's a model worth copying.

The AI-auditing-AI insight is crucial: it works for narrow, verifiable questions (like "does the text match the numbers?") but fails for subjective quality, where it just creates a plausibility echo chamber. That distinction matters: AI can check facts, but it can't judge value.

The language hierarchy you've observed is fascinating: English strongest, German/Dutch solid, Romance languages generic, and the drop-off in Polish/Turkish/Korean with wobbly terminology. This mirrors what many teams are seeing: multilingual AI claims often overpromise. Language-specific prompts and glossaries help, but as you said, the gap remains.

Question for you: have you considered fine-tuning smaller models specifically for financial content in those weaker languages? Or using a two-model approach: one for generation, another (maybe a smaller, specialized model) just for terminology verification in those languages?

Apex Stack

The two-model approach is actually where I'm heading next — and your framing is sharper than what I had in mind.

Right now I generate everything with a single Llama 3 instance, which works well for English and Germanic languages but struggles with financial terminology in Polish, Turkish, and Korean. Fine-tuning a smaller model per language sounds ideal in theory, but the economics don't work at my scale yet — I'd need labeled training data in each language, and the financial terminology corpus for Polish stock analysis doesn't exactly exist on Hugging Face.

What I'm leaning toward instead is closer to your two-model idea but cheaper: generate in the target language with the main model, then run a separate "terminology audit" pass where a second model (or even the same model with a different prompt) checks a curated list of ~200 financial terms per language against what was generated. Did it say "rendement" or did it hallucinate an English-Dutch hybrid? That's a verifiable question — exactly the kind of narrow check where AI auditing AI actually works.
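That narrow audit pass boils down to a check like the following. A minimal sketch in Python; the glossary entries are invented for illustration (the real ~200-term lists per language aren't shown):

```python
# Hypothetical glossary fragment: English term -> expected Dutch term.
GLOSSARY_NL = {"yield": "rendement", "share": "aandeel"}

def terminology_audit(text: str, glossary: dict[str, str]) -> list[str]:
    """Flag English financial terms that appear where the target-language
    term should have been used: a narrow, verifiable check."""
    lowered = text.lower()
    flags = []
    for english, target in glossary.items():
        if english in lowered and target not in lowered:
            flags.append(f"'{english}' used instead of '{target}'")
    return flags

print(terminology_audit("Het dividend yield is 4%.", GLOSSARY_NL))
```

Because the question has a ground truth (the term is either there or it isn't), this is exactly the kind of check that can be delegated to a second model, or even to no model at all.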

The interesting insight from running this at scale: the weaker languages don't just produce worse text — they produce differently wrong text. English hallucinations tend to be plausible but fabricated numbers. Polish hallucinations tend to be correct numbers wrapped in grammatically correct but semantically weird financial phrasing. Different failure modes need different audit strategies.

Your distinction between "AI can check facts but can't judge value" is the key principle here. I'm building the audit pipeline around it.

Harsh

This is a smart evolution of the idea. The terminology audit pass with a curated list of ~200 financial terms per language is elegant: it's narrow, verifiable, and cheap, exactly where AI-auditing-AI works best. And you're right, it doesn't need a separate fine-tuned model; a different prompt on the same model can do the job.

The most valuable insight: weaker languages fail differently. English hallucinations: plausible but wrong numbers. Polish hallucinations: correct numbers wrapped in terminologically weird phrasing. That's a profound observation. Each language has its own "failure personality," and audit strategies need to be tailored accordingly. A one-size-fits-all quality check will miss the Polish-specific issues.

Your point about fine-tuning economics is spot-on. Without labeled data, the two-model (generate + audit) approach isn't just practical; it might be more adaptable. You can evolve the audit prompts without retraining anything.

This whole approach (narrow, verifiable checks for facts and terminology, leaving subjective quality to humans) feels like the right architectural principle for AI content at scale. You're not just building a pipeline; you're building a philosophy around where AI can and can't be trusted.

Are you logging which terms fail most often per language? That data could eventually become the training set for a lightweight terminology model, if you ever decide fine-tuning becomes viable.

Apex Stack

Yes — we log everything. Every term that fails validation gets tagged with the language, the failure type (hallucinated number, unit confusion, cultural mismatch), and the source sentence. After a few months we had enough signal to build per-language "known fragile" lists.

The interesting finding: the failure logs revealed clusters, not random noise. Japanese failures concentrate around Western financial concepts that don't map cleanly (like "market cap" vs the Japanese convention of expressing company size). Dutch failures cluster around decimal/comma notation bleeding into the prose. Portuguese has a whole category around formal vs informal financial register.
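The tagging-and-clustering workflow described above amounts to something like this. A minimal sketch in Python; the example records and field values are invented for illustration:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TermFailure:
    # Fields mirror the comment: language, failure type, source sentence.
    language: str
    failure_type: str
    sentence: str

def cluster_failures(log: list[TermFailure]) -> Counter:
    """Count failures per (language, failure type); the per-language
    'known fragile' lists fall out of counts like these."""
    return Counter((f.language, f.failure_type) for f in log)

log = [
    TermFailure("ja", "concept mismatch", "..."),
    TermFailure("ja", "concept mismatch", "..."),
    TermFailure("nl", "unit confusion", "..."),
]
print(cluster_failures(log).most_common(1))
```

If the top clusters are stable across months, that's the signal that failures are systematic rather than random noise.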

We haven't fed these back into fine-tuning yet — right now they drive the audit rules directly. But you're right that this is basically a curated training dataset being built organically. The question is whether fine-tuning on failure cases would make the base model worse at the languages where it already performs well. That's the experiment I want to run next.

The philosophy framing you mentioned resonates. What we're really building is a trust boundary map — here's where the model is reliable, here's where it needs a human check, and here's where it shouldn't be used at all. That map looks completely different per language, which I think most people deploying multilingual AI don't appreciate.

Harsh

This is valuable work. The systematic logging, tagging, and clustering of failures means you're not just building an audit system; you're building a taxonomy of AI failure modes across languages. That's a contribution to the field, not just to your project.

The cluster findings are fascinating: Japanese struggles with Western financial concepts, Dutch with decimal notation bleeding into prose, Portuguese with register. Each language has its own "error fingerprint." This kind of granular insight is exactly what's missing from most multilingual AI deployments. People assume "multilingual" means "equally capable in every language"; you're showing it means "fails differently in each."

The fine-tuning dilemma is real: training on failure cases might improve weak languages but degrade strong ones. Catastrophic forgetting isn't just theoretical; it's a practical risk. Have you considered language-specific adapters (LoRA) instead of full fine-tuning? That way you could improve Polish without touching the English weights.

And yes, "trust boundary map" is the right framework: visualizing where the model is reliable, where it needs human oversight, and where it shouldn't be used at all. That map is different for every language, and most people deploying multilingual AI don't even know it exists. You're not just solving your own problem; you're sketching a blueprint for responsible multilingual AI deployment.

Jaideep Parashar

AI can generate the code, but we still have to optimise the complete development process. The old model of just writing the code is over; now we need to work on building the intelligence system around it.

Harsh

"building the intelligence system": that's exactly the right framing.

the shift isn't from coding to not coding. it's from building features to building the system that builds features.

that's a fundamentally different skill set and most developers haven't even started making that transition yet.

Jaideep Parashar

Well said, that’s exactly the shift. It’s no longer about building features, but designing systems that can reliably produce and evolve them.

That requires a different skill set: architecture, workflows, and evaluation, areas many developers are only beginning to explore.

Ayşe Beyaz

You're absolutely right about what you're saying. AI will automate many of the IT tasks that are done today and were done in the past. And when that happens, it will transform into a world of developers who can understand the code, perform root cause analysis, and find solutions when the system goes down at 2 AM. A small community with high competence, capable of understanding the code and taking action, will continue to do the work. However, what I'm worried about is the situation of junior graduates like myself, with very little experience (1 year), not being able to find a place in the industry. Opportunities aren't given to junior employees in the sector; therefore, we can't gain the experience that senior employees have, because we're not given the chance. At this point, what I'm wondering is what will happen to juniors.

Harsh

this is the part of the conversation that genuinely keeps me up at night.

the catch-22 is real: you can't get experience without opportunities, and opportunities are disappearing because "AI can do it."

but here's what i actually believe: the developers who will matter most in 5 years are the ones who understand systems deeply. and the fastest way to build that understanding right now isn't through junior roles; it's through building in public, contributing to open source, and writing about what you're learning.

the path is harder than it used to be. but it still exists. don't stop. 🙏

Ayşe Beyaz

Contributing to open-source projects, developing your own projects, and sharing what you've learned on platforms like Medium are truly things that can make a difference. I hear this from different people in the industry. These are some of the best ways to make our knowledge visible.

But there's another side to the coin: there are many juniors and recent graduates like me. We all do similar things, and often we wait in line with hope. If more people find jobs by following this path, this approach will be considered "the right thing to do." But for those who do the same thing but can't find a job, this situation can turn into the thought, "there's no place for me in this industry."

The bigger and more disturbing reality is this: beyond these efforts, with the impact of AI, not only juniors but also many mid-level and senior-level professionals will struggle to find jobs in this industry over time. This is the real concern.

Because if this scenario occurs and a person's only skill is coding, these people may become unemployed and struggle to make ends meet.

Today, everyone is discussing the question, "Will AI take our jobs?" But perhaps that's not what we should be talking about anymore. The real question should be: If this happens, what will the unemployed IT workers do? Which areas will they be directed to? How will this transformation be managed?

We really need to start developing concrete ideas on this.

Harsh

you've moved the conversation to exactly where it needs to go.

"will AI take our jobs?" is the wrong question because it's passive. the right question, yours, is: what happens to the people it displaces, and who is responsible for managing that transition?

the honest answer is nobody has a good plan for this yet. the companies benefiting from AI productivity gains aren't the ones funding retraining programs. and "learn to prompt better" isn't a career transition strategy for a 45-year-old mid-level developer.

i don't have a clean answer. but i think you're right that we need concrete ideas and that starts with people asking the question you just asked, loudly and repeatedly, until someone with actual power has to respond.

Ayşe Beyaz

i hope we find a valid answer soon

tq-bit

Code written =/= feature successfully shipped.

Even if AI wrote all the code in the world, it's safe to say that the 'real' engineers and architects would never have to worry about their jobs.

Once there's a tool that understands my customers' issues - that means analyzing, asking questions, coming to agreements with key-users, and so on- I'll happily retire and become a goose farmer. But that (probably) won't happen for a while.

Also, project owners and service managers usually like to blame the developer when stuff breaks. Who are they going to call out when their prompt causes the issue?

Harsh

This is brilliant, especially the goose farmer punchline! 😄 But underneath the humor is a profound truth.

'Code written ≠ feature successfully shipped': this is the equation AI can't crack. Writing code is just one part of delivery, and probably the easiest part. Everything else (understanding the problem, negotiating with stakeholders, translating business needs into technical reality) is still human territory.

'Real engineers and architects will never worry': exactly. AI won't replace the people who can think systemically, understand tradeoffs, and anticipate future problems. Those capabilities live in minds, not in code.

And the best line: 'Who are they going to call out when their prompt causes the issue?' 😂 This is the real question. Today, when code breaks, the developer is responsible. Tomorrow, when AI-generated code breaks, who's responsible? The developer who wrote the prompt? The AI? The manager who pushed the deadline? This ambiguity of accountability is going to be one of the biggest challenges of the AI era.

And yes, when a tool arrives that truly understands customer problems... we'll all happily become goose farmers.

finewiki

It's true that technology is developing very fast, and in many ways AI is taking on huge tasks for us, but I think for these agents to really take over work from developers, they need certain perceptions. As long as AI doesn't consider the importance of what it creates, what it produces is nothing more than what it's given. The thing that will actually put the industry at risk isn't AI itself, but developers who know how to use AI agents really well. In this process, we need to understand systems engineering better; as long as we raise developers who can understand the foundations of a project, debug it, and grasp its inter-system requirements, AI agents will remain just a supporting tool. These risks don't scare us; they actually show us why we need to truly improve. As long as we understand and can regulate what we create, there won't be a problem. AI agents would need genuine machine perception to design systems more practically and to handle dozens of complex things more clearly.

Harsh

This is a beautifully nuanced take. You've articulated something crucial:

'AI doesn't consider the importance of what it creates': that's the fundamental gap. AI can generate, but it can't value. It doesn't know which features will make users cry with joy and which bugs will bring down a business. It operates without context, without consequence.

And your key insight: the real risk isn't AI; it's developers who wield AI without understanding systems. Give a powerful tool to someone who doesn't understand the underlying mechanics, and you've just created a more efficient way to build complex problems.

What I love most: 'These risks don't scare us; they show why we need to improve.' That's the right mindset. AI is exposing our gaps. Now we fill them.

You're absolutely right: deep systems engineering understanding is the moat. Developers who can grasp foundations, debug across boundaries, and hold the mental model of the entire system: those are the ones AI will augment, not replace. The machine perception you mention? That's the next frontier. Until then, understanding is ours alone.

Christie Cosky

We're having a problem in code reviews where when senior devs ask junior devs why they coded something the way they did, the junior devs shrug and say "AI told me to do it." It's fast to generate code, but it takes discipline to sit down, read through it, refactor it into something readable/maintainable/that matches the system patterns, and actually understand what it's doing.

Earlier this year I generated 11k lines of code in 3 days with Claude, but it took twice that amount of time to turn it into something I understood and was comfortable merging into master. So faster in some ways, yes - it would have taken me at least two weeks to write it by hand, and it wouldn't have been as good or thorough. But the review process goes much slower now.

The bottleneck was never the speed of typing. It was always understanding. But AI has made that gap even bigger.

Harsh

You've hit on the most under-discussed cost of AI-generated code: the understanding gap. 'AI told me to do it' isn't just a frustrating answer; it's a red flag that the junior developer has outsourced not just the coding, but the thinking. And as you rightly point out, the bottleneck was never typing speed; it was always comprehension.

Your 11k lines in 3 days example is perfect. The generation was fast, but the assimilation cost was double. That's the hidden tax on AI-assisted development: we celebrate the speed of creation while ignoring the slowdown in understanding.

Here's the uncomfortable question: Are we accidentally training juniors to be AI prompters instead of engineers? If they learn to rely on AI for solutions without developing the ability to critique, refactor, or even explain those solutions, we're creating a generation of developers who can generate code but can't own it.

Maybe code reviews need to evolve too. Instead of just reviewing the code, review the understanding. Ask: 'Explain this function in your own words. Why did the AI choose this approach? What are its tradeoffs? What would you have done differently?' Make the AI output just the starting point, not the final answer.

Marco Allegretti

Compilers, frameworks, and APIs; we're talking about LLMs now, but the history of computer science has always been based on layers of abstraction and our growing affinity for human-machine interaction. So I believe AI-generated code is just another piece in the story of abstraction.

Harsh

Perfectly said. This is exactly how abstraction has always worked. Assembly → C → C++ → Python, manual memory management → garbage collection, on-prem → cloud. Each layer freed developers from lower-level concerns so they could focus on higher-order problems.

AI-generated code is just the next logical step. We're moving from 'how to write' to 'what to build.' First we told machines how to move their feet. Then we gave them frameworks to handle the path. Now we're telling LLMs the destination and letting them figure out the route.

You're right: it's not a break from history; it's a continuation. The abstraction story keeps getting taller. And like every abstraction before it, this one will enable new things while demanding new kinds of understanding. The question isn't whether it's valid; it's whether we're ready to think at this new level.

freerave

Spot on! As someone building a CLI tool suite in Python/TypeScript, I completely agree with point #2 (Review AI Code Like a Security Auditor). The AI can build the feature fast, but it often misses subtle vulnerabilities.
My question is: With AI generating 90% of the codebase, how do you see the role of traditional static analysis tools evolving? Will we need 'AI to audit AI', or is human intuition the only real defense against complex, context-specific security flaws?

Harsh

Great question. I think we're heading toward a layered defense:

Traditional SAST tools (SonarQube, ESLint, etc.) will handle the basics; they become the 'spell check' for AI code.

AI-auditing-AI will absolutely emerge, especially for pattern-based flaws. One model can scan another's output for known vulnerability patterns faster than any human.

But complex, context-specific flaws, like business logic vulnerabilities, will still demand human intuition. AI doesn't understand your specific threat model, your user base, or the unwritten rules of your domain.

So the answer isn't either/or. It's: AI audits the patterns, humans audit the meaning. We become the guardians of the guardians, the ones who ask, 'This code is secure by the book, but is it secure for our users, in our context?'
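To make the layering concrete, here's a toy sketch of that defense in Python. Everything in it is illustrative: the two regex rules stand in for a real SAST tool, and the `needs_human_review` keyword list stands in for routing business-logic code to a human; a real AI-audit layer would slot in between the two.

```python
import re

def sast_layer(code: str) -> list[str]:
    # Layer 1, the "spell check": cheap pattern rules for mechanical flaws.
    findings = []
    if re.search(r"\beval\(", code):
        findings.append("dangerous eval()")
    if re.search(r"(?i)password\s*=\s*['\"]", code):
        findings.append("hardcoded credential")
    return findings

def needs_human_review(code: str) -> bool:
    # Final layer: anything touching auth or money gets human eyes, always.
    return any(word in code for word in ("authorize", "charge", "refund"))

def review(code: str) -> dict:
    # Combine the layers; block the merge if any layer objects.
    report = {"sast": sast_layer(code), "human": needs_human_review(code)}
    report["merge_blocked"] = bool(report["sast"]) or report["human"]
    return report
```

The point of the structure, not the specific rules: cheap mechanical checks run on everything, while context-sensitive code is routed to the layer (a human) that actually holds the context.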

BK Lau

We'll soon be bakers, masseurs, and beer brewers!
Coding? Meh. It's for suckers.

Harsh

honestly not the worst pivot 😄

though i suspect the best bakers and brewers will be the ones who understand the craft deeply enough to know when the machine got it wrong.

turns out judgment is useful everywhere.

Apogee Watcher

"AI can write a component. It cannot design a system."

If you prompt it well enough, it can also design a system :)

Harsh

fair point, and i've seen it work for well-defined systems with clear constraints.

but 'prompt it well enough' is doing a lot of heavy lifting there. knowing what constraints matter, what tradeoffs to encode, what failure modes to guard against: that knowledge has to come from somewhere.

the better the prompt, the more system understanding already lives in the prompter. which means the design still happens, just in the prompt instead of the code.

Mihir kanzariya

the security auditor angle is really underrated imo. I've been reviewing AI generated code for months now and the patterns are wild. it writes code that looks correct, passes tests, but has subtle issues that only show up under real world conditions.

biggest one I keep seeing: AI loves to generate happy path code. error handling exists but it's surface level. like it'll catch a network error but won't handle partial failures or race conditions. you basically need to think about every edge case yourself still.
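That happy-path pattern is easy to show in miniature. A hedged sketch with hypothetical function names: the first version is the shape AI tends to emit, where one try/except wraps the whole batch and a single bad item sinks everything; the second tracks partial failures so the rest of the batch survives.

```python
def process_batch_naive(items, process_item):
    # Typical generated shape: one try/except around everything.
    # Looks like error handling; actually loses the whole batch
    # (including already-processed work) on the first bad item.
    try:
        return [process_item(item) for item in items], []
    except Exception:
        return [], list(items)

def process_batch_robust(items, process_item):
    # Partial-failure aware: each item succeeds or fails on its own,
    # and failures come back in a list the caller can retry.
    done, failed = [], []
    for item in items:
        try:
            done.append(process_item(item))
        except Exception:
            failed.append(item)
    return done, failed
```

Both versions pass a happy-path test, which is exactly why the naive one survives review; the difference only shows up when one item in a real batch misbehaves.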

the system design point is spot on too. AI can write components all day but it can't tell you whether you should use a queue vs polling, or when denormalization actually makes sense for your specific read patterns.

Harsh

This is gold dust: real-world experience that's missing from most AI debates. 'Happy path code' is the perfect term for it. AI doesn't know about production; it knows about examples. And examples are almost always happy paths.

'Error handling is surface level': this is AI's biggest blind spot. It catches the obvious errors (network timeout) but misses the subtle ones (partial failures, race conditions, deadlocks), because those require understanding system behavior, not just syntax patterns.

And the key insight: 'AI can't tell you whether to use queue vs polling.' Exactly. AI can write components, but architectural decisions, which are about tradeoffs rather than just implementation, are still beyond its reach.

Your experience points to a new reality: AI code review is now its own skill. We used to review for correctness. Now we also need to review for: Did AI consider edge cases? Is the error handling real or cosmetic? Did it make the right architectural assumptions? This is harder work, not easier.

Jimm • Edited

Good article with useful insights. Will we reach 90%? I doubt it. As the article points out, the most important person is the one who understands the code, no matter what created it. Which means, currently, a well-done system may have started out with AI tools, but a savvy developer will have re-written or proofed most of the code to follow the necessary development and security standards.

AI might write 90%, but it will more likely be us developers that will re-write 90% of that 90% to make sure it works, is secure, and meets expectations. AI is a tool, not a solution.

I currently do not employ AI tools; I already have a pretty speedy workflow and development scaffold. We actually ran a race against Base44 (a popular app development website) to develop a simple taxi-cab-meter app with about 6-7 key features and functions. It took about 3 days for the developer using Base44 to get the app into a working state. It took me and a colleague, pair-programming, a week to do it 'raw' with some scaffolding and automated testing. After code review, the Base44 code had about 15 issues, ranging from security to basic SQL problems to coding-standards compliance, and it still did not work quite right. Our version had 2 issues, which took us about an hour to sort out, including a lunch break. Our app worked as intended.

Yes, we were slower, but the added work the Base44 developer had to do extended their workload another 5-6 days to reach the same level of automated-testing acceptance. Humans: 1, AI-assisted coding: 0.

Harsh

This is the most valuable kind of comment because it moves the conversation from theory to practice. You ran an actual experiment, collected real data, and shared the results. This is the evidence we need.

The Base44 experiment is telling. 3 days vs 7 days: yes, AI-assisted was faster initially. But 15 issues vs 2 issues, and then 5-6 more days of fixing: that's the real cost. Time-to-working isn't the same as time-to-shippable.

Most important point: 'I don't use AI tools because I already have a speedy workflow.' This is the truth that gets lost: AI's real competition isn't slow developers; it's developers who have optimized their own processes. A skilled pair-programming team with good scaffolding and automated testing is still hard to beat.

'Humans: 1, AI-assisted: 0': that scorecard isn't just about this one experiment. It's about every situation where understanding, experience, and judgment matter. AI isn't there yet.

Your experience reveals the hidden cost of AI-assisted development: the initial speed is visible; the later rework is invisible. And that invisible cost is often higher than we admit.

chengkai

the best way to overcome this is to set up a structural gate that prevents AI-generated, plausible-but-wrong code from ever reaching main: dev.to/wilddog64/i-built-the-guard.... Of course, humans still need to be involved throughout the process, reviewing and approving what AI has generated. The problem is not you or AI writing the code; it's how you properly gate your code.

Harsh

Absolutely right.
The distinction isn't really about whether a human or an AI wrote the code; it's about whether the code is properly gated before reaching production. AI models are great at velocity, but they often produce 'plausible-but-wrong' solutions. That's where a structural safeguard becomes essential.

A solid review process, paired with automated checks, ensures that while AI assists with generation, humans remain accountable for quality and correctness. Well said!

Crissxx Hernandez

You're right, that 90% sounds striking, but I think that's where we come in. AI can generate the repetitive code, but someone has to design the architecture and check that what the machine produces has no logic errors. In the end, we solve the problem, and AI is just the tool that writes it fast.

Harsh

'we solve the problem, AI just writes it fast': that's the clearest one-line summary of the right relationship with these tools i've seen.

the architecture and logic review point is exactly where the human value is irreplaceable. AI can generate the code. it can't generate the understanding of why the code should exist.

really appreciate you reading and glad the point resonated across languages! 🙏

Son Seong Jun

code gen's the easy part now. knowing what to ask it to generate? that's the job. been seeing too many PRs where it's technically correct but doesn't solve anything because the person prompting didn't actually understand the problem

Harsh

This hits the nail on the head. 'Technically correct but doesn't solve anything': that's the defining failure mode of the AI era. We've made code generation trivial, but problem understanding remains hard.

Before AI, there was a natural filter: if you didn't really understand the problem, you couldn't write the code. Now AI bridges that gap. Half-understood problem + AI = full PR. And it passes review because it looks right.

You've articulated the new core competency: not writing code, but knowing what code needs to be written. The 'what' matters more than the 'how' now. And that's a skill AI can't give you.

Marco Sbragi

Maybe the statistics are right, but what do they mean?

After many years of working as a full-stack developer and analyst, I've realized that my mind develops the solution to a problem as it's presented to me. I see the flow, the integrations, how to present data to the user. Today we live in a hyper-fast world; we want to instantly transform thoughts into action and results. But those of us who, like me, started when documentation and access to resources were limited could only work in our heads and on paper. We developed algorithms and patterns, and then, when the opportunity arose, we tested them. We became accustomed to "thinking" before doing.

The tools have changed over time, but the method shouldn't. We must continue to "think," knowing exactly what we want to achieve and, more importantly, how we want to achieve it, based on our current skills. Ever more powerful PCs, ever easier software, the cloud, discussion forums, and now artificial intelligence are just tools that must amplify "our" capabilities. When I started, the only tasks I was assigned were refactoring code written by others. It wasn't the best, but it helped me grow; you had to understand first and then act.

Today, in the age of AI and Vibe Coding, understanding is no longer considered important, and this, in my opinion, is the limit. The great risk is that AI becomes the architect and we become the "employees." There's a commercial that says "power is nothing without control." It's true: we give orders and AI executes, but we must be able to give the right orders; if garbage goes in, garbage will come out.

Be careful, I'm speaking as a developer, not as a user looking to solve a specific problem.

This is a fundamental distinction; we're on a developer platform, so AI is welcome to help us, assist us, and implement, as quickly as possible, the solution we already have in mind.

Harsh

This is one of those comments that deserves to be read slowly, multiple times. 'Thinking before doing': the discipline most at risk in the AI era.

You've drawn a crucial distinction: developer vs. user. For a user, AI is a magic box: input in, output out. For a developer, it's a tool, not a master. And to wield a tool properly, you need to understand what it's doing, why it's doing it, and when it's wrong.

'Power is nothing without control': that ad campaign tagline should be the mantra of the AI age. AI has the power. We need to have the control. And control requires understanding.

The most valuable insight: 'We developed algorithms and patterns, then tested them.' Today's approach is reversed: act first, understand later. That's not evolution; it's regression.

I completely agree: AI should be the architect's assistant, not the architect itself. The foundation must be our understanding, not AI's output. Otherwise, we're not developers anymore; we're just prompters with a false sense of ownership.

klement Gunndu

That 19% slower finding from the RCT hit home — I shipped a feature I couldn't debug because I never understood the state flow the AI generated. The real skill gap isn't writing code, it's reading code you didn't write.

Harsh

This hits hard because it's so relatable. 'Shipped it but couldn't debug it': that's the hidden tax on AI-generated code. We celebrate the speed of creation but ignore the cost of comprehension. And you're absolutely right: reading code you didn't write is now the core skill. AI made us all maintainers overnight.

Herold Brine

AI specialists have not yet taught the machine to write decent code on the first try (not counting the good models, which not everyone has access to), and it is not considered a crime to write an easy, standard program using AI to save time. Moreover, even good code written by AI needs to be manually adjusted to achieve the desired result. This is not a replacement for humans, but rather a new tool to help us make more progress in our short mortal lives.

Harsh

Well said: 'decent code on the first try' is still a dream, not reality. Yes, AI can churn out code quickly, but there's a gap between 'decent' and 'correct', and filling that gap still requires human judgment.

Important point: 'Not everyone has access to the good ones.' This is often overlooked. Most developers have access to the same mediocre AI tools that generate mediocre code. And fixing mediocre code often takes as long as writing it from scratch, sometimes longer.

The key insight: 'More progress in our short mortal lives'. That's the real purpose of AI. It's not replacing us; it's augmenting us. Calculators didn't replace mathematicians; they freed them to focus on harder problems.

AI is a tool, not a replacement. And a tool is only as good as the person wielding it. Give a great tool to someone who doesn't understand the craft, and you just get faster bad results.

Son Seong Jun

yeah, and the worst part is when you get a perfectly formatted response that's solving for the wrong thing entirely. had a junior ask claude to 'optimize this query' without understanding what the actual bottleneck was — nailed the optimization, broke latency. problem was the schema, not the query.

Harsh

This is the perfect cautionary tale for the AI era: right answer, wrong question. The query got optimized, but latency got worse. Why? Because the problem wasn't the query; it was the schema. AI did exactly what was asked, but what was asked was the wrong thing.

This is the danger of using AI without domain understanding. The junior asked, 'optimize this query.' AI optimized it. But the real question should have been 'why is the system slow?' That question never got asked.

'Technically correct but practically useless': that's AI's most deceptive failure mode. It looks like progress, but it's just efficiently solving the wrong problem. The fix isn't better AI; it's better problem diagnosis before prompting.
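The anecdote compresses nicely into a data-structure analogy. In this illustrative sketch (all names invented), `find_scan` is the 'optimized query' over the wrong layout, still O(n) per lookup however you tune the loop; building `by_id` is the 'schema fix', after which the same question costs O(1). No amount of polishing the query beats changing the structure.

```python
# 10,000 fake user rows, stored as a flat list ("the schema").
users = [{"id": i, "name": f"u{i}"} for i in range(10_000)]

def find_scan(uid):
    # The "optimized query": however you tune this loop, it's O(n).
    for row in users:
        if row["id"] == uid:
            return row
    return None

# The "schema fix": build an index once; every lookup after that is O(1).
by_id = {row["id"]: row for row in users}

def find_indexed(uid):
    return by_id.get(uid)
```

Both functions return identical answers, which is the trap: a benchmark of a single call looks fine either way, and only diagnosing where the time actually goes reveals which one to fix.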

William Wang

The 90% stat misses the point. Writing code was never the bottleneck — understanding what to build was. AI generates code faster, but someone still needs to know which code to generate, how to evaluate if it's correct, and when to throw it away. The job title changes from "code writer" to "code editor" but the skill ceiling actually goes up, not down.

Harsh

Perfectly said: 'Writing code was never the bottleneck; understanding what to build was.' This is the truth that gets lost in all the AI hype. We've always spent more time thinking than typing. The ratio just got more obvious.

'Code writer to code editor' is the perfect framing. A writer starts from a blank page; an editor evaluates, refines, and discards. That's not a demotion; it's a promotion to a role with more responsibility, not less.

And crucially: the skill ceiling goes up, not down. AI may have automated the mechanical parts, but expertise just became more valuable. The people who only knew syntax are in trouble. The people who understand systems, diagnose problems, and ask the right questions? They're more essential than ever.

'When to throw it away': this is the part nobody talks about. Half of what AI generates is garbage. Knowing what to keep and what to delete is now a core competency. That's not easier than writing code; it's harder.

Joske Vermeulen

The 90% that gets generated is the easy 90%. The remaining 10%: architecture decisions, debugging weird edge cases, understanding what to build in the first place, that's where the job always was. AI just made the boring parts go faster.

Harsh

'AI just made the boring parts go faster': that's the most accurate one-line summary of what actually changed.

the 90% was never the job. it was the overhead. the job was always the 10%: figuring out what to build, why it should work a certain way, what breaks when you change it.

what's interesting is that the 90% going faster doesn't automatically free up more time for the 10%. it often just means more 90% gets generated. more features, more code, more surface area to understand and maintain.

the teams winning right now aren't the ones generating the most. they're the ones spending the time they saved on better architecture decisions and deeper debugging not just more generation.

ashish111333

yes, it's true. most of the frontend devs I know don't write code by hand anymore: maybe 5 percent by hand, 95% AI-generated. same for the backend devs. and it's also helping devops people.

Harsh

the 95% figure for frontend feels right, honestly. and the devops point is interesting, because infrastructure-as-code was already abstracting a lot of the manual work before AI even arrived.

the part that worries me, though, is that the 5% who still write by hand are the ones who can debug when the 95% breaks in production.

as that 5% gets smaller, that safety net gets thinner.

golden Star

In fact, the code generated by AI is excellent and bug-free.
However, we need to provide direction.
And AI can't help us 100%.

Harsh

Well said. AI code can be excellent, but there's a difference between 'excellent' and 'right.' AI generates what you ask for; the question is whether you asked for the right thing.

'Providing direction': that's what AI can't do. Direction requires vision, context, and an understanding of why something matters. That's still human territory.

And yes, AI can't help us 100%, and maybe it shouldn't. 100% help would mean 100% dependence, and dependence means the erosion of understanding. AI should be our assistant, not our replacement.

golden Star

ok

vocalis AI

amazing

Harsh

Thank you so much!

SANDIPAN

Code is out there, whether public or sold by companies. If you have true skills and the interest to learn, nothing can stop you.

Harsh

Absolutely right. Lack of resources is no longer an excuse. Millions of public repos, free tutorials, documentation, and now AI: there's never been more access to learning.

The real barriers have always been curiosity and discipline. If you have those, you can learn anything from anywhere. Everything else is just noise.

And yes, the 'interest to learn' is the differentiator. It's what separates code writers from problem solvers. AI can amplify that interest, but it can't create it.

AIクリエーターの道|ジョン G.

Excellent write-up. The step-by-step approach makes it very approachable.

Harsh

Thank you! Really appreciate you taking the time to read it. Glad the step-by-step format worked for you; that's exactly what I was going for.

swayam

great insights

Harsh

Thank you! Really appreciate you taking the time to read and share your thoughts.