Hooman

How I learn in the age of AI coding

There is a strange thing that happens when AI starts writing your code for you. You might assume learning slows down. That you become passive. That you are just a prompt monkey, copy-pasting outputs and shipping features without really understanding what is going on under the hood.

That has not been my experience at all. At least not anymore.

If anything, I am learning differently now, and in some ways more richly than before. The topics I am picking up are not always the ones I expected to care about. But they are sticking, and they are making me a better builder. Let me explain what I mean.

The AI Codes, But I Still Have to Understand the Problem

When I work with an AI coding assistant, the code gets written fast. But bugs still surface. Edge cases still bite. And when something breaks, I still have to diagnose it. That diagnosis process is where a lot of my learning now lives.

A recent example: I was building a text-to-speech (TTS) feature. The AI scaffolded the whole thing quickly. But then things started going wrong in ways I did not immediately understand. Fixing those issues sent me down some genuinely interesting rabbit holes.

Learning About TTS Input Length Limits and Chunking

The first thing I ran into was that TTS APIs have input length limitations. Most of them cap how much text you can send in a single request. When I fed a long block of content into the API, it either failed silently or threw an error I did not immediately recognize.

The AI could generate a chunking solution for me, and it did. But to actually steer it toward the right solution, I had to understand the problem first. What counts as a “chunk”? Do you split on character count, word count, or sentence boundaries? What happens if you split mid-sentence? Does the audio sound jarring at the seam?

I learned that splitting on sentence boundaries produces much cleaner audio output. I learned about the tradeoffs between chunk size and API latency. I learned how to think about reassembling audio segments in the right order. None of this was in the original feature spec. All of it came from debugging a problem the AI helped create and then helped solve.
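To make the idea concrete, here is a minimal sketch of sentence-aware chunking. The `max_chars` limit and the function name are stand-ins I chose for illustration, not part of any particular TTS API:

```python
import re

def chunk_sentences(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into chunks that respect sentence boundaries.

    max_chars stands in for whatever input cap your TTS API enforces.
    """
    # Naive sentence split: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the cap.
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent to the API as a separate request, and the returned audio segments concatenated in index order, which avoids the jarring mid-sentence seams.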

Learning About Markdown and Special Characters in TTS

The second rabbit hole was even more interesting. When you pipe markdown content directly into a TTS engine, it reads everything. And I mean everything. The asterisks. The pound signs. The underscores. The hyphens used as bullet points.

Suddenly your clean article gets narrated as “asterisk asterisk important asterisk asterisk” and the whole thing sounds broken. This is not a bug exactly. It is just a mismatch between markdown, which is structured for visual rendering, and a speech engine, which processes raw text.

To fix it, I had to learn about stripping markdown before passing text to TTS. There are libraries that help with this, and the AI pointed me toward them. But understanding why the problem existed, and what categories of characters cause issues, meant I could write better prompts the next time. I could tell the model exactly what I needed: strip headers, remove emphasis markers, preserve sentence structure, handle code blocks gracefully.
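A rough sketch of the kind of stripping involved, assuming simple regex rules rather than any particular library. The patterns below are illustrative, not exhaustive; a real markdown parser handles many more cases:

```python
import re

def strip_markdown(text: str) -> str:
    """Remove common markdown syntax so a TTS engine reads only prose."""
    text = re.sub(r"```.*?```", "", text, flags=re.DOTALL)        # drop fenced code blocks
    text = re.sub(r"`([^`]*)`", r"\1", text)                      # unwrap inline code
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)    # strip headers
    text = re.sub(r"(\*\*|__)(.*?)\1", r"\2", text)               # unwrap bold
    text = re.sub(r"(\*|_)(.*?)\1", r"\2", text)                  # unwrap emphasis
    text = re.sub(r"^\s*[-*+]\s+", "", text, flags=re.MULTILINE)  # strip bullet markers
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)          # keep link text, drop URL
    return text.strip()
```

The point is less the exact regexes and more the categories they cover: headers, emphasis markers, bullets, code blocks, links. Knowing those categories is what makes the next prompt precise.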

That kind of specific instruction only comes from having gone through the problem once.

Learning About Porosity

This one came from a completely different project but the same pattern. I was working on something that involved understanding how materials or data structures “breathe,” for lack of a better word. How things pass through layers. The concept of porosity came up, and because the AI was doing the implementation, I had the mental space to actually sit with the concept rather than rushing to write code.

I looked it up. I read about it. I let myself get curious about it in a way that I might not have if I had been heads-down in syntax.

That is a real benefit of AI-assisted development that does not get talked about enough. When the mechanical parts of coding are handled, you get cognitive headroom to actually learn the domain you are building in.

How This Changes the Way I Work Next Time

Here is the part that matters most to me. Every one of these learning moments (TTS chunking, markdown stripping, porosity) does more than inform the current project. It changes how I approach the next one.

When I hit a TTS feature again, I will prompt the AI differently from the start. I will say: handle input length limits with sentence-aware chunking, strip markdown before synthesis, and return audio segments in order. That is a much better starting prompt than “build me a TTS feature.” The AI can only be as precise as the person steering it.

This is the core of how I think about learning in this era. The AI handles execution. My job is to build a richer and richer mental model of the problem space so that I can direct the execution better each time. The learning loop is still very much alive. It just runs through different channels now.

The Takeaway

AI coding has not made learning irrelevant. It has shifted what you need to learn and when. You spend less time memorizing syntax and more time understanding systems, constraints, and domain concepts. You learn through debugging, through curiosity, and through the iterative process of steering a model toward better outputs.

Every weird edge case is a lesson. Every broken feature is a map of something you did not know yet. And the next time you sit down to build something similar, you bring all of that with you.

That feels like learning to me.

Originally posted on my Substack
