<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lars Faye | Confident Coding</title>
    <description>The latest articles on DEV Community by Lars Faye | Confident Coding (@larsfaye).</description>
    <link>https://dev.to/larsfaye</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3804618%2F29d9b6d4-7936-4e5b-b683-4774f2a3b8b6.png</url>
      <title>DEV Community: Lars Faye | Confident Coding</title>
      <link>https://dev.to/larsfaye</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/larsfaye"/>
    <language>en</language>
    <item>
      <title>My AI workflow seems to be the opposite of what the industry is encouraging, and I don't care.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:32:00 +0000</pubDate>
      <link>https://dev.to/larsfaye/my-ai-workflow-seems-to-be-the-opposite-of-what-the-industry-is-encouraging-and-i-dont-care-14il</link>
      <guid>https://dev.to/larsfaye/my-ai-workflow-seems-to-be-the-opposite-of-what-the-industry-is-encouraging-and-i-dont-care-14il</guid>
      <description>&lt;p&gt;The general consensus is that you should spec out your project, requirements, generate a bullet-proof plan, and then implement it via some kind of agent workflow. I've attempted this numerous times, and yes, it works...sort of. It can produce an application to the spec, but the main issue I continuously run into is that it's really hard to think about all the nuances and caveats ahead of time, and its not until I see something start to come together where I realize I need to think about things differently. Any ambiguity and the LLMs fills in with assumptions (or hallucinations).&lt;/p&gt;

&lt;p&gt;I can just keep iterating with the agents, but it's just more token usage, more code churn, more disconnection from the codebase, and more potential complexity, since I need to trust the agent to refactor appropriately. It can be done, but I personally find it exhausting and rather annoying.&lt;/p&gt;

&lt;p&gt;Lately, I've started to do the opposite, which I'm sure the AI bros would balk at: I use the LLM to generate the plan, and I do the implementation. Especially starting from scratch, deciding how to architect and plan an app out can be challenging, and I often like to see examples of other architectures to help me decide how I want to structure things. In the past, I'd often look for starter repos as inspiration and then begin putting things together after getting some initial direction.&lt;/p&gt;

&lt;p&gt;With these AI tools, I can work with them to tailor an architecture and structure that suits my exact needs, plan the entire app, and then...&lt;strong&gt;build it myself&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Sure, I use these AI tools alongside me to delegate tasks and implement features on an as-needed basis, but it's highly incremental, and I'm still "manually" coding a good 50% of the project.&lt;/p&gt;

&lt;p&gt;This flies in the face of how I think the industry is hyping things and is seemingly the opposite of the &lt;em&gt;"let AI do the coding, you only do the planning &amp;amp; review"&lt;/em&gt; workflow, but I &lt;strong&gt;really&lt;/strong&gt; don't care. Anytime I've done that, I could feel the atrophy of my critical thinking setting in within a few days, and I felt like I had &lt;em&gt;inherited&lt;/em&gt; a codebase rather than helped &lt;em&gt;create&lt;/em&gt; one.&lt;/p&gt;

&lt;p&gt;Thinking through the project in code isn't just drudgery; it forces you to consider everything from security to performance to user experience to maintainability on a technical level. Trying to do that while staying in the "natural language" mindset doesn't get specific enough, and specificity is absolutely essential to doing this work successfully.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I'm thinking of putting together a course that focuses on webdev troubleshooting and debugging.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Tue, 24 Mar 2026 21:00:48 +0000</pubDate>
      <link>https://dev.to/larsfaye/im-thinking-of-putting-together-a-course-that-focuses-on-frontend-troubleshooting-and-debugging-2aph</link>
      <guid>https://dev.to/larsfaye/im-thinking-of-putting-together-a-course-that-focuses-on-frontend-troubleshooting-and-debugging-2aph</guid>
      <description>&lt;p&gt;I've been in the industry a while (back when tables were used for layout) and I've learned most of what I know through reverse engineering and breaking things/putting back together. I've always had a knack for it, and have helped a lot of developers over the years with tips and tricks I picked up along the way. I've had instances where I've found the solution in minutes that other developers were spending hours on. It's not like I was a better developer, it just seemed I had a process and mental framework whereas they would get overwhelmed on where to start.&lt;/p&gt;

&lt;p&gt;My theory is: if developers are more confident that they can troubleshoot problems, they're less likely to feel imposter syndrome. I find I'm at my happiest when I'm being helpful and working with other developers, so I'm moving on something I've wanted to do for over a decade and putting the course together.&lt;/p&gt;

&lt;p&gt;I'm working on content and still proving the concept out, so I'm curious what you all think. I want to focus on frontend workflows, although IMO, debugging skills are pretty universal.&lt;/p&gt;

&lt;p&gt;Landing page: &lt;a href="https://confident-coding.com/" rel="noopener noreferrer"&gt;https://confident-coding.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Everyone Can Delegate Now | AI is enabling every knowledge worker to learn management skills.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Mon, 23 Mar 2026 23:30:32 +0000</pubDate>
      <link>https://dev.to/larsfaye/everyone-can-delegate-now-ai-is-enabling-every-knowledge-worker-to-learn-management-skills-ci1</link>
      <guid>https://dev.to/larsfaye/everyone-can-delegate-now-ai-is-enabling-every-knowledge-worker-to-learn-management-skills-ci1</guid>
      <description>&lt;h2&gt;
  
  
  Delegating was once reserved for managerial positions.
&lt;/h2&gt;

&lt;p&gt;The ability to effectively delegate and outsource tasks was the primary role of a manager, who had, ideally, already spent time "in the trenches" doing the work they now assign to others.&lt;/p&gt;

&lt;p&gt;Compiling data and generating reports, creating slideshow presentations, implementing features per client requirements...this delegation process flowed from the project managers and was distributed across the team.&lt;/p&gt;

&lt;p&gt;Delegation was a skill that had to be honed and refined. You would move up the chain and take on higher-level work, &lt;em&gt;orchestrating&lt;/em&gt;, rather than &lt;em&gt;doing&lt;/em&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  AI tools can empower individuals to learn delegation.
&lt;/h2&gt;

&lt;p&gt;Because of their general ease of use, AI tools have proliferated quickly and we're still trying to pin down exactly what role they can effectively play, and how integrated they should be. The major shift that I have observed in the workplace, however, is that their introduction has created an opportunity where now &lt;strong&gt;everyone has the ability to leverage them for delegation&lt;/strong&gt;, no matter the type or scope of task.&lt;/p&gt;

&lt;p&gt;The ability to delegate is an empowering skill to practice. It offers a sense of freedom to the individual who might be facing down an exhaustive list of tasks: they don't have to do it &lt;em&gt;all&lt;/em&gt; themselves. They can learn how to manage a workload efficiently because there's a way to relieve some of the pressure. &lt;/p&gt;

&lt;p&gt;Tasks that once had to be completed by an individual (e.g. populating a spreadsheet, drafting an email) can now be outsourced to a system which can &lt;em&gt;loosely emulate human interaction&lt;/em&gt;. AI tooling has zero &lt;em&gt;interface&lt;/em&gt; learning curve: anybody can type into a chatbox. Just upload your request, requirements, and documents, and get a response. A draft of a report could be generated while you are on your lunch break, or article topics could be researched while you respond to emails.&lt;/p&gt;

&lt;p&gt;Using AI tooling for delegation is different from delegating directly to humans, but there are core similarities: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyzing one's own workload and learning how to prioritize effectively&lt;/li&gt;
&lt;li&gt;Breaking up large tasks into smaller, actionable pieces&lt;/li&gt;
&lt;li&gt;Deciding what to complete oneself versus what to assign to "someone else"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the past, all these tasks would sit on someone's desk until they were able to get to them. If there wasn't enough time in the day, the work didn't get done (and the backlog grew).&lt;/p&gt;

&lt;h2&gt;
  
  
  AI delegation is a skill that needs to be learned.
&lt;/h2&gt;

&lt;p&gt;While it's an empowering skill, delegating doesn't come naturally to everyone, and we should not conflate the ease of use of the tool with mastery over delegation as a skill. Delegating to AI in particular comes with some extra "gotchas", as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I've witnessed individuals not providing enough guidance and context, resulting in half-baked and incorrect results &lt;em&gt;(this isn't all that different with humans, either!)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I've seen people blindly using responses without verification because they are delegating work &lt;em&gt;outside their domain of expertise&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I've worked with people who didn't understand the nature of context window limitations and how to break their task up into smaller pieces, eventually being forced to abandon it and start over&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, while the &lt;em&gt;interface&lt;/em&gt; for an AI tool does not have a learning curve, these tools have an element of unpredictability &amp;amp; unreliability, and the responsible party needs to heavily scrutinize the outputs. If someone is not careful, the task could end up being more time-consuming than doing it manually (and they end up being more of a "micro-manager" than a "delegator").&lt;/p&gt;

&lt;p&gt;Now that &lt;em&gt;everyone&lt;/em&gt; has a potential &lt;em&gt;delegatee&lt;/em&gt; they can assign work to, this is a skill that will need to be learned by everyone who is in a knowledge work role. The ability to offload some of their responsibilities and tasks and mitigate some overhead &lt;strong&gt;can teach essential lessons around management and leadership&lt;/strong&gt;.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Efficiency breeds expectation.
&lt;/h2&gt;

&lt;p&gt;Still, there are legitimate concerns that it &lt;a href="https://www.theregister.com/2026/02/11/ai_makes_employees_work_harder/" rel="noopener noreferrer"&gt;won't reduce workload&lt;/a&gt;. When personal computers were rolled out into office spaces in the 70s/80s, the role of the secretary, with the narrow discipline of answering phones and taking messages, transformed to include word processing, electronic filing, spreadsheet management, and print queues. This resulted in a higher workload and &lt;em&gt;more&lt;/em&gt; skills to learn.&lt;/p&gt;

&lt;p&gt;Giving every individual the ability to delegate their work could create a situation where the increased amount of the 'resource' (an individual's productivity) ends up being consumed at an equal rate, negating any efficiency gains or resulting in &lt;a href="https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/" rel="noopener noreferrer"&gt;burnout&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Delegation is likely to evolve the workforce.
&lt;/h2&gt;

&lt;p&gt;The ability for workers to delegate is a skill that is going to have to be continuously refined over the years to come. For the employer, it should &lt;em&gt;not be an excuse&lt;/em&gt; to merely shoehorn more work into a workday, but to reduce stress on the individual &lt;em&gt;and to create space to do better work&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;For the employee, whether a manager or a team member, individuals need to be mindful about what they are offloading. If someone delegates &lt;em&gt;all&lt;/em&gt; of their workload to others (or to AI tools), they will end up in a "placeholder" role, rendering their own position all the more easy to replace (and creating their own potential &lt;a href="https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/" rel="noopener noreferrer"&gt;cognitive atrophy&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;These tools also offer tremendous opportunity in shaping how we can not just get &lt;em&gt;more&lt;/em&gt; work done, but get &lt;em&gt;better&lt;/em&gt; work done, as the ability to delegate can hone critical thinking, build managerial skills, and give every individual a sense of personal power. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your Indispensable Value in the AI Era</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Sat, 21 Mar 2026 18:23:37 +0000</pubDate>
      <link>https://dev.to/larsfaye/your-indispensable-value-in-the-ai-era-427g</link>
      <guid>https://dev.to/larsfaye/your-indispensable-value-in-the-ai-era-427g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"In a world where the cost of answers is dropping to zero, the value of the question becomes everything."&lt;br&gt;
&lt;cite&gt;— Brit Cruise, the &lt;a href="https://www.youtube.com/watch?v=dcolM6W5Odc" rel="noopener noreferrer"&gt;AI Paradox&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LLMs and their adjacent AI tools have provided us with something truly novel: the ability to ask anything, at any time, and receive an answer. We might have thought Google was playing this role, until these tools showed us that what we had before was a ubiquitous digital encyclopedia, and what we have now resembles a librarian who will &lt;em&gt;attempt&lt;/em&gt; to answer any question, regardless of the complexity or the absurdity.&lt;/p&gt;

&lt;p&gt;When you have a tool that has &lt;em&gt;all the answers&lt;/em&gt;, what value do you have to bring? &lt;/p&gt;

&lt;p&gt;It turns out: a whole lot. More than you probably thought, too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Question is the Value
&lt;/h2&gt;

&lt;p&gt;If I were asked by someone to describe what it's like to be a developer &lt;em&gt;(or programmer, or coder, whatever you identify with)&lt;/em&gt;, I would describe it as &lt;strong&gt;a state of being in which one is ceaselessly asking questions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why doesn't this work?&lt;/li&gt;
&lt;li&gt;Why &lt;strong&gt;&lt;em&gt;does&lt;/em&gt;&lt;/strong&gt; this work?&lt;/li&gt;
&lt;li&gt;Is there a better way to approach this?&lt;/li&gt;
&lt;li&gt;How can I build this feature? &lt;/li&gt;
&lt;li&gt;Should I refactor this code?&lt;/li&gt;
&lt;li&gt;What happens if I change X?&lt;/li&gt;
&lt;li&gt;How does it behave when I move Y?&lt;/li&gt;
&lt;li&gt;What happens if I remove Z?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Turning Blue to Purple
&lt;/h2&gt;

&lt;p&gt;Many moons ago, circa 2010, I was hired to build an eCommerce site using &lt;a href="https://www.cs-cart.com/" rel="noopener noreferrer"&gt;CS Cart&lt;/a&gt;, a platform that was still in its infancy (eCommerce itself was not that old at that point). During the checkout process, I got an obscure MySQL error, and I wasn't super experienced with SQL at the time, much less anything specific to CS Cart.&lt;/p&gt;

&lt;p&gt;Off to Google I went, hoping to find an easy answer &lt;em&gt;(spoiler: nope)&lt;/em&gt;. I quickly turned every blue link I found to purple, with no clear resolution. So, I kept asking. &lt;/p&gt;

&lt;p&gt;Each time I reformulated the question, I would get slightly different results, which led to refining the question. I was piecing together a jigsaw puzzle without the picture, and each piece showed me the shape of the next piece I should begin sifting through the box for. Each reformulation of the question was another potential shape that could fit.&lt;/p&gt;

&lt;p&gt;Eventually, after creating page after page of &lt;span&gt;purple links&lt;/span&gt;, I gathered together enough pieces to formulate the &lt;em&gt;actual question that I needed to answer&lt;/em&gt;. And once I had that question, the answer was immediate, and self-evident. After a solid day of searching (punctuated with many walks around the block), I found the answer! &lt;/p&gt;

&lt;p&gt;Or, more accurately: I found &lt;strong&gt;the&lt;/strong&gt; question.&lt;/p&gt;

&lt;p&gt;The answer was the result of the labor, it was the outcome. As Brit Cruise also said in his scintillating video, &lt;a href="https://youtu.be/dcolM6W5Odc?t=850" rel="noopener noreferrer"&gt;finding the question itself is the work&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nothing Is Becoming Simpler
&lt;/h2&gt;

&lt;p&gt;From the introduction of GPT 3.5 to the latest models and tooling, answers are now abundant. Whether or not they are the &lt;em&gt;correct answers&lt;/em&gt;...that is a problem that remains unchanged even with the latest frontier models. And it's a problem that is only solvable by those who know the power of asking the right questions.&lt;/p&gt;

&lt;p&gt;I realize that with modern troubleshooting tooling and the assistance of AI models, that particular debugging session would likely have been resolved differently, and likely more quickly. This is a wonderful turn of events; this is how progress works. That is applying an old problem to new tooling, though. The problems we face now are very different, and &lt;em&gt;the challenges we face at any point in time scale to the complexity of the industry.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Programming and development are far more complicated in modern times than they were in 2010, if only due to the sheer number of abstractions we've created for even the simplest features. &lt;/p&gt;

&lt;p&gt;I recently found a great article by Paul Herbert who demonstrates this so effectively: &lt;br&gt; &lt;a href="https://paulmakeswebsites.com/writing/shadcn-radio-button/" rel="noopener noreferrer"&gt;The Incredible Overcomplexity of the Shadcn Radio Button&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, troubleshooting a Docker issue is orders of magnitude more complex than anything I encountered in 2010, because the state of technology to support something like Docker did not exist then, but it does now. &lt;/p&gt;

&lt;p&gt;As the tools grow in capability, the complexity grows. And as things get more complex, we encounter novel problems. We'll need more people who are able to approach these novel problems to help solve them, who know how to ask questions, how to research, and how to formulate new questions to navigate these new problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Cannot Ask the Question For You
&lt;/h2&gt;

&lt;p&gt;It's been a few years now, and it's quite apparent that LLMs and AI tools are &lt;em&gt;not&lt;/em&gt; going to simplify anything about what it means to program and create software. Peruse OpenAI's &lt;a href="https://openai.com/index/harness-engineering/" rel="noopener noreferrer"&gt;Harness engineering&lt;/a&gt; write-up, and it's clear that this new way to approach programming still rests on the same fundamentals, but with more abstractions between you and the result, and with potentially higher complexity due to the sheer volume of code that can be generated in shorter amounts of time. That will lead to more complex software, which leads to more complex issues. Software is not a static industry, and &lt;em&gt;we constantly scale the capabilities of the software to the capabilities of the tools&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What remains is the most valuable discipline you can cultivate: how to ask effective, productive questions. And once you receive an answer to that question, how to take that answer and reformulate the next question, performing this process recursively until clarity comes. This is the process of "critical thinking", but that phrase handwaves away the mechanics of what it means to do just that. Being able to distill a general question into a highly specific one involves relentless scrutiny, ongoing experimentation, and being comfortable residing in a state of unknowing for an unspecified amount of time. &lt;strong&gt;This is the true job description of the programmer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI models cannot think for you, and they cannot formulate the question for you. They can certainly be a tool that helps you on your research path, but they are still bound by their training data, and they cannot escape the weighted dice that determine the paths that drive their outputs. Their ability to generate answers is unmatched, but those answers are highly sensitive to the original input &amp;amp; context, and fundamentally untrustworthy by their very nature. That's OK, because they still have tremendous value, but &lt;em&gt;only&lt;/em&gt; when someone is present and capable of sifting through the noise, distilling the truth, and verifying the answer (or perhaps, the next question).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Answers are Cheap Now
&lt;/h2&gt;

&lt;p&gt;As Brit Cruise stated in a beautifully succinct manner: in an era where AI tooling has dropped the cost of answers to near zero, the value resides in asking the right questions. And while AI tools have exacerbated this situation, I would postulate that the value has &lt;strong&gt;always&lt;/strong&gt; been in asking the right questions.&lt;/p&gt;

&lt;p&gt;And when you have an &lt;em&gt;infinite answer machine&lt;/em&gt;, your ability to ask good questions &lt;em&gt;becomes infinitely more valuable&lt;/em&gt;.&lt;/p&gt;




&lt;p&gt;View the article at: &lt;a href="https://larsfaye.com/articles/the-question-is-the-work" rel="noopener noreferrer"&gt;https://larsfaye.com/articles/the-question-is-the-work&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interested in learning how to ask better questions? Check out &lt;a href="https://confident-coding.com" rel="noopener noreferrer"&gt;Confident Coding&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>development</category>
    </item>
    <item>
      <title>Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Fri, 20 Mar 2026 15:13:02 +0000</pubDate>
      <link>https://dev.to/larsfaye/your-ai-generated-code-is-almost-right-and-that-is-actually-worse-than-it-being-wrong-59og</link>
      <guid>https://dev.to/larsfaye/your-ai-generated-code-is-almost-right-and-that-is-actually-worse-than-it-being-wrong-59og</guid>
      <description>&lt;ul&gt;
&lt;li&gt;"Almost right" will make it past reviews. &lt;/li&gt;
&lt;li&gt;"Almost right" will pass tests and linters&lt;/li&gt;
&lt;li&gt;"Almost right" will make it in your codebase, and wait for the right mix of reasons to create a potential catastrophe.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yes, AI tools enhance your work and empower you to "punch above your weight". But you also need discipline and practice, and you can give yourself permission to slow down and learn what is happening on a deeper level.&lt;/p&gt;

&lt;p&gt;While the industry is pushing relentlessly for handing over control to “agents,” I propose a more measured approach, and recommend that the default mode when working with LLMs should always be scrutiny and skepticism. The trust needs to be earned, not granted.&lt;/p&gt;

&lt;p&gt;When working in areas where the training data is robust and plentiful, and the requests are clearly architected with proper context, LLMs have a fairly high accuracy rate. Nevertheless, the real work happens in the nuance and the details, and these models are renowned for introducing application-breaking issues through seemingly innocuous additions or changes. Every response deserves a "trust, but verify" approach.&lt;/p&gt;

&lt;p&gt;Anthropic themselves support this approach:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic#trust-but-verify" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A security engineer highlighted the importance of experience when Claude proposed a solution that was “really smart in the dangerous way, the kind of thing a very talented junior engineer might propose.” That is, it was something that could only be recognized as problematic by users with judgment and experience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's only by knowing how to code and practicing your coding on a regular basis (and your debugging, &lt;a href="https://confident-coding.com"&gt;which I'm starting a course on&lt;/a&gt;) that you will learn the skills to catch those "almost right" solutions these models provide, vet them properly, and ensure you're not pushing a time bomb up to your repo! 💣&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
