DEV Community

Bala Madhusoodhanan
Harnessing AI for Code Creation: Strategies, Risks, and Rewards - TECH

Intro:

At AgentCon Manchester, my good friend Liam Hampton and I explored a topic that’s redefining the software engineering landscape: the rise of AI agent-based development. The session was packed with conversation and demos of some of the latest capabilities, and we also reflected on how we, as engineers and architects, must adapt to a world where code is no longer just crafted, but co-created with intelligent agents.

Software development has always evolved to meet the demands of scale, complexity, and speed. From the linear logic of procedural programming in the 1950s to the modularity of microservices in the 2010s, each paradigm shift has brought new tools and mindsets.

Now, we’re entering the era of AI Agent-Based Development—where autonomous, goal-driven agents powered by LLMs and reinforcement learning are becoming active participants in the development lifecycle.

“The shift is no longer just about writing better code—it’s about designing better collaborators.”

💡 Code as Conversation: A Demo That Sparked Dialogue

To illustrate how AI changes the way we think about code, we kicked off our session with a deceptively simple engineering challenge:

“Can you write a stored procedure to flag orders based on value—‘Big Customer Order’ for values over 30, ‘Medium Order’ for 10–30, and ‘Low Priority Order’ for anything below 10?”

It’s the kind of batch-processing logic many of us have written dozens of times. Liam, drawing from his engineering instincts, implemented the solution in Go using switch logic—an approach that emphasizes clarity and maintainability. The audience chimed in too, with many favoring switch over if/else for its readability and structure.
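A minimal sketch of that switch-based approach in Go might look like the following (the function and variable names are my own, not Liam's exact code):

```go
package main

import "fmt"

// flagOrder classifies an order by value, following the rules from the
// challenge: over 30 → "Big Customer Order", 10–30 → "Medium Order",
// below 10 → "Low Priority Order".
func flagOrder(value float64) string {
	switch {
	case value > 30:
		return "Big Customer Order"
	case value >= 10:
		return "Medium Order"
	default:
		return "Low Priority Order"
	}
}

func main() {
	for _, v := range []float64{42, 15, 3} {
		fmt.Printf("%.0f → %s\n", v, flagOrder(v))
	}
}
```

The expressionless `switch` with boolean cases reads top to bottom like the business rule itself, which is a large part of why the audience favoured it over a chain of `if/else`.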

Then we flipped the script.

We fed the same problem statement—verbatim—into GitHub Copilot, without specifying a language. Within seconds, it returned a verbose SQL stored procedure, spanning 25–30 lines. It was technically correct, but contextually disconnected.

This wasn’t about picking a “winner.” It was about provoking thought:

  • Which version is easier to maintain?
  • Which is more testable?
  • Which is more AI-friendly?

The discussion that followed was electric. Some were impressed by the speed of the AI-generated solution. Others raised concerns about readability, overengineering, and lack of context.

The key takeaway? Understanding fundamentals still matters. AI can generate code, but it can’t (yet) understand the nuance of your architecture, your team’s conventions, or your business logic.

We emphasized that blindly copy-pasting AI-generated code—without understanding the problem or validating the solution—can lead to technical debt, security risks, and brittle systems.

Agent Mode Demo:

Mode 1: We used GitHub Copilot’s new Agent Mode in Visual Studio and VS Code. Agent Mode allows developers to ask Copilot to perform tasks like editing multiple files, running tests, or even setting up projects, all through natural language. It understands your codebase, keeps context across files (leveraging MCP servers), and helps you complete complex workflows faster.

Think of it as moving from autocomplete to a true AI teammate that can reason, plan, and execute tasks with you.

Mode 2: Think of GitHub Copilot’s coding agent as an independent virtual assistant—almost like a junior developer on your team. You assign it a task by creating or tagging a GitHub issue, and it quietly gets to work in the background. It reads the issue, understands the context of your codebase, writes the necessary code, and even opens a pull request when it’s done. You don’t have to micromanage it—it simply notifies you once the task is complete.

Other AI Inspiration examples:
Demo 1: Using Plan Designer in Power Platform to build a quick prototype.

Demo 2: Automating Test Case Generation with AI
One of the most exciting real-world applications we’ve explored is using AI to automate test case generation directly from user stories in JIRA. By integrating Power Automate with a large language model (LLM), we created a flow where each new or updated user story is passed through a prompt that generates relevant test cases.


Demo 3: Threat modeling
As solution architects, we often start with a short description of a business opportunity and a high-level architecture diagram. By passing both into a large language model (LLM)—via a Power Automate flow—we’re able to generate a working draft of a threat model. This includes potential attack vectors, risk classifications, and mitigation strategies. It’s not a finished product, but it gives us a strong starting point to collaborate with InfoSec teams and enrich the model further.
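The draft threat model that comes back from the LLM can be represented with a small structure like the one below. The field names and example entries are illustrative assumptions for this sketch, not a schema from the actual flow:

```go
package main

import "fmt"

// Threat is one entry in the draft threat model: an attack vector, a
// risk classification, and a proposed mitigation (fields illustrative).
type Threat struct {
	Vector     string // e.g. "Unvalidated input on a public endpoint"
	Risk       string // e.g. "High", "Medium", "Low"
	Mitigation string
}

// ThreatModel is the working draft handed to InfoSec for enrichment.
type ThreatModel struct {
	System  string
	Threats []Threat
}

func main() {
	draft := ThreatModel{
		System: "Order processing service",
		Threats: []Threat{
			{
				Vector:     "Unvalidated input on the order endpoint",
				Risk:       "High",
				Mitigation: "Parameterised queries and strict input validation",
			},
		},
	}
	for _, t := range draft.Threats {
		fmt.Printf("[%s] %s → %s\n", t.Risk, t.Vector, t.Mitigation)
	}
}
```

Getting the LLM to return output in a structured shape like this is what makes the draft useful: InfoSec can review, reclassify, and extend entries rather than parse free-form prose.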


GitClear’s Research: Productivity vs. Maintainability

Recent findings from GitClear’s 2024 research tell a compelling yet cautionary tale about the impact of AI on software development. AI-generated code is undeniably on the rise, accounting for 46% of all code changes this year, up from 39% in 2020, but the surge comes with trade-offs. Code reuse, a hallmark of clean and maintainable engineering, has sharply declined, with “moved” lines (a proxy for refactoring) dropping to just 9.5%. Even more striking, 2024 marked the first year in which copy/pasted code surpassed refactored code, with duplicate blocks increasing eightfold since 2020.

This trend correlates with a 7.2% drop in delivery stability for every 25% increase in AI adoption, as reported by Google’s DORA metrics. Short-term churn is also rising, with more newly added code being revised within 2–4 weeks, suggesting rushed or unstable initial commits. Despite these shifts, human developers continue to outperform AI in areas like modularization, refactoring, and long-term maintainability.

The implication for teams is clear: AI can supercharge productivity, but it must be balanced with deliberate practices around code reuse, clone detection, and refactoring. Tracking metrics like churn, duplication, and moved lines is essential to ensure that speed doesn’t come at the cost of sustainability.
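As a concrete illustration of one such metric, here is a rough duplicate-line ratio over newly added lines. GitClear’s actual clone detection is far more sophisticated, so treat this purely as a sketch of the idea:

```go
package main

import "fmt"

// duplicateRatio reports the share of newly added lines that exactly
// duplicate a line already present in the file — a crude stand-in for
// the copy/paste detection described in the GitClear research.
func duplicateRatio(existing, added []string) float64 {
	seen := make(map[string]bool, len(existing))
	for _, l := range existing {
		seen[l] = true
	}
	if len(added) == 0 {
		return 0
	}
	dups := 0
	for _, l := range added {
		if seen[l] {
			dups++
		}
	}
	return float64(dups) / float64(len(added))
}

func main() {
	existing := []string{"open db", "run query", "close db"}
	added := []string{"open db", "run query", "log result"}
	fmt.Printf("duplicate ratio: %.2f\n", duplicateRatio(existing, added))
}
```

A rising ratio on new commits is the kind of early-warning signal a team could watch alongside churn and moved-line counts.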
