Rizèl Scarlett

Your AI Agent isn't an Engineer

Table of Contents

  1. Why This Conversation Matters
  2. How AI Marketing Shaped This Perception
  3. The Problem with Marketing AI as a Human
  4. The "Year of the Agent"
  5. Framework for Effectively Marketing AI Agents to Developers
  6. Conclusion

Raise your hand if you've been personally victimized by the question: 'Will AI replace software engineers?' It's a common debate that drives developers to extremes—either avoiding AI entirely or frantically signing up for every AI course available.

[GIF: Mean Girls]

Why This Conversation Matters

However, it's not a hypothetical or frivolous concern. Companies are making hiring decisions based on AI productivity. Salesforce's CEO recently announced plans to reduce hiring of software and support engineers after seeing a 30% productivity boost from AI.

As a Developer Advocate in AI, my public response has always been to upskill and adapt to the changing economy. After all, you wouldn't want to be the person insisting on driving a horse and buggy while everyone else has moved on to cars.

I believe that AI is a helpful tool. I've used it to understand new technologies and quickly prototype ideas.

But internally, I've wrestled with a different question: Why do we keep framing AI primarily as a replacement for human beings?

How AI Marketing Shaped This Perception

My spicy take is that our industry helped shape this narrative. We inadvertently leaned into a lazy marketing strategy prioritizing quick wins over sustainable adoption. It's easier to tell VCs and executives that your AI tool replaces developers than to demonstrate how it augments developer capabilities.

Anthropomorphism Is Not All Bad

Anthropomorphism is the practice of assigning human traits to non-human entities. It isn't inherently problematic. In fact, it's a common practice in tech. Thoughtful anthropomorphism makes digital experiences more intuitive and helps users embrace new interfaces. For example:

  • E-books mirror traditional reading experiences by simulating page-turning animations, even though there's no physical page to turn.
  • Electric cars (as Sunil Pai pointed out to me) play pre-recorded engine sounds when they start, providing a familiar affordance for drivers.

[Image: Sunil Pai's Bluesky post about electric cars and pre-recorded engine sounds]

In these cases, users don't actually believe their e-book contains paper or that their electric car has a combustion engine.

But, AI presents a unique challenge. Its complexity and "black box" nature make it harder for users to grasp its true capabilities and limitations. To bridge this knowledge gap, companies lean heavily into human-like descriptions:

Claude is a "friend."

[Image: billboard that says Claude is a friend]

Devin is an "AI Software Engineer."

[Image: blog post introducing Devin as an AI software engineer]

ChatGPT is "reasoning."

[Image: a ChatGPT screenshot that says "Reasoned for a few seconds"]

While these descriptions make AI feel more familiar, the drawback is that they can also mislead users to believe that AI can think, reason, and work independently like humans.

The Problem with Marketing AI as a Human

Anthropomorphic AI marketing is sometimes a form of self-sabotage because:

It Alienates Developers

  • When AI is marketed as an "engineer" or "developer," decision-makers view it as a one-to-one substitute for human talent.
  • This is counterproductive because developers are some of the most valuable users of AI tools. They are the users who know how to use AI and contribute to the ecosystem effectively. According to the 2024 Stack Overflow Developer Survey, 76% of developers currently use or plan to incorporate AI into their workflows. However, our industry's marketing suggests that using AI and contributing to the ecosystem will eventually put AI in a place to take their jobs. So why would they want to further the movement?

It Sets Unrealistic Expectations

  • If an AI tool is marketed as "just like a human," users will expect it to perform at human levels.
  • AI is a non-sentient tool that processes historical data patterns, is prone to hallucinating, misses important context, and provides non-deterministic output.
  • When developers realize it's not as good as the marketing implied, the company and product risk losing credibility. Developers are notorious for valuing authenticity. Over-exaggeration or misrepresentation in marketing only drives developers away.

But don't take my word for it. A Wired article titled "Developers Are Getting Fed Up With Their Bosses' AI Initiatives" shares findings from the 2025 Game Developers Conference survey. While 52% of companies now use generative AI in their games, 30% of surveyed developers expressed negative sentiment.

One survey participant shared their reflections: "I have a PhD in AI, worked to develop some of the algorithms used by generative AI. I deeply regret how naively I offered up my contributions."

Another participant stated, "We should use generative AI to help people be faster at their jobs, not lose them."

It Misses the Real Value Proposition

The real value of AI developer tools includes automating boring tasks, faster prototyping, and quicker debugging, which leaves more time for creative problem-solving.

The "Year of the Agent"

And now, AI enthusiasts have dubbed 2025 the Year of the Agent. In short, AI agents are tools that can autonomously take action on our behalf, like executing shell commands, creating calendar events, and building applications. But as we move from LLMs that suggest code to more autonomous agents, anthropomorphic marketing is only increasing.
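To make "take action on our behalf" concrete: what separates an agent from a plain chat model is that the model's structured output gets dispatched to real code. Here's a minimal, hypothetical sketch of that dispatch step — every name here is illustrative, not any specific product's API:

```python
# Hypothetical sketch (not any specific product's API): an agent differs from a
# plain chat model because the model's structured output is dispatched to real code.
import subprocess

def run_shell(command: str) -> str:
    """Execute a shell command on the user's behalf and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

# Registry of tools the agent is allowed to invoke (names illustrative).
TOOLS = {"run_shell": run_shell}

def act(llm_decision: dict) -> str:
    """Dispatch the LLM's chosen tool call to actual code."""
    tool = TOOLS[llm_decision["tool"]]
    return tool(**llm_decision["arguments"])

# A real agent would receive this structure from a model; here it's hard-coded.
output = act({"tool": "run_shell", "arguments": {"command": "echo hello"}})
print(output.strip())  # → hello
```

The key design point: the model only ever *proposes* an action as data; the agent runtime decides whether and how to execute it.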

Framework for Effectively Marketing AI Agents to Developers

Here's how to market AI developer tools in a way that both builds trust and differentiates your Agent in an oversaturated market:

Understand How It Works

If you work in Developer Relations, Sales, Marketing, or as an executive promoting an AI agent, you're probably representing a product you didn't build. This means you may not fully understand how the tool works, its true capabilities, or its limitations. Developers have a knack for spotting misrepresentation or inauthentic marketing.

You can mitigate this challenge by:

  • Becoming customer zero
    • Use the product extensively before it reaches the public
  • Investing time in learning the following fundamentals:
    • LLMs and their capabilities
    • Key differences between Copilots and Agents
    • Core AI Agent operations and your product's unique approach
    • Token handling and context management
    • Tool calling mechanisms
    • System limitations
    • Points requiring human intervention
    • Your product's agentic loop. For example, some agents use the following loop:
      • Accept user request
      • Share requests and available tools with an LLM
      • Receive LLM's execution plan
      • Execute the plan and tool calls
      • Verify results with the LLM
      • Revise and re-execute if needed
      • Deliver final results to the user and wait for the user's request
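The loop above can be sketched as a toy, end-to-end illustration. This is a rough sketch under the assumption of a stub "LLM" that returns canned plans — a real agent would call a model API, and all names here are hypothetical:

```python
# Toy illustration of the agentic loop described above (all names hypothetical).
# The "LLM" is a stub returning canned plans; a real agent would call a model API.
from dataclasses import dataclass

@dataclass
class Step:
    tool: str
    arguments: dict

@dataclass
class Plan:
    steps: list

class StubLLM:
    """Stands in for a real model: it plans, verifies, and revises."""
    def plan(self, request, tools):
        # Receives the request plus available tools, returns an execution plan.
        return Plan(steps=[Step(tool="add", arguments={"a": 2, "b": 3})])
    def verify(self, request, results):
        # Accepts the run if any tool produced a result.
        return results[0] if results else None
    def revise(self, plan, feedback):
        return plan

def agent_loop(user_request, llm, tools, max_revisions=3):
    plan = llm.plan(user_request, list(tools))    # share request + tools with the LLM
    for _ in range(max_revisions):
        results = [tools[s.tool](**s.arguments) for s in plan.steps]  # execute tool calls
        answer = llm.verify(user_request, results)                    # verify results
        if answer is not None:
            return answer                          # deliver final result to the user
        plan = llm.revise(plan, feedback=results)  # revise and re-execute if needed
    raise RuntimeError("Max revisions reached; human intervention needed.")

tools = {"add": lambda a, b: a + b}
print(agent_loop("add 2 and 3", StubLLM(), tools))  # → 5
```

Notice where the loop gives up and raises: that's the "points requiring human intervention" from the list above made explicit.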

I used these two resources to help me understand AI agents:

Thoughtful Naming

It might be difficult to eliminate anthropomorphism entirely, especially since it is useful. My advice is to use it sparingly. Skip human names. Skip titles like "AI Engineer" or "AI Teammate." Choose names that set clear expectations, like Copilot, Agent, or Assistant. GitHub's use of "Copilot" and "AI Pair Programming Assistant" exemplifies this balance because it suggests collaboration while keeping humans in control.

Augmentation > Replacement

Let's understand who developers are. They're not rockstar/ninja/10x developers. Those stereotypes are so 2014.

Developers juggle multiple roles – they're parents, open source maintainers, bootcamp instructors, and more. AI agents shine brightest when they complement these diverse responsibilities, taking on parallel tasks while developers focus on high-impact work. Instead of marketing your tools as wholesale substitutes for developers, position them as part of a developer's toolkit.

I expand more on this thought in my blog post titled, "The Average Developer is a Multitasker: A Case for Agents."

Transparency

If possible, go open source. If not, find ways to explain the architecture through whitepapers and conference talks. This approach will help your users understand that it's not magic so they can determine how to use the product and get the best performance from it.

Many times, when there's a lack of transparency, developers will theorize about how the product works and create their own narrative, which can backfire. I remember this happening in the early days of GitHub Copilot. I would hop into Twitter Spaces where people shared how they thought it worked, and they were often wrong, unintentionally spreading misinformation.

Developer Control

You can build trust with developers by putting them in control of their workflow. Here are some of my suggestions:

  • Similar to how developers choose IDE settings, allow developers to choose their preferred LLM models and customize the Agent's behavior and verbosity.
  • Show what actions the Agent will take before executing them and provide detailed logs for debugging.
  • Provide APIs and hooks so the Agent fits into existing workflows.
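For instance, the "show actions before executing" suggestion could look like this minimal, hypothetical sketch, where the agent previews its plan, asks for explicit approval, and logs each step (all names illustrative):

```python
# Hypothetical sketch of "developer control": preview planned actions, require
# explicit approval, and log every step. Names are illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent")

def execute_with_preview(actions, confirm=input):
    """Show every planned action, ask once for approval, then log each execution."""
    print("The agent plans to run:")
    for i, (name, args) in enumerate(actions, 1):
        print(f"  {i}. {name}({args})")
    if confirm("Proceed? [y/N] ").strip().lower() != "y":
        print("Aborted; nothing was executed.")
        return []
    results = []
    for name, args in actions:
        log.info("executing %s with %s", name, args)  # detailed log for debugging
        results.append((name, "done"))                # a real agent would run the tool here
    return results

# Example: the developer sees the plan before any files change.
planned = [("write_file", {"path": "README.md"}), ("run_tests", {})]
execute_with_preview(planned, confirm=lambda _: "y")
```

Defaulting to "no" on the confirmation prompt is the point: the agent only acts when the developer opts in.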

codename goose is my favorite example of this, although I'm biased because it's an agent my company made. It's open source. Goose, as it's fondly called, lets developers choose their LLM model and extensions via Model Context Protocol. Developers can also choose to interact with the Agent via the CLI or GUI.

Show, Don't Tell

Instead of making false promises, demonstrate your AI agent's value through concrete examples. Create short, engaging video demos, GIFs, or blog posts showing the Agent in action:

  • Creating and running test suites
  • Converting code between languages
  • Transforming wireframes into interactive UIs
  • Generating API documentation from code comments
  • Automating environment setup
  • Reviewing pull requests

Don't be afraid to demo live and make it fun so it can be memorable! When I worked at GitHub, I used to demo GitHub Copilot at conferences. I would prompt GitHub Copilot to post a tweet that said, "I wrote this tweet with Copilot." It was a short and simple demo that was memorable for attendees and sparked curiosity from those who weren't there.

Note: Demoing generative AI tools live is scary because the output is non-deterministic. If your live demo fails, that's even better because you can use it as a teaching moment. Show how you work around issues and where human expertise adds value. This authenticity builds more trust than a perfectly polished demo ever could.

Documentation

Documentation often determines whether developers adopt your tool. Strong documentation for your Agent could include:

  • Installation guides
  • Accurate technical specifications of model training and limitations
  • Comprehensive feature guides
  • Step-by-step tutorials
  • Prompt playbooks
  • Clear explanations of data usage and privacy

Open Collaboration

Build product credibility by fostering an ecosystem where developers can learn from each other, and you can learn from them. You can do this by:

  • Using platforms like GitHub Discussions and Discord to create spaces for feedback and support
  • Encouraging knowledge sharing by letting developers exchange prompts, best practices, and integrations
  • Recognizing community contributions
  • Maintaining a transparent feedback loop to show that you value developer input

A great example is Cursor.directory, a platform by and for the community where developers share .cursorrules prompts.

Conclusion

Our presentation of AI shapes how the world perceives and uses it. Let's move beyond the tired question of whether AI will replace developers and focus on how it can augment developer capabilities.

Share your thoughts below!


Top comments (7)

Mike Stemle

I recently had a difficult conversation about this with a former colleague. These topics are getting more difficult as folks lose sight of what computers and the internet were meant for.

Your piece really helped me process that more. Thank you, friend.

Rizèl Scarlett

Yeah, a lot of people are struggling with this lately. I'm glad it was helpful. Sometimes, I just have random thoughts and dump all of them into a blog post, but I'm not sure if they're valuable. Your comments are so validating. Thanks, friend!!!

Mike Stemle

If I'm being honest, friend, those tend to be your best pieces.

I love that you're simultaneously optimistic about AI while also succeeding at remembering that this stuff is supposed to be for people. I've only succeeded at the latter while struggling with the former (though mostly in business contexts).

Rizèl Scarlett

🥺

robross0606

From a business costing and resourcing perspective, I'm going to need someone to explain to me the difference between "augmentation" and "replacement". That feels like really splitting hairs. If businesses can increase productive engineering output by a decent percentage with relatively lower cost AI, they absolutely will reduce their engineering staff. Anyone who thinks they will keep the same staff and produce "even more cool stuff" is kidding themselves. Shareholders want profit. We may not see the entire profession disappear for a while, but a percentage of people's jobs WILL be replaced.

Rizèl Scarlett

Thanks for your comment. I'm confused by it though.

My post agrees with your position. I even have a line that says, "It's easier to tell VCs and executives that your AI tool replaces developers than to demonstrate how it augments developer capabilities."

This was me pointing out that it's an easier sell if you say "buy this AI tool and you can cut x amount of developers that costs 300k" rather than "buy this AI tool and keep your costly developers."

Yeah, some jobs will be eliminated. I wrote that many times.

But that's not the point of my post. My post is a call to action for companies to remind them that AI is not a one size fits all solution. Innovation and problem solving still require human oversight and in the long term treating AI as a pure replacement may mess up their company. My advice is for them to learn how to balance augmentation with workforce strategy.

and I'm a huge fan of AI for coding. I've been talking about using AI for coding since 2021, but I'm able to look at both sides.

