<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gail Axelrod</title>
    <description>The latest articles on DEV Community by Gail Axelrod (@gail_axelrod_03844667f64d).</description>
    <link>https://dev.to/gail_axelrod_03844667f64d</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gail_axelrod_03844667f64d"/>
    <language>en</language>
    <item>
      <title>The most popular AI coding tools, according to devs</title>
      <dc:creator>Gail Axelrod</dc:creator>
      <pubDate>Tue, 08 Jul 2025 19:48:07 +0000</pubDate>
      <link>https://dev.to/gail_axelrod_03844667f64d/the-most-popular-ai-coding-tools-according-to-devs-28pf</link>
      <guid>https://dev.to/gail_axelrod_03844667f64d/the-most-popular-ai-coding-tools-according-to-devs-28pf</guid>
      <description>&lt;p&gt;We just released our 2025 State of Engineering Management report for the sixth year in a row. This year’s findings, based on responses from more than 600 engineering professionals, are more eye-opening than ever. &lt;/p&gt;

&lt;p&gt;According to the report, 90% of engineering teams are now using AI in their workflows, up from 61% just one year ago. &lt;/p&gt;

&lt;p&gt;Almost a third of respondents have formally supported and widely adopted AI tools, while another 39% are actively experimenting with them. Only 3% of respondents reported no AI usage and no plans to change that.&lt;/p&gt;

&lt;p&gt;48% of respondents reported using two or more AI coding tools, suggesting teams are taking a diversified, exploratory approach by evaluating multiple solutions simultaneously rather than standardizing on a single platform.&lt;/p&gt;

&lt;p&gt;The leader among AI coding tools was Copilot with 42% of surveyed engineers naming it their tool of choice, followed by Gemini, Amazon Q and Cursor.&lt;/p&gt;

&lt;p&gt;You can get all of the data here: &lt;a href="https://jellyfish.co/resources/2025-state-of-engineering-management-report/" rel="noopener noreferrer"&gt;https://jellyfish.co/resources/2025-state-of-engineering-management-report/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k5p8vuymw15vp1gher0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k5p8vuymw15vp1gher0.png" alt=" " width="557" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>We Gave Our Engineers AI. Here's What Happened.</title>
      <dc:creator>Gail Axelrod</dc:creator>
      <pubDate>Wed, 25 Jun 2025 15:19:14 +0000</pubDate>
      <link>https://dev.to/gail_axelrod_03844667f64d/we-gave-our-engineers-ai-heres-what-happened-2din</link>
      <guid>https://dev.to/gail_axelrod_03844667f64d/we-gave-our-engineers-ai-heres-what-happened-2din</guid>
      <description>&lt;p&gt;By Eli Daniel, Head of Engineering, Jellyfish &lt;/p&gt;

&lt;p&gt;We all know that AI coding tools are generating a lot of attention and hype, and that their emergence is changing the software development landscape remarkably quickly. And we can probably all trade stories of cases where they’ve been transformative, and others where the results might not live up to the promise. But stories and anecdotes are one thing and data is another.&lt;/p&gt;

&lt;p&gt;Last week Jellyfish published new data around how AI coding tools are being used and the impact they’re having across engineering teams. Here’s what we found across more than 21,000 engineers and 2 million PR reviews:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI use has exploded. 51% of PRs in May 2025 use AI, compared to just 14% in June 2024.&lt;/li&gt;
&lt;li&gt;Cycle times for PRs using AI are faster than those without: AI-assisted PRs were 16% faster in Q2 2025.&lt;/li&gt;
&lt;li&gt;Both faster coding and faster reviews contribute to this speedup.&lt;/li&gt;
&lt;li&gt;What isn’t changing? Quality. We see no meaningful correlation between an organization's level of AI adoption and the number of bugs introduced.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These numbers are notable for being both higher and lower than I might have guessed. They aren’t small – they are showing some real meaningful shifts – but also they are well short of some of those 10x stories we hear.&lt;/p&gt;

&lt;p&gt;To answer the question of why that might be, we can turn from pure data analysis and statistics to some more insight from Jellyfish’s own AI journey.&lt;/p&gt;

&lt;h2&gt;
  
  
  Drinking Our Own Champagne
&lt;/h2&gt;

&lt;p&gt;In parallel to our external research, our engineering team began its own internal experiments across AI coding tools including Copilot, Cursor, Gemini Code Assist and others. We also experimented with PR review bots like Greptile and Cursor BugBot, as well as agents like Devin.ai.&lt;/p&gt;

&lt;p&gt;So far, we’ve found that in a two-week period, a little over half of our engineers are routinely using AI coding assistants. And this group tends to do more – and more quickly. Across this group, we observed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;55% decrease in PR cycle time&lt;/li&gt;
&lt;li&gt;66% increase in Jira issues resolved&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI: Great at Some Things, Less Great at Others
&lt;/h2&gt;

&lt;p&gt;With results like these you might assume that AI is now mandated across our R&amp;amp;D org. But here’s why it’s not: the causality here seems to go in the other direction. Our engineers are generally eager to try out things that will improve their workflows, but today’s tools are not equally effective across all kinds of work.&lt;/p&gt;

&lt;p&gt;Work that is well defined, with a clear understanding of what success means? Great! This could mean app interactions (“add a widget to display a metrics graph over here in the same format we usually use”) or even performance optimization (“make this database query go faster”).&lt;/p&gt;

&lt;p&gt;But there are lots of engineering activities where AI just isn’t as good. For instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architectural decisions&lt;/li&gt;
&lt;li&gt;Subtle systems behavior&lt;/li&gt;
&lt;li&gt;Things that don’t have good automated test coverage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One funny illustration of limitations in today’s tools came when using our recently-released MCP server, which provides a way to provide chat bots like Anthropic’s Claude access to Jellyfish data. In playing with it, one person prompted:&lt;/p&gt;

&lt;p&gt;“Ask Jellyfish what Allison worked on last week.”&lt;/p&gt;

&lt;p&gt;Claude cheerfully explained that it would query the Jellyfish API, then proceeded to give a completely wrong answer. When challenged, the LLM explained its reasoning: “I didn’t actually retrieve any data from Jellyfish. My answer was completely fabricated.”&lt;/p&gt;

&lt;p&gt;It’s a funny moment, but also a powerful reminder: these tools are confident, even when they’re confidently wrong. In this case, we quickly learned to build some extra checks into our prompts when working with MCP.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Net Positive, But Not a Magic Wand
&lt;/h2&gt;

&lt;p&gt;So what’s the outcome of our AI experiments? We learned a few key lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Today’s tools are powerful, but imperfect&lt;/li&gt;
&lt;li&gt;What’s possible is changing quickly&lt;/li&gt;
&lt;li&gt;Ongoing experimentation is a must&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For engineering leaders, we need to keep our team members excited about AI’s possibilities. We should encourage our teams to experiment with these tools where they can comfortably harness the improvements AI delivers today.&lt;/p&gt;

&lt;p&gt;But we shouldn’t burn people out trying to force things that don’t yet work. That’s not to say they never will. AI is moving at a breakneck speed and things are changing daily. What worked yesterday might not work tomorrow and what failed the week before might return 10x next month. This is a moving target, and experimentation and humility are key to getting the most out of AI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jellyfish AI Impact gives engineering leaders unparalleled insight into how their teams interact with tools including GitHub Copilot, Cursor, Gemini Code Assist, Sourcegraph and more. &lt;a href="https://jellyfish.co/get-an-ai-impact-demo/" rel="noopener noreferrer"&gt;Request a demo here.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Average PR cycle time went from 95.5 hours to 83.8 hours (13.7 hours saved).</title>
      <dc:creator>Gail Axelrod</dc:creator>
      <pubDate>Wed, 18 Jun 2025 15:44:31 +0000</pubDate>
      <link>https://dev.to/gail_axelrod_03844667f64d/ai-use-in-engineering-up-260-yoy-according-to-jellyfish-analysis-of-2m-prs-3e6m</link>
      <guid>https://dev.to/gail_axelrod_03844667f64d/ai-use-in-engineering-up-260-yoy-according-to-jellyfish-analysis-of-2m-prs-3e6m</guid>
      <description>&lt;p&gt;As AI coding tools continue to reshape the role of engineers and the impact they drive throughout R&amp;amp;D organizations and businesses overall, Jellyfish is taking a closer look at the actual use and impact of AI.&lt;/p&gt;

&lt;p&gt;The following analysis of over 2 million PRs, from July 2024 to June 2025, is based on AI use in real-world engineering environments pulled from our AI Impact solution.&lt;/p&gt;

&lt;p&gt;What we’re seeing is astonishing. Our data shows that it’s getting easier to use AI as barriers to entry continue to fall. That’s caused a surge in use, with PRs using AI increasing from just 14% in June 2024 to 51% this year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI use has exploded. 51% of PRs in May 2025 use AI, compared to just 14% in June 2024.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Average AI PR cycle times are 1.16x faster in Q2 2025, compared to 1.11x faster in Q3 2024.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gains are coming from both faster coding and faster reviews. In Q2 2025, average PR cycle time went from 95.5 hours to 83.8 hours (13.7 hours saved). Of this total, 8.6 hours was coding time and 5.1 hours was review time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We found no meaningful correlation between the number of bugs introduced and an organization’s level of AI adoption. Ranking companies by their level of AI adoption, the number of bug (vs story or task) PRs remains consistent in the 8-9% range.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Senior engineers saw bigger gains in 2024, but junior engineers have caught up over the last 12 months. Regardless of seniority, PRs using AI are around 1.2x faster than those authored without AI as of Q2 2025.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;About the Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This data set is pulled from Jellyfish’s AI Impact solution and covers GitHub Copilot users only from June 2024 to mid-June 2025. It covers 259 companies across 21,209 engineers with 2,160,981 merged PRs.&lt;/p&gt;

&lt;p&gt;For PRs with 2 or more commits, we look at cycle time (time from first commit to merge), which is broken down into “coding time” (from first commit to last commit) and “review time” (from last commit to merge).&lt;/p&gt;

&lt;p&gt;PRs are also broken down by seniority and role:&lt;/p&gt;

&lt;p&gt;Across 21,209 engineers*:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Junior: 813,748 PRs (41.1%)&lt;/li&gt;
&lt;li&gt;Senior: 762,958 PRs (38.6%)&lt;/li&gt;
&lt;li&gt;Unknown: 368,996 PRs (18.7%)&lt;/li&gt;
&lt;li&gt;Manager: 34,072 PRs (1.7%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;*Seniority / role are parsed from roster titles.&lt;/p&gt;

&lt;p&gt;Get the full analysis: &lt;a href="https://jellyfish.co/blog/ai-impact-data-june-2025/" rel="noopener noreferrer"&gt;https://jellyfish.co/blog/ai-impact-data-june-2025/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>githubcopilot</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
