
Kazuya

AWS re:Invent 2025 - Developer Experience Economics: Moving Past Productivity Metrics (DVT207)

🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.

Overview

📖 AWS re:Invent 2025 - Developer Experience Economics: Moving Past Productivity Metrics (DVT207)

In this video, Eva Knight and Bethany Otto from AWS discuss how Amazon measures developer productivity beyond traditional metrics like lines of code. They introduce the Cost to Serve Software framework, inspired by Amazon's retail supply chain model, which achieved a 15.9% improvement in business value. The framework uses normalized production deployments as units and balances velocity with quality through tension metrics like high severity tickets. Amazon's Software Builder Experience (ASBX) team focuses on eliminating, automating, and assisting developers across the full SDLC, with AI tools like Amazon Q Developer driving significant improvements: 18.3% increase in weekly deployments, 30.4% reduction in manual interventions, and 32.5% decrease in incident-related tickets. The session emphasizes that AI-native teams integrating generative AI throughout the development lifecycle see the biggest gains, and highlights the importance of measuring developer experience holistically rather than relying solely on productivity metrics.


; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

Introduction: Developer Experience Economics Beyond Productivity Metrics

Hello everyone, good evening and welcome to DVT207, where we'll be looking at developer experience economics and how we're moving past productivity metrics. My name is Eva Knight and I'm a worldwide go-to-market specialist with our next generation developer experience team. Our team helps customers look at their development processes and understand how they can integrate generative AI tooling into their existing workflows. I'm joined by Bethany Otto, who is a principal technical program manager on our Amazon Software Builder Experience team, and we're really excited to be here with you tonight to talk through this session.

I know the idea of developer productivity is something that gets brought up a lot, especially in this new time of AI technology, and it's something that our teams are passionate about. We've written blog posts on it this year. Bethany and my colleagues have presented on this topic at the previous two re:Invents, but there's been a lot of really exciting breakthroughs that we've had this year that we're excited to introduce you to today.

Thumbnail 80

At a high level, we're going to talk about how we're seeing improvements in the developer experience in this new time with AI technologies. We're going to talk about some of the challenges we're seeing, especially in how we're actually quantifying the impacts of these improvements, and we're going to introduce you to something that we announced this year called the Cost to Serve Software framework, which was created by Amazon's Software Builder Experience team. This framework reflects how we actually quantify the impacts of the developer experience with AI technologies in order to better serve our customers.

We're going to hear from Bethany on Amazon's journey in improving the developer experience to ultimately drive improvements for our customers. Finally, we're going to take away some of those lessons learned to help show you how you can potentially take some of these back to your own organization and think about productivity in this time.

Thumbnail 140

The Evolution of Development Practices and the Challenge of Measuring AI Impact

The idea of improving the developer experience and increasing productivity isn't a new concept. This is something that since software development was created, we've been trying to find ways to better optimize the developer experience and introduce new ways of working to optimize that workflow. We're all familiar with waterfall and agile methodologies at this point. We're seeing the rise of DevOps, introducing concepts like CI/CD and containerization, and these are pretty well understood and adopted concepts at this point.

Thumbnail 210

Now with the introduction of AI, we're seeing yet another shift and the introduction of a new way of working for developers. We're seeing that developers are starting to adopt these technologies. Ultimately, these different ways of working each bring different tools, methodologies, and processes, and they also bring different metrics for how we actually think about quantifying the impacts of these new technologies. We're seeing a lot of challenges in how we can actually articulate the impacts that we're seeing with AI technologies.

We all know at this point that lines of code do not equate to value. Using that as a metric can really incentivize verbose solutions rather than incentivizing elegant and concise ones. We see that time-based metrics can potentially incentivize corner cutting and often miss out on elements of quality. Finally, telemetry data, or what individual users are actually doing, fails to show the full picture of what our development teams are actually accomplishing.

Thumbnail 250

We at Amazon, of course, have a large development team and ultimately we would love to see increased productivity amongst those teams. We hear from customers that they too want to increase their productivity. This is a pretty well understood idea that is a goal of many, but here at Amazon we see this as an outcome. In order to actually impact developer productivity, we need to focus on the developer experience as an input.

Thumbnail 280

Thumbnail 300

The developer experience is the lived experience of developers; it looks not just at quantity but also at aspects of quality in their daily workflows. It takes a more human-centric and holistic approach and actually looks at their day-to-day work. Now in this time of AI technologies, it's become even more important to think about how you're effectively integrating AI into that developer experience to ultimately drive improvements in developer productivity.

We have this equation: to improve developer productivity, focus on developer experience, and to do so, integrate AI effectively. However, we're still left with the question of how we actually quantify this relationship in a meaningful way.

Thumbnail 340

Amazon Software Builder Experience (ASBX): A Unified Approach to Developer Tools

There has been a lot of thought done on this topic, driven primarily through our ASBX team this year in trying to understand how we at Amazon can quantify this relationship. ASBX is our Amazon Software Builder Experience team. A few years ago, we initially had different teams, each focused on individual tools in the developer experience here at Amazon, whether it be code build and deployment systems, ticketing, paging, monitoring, or knowledge management. Each of these different tools was driven by individual teams.

Thumbnail 410

In 2022, we decided to unify all of these tool-based teams toward a more outcome-based approach, taking the name of the Amazon Software Builder Experience team. Additionally, we introduced a couple of different teams under that ASBX organization, some focused on driving insights and metrics and looking at the developer experience as a whole. The ASBX mission is to really drive what matters with our development teams across Amazon. It takes a more holistic approach and really focuses on enabling developers and maintaining software across Amazon's breadth of services.

When my team engages with customers, we're often running proofs of concept, helping customers test new technologies, and ultimately we need to help customers go back to their stakeholders and say, "What impact did this have on our teams? What outcomes and business value are we seeing with these new tools?" When Bethany and her team make improvements in the developer experience here at Amazon, they too need to go to leadership and say, "We saw this impact. This is the change that we're ultimately seeing," and give them that answer.

I imagine that if you're here today, you too have had this question, maybe you still do. That's why I'm really excited to introduce Bethany to walk through what we're doing here at Amazon in order to drive impact and effectively quantify this for our teams. Thanks, Bethany.

Thumbnail 500

Team-Centric Development: Research Findings and the CFO's Question

Thanks, Eva. Hi, I'm Bethany Otto. I get to be part of the team that focuses on developers at Amazon. As Eva mentioned, the Amazon Software Builder Experience organization was formed from some existing teams and some new teams. The reason we did that is we wanted to create better outcomes for our developers. Part of that mission was enabling software builders, and with that mission, we have a set of tenets. Tenets at Amazon help teams align and share the same mental model.

Thumbnail 540

This is one of our tenets: providing capacity for teams to reclaim. The words in this tenet matter, and one of those words that is very important is that we focus on teams. The reason we focus on teams is that we actually did an analysis into the indicators of success for developers. This analysis spanned tens of thousands of developers over a five-year time period. What we found will not be surprising to those of you who have developed software. We actually went out and proved that software development is indeed a team endeavor.

Thumbnail 590

Software builders who have high code review velocity actually return to the mean of their team. What we found was that, due to their situation, the job function of the team, and their current and past decisions about their services, teams had a predictable velocity. This team velocity is the strongest influencer of individual velocity and perceived productivity. We know that developers join Amazon to create innovations for customers. So as a platform team, one area we focus on is how we free up developers to make the decisions that matter. We look for ways to eliminate, automate, and assist developers across the full software development life cycle.

Thumbnail 620

We obsess over ways to improve the builder experience, the things that make developers more efficient.

So we work backwards from key outcomes. How do developers come up to speed faster? How do they build and deploy software easily and safely, all with lower operational toil? We have a basket of several dozen metrics behind each one of these outcomes, bridging these outcomes from the company to the team level. We actually work backwards to present to teams the controllable inputs, the things that they have influence on and can control.

Thumbnail 670

After our first few years as an organization, we were really excited about continued double digit improvements. Yet when we went to present to the S team (the S team is Andy Jassy's direct reports, along with Brian Olsavsky, our CFO), they said, "This is great, but what does Amazon get for this?" Those of you who have been on platform engineering teams, have you heard this before? This may be something that you heard from your CFO or COO. Well, our story may resonate with you.

Thumbnail 710

Cost to Serve Software Framework: Adapting Supply Chain Economics to Development

Since simply showing the individual metrics was not enough, we knew we needed a metric that resonated with business value and aligned to it. So how did we accomplish this? Well, I led a small team at Amazon, and I'm going to take you through that story. We knew that the traditional method was to add up the tiny time slices of metrics. Small changes do provide value; they remove friction and frustration. But we know that's not productivity, because one minute of time back to a team does not end up as code in production. Often when we add up all of those small bits, we get to more than 100 percent time back, so the math just doesn't add up.

Thumbnail 750

We needed something better, and we pulled on Amazon's deep history. We found that in our retail business, we had teams that had tackled a similar problem in physical supply chain. They have this metric called cost to serve. You may have read about it in some of our shareholder letters. It essentially is, how much does it cost Amazon to get that box on the customer's doorstep?

Thumbnail 770

Thumbnail 790

So really, what is cost to serve in supply chain? It's an enormously complex system. There's no single metric, but the core goal is to deliver value to your customer. So the route is how do you take that unit, package it up, and deliver it? Well, cost to serve looks at the full system, focusing on removing friction, delay, defects, and waste across the entire system. It captures all of the benefits if you improve your picking efficiencies in your fulfillment centers, if you reduce defects from product damages, if you make improvements to your demand forecasting. All of these benefits get captured in cost to serve. It's an economic measurement of productivity improvement.

Thumbnail 820

Thumbnail 850

Well, it turns out this maps very nicely to software, because software is also a complex system. It's much more than just coding. At the root, we have a builder that needs to create and deliver software that provides value to our customers. In this case, there are units, software units, that are chosen by the team. I'll touch on that a little bit later, but it's development productivity, not developer productivity. And cost to serve captures the benefits of improving developer experience. You improve your release processes. You centrally remediate risk. You do the things that give developers satisfaction in their jobs. You're going to improve your cost to serve. But it works backwards from the software delivered to your customers. It's a number that you can use in your planning and prioritization.

Thumbnail 880

So I promised you only one equation, and it's very, very simple. At the top you've got your costs, and at the bottom is your unit of delivery. Let me explain the unit of delivery. For Amazon, we have a lot of microservices, so we can use normalized production deployments. Yet certain businesses, say Prime Video or the Stores mobile app, are going to want to use something that is a proxy for that.

Thumbnail 940

When teams want to use something that is a proxy of that application, they use what we call CRs, or pull requests, to production. You also have other types of teams, like enterprise teams that are committing to trunk, where you would want to look at commits. That is what the software delivery unit is. Underneath this simple equation, we actually apply science and research to understand the quantitative and qualitative values of these metrics and behaviors that are correlated to it.
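To make that single equation concrete, here is a minimal sketch in Python of the division described above: total costs over units of software delivered, with each team choosing the unit that best represents it. All of the names and numbers are illustrative, not Amazon's internal implementation.

```python
from dataclasses import dataclass

# Delivery units as described in the talk: microservice teams count
# normalized production deployments, application teams count pull
# requests (CRs) to production, trunk-based enterprise teams count commits.
UNITS = {"deployments", "pull_requests", "commits"}

@dataclass
class TeamPeriod:
    team: str
    unit: str             # one of UNITS, chosen by the team itself
    cost: float           # total development cost for the period
    units_delivered: int  # deployments, PRs, or commits in the period

def cost_to_serve(p: TeamPeriod) -> float:
    """Cost to Serve Software = total costs / units of software delivered."""
    if p.unit not in UNITS:
        raise ValueError(f"unknown delivery unit: {p.unit}")
    if p.units_delivered == 0:
        raise ValueError("no software delivered this period")
    return p.cost / p.units_delivered

# Example: a hypothetical microservice team that spent $1.2M and shipped
# 480 normalized production deployments in a quarter.
quarter = TeamPeriod("checkout", "deployments", 1_200_000, 480)
print(f"${cost_to_serve(quarter):,.0f} per deployment")  # $2,500 per deployment
```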

Thumbnail 1000

We know that velocity is a metric, and we know we need to move fast. However, we also know we need to not break things. So cost to serve needs to be achieved in tandem with security and resilience. A long-term benefit of cost to serve software is that the costs in maintaining the quality bars of your software are included in that top-level cost, but that is a lagged effect. So we knew we needed to see something in real time that equated that velocity with our quality.

What we did was create the concept of tension metrics. These are the things that were good indicators of quality that we could look at in the same period of time. We created a handful of those metrics as well. One of the metrics that we had that created this balance is high severity tickets. When we looked at this, it had high precision and recall, so we knew it was something that we could monitor in real time with cost to serve.
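As a rough sketch of how a tension metric pairs with cost to serve in the same period, the check below only counts a velocity gain as genuine when high severity tickets per deployment did not rise alongside it. The 5% tolerance is an invented threshold for illustration, not Amazon's.

```python
def evaluate_period(cts_now: float, cts_prev: float,
                    high_sev_rate_now: float, high_sev_rate_prev: float,
                    tolerance: float = 0.05) -> str:
    """Judge a cost-to-serve improvement against its tension metric.

    high_sev_rate_* is high severity tickets per deployment; the
    tolerance is illustrative, not a real operating threshold.
    """
    cts_improved = cts_now < cts_prev
    quality_regressed = high_sev_rate_now > high_sev_rate_prev * (1 + tolerance)
    if cts_improved and quality_regressed:
        return "velocity gain paid for with quality: investigate"
    if cts_improved:
        return "genuine improvement"
    return "no cost-to-serve improvement this period"

print(evaluate_period(2500, 2800, 0.030, 0.031))  # genuine improvement
print(evaluate_period(2500, 2800, 0.050, 0.031))  # paid for with quality
```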

Thumbnail 1020

Implementing Cost to Serve: Results, Drivers, and the Rise of AI-Native Teams

You may be wondering how to apply this to yourself. First, you need to make it specific to your team. Software is a socio-technical system, so look at how your teams are delivering software. Are they using microservices? Then you would use normalized production deployments. Are they building one of your applications, or software that ships on hardware? Those teams, with your monolithic apps, will use pull requests. Your enterprise releases will use commits.

Thumbnail 1060

We let our teams choose which value represents their teams the best, and that is what we would show to them. We can also plug in any type of investments that you have already made in measuring your developer experience. We said that we had common outcomes that we were trying to drive for our developers. These work with your metrics if you are already applying DORA, SPACE, or other types of developer experience frameworks.

Thumbnail 1090

You do not need to start from scratch. You can start small, and it also works if you want to start small with a local team or start with federated teams and let them choose the metric.

I know that some of you will be asking about Goodhart's law. Goodhart's law simply states that when the metric becomes the target, it ceases to be a good metric. You may say, will not teams manage to that metric? We ask this frankly about all of our metrics, but teams want to deliver innovation to customers, so we have them obsess over the inputs to that. This is where your input metrics, or those DORA metrics, are key. How do you work backwards from those?

Thumbnail 1130

A particularly Amazonian concept is mechanisms. Mechanisms are encoded behaviors that facilitate innovative thinking and systematically address recurring challenges. In short, mechanisms help to change good intentions into action and true change. We work backwards from the outputs that we were trying to drive to identify the controllable inputs. What we have teams focus on are those controllable inputs. Are they deploying often? Are they on the latest versions of software? These are the things that teams have control over and can impact directly. Applying this framework allows you to work backwards from your cost to serve as an output while having your teams focus and obsess on the inputs, or the things that they can control.

Thumbnail 1180

The power of cost to serve emerges through year-over-year or quarter-over-quarter improvement. What we want to see is that the trend moves down, and this actually creates an interesting resistance to Goodhart's law. If you have slow teams or fast teams, it does not matter. What matters is how that team compares to itself over time. The downward trend and the percent improvement of a team is where the value comes from.

Improvement can be expressed as savings, return on invested capital, or effective headcount capacity that teams get back. This represents your CFO view, your COO view, or your CTO view. As a platform team, we use all three. We have return on invested capital and savings for our planning purposes, but what we give to teams is a question: with these initiatives, what is it that they are getting back? Or conversely, what opportunity are they leaving on the table by not doing the initiatives that we bring to them?
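A small sketch of the arithmetic behind those views, under the assumption that a percent drop in cost per unit can be read as the same percent of development spend returned; the figures are invented for illustration.

```python
def improvement_views(cts_prev: float, cts_now: float,
                      annual_dev_cost: float, team_size: int):
    """Express a downward cost-to-serve trend as savings and capacity.

    Illustrative arithmetic only; real planning would also fold in
    return on invested capital.
    """
    pct = (cts_prev - cts_now) / cts_prev   # downward trend = improvement
    savings = pct * annual_dev_cost         # the CFO view
    capacity = pct * team_size              # the CTO view: effective headcount
    return pct, savings, capacity

pct, savings, heads = improvement_views(2800, 2354.8, 10_000_000, 80)
print(f"{pct:.1%} improvement = ${savings:,.0f} = {heads:.1f} engineers of capacity")
# 15.9% improvement = $1,590,000 = 12.7 engineers of capacity
```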

Thumbnail 1240

Thumbnail 1250

Thumbnail 1260

Thumbnail 1280

So with all of the efforts we put towards eliminating, automating, and assisting our developers, let me share some numbers. We increased weekly production deployments per builder by 18.3%. We made it faster and less manual by reducing the number of human interventions by 30.4%. We made it safer by reducing incident-related tickets per deployment by 32.5%.

Thumbnail 1290

But we still got this question, and this time we were able to answer it: what did Amazon get back? 15.9%. This is not simply velocity. This is a business value metric. This is the total benefit that Amazon got back for their investments in developer experience. Yet we did not stop there. We wanted to know what were the big drivers. We had our scientists look at what actually impacted our cost to serve software.

Thumbnail 1320

Thumbnail 1340

Thumbnail 1350

You are not going to be surprised. CI/CD was a key driver. We took an initiative to make sure that teams were looking at their pipelines, deploying frequently, all with lower manual interventions. By taking that out to teams, we did affect our cost to serve. Another thing is managed abstractions. We have managed capacity, managed fleets, and managed services. And then of course, generative AI. We see the benefits increasing quarterly, especially now with Amazon Q.

Thumbnail 1370

Yet this goes beyond code authoring. It moves into the full breadth of the SDLC with agentic AI. Cost to serve software captures all of these benefits, and one of the things we are seeing is that the teams with the biggest changes are the teams who have become AI native. Teams who are AI native with software development are using AI throughout the SDLC. This involves a fundamental shift in how software is designed, developed, and maintained, handling tasks from ideating through to long-term autonomous maintenance. The teams focusing on a goal to build more adaptive, intelligent, and efficient software by leveraging AI's capabilities to manage complexity and continuous change are those that are gaining these efficiencies.

Thumbnail 1410

Thumbnail 1430

Thumbnail 1440

AI-Native Software Development: Transforming the SDLC with Agents and Building Blocks

Stepping back, historically a team would start with a plan, complete a BRD or maybe a high-level design before they wrote any lines of code. Then once they had authored the code, they would test it, release it, ensure it was stable, and then have to maintain it. This loop was the SDLC. What we are seeing now is that the SDLC is becoming two loops that are becoming tighter. You have your plan, develop, test, release, monitor, and operate.

With AI software development, once we have an idea, we can fast track to a prototype and then have AI generate a more detailed plan from that. Developers are now engaging in fluid dialogue with agents to shape the requirements and architectures first. They can express concepts in natural language, explore trade-offs, and watch solutions take shape all before writing code. That detailed AI-generated plan includes relevant requirements, architecture choices, data handling details, and error handling strategies.

Our internal AI-powered planning systems will provide real-time visibility across all work at all levels of the organization. The tools will update documentation along the way as well. Software developers will spend more time developing software because we are eliminating or automating more of the undifferentiated work, including software maintenance and upgrades.

Thumbnail 1510

Developers are generating more features, but our AI code maintenance systems will create millions more software changes to release. We won't be able to use AI to generate millions of code changes unless we have a safe release process that is completely free from manual interventions, which means no failed builds and no failed tests, or at least no failures that require manual intervention. AI will automate our event remediation. When it can't do that, it will assist our operators with more data and insights so they can reduce the event duration. AI will also look for root causes and suggest upstream fixes to software to eliminate future events.
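To illustrate the shape of a release step with no human in the loop, here is a hedged sketch: every failure either aborts or rolls back automatically, and the release only completes after a clean bake period. The callables are stand-ins for your own pipeline's stages and alarm checks, not any AWS API.

```python
import time

def safe_release(build, deploy, rollback, healthy, bake_seconds=300):
    """Release with no manual intervention: failures abort or roll back.

    build/deploy/rollback/healthy are hypothetical callables standing in
    for a real pipeline's stages and alarm checks.
    """
    if not build():                # a failed build stops the release, pages no one
        return "aborted: build failed"
    if not deploy():
        rollback()
        return "rolled back: deploy failed"
    deadline = time.time() + bake_seconds
    while time.time() < deadline:  # bake: watch alarms instead of operators
        if not healthy():
            rollback()
            return "rolled back: alarm during bake"
        time.sleep(5)
    return "released"
```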

Thumbnail 1550

Thumbnail 1570

Thumbnail 1580

As a platform team, an area we focus on is how we free up developers to make the decisions that matter. We have been focusing on what we can eliminate, automate, or assist for our developers—the things that make developers more efficient. So how are we going to deliver that? Well, at Amazon, we leverage these building blocks that are necessary for us to reach AI native software development. Over the past few years, we have seen our internal Amazon developers adopt AI tools starting with Q IDE, then Q CLI, and now Q Repo. These are synchronous AI agents or AI assistants embedded in the tools that developers are using to interact with AI and agents like an IDE, CLI, or even a website.

We even have a team that integrated Q Business for knowledge discovery. Our team saw an issue where developers had two common surfaces where they asked questions: our internal Q&A site, a tool called Sage, and relevant Slack channels. Yet in both cases, developers had to wait for an answer. Our ASBX team saw this as an opportunity. We wanted to better assist our developers to remove that wait time. Our integration with Q Business allowed our solution to pass rigorous security and privacy bars along with scaling to support the volume of documentation and questions that Amazon developers generate. The result has helped tens of thousands of Amazon developers answer questions and get back to building.

Thumbnail 1680

We also integrate assistance into other phases of our release process and our software life cycle. The release assistant helps builders fix a broken release pipeline, and our on-call assistant helps remediate issues builders have operating their services. These are internal tools that existed before AI, and we are now using AI within them to complete tasks. With AI, we can move beyond simply assisting with tasks, and we continue to build asynchronous workflow agents that will perform work on behalf of our developers, sometimes without their interaction. Software maintenance is one of the big areas for us. Code transformation is one of those examples. We don't need to involve a software builder in every migration.

You may have read last year that we took a goal to migrate our pipelines from JDK 8 and 11 to 17 and above. Well, we did that with the assistance of Q Code Transformation. This allowed us to eliminate the heavy lifting of code transformations and move it to simply automating the complete end-to-end process of upgrading and modernizing applications. This significantly reduced the time and costs associated with transformation projects while enhancing security and performance. This year, we have used AWS Code Transform technology to do multiple dozens of these all at the same time. Initially, an agent might require a review from a builder, but the key is to continue improving workflow agents so that they become autonomous over time. Otherwise, we'll overload our developers with more work.

Thumbnail 1770

Thumbnail 1790

Agents can't perform expert work without the right foundational models. We saw a transformational shift in our use of AI earlier this year as new models came out, and we regularly evaluate models and have a curated list of what we support and recommend to teams based on multiple factors. But even the best available tools and models don't work out of the box on the software and tools that we've accumulated over the last 30 years. We have new and legacy software. We have a wide and diverse set of businesses, even some with their own bespoke tools and specialized needs.

Thumbnail 1850

This is where we are investing the most: integrating our internal tools and software ecosystems into AI agents. The tools and agents won't work without expert foundational models. This is where we use Amazon Bedrock to incorporate the latest models into all of our software tools and systems, and it's been amazing to see the advances in foundational models that we get to use. But even that's not enough. We need to do the internal Amazon integration so that the AI tools and foundational models understand our Amazon systems. This is where we're leveraging MCP across the entire company so the AI tools understand what a Brazil build is, our pipeline release processes, or GitFarm source control.
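As an example of what that kind of integration can look like, here is a minimal sketch using the open-source MCP Python SDK (pip install mcp). The server and tool names, and the Brazil and GitFarm lookups, are hypothetical stubs standing in for internal systems; only the FastMCP pattern itself comes from the public SDK.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-dev-tools")  # illustrative server name

@mcp.tool()
def brazil_build_status(package: str) -> str:
    """Latest build status for an internal package (stubbed here)."""
    # A real implementation would query the internal build system.
    return f"{package}: last build SUCCEEDED"

@mcp.tool()
def gitfarm_recent_commits(repo: str, limit: int = 5) -> list[str]:
    """Recent commits from internal source control (stubbed here)."""
    return [f"{repo}: commit {i}" for i in range(1, limit + 1)]

if __name__ == "__main__":
    # Exposes the tools over MCP so AI assistants can understand
    # these bespoke systems without hand-fed context.
    mcp.run()
```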

Thumbnail 1880

This is something that all AWS customers will also have to contend with, especially enterprise customers. How do you get the AI tools out of the box to understand your bespoke systems? This is how we do it. Today our developers still spend too much time providing manual context to get our tools to work. We have communities of practice sharing ideas and context files among team members. A lot of this is new undifferentiated work, but we're still a large organization and there's no one-size-fits-all context.

Thumbnail 1920

Thumbnail 1930

Thumbnail 1940

Thumbnail 1950

So we focus on peer-to-peer learning mechanisms like internal events and demos, and we build mechanisms so that we're constantly improving the knowledge of our tools, agents, and models so they don't become stuck in time. The way we approach building this knowledge is through a model we use for software development across Amazon. First is core: the things that are applicable to all of Amazon, like what source control to use, how to apply code review, or how to release software safely. Then we have common: the tools and architectures we recommend for web services or for the Stores mobile application. And lastly, we have custom: team-specific guidance. All of this has the goal of reducing manual context and sharing knowledge.
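One way to picture the core/common/custom model is as layered context that an agent resolves before it starts work, with each layer overriding the one below so a team only writes down what is genuinely specific to it. This is an illustrative structure, not Amazon's internal format.

```python
def resolve_context(core: dict, common: dict, custom: dict) -> dict:
    """Merge layered guidance: core applies to all of Amazon, common to a
    business or platform, custom to one team. Later layers win."""
    return {**core, **common, **custom}

core = {"source_control": "GitFarm", "code_review": "required"}
common = {"architecture": "web service defaults", "compute": "managed fleets"}
custom = {"compute": "GPU fleet"}  # this team's one deviation

print(resolve_context(core, common, custom))
# {'source_control': 'GitFarm', 'code_review': 'required',
#  'architecture': 'web service defaults', 'compute': 'GPU fleet'}
```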

Thumbnail 1960

Thumbnail 1980

Lessons Learned, Metrics That Matter, and Resources for Further Learning

So as you may know, adopting AI is not without its issues and challenges. Let me share with you some of our learnings. How many of you can relate to this? AI does not simply make the entire business go faster. Being intentional in how you approach your AI adoption is key to successful improvements and getting innovation to your customers. Our mental model is eliminating undifferentiated work wherever possible, like simplifying an architecture or code base, or using managed compute over hosts.

Thumbnail 2010

Thumbnail 2020

How can we eliminate paging an operator? Where we can't eliminate undifferentiated work, we look for ways to automate it, like software migrations or automatically remediating an alarm. Then we use AI to assist builders in their remaining differentiated work, like inventing for customers. Measuring lines of code does not get you AI productivity numbers. Many AI productivity claims are based on simple greenfield projects, which don't translate to true team outcomes. The reality is that development tasks are mostly brownfield coding and bug fixes.

Developers work in complex custom code bases with years of embedded knowledge. Modern developers own their products and features end to end. They do analysis, design, customer support, and operations, not just coding. Large enterprises always have bureaucracy and friction that prevent theoretical benefits from being realized. But we can use AI to reduce the day-to-day friction if we take a holistic view of product development, understand the fundamental concepts, context, models, and agents, and lean in where there is willingness and patience to experiment.

Thumbnail 2080

Thumbnail 2100

Just like how your developer experience impacts agents, security and safety mechanisms are needed to protect work done by agents, especially in the release and operate workflows where doing the wrong thing will impact your customers. Everything is moving quickly; agents are moving fast. There aren't always best practices. It's important to ground your work in the overall developer experience and the metrics we've established.

Here are some of the metrics we look at to see where there is friction in the developer experience. Use these as common baselines to see where there are patterns or hidden issues that could be addressed. Remember when I said we had tension metrics for cost to serve software? Well, one of these is high severity tickets. Actually, the real name is human actioned high severity tickets per normalized deployments. It's a mouthful, right?

What we found with this is we first looked at high severity tickets, but that was just a count and it wasn't in relation to cost to serve or the normalized deployments. So then we put high severity tickets per normalized deployments, but then we found out that a lot of bots were affecting our high severity tickets, so we had to move those out to truly have an indicator of quality. That's why we have human actioned high severity tickets per normalized deployments.
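A sketch of that matured metric, with field names invented for illustration: count only the high severity tickets a human actually had to act on, then normalize by deployments so the number stays comparable as velocity changes.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    severity: int      # lower number = more severe; sev 1-2 treated as high here
    actioned_by: str   # "human" or "bot"

def human_actioned_high_sev_per_deployment(tickets, deployments: int) -> float:
    """Human-actioned high severity tickets per normalized deployment."""
    if deployments == 0:
        raise ValueError("no deployments in this period")
    high_sev = [t for t in tickets
                if t.severity <= 2 and t.actioned_by == "human"]
    return len(high_sev) / deployments

tickets = [Ticket(2, "human"), Ticket(2, "bot"), Ticket(4, "human")]
print(human_actioned_high_sev_per_deployment(tickets, 100))  # 0.01
```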

The reason I went through that is because metrics mature over time. I know that what we've presented today is very advanced and maybe some of you don't have an idea of where to start with your developer experience, but wherever you start is a good starting point for you and will progress forward. If your metrics need to mature over time, let them.

Thumbnail 2190

Thumbnail 2210

There are many different metrics that we look at for AI outcomes. We look at leading and lagging indicators of AI, yet we don't just look at quantitative metrics. Since quantitative metrics don't capture enough nuance, we have a team of researchers that go out and use qualitative metrics and anecdotes to find the paper cuts in our tooling. This adds color to what the metrics are telling us.

Thumbnail 2240

This helps us create a holistic view into our approach that is changing our developer experience across Amazon. We share these metrics with our organizations so they can use them to improve as well. Here's a recent quote from one of our customers. She says ASBX metrics combined with the qualitative metrics from the tech survey insights have been instrumental in improving their builder experience strategy.

She goes on to say quantitative data showed where AI tool adoption was strong, but velocity outcomes uneven, while qualitative feedback revealed why. This guided investments in AI native transformation and operational automation, helping us build the right mix of centralized and differentiated tools to measurably improve their developer velocity and operational excellence.

I know this was a lot, so let me do a quick recap. You can show economic value of improving your developer experience. It's much more than lines of code. We know that there is value in capturing all the improvements across the full software development life cycle. Tailor your approach to your teams. Start small, start where you can.

Thumbnail 2310

Use metrics to prioritize investments and encourage adoption of the technical investments you know will improve the outcomes for developers. Look for ways you can eliminate, automate, and assist your developers. I'm going to invite Eva back up onto the stage because we have more than this presentation that we wanted to introduce you to.

Thanks, Bethany, for taking the time to share Amazon's story and its development workflows. We get asked all the time by customers how Amazon itself is thinking about this, and we appreciate you taking the time and driving improvements across our development teams. It seems that these concepts are really resonating with our own customers. I encourage you to take a picture of that if you haven't already.

If you're interested in learning more on this topic, there were a couple of sessions earlier this week. There was DVT219 that took place on Monday, and this session really walked through more of a tactical example of how a customer today is measuring the impact of tools like Amazon Q Developer and Jellyfish and how they're quantifying those impacts. Similarly, there was another session earlier today, DEV323, which walked through Prime Video's journey in a deeper session.

Thumbnail 2380

I encourage you to watch both of those videos once they're available on YouTube later this week. If any of you are interested, there's one that you can attend later this week. There's INV205: Reinventing software development with AI agents, taking place Thursday, December 4th from 11:00 AM to 12:00 PM PST at the Venetian, Level 5, Palazzo Ballroom B.

Thumbnail 2430

There are multiple options, but one that we recommend is our innovation session taking place on Thursday. That session will look at some of the new launches that we announced today, which you may have seen in the keynote, and how we're thinking about reinventing software development with AI agents and continuing to focus on developer productivity and improving the developer experience.

Additionally, more specific to this topic, we have a few blogs that were released earlier this week. We have one on the left here that was produced by our Amazon science team, and it walks through how we're measuring the effectiveness of software development with different tools and practices. Another blog in the middle was released by Bethany and her team, walking through the Cost to Serve Software framework you saw today and how we ultimately arrived at that 15.9% improvement. Finally, on the end there is a podcast done similarly by Bethany's team that walks through this story as well.

Thumbnail 2500

Whatever medium you want to consume, I encourage you to take a look at those resources. Bethany and I will be outside in the hallway should you have any questions and want to talk further about this. We really appreciate you taking the time on your Tuesday evening to sit with us and talk about developer productivity. We hope you have a great rest of your week here at re:Invent. Thank you.


; This article is entirely auto-generated using Amazon Bedrock.
