Kazuya
AWS re:Invent 2025 - From Lab to Market - AstraZeneca's Enterprise-Wide AI Success Story (IND214)

🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.

Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!

Overview

📖 AWS re:Invent 2025 - From Lab to Market - AstraZeneca's Enterprise-Wide AI Success Story (IND214)

In this video, Ujjwal from AWS and executives from AstraZeneca discuss their AI transformation journey in drug development and commercialization. AstraZeneca aims to deliver 20 new medicines by 2030 with $80 billion in revenue. Cassie Gregson presents their Development Assistant, an agentic AI system that went from POC to 1,000+ users across 21 countries in six weeks, integrating 16 data products with nine agents. Ravi Gopalakrishnan describes the AZ Brain platform for precision commercial operations, with 500+ experiments deployed. AWS showcases Bedrock AgentCore for secure agent deployment and their open source toolkit for healthcare and life sciences, providing templates for R&D, clinical, and content supervisors to accelerate production deployments.


Note: This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.

Main Part

Thumbnail 0

AWS and AstraZeneca Partnership: Transforming Drug Development with Agentic AI

Hello everyone. Good afternoon and welcome. My name is Ujjwal. I lead machine learning for Healthcare and Life Sciences at AWS, and I'm going to be joined today by Cassie Gregson, who's a Vice President for R&D IT, and Ravi Gopalakrishnan, who's the Vice President for Commercial Data Science and AI from AstraZeneca. We're going to talk about the entire transformation process that AstraZeneca went through in partnership with AWS on creating value propositions and use cases that led to a variety of different optimizations in their entire journey of taking a drug to the market.

Thumbnail 50

So the agenda for today is going to cover a couple of different points. The first thing that we'll touch upon is the impact of agentic AI. We'll talk about some of the learnings that we got during the process, how we have created ROI from a variety of use cases that have led to production deployments, and how we have done it. I'm then going to invite our speakers and guests from AstraZeneca to talk about their experience of working with AWS, the kind of use cases they've been building, how they've been building those, and what kind of AWS services they've been using. And then finally I'm going to come back and talk about some new additions that we have made on our platform that make all of this much easier. And of course, through this journey we have learned a lot that we have tried to bring as part of the innovation in our stack, the AgentCore stack that we have.

Thumbnail 100

Thumbnail 110

Thumbnail 120

Thumbnail 140

Thumbnail 150

Before I get started, I wanted to recognize the fact that AWS has been working on a variety of use cases across the top 20 pharmaceutical organizations in the world. They range from content summarization and chatbots for very standard use cases to production deployments across commercial, manufacturing, and drug discovery. And we've done this through a variety of services that AWS creates. We are extremely proud to say that 95% of the top 20 pharmaceutical organizations use AWS for generative AI and machine learning. Doing these projects taught us a variety of things that have now led to the innovations we have included in our stack.

Thumbnail 170

Building the Foundation: Data, Applications, and AI Agents Across the Pharmaceutical Value Chain

If I have to summarize our learnings into two major headers, the first one is that there are no shortcuts. Agentic AI is extremely transformational. It is at a stage where we're seeing a lot of value, but to get it right, it needs to be a journey that starts from data foundations, and those cannot be an afterthought. We've seen a lot of customers struggle to create data assets or data products that actually matter to an agent. A lot of these data products have traditionally been designed with analytics and human interaction in mind. Agents are not the same. Agents require specific patterns and specific assets organized in a certain way, and before you know it, even though you want to move quickly on agentic projects, you will run into walls unless your data assets keep up.

So making sure that your data foundations are actually in line with what you want the agents to actually give you answers to is extremely important. And then moving up, it's how you actually package these agents that also matters a lot. That's in the middle tier over there, which is the AI applications. Now these applications could be hosted within production systems that are in existing larger enterprise stack. It could be standalone applications that run out of a browser. It could be an extension of a browser or a chatbot that actually sits within an existing application. There are many ways in which we have seen these patterns evolve, so learning from there has led us into creating specific accelerators in that space too.

And then finally, when you have the data and the application strategy fixed, that is where the actual value of agents begins to emerge. We've obviously seen agentic AI applications start with simple, read-only summarization use cases that go and look for information. But now it's evolving into more sophisticated workflows that can be executed at scale. If you move along these steps toward all the sophistication and automation needed to run the system at scale, that is how we've generated a lot of ROI. Now this journey is not simple,

Thumbnail 330

but what I would say is that the learning through the years that we have got has led us to create accelerators that have short-circuited it. So instead of starting from scratch, you now have the option to actually start from the learnings you've already made. You'll hear a lot more about that when I come back towards the end of the presentation. But if I have to summarize some key use cases, they exist across the value chain of pharmaceutical organizations.

It starts from the R&D, so everything from target identification to hit optimization. There are multiple generative AI models that are specifically designed to understand proteins better, understand molecules better, their bonding sites and things like that. We are working increasingly hard to actually make these models available as managed services that researchers, lab technicians, and scientists can easily make use of.

Thumbnail 380

Thumbnail 400

Secondly, once a target or molecule is identified and the drug is optimized, taking it through the clinical process is also very suboptimal today. It's very sequential in nature and document heavy. So we've seen a lot of use cases in the clinical space: protocol generation, reviewing and authoring protocols, finding sites or patients, matching sites to patients for trials, and finding the right mechanisms to report any adverse events during the trial. All these are areas where we are seeing increasing adoption of generative AI.

Thumbnail 420

Thumbnail 440

And then when you come to manufacturing, we obviously have use cases around yield optimization on the pipelines and finding bad batches as early as possible in the cycle so they can be rejected. We've seen a lot of success with such use cases in the manufacturing space. And then finally, in the commercial space, where you're trying to find the right market for your drugs, tracking performance, phase four data, and real-world evidence are all areas where we've seen a lot of success with customers.

Thumbnail 470

Cassie Gregson on Clinical Development: The Development Assistant and AstraZeneca's Bold Ambition for 2030

So with that introduction, I would like to invite Cassie from AstraZeneca to take you through her journey with AWS and the clinical development phase. Thank you. Hello, I'm Cassie Gregson, and I'm the VP for R&D IT at AstraZeneca. And two decades ago, I never would have even considered that I'd be standing here talking to you today. In fact, back then, I was a research scientist, an apprentice research scientist in one of our early discovery labs at AstraZeneca in the UK. I always had a bit of an inkling that technology would be critically important and would transform the way that we do our research and development.

Thumbnail 500

But what I didn't know back then, and what we have recently launched, is our bold ambition within AstraZeneca. Our bold ambition, emphasis on bold, for 2030 is to be pioneers in science, lead in our disease areas, and transform patient outcomes. To do that, we'll deliver 20 new medicines by 2030. These are not existing medicines applied to other conditions; they're brand new medicines. We aim to be an $80 billion company and continue to sustain growth thereafter. I also didn't anticipate that AI would be so transformative in helping us achieve our bold ambition. So I'm going to talk a little bit today about how we're using it, especially in the development part of research and development.

Thumbnail 540

So in my role as global VP for research and development IT, there are three key areas where I believe that data and AI will truly transform what we're doing. That's partly, number one, in the speed and the decision making across our R&D pipeline. Secondly, increasing the probability of success, whether that is through identification of new molecules, identification of new targets, or increasing the probability of success for our clinical trials. And then finally, better predicting patient impact. So what I mean by this is identifying those patients that will most benefit from our medicines, really focusing on where we can have the most impact to all of the patients around the globe across our oncology and biopharma portfolios.

Thumbnail 590

So if we think about our incredible science that we have, it's led to a significant volume in our development pipeline. And by development, I mean our clinical research, our regulatory submissions, and our patient safety portfolio, as well as the quality of all of the research that we do across the enterprise.

In 2024 alone, we had 191 projects in our development pipeline, 27 new molecular entities that progressed to the next phase of development, and we invested 13.6 billion dollars in our science. Now it's super critical for us to get our medicines to our patients as quickly as we can, high quality medicines that have a positive impact. Every minute truly matters as we're continuing to fill that development pipeline with our life-changing medicines.

Thumbnail 650

So if we think about data, we think about our global clinical trial portfolio. The nature and the complexity of the work that we do across the globe can lead to disparate systems and siloed data. What that means is it can be really difficult to answer questions. For example, our clinical researchers may be thinking, what are our highest performing sites? Which clinical study sites do we go to as we're designing our clinical trials? The way that we've done that in the past is manual. It's hours and hours of work, pulling data from here, from there, a little bit of data here, transforming that data, analyzing that data, getting some insights. As I mentioned before, every minute really matters when it comes to getting our medicines as quickly as possible to our patients. That's really been a key focus of what we've been doing in the AI space: how do we accelerate all parts of the development pipeline with a keen focus initially in the clinical trial space, but then broadening that out to the rest of development.

Thumbnail 710

Thumbnail 760

This is where the work with AWS has really come in and positively impacted what we're doing. When we look at our disparate data, our siloed data, what we have done as part of this work is really bring together all of those disparate data sources, applying contextual ontologies, bringing in all of the agentic solutions, the frameworks and the products that you see here to enable our scientists, our clinicians, to ask questions in a very conversational way, in a natural language way. They're able to not worry about where's my data, what is it telling me, but actually focus on the answer so that they can then make decisions about the next phase of the clinical development cycle. In partnership with AWS, we have built an agentic powered Development Assistant that I'm going to show you a short demo of right now.

Thumbnail 770

Thumbnail 780

Thumbnail 790

Thumbnail 800

Thumbnail 810

So Development Assistant, what is it? As I mentioned, it's agentic, and I'll go through some numbers in a moment because the numbers are always really important. The agentic powered AI assistant, Development Assistant, looks across many different data products that weren't previously connected across our clinical, regulatory, patient safety and quality domains. It allows any user with the appropriate level of access to those data products, which is key when we're talking about regulated environments, to ask any question they have in a very natural language kind of way. Think ChatGPT, the kind of ways you interact with that. You'll see from the screen that it shows the reasoning, it shows the data, it shows you where it's pulled that information from, and provides all of the insights in a very simple, easy to understand way that you can then click through to every single source document, source data product that we have.

Thumbnail 840

You can go back and check: is this accurate? Is it telling me the right information? Oh, that's a really interesting insight; I'm not sure I believe that, let me go and understand it. Or, you know what, I've found out something brand new today that enables me to make that next decision. So let's talk numbers. I like numbers, I love data. With the Development Assistant, the key piece is that the partnership with AWS really turbocharged what we were able to achieve. In six weeks, we went from a proof of concept to an MVP reaching more than 1,000 users across 21 different countries. This is a truly global technology. We pulled together 16 different data products across the clinical, patient safety, regulatory, and quality domains. There are nine agents working together, so it's truly a multi-agent system. There are eight knowledge bases and seven domains.
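A multi-agent assistant like the one described here is commonly built as a supervisor that classifies each natural-language question and routes it to a domain agent, which answers with provenance back to its data products. The following is a minimal, self-contained sketch of that routing pattern; the domain names, keyword rules, and data product names are purely illustrative, not AstraZeneca's actual implementation.

```python
# Hypothetical supervisor-style router: classify a question into a domain
# and delegate to that domain's agent. Keyword matching stands in for the
# LLM-based routing a production system would use.

DOMAIN_KEYWORDS = {
    "clinical":   ["trial", "site", "protocol", "phase"],
    "regulatory": ["submission", "fda", "approval"],
    "safety":     ["adverse", "event", "pharmacovigilance"],
    "quality":    ["sop", "audit", "deviation"],
}

def route(question: str) -> str:
    """Pick the domain whose keywords best match the question."""
    q = question.lower()
    scores = {d: sum(k in q for k in kws) for d, kws in DOMAIN_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

def answer(question: str) -> dict:
    """Delegate to the routed agent and return an answer with provenance,
    mirroring how the assistant surfaces reasoning and source data."""
    domain = route(question)
    return {
        "domain": domain,
        "answer": f"[{domain} agent] response to: {question}",
        "sources": [f"{domain}_data_product"],
    }

print(answer("What are our highest performing phase 3 trial sites?"))
```

In a real deployment the supervisor would be an LLM call and each domain agent would query its own governed data products, but the routing-plus-provenance shape is the same.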

All of this together, imagine, one year ago, six months ago, I would have had to have gone in, find that data, think about my question, answer my question. Now I can just go to Development Assistant. Hey, how many clinical trial sites do we have? How do I find out this information about this SOP? I remember reading it a few years ago. I can't quite remember what it told me. What can you tell me about a particular phase three clinical trial?

Thumbnail 910

So all of this information is now at the fingertips of all of our scientists across the globe. And as I mentioned, every minute really counts when it comes to what we're delivering. Every minute is super important, so every single minute, every week, every day, every month that we can shave off any of our pipeline enables us to have that positive impact much sooner.

But really, it's not about data and technology. It's about us focusing on identifying, discovering, and developing our medicines for today, tomorrow, and the day after, and the patients are truly at the center of everything that we do. So our focus is to accelerate that with high quality. And once we have that information and we have those successful molecules, we have the regulatory submissions approved by the FDA and the other regulators, we then hand that over to our colleagues in commercial.

And our commercial colleagues then focus on making sure that our medicines are getting to the healthcare professionals. They're getting to the patients where it's really needed. And so with that, it's a great pleasure that I hand over to my colleague Ravi, who's going to talk you through the next stage of the pipeline. Thank you very much.

Thumbnail 1020

Ravi Gopalakrishnan Introduces Precision Commercial: Strategy, Planning, and Execution

Thanks, Cassie. I'm Ravi Gopalakrishnan. I'm the Vice President of Data Science and AI for Commercial at AstraZeneca. My team's job is to serve two of our biggest businesses, our Oncology Business Unit and our BioPharma Business Unit, and to make sure that the medicines from all these wonderful AI-accelerated pipelines our R&D team is delivering actually reach our patients at the right time, when they need them the most. That's our mission in commercial.

So to achieve our ambition for 2030, which is 20 new medicines and $80 billion in revenue, you have to have precision commercial. What do I mean by precision commercial, and why do we need it? Today, as we sit here, there are many, many patients being diagnosed with early-stage lung cancer or metastatic breast cancer. What do they do? They've just discovered they have cancer, and they have to go through multiple steps before they get the treatments we already have that can be life-changing.

They have to go talk to the right specialist. They have to talk to a surgeon. They have to be tested for biomarkers. They have to make sure that they get all the right pre-treatments before they're eligible for our drug. So that's what we mean by precision commercial. And in order for us to deliver precision commercial, we need to look at our entire commercial organization, which is sales, marketing, medical, market access, working together across three main pillars.

One is around helping strategy. What do we mean by strategy? Really understanding our patient pathways, patient journey insights in depth, understanding our HCP behaviors. Are they really following guidelines or not? What are their preferences in treating certain patients? What are their preferences in doing biomarker testing? We need to have a complete understanding of all that. Are they following guidelines or not? Are there care gaps because of not following guidelines that are severely impairing patients?

And then who are the influencers in HER2-positive biomarker treatment, or who has done the most research? Who are the influencers that we should engage to spread the word? So that helps inform the strategy for every new drug, every new indication that we deploy. The second aspect of delivering precision commercial is around planning. Who do we engage with early on during the launch? Who are the early adopters, and what is the propensity of HCP to prescribe our drug versus some other treatment which is not following the guidelines?

And then things like forecasting to enable how we grow our business across the business team. So all this is around planning. And then the last mile is all about executing. How do we provide sub-national territory-level insights to our field representatives? How do we provide the right level of digital and marketing and media investments to our marketing colleagues and our agencies? How do we create the right content that's highly personalized and delivered on the right channel to the physicians and patients at the right time?

Unless all three of these things work together, you can't really deliver precision commercial at scale. Now, let's see what we need to do to deliver precision commercial at scale.

Thumbnail 1210

AZ Brain Platform: Unified Data Foundation, AI Models, and Scaling Across Therapeutic Areas

I think we need an enterprise-grade platform that is running on top of a unified linked data foundation. And then we have to invest in different ways to answer different types of questions. So we embarked on this journey of building a platform that we call AZ Brain.

We didn't start with technology. When we thought about the AZ Brain, we actually talked to all our users, our field teams, our MSL teams to really understand their story. What do they really need to ensure that our life-changing medicines get to the right patient at the right time? So we started with the use case very intentionally across everything and then started building the components to deliver on that use case, to provide the right level of insights.

There are four key components to the AZ Brain platform. The first thing is a solid data foundation. Data, as you all know, is extremely siloed and it comes from so many different sources. You have claims, multimodal claims, and EMR data which gives you all the real-world evidence insights. Then you have our own market research data. We have a lot of conversations with doctors. We have a lot of domain intelligence on our doctors and our patients, so that's captured in some format.

Then we have all our medical research, clinical trial reports, and publications. We have all these big events like ASCO and ESMO, where everything is published and a lot of people are speaking about new data. So that's all coming in as well. And then there are NCCN guidelines, which are continuously updated and which doctors need to keep up with. For precision therapy, the rate at which the NCCN guidelines are changing is astronomical. They're changing every few months.

Then we have all our internal data. It's all our internal domain knowledge and our interactions with HCPs through our CRM systems and all our digital engagement systems. So we bring it all together and build a solid data foundation. And then the next step is a whole host of AI models and services. Again, those are all use case specific, based on understanding patient pathways, lines of therapy for oncology.

And then we use that to predict patient eligibility for a particular indication or particular drug, patient progression across lines of therapy, how patients are going to respond based on their personal characteristics. So there are a whole bunch of predictive models as well as a whole bunch of AI classification models, again, use case specific to guide our field team, to guide our marketing team, to also guide our overall strategy and planning team.

And then it's no good if you just have models. The models and the insights from the models, whether it's predictive or otherwise, need to be delivered to the right users with the user experience that is easy for them to make decisions on or take actions on. So that's why we build a whole suite of products. Products can either be standalone applications or they could be embedded into workflows.

Thumbnail 1440

For example, we have this thing called predictive field triggers. A patient is going to show up to a doctor with certain symptoms because they've progressed from one line of therapy to another, which makes them eligible for our drug. So we provide this insight through a real-time notification to our field teams, and then they act on it and have a meaningful, timely conversation with the oncologist, because they know that a patient who has progressed is going to show up. So that's the level of precision around triggers.
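The trigger logic described above can be sketched as a simple rule over a model's progression prediction: notify the right territory only when the patient is confidently predicted to have reached the eligible line of therapy. Everything here is hypothetical for illustration: the field names, the eligible line, and the confidence threshold are invented, and a real system would sit behind a predictive model and a notification service.

```python
# Illustrative "predictive field trigger": emit a notification when a model
# flags a patient as newly eligible. Thresholds and fields are assumptions.

from dataclasses import dataclass

ELIGIBLE_LINE = 2       # assumed: drug indicated at second line of therapy
SCORE_THRESHOLD = 0.8   # assumed model-confidence cutoff

@dataclass
class PatientSignal:
    patient_id: str
    predicted_line: int       # predicted current line of therapy
    progression_score: float  # model confidence of the progression
    territory: str

def trigger(signal):
    """Return a notification dict for confident, eligible progressions; else None."""
    if (signal.predicted_line == ELIGIBLE_LINE
            and signal.progression_score >= SCORE_THRESHOLD):
        return {
            "territory": signal.territory,
            "message": (f"Patient {signal.patient_id} likely progressing to "
                        f"line {signal.predicted_line}; timely HCP conversation advised."),
        }
    return None

print(trigger(PatientSignal("p-001", 2, 0.91, "NE-14")))  # fires
print(trigger(PatientSignal("p-002", 1, 0.95, "NE-14")))  # not eligible
```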

Thumbnail 1480

That's one example of a product. And then we have a whole host of products for our headquarter teams, our medical teams, our market access teams. So there are a whole host of users and personas. So one is to build a platform and a whole host of products for one therapeutic area, for lung cancer. Another is how do you scale. So I think this is not just about building a platform that works only for one therapeutic area or one medicine.

Scaling is a very important aspect, and that's the whole concept of platforming and building reusable components. So scaling in multiple dimensions. You have to scale across multiple tumor types for oncology, multiple disease areas in biopharma, multiple biomarker type therapies across both. So that's one dimension of scaling.

As part of this, and based on the use cases, we've developed 500-plus experiments. Not all of them were needed; maybe half went into production. They're all running, being maintained, and continuously retrained, and that's what will get us to ensuring that every patient gets access to the right medicine at the right time. We have to scale across disease areas, and we have to scale across markets. We started in the US, have now scaled to all of Europe, and will eventually reach Asia and South America.

Thumbnail 1550

From Insights to Action: Agentic AI Automating Workflows and Achieving Bold Ambitions

So far, what I've talked about is foundational AI capabilities. I think now the next step that we've embarked on for the last two years is to build agents. Agents essentially move from AI-driven insights into actual agents performing tasks, orchestrating and automating workflows, and making a whole bunch of decisions on behalf of each of our business personas, guided by the human in the loop. This broadly falls into five different domains. First is an agent around insights generation from real-world evidence data. Our medical colleagues are constantly mining through lots of publications and data and manual analytics to get to that.

Second is around content: how do you get more agility in promotional and medical information content, creating it at a rapid pace and getting it through reviews. The third use case where we've seen a lot of value is reimbursement dossier authoring to get approvals for a drug in a particular market. That's a pretty laborious process too. Where we think agentic AI can help is in automating some of these manual processes, bringing work that used to take months down to weeks.

Same thing with market research, a heavily manual process. If you have a new TPP (target product profile) for prostate cancer, it takes about three months, a lot of resources, and expensive domain expertise to create views like market share, which are then used for forecasting to see whether the TPP is ready to go to market. Same thing with our marketing team. So there are a whole host of workflows that can be automated through agents.

Thumbnail 1660

So let's see where we are. We've been on this journey using the AWS ecosystem for a while, and these are some of the agents. IQ is an agent, similar to the Development Assistant that Cassie showed, used by our commercial colleagues to perform certain tasks. It's a collection of about 20-plus agents that queries and cross-interrogates all the different data sets, models, documents, and guidelines to not just answer questions but actually give you very concrete recommendations at an N equals 1 level.

We say N equals 1 because every HCP, every patient is a very unique combination for targeted therapies. That's the level of insight. And what we've seen is amazing results: people who use and engage with it generate two times more scripts, contributing to revenue and reaching more patients. So that's a good learning. This is in production now, used by pretty much all our lung franchise teams to drive the business.

The second is around the content lifecycle. We've built a suite of agents that takes complicated scientific literature and tables from a whole set of approved publications and documents and formats them in the very specific way regulatory authorities require, filling in templates: Germany has one, Canada has another, the US a third. That used to take many months and a lot of resources, and every delay in getting approval is a delay in getting patients access to our drugs. So it's super important.

Same thing on market research: we went from three months, from a TPP to market share estimation, forecasting, and commercial preparation, down to two weeks. That's again the power of agentic AI. These are some of the things we've worked on, with a lot of learnings along the way. But to achieve our Ambition 2030, which Cassie talked about, $80 billion in revenue, 20 new medicines, and transforming patient outcomes along the way, ambition alone is not enough.

You need precision. Every patient, every HCP needs to be treated differently, and that's where AI and agentic AI help. AI helps with breadth. Agentic AI helps with speed. And then we have our people, who have a purpose. The combination of all three is what is going to help us achieve our bold ambition and, along the way, transform care for patients. And another big, bold ambition is to eliminate cancer as a cause of death.

Thumbnail 1870

AWS Bedrock AgentCore and Open Source Toolkit: Making Production-Grade Agent Development Accessible

Thank you, Ravi. So as you heard, this is a really bold ambition, and achieving such things at scale, with the platforms that AstraZeneca has been building, really comes back to primitives: the specific technology assets that need to be created to support AstraZeneca on the journey they are on. So what I'm going to do right now is bring it back to some of the key components that we have launched recently or are actively working on. Our ambition is to be the best place to build agents. That is our North Star, and it's a very wide goal if you think about the ways people are building agents today.

Thumbnail 1930

It includes a variety of different frameworks, a variety of different infrastructure deployment architectures, orchestration models, tooling, standards, and it's very important for us to understand each of these different patterns and translate them into services that allow for these to be executed in a manner that you don't feel the heavy burden or undifferentiated heavy lifting of taking these use cases into production. So what I want to start with is Bedrock AgentCore. This was a service that was unveiled at our New York summit earlier this year, has been generally available for a few months, and we have seen amazing response from the audiences just because of the way in which AgentCore makes some of these capabilities available to developers.

Thumbnail 1950

So it provides a very secure runtime. That's the first thing that AgentCore provides. With the runtime, you can deploy these agents at scale, in isolation. That's also very important: infrastructure-level isolation is something we continuously hear from customers as a requirement, especially in regulated industries where you need an agent to run in a very governed manner as far as deployment architectures are concerned. It also provides a gateway that allows you to access external tools. These tools could come from our marketplace, where we have a variety of agents listed from third parties, or they could be tools you build and make available via protocols like MCP, or simple functions that can be wrapped into a container; there are a variety of ways in which AgentCore enables that.

It also has a unique way of managing memory. It preserves both short-term, in-context memory and long-term memory, and it has intelligent ways of moving those memories between various storage tiers. AgentCore also provides authentication and identity for these agents. In a lot of cases, a certain orchestration step requires you to authenticate against a database, as in the clinical development agent example we saw earlier: questions asked by end users in natural language were translated into actual SQL queries that ran against a database.
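The short-term versus long-term split can be sketched as a bounded in-context window that offloads the oldest turns to a long-term store as new ones arrive. This is a hypothetical design to illustrate the idea; the real AgentCore Memory service manages this across storage mediums on your behalf.

```python
# Illustrative sketch: two-tier agent memory. A bounded short-term
# window holds recent conversation turns in context; turns that fall
# out of the window are offloaded to long-term storage. Hypothetical
# design, not the AgentCore Memory API.
from collections import deque


class TieredMemory:
    def __init__(self, window: int = 3) -> None:
        self.short_term = deque(maxlen=window)  # recent, in-context turns
        self.long_term: list = []               # offloaded history

    def add(self, turn: str) -> None:
        if len(self.short_term) == self.short_term.maxlen:
            # Offload the oldest turn before the window evicts it.
            self.long_term.append(self.short_term[0])
        self.short_term.append(turn)

    def context(self) -> list:
        return list(self.short_term)


mem = TieredMemory(window=2)
for turn in ["turn1", "turn2", "turn3"]:
    mem.add(turn)
print(mem.context())   # ['turn2', 'turn3']
print(mem.long_term)   # ['turn1']
```

A production system would summarize or embed the offloaded turns rather than store them verbatim, but the tiering logic is the core idea.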

Thumbnail 2070

To build that workflow end to end, you need several steps of authentication. You need to understand what kind of access the user has: do they have permission to retrieve information from that database? If they do, how do you manage the authentication of that query against the backend? All of that is heavy engineering work that AgentCore Identity solves for you. These are some of the primitives we've been building to make AgentCore suitable for these kinds of production use cases.
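A stripped-down sketch of that access check, assuming a hypothetical permission table and using the standard-library `sqlite3` module as a stand-in backend: verify what the user may read before the translated SQL is ever executed.

```python
# Illustrative sketch: check a user's access before running a query
# against a backend, as an identity layer might. The permission
# model, user names, and data here are hypothetical.
import sqlite3

# Hypothetical per-user table permissions.
PERMISSIONS = {"analyst": {"trials"}, "guest": set()}


def run_query(user: str, table: str, conn: sqlite3.Connection):
    # Step 1: does this user have access to the requested table?
    if table not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not read {table}")
    # Step 2: only then execute the (already translated) SQL.
    return conn.execute(f"SELECT name FROM {table}").fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trials (name TEXT)")
conn.execute("INSERT INTO trials VALUES ('AZ-001')")
print(run_query("analyst", "trials", conn))  # [('AZ-001',)]
```

In practice the identity service would exchange tokens with the database rather than consult an in-memory dict, but the "authorize, then execute" ordering is the point.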

Thumbnail 2080

In addition to AgentCore being available as a service, earlier this year we unveiled an open source toolkit for healthcare and life sciences on AWS. The toolkit is a set of templates, examples, and deployment scripts, all available under the MIT-0 license, so you don't need to pay anything to get access. What it allows you to do is get started very quickly.

Even though many of you in the room may be familiar with what AgentCore is, there's still a learning curve. We continuously launch so many new features and capabilities into our stack that it's hard for developers to keep up.

So what we're doing on our side, to make it a little easier for the developer community in healthcare and life sciences, is identifying standard data sources and standard agents that many of these use cases need, and taking on the burden of creating templates for them. When you have to develop something similar, you don't have to start from scratch. This is targeted mostly at developers who are comfortable with our APIs, comfortable with code, extending it, and deploying it themselves, and it builds mindshare.

So if you're experimenting with a few use cases, you can clone this repository, look at the examples in a sandbox environment, and extend them. We're always seeking contributions to this repository from developers. We've seen an excellent response to some of the use cases we've already made available, and we'll continue to maintain the toolkit as AgentCore evolves, so you don't have to keep up with every new addition to the platform. This is an effort we're driving in an open source, community-driven way, so that more such use cases can reach production deployment quickly.

The toolkit is divided into supervisors, where a supervisor is essentially an orchestrator with access to certain types of tools. For example, the R&D supervisor, or biomarker supervisor, has access to information on molecules, clinical trials, and research, all available to the orchestrator. When a user asks a question, the supervisor orchestrates across each of its sub-agents to get to the answer.

Similarly, the clinical supervisor has access to information that helps it design and review clinical trials, look at inclusion and exclusion criteria, and compare trials and outcomes. And the content supervisor takes all of this information, generates reports for submissions, and searches through competitive analysis and the like.
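The supervisor-over-sub-agents pattern can be sketched as a router that hands a question to the sub-agent whose domain matches it. Real supervisors in the toolkit use an LLM to plan that routing; the keyword matching, agent names, and canned answers below are stand-ins to show the shape of the orchestration.

```python
# Illustrative sketch: a supervisor routing a question to sub-agents.
# Real orchestrators use an LLM for routing; the keyword rules and
# sub-agent names here are hypothetical.
def biomarker_agent(question: str) -> str:
    return "biomarker answer"


def clinical_agent(question: str) -> str:
    return "clinical answer"


# Map a routing keyword to the sub-agent that handles it.
SUB_AGENTS = {
    "biomarker": biomarker_agent,
    "trial": clinical_agent,
}


def supervisor(question: str) -> str:
    for keyword, agent in SUB_AGENTS.items():
        if keyword in question.lower():
            return agent(question)
    return "no matching sub-agent"


print(supervisor("Which trial compares outcomes?"))  # clinical answer
```

Swapping the keyword loop for an LLM call that picks the sub-agent, and the canned strings for real tool invocations, gives you the production pattern.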

Thumbnail 2270

Now, while the open source toolkit is a great first step, and we've seen developers respond, stand up sandbox environments, and get started with it, there's still a set of steps you have to take, once you're satisfied with a use case, to move these agents into production. These include things like compatibility with your existing SSO and how you manage agents at scale. While developer productivity is extremely important for getting started with MVPs and POCs, the real value comes when you can run them at production grade.

Thumbnail 2340

There are also questions about which orchestrator architectures to use; that's another thing we hear from a lot of customers. In our effort to answer those questions: there's no right or wrong in terms of which framework you choose, because your end goal remains the same, but a point of view is extremely important, because it grounds you in something you can start with and move into production with the trust that AWS brings to the table. So on our side we've been taking all of these questions and seeing what we can do better in terms of providing these assets to customers.

Thumbnail 2360

What we recently launched is the ability to package a certain set of these use cases as assets, under the categories you see here, and make them available to you in a more production-ready form. If you're a developer who loves to experiment with code, you can start with the GitHub repository I showed you earlier. And if you're more inclined toward a consulting or customization engagement for these use cases, we've packaged them so it's very easy to work with our consulting teams, or with your own engineering teams, to deploy them into production, because we've gone ahead and made the use cases that have seen the best response available inside the portal. I know there might be some questions; if you can save them for the end so I can get through the content, I'll address them then.

Thumbnail 2410

The Complete Stack: Infrastructure, Development Services, and Life Sciences-Specific Accelerators

So here's how things look in terms of workflow. If you're a native builder, we have a lot of AWS services. We saw great launches throughout the week at re:Invent, with services like Quick Suite, AgentCore, and Bedrock with all of its additions and generative AI capabilities for you to get started with. We are accelerating our investments in that space, creating capabilities for developers to quickly start and experiment on their own.

Thumbnail 2490

Now, in addition to what we're doing across the entire AWS stack, a team of us is forking that stack and making it specific to life sciences and healthcare. You saw a stat earlier that 95% of the top 20 pharma companies work with us on such problems. That has given us a lot of feedback and led us down the route of packaging some of these services in a very specific way, for certain types of use cases, for the developer community. That's the middle ground that the AWS open source toolkit for healthcare and life sciences occupies. And finally, there's the layer where the portal comes in: there you'll see a proper set of use cases to select from, and once you're ready to go further, you can deploy them into production.

So this is how the stack looks after all these new additions. On the bottommost tier we have our infrastructure, which includes our containers, our newly announced Trainium 3 chips, and our ability to fine-tune and train models, which is extremely important for regulated domains like healthcare and life sciences. We've made a lot of investments in that layer, not only in the infrastructure itself but also in APIs that let you use that infrastructure to fine-tune pre-trained models. You heard about Nova Forge, a great technology for domains like healthcare and life sciences that lets you update model weights, blend them with your own data, and make use of a specialized model that truly creates differentiation for you as a customer.

I always get asked: if everyone has access to the same model, how do we differentiate? The real differentiation comes from your data. The problem so far has been that your data could only reach the model through agents or through RAG applications, which is great, but now we can make that data available at the pre-training phase of the model, which really creates differentiation in the specificity you get in some of these use cases.

On top of that, we have the development services. These include our specific models and our capabilities for guardrailing and optimizing them. We have AgentCore, which lets you build these agents at scale, all integrated into a variety of capabilities that use these frameworks for specific use cases. And in addition to the baseline stack I showed you, the life-sciences-specific additions are the toolkit and, of course, the AI portal I mentioned earlier.

We're very excited about the response we're getting on some of these use cases. We're seeing great demos being created, and academic collaborations: the Stanford Biomni project, for example, is now available in one of these toolkits. So if you're in the business of searching literature, finding details about a molecule, or querying the TCGA database for oncology-related questions, these are all available as MCP servers through this toolkit. And you can blend all of this with your own datasets, your own assets, your private molecules, because it all runs within the confines of your account and never leaves it, so it's easy to keep building very specific workflows that go deep into the domain.
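For a sense of what "available as MCP servers" means, here is a sketch of the request and response shape of an MCP-style tool exchange, using plain dicts instead of a real server. The `tools/list` and `tools/call` method names follow the MCP convention; the `search_literature` tool, its fields, and its canned result are hypothetical.

```python
# Illustrative sketch: the shape of an MCP-style tool exchange,
# handled in-process with plain dicts. A real MCP server speaks
# JSON-RPC over a transport; the tool and its result are made up.
def handle_mcp_request(request: dict) -> dict:
    if request["method"] == "tools/list":
        # Advertise the tools this server exposes.
        return {"tools": [{
            "name": "search_literature",
            "description": "Find papers mentioning a molecule",
        }]}
    if request["method"] == "tools/call":
        # Execute the named tool with the supplied arguments.
        molecule = request["params"]["arguments"]["molecule"]
        return {"content": [{
            "type": "text",
            "text": f"3 papers found for {molecule}",
        }]}
    return {"error": "unknown method"}


listing = handle_mcp_request({"method": "tools/list"})
result = handle_mcp_request({
    "method": "tools/call",
    "params": {"name": "search_literature",
               "arguments": {"molecule": "example-molecule"}},
})
print(listing["tools"][0]["name"])      # search_literature
print(result["content"][0]["text"])     # 3 papers found for example-molecule
```

An agent's gateway would discover tools via `tools/list` and then invoke them via `tools/call`, which is exactly how the toolkit's literature and TCGA servers are meant to be consumed.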

Thumbnail 2670

Thumbnail 2680

Thumbnail 2720

If you want to know more about these accelerators, scan this QR code. It gives you a contact form and lets you explore the portal and the different use cases, and we'd love to work with you on some of them. And if you're sticking around, there's still time to visit our pavilion. We have an extremely good setup this year: a demo of an AI-powered lab in the loop that walks you through an in silico molecule optimization exercise, from hit optimization all the way into the clinical stage and into the lab, and a healthcare demo that takes you through a patient's journey from scheduling an appointment to care delivery. We'd love for you to check them out, and if you have any questions, we'll be around and happy to take them. Thank you.


; This article is entirely auto-generated using Amazon Bedrock.
