🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - What’s new in fullstack AWS app development (DVT204)
In this video, Mark Rambow and Salih Güler discuss AWS Amplify's evolution for full-stack development with generative AI integration. They introduce AWS MCP (Model Context Protocol), which consolidates multiple MCP servers to provide best practices, documentation, and standard operating procedures for building production-ready applications. The presentation demonstrates how Amplify Gen 2 simplifies complex features like passkey authentication, email MFA, Storage Browser, and WAF integration with minimal code. Through a live demo, they build a real-time location-sharing app showcasing authentication flows, S3 file management, and deployment pipelines. The speakers emphasize that while generative AI accelerates development, frameworks like Amplify ensure security and best practices are built-in, increasing production-ready code success rates from 10% to 90%. They announce plans for deeper CDK integration and broader infrastructure-as-code platform support in 2026.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Full-Stack Development and Generative AI
Welcome to our talk on what's new in AWS full-stack app development. My name is Mark Rambow, and with me is my co-presenter Salih Güler. We are both based out of Berlin and have a background in full-stack development, which is what brought us here to have this talk with you. Today, I will be discussing what's new in full-stack development. Obviously, one of the biggest changes in the last few years is how generative AI is being used, and software development is no different.
Because we do not want to make this a sales presentation telling you that generative AI is solving all of your problems, we will have an honest checkpoint with you about where the gaps are, what generative AI can do, where it does not do well, and what we can do about it. To make things real, we will have a live demo with Salih, who is brave enough to build a real app on stage for you.
Let me see who we have here today. How many of you would identify as a full-stack developer? Please raise your hands. That is pretty much the majority. Nice. I think you found the right talk. How many of you are using generative AI tools day to day to do software development? That is a bit less; maybe we can change that a little bit. Now tell me, who of you has used generative AI and checked it in right away to production without double- or triple-checking it? That is what I thought. Nobody is doing that, and I am very happy about that.
The Reality of Full-Stack Development: Complexity and the Role of Generative AI
So what is a full-stack developer? For me, a full-stack developer is handling the entire stack: front end, back end, and databases. Most of you will be able to code an idea into production and have it running right away. Who of you are full-stack developers because the team is small, or because you are maybe solo running your own startup? Yes, okay. For me, full-stack development has something beautiful in it. It is a very pragmatic approach. It is about velocity and ownership.
Because you are usually alone building your application, you can build everything from the back end to the front end. You have an idea, and you can bring it all the way through. This is the prototype of an entrepreneur: a single person with a great idea, a great mind, and great skills can build it all. However, while you have all of this control, it also comes at a certain cost. You need to deal with a lot of complexity. Most of the things you are dealing with and juggling are a mental burden.
You have to deal with all the different things that might break in the night. You have to deal with security vulnerabilities that you need to keep track of. A dependency gets a new update and a new vulnerability is found. Is my app really working all the way through? All of a sudden you are on TV or some social media outlet. Does it scale? All of that you have to deal with, but it is not something that you will enjoy all the time. In the last year, the nature of that complexity shifted. It is not that generative AI solves all of it. Rather, the complexity shifted into a different shape that you now have to deal with.
So what do you need as a full-stack developer? You have to deal with hosting. You have to deal with the content delivery network with your DNS records. You have infrastructure as code, hopefully, because you are not doing click ops. You have to deal with your API gateway configuration, your compute, your database, your storage, your authentication, all of it. Not only that, but you also have a pager because between all of those integration points that you have to deal with, there might also be a little bit of business logic that you need to implement to actually make that app real. At least in my time as a full-stack developer, things usually break at three in the morning when you really do not expect it, or on a Sunday afternoon when you are out there with your kids.
Dealing with all of those things is a lot on your plate. Generative AI in the last few years actually gave you a boost, right? You're much more productive and get a lot more things done, but faster is not always better.
How I see generative AI, and I don't know if this is a popular opinion, but at least this is how it comes to mind for me: generative AI is the fastest keyboard on Earth, but it's not really more than that. There still needs to be someone who knows what you want to achieve. Generative AI usually does not have an intent or an objective to reach. It is something that brings you forward fast and very confidently produces a lot of code. Most of the time the code is correct, but do you know if it follows best practices? Does it give you secure components? So you have this fast pace now that you need to deal with, and fast also means a lot more complexity to understand.
AWS Amplify: A Production-Grade Framework for Full-Stack Applications
If you remember the last 20 years, when we had to deal with complex integration projects, there were frameworks that helped us through it, and that is no different today. Think about AWS Amplify, a full-stack application framework from AWS: it has comprehensive service coverage, global hosting, TypeScript for full-stack development, and it gives you pipelines, a sandbox, and UI building components, and all of them fit very well together. It lets you use CDK to define your infrastructure and extend that infrastructure with custom CDK code where Amplify does not have a component for you right away. What's new is that we also put some effort into making it easier for Cursor or your preferred AI tool to actually use it.
The cool thing about Amplify is that it's been out there for seven years or so, and it is used globally in hundreds of thousands of apps. All the components defined in Amplify actually work; they are battle-tested, we have fixed the bugs, and we keep them current. Now, if you use generative AI, the problem most of us have seen is that if you ask it to create a database, it goes with Python code and Boto3 and gives you all the SDK code required to set up your database. That might not be what you want, and it might also not give you guidance on whether you want an Aurora database or DynamoDB. Obviously I always want DynamoDB, but that might not be the right thing for your specific use case.
Getting to your specific use case is not so easy; a lot of prompting has to happen. Amplify helps by giving you predefined constructs that you can build on. That helps the AI limit the scope of possibilities, which also leaves you a lot less code to review.
Amplify, as I said, has already been out there for a long time, and this year alone we have built 31 major features, which was basically multiple deployments of new things every month. To pick out a few: we integrated CDK across Amplify, and one thing we added this year was the CDK Toolkit Library, which allows you to use CDK programmatically under the hood. It gives you full access to the L2 and L3 construct abstractions, and CDK promises to have AWS best practices built in, so you can rely on it to be the right thing to use.
The other thing that we built quite heavily on was Storage Browser, so all the components here. Storage Browser is basically a micro site that you can embed and it helps you to host or to manage your files on S3. It gives you permissions, download capabilities with multi-file downloads, thumbnails, custom validation for file uploads. You can share them privately or with a certain group, so you have really enterprise control over what's needed in your file storage. The cool thing about that is you don't have to build that with custom code and generative AI, you can just drop it in.
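To make the drop-in nature concrete, here is a minimal sketch of embedding Storage Browser in a React app. It assumes the `@aws-amplify/ui-react-storage` package and an `amplify_outputs.json` generated by an Amplify Gen 2 backend; treat the exact imports and option names as illustrative and verify them against the current Amplify UI documentation.

```typescript
// Sketch: embedding Storage Browser in a React app.
// Assumes an Amplify Gen 2 project with a generated amplify_outputs.json.
import { Amplify } from 'aws-amplify';
import {
  createAmplifyAuthAdapter,
  createStorageBrowser,
} from '@aws-amplify/ui-react-storage/browser';
import '@aws-amplify/ui-react-storage/styles.css';
import outputs from './amplify_outputs.json';

// Wire the app to the backend described by amplify_outputs.json.
Amplify.configure(outputs);

// The auth adapter scopes listings, uploads, and downloads to the
// signed-in user's permissions, so no custom S3 code is needed.
export const { StorageBrowser } = createStorageBrowser({
  config: createAmplifyAuthAdapter(),
});

// Render it anywhere in the app:
// <StorageBrowser />
```

The point of the sketch is that the entire file-management UI, including permissions, is one configured component rather than custom code.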
But we wouldn't be AWS if security were not our prime focus, and that is also what scares most people off when it comes to generative AI. Am I really sure that all the code being generated is secure? I have a hard time reviewing all of it. So you can see there is a drumbeat of security-related features that we built into Amplify. There was Web Application Firewall integration. There is IAM support, which helps you log in as a customer. And one thing that is probably a very tedious task is refresh token rotation. If any one of you ever had a certificate that expired and broke things in a moment, you know it is a lot of work to get that back up, and I have never seen a perfect way to deal with it. It is something that requires a lot of automation and awareness.
We also released email MFA. If you are like me, you don't like to give your phone number to any website just for logins; email MFA might be a way to deal with that. The last thing I would like to highlight is Svelte support. Svelte is not just yet another JavaScript framework that we integrated into Amplify: it was done by the community. Amplify is an open-source community project, and we are super happy with our community contributing to it. When Svelte became popular, there were people saying that they would really like to use Svelte with Amplify, because they already had mobile applications with Amplify and, for whatever reason, wanted to use Svelte on the website because they liked it. So we said, okay, let's help get that out there, and now Svelte is fully supported in Amplify. I think it is something we should continue doing.
I know there's a lot going on on that slide, but those are not random features. Our mental model about Amplify is that we want to build you a framework that is a rock solid foundation for production-grade apps. All of those things that we have built in, we also built with the mindset of, okay, so if you want to be part of this new Gen AI wave where people are using more and more things like Cursor, Kiro, or any AI coding platforms, we need to make sure that the things that are really hard to get right are built into the framework so that Gen AI doesn't have to guess it. All of what we have built here actually brings us closer to a framework that can be more easily accessible by AI because it does not have to guess how certain things have to be done. Token rotation is something I don't want to have custom code for. I would like to know that this was tested all the way through in production and I can trust it.
The Context Problem: Why Generative AI Struggles with Intent
So we have done some strategic investments into what Gen AI would need from a framework to actually make things better for customers. So how is Amplify with Gen AI? How many of you are using Cursor, Kiro, or any of those tools? That's a few. You should try it. It's actually pretty amazing. Even I as a manager now can code again, which is pretty cool to see. So many times when I was creating software with one of those Gen AI tools, the suggestions were going in the right direction, but they were not exactly matching my intent. There's a lot of context in my mind of what I want to build, and the only way that I can transfer it into a generic Gen AI tool is through prompting.
I don't know if you have tried, but I ran into doom prompting over and over again because I just didn't get it right in the first place, and then I iterated over it and at some point I just gave up and said, okay, let me do it myself. If you ever experience that, this is not great. You feel at some point like, okay, is this Gen AI thing really real? Does it really help me? And it feels so tedious because why do I need to explain all of those things? It should be more obvious what has to happen, but Gen AI does not have the context of your last two years of development.
It does not know what Peter did last week, and it does not know what was discussed in your last sprint planning. You can have something listening to all of your conversations and put it all in, but then the context is too big and does not work either.
Let's go back to first principles. How do we get to a place where generative AI knows what it needs to know to make proper choices in your software development life cycle? I hope no one of you ever had to build your own JSON parser because there are libraries out there. Generative AI should not try to reinvent the wheel and build everything from scratch, but rather use things that are already ready made.
Last year, the Stack Overflow survey said that 84% of developers were using generative AI on a daily basis or intended to do so. I guess this year it is even higher, and I did not check. The benefits are real. You can get a lot more things done much faster. But it also sometimes is tedious. Now what happened this year is that we tried to close the gap on the context that generative AI needs to have to actually make proper decisions on what needs to happen.
Model Context Protocol: Bridging the Knowledge Gap
One emerging technology this year was the Model Context Protocol. The Model Context Protocol is a way to define a server that generative AI can actually query: I have the following task; are there any best practices or documentation that I might need to pull into my context window? We have released a Model Context Protocol server called AWSBase MCP, and it embodies the documentation of AWS: not only the main documentation but a lot of other outlets, blog posts, and everything about best practices.
The cool thing about MCP is that it is not only documentation; it also gives you the ability to define workflows in a specific, predefined manner. These are called SOPs, standard operating procedures. So if you want to create a database, it does it in a certain way that follows best practices: it uses CDK under the hood, applies a certain configuration, makes sure it is secure, and ensures that your S3 buckets are not publicly available. All of those things that you might otherwise need tedious prompt cycles for are already there, and the model can make the right choices from the get-go.
The problem we have seen is that if you take a generic generative AI tool, only in 10% of cases was it creating production-ready software. Most of the time it compiled and actually worked. But all the nitty-gritty details that you want in this code, and that you would choose deliberately, were not there. Things were not configured correctly. It might miss the index for your DynamoDB table, or choose Aurora over DynamoDB or vice versa. Whatever it is, that is why you had to iterate over it.
With MCP, we can make sure that the choices are more valid, and we have added a lot more knowledge, especially for Amplify as a framework, so that it picks up things that are already made, and the same for CDK. So CDK, CloudFormation, and Amplify are no strangers to your tool if you use that MCP. With that, I would like to hand over to Salih, who will go a bit deeper into that and also make it more real with a proper live demo.
Introducing AWS MCP: Unified Tools for Agentic Development
Thank you very much, Mark. All right, so today we will talk about our journey from vibes to something that is real. From the hands that I see, generative AI is still a bit of a thing that we are afraid of, but we have to understand that it is just a tool that makes our lives easier. To use this tool, AWS has actually offered you a lot of different options, like Mark said earlier. For example, we had the Knowledge MCP that allowed you to get best practices, example code, and so on. We had the API MCP that allowed you to use the AWS CLI to get any information from your account; it made the necessary calls for your command.
We also had the frontend MCP that allowed you to build a front-end application with React and CDK in the most effective way possible. And we had the IaC MCP and several others as well, allowing you to use agentic IDEs or coding tools to move forward.
And maybe you have heard of it, maybe not: in the middle of July we released Kiro. Kiro is the AI IDE that allows you to not only write code but also do spec-driven development. You tell it what you want, and it turns that into a more structured plan. Today, I'm happy to announce AWS MCP. AWS MCP brings together the Knowledge MCP that we have, the API MCP, and many others. These MCPs give you a chance to create your full-stack applications through procedures that we have already created for you. We have created more than 30 different procedures to give your agents the best experience for building an application.
For example, if you want to search the documentation, you don't need to add the documentation MCP anymore. You still can, but you don't have to. You can now use AWS MCP to call that. If you want information about deploying an app to Amplify, you can get it directly through the SOPs and procedures that we already have. With AWS MCP in place, we want to go from 10% to 90%, so you can have deployment-ready applications through agentic IDEs.
If you walk around, you will see the Kiro sign all around. With that, we are committing to building applications through agentic workflows. We are giving you a success rate of more than 90%, and it brings best practices automatically for you. One important point is that with large language models, there's always a cutoff date, and one of the solutions to that was the MCP server. Thanks to AWS MCP, you will know that you get the latest version and all the security-related information as well.
I had been building front-end applications for 10 years before I joined AWS, and for me, AWS is still a new world that I learn about every day. If you are coming from a world like mine, it allows you to move faster and learn better. Of course, as always: do not push AI-generated code straight into production.
From the SOP perspective for Amplify, we have three important SOPs that you can use. The first is the front-end one, which gives you a chance to use Amplify's front-end libraries to connect with the backend structure that you build. The second is the backend one. Building a backend with Amplify can be easy but also complicated at the same time; people ask a lot of questions about how to do complicated data operations. Now, thanks to the MCP, you can actually build these operations by telling the AI what to do, or even ask it to fetch the documentation so you can learn. You don't have to let the agent do its tricks: you can just ask the agent to bring you the information, and you will still be the person to act on it. These are your tools, and it is up to you how you want to use them.
We also have the deployment pipeline. For Amplify, we have something called Amplify Hosting, which allows you to deploy your front-end application easily without touching any additional AWS services. We all know CloudFront and S3 work really well, but if you come from the front-end world, when you jump to the AWS console, it's a scary time. There are hundreds of things written there, and you don't know which one to click. We're all afraid of paying thousands of dollars by making a simple click mistake. These tools are allowing you to bridge that knowledge gap and fear.
Live Demo Setup: Building a Real-Time Location Sharing App
Today, we're going to build a demo. I'm going to show you two apps that I built. One is an app showcasing what we have built in 2025 with Amplify. For the second, I was talking to Mark about how to communicate with people during re:Invent, and I thought of bringing in the maps of re:Invent and giving people a chance to share their location for a certain time period, so others can find them and ask them questions. It's not like it doxes your location: you actively share where you are so people can find you and ask you questions.
So with that, I will go back to the demo. I'm really curious how this is going to work with the coding. First question: can everyone read this? Everything good in the back? Perfect.
So this is the first application that I built. This application is the showcase app, but the apple of my eye is the meetup map app, as I call it. So let's run it. This is a simple React application, so what I'm going to do is just npm run dev. This will run my application locally, so we will see how it looks with mock data. So I will go back and open up my window. Thank you very much, and it's on port 8081.
So this is a very basic application that I built. It has the sign-in and sign-up flows, and I can just add my information. This is totally random stuff, and it shouldn't connect to the backend yet. Let me check that really quickly. The goal of this application is to showcase, in real time, the file upload mechanism and everything you would do with authentication and so on, including the pipelines you would normally build with functions and everything else as well. So I will do one more npm run dev, and I hope the demo gods will allow me to show you this thing. I have a working version of it, but I really want to build this with you.
So let's say that this is my name. This is how it looks. This is the map that you have probably seen around, and this is the expo hall map as well, which I found online; I hope it's correct. What you can do is click here and say that you will be at the venue. You can select the services that you know well, like CloudFront, for example, and even add a picture, and it will show a marker that other people will automatically see as well.
So to turn this mock data into real data, I prepared some prompts, and I want to take you through them and share a few tricks for building your applications through prompts as well. The first thing I want to tell you is about creating a prompt. We will use one single prompt here, just for the sake of running everything at once, but the first rule: never have one full prompt like this. Split it up into multiple ones. Agents are like your kids, you know; you have to tell them exactly what they need to do. They will push back in the beginning and ask more questions, but eventually it will get to the place you want it to be.
Here I said: use Amplify Gen 2, and use the AWS MCP server that I mentioned today, which we have released, to build this application. I added an application overview and some backend requirements, like what to do for URLs, the marker, basically everything. We have Lambda triggers, so I don't need to do the email confirmation for every user that I create. Every single detail is in here so the agent knows it needs to do a proper implementation. With everything in place, I will even add one extra point and say: make sure you remove the mock data and actually use the real data at the end. I don't want to keep you on hold, which is why I also want to keep talking at the same time. You don't have to make things complicated: simply state what you need, numbered, so it knows what is most important and how it is structured.
I'll just copy and paste this. The microphone is touching my lips now, which feels uncomfortable. Maybe this is better. You can hear me well, right? Yes, this is how I drink my tea. So I started the session. It takes everything and first runs it through the application; it tries to understand what it needs to do. For this, we have something in Kiro called steering files. Steering files give the agents information about your application, so they don't need to read every single file but still have an understanding of what they need to do. Here it reads everything and checks what clients exist, so it knows what it needs to change and what it needs to remove later on as well.
The next thing I want to show is that it calls something called an SOP. An SOP is one of these predefined procedures that allow you to build these applications. Here we have the Amplify backend implementation SOP, which I mentioned earlier, and it is calling it directly: it fetches that information and acts on it. Like I said, we also have an SOP for the front-end libraries, and one for the deployment part of things as well. It might need to call the AWS CLI too; then it will call the AWS tool inside it as well.
How many of you have not heard of MCP servers before? For the few folks, I want to give the very basic explanation that everyone asks me about. MCP servers are basically universal buses for your agents to get information in the most efficient way possible. They let you get the latest information from a third party or the official party immediately. Earlier, we needed to build things like a vector database to bring in this information, but MCP changes the way we build. However, it is not a lifesaver for everything either: you shouldn't put your very sensitive data into an MCP server; it is not secure. It basically lets you expand the model's knowledge so that the large language model's cutoff date matters less. Anyway, while this is being generated, I also want to show the other application and what we have built in Amplify in 2025. How many of you have built an application with Amplify Gen 2 before?
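For readers new to MCP, it may help to see what a tool call looks like on the wire. MCP uses JSON-RPC 2.0 underneath: the agent sends a `tools/call` request naming a tool the server advertised. The sketch below only illustrates the message shape; the tool name `search_documentation` and its arguments are illustrative, not a real AWS MCP tool.

```typescript
// A minimal sketch of an MCP tool-call request (JSON-RPC 2.0).
type McpToolCall = {
  jsonrpc: '2.0';
  id: number;
  method: 'tools/call';
  params: { name: string; arguments: Record<string, unknown> };
};

// Build a tools/call request the agent could send to an MCP server.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): McpToolCall {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: { name, arguments: args },
  };
}

// Example: ask a documentation tool about a framework feature.
const request = buildToolCall(1, 'search_documentation', {
  query: 'Amplify Gen 2 email MFA',
});
console.log(JSON.stringify(request));
```

The server answers with a matching JSON-RPC response whose result the agent pulls into its context window, which is how the knowledge gap gets bridged without retraining the model.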
Demonstrating Amplify Features: Authentication, Storage Browser, and WAF
All right, so let's continue from there. I will say: I trust you, you can do this thing. And I will go back to building an Amplify application. Let me make this bigger so it's more visible. To build an Amplify application, you initialize the app with npm create amplify, and this creates the amplify folder that you see here.
From the Amplify perspective, the most important one is the backend.ts file that we have here. The backend.ts file keeps the infrastructure definition that you need for Amplify, or AWS overall. You can add any feature or service that you want in a human-readable way, like auth, storage, and so on. To build them, you create an auth folder, a storage folder, or a folder for whichever service you want to build. These are then pulled together into a composition of your infrastructure. For example, for authentication, we say: I have email authentication, and I want multi-factor authentication, because we released email MFA this year as well. To use multi-factor authentication with email, Cognito asks you to have a phone number configured alongside it, so you have to have that too. The sender of the confirmation email is the email address that I have configured.
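The auth setup just described might look roughly like this in code. This is a sketch based on the Amplify Gen 2 `defineAuth` and `defineBackend` APIs; the exact option names for email MFA should be verified against the current Amplify documentation.

```typescript
// amplify/auth/resource.ts — sketch of email sign-in plus email MFA.
import { defineAuth } from '@aws-amplify/backend';

export const auth = defineAuth({
  loginWith: { email: true },
  multifactor: {
    mode: 'OPTIONAL',
    email: true, // the email MFA support released this year
  },
});

// amplify/backend.ts — compose the resource files into one backend:
//
//   import { defineBackend } from '@aws-amplify/backend';
//   import { auth } from './auth/resource';
//   import { storage } from './storage/resource';
//
//   defineBackend({ auth, storage });
```

Each folder contributes one resource file, and backend.ts is the single place where they are composed into the deployed infrastructure.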
From the storage perspective, it's just images, and we will showcase how the Storage Browser works as well. Here we define the storage with a name. The name defines the beginning part of your bucket, because S3, as you know, has globally unique naming, so it needs to be unique. We have access rules for public, protected, and private: private is just for you, protected is for everyone authenticated, and public is for everyone.
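Here is a sketch of what that storage definition could look like with the Amplify Gen 2 `defineStorage` API. The bucket name `meetupImages` is illustrative (Amplify appends a unique suffix because S3 names are global), and the exact path patterns should be checked against the current docs.

```typescript
// amplify/storage/resource.ts — sketch of the three access levels
// described above: public, protected, and private.
import { defineStorage } from '@aws-amplify/backend';

export const storage = defineStorage({
  name: 'meetupImages', // illustrative name; a unique suffix is appended
  access: (allow) => ({
    // public: readable by everyone, writable by signed-in users
    'public/*': [
      allow.guest.to(['read']),
      allow.authenticated.to(['read', 'write']),
    ],
    // protected: readable by any signed-in user, writable by the owner
    'protected/{entity_id}/*': [
      allow.authenticated.to(['read']),
      allow.entity('identity').to(['read', 'write', 'delete']),
    ],
    // private: only the owning identity can touch it
    'private/{entity_id}/*': [
      allow.entity('identity').to(['read', 'write', 'delete']),
    ],
  }),
});
```

The `{entity_id}` token is substituted per user, which is how the "private is for you" rule is enforced without any custom IAM policy writing.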
With these in place, let me showcase how they look.
So we have this application with the Storage Browser, email MFA, and web authentication to showcase everything to you. First of all, let me show the authentication workflow. With the authentication workflow in place, I will showcase the passkey.
Passkeys are the new way of authenticating through your biometrics. If your platform supports it, you can do facial recognition through the camera, fingerprint recognition, or whatever you find useful. We support it in two ways: you can have a custom implementation, or you can use our ready-to-use UI component called the Authenticator.
The Authenticator basically looks like this. We have sign in and create account, so you can just create an account. I will use my burner account because my Amazon email didn't work, so I will use the other one. It will automatically send a confirmation email.
Now I will read the confirmation email quickly, and with the confirmation email, I will be able to log in and do everything that I need. So let me use that one very quickly. I know I can trust you, but I still want to make sure that I have the right information. All right, so let's go back to the demo. So I have my code in place, right? And I'm confirming that.
Once I confirm that, I'm inside, and I have my email and so on to register a new passkey. We have APIs and functions that you can call directly from our libraries. So I click here, it automatically shows the email address, and once I select the profile, it adds the passkey directly in Chrome. So if I sign out now, go back to sign in with passkey, and click on this button, it will prompt me, and when I use my fingerprint, it will let me log in.
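The two steps just shown, registering a passkey while signed in and then signing in with it, can be sketched with the aws-amplify v6 auth APIs. This is a sketch only; the flow runs in a browser with a platform authenticator, and the option values should be checked against the current Amplify passwordless docs.

```typescript
// Sketch of the passkey (WebAuthn) flow demonstrated above,
// using the aws-amplify v6 auth APIs.
import {
  associateWebAuthnCredential,
  signIn,
  signOut,
} from 'aws-amplify/auth';

// 1. While signed in, register a passkey; the browser prompts for
//    the platform authenticator (Touch ID, Windows Hello, ...).
async function registerPasskey(): Promise<void> {
  await associateWebAuthnCredential();
}

// 2. Later, sign in with the passkey instead of a password.
async function signInWithPasskey(username: string): Promise<void> {
  await signOut();
  await signIn({
    username,
    options: {
      authFlowType: 'USER_AUTH', // choice-based auth flow
      preferredChallenge: 'WEB_AUTHN', // prefer the passkey challenge
    },
  });
}
```

Everything cryptographic (challenge, attestation, credential storage) is handled by Cognito and the browser; the application code stays at this level.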
So this was more or less the passkey approach with biometrics. In the meantime, let me check on the agent: it is reading through all the information. The next thing I want to showcase is the Storage Browser.
Storage Browser is a plug-and-play kind of component that we offer as well. It allows you to use S3 without extra hassle. We have UI components that you can use directly with the structure that you have built. For example, the protected area has nothing in it yet, so I can upload an image by dropping something in. I will just drop one very quickly. Yeah, okay.
All right, let's go back to the demo. You can see that I put an image here, and I can just click on the upload button. Once I do, I can go back to the protected area and, for example, display the image that has been created. I can click on download to download the image directly, like here. I can also do simple operations like deleting and so on as well.
With all of this, imagine you can do it with one line of code. You don't need to think about how to do it or what to do. Everything happens through Amplify's way of building apps, which is, let me go back, through the amplify_outputs.json file here. It keeps all the information that we need and allows us to share it with the components as well.
But that's not all; we have more to showcase while the other work runs in the background. I also want to showcase multi-factor authentication. Last year we announced OTP support, so you can use authenticator apps like Google Authenticator and others, and you can enable it with one click or one call to the APIs. Now let me sign out and sign back in.
Now it asks me for a confirmation code, and the code should be arriving here. Yes, the confirmation code is here.
So I can click, close it, go back, and paste the code here. Once I confirm, multi-factor authentication is set up. For all of this, we did two things: first, calling the library from the front end, and second, as you saw in the authentication workflow, enabling the feature in the auth resource; adding a single flag there was all it took.
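As a sketch of what that backend flag looks like, here is an illustrative Amplify Gen 2 auth resource with optional TOTP-based MFA enabled (the exact options used in the demo may differ):

```typescript
// amplify/auth/resource.ts — illustrative auth definition with optional MFA.
import { defineAuth } from '@aws-amplify/backend';

export const auth = defineAuth({
  loginWith: {
    email: true, // sign in with email + password
  },
  multifactor: {
    mode: 'OPTIONAL', // users can opt in to MFA
    totp: true,       // authenticator apps (Google Authenticator, etc.)
  },
});
```

On the next deployment, Amplify translates this into the corresponding Cognito user pool MFA settings; the front end only needs to handle the extra confirm-sign-in step.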
Let's do a quick time check. All right, we have showcased multi-factor authentication, the storage features, and so on. I will take you through the code, but before I do, let me check on what the agent is doing. It is making some changes, which is good. Meanwhile, let's go back and look at the code. For the storage UI component, what matters is how we wire it up, so let me show the storage page.
On the storage page we use several components, such as the description and the Storage Browser itself, which brings the browser directly into the page. Let me find the code; it's a little hard on this screen. Everything starts with configuring Amplify: before we use the Amplify libraries, we have to pass in the configuration. The Storage Browser then gets an adapter that handles not only authentication but authorization as well. It works directly with the storage structure you have created, and you can embed it into anything, including card components. You can also see that we wrap it in the Authenticator: if the user is not authenticated, we don't want to show them the Storage Browser, and all of that comes from one line of code in the Amplify UI libraries themselves. All right, let me go back.
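The wiring described above can be sketched as follows, using the documented Amplify UI pattern of creating the Storage Browser from an Amplify auth adapter (component names and layout are illustrative):

```typescript
// StoragePage.tsx — illustrative setup of the Amplify Storage Browser
// behind an Authenticator, following the documented Amplify UI pattern.
import { Authenticator } from '@aws-amplify/ui-react';
import {
  createAmplifyAuthAdapter,
  createStorageBrowser,
} from '@aws-amplify/ui-react-storage/browser';
import '@aws-amplify/ui-react-storage/styles.css';

// The adapter reads the Amplify configuration and handles both
// authentication and per-path authorization for the browser.
export const { StorageBrowser } = createStorageBrowser({
  config: createAmplifyAuthAdapter(),
});

// Only render the browser for signed-in users.
export function StoragePage() {
  return (
    <Authenticator>
      <StorageBrowser />
    </Authenticator>
  );
}
```

The `<Authenticator>` wrapper is the "one line of code" mentioned above: it renders the full sign-in flow for unauthenticated users and the children for authenticated ones.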
In the meantime, I want to show you one last thing: how we deployed the app and how AWS Amplify Hosting works. This year we also announced WAF support, which makes it easier to put a firewall in front of your app. Let me log in really quickly and go back to the demo. In the AWS Amplify console, you will see either an empty page, if you haven't deployed anything, or every application you have created with Amplify. We have this application in place, and everything I've shown you was served from the deployed version, not a local one, so this is the real thing. You can see the previous deployments, their statuses, and any problems they had. For hosting, there are several options, and one of them is the firewall. With one click here, you get WAF support directly, so you can protect your Amplify front-end applications from common attacks with a single click. It protects you from common vulnerabilities automatically; for example, if a malicious actor sends requests with a particular user-agent string, a matching rule can block them automatically, and of course you can block IP addresses as well.
Looking Ahead: The Future of Amplify and Final Demo Results
With that, let's see how the build is doing. I'm very happy that Salih could actually do a live demo one-handed while holding a mic in the other hand; I don't know if I would have dared to try. What you could see is that there were a lot of complex features: passkeys, Storage Browser, web application firewall, all kinds of things. But the amount of code necessary was in the low tens of lines instead of the hundreds you would need if you built the same thing directly against the SDK with all the security code.
Because so little code is necessary when you use ready-made components, we can guide the AI to use those few pieces, and the result is much easier to review. I'm sure you have seen a 600-line pull request on GitHub with just a "ship it" comment underneath, while a 10-line request got ten comments. That's where we are: you see hundreds of lines of code and at some point you just say, okay, go ahead.
For 2026, here is what we plan for Amplify. First and foremost, we want deeper integration with CDK; CDK and Amplify are in one team under me. We want you to have more escape hatches, so you can use more constructs and more AWS components without custom work. Frameworks usually have a certain limit, and beyond it you're stuck: if you want something custom, you have to jump through a lot of hoops. In our case, we want to give you the full power of AWS, with sensible defaults for what most of you need. We can cater to the 95 percent, and the other 5 percent should still have a good path to escape where they need to go.
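Escape hatches of this kind already exist in Amplify Gen 2 today. As an illustrative sketch, a backend definition can drop down to the underlying CDK/CloudFormation resources; the specific property tweaked here is just an example, not something from the talk:

```typescript
// amplify/backend.ts — illustrative CDK escape hatch in Amplify Gen 2.
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';

const backend = defineBackend({ auth });

// Reach through to the generated Cognito user pool's L1 CloudFormation
// resource and customize a property Amplify does not expose directly.
const { cfnUserPool } = backend.auth.resources.cfnResources;
cfnUserPool.deletionProtection = 'ACTIVE';
```

The plan described above would extend this pattern, so more AWS constructs are reachable without leaving the Amplify project.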
We also have in mind making the Amplify libraries work with any other infrastructure-as-code platform. If you have created your infrastructure with Terraform beforehand, that's fine; Amplify should be able to recognize what infrastructure exists and work with it. We would also like to go even deeper on integration with other AI tools. Because we are an open-source framework, please reach out over GitHub or Discord to tell us what you need for your application development. Last but not least, the AWS MCP is part of my team's responsibilities, and we will add more and deeper SOPs there to help you get things live as quickly as possible.
Please reach out on the Amplify GitHub if you want to give us feedback on the documentation. If you have feature requests, please bring them to the Amplify Discord to have a conversation with my team directly. At the end of this talk, I would also ask you to fill out the survey. We would like to understand better how you use infrastructure as code, what you need from it, what your experience has been, and what we can do to improve it further. Across CDK and CloudFormation, we are all trying to move things forward so that you can have a good time.
Let's see if the application is deployed now. Almost there. I know there were a lot of setbacks, but it was a very interesting session; thanks for sticking around and being here with us. It's really important that we show you everything we have done, not only for re:Invent but for Amplify as well. Amplify is one of those services that is very close to my heart, and I'm really happy that we can showcase what we have built around it.
Right now it says everything is built; it actually deleted the mock files and so on. Let's see if everything works. We will start, I hope, with a proper sign-in flow; then we will add the marker, and if it works, that would be great. If it doesn't, I will share the code with you later on; I hope to open source it so you can try it yourself as well. All right, everything is in place. Let's go. Let's create a new user.
So the first thing we should see is that the account is created. Normally it would then send a confirmation email, but I didn't want that; I wanted it to skip that step with a pre-sign-up flow, and it should have. Let's see what it did with Amplify. Yes, for the pre-sign-up we have everything in place, so maybe it was just a simple error. Let's see how we can log in.
There's already an existing user, so let me log in. All right, let's do one more round and we will be good; let's see if we can find the problem and fix it immediately. Right now it confirms the user and everything, but it is not letting me move forward. The issue might be in the old flow or the old component. It is checking the component directly and the state: the useAuth hook controls the authentication state for the app. It uploads everything and runs the build.
You can see that for every change it makes, it commits; that's what I told it to do, so it can track what I have changed. Once everything is in place, we will continue. This mic is also not working; it keeps dropping out. Sorry everybody, this might be the only session to go through five or six different microphones.
All right, we are back where we started. Let me open up the dev connect app here, refresh, and log in. Come on. I will try one more time; if it doesn't work, I will cheat and show the version I already have deployed. Yes, we are logged in, and the next thing to do is add a location. I'll say the Expo at the Venetian and note, for example, that I'm an expert in API Gateway. It adds the user directly, and you can see we have a marker here.
You can also upload images. I'll add another entry for the Aria, since I'm staying at the Aria, tag it ECS, select the image it shows me, and upload it. Once the image is uploaded, you can see that it brought everything together: both of the markers changed because I edited my image, and when I create a new user, it will show the others as well. So the demo worked, the storage is in place, and thanks everyone for sticking with it.
; This article is entirely auto-generated using Amazon Bedrock.