🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Modernize managed file transfer workflows with event-driven SFTP (STG339)
In this video, AWS Transfer Family product managers Smitha Sriram and Suh Yoon, along with principal solutions architect Prabir Sekhri, demonstrate how to modernize managed file transfer (MFT) and EDI workflows using AWS services. They cover Transfer Family's support for SFTP, AS2, FTP/FTPS protocols, authentication methods including custom identity providers, and storage integration with S3, EFS, and FSxN through S3 access points. Key features highlighted include VPC endpoints with IPv6 support, SFTP connectors with PrivateLink, Transfer Family web apps with IAM Identity Center integration, and AWS B2B Data Interchange for EDI processing. The session includes a live demo building a cloud-native insurance claims processing system using Terraform, incorporating GuardDuty malware scanning, Amazon Bedrock agents for intelligent document processing, and event-driven architecture with EventBridge. Customer success stories from FICO, BMW Group, and City of Los Angeles demonstrate cost savings and operational improvements. The presentation emphasizes the shift from legacy infrastructure to fully managed, scalable, event-driven MFT solutions with built-in compliance for HIPAA, PCI, and FedRAMP.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction to Managed File Transfer and EDI: Setting the Stage for Modernization
Hello everybody, can you hear me with your headphones on? All right, good. Let's just get a quick show of hands. I always like to do this. How many of you use SFTP? How many of you use an MFT, a managed file transfer, or are responsible for delivering an MFT? Great. How about AS2, the protocol, EDI? Anyone here? I see some. And how about partners here who are helping our customers with their journey? Nice. I recognize a few here. Awesome. You are in the right session here. STG 339, helping customers modernize their managed file transfer with event-driven SFTP. I'm Smitha Sriram. I lead the product management for AWS Transfer Family. Joining me are Suh Yoon, who is also a product manager for Transfer Family, and Prabir Sekhri, a principal solutions architect for Transfer.
Before the afternoon slump kicks in, let's get started. I'm going to talk a little bit about managed file transfer and EDI, just level setting on what that means to you and our customers. Then I'll talk a bit about trends that I've been hearing and how those trends are informing our roadmap and our approach to this space. Then I'll hand it over to Suh, who will do a deep dive into the features that underpin our MFT and EDI offering. Prabir has a demo showing how you can build a cloud native MFT. There's some agentic AI in there, so I won't steal his thunder by talking about it now. We'll wrap it up. We won't have time for Q&A here, but my colleagues and I can meet you outside if you want to talk more.
As your businesses grow, so do your data and data sources. In order to get value from that data, you want data to be mobile. For best-in-class data mobility, you need an MFT strategy that can help you move data securely, effortlessly, and efficiently within your organization and across it. This value means different things to different customers. For example, if you're in payments or reconciliation, or you're a healthcare company needing to process claims and claim payments, you perhaps need to interact with a clearing house for settlements. Or you're a producer of value-added data sets that you want to sell, and you want a robust MFT that helps you grow your subscriber base.
Many of our customers use MFT to automate their internal processes from HR to finance to payroll. Customers in healthcare, pharmaceutical, or financial services have to regularly submit filings to regulatory bodies like the FDA or SEC, and that's where an MFT has helped them streamline those submissions. A big industry or group of industries is supply chain and logistics, where MFT is used for exchanging transactional data like purchase orders and shipping notifications. For all of these customers, building their data pipeline using their MFT data has become super important.
Four Pillars of MFT and the Launch of AWS Transfer Family and B2B Data Interchange
As such, there are four pillars that we see that underpin an MFT. One is support for industry standard protocols. You have a business application that resides in your environment and your trading partner's environment, and the common denominator for them to communicate are SFTP, AS2, EDI X12. These data formats make it seamless for applications and users across environments to communicate. Second is authentication and access control, so you give access to data that your users should only be able to see and use. Third is processing and automation so that you can build a data pipeline from this data end to end. Fourth and most important is robust auditing to meet your data security and regulatory needs.
Let me turn to some of the trends I'm hearing. Starting with security, which has always been a priority and always will be: customers are looking for secure and reliable SFTP and MFT in the cloud. Second is tools that give them proper governance, so that they're able to track and audit what's happening, who's accessing the data, and who has access to what kind of data. Third is simplicity, so that they manage as little infrastructure as possible and can build their MFT easily.
Lately, three more trends have emerged. One is automation, where trading partner relationships serve a dual purpose as a data pipeline. Another is real-time insight: as customers transact, they can also run analytics on that data. Third, many of these customers are looking to become AI ready and innovate in their own business areas. That's exactly why we launched AWS Transfer Family at re:Invent 2018. Our team has seven years of experience helping customers like you modernize your MFT. The service is fully managed, highly available, and scales in real time to meet your business needs.
We've developed many features over the years that help you migrate from existing MFT systems without having to change your business partner integrations, making it easy to accelerate your migration. AWS Transfer Family is built on event-driven paradigms so you can run the whole process automatically. Gone are the days of manual polling and writing scripts. Now you can plug into Amazon EventBridge and automate your processes end to end. Most importantly, the service comes built in with industry standard compliance such as HIPAA eligibility and PCI to help you meet your security requirements.
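As a rough illustration of that event-driven pattern (not shown in the session), here is a minimal boto3 sketch that creates an EventBridge rule matching S3 Object Created events on the bucket behind an SFTP server and routes them to a processing Lambda. The bucket name, rule name, and function ARN are hypothetical placeholders, and the bucket must have EventBridge notifications enabled.

```python
import json
import boto3

events = boto3.client("events")

# Hypothetical names for illustration only.
UPLOAD_BUCKET = "my-transfer-uploads"
TARGET_LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:process-upload"

# Match S3 "Object Created" events for the bucket that backs the SFTP server.
events.put_rule(
    Name="sftp-upload-completed",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": [UPLOAD_BUCKET]}},
    }),
    State="ENABLED",
)

# Route matching events to the processing Lambda (which also needs a
# resource-based permission allowing EventBridge to invoke it).
events.put_targets(
    Rule="sftp-upload-completed",
    Targets=[{"Id": "process-upload", "Arn": TARGET_LAMBDA_ARN}],
)
```

From there, the target could just as well be Step Functions, SQS, or any other service EventBridge supports, which is what makes the end-to-end automation possible.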
More recently, we launched AWS B2B Data Interchange, which is a managed EDI service. The service helps you automate the translation, validation, and generation of X12 documents. Again, the service is fully managed just like AWS Transfer Family and operates on a pay-as-you-go model like any managed cloud service. A favorite feature of mine is the use of generative AI to generate mapping code between X12 and your custom JSON or XML formats, and vice versa. This has really helped our customers accelerate their EDI migration to this service.
Customer Success Stories: FICO, BMW Group, and Partner-Driven Transformations
Regarding the resources we provide, we started by launching the service with SFTP servers, so SFTP and FTP clients anywhere in the world can connect and the files land in your S3 bucket or EFS file system. Then we added a feature called SFTP connectors so you can talk to external SFTP servers, whether they are located on a public IP or accessible over a private network. More recently, we added Transfer Family web apps, which is a managed web UI for non-technical users in your organization to access files stored in S3. We'll talk in more detail later about Transfer Family web apps and how you can use them to empower those business users.
With the files in your Amazon S3 bucket, as many of you know, the possibilities are endless for what you can do with that data. One example is EDI processing. I want to take a moment to thank the tens of thousands of customers around the world who are using AWS Transfer Family today. One specific customer I'm excited to talk about is FICO. FICO helps customers in around 80 countries globally with anything from protecting credit cards from fraud to increasing financial inclusion to improving supply chain resiliency. As a global leader in credit scoring and analytics, FICO processes massive amounts of sensitive data using managed file transfer, which means secure and efficient file transfer is crucial for their business operations.
They were using a legacy MFT, and the problem was they had to manage a lot of infrastructure even when they weren't using it, which resulted in overhead and costs. They embarked on a journey to modernize their MFT using AWS Transfer Family, and now they've eliminated a lot of infrastructure, reduced their deployment time because they can use infrastructure as code, and lowered their total cost of ownership. They can also track their costs more granularly and accurately as they roll out Transfer Family across FICO's multiple business units.
There's a QR code there that goes into detail on FICO's MFT transformation journey. Another customer that doesn't need an introduction is BMW Group, a global leader in manufacturing vehicles and motorcycles. To keep up with their reputation for delivering high-quality vehicles, they developed a vision, sound, and analytics service on AWS that uses AI and analytics to analyze image and audio assets from their production line. In order to do that, they needed their camera data to go into AWS. AWS Transfer Family bridged this camera data into Amazon S3 over SFTP, transferring the data directly into their S3 buckets with low latency.
Now with all of this data in S3, they are able to process and deliver very compelling analytics with high quality. They store about 1.3 million files worth 1.3 petabytes in Amazon S3. Let me also take a moment to talk about our partners. We have a service delivery program where we validate partners who follow best practices to deploy AWS Transfer Family for customers' MFT implementation. For example, Scale Capacity recently helped the City of Los Angeles save substantial costs by migrating their MFT to AWS Transfer Family, and that QR code goes into detail about Scale Capacity and the City of LA, a public sector customer.
Another partner story that I want to share is BisCloud Experts. They are a leading IT consulting and DevOps company that helps businesses scale and innovate with confidence. A supply chain and logistics customer approached BisCloud Experts because they were struggling with their EDI infrastructure due to overhead and costs. This customer was managing over 1,000 trading partners and exchanging 250,000 messages per month. BisCloud Experts stepped in, developed expertise in AWS Transfer Family and B2B Data Interchange, and helped this customer migrate their EDI to these services in so short a time that they did not have to renew their legacy license.
As a result, the customer saved over a million dollars annually. Not only did they achieve cost savings, but they also reduced their trading partner onboarding time because now they have full control over the process. This is a great story, and again, there's another QR code so you can read about more customers who have benefited from our services. With that, I'll hand it off to Suh, who will talk about the features.
Transfer Family Servers: Networking Controls and Endpoint Options
Great, thanks Smitha, and it's great to be here with you all. Smitha gave you a great overview of the common use cases, trends in MFT and EDI, and an overview of our service. In this part, I want to talk a little bit more about some of the specific features that you can use to build end-to-end, complete MFT and EDI workflows. We're going to start with the launches for this year. The great news is everything I talk about today will be tied to a recent launch that you can see highlighted in pink. Unfortunately, I don't have time to go into every single launch, but after this, as Smitha mentioned, Prabir, Smitha, and I will be sticking around to make sure we answer any and all of your questions, and we're looking forward to the discussion.
We're going to return to the service overview that Smitha had gone over earlier, and we're going to use it to navigate through the various modular offerings of Transfer Family and B2BI. I wanted to start with servers. Transfer Family servers are fully managed endpoints that eliminate the need for you to manage or maintain your file transfer infrastructure. For servers today, we support the SFTP, FTP, FTPS, and AS2 protocols. Because servers are fully managed, secure, and scale to your demand, we're finding that a lot of customers like you are using servers as a foundation for your MFT, whether that's for use cases like B2B exchange, integrating your applications, or managing data distribution.
Now when we're talking about servers and especially if you're in a regulated industry like financial services or healthcare, your first priority is often networking controls, and in 2018 we launched the service with a public endpoint.
The public endpoint is really simple to set up in the console—you could set this up with a few clicks, and then your external partners can start connecting and transferring files over the public internet to a managed hostname, and it lands in your S3 bucket. We also use this managed hostname for high availability failover and load balancing as well. On the public endpoint, there's also built-in endpoint protection that benefits you.
As we were talking to customers, we started to learn that you wanted more control over who can access your endpoint, and you also sometimes didn't want public internet at all. That's why we launched the VPC endpoint. With the VPC endpoint, you can choose to make this internal, so anything that routes into your VPC. This is great for your internal pipelines—your users can connect over VPN or Direct Connect, and this is great because you can manage access through your security groups and other controls.
But what if you want the best of both worlds? You want to open up your VPC endpoint to the public internet while still leveraging the existing controls in your security groups. You can choose to make your VPC endpoint internet-facing as well. You can choose which Elastic IPs to bind to your internal IPs, and you can still leverage the controls in your VPC and in your security groups. You can also specify non-standard ports that Transfer Family supports today, like port 2222 or port 2223.
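As a sketch of what creating such a server looks like with boto3, here is a minimal example. All resource IDs are hypothetical placeholders, and this is illustrative rather than a complete production setup.

```python
import boto3

transfer = boto3.client("transfer")

# All resource IDs below are hypothetical placeholders.
response = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",                             # files land in S3
    IdentityProviderType="SERVICE_MANAGED",  # or AWS_LAMBDA / API_GATEWAY for custom IdPs
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        # Binding Elastic IPs makes the VPC endpoint internet-facing;
        # omit AddressAllocationIds for an internal-only endpoint.
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
print(response["ServerId"])
```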
I also wanted to mention that a few months ago we added support for IPv6 for both VPC endpoints and public endpoints. The benefit is that with a dual-stack endpoint, you can support both IPv6 and IPv4 clients without having to transition to IPv6 all at once. Now that we've talked about who can access your endpoints, let's talk about who can authenticate to them. Transfer Family today supports three authentication modes.
Authentication, Storage Integration, and the New FSxN Support
The first is service-managed, which provides simple key-based authentication. This is great for getting started quickly. You can create users right in Transfer Family and manage them in the console as well. We also support direct integration with AWS Directory Service. If you're using AWS Managed AD or AD Connector, you can simply select your domain in the console and use it with your existing users and groups. This is great if you want to maintain consistent access.
Our third and most flexible option is the custom identity provider, or custom IDP option. There are two deployment models you can use here. You can call Lambda directly, which is great for straightforward authentication flows and is what we generally recommend. You can also route through API Gateway, which gives you the ability to put a web application firewall in front of your authentication endpoint if you're looking for capabilities like rate limiting, geo-blocking, and so on.
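To make the Lambda option concrete, here is a minimal sketch of a custom IdP handler. The event and response field names follow the documented Lambda contract for Transfer Family custom identity providers; the credential store, role ARN, and directory mapping are hypothetical placeholders.

```python
# Minimal sketch of a Transfer Family custom IdP Lambda handler.
# Event/response field names follow the documented Lambda contract;
# the credential store, role ARN, and bucket path are placeholders.

ALLOWED_USERS = {"alice": "s3cret-demo-only"}  # stand-in for a real identity store

def lambda_handler(event, context):
    username = event.get("username", "")
    password = event.get("password", "")

    # Returning an empty object denies the authentication attempt.
    if ALLOWED_USERS.get(username) != password:
        return {}

    return {
        # IAM role the session assumes for S3 access (hypothetical ARN).
        "Role": "arn:aws:iam::123456789012:role/transfer-user-access",
        # A logical home directory chroots the user to their own prefix.
        "HomeDirectoryType": "LOGICAL",
        "HomeDirectoryDetails": (
            '[{"Entry": "/", "Target": "/my-transfer-uploads/' + username + '"}]'
        ),
    }
```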
A few years ago, we announced the option for multi-method authentication with custom IDP. Now you can specify a combination of methods that your users must provide. For example, you can specify key only, password only, key or password, or key and password. With key and password, your user provides the key in the initial authentication request, and the server then prompts for a password. The final thing I want to say here is that as of a couple of months ago, we made it even more flexible for your identity provider.
Now on a Transfer Family server, you can dynamically shift between whatever IDP option you're using. This is because we've heard from you about use cases like starting with a proof of concept where you want to get started quickly. You start with service-managed key-based authentication. Your proof of concept goes smoothly, and you're ready to move your workload to production, but you don't want to recreate your server. This capability allows you to do exactly that. You can simply shift between your IDP modes, and there's zero downtime to your users.
Double-clicking a bit into the custom identity provider option, the custom IDP option is our most flexible option. It's an adapter and it can pretty much talk to any IDP out there, which is great for lift and shift without having to worry about importing your users or passwords. But we also heard from you that writing your custom IDP or writing your custom adapter can feel daunting.
You want a blueprint for your specific IDP. Additionally, your identity provider may not store specific information that Transfer Family needs for your users. That is exactly why our AWS community, in partnership with our service team, launched the Transfer Family custom identity provider solution, which is an open source, standardized option that you can use. It is easy to deploy with CloudFormation templates and infrastructure as code, and it has several ready-to-use modules for popular identity providers you may already be using, like Cognito, Okta, and Entra. You can see the list on the screen. Finally, there are several built-in security best practices, like granular per-user controls, so you can specify things like IP allow listing per user or per IDP. With this option, you get the flexibility while maintaining security best practices.
A few months ago (and Prabir is going to talk a little more about this), we invested in Transfer Family-specific Terraform modules; we already supported Terraform for Transfer Family resources, and these modules let you automatically provision and deploy Transfer Family resources along with your existing infrastructure. As of a couple of weeks ago, we also announced Terraform support for the custom IDP solution. There is a QR code there, and I would love for you all to check it out and give us feedback.
We have talked about access and authentication. Now let us talk about storage. Transfer Family directly integrates with Amazon EFS and Amazon S3, so your users can transfer files to EFS file systems and S3 buckets. For more granular controls, we also support S3 access points, which provide unique access endpoints where you can specify permission policies and network controls. Earlier, you may have heard Matt Garman in his keynote sharing that S3 access points now support FSxN. I am super excited to share that Transfer Family supports this integration as well, through S3 access points backed by FSx for NetApp ONTAP (FSxN). I am really excited about some of the hybrid access patterns that this will open up.
For example, your external users can continue transferring files through SFTP while your internal users continue using familiar protocols like NFS and SMB. Getting started is simple. Just create an S3 access point for your FSx file system, and Transfer Family will talk to it just like any other access point today. With this launch, your users will be able to perform file operations like upload, download, delete, and copy. Some limitations to keep in mind are that rename and append operations are not supported, and there is also an upload file size limit of 5 GB. If you want to learn more about the full list of what is supported and what is not, there is a QR code where you can read the launch blog to learn more.
SFTP Connectors and PrivateLink: Connecting to Remote Servers Securely
We have talked about servers. Now let us talk about connectors. While servers are endpoints that your clients connect to, connectors work in reverse. Connectors are fully managed SFTP clients that connect to remote SFTP servers. Connectors establish a connection to your remote SFTP source and S3. If you wanted to initiate your connector, you would do that through CLI or API calls. Let us say you want to send a file to your partner. You can simply use start file transfer with the file path. Now let us say you wanted to retrieve a file from your partner. You just specify the retrieve file path and the S3 destination.
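Those send and retrieve operations correspond to the StartFileTransfer API. A minimal boto3 sketch, with a hypothetical connector ID and paths, might look like this:

```python
import boto3

transfer = boto3.client("transfer")
CONNECTOR_ID = "c-0123456789abcdef0"  # hypothetical connector ID

# Send a file from your S3 bucket to the partner's SFTP server.
transfer.start_file_transfer(
    ConnectorId=CONNECTOR_ID,
    SendFilePaths=["/my-transfer-uploads/outbound/invoice.edi"],
)

# Retrieve a file from the partner's server into an S3 prefix.
transfer.start_file_transfer(
    ConnectorId=CONNECTOR_ID,
    RetrieveFilePaths=["/outbound/ack.edi"],
    LocalDirectoryPath="/my-transfer-uploads/inbound",
)
```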
Let us say you want to check what files are available. You can use start directory listing to get a complete inventory of the remote SFTP source. Finally, let us say you finished processing and want to clean up. You can use the remote file operations we recently added: move, delete, and rename. If you want to learn more about these new operations, there is a QR code that I would love for you to click into. Just like we support PrivateLink for servers, I am really excited to share that we are bringing that same capability to our SFTP connectors. This means your SFTP connectors can now connect to private SFTP servers wherever they are hosted, whether this is in shared VPCs, on premises, or in partner environments that are accessible through your private networks.
All traffic routes through your VPC environment, which allows you to enhance your security and comply with your security mandates by traversing centralized firewalls and traffic inspection points. You present your own IP address, which eliminates the need for your partner to allow list any additional IPs. Finally, for heavy file volumes, you get the full performance of your NAT gateway.
Let's talk about how to set this up. As a quick note, this does not require you to modify anything in your existing VPC configuration. However, it does require you to set up two simple components in Amazon VPC Lattice. First, you create a resource gateway, which you can think of as a bridge between Transfer Family and your VPC environment. Then you create a resource configuration that represents your SFTP server address. You can use either the private IP address or the public DNS name.
Your resource gateway links your connector to these configurations, and the connector can now reach your remote servers through your VPC. The best part is that there are no changes required to your VPC environment. You can use your existing security controls and present your existing IP address. Everything should just work.
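For illustration, here is a hedged boto3 sketch of those two Lattice components. This was not shown in the session; the parameter shapes reflect my reading of the VPC Lattice resource-configuration API and should be verified against current documentation, and all names and IDs are placeholders.

```python
import boto3

# Parameter shapes are my reading of the VPC Lattice resource-configuration
# API; verify field names against current documentation before relying on them.
lattice = boto3.client("vpc-lattice")

# 1. A resource gateway: the bridge between Transfer Family and your VPC.
gateway = lattice.create_resource_gateway(
    name="sftp-connector-gateway",
    vpcIdentifier="vpc-0123456789abcdef0",      # hypothetical
    subnetIds=["subnet-0123456789abcdef0"],     # hypothetical
    securityGroupIds=["sg-0123456789abcdef0"],  # hypothetical
)

# 2. A resource configuration representing the remote SFTP server,
#    addressed by private IP or DNS name, on the SFTP port.
lattice.create_resource_configuration(
    name="partner-sftp-server",
    type="SINGLE",
    resourceGatewayIdentifier=gateway["id"],
    resourceConfigurationDefinition={
        "dnsResource": {
            "domainName": "sftp.partner.internal",  # hypothetical
            "ipAddressType": "IPV4",
        }
    },
    portRanges=["22"],
    protocol="TCP",
)
```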
Transfer Family Web Apps and AWS B2B Data Interchange for EDI Workflows
Now I want to shift gears slightly, because what if the end user persona we're talking about is different? What if these are non-technical users in your organization, like your HR business partners, financial analysts, and so on? All these people need to transfer files to S3, and for you, S3 is where you want the data to be because it offers security, durability, scalability, and more. But for your end users, you need to provide an easy-to-use experience so they can easily upload and download files. That's where Transfer Family web apps come in.
Authentication is driven through IAM Identity Center, so your users can use single sign-on or present the existing credentials they're already using for other applications. User permissions are driven through S3 Access Grants. With access grants, you can enable fine-grained access control, so your users can only reach the data they're authorized to access, with only the permissions you allow, for example read-only or read-write access. With the web app, you also have customization options so your web app aligns with your company brand.
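As an illustration of how such a grant could be created programmatically (not shown in the session), here is a boto3 sketch using the S3 Access Grants API; the account ID, location ID, prefix, and Identity Center user ID are all placeholders.

```python
import boto3

s3control = boto3.client("s3control")
ACCOUNT_ID = "123456789012"  # hypothetical

# Grant an Identity Center user read-only access to one prefix of a
# registered Access Grants location ("default" is the account-wide location).
s3control.create_access_grant(
    AccountId=ACCOUNT_ID,
    AccessGrantsLocationId="default",
    AccessGrantsLocationConfiguration={"S3SubPrefix": "reports/*"},
    Grantee={
        "GranteeType": "DIRECTORY_USER",
        # IAM Identity Center user ID (hypothetical)
        "GranteeIdentifier": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    },
    Permission="READ",
)
```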
As for how to set up a web app, it's quite simple, and you can do it in the console in just a few clicks. While you're setting up your web app, you have several options. You can add a company logo, add a favicon or browser tab icon, and specify the page title, which is what shows up in the browser tab presented to your end users. Web apps are part of Transfer Family, so they share compliance status like HIPAA eligibility, FedRAMP, PCI, and more. Most importantly, with your end user in mind, it's a simple-to-use experience that's accessible through any browser they're already using, and it's a drag-and-drop, point-and-click experience.
We launched web apps this time last year, and in that year, we've learned a great deal from you about what use cases you're using web apps for. For example, we're hearing that your finance teams are exchanging reports through web apps, your marketing teams are uploading media assets through web apps, and your research scientists are collaborating on datasets through web apps. There are many use cases, and while they all sound different, the underlying themes are the same. First is easy-to-use user experience, and second is that you're centralizing access across your workforce using existing security controls and identity credentials.
If you look at this flow, it's straightforward. You as the admin create the web app and share the URL with your end user. Your end user then accesses the web app through the URL and uses single sign-on or logs in with their username, password, and MFA that they're already using with whatever identity provider you're using. After a token exchange between your identity provider and Identity Center, your user lands in the web app where they only see the files and folders you've given them access to, either because of their user identity or group membership. They're only allowed to perform certain actions through S3 access grants.
The backend file transfers go directly to S3, and in CloudTrail you can audit all user interactions down to the user identity, so you're able to see who does what. Just like we brought VPC support for servers and for connectors, I'm excited to share that as of a few weeks ago you can also enable a VPC endpoint for web apps. With a VPC endpoint, we create a service-managed endpoint within your VPC at no additional charge. Your users can now securely access the web app through a web browser while keeping all traffic within your VPC.
This is really important if you have internal teams who need to handle sensitive documents and you have regulatory or security mandates that you're trying to meet. Now your users can connect directly through Direct Connect or VPN or from within your VPC. You can also further enhance your security by only allowing access from approved client IPs. If you have mandates you need to meet, you can do that through this feature.
Now with this launch you have two flexible options for using web apps. Through public endpoints your users can continue connecting over the public Internet, which is great if you're using this with external collaborators or partners. Now through a private endpoint your users can connect through your VPC, Direct Connect, or VPN. This is great for sensitive workloads that might require strict network controls. You can choose whatever deployment option works for you, or you can use both depending on who the user is within your organization.
So we've talked about different modes of transferring data through servers, connectors, and now web apps. But now I want to shift gears slightly, because what happens if you need pre- or post-transfer processing, especially for your EDI workloads? That's where AWS B2B Data Interchange comes in. We're seeing a lot of customers using AWS Transfer Family and AWS B2B Data Interchange together for their EDI workloads, and here's a typical architecture that we see. I'm going to start from the left and walk through it.
In this architecture, your trading partner transfers EDI files through industry standard protocols supported by AWS Transfer Family, like SFTP or AS2. AWS Transfer Family takes the file and puts it in an S3 bucket. AWS B2B Data Interchange is monitoring the bucket, and as soon as the file arrives, it validates the EDI file, translates it to JSON or XML, and writes the transformed data into an output S3 bucket. It also emits an EventBridge event with the output location and the status. This event then triggers any post-processing actions you might want and allows your data to flow into your business applications, like your ERPs or your data lakes.
If you're interested in learning more about architectures like these, there's a QR code where you can learn more. We launched AWS B2B Data Interchange just a couple of years ago and as you can see we've been really busy investing in the different features for AWS B2B Data Interchange, especially this year. Most recently we expanded support into our first European region in Dublin to now support four regions. If you want to learn more there is also another QR code that you can click into.
Building Modern MFT with Agentic AI: A Live Demo of Insurance Claims Processing
Great. With that, I've talked about some of the components that you can use to build your modern MFT systems, and now I'm excited to pass off to Prabir, who's going to show you this in action. Now, building on top of some of the components that Suh and Smitha mentioned, today I'm going to walk you through how you can build a modern managed file transfer system on AWS that leverages agentic AI. Let me quickly highlight the core building blocks. We're going to build our secure file transfer using AWS Transfer Family. We're going to add malware protection using Amazon GuardDuty. And this is what makes the solution truly modern: instead of rigid, rules-based engines, we're going to use AI agents as the intelligence behind our file processing.
Tying it all together is Amazon EventBridge, which forms the foundation of our event-driven architecture, and this is what automates the entire system. How many folks over here use Terraform? Awesome, quite a few of you. So you'll be super excited to know that the demo I'm going to show you today is built entirely using Terraform. I'll share the QR code at the end, so please feel free to check it out. There are some other examples that you can also build on.
Now for today's use case, we're going to take a traditional insurance claims processing system and see how we can modernize it with our modern architecture. Also, how many of you are from insurance or financial services organizations? Can I see a few of you? The use case I'm going to show you today is actually quite applicable to almost any industry that does some kind of file processing.
Typically we start with an ingest phase: in our use case, documents like policy documents, images, or repair estimates are ingested into an SFTP folder. The next phase is the extract phase, where most organizations have some kind of basic OCR. OCR is a technique to extract text from images. The challenge with traditional OCR is that it is rigid; it only expects files and formats in a certain way.
Finally, the last phase of our flow is the analysis phase, where typically there is a bit of manual processing. You may have some kind of rules-based engine that does part of the automation for you, but this is where humans have to get involved and you have a human in the loop. The challenge with this entire approach is that not only is it error prone, it is time consuming and it just doesn't scale.
Now let me walk you through how you can modernize this by using cloud native architecture. Here's our modern approach that builds on top of the core building blocks that I spoke about: secure file transfer, malware scanning, and agentic AI. I'm going to show you a demo that will automate an end-to-end workflow of claims processing using agentic AI all powered by event-driven architecture. Let's dive deep into each of those components to explore more.
Our first stage is where we modernize our foundation. We will replace our legacy SFTP servers with AWS Transfer Family, which means that there's no infrastructure to manage, it automatically scales, and it's highly available. Now we want to authenticate external users, and we do that by adding custom identity provider support. These external users could be anyone from repair shops to partners that your company works with, and this lets you integrate with their identity provider so you don't have to do separate credential management.
Now we want to store everything in Amazon S3 for unlimited scalability and high durability for all your files. In the next stage, we will add automatic malware scanning. Most regulated industries require malware scanning of all files as they land, so we're going to achieve that by leveraging GuardDuty's native malware scanning capability. GuardDuty offers immediate threat detection and intelligent file routing for all your files. What this means is that clean files land automatically in a clean bucket, and any malicious or suspicious files move directly to a quarantine bucket, all done through event-driven architecture.
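One way this routing could be wired up, sketched here as an illustration rather than the demo's actual code: a Lambda function subscribed to GuardDuty's scan-result event copies each object to a clean or quarantine bucket. The event field names follow the GuardDuty Malware Protection for S3 schema as I understand it, and the bucket names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

CLEAN_BUCKET = "claims-clean"            # hypothetical
QUARANTINE_BUCKET = "claims-quarantine"  # hypothetical

def lambda_handler(event, context):
    """Route an object based on a GuardDuty S3 malware scan result event.

    Field names follow the GuardDuty Malware Protection for S3 event
    schema as I understand it; verify against the current documentation.
    """
    detail = event["detail"]
    bucket = detail["s3ObjectDetails"]["bucketName"]
    key = detail["s3ObjectDetails"]["objectKey"]
    status = detail["scanResultDetails"]["scanResultStatus"]

    # Clean files go to the clean bucket; anything else is quarantined.
    dest = CLEAN_BUCKET if status == "NO_THREATS_FOUND" else QUARANTINE_BUCKET
    s3.copy_object(Bucket=dest, Key=key, CopySource={"Bucket": bucket, "Key": key})
    s3.delete_object(Bucket=bucket, Key=key)
```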
And lastly, the intelligence layer. This is where we've seen a huge transformation happening in industries that are trying to modernize their traditional file processing workflows. We are going to be using Amazon Bedrock AgentCore, with AgentCore orchestrating all of our AI agents. The agents themselves can use the most sophisticated models available, not only in Bedrock but also from other providers. These could be Anthropic's Claude models or our Amazon Nova models, and in general they offer a lot more flexibility and accuracy than traditional OCR.
Now finally we want to make this accessible to human users in a simple way. This is where we use AWS Transfer Family's web apps feature, which gives your end users simple browser-based access to these files. It is completely self-serve. It has security built in, which means that it provides role-based access, the right access to the right people at the right time, built on the principles of zero trust. Now let me show you all of this in action. It's probably the demo that you've been waiting for.
And before I dive deep in, I want to mention that I've broken this demo down into multiple stages. You will see that in the code as well. For this demo I've pre-deployed certain stages. I have the identity foundation pre-deployed. I've set up a Transfer Family server with a custom identity provider. I've also set up our automatic malware scanning using GuardDuty. Now we're starting with the dual identity system. We're using IAM Identity Center for our insurance users. For this demo I've created local users, but your Identity Center setup may look very different. You may have already federated with another identity provider.
I have two users, claim reviewers and claim admins, and these are what I'm going to be using for the demo. Now for our external users, I'm using Amazon Cognito to simulate user management. I've created an AnyCompany Insurance user and an AnyCompany Repairs user, which is the identity I'm going to be using throughout the demo. I've also set up a transfer server, so let's check that out in a second. If you look at the identity provider configuration, I set this up using a custom AWS Lambda, which is our custom IDP solution. I've created a bunch of different buckets, named with a random name generator, so if you like these names, I'm glad. We're going to be using our claims file bucket to ingest all our files, but the bottom three buckets (the clean bucket, errors bucket, and quarantine bucket) are going to be used by our malware scanning.
Now to test our event-driven architecture, I'm going to upload a file. I'm using the Claude CLI, by the way, so I'm not typing SFTP commands manually. I'm very clumsy, so I thought it was a better use of natural language to ask Claude to do the work for me. So Claude created a test file, and I've uploaded it to my bucket. Claude is also parsing logs from CloudWatch, and it reports that the GuardDuty scan has now kicked off. Let's see what's happening. The first thing I see is that my test file landed in my SFTP bucket. If my malware protection is working as designed, this file should be automatically processed, and if I do a refresh, it should end up in my clean bucket, which is exactly what you see.
Now I want to test the same thing with a malicious file. I don't want to infect my system, so I'm using an EICAR file, which is a standard test file used to verify malware scanning. It's the same flow: I upload it into the SFTP folder. OK, so again, Claude uploaded this file using SFTP. We can double-check in our console. We see that this file landed in our SFTP folder. I also want to double-check that my clean bucket is intact. If I quickly do a refresh, I see that nothing's changed; I still have the clean file I uploaded earlier. But if I go to my quarantine bucket, this is where the magic is. GuardDuty automatically detected this as a malicious file and moved it to my quarantine bucket.
So now that I've done this, Claude is again summarizing all the findings. It detected everything we saw in the console, so nothing new there. Now I'm going to deploy Stage 3, which is the agentic AI foundation layer, and you can do this with a simple terraform apply. That's all Claude is doing on my behalf. The commands you see over here are not something from the Matrix; they're a terraform apply, so most of you are familiar with this. I want to quickly walk you through a snippet of what my agents look like. What you see here is that instead of using complex logic, I'm using a very simple prompt, and for those of you who have used a chat-based interface such as ChatGPT, this might look very familiar. I've asked this agent to do something for me. I've said: you are a claims processing workflow agent, you use Strands, and your job is to extract entities, validate the damage claim, insert the enriched data into a database, and generate a summary. That is my instruction to this agent. Now the agent is going to figure out what to do, how to do it, and which agent or sub-agent to engage.
Now for this demo I'm actually using two different claim files, which I'll show you in just a second, but before I do that, let's check our agents in AgentCore. Anybody using AgentCore? OK, some of you. For those who are, this might look very familiar. I've deployed five different agents, and I'm using AgentCore Runtime; that's exactly what we saw with the five different agent files. For this demo I've also created a DynamoDB table, which simulates how most financial services organizations do downstream processing. You may have some kind of database that you keep as a book of record for downstream processing of claims. Let me quickly show you the claim files we're going to be testing. Our first claim is a regular claim for a car with rear bumper damage in a shopping center parking lot. There's an estimated repair cost of $995, some entities over here, and an image of a car with a fender bender, so everything is very consistent.
Now let's test this. I'm going to upload this using the same process with our event-driven architecture. We're going to upload it into our SFTP folder. You can see that just happened, and Kiro is going to start parsing the logs. This is the second phase of our agentic AI workflow, which is document scanning. We see that the file was successfully uploaded into my clean bucket, and the agents moved it to a submitted claims prefix for organization. I see the same files that you saw.
Now, I've asked Kiro to parse the CloudWatch logs. Instead of going to the CloudWatch console, I like working in the IDE, so that's what I'm doing. Kiro is parsing the logs for me, and we see that our agents successfully parsed the entities. Now it's giving me a summary of exactly what it found. My agent says that this damage is consistent. It is 90% confident that this claim is legitimate, and it gave me a reason why it thinks so. I've also instructed the agent to write this file to an S3 bucket, so it has processed the claim and created a summary for my end users.
If I open this file, it's the exact same thing we saw in the IDE. It says there's a car with minor bumper damage that is consistent with the claim description, so everything checks out and the claim can be processed and paid. I'm going to run the same test for claim two now. We see claim two was successfully uploaded; the same flow applies and nothing has changed. I see that my agents have finished the processing and extracted the entities. Now let's look at the results. This claim is fraudulent, according to my agent, and it's 95% confident. I didn't have to hardcode any of this; the agents did the reasoning. Let's open claim two and understand why it is fraudulent.
If I look at my claim form, it's a form that states that this car had a minor front bumper scratch in a grocery store parking lot. There's some more description about the scratch itself. The scratch is about three inches long. Now if I look at the image of this claim, do you guys think this is a scratch? Probably not. By the way, no cars were harmed in creating this demo. This is all AI generated. It would have been a very expensive demo otherwise.
So remember, we created a DynamoDB table. One of my agents also ingested the meaningful entities; it knew exactly what to put into the DynamoDB table, and it's very consistent with what we saw. We have two claims: one has consistent damage, labeled as true, and the other is labeled as false. All the entities extracted from the claim forms, along with other metadata, are stored.
Now all of this is great, but we want to make this data accessible to human users in a simple way. Your business users are not going to log into Kiro to check out this data, and they're probably not going to log into the AWS console either. So this is where we deploy the web apps functionality. I did the same thing: I created the web app using Terraform, and we see that it has been created. I'm going to log into the web app using its access endpoint, with the two identities I created at the beginning of the presentation, my claim reviewers and my claim admins. For my claim reviewers, I've given them read-only access. Let's see if they indeed have the access I defined in infrastructure as code. As we see, our claims reviewer only has read access to the files under processed claims and submitted claims, which is very consistent with what we saw. We see the two claims from our S3 bucket, and just to double-check, I can download this claim. This is exactly what we saw, but instead of going to the console and typing SFTP commands, this offers a very simple interface for end users.
Agent Architecture Deep Dive and Terraform Module Resources for Implementation
I also want to test the same thing for our claim admins. For my claim admins, I've given them read-write access to the entire bucket, so they should ideally see a little more than what we saw for our claim reviewers. Perfect. So they see the entire bucket. They have read-write access, so this is working as designed. We see all the prefixes that were in the bucket, and just to test that they have read-write access, I'm actually going to delete this claim file. Perfect. So if I do a refresh, I see that there are no files, so again this is working as designed.
Many of you may have questions about the agents, so I want to double-click on how I created them. AgentCore is the foundation orchestrating all of our agents, and I built all of these agents using Strands. Is anybody familiar with Strands? Some of you are. For those who are not, it's an open source SDK built by AWS that lets you create agents in a very easy and flexible way. A fun fact is that all of these agents were created in five minutes or less using the Q CLI with lots of prompting. I want to walk you through what these agents are doing at a high level.
We have our entity detection agent, which I've asked to detect and extract entities from the PDF. My validation agent has multimodal capabilities: it can extract entities, read the text, and compare it with the image. This is what does the fraud detection, and this is what produces that 90% confidence score. Our summarization agent creates the meaningful summary that we all saw in the S3 bucket and through the web app. Our database agent ingested all these records into the DynamoDB table.
The brain behind all of this is our supervisor agent. Remember the prompt we saw earlier? That prompt tells my supervisor agent to decide which sub-agent to invoke and when. The reason I'm showing this is that the pattern is not only applicable to insurance; it can be applied to any industry and any use case. Whether you're in payments, hospitality, or healthcare, this shows how you can take a file processing workflow and use agentic AI to transform it and add intelligence to it.
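To give a feel for this supervisor pattern, here is a minimal Strands sketch in which sub-agents are exposed to a supervisor as tools. The prompts and the two-agent setup are simplified stand-ins for the demo's five agents, not the actual demo code.

```python
from strands import Agent, tool

# Sub-agents wrapped as tools so the supervisor can delegate to them.
# Prompts are illustrative, not the demo's actual instructions.
entity_agent = Agent(system_prompt="Extract entities from insurance claim documents.")
validation_agent = Agent(
    system_prompt="Compare the claim description with the evidence and score consistency."
)

@tool
def extract_entities(document_text: str) -> str:
    """Extract claim entities such as claimant, vehicle, and estimated cost."""
    return str(entity_agent(document_text))

@tool
def validate_claim(claim_summary: str) -> str:
    """Check whether the described damage is consistent with the evidence."""
    return str(validation_agent(claim_summary))

# The supervisor decides which sub-agent (tool) to invoke and when.
supervisor = Agent(
    system_prompt=(
        "You are a claims processing workflow agent. Extract entities, "
        "validate the damage claim, and generate a summary."
    ),
    tools=[extract_entities, validate_claim],
)

result = supervisor("Process the claim described in claim-001.pdf")
print(result)
```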
Before I wrap up, I want to emphasize that what you saw isn't just a demo. Everything that we built is powered by our Terraform Transfer Family module. I've given the QR code here, so if you want to grab it, please feel free to do so. Today we saw that we deployed a Transfer Family server, deployed custom identity provider support with it, set up malware protection, agentic AI, and web apps. This is just one of many examples that we have in our Terraform Transfer Family module. They're all built around real-world use cases that you can deploy and use right away.
We launched this module only a couple of months ago, and we've already had over 10,000 downloads. I want to call out and thank the amazing solutions architects who have been contributing to this module, as well as external contributors like you. We've actually had a lot of external contributions, so if you're in the room, thank you. We do maintain a public roadmap. If you like what we're working on, please give it a plus one so we know where to focus. If you think anything is missing, please create a GitHub issue. My team and I review these feature requests daily, and I'm really excited to see what you will build using these modules. With that, I'm going to pass it on to Suh.
Awesome, great demo, and I'm so excited about everything this makes possible. We're going to wrap up with a few resources and next steps for you all. Hopefully from this session you're able to take away how simple it is to create a secure and modern MFT system using Transfer Family and B2BI. As Prabir's demo showed, you can also unlock innovations using agents with Transfer Family.
As for availability, here are all the regions that Transfer Family is available in today. All recent regions are highlighted in pink. At re:Invent, if you want to learn more about Transfer Family and even rebuild what Prabir just showed you, here are some sessions for you to check out. Finally, here are some resources with QR codes for you all to save for later. We have our user guide where you can see end-to-end tutorials and guidance. Here's our website where you can learn more about our different offerings. Most exciting to me is a self-paced workshop that's hands-on so you can get your hands dirty and build event-driven MFT workflows for yourself. With that, thank you so much, and please reach out to your account teams to connect with us. We're looking forward to talking more with you.
; This article is entirely auto-generated using Amazon Bedrock.