🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Modern SFTP: Deploy AWS Transfer Family, Identity, and Automation (STG419)
In this video, AWS solutions architects Matt Boyd and Prabir Sekhri demonstrate modernizing managed file transfer systems using AWS Transfer Family and agentic AI. They build a complete insurance claims processing solution through four stages: deploying Transfer Family SFTP servers with custom identity provider integration using Cognito, implementing automatic malware scanning with Amazon GuardDuty, creating an AI-powered claims processing workflow using Amazon Bedrock AgentCore and the Strands SDK to extract entities and detect fraud, and finally deploying Transfer Family web apps with S3 Access Grants for claims reviewers. The entire architecture is deployed using Terraform modules, featuring event-driven orchestration with Amazon EventBridge. They demonstrate live coding and deployment, showing how AI agents can process claims in seconds versus hours of manual work, achieving 90-95% confidence in fraud detection by comparing claim descriptions with damage images.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction to STG419: Modernizing Managed File Transfers with Code
Thank you, everyone, and welcome to STG419. Quick show of hands, who here has been to a code talk before? Anyone? All right. Who came to this session specifically because you saw that we were going to show code instead of just PowerPoint slides? Anyone? All right, that's good. Who came because it's after lunch and you're looking for a place to take a nap? All right, yeah, we'll try to keep it down for you, sir.
My name is Matt Boyd, and with me is my colleague Prabir Sekhri, and we are solutions architects with AWS. Today we're going to be exploring how to modernize your managed file transfers and automate your file transfer and file processing. Let me quickly walk you through the journey that we're going to be taking you on today.
We're going to start by talking about managed file transfers, what they are, why they matter, and some of the building blocks that we'll be using for our modern file transfer service that we'll be building today. Then we're going to do a quick overview of Transfer Family and its features. And then we're going to dive into our use case and a target architecture, followed by a quick sprinkling of agentic AI since we'll be using that as part of our solution. And then we're going to get into the code. We will be doing hands-on coding and deployment in real time, and we'll hope that nothing breaks while we do it.
Understanding Managed File Transfer and Modern Building Blocks
So let's get started. What is managed file transfer? Simply put, it is the exchange and processing of files securely, often between two known business partners or business entities, but it can also be between internal or external systems in general. Believe it or not, managed file transfer is critical to almost every industry and vertical. In finance, it is used for clearinghouse settlements, for example. In logistics, it's used for supply chain tracking. And in almost any analytics use case, you have to ingest files and content, which is also typically done using managed file transfer in some capacity.
For most organizations, managed file transfer is not just the process of transferring files, but it's core to their business functions and their business processes. What many organizations need to do is they need to modernize their managed file transfer systems and move away from legacy tooling and operational overhead. Today we're going to be showing you how to build a modern MFT system. These are just some of the building blocks that we're going to be using today.
First, we're going to be using AWS Transfer Family, a fully managed service for file transfers, and I'll talk about that in a moment. We're also going to implement some malware scanning, because most organizations need to do that when they receive files from an outside organization. We'll use Amazon GuardDuty malware scanning for S3 for that. For the workflow that processes each file after we receive it, we're going to remove manual human labor and use an agentic workflow instead. Pulling it all together, we're going to use infrastructure as code with Terraform for automated deployments, along with an event-driven architecture built on Amazon EventBridge.
AWS Transfer Family: Key Components and Event-Driven Architecture
Briefly, I want to cover the key components of AWS Transfer Family. There are actually three main services or features within the Transfer Family portfolio. The first one is file transfer servers. These are fully managed file transfer servers that you can deploy very quickly, and they scale automatically. They're backed by S3 storage, and they support the industry-standard protocols SFTP, FTPS, and AS2.
Now, if you need to send files to a remote SFTP server, or maybe you need to download from a remote SFTP server, Transfer Family offers SFTP connectors. SFTP connectors are essentially a fully managed SFTP client that you can use via an API. So there's no infrastructure to maintain to download or send files via SFTP to remote servers. And then finally, if you have use cases where you need to provide web-based access to files stored in S3, and you need to do it securely with authentication for human users and in a user-friendly interface, we have Transfer Family web apps. This is a fully managed web app that integrates with S3. We'll be showing that as part of the solution today as well.
One other callout here is that with modern file transfers, you often want to adopt an event-driven architecture, and Transfer Family integrates with Amazon EventBridge. For example, you can automatically trigger downstream file processing when you receive a file.
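As a rough illustration of that integration, here is a minimal Terraform sketch of an EventBridge rule that reacts to Transfer Family upload events and forwards them to a downstream processor. The detail-type string and the Lambda input are assumptions for illustration, not taken from the session's repository.

```hcl
# Minimal sketch: react to Transfer Family file-upload events with EventBridge.
# The detail-type value below is an assumption; verify it against the
# Transfer Family EventBridge documentation.

variable "processing_lambda_arn" {
  description = "ARN of an existing downstream processing Lambda (assumed input)"
  type        = string
}

resource "aws_cloudwatch_event_rule" "sftp_upload_completed" {
  name = "transfer-family-upload-completed"

  event_pattern = jsonencode({
    "source"      = ["aws.transfer"]
    "detail-type" = ["SFTP Server File Upload Completed"]
  })
}

resource "aws_cloudwatch_event_target" "process_upload" {
  rule = aws_cloudwatch_event_rule.sftp_upload_completed.name
  arn  = var.processing_lambda_arn
}

# Note: the target Lambda also needs an aws_lambda_permission resource
# allowing events.amazonaws.com to invoke it.
```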
With that, I'm going to pass it over to Prabir, and he's going to talk about the use case that we're going to be building for today. He'll show you the current state and then what we're going to be building toward.
Insurance Claims Processing Use Case: From Traditional to Cloud-Native
Thank you, Matt. For today's use case, we'll be looking at modernizing a traditional insurance claims processing system. I just want to know, with a quick show of hands, how many of you are in the insurance or financial services domain? Quite a few, nice. What about some other domains? Maybe you can shout it out. What industries are you from? Healthcare, hospitality, perfect. The use case I'm showing you today can actually be applied and seen in other industries as well.
The way traditional insurance claims processing works is that it typically has three phases. We start with an ingest phase, which is where most companies ingest files using SFTP. In our insurance use case, these could be files such as policy documents, images, and repair estimates. Next comes an extraction phase, where most organizations have some kind of basic OCR. For those who don't know, OCR is a technique used to extract text from images. The challenge with basic OCR is that it is somewhat rigid and only expects files in certain formats.
Finally, in the analysis phase, you may have some rules-based intelligence, but most of the time claims are processed manually. The challenge with this approach is that not only is it error prone and time consuming, it just doesn't scale. Now we're going to show you how we'll transform this using cloud-native architecture.
Here's a modern approach for this use case which demonstrates all the principles that Matt spoke about. We'll be using AWS Transfer Family for secure file transfer. We'll be setting up malware scanning using Amazon GuardDuty, and we will be leveraging AI agents to do the manual processing. All of this architecture is going to be orchestrated using Amazon EventBridge. Without further ado, I'm going to dive deep into each layer and we'll talk about it a little bit more.
Four-Stage Architecture Overview: Security, Malware Protection, AI Agents, and Web Access
Stage one is where we build a secure foundation. We start by replacing our legacy SFTP servers with Transfer Family, which means there's no infrastructure to manage; it offers automatic scaling and high availability. We want to authenticate external users, especially for our use case with external partners and repair shops, and we want to do it in a secure way without managing identities ourselves. This is where we're going to leverage Transfer Family's custom identity provider toolkit, and Matt's going to demo that in just a while. By using it, you don't have to manage separate credentials; you can integrate with your partner's identity provider, whether it's from Okta, Ping, or Entra. We want to store everything in Amazon S3 for its unlimited scalability and high durability.
Moving on to our next stage, we want to add automatic malware protection. GuardDuty will scan every file as it arrives. It offers immediate threat detection and intelligent routing. What I mean by that is that all your clean files are going to be automatically moved to a clean bucket and any malicious or suspicious file will move to a quarantine bucket automatically, all done through event-driven architecture.
Moving on, this is where we're seeing a major transformation in how organizations are processing files and modernizing file transfers. This is where we're going to use AI agents to replace some of the manual processing. We're going to use Amazon Bedrock AgentCore as the orchestrator for all our agents. The agents themselves can use the most sophisticated models available, not only in Bedrock but also from other providers and partners, such as Anthropic, or our own Amazon Nova models. In general, these provide a lot more flexibility and higher accuracy than traditional OCR. With this approach alone, we've automated what used to take a team of people hours to days, and it can now be done in seconds or minutes.
Now let's make all of this accessible to our human users. This is where we're going to leverage Transfer Family's web apps. These are built especially for end users who don't have to know any SFTP commands; they offer simple, browser-based access and are completely self-service.
They also have security built in, which means you can have rules-based access where the right people have access to the right files at the right time. This is built on the principles of zero trust.
Now we're going to be deploying all of this using Terraform. The Transfer Family service team endorses this approach, and in fact, we support an official module that is available in the Terraform registry. If you're using Terraform modules from Transfer Family or Terraform in general to automate your infrastructure, you're going to really enjoy what we have for you today. For those who are not using our modules, I strongly encourage you to use them because using modules allows you to improve standardization. You can deploy all of your infrastructure across your environments in a standardized way.
Ideally, you want to track everything in Git for change tracking, rollbacks, and auditing. Lastly, we recommend that you deploy everything using a CI/CD pipeline.
Now it's time to build and start getting into the code. If you want to see the actual code yourself, please grab the QR code, which will take you to the GitHub repository we have for you and the specific branch that we're going to be working off of. We'll share the QR code again at the end, so if you've missed it, that's perfectly okay. We're going to be showing this a couple of times. Let me give everybody maybe 10 seconds to capture the code.
Stage 1 Implementation: Deploying Transfer Family Server with Custom Identity Provider
So Matt, you have some stuff to show them, right? I do. Before we get into the first part of the code, I'm going to show you what we're going to be building. You may have caught it when Prabir was showing the various stages of our architecture, stages 1 through 4, and that's actually how we're going to be building out our architecture today into various stages.
The first stage is to create our AWS Transfer Family server using the Terraform modules for that. But we also need to incorporate authentication, and that is where we're going to be deploying the Transfer Family custom identity provider solution. This is a solution that is endorsed by the Transfer Family service team for integrating the most popular identity providers. We have a pre-built Terraform module for this as well that I will show you.
With that, I'm going to go ahead and get into the code. We are now in the terminal here. You'll be able to see this if you go to the code in GitHub after the session. We've structured our Terraform into various stages, and we have a couple of additional modules that we put into this solution. The first stage is our Transfer Family server stage. There are also a couple of prerequisites, IAM Identity Center and a Cognito user pool, which are in stage 0 and have already been deployed.
Before I go into the code, I'm going to go ahead and start the deployment. We have several scripts to help us get the deployment going, and we will run the stage 1 deploy.sh script. It quickly shows what we're actually deploying here: a terraform apply using a variables file. We've broken the solution up into different stages with feature flags in our Terraform, and that's really how we're controlling what we deploy and when. Obviously, in most environments you would not need staging like this, but we're doing it for the purposes of the code talk. Let's kick off the deployment.
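The stage gating with feature flags mentioned above typically looks something like the following sketch. The variable name, module path, and tfvars file name are hypothetical; the session repository defines its own.

```hcl
# Hypothetical stage gating with a feature flag; names are illustrative.
variable "enable_agentic_processing" {
  type    = bool
  default = false
}

# Each stage is switched on per run via a variables file, e.g.
#   terraform apply -var-file=stage3.tfvars
module "agentic_processing" {
  source = "./modules/agentic-processing" # placeholder path
  count  = var.enable_agentic_processing ? 1 : 0

  # ...stage-specific inputs...
}
```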
While that builds, I will jump over to stage one, our transfer server. A couple of key callouts here. The first thing that we are actually building, and I'm going to bring this down to give us some space, is our custom identity provider solution. For that, we're using the Terraform for AWS Transfer Family custom IDP module that is already available in the GitHub repository. This really simplifies the deployment of the custom IDP solution itself, which consists of a Lambda function with authentication logic and modules for various identity providers, and then DynamoDB tables that are used for configuration.
Some identity providers cannot be reached over the public internet, and you may need to attach the Lambda function to a VPC. To do that, you can set the use VPC parameter to true and specify the subnets and security groups to attach the Lambda function to so that it can communicate with your identity provider. There are several other optional features. For example, you can put an API Gateway API in front of the Lambda function if you want to attach a Web Application Firewall for additional security. But this is really all you need out of the box to deploy the custom IDP solution with Terraform.
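For orientation, a module call along those lines might look like the sketch below. The module path and argument names are assumptions for illustration; check the custom IdP module's README in the session repository for the exact inputs.

```hcl
# Sketch of the custom identity provider module wiring (names are assumed).
module "custom_idp" {
  source = "./modules/transfer-custom-idp" # placeholder path

  name_prefix = "claims-mft"

  # Attach the authentication Lambda to a VPC only when the identity
  # provider is not reachable over the public internet.
  use_vpc = false
  # vpc_subnet_ids     = ["subnet-0123456789abcdef0"]
  # security_group_ids = ["sg-0123456789abcdef0"]

  # Optional: front the Lambda with API Gateway so a WAF web ACL can be
  # attached for additional protection.
  enable_api_gateway = false
}

# The module's outputs (assumed names), such as
# module.custom_idp.lambda_function_arn, feed the Transfer Family server.
```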
Now the second thing we need to do once we have deployed the custom IDP solution is create our Transfer Family server. We are going to use the Transfer Family Terraform module for that. In order to configure the Transfer Family server, we have a couple of parameters here. The first one is our domain. We will configure either S3 or EFS for the storage. In this case, we are going to be using S3. We also have options for the endpoint type and the protocol. We are going to set this to a public endpoint, but you could also attach this to a VPC, for example, if you wanted to run a private SFTP server. For the protocol, we are going to be using SFTP.
Now, for configuring authentication itself, we need to change the identity provider parameter to AWS Lambda instead of service managed. Then we need to specify the ARN of our custom IDP Lambda, which is one of the outputs from the module. That is what we have done here. You can see this will wire up authentication so that all authentication requests are forwarded to our Lambda function.

Before you go further, I want to understand how you manage users and how you map them to these identity providers. That is a great question. All of this is done via configuration in DynamoDB tables, and there are a couple of ways to configure those DynamoDB records: Terraform is one option, but you could do it externally as well.

The solution has two tables. One is for the identity provider configuration; that is what this DynamoDB table item record in Terraform represents. We give our provider a name, in this case Cognito pool, because our repair shops are going to authenticate via an existing Cognito user pool. Then we specify the configuration. For Cognito, we need to specify an app client ID for our user pool and the region that the user pool resides in.

When it comes to users, we can create records for each individual user. In this record we have our user, in this case the AnyCompany repairs user, though we are using a variable to specify the username. Then you link your users to the identity provider that you created in your provider table, in this case the Cognito pool provider. From there, you specify what access and entitlements you are going to give for that Transfer Family session. Transfer Family supports the concept of logical or virtual directories, where you can map any path on the SFTP server to an S3 bucket or an EFS volume. In this case, I am mapping the root folder to a bucket we have provisioned, where the claims we upload are going to land initially.
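Put together, the pieces described above look roughly like the following sketch, written here with raw provider resources rather than the session's module so the moving parts are visible. The DynamoDB item shapes and the module output names are simplified assumptions; the real custom IdP solution defines its own schema.

```hcl
# Transfer Family server that delegates authentication to the custom IdP Lambda.
resource "aws_transfer_server" "claims_sftp" {
  domain                 = "S3"
  endpoint_type          = "PUBLIC"
  protocols              = ["SFTP"]
  identity_provider_type = "AWS_LAMBDA"
  function               = module.custom_idp.lambda_function_arn # assumed output name
}

# Identity provider record: points the IdP solution at an existing Cognito user pool.
resource "aws_dynamodb_table_item" "cognito_provider" {
  table_name = module.custom_idp.providers_table_name # assumed output name
  hash_key   = "provider"
  item = jsonencode({
    "provider" = { "S" = "cognito_pool" }
    "module"   = { "S" = "cognito" }
    "config" = { "M" = {
      "client_id" = { "S" = var.cognito_app_client_id }
      "region"    = { "S" = var.aws_region }
    } }
  })
}

# User record: links a username to the provider and maps "/" to the ingest bucket.
resource "aws_dynamodb_table_item" "anycompany_repairs_user" {
  table_name = module.custom_idp.users_table_name # assumed output name
  hash_key   = "user"
  item = jsonencode({
    "user"     = { "S" = var.sftp_username } # e.g. "anycompany-repairs"
    "provider" = { "S" = "cognito_pool" }
    "config" = { "M" = {
      "HomeDirectoryDetails" = { "S" = jsonencode([
        { Entry = "/", Target = "/${var.ingest_bucket_name}" }
      ]) }
    } }
  })
}
```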
Our deployment looks like it is complete, and I have some output here. We can see our Transfer Family server endpoint. I have a test script just to make sure that everything is functioning as expected. We are going to test out this code that we have deployed. We will do stage 1-test.sh. What we are going to do quickly is retrieve credentials that we stored in Secrets Manager, and we are going to use those to authenticate to the Transfer Family session. You can see our SFTP session is going to use anycompany-repairs as the user. Let us go ahead and kick that off. Say yes to the fingerprint. I will enter my password. When I enter my password, if it is successful, we will immediately upload one of our claims files just as a test. It looks like we were able to upload that file successfully, so that is a good sign. The last thing we are going to do just to make sure everything is working as expected is verify that that file is indeed in the S3 bucket where we expect it to be. With a simple S3 LS, we can see claim-1.zip was uploaded and we are all set.
Stage 2 Implementation: Adding GuardDuty Malware Protection with Event-Driven Routing
So in the first stage, we quickly built a Transfer Family server, we integrated the custom IDP solution, and we integrated our Cognito users for authentication. So with that, I think our next step is to implement some malware scanning. Is that right? Yeah, that's right. Chris, do you remember which service we intended to use for malware protection? GuardDuty. There you go. Somebody's paying attention. Perfect.
Is anybody using GuardDuty for malware scanning today? I see a few people over there. The way GuardDuty works is that it scans every object that lands in a protected S3 bucket, and it reports the result by adding an object tag based on the scan. If it's a clean object, it adds a clean tag; if it's a malicious object, it adds a threats-found tag.
What we have done with this is expand the architecture to leverage an event-driven approach. EventBridge listens for GuardDuty events, and the scan results are passed from EventBridge to an SQS queue. We added SQS for decoupling as a best practice, so that we can scale and offer resiliency; if our downstream processing is unavailable, we can always replay these messages from SQS. SQS then triggers our Lambda function.
Our Lambda function is reading all the events in our SQS queue and processing them. Lambda is the brain or the logic that decides, based on the scan results, which files to route to which bucket. Clean files will go to a clean bucket, malicious files will go to a quarantine bucket, and any object that is not processed will go to our error bucket. Optionally in this module, we have also added a dead letter queue in SQS so that any messages in the queue that are not processed will be stored because we want to investigate what happened and process them at a later stage.
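In Terraform, the event plumbing just described (EventBridge rule, SQS queue with a dead letter queue, and the routing Lambda trigger) might look roughly like this sketch. The GuardDuty detail-type string and the Lambda reference are assumptions, not lifted from the module.

```hcl
# Sketch: GuardDuty scan results -> EventBridge -> SQS -> routing Lambda.
variable "router_lambda_arn" {
  description = "ARN of the routing Lambda (assumed to be defined elsewhere)"
  type        = string
}

resource "aws_sqs_queue" "scan_results_dlq" {
  name = "malware-scan-results-dlq"
}

resource "aws_sqs_queue" "scan_results" {
  name = "malware-scan-results"
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.scan_results_dlq.arn
    maxReceiveCount     = 5
  })
}

resource "aws_cloudwatch_event_rule" "guardduty_scan_result" {
  name = "guardduty-s3-scan-result"
  event_pattern = jsonencode({
    "source"      = ["aws.guardduty"]
    "detail-type" = ["GuardDuty Malware Protection Object Scan Result"] # assumed name
  })
}

resource "aws_cloudwatch_event_target" "to_queue" {
  rule = aws_cloudwatch_event_rule.guardduty_scan_result.name
  arn  = aws_sqs_queue.scan_results.arn
}
# (An aws_sqs_queue_policy allowing events.amazonaws.com to SendMessage
# to the queue is also required.)

# The routing Lambda consumes the queue and moves each object to the
# clean, quarantine, or error bucket based on the scan-status tag.
resource "aws_lambda_event_source_mapping" "router" {
  event_source_arn = aws_sqs_queue.scan_results.arn
  function_name    = var.router_lambda_arn
  batch_size       = 10
}
```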
We have also added the ability to send notifications through SNS. We believe this is very important because if you have malware in one of your accounts or buckets, you want to get notified immediately. This is an optional feature, and we will be showing that in the code right away. Matt, do you mind switching to the demo screen? Happy to. Awesome. Before I show you the code, I am going to kick off the deployment. I am using the scripts that Matt put together. It is all very simple; it is just doing a terraform apply, as you have already seen.
The Terraform module for malware protection is actually pretty simple. You can see it is available as a submodule in our Transfer Family Terraform module. It primarily requires three things. Number one, you have to specify a source bucket, or a bucket that needs protection. In this case, it is defined using our S3 ingest bucket block. I am actually referencing a bucket that was created in one of our previous stages; it is the same bucket that Matt created and uploaded a file to over SFTP.
The second part of the configuration is our optional features. This is where we can specify an SNS topic for malware detection, and optionally a dead letter queue. We have kept all of those set to true as best-practice defaults, just to show that this works out of the box. The main thing you want to know is that we have a routing config. The routing config maps GuardDuty events, the exact same events that GuardDuty emits. For example, if the file was clean, GuardDuty will apply a no-threats-found tag, and you can see that this is mapped to a clean bucket.
This could be any of the buckets that you have created from maybe another module that you use to create S3 buckets, so you can refer to them as well. For any object that was considered malicious, we will have a threats found tag which will move to our quarantine bucket. Any objects that were not processed by GuardDuty, maybe there was a permission issue or the object type was not supported, will have an access denied or failed status and will automatically move to our errors bucket for further processing.
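As a shape reference, the module call being described could look something like this. The module path, argument names, and scan-status values are assumptions that mirror the talk; confirm the exact inputs in the module's documentation.

```hcl
# Illustrative module call; inputs are assumed names, not the module's real API.
module "malware_protection" {
  source = "./modules/malware-protection" # placeholder path

  protected_bucket_name = var.ingest_bucket_name # the SFTP landing bucket
  enable_sns_alerts     = true                   # optional notification topic
  enable_dlq            = true                   # optional dead letter queue

  # Map GuardDuty scan-status values to destination buckets.
  routing_config = {
    "NO_THREATS_FOUND" = var.clean_bucket_name
    "THREATS_FOUND"    = var.quarantine_bucket_name
    "ACCESS_DENIED"    = var.errors_bucket_name
    "FAILED"           = var.errors_bucket_name
  }
}
```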
Prabir, I just want to make sure that I understand this correctly. This is not tied specifically to the Transfer Family service. You could use this for really any solution where you need malware scanning in S3, right? Yeah, that is right. The way we build these modules, they are really independent building blocks or Lego pieces. You can take this module. Let us say you already have a pipeline that deploys all of the above, right? VPCs, Transfer Family servers, and maybe identity providers.
Maybe you're managing it separately using configuration. That's absolutely fine, but if you want to add malware protection, you can just take this module and deploy it in any of your deployment pipelines. It will work out of the box, and that's in fact how we build all the other modules too. You don't have to use everything the example shows for a full-featured deployment; you can really plug and play.
Testing Malware Protection: Clean Files and EICAR Validation
Let's test our malware protection. I don't want to add anything to the code, so let me go back. All right, stage 2 test. Once again, we're using some helper scripts that simulate the same kind of flow. The first thing we're going to do is connect to our SFTP server. I'm going to start the SFTP session and enter my password for the SFTP server. Now I see that my claim one file was uploaded.
At the same time, I also added an EICAR file. Does anybody know what an EICAR file is? An EICAR file is a way for you to test malware protection without compromising the system. You don't actually want to inject real malware to test whether your malware protection is working; that's not a good idea. What if your system gets infected? EICAR files are a great way to test malware protection.
In this case, we are parsing CloudWatch logs, and we see that GuardDuty did pick these up appropriately. Our clean file was tagged as no threats found, and our malicious file, which is our EICAR file, was detected as threats found. Now I'm going to do an S3 ls on our clean bucket to see whether the files landed in the right place. Looking in my clean bucket, I see that my file was processed properly; my clean file landed in my S3 clean bucket.
If I look at my quarantine bucket, I see that my EICAR file was successfully moved there, all done through the event-driven architecture. I didn't have to make any manual changes on my side.
Stage 3 Implementation: Building AI Agents with Amazon Bedrock and the Strands SDK
Now I think it's a good moment for us to talk about agents, or agentic AI. How many of you are using or exploring AI agents today, even just in test? A few of you. For those who are not using AI agents yet, this architecture might look very familiar: some kind of file processing pipeline that uses intelligence, and by intelligence I mean it calls some of our AI services. There were just a lot more steps involved in building something that does file processing, and those steps were basically Lambda functions or Step Functions calling different AWS services.
The thing I want to call out is that this architecture is still very relevant. If you have a pipeline that's very strict in terms of what needs to be processed and in what order, this is perfect and scales really well. But for organizations that are looking for a little more flexibility, this is where agents are really powerful.
For this use case, we're going to use an agentic workflow, and we're going to orchestrate all our agents using Amazon Bedrock AgentCore. We built all these agents using the Strands SDK, an open source framework that AWS has developed for building agents, which offers a lot more flexibility. We have a couple of different agents. I'm going to talk about the agents in the middle. I have an entity detection agent whose job is to extract text from a PDF file. I have a validation agent that is built on the principles of multimodality, so it will take the text, also read the image, and then compare whether the text matches the image or not.
I have built a summarization agent that summarizes all the findings from the previous agents, and I have a database agent that takes all the entities that are detected and pushes them to a DynamoDB table. The brain behind all of this is our supervisor agent, which coordinates all the tasks between the other sub-agents. The supervisor agent decides which agent to invoke at what time. The thing I want to mention is that this pattern is not only relevant for insurance customers but can be applied to other industries as well. Somebody mentioned healthcare, or maybe you're in finance; this kind of flow and this kind of architecture works really well. You just have to change the prompts a little bit.
Speaking of prompts, let me quickly show you how these agents are built in terms of their prompts. We have our claim files. The first thing I'm doing is tasking the entity detection agent, and I'm using simple natural language to build my prompt: I'm asking it to extract this information.
The entity detection agent extracts information from car damage claims and ingests it in JSON format. I'm asking my validation agent to analyze the car damage image against the reported claim details. I'm asking my summarization agent to summarize all of the findings into the specific format shown here. Lastly, I have my database agent, and I've instructed it in natural language to extract all these entities and push them into a DynamoDB table.
Let's talk about how I've built this in code. Matt, do you mind switching to the demo, please? There we go. Awesome. The first thing I'm going to do is kick off the deployment. Now I want to quickly walk you through the architecture and how we've built these modules. The first thing I want to mention is that we are using one of our modules for AgentCore, which works out of the box and is officially supported. However, what I want to show you is how we build these agents using Strands. Anybody using Strands today, or played around with it? We got one. Perfect. Great, awesome.
For those who have used Strands, this might look very familiar, but for those who haven't, Strands works on three core concepts, or three things that it requires. The first thing Strands requires is that the agent has a model. The model is basically the brain of the AI. The second thing the agent needs is tools. A tool, in very simple terms, is an action that you provide to the agent, an action the AI agent can perform. This action could be a function you specify that can do a task, or it could be another sub-agent that the orchestrating agent can use. You're basically giving the AI hands to interact with the outside world.
I've built a couple of tools for my orchestrator agent, and I'm going to show you an example of one of them. The tool I've built here is a very simple function that calls one of our sub-agents, the entity extraction agent. For those who are familiar with programming, what I'm doing is pretty simple. First, I retrieve the ARN of my entity detection agent. In step two, I build a payload with my bucket name and my object name, the PDF key, which is going to be one of the claims forms we'll be testing. I create a unique session ID, which I've done as a best practice so I can troubleshoot every session in CloudWatch. In step four, I invoke the entity extraction agent via Bedrock. In the final step, I parse the outputs generated by this agent so the orchestrator agent can read them and proceed to the next step.
I've built the other agents in a similar way. I have the fraud detection agent and the database insertion agent, and the tools are basically just calling these agents. But what I want to show you here is this: for those who have interacted with any kind of text-based or LLM-based service, even ChatGPT, this might look very familiar. Instead of writing complex logic or SQL statements, I am instructing this agent in natural language to do the following. I've given it a task: you are a claims processing workflow agent leveraging Strands, and your job is to extract entities, validate the damage consistency, insert the data into a database, and generate a summary. The agent decides, based on this task, which tools to use. You saw how I configured one of the tools, so the agent decides: I need to extract entities, I know I have a tool for that, which is my sub-agent, so I can call it. That's how we've built this in Strands.
Testing AI Agents: Automated Claims Processing and Fraud Detection Results
A fun fact: I built this in a couple of minutes using the Amazon Q Developer CLI. For those who are using Q or any other AI-powered coding assistant, this can be done very quickly. All you have to do is provide your business logic and ask it to build something in Strands, and you will have your agentic AI workflow. Now, before I test the agents, I'm going to show you what the claim files look like. I have two sample claim files. On my left, I have a claims record that shows a car that was damaged in a parking lot. We have rear bumper damage that happened in the parking lot, and the estimated cost to repair it is $995.
If you look at the image, this is very consistent with what we see. However, on my right-hand side, I have a claim form that states this was actually a minor front bumper scratch that happened in a grocery store parking lot. Do you think that's a minor scratch? It doesn't look like a minor scratch to me. I would say that's considerably more than a minor scratch in a parking lot. By the way, no cars were harmed in making this demo, so I just want to call that out. Otherwise, this would be a very expensive demo.
Let's test this, Matt. What do you think? Yeah, let's try it out. I'm going to switch back to the demo. Once again, we're using a test script with the same flow. Our event-driven architecture is going to kick in: we log into the SFTP server, enter a password, and upload one of our claim files. We've also asked the script to parse CloudWatch logs, because we have five different agents and we thought the best use of our time would not be going to the console to find the CloudWatch logs for each of them. Who likes going to the console, by the way? Seems like almost no one. I like the console, let's not be too mean about it. So you like it? Yeah, that's what I see the team over here saying. That's good, I guess that's the right answer.
So we see that our CloudWatch logs picked up different things happening in our workflow. Let me scroll up a bit. We see our entity detection agent picked up all these entities. Our fraud validation agent took this data and is now reading the image that I've supplied with it. Our workflow agent is our orchestrator, showing the different orchestration that's going on. Our database agent took all these entities and is now meaningfully parsing and ingesting them into a database record. I see that our claims processing was finished, and I'm going to show you the claim in just a minute. My claim was successfully processed, and I'm going to move on to the second claim as well. I'm going to test both of them and we're going to see them at the same time.
I have uploaded this file. For those who like the console, let's actually go to the console and see what's happening. We created a couple of different buckets; I will be in my clean bucket because all these files were clean, and I can see that my claim was successfully uploaded. I just uploaded my second claim as well, so it's still being processed. At the same time, I have two prefixes: submitted claims and processed claims. Under submitted claims I see claim one, which is consistent with the image and the PDF that I showed you. Under processed claims is the one that was just processed by the AI agent.
Let me open this summary and make it slightly bigger. This is claim one, which we saw had minor rear bumper damage with an estimated repair cost of $995. Here is what my summarization agent did. It states that our fraud agent is 90% confident that the description matches the image. The analysis, all written in natural language, says this is a car with minor bumper damage, very consistent with the claim description: there is visible denting on the rear bumper and no other apparent damage to the vehicle. The recommendation from this agent to our human agents is that this claim can be approved. This is super powerful, as it eliminates a whole bunch of manual steps that human reviewers would otherwise have to do.
At the same time, let me see the second one. Yeah, let's see. I'm curious about that. Let's see if it processed. My second claim is also processed. I think it should be the first one over here. Yes, legible, perfect. My second claim again shows that our claim form stated this was a minor front bumper scratch with an estimated repair cost of $350. However, our AI agent said that the image does not match the description, and the agent is 95% confident, so even more confident, that this is not a scratch. It states that although the claim suggests this is a front bumper scratch, the image shows severe front-end damage, and it recommends that this claim should be reviewed by a human just to be sure.
What I want to call out is that this shows how powerful this workflow can be and how you can apply it in your use cases. I built this using simple natural language prompts. The power of this is that you can literally take our demo, maybe play around with the prompts, and have something up and running for your own use cases too.
The one thing I do want to call out is that I don't think our business users will like going to the IDE. I know we all like the IDE or the AWS console, but that's not even appropriate for someone who needs to review insurance claims. So what do you think, Matt? Should we do something about that?
Stage 4 Implementation: Transfer Family Web Apps with S3 Access Grants and Closing Remarks
Well, I think this would be a good time to implement Transfer Family web apps. Okay, let's do that. I'm going to grab this and switch back to our PowerPoint. So the last stage, now that we've created this awesome workflow where we're automating the file and claims processing, summarizing it, and looking for potential fraud, is we need to make this available to the humans who will do this final review and sign off on these claims. We'll call those our claims reviewers, and we're going to give them access using a Transfer Family web app.
Show of hands—is anybody using Transfer Family web apps at all or explored it? Awesome. And does anybody here use a service called S3 Access Grants? Yes, obviously if you use web apps you use S3 Access Grants. Good answer. We're going to be using a combination of these services today. I'm going to briefly walk through how the service works for those not familiar.
When you deploy a web app, you integrate it with another AWS service called IAM Identity Center. Identity Center supports federated authentication from identity providers like Okta and Azure AD, just to name a few, but really any SAML-based identity provider can integrate with Identity Center. That's how your end users will authenticate. They'll authenticate through Identity Center and then be redirected back to the web app. But then we need to assign them permissions and authorizations, and that is done through another service called S3 Access Grants.
In order to do this, we first define an S3 bucket location in S3 Access Grants. This is where we specify a location to which we'll assign grants or permissions. Then we assign individual grants to our users or groups. In the table in the lower left, you can see the grants that we'll be assigning. Our claims reviewers group will be given read-only access to the submitted claims and processed claims buckets. Then we'll have a claims admin group; if they need to manage the files in the bucket, we'll give them read and write access.
We're going to set all of this up. The one thing I want you to be aware of is that if you've ever had to configure all of this in the console, especially the access grants and setting up and assigning the permissions, you may find that it's a lot of clicks and a lot of manual processes, looking up user IDs based on their display names. We're going to automate all of that with Terraform here in just a moment. Let's switch over. I'm going to go ahead and kick off our deployment before we show the code.
What we're going to do today is show you what we're calling the alpha module of the Transfer Family web app for Terraform. This is something that we haven't released yet, but it is in the source code that we're sharing with you today, and it's something that we're hoping to release very shortly. This will allow you to really simplify deploying Transfer Family web apps and configuring your S3 Access Grants for your user entitlements. This is made up of one main module for the web app and then two submodules, which I'm going to walk through here quickly.
First, we use our transfer web app module itself. This is what's going to actually deploy the Transfer Family web app. A web app is actually really easy to deploy. It's fully managed, and with a couple of clicks in the console, you can deploy a web app in the Transfer Family console. All we need to do is specify the name that we want to assign our web app and then connect it with an IAM Identity Center instance and an S3 Access Grants instance. Usually these are both in the same AWS account. However, if you already have an organization-wide Identity Center instance, you can use that instead of an account-based instance. So in a couple of lines of code, we have our web app deployed.
Now we need to configure our permissions. We have a submodule that defines the S3 locations where we're going to give users access through the web app. This module configures the location in the S3 Access Grants instance and automatically creates an IAM role with the associated permissions and trust policy required. It can optionally create a bucket for us if we want, or, as in this case, use an existing bucket, which holds those processed claims.
The other thing that it can do, and this is a requirement when you're configuring Transfer Family web apps, is we're going to set the CORS configuration on our S3 bucket. You need to do this for any S3 bucket that you configure with web apps, and this will also do that for you automatically. Now the last piece is to configure the entitlements or the permissions themselves. That's where this web app users and groups module comes in. This is a submodule that will be under the Transfer Web Apps module because it's purpose built for web apps. We can define both user and group entitlements via the users and groups parameters.
In this case, we're doing role-based access control, so we're assigning groups. First we specify the display name for our group. The module looks up the group ID, which is the UUID of that group in Identity Center, retrieves it, and uses it for the configuration. This simplifies a lot of the lookup process. Then we assign our actual access grants, in other words the permissions that we are going to give to this group. For our Claims Admins, we assign read-write access to the S3 bucket location that we defined in the previous step, and we give them access to the entire bucket, so all paths with that wildcard. For our Claims Reviewers, the setup is very similar, except there are two access grants: read access to submitted claims and any sub-folders and files, and the same for processed claims.
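Under the hood, the grants the submodules automate map onto resources along these lines. The resource and argument names are written from memory of the Terraform AWS provider and should be verified against its documentation; the group display name, prefixes, and variables are illustrative.

```hcl
data "aws_ssoadmin_instances" "this" {}

# Look up the Identity Center group ID from its display name,
# which is the manual step the module removes.
data "aws_identitystore_group" "claims_reviewers" {
  identity_store_id = tolist(data.aws_ssoadmin_instances.this.identity_store_ids)[0]
  alternate_identifier {
    unique_attribute {
      attribute_path  = "DisplayName"
      attribute_value = "ClaimsReviewers" # illustrative group name
    }
  }
}

# Register the claims bucket as an Access Grants location.
resource "aws_s3control_access_grants_location" "claims" {
  iam_role_arn   = var.access_grants_role_arn # role assumed to exist with S3 permissions
  location_scope = "s3://${var.claims_bucket_name}/*"
}

# Read-only grant for the reviewers group on the processed-claims prefix.
resource "aws_s3control_access_grant" "reviewers_processed" {
  access_grants_location_id = aws_s3control_access_grants_location.claims.access_grants_location_id
  permission                = "READ"

  grantee {
    grantee_type       = "DIRECTORY_GROUP"
    grantee_identifier = data.aws_identitystore_group.claims_reviewers.group_id
  }

  access_grants_location_configuration {
    s3_sub_prefix = "processed-claims/*" # illustrative prefix
  }
}

# Web apps also require a CORS configuration on every bucket they serve.
resource "aws_s3_bucket_cors_configuration" "claims" {
  bucket = var.claims_bucket_name
  cors_rule {
    allowed_methods = ["GET", "PUT", "POST", "DELETE", "HEAD"]
    allowed_origins = [var.web_app_access_endpoint] # the web app URL
    allowed_headers = ["*"]
    expose_headers  = ["ETag"]
  }
}
```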
Let's check on the status of our deployment. It looks like we deployed successfully. There's nothing to test in the terminal this time; I'm going to take you to the actual console because this is a human interface, and I want to show you what we just created. This is our Transfer Family web app that was just provisioned. If we go into our web app, we can see the configuration, including our Identity Center instance. If we go to the groups tab, we can see that both of our groups have been assigned. These are the group IDs that I talked about, unique IDs that you would otherwise have to look up manually yourself. We've really simplified that process with this module.
Now I'll switch over to S3 Access Grants. If we go into the details here, we can see we have our location set up, in this case our claims bucket with the clean and processed claims. Then our grants: you can see the grant IDs for our Claims Reviewer and Claims Admin groups, and the grant scopes and permissions that have been applied. All of that was done with just a couple of lines of Terraform code. So what does it actually look like once you authenticate? Let's go back to web apps and open our web app's URL. We'll sign in as our Claims Reviewer. Now we're in our user-friendly Transfer Family web app interface.
Looking here, we have access to both the processed and submitted claims with read permission. If we go into our processed claims, we can open one of our claims folders and download that same summary shown earlier. One other thing to call out is that there is some level of customization you can do with these web apps. If you need a custom logo, or you want to use your organization's logo, you can specify that in the configuration, and it will show up here instead of the default Transfer web app icon. So that is the end of our architecture.
Just to quickly recap what we've done today: we configured a Transfer Family server with authentication to an external identity provider, in this case Cognito; we performed malware scanning automatically once we received files; we then used agentic AI to process our submitted claims and check for inconsistencies or potential fraud; and we made those claims, once processed, available to our claims reviewers. So with that, where do we go from here? What are our next steps?
Before we wrap up, I want to say that all the code you saw today is powered by our Transfer Family Terraform module, and it is available for you to use whenever you want. The step-by-step example we covered today is one of many examples that exist in the repository. The examples are built around typical use cases, so if you have a specific use case, we show you the different components, such as connectors. Today we showed you a use case with agentic AI. All of these can be deployed in just a couple of minutes, so it's very quick to set up, especially if you're using infrastructure as code.
We launched this module only a couple of months ago and we've already had over 10,000 downloads. I want to thank all the solutions architects who are contributing to this, as well as folks like you. We've had a lot of external contributions too, so if you're one of them, thank you so much. You can grab this QR code for direct access. It will take you to the root of the module, which has all the examples and all the modules that you can use in your workflows.
One thing I want to call out is that we do maintain a public roadmap for all the things that we're working on. If you like any of the features that we're working on, you can always give them a plus one. But if you feel that there's something that we do not support, or if you have any suggestions, you can always create a GitHub issue. My team and I review these almost daily. We want to get new features for you as soon as possible, and I'm excited to see what you'll build next with these tools.
With that, I want to thank you for your time. I hope you enjoy the rest of the event. Thank you, and have a great week, everyone.
This article is entirely auto-generated using Amazon Bedrock.