TL;DR: Task 1.1 assignment in the AWS Skill Builder Generative AI Developer Professional Exam Prep Plan asks you to build a simple insurance claims application that uses Bedrock models to extract and summarize claims.
Table of contents
- 1. Context
- 2. The challenge
- 3. Architecture diagram
- 4. Code
- 5. Considerations and limitations
- 6. Summary
- 7. Further reading
1. Context
If you are studying for the AWS Certified Generative AI Developer - Professional exam, you might have considered the Exam Prep Plan in AWS Skill Builder as a resource.
The Exam Prep Plan walks through each domain and its tasks one by one. After covering the required knowledge, each task ends with a bonus assignment challenge.
This post presents a solution to the assignment described in Task 1.1 of the Domain 1 Review section.
2. The challenge
The assignment is about creating a simple insurance claims application built around the task's exam requirements.
The app accepts insurance claims from users and summarizes their most important parts using Amazon Bedrock models.
The flow should include two Bedrock calls: the first to extract the most important pieces from the claim, and the second to summarize it in natural language.
The assignment describes some optional elements, several of which are implemented in this solution.
3. Architecture diagram
The diagram below shows the entire flow of the solution, which includes some additional, non-required elements.
3.1. Uploading the claim
When the user clicks the Upload Claim button in the UI, an API Gateway REST API endpoint accepts the request. A Lambda function generates a presigned URL, which the client then uses to upload the file to the claims S3 bucket.
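A minimal sketch of the presigned-URL Lambda, assuming a hypothetical bucket name (`claims-bucket`) and a 5-minute URL lifetime — adjust both for your own stack:

```python
import json
import uuid


def build_response(upload_url, key):
    # Wrap the presigned URL in an API Gateway proxy integration response
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"uploadUrl": upload_url, "key": key}),
    }


def handler(event, context):
    import boto3  # imported here so the pure helper above stays testable offline

    s3 = boto3.client("s3")
    key = f"claims/{uuid.uuid4()}.txt"  # unique object key per claim
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "claims-bucket", "Key": key, "ContentType": "text/plain"},
        ExpiresIn=300,  # presigned URL valid for 5 minutes
    )
    return build_response(url, key)
```

The client then issues an HTTP PUT of the file body directly to the returned URL, so the claim never passes through the Lambda itself.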
3.2. Extracting the information
The claims processor Lambda function receives an Event Notification from S3 when a new object is uploaded to the bucket.
The function then calls the InvokeModel API in Bedrock to use a small text model, Amazon Nova Micro, to extract the key properties from the claim. Cost efficiency is key here. This task isn't complex, so it's more cost-effective to use a smaller model. The prompt includes the required JSON response format.
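A sketch of that extraction call, assuming the Nova message-style request body and a hypothetical field list in the prompt — the exact fields and response parsing may differ from the repo's code:

```python
import json

EXTRACTION_PROMPT = """Extract the key fields from the insurance claim below.
Respond with JSON only, in this exact format:
{{"claimant": "...", "date": "...", "incident": "...", "amount": "..."}}

Claim:
{claim_text}"""


def build_extraction_body(claim_text):
    # Request body for the Bedrock InvokeModel API (Nova message schema)
    return json.dumps({
        "messages": [
            {"role": "user",
             "content": [{"text": EXTRACTION_PROMPT.format(claim_text=claim_text)}]}
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0},
    })


def extract_claim_fields(claim_text):
    import boto3

    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="amazon.nova-micro-v1:0",  # small, cheap text model
        body=build_extraction_body(claim_text),
    )
    payload = json.loads(response["body"].read())
    # Nova returns generated text under output.message.content
    return payload["output"]["message"]["content"][0]["text"]
```

Setting `temperature` to 0 keeps the JSON output as deterministic as possible, which makes downstream parsing more reliable.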
Optionally, you can apply a Bedrock Guardrail to the LLM call to remove sensitive information from both the model input and output.
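Attaching the guardrail is a matter of passing two extra parameters to InvokeModel. A small helper sketch, with hypothetical guardrail IDs:

```python
def with_guardrail(invoke_kwargs, guardrail_id, guardrail_version="DRAFT"):
    # Return a copy of the InvokeModel kwargs with a guardrail attached;
    # Bedrock then screens both the model input and the model output
    return {
        **invoke_kwargs,
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
    }

# Usage with hypothetical IDs:
# bedrock.invoke_model(**with_guardrail(
#     {"modelId": "amazon.nova-micro-v1:0", "body": body},
#     guardrail_id="gr-1234567890"))
```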
3.3. Summarizing the claim
The next step is to ask the model to summarize the claim information based on the extracted data.
If you want the fictional insurance company's proprietary policy information to be considered in the model's response, you can use AWS's managed RAG service, Bedrock Knowledge Bases.
This solution includes a knowledge base with a source S3 bucket that stores the policy files, an embedding model that converts documents into vectors, and an S3 Vectors vector bucket to store the vectors.
The claims processor function uses the RetrieveAndGenerate Bedrock API to add the relevant policy documentation parts to the summarization model's input.
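A sketch of that call, assuming a hypothetical prompt wording; the knowledge base ID and model ARN come from your own deployment:

```python
import json


def build_summary_prompt(extracted_fields):
    # extracted_fields: dict produced by the first (extraction) model call
    return (
        "Summarize this insurance claim for the claimant in plain language, "
        "referring to any relevant policy terms:\n"
        + json.dumps(extracted_fields, indent=2)
    )


def summarize_with_policies(extracted_fields, kb_id, model_arn):
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        input={"text": build_summary_prompt(extracted_fields)},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,  # ID of the policy knowledge base
                "modelArn": model_arn,     # model that generates the summary
            },
        },
    )
    return response["output"]["text"]
```

RetrieveAndGenerate handles the retrieval step for you: it embeds the query, fetches the most relevant policy chunks from the vector store, and injects them into the model input.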
3.4. Returning the response to the client
The claims processor Lambda function persists the summary returned by the summarization model, as well as some other data like claim ID, model ID, and the time it took the model to summarize the claim, to a DynamoDB table.
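A sketch of that persistence step, with a hypothetical table name and attribute names. Note that DynamoDB rejects Python floats, so the latency value is stored as a `Decimal`:

```python
from decimal import Decimal


def build_claim_item(claim_id, model_id, summary, elapsed_seconds):
    # DynamoDB item; numeric attributes must be Decimal, not float
    return {
        "claimId": claim_id,
        "modelId": model_id,
        "summary": summary,
        "summarizationSeconds": Decimal(str(elapsed_seconds)),
    }


def persist_summary(item, table_name="claims-table"):
    import boto3

    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=item)
```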
The next step is to capture the data changes (e.g., new items in the table) through DynamoDB Streams. The stream handler Lambda function takes the new item from the stream event and forwards it to an API Gateway WebSocket API.
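A sketch of the stream handler. Stream records carry items in DynamoDB's typed attribute format (e.g. `{"S": "text"}`), so they are unwrapped before being pushed through the WebSocket management API; connection tracking is assumed to exist elsewhere:

```python
import json


def extract_new_items(stream_event):
    # Pull newly inserted items out of a DynamoDB Streams event
    items = []
    for record in stream_event.get("Records", []):
        if record.get("eventName") == "INSERT":
            image = record["dynamodb"]["NewImage"]
            # NewImage values use DynamoDB's typed format, e.g. {"S": "..."};
            # keep just the inner value (sufficient for simple string/number items)
            items.append({k: list(v.values())[0] for k, v in image.items()})
    return items


def push_to_clients(items, connection_ids, endpoint_url):
    import boto3

    # endpoint_url is the WebSocket API's connection management endpoint
    client = boto3.client("apigatewaymanagementapi", endpoint_url=endpoint_url)
    for connection_id in connection_ids:
        for item in items:
            client.post_to_connection(
                ConnectionId=connection_id,
                Data=json.dumps(item).encode(),
            )
```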
The client connected to the WebSocket API should receive the summary within 2–3 seconds, giving the user real-time confirmation of their claim submission.
4. Code
I split the code into three parts (all in the same repo for convenience).
4.1. Application code
The application code, written in Python, largely relies on the code samples provided by the training material in Skill Builder.
4.2. Infrastructure code
I created and deployed the resources using CDK in TypeScript for easy deployment and stack deletion.
4.3. Front end
The minimalistic UI is also written in TypeScript. The web interface uses React instead of the Flask approach recommended in the Skill Builder assignment.
4.4. GitHub
Instructions for deploying and running the app are available in this GitHub repo if you want to take a look.
5. Considerations and limitations
The project is not production-ready! It's simply a possible solution to the assignment and lacks some important features. See the README for more information.
The application in its current form only accepts .txt files. It assumes that claims are already converted from PDF, DOC, or image files. Supporting more file formats is a possible extension to the project. You could add parsing packages in the code, or use Bedrock models or Textract to extract text from the uploaded claims. The solution's stack doesn't include any of these since the purpose of the project was different.
Also, feel free to add more checks to the guardrail, protect the API endpoint with a token, use different models, or keep the WebSocket connection alive in the client with some custom logic.
6. Summary
That's it! Above is a solution to the Task 1.1 bonus assignment in Domain 1 in the Skill Builder Exam Prep Plan for the Generative AI Developer Professional certification exam.
The application extracts and summarizes insurance claims and uses Bedrock foundation models, Bedrock Knowledge Bases, and Bedrock Guardrails.
7. Further reading
Exam Prep Plan: AWS Certified Generative AI Developer - Professional (AIP-C01 - English) - Exam preparation plan
