DEV Community

N Chandra Prakash Reddy for AWS Community Builders

Posted on • Originally published at devopstour.hashnode.dev

LamRAG: AI-Powered Feedback Analysis Using Amazon Bedrock

There were a lot of great talks at AWS Student Community Day Tirupati on November 1, 2025. But the one that really jumped out was "LamRAG: From data to constructive insights using Amazon Bedrock" by Rahul Kumar and Gokul Jangam.

It wasn't a normal slide-and-talk presentation. It was a live, step-by-step tour of a real product they built called Feedbackly, a platform for managing feedback, and of how they improved it over time using Amazon Bedrock. The session was structured perfectly, with levels that built on each other. By the end, I had a whole new perspective on what generative AI on AWS can do.

Let me break it all down for you.

The Problem: Feedback Chaos and the 10/10 Trap

If you've ever worked on a team that runs sprints, you know how messy it can get. Many projects. Many project managers. Different sprint schedules. Different approaches to collecting peer feedback. And no single place to keep track of it all.

Feedbackly was made to solve just that: a single system for managing projects and getting peer input from team members every sprint. It's like a notebook that everyone in your engineering department can use to keep track of comments from every sprint.

But then a familiar problem appeared: everyone started giving 10 out of 10 ratings. Everyone was "great." Everyone "did better than expected." The feedback lost its meaning: no signal, nothing useful for real conversations about performance.

Sound familiar? That's where Amazon Bedrock enters the picture.

Why Serverless First?

Before getting into the AI aspects, the speakers made a quick but vital case for creating Feedbackly on a serverless architecture. This is why it made sense:

  1. No Server Management - no patching, no provisioning, no babysitting servers

  2. Pay Only for What You Use - no paying for idle compute between sprints

  3. Automatic Scaling - handles bursts of feedback submissions without manual work

  4. Faster Development - less infra, more features

  5. Built-in Availability - AWS handles the redundancy

  6. Focus on Business Logic - spend time on what actually matters to users

It's like renting a cab instead of buying a car. You don't have to worry about gas, insurance, or maintenance; you just get where you need to go.

Learning in Levels: The Session's Brilliant Structure

The session was set up as a series of five levels, from L1 to L5, with each level adding a new idea on top of the one before it.

L1: Bedrock Playground

The journey started with the Amazon Bedrock Chat Playground, a browser-based interface where you can experiment with multiple foundation models side by side, without writing a single line of code. It's literally a playground.

The presenters ran the same feedback-classification prompt on three models at the same time: Llama 3.1 405B Instruct, Claude 3.5 Sonnet, and Command R. They wanted to see how each model responded to the same input. The results differed across all of them: in terminology, reasoning, strictness, and structure. This is where it gets interesting: you eventually have to choose a model, and the playground lets you compare candidates before you commit.

The model metrics were also interesting: Llama 3.1 had the highest latency (almost 18,000 ms), Command R was the fastest (around 1,591 ms), and Claude 3.5 Sonnet hit a sweet spot (about 4,799 ms) while producing the most structured, reasoned output.
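If you want to graduate from the playground to code, the same side-by-side comparison can be scripted with the Bedrock Converse API. This is a minimal sketch, not the presenters' code: the model IDs are assumptions (availability depends on your region and model-access settings), and `build_messages` is a helper invented here for illustration.

```python
# Hypothetical model IDs mirroring the session's comparison; check which
# models your account and region actually have access to.
MODEL_IDS = [
    "meta.llama3-1-405b-instruct-v1:0",
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "cohere.command-r-v1:0",
]

PROMPT = ("Classify this peer feedback into Reliability, Productivity, "
          "or Positive Energy: {feedback}")

def build_messages(feedback: str) -> list:
    """Build a Converse-API-style message list for one feedback string."""
    return [{"role": "user", "content": [{"text": PROMPT.format(feedback=feedback)}]}]

def compare_models(feedback: str) -> dict:
    """Send the same prompt to each model and collect the text and latency."""
    import boto3  # requires AWS credentials and Bedrock model access
    client = boto3.client("bedrock-runtime")
    results = {}
    for model_id in MODEL_IDS:
        resp = client.converse(modelId=model_id, messages=build_messages(feedback))
        results[model_id] = {
            "text": resp["output"]["message"]["content"][0]["text"],
            "latency_ms": resp["metrics"]["latencyMs"],
        }
    return results
```

Calling `compare_models("Person X finished all sprint tasks early")` would return one answer and latency figure per model, which is essentially what the playground showed on screen.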

L2: Prompt Engineering - One Word Changes Everything

AI is only as good as the instructions you give it.

The presenters went over a good Prompt Template made in Amazon Bedrock's Prompt Management. It had five main parts:

  • Persona / Role - tell the model who it is

  • Action - tell it what to do

  • References - give it positive and negative examples to anchor its judgment

  • Variables - use placeholders like {{feedback}} for dynamic input

  • Output Format - ask for structured JSON so your application can actually parse the result

The prompt told the model to sort peer feedback into three categories: Reliability, Productivity, and Positive Energy, then rate each category from 1 to 5, returning -1 for any category without enough context.
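As a sketch of how such a template might look in code: the version below has all five parts, but the wording is illustrative rather than the presenters' actual prompt, and `render_prompt` mimics how Bedrock Prompt Management substitutes the `{{feedback}}` placeholder.

```python
# Five parts: persona, action, references (good/bad examples),
# the {{feedback}} variable, and a JSON output format.
PROMPT_TEMPLATE = """\
You are an experienced engineering manager reviewing sprint peer feedback.
Classify the feedback below into Reliability, Productivity, and Positive Energy,
and rate each category from 1 to 5. Return -1 for any category that lacks context.
A 5 looks like: "Shipped every ticket early and unblocked two teammates."
A 1 looks like: "Missed the sprint goal and ignored review comments."
Feedback: {{feedback}}
Respond with JSON only: {"reliability": n, "productivity": n, "positive_energy": n}
"""

def render_prompt(feedback: str) -> str:
    """Fill in the {{feedback}} variable before sending the prompt to a model."""
    return PROMPT_TEMPLATE.replace("{{feedback}}", feedback)
```

Asking for JSON-only output is what makes the result machine-parseable, so the application can store the three scores instead of a blob of prose.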

After that came the most memorable demo of the whole session. They submitted the identical feedback twice: "Person X has been productive and done the tasks as expected." The only difference? One prompt said, "Be lenient with the ratings." The other said, "Be strict with the ratings."

The outcomes were markedly different. One word. That's all it took to change the AI's scores. It's a strong reminder to be careful when writing your prompt, because it's the most important part of how your AI feature behaves. Test your prompts the way you test your code.

The Architecture: Lambda, RDS, and Bedrock Working Together

The basic structure of LamRAG is clean, serverless, and easy to understand:

  • The User sends a request

  • AWS Lambda receives it, validates the data, and calls both RDS and Bedrock

  • Amazon RDS acts as the data store, holding all sprint feedback

  • Amazon Bedrock takes that data, creates a query, and generates a human-readable summary

Lambda handles the orchestration, Bedrock provides the intelligence, and RDS holds the source of truth. Simple, effective, and fully managed - no servers to worry about.
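A minimal Lambda handler for this flow might look like the following sketch. The event shape, model ID, and `fetch_feedback` helper are assumptions for illustration; a real version would query RDS (e.g. via psycopg or pg8000) inside `fetch_feedback`.

```python
import json

def fetch_feedback(sprint_id):
    """Hypothetical RDS query helper returning feedback rows for a sprint."""
    raise NotImplementedError("wire this to your RDS instance")

def summarize_feedback(rows):
    """Ask a Bedrock model (Converse API) for a human-readable summary."""
    import boto3  # needs AWS credentials and Bedrock model access
    client = boto3.client("bedrock-runtime")
    prompt = "Summarize this sprint feedback:\n" + "\n".join(rows)
    resp = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

def lambda_handler(event, context):
    """Validate the request, pull feedback from RDS, summarize via Bedrock."""
    body = json.loads(event.get("body") or "{}")
    if not body.get("sprint_id"):
        return {"statusCode": 400, "body": json.dumps({"error": "sprint_id required"})}
    summary = summarize_feedback(fetch_feedback(body["sprint_id"]))
    return {"statusCode": 200, "body": json.dumps({"summary": summary})}
```

The key design point is that Lambda only coordinates: validation first, data retrieval second, generation last, so a failure in any step returns a clean error instead of a half-finished answer.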

L3: Vector Databases and RAG

"RAG" means "Retrieval-Augmented Generation." In simple terms, you give an AI model access to your personal data instead of only what it already knows from training. This way, its replies are based on your specific situation instead of general internet knowledge.

The speakers used a clever fruit example to explain how vector databases work. Imagine describing an apple not by its name but by its colour (1.0), sweetness (7), sourness (4), crunchiness (8), and shelf life (0.5). That list of numbers is a vector. The vector for an orange is [0.8, 6, 8, 2, 1.0]. A vector database stores these embeddings and finds similar items with math on the vectors, not keyword matching.

When you ask for "list some red-coloured fruits," the database looks for vectors that are closest to the numbers that represent "red" and "fruit." That's semantic search, and it's far more powerful than a simple text search.
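The fruit example is easy to reproduce in a few lines of plain Python. Cosine similarity between the vectors above is exactly the kind of math a vector database runs at scale; the cherry vector is added here for illustration.

```python
import math

# Toy embeddings: [colour, sweetness, sourness, crunchiness, shelf life]
FRUITS = {
    "apple":  [1.0, 7, 4, 8, 0.5],
    "orange": [0.8, 6, 8, 2, 1.0],
    "cherry": [1.0, 6, 5, 3, 0.3],  # extra item invented for this example
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, k=2):
    """Rank stored vectors by similarity to the query: the core of semantic search."""
    ranked = sorted(FRUITS, key=lambda n: cosine_similarity(FRUITS[n], query_vec),
                    reverse=True)
    return ranked[:k]
```

Querying with the apple's own vector ranks apple first and cherry (red, crunchy-ish, similar sweetness) above orange, with no keyword matching anywhere.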

Feedbackly connected feedback data stored in Amazon S3 to a Bedrock Knowledge Base. This lets you choose how documents are split and indexed for fast retrieval, using configurable chunking strategies: default, fixed-size, hierarchical, semantic, or no chunking.
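For reference, this is a sketch of what a fixed-size chunking choice looks like in the configuration passed to the bedrock-agent CreateDataSource API. Field names follow the boto3 documentation; the token values are arbitrary examples, not the values Feedbackly used.

```python
# Passed as vectorIngestionConfiguration when creating a Knowledge Base
# data source with boto3's "bedrock-agent" client.
vector_ingestion_configuration = {
    "chunkingConfiguration": {
        "chunkingStrategy": "FIXED_SIZE",  # or HIERARCHICAL, SEMANTIC, NONE
        "fixedSizeChunkingConfiguration": {
            "maxTokens": 300,        # size of each chunk
            "overlapPercentage": 20, # overlap keeps context across chunk borders
        },
    }
}
```

Smaller chunks retrieve more precisely; larger chunks keep more context per hit. Which trade-off wins depends on how long a typical feedback entry is.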

L4: Agents - Smart, Conversational, and Privacy-Aware

Three months after the first Feedbackly launch, two new problems surfaced: access control (admins vs. regular users) and how feedback was being exposed. Not everyone should be able to see what other people have said.

The solution? Bedrock Agents.

An agent is like a smart assistant that can reason, plan, and act. The speakers built an agent called sls-days-2024-lamrag on Claude 3 Haiku, configured with:

  • Action Group - a Lambda function that takes two arguments: the inquiry type (self or others) and the requester's email address

  • Knowledge Base - linked to the Feedbackly S3 data, instructed to retrieve records based on the user's email address

  • Privacy Logic - the Lambda checks whether you're asking about yourself or someone else and blocks unauthorised access immediately
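The privacy check itself can be a few lines inside the action-group Lambda. This sketch assumes Bedrock passes action parameters as a list of name/value pairs; the parameter names, admin allow-list, and `fetch_feedback_summary` stub are all illustrative, not the speakers' actual code.

```python
ADMIN_EMAILS = {"admin@example.com"}  # hypothetical admin allow-list

def fetch_feedback_summary(email):
    """Stand-in for the Knowledge Base retrieval + summarization step."""
    return f"Summary of feedback for {email}"

def handle_action(parameters):
    """Decide whether the requester may see the requested feedback."""
    params = {p["name"]: p["value"] for p in parameters}
    requester = params.get("requester_email", "")
    # Asking about others is admin-only; asking about yourself is always allowed.
    if params.get("inquiry_type") == "others" and requester not in ADMIN_EMAILS:
        return "You are not authorised to access that information."
    return fetch_feedback_summary(requester)
```

Because the check runs in the Lambda, not in the prompt, a cleverly worded question can't talk the model into leaking someone else's feedback.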

The live demo was really cool. When an unauthorised email tried to access someone else's feedback, the agent replied, "You are not authorised to access that information." But when a user asked for their own feedback, they got a rich, conversational summary: "Sandeep is a very productive, dependable, and positive team member who always gets great results."

That's not only smart AI; it's also responsible AI. Privacy built into the design.

L5: Keeping the Knowledge Base in Sync

A knowledge base is only useful if it is up to date. If new feedback is sent in but the knowledge base isn't updated, the agent keeps answering questions with old information, like a librarian using last year's catalogue.

The presenters addressed this directly in L5: keeping the Bedrock Knowledge Base in sync. Every time fresh feedback is processed and uploaded to the S3 bucket (sls-days-2024-lamrag), the knowledge base needs to re-sync so the agent always works from the latest information. It's an easy step to forget, but it's critical for production systems.
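A common way to automate that re-sync is an S3-triggered Lambda that starts an ingestion job via the bedrock-agent API. In this sketch the key-prefix filter, environment variable names, and IDs are assumptions rather than details from the session.

```python
import os

def should_resync(s3_key: str) -> bool:
    """Only re-ingest when a processed feedback file lands (prefix is assumed)."""
    return s3_key.startswith("feedback/") and s3_key.endswith(".json")

def lambda_handler(event, context):
    """On each S3 upload, kick off a Knowledge Base ingestion job so the
    agent stays current. IDs are placeholders read from the environment."""
    key = event["Records"][0]["s3"]["object"]["key"]
    if not should_resync(key):
        return {"skipped": key}
    import boto3  # bedrock-agent is the control-plane client
    client = boto3.client("bedrock-agent")
    job = client.start_ingestion_job(
        knowledgeBaseId=os.environ["KNOWLEDGE_BASE_ID"],
        dataSourceId=os.environ["DATA_SOURCE_ID"],
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```

With this wired to the bucket's object-created notifications, the "librarian's catalogue" updates itself instead of relying on someone remembering to press sync.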

Final Results: The AI Chat Assistant in Action

The end result was a fully functional conversational AI Chat Assistant that was built right into Feedbackly. A member of the team could type a question in plain English and get structured, data-backed answers.

For instance, asking "What are the average ratings for this worker?" returned:

  • Positive Energy: 3.9/5 - Very Good

  • Productivity: 2.5/5 - Below Average

  • Reliability: 2.7/5 - Below Average

  • Overall: 3.1/5 - Average

Along with specific strengths (like strong code reviews, time management, and mentoring) and concrete areas to improve. No more empty "10/10 across the board" ratings. Instead: real, detailed, AI-backed analysis built on genuine peer input over time.
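Averaging those structured scores is straightforward once the model returns JSON. This sketch (with hypothetical category keys) also honours the -1 "not enough context" convention from L2 by skipping such scores rather than letting them drag the average down.

```python
def average_ratings(score_rows):
    """Average per-category scores across sprints, ignoring -1 sentinels."""
    totals, counts = {}, {}
    for row in score_rows:
        for category, score in row.items():
            if score == -1:  # not enough context; exclude from the average
                continue
            totals[category] = totals.get(category, 0) + score
            counts[category] = counts.get(category, 0) + 1
    return {c: round(totals[c] / counts[c], 1) for c in totals}
```

Feeding it one row of parsed model output per sprint yields exactly the kind of per-category summary shown above.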

Bonus: From Idea to App with Claude Code

The last part of the session was a bonus that honestly blew everyone away. The speakers showed how they used Claude Code, Anthropic's agentic coding tool, to construct the full-stack feedback analyser.

The workflow was deceptively simple:

  1. Create a TASKS.md file describing the application in plain English (tech stack, features, database setup, everything)

  2. Tell Claude Code to refer to the file and build the app

  3. Deploy the app

The TASKS.md file listed the whole stack: React, Vite, and Tailwind CSS on the frontend; AWS Lambda (Node.js 22) on the backend; Claude 3.5 Sonnet via Bedrock for AI; PostgreSQL on RDS for the database; AWS SAM for infrastructure. Claude Code asked a few clarifying questions, then produced a full implementation plan covering AI features, security, and even a privacy feature that replaces colleague names with [Colleague] in queries involving more than one user.

This led to the slide that made everyone think the most: "Does this mean we don't need to learn coding anymore?"

Honestly, no. But the skill itself is changing. Knowing about design, recognising good code, and reviewing AI-generated output with a critical eye matter more than ever. The startups the speakers mentioned - Cursor, Midjourney, Lovable, and ElevenLabs - were all started by small teams who used AI to move faster, not to replace themselves.

Key Takeaways

This was one of the most useful AI sessions I've ever been to. This is what I'm taking with me:

  • Prompt engineering is a real, learnable skill. The word "lenient" vs. "strict" makes all the difference. Check your prompts the same way you check your code.

  • RAG makes AI relevant to your world. Foundation models are general-purpose. RAG grounds them in your data, your people, and your situation.

  • Agents combine intelligence with access control. Bedrock Agents can reason about who is asking before deciding what to answer.

  • Serverless + Bedrock is a genuinely practical stack. You can send AI features that are ready for production without having to manage a single server.

  • AI amplifies builders, it doesn't replace them. The actual expertise of this time is knowing what to develop and how to direct the AI.

The Amazon Bedrock Chat Playground is the best place to start if you want to try any of this yourself. You don't need any code; just open your browser and start experimenting.

Conclusion

The LamRAG session at AWS Student Community Day Tirupati reminded me that the finest tech speeches don't only teach you ideas; they also show you a real problem, a real solution, and a genuine way to move forward. In short, here's the broad picture:

  • Generative AI on AWS is approachable - the Bedrock Playground lets anyone start experimenting without writing any code.

  • The journey from a simple prompt to a full RAG-powered agent is incremental - You don't have to develop it entirely at once. Start with a small part and add more intelligence as you go.

  • Privacy and access control aren't an afterthought - Bedrock Agents let you build authorisation directly into how the AI responds.

  • AI-assisted development tools like Claude Code are changing the speed of building - from idea to running app faster than ever.

  • The best time to start learning Amazon Bedrock is right now - The tools are well-developed, the documentation is good, and the community is developing quickly.

About the Author

As an AWS Community Builder, I enjoy sharing the things I've learned through my own experiences and events, and I like to help others on their path. If you found this helpful or have any questions, don't hesitate to get in touch! 🚀

🔗 Connect with me on LinkedIn

References

Event: AWS Student Community Day Tirupati

Topic: LamRAG: AI-Powered Feedback Analysis Using Amazon Bedrock

Date: November 01, 2025

Also Published On

AWS Builder Center

Hashnode
