I recently attended a two-day workshop on Generative AI at GTU, and I have to say, it was much better than I expected.
What I Took Away
1. Generative AI in Practice
The sessions started with the basics of LLMs (Large Language Models). I knew the general idea, that they generate text that feels human, but the discussion went deeper into prompt engineering and fine-tuning, which can drastically change how useful the model is.
One thing that stood out was how Amazon Bedrock simplifies the whole process. You don't have to worry about hosting big models or scaling servers. You just call the service and focus on building your app. I had always assumed working with LLMs meant a huge infrastructure overhead, so this felt like a pleasant surprise.
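To make that concrete, here's a minimal sketch of what "just call the service" looks like with boto3. The model ID and the prompt format are my own assumptions (not from the workshop); you'd use whichever model is enabled in your account.

```python
# A minimal sketch of calling a model on Amazon Bedrock via boto3.
# The model ID and prompt format below are illustrative assumptions.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed; pick a model enabled in your account
    body=json.dumps({
        "prompt": "\n\nHuman: Explain vector databases in one sentence.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)

# The response body is a stream; read and parse it to get the completion
print(json.loads(response["body"].read())["completion"])
```

No servers to provision, no model weights to host; the heavy lifting stays on the AWS side.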
2. Why Vector Databases Are a Game Changer
The next topic was vector databases, and this was a lightbulb moment for me. Traditional databases are fine for storing rows and columns, but they struggle when you need to search by meaning instead of exact words. That's where vectors (embeddings) come in: they let you search semantically.
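As a toy illustration of "search by meaning", this is what similarity search boils down to once you have embeddings. The vectors here are made up for the example; a real system would get them from an embedding model.

```python
# Toy semantic search: rank documents by cosine similarity to a query embedding.
# The 3-dimensional vectors are fabricated for illustration only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = {
    "refund policy": np.array([0.9, 0.1, 0.2]),
    "shipping times": np.array([0.2, 0.8, 0.1]),
}
query = np.array([0.85, 0.15, 0.25])  # imagined embedding of "how do I get my money back?"

best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # "refund policy" wins on meaning, not on keyword overlap
```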
We briefly compared tools like Pinecone and Weaviate, but what caught my eye was Amazon OpenSearch Service now supporting vector search natively. That means you can handle embeddings and similarity queries without setting up a separate system. It felt practical, especially if youโre already using AWS.
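For the OpenSearch side, a k-NN query against a vector field looks roughly like the sketch below. The index name, field name, and query vector are placeholders of mine; the query shape follows the OpenSearch k-NN plugin.

```python
# A hedged sketch of a k-NN (vector) query against Amazon OpenSearch Service.
# Index name, field name, host, and the query vector are placeholder assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

results = client.search(
    index="docs",  # assumed index with a knn_vector field in its mapping
    body={
        "size": 3,
        "query": {
            "knn": {
                "embedding": {                     # the knn_vector field
                    "vector": [0.85, 0.15, 0.25],  # query embedding
                    "k": 3,
                }
            }
        },
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

The appeal is exactly what the workshop emphasized: if your data already lives in OpenSearch, the embeddings and the similarity queries live there too, with no separate system to run.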
3. RAG (Retrieval-Augmented Generation)
This was probably the most practical part of the workshop for me. Normally, an LLM has no idea about your company's documents. With RAG, the model can retrieve the right information first and then generate an answer.
RAG Flow:
- User query → Create embedding → Search in vector DB → Pass results to LLM → Generate response
On AWS, the demo combined:
- Amazon Kendra for smart search
- Amazon S3 for storing the documents
- Amazon Bedrock for generating the final response
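Stitched together, that flow looks something like this minimal sketch. The index ID, model ID, and prompt wording are placeholders of mine, and error handling is omitted; the documents themselves would sit in S3 and be indexed by Kendra.

```python
# A minimal RAG sketch on AWS, assuming a populated Kendra index (backed by
# documents in S3) and Bedrock access. IDs and the prompt are illustrative.
import json
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

question = "What is our refund policy?"

# 1. Retrieve: semantic search over the company documents indexed in Kendra
hits = kendra.query(IndexId="YOUR-KENDRA-INDEX-ID", QueryText=question)
context = "\n".join(
    item["DocumentExcerpt"]["Text"] for item in hits["ResultItems"][:3]
)

# 2. Generate: ask the LLM to answer from the retrieved context only,
#    which is what keeps it from hallucinating
prompt = (
    f"\n\nHuman: Using only this context:\n{context}\n\n"
    f"Answer the question: {question}\n\nAssistant:"
)
response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model ID
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 300}),
)
print(json.loads(response["body"].read())["completion"])
```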
Seeing it in action was impressive. They showed a chatbot answering specific company FAQs without hallucinating. At that point, I found myself thinking about real ways to apply it in projects.
4. MCP (Model Context Protocol)
The final piece we explored was MCP (Model Context Protocol). In simple terms, it lets LLMs interact with tools and APIs instead of just "making things up."
The AWS stack here included:
- Amazon API Gateway to expose APIs
- AWS Lambda to run lightweight functions
- Amazon Bedrock Agents to decide which tool should be called at the right time
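As a rough sketch of the Lambda piece, here's the shape of a handler that a Bedrock Agent action group can call as a tool. The event and response structures follow the Bedrock Agents Lambda contract as I understand it; the order-lookup logic is purely a placeholder.

```python
# A hedged sketch of a Lambda "tool" behind a Bedrock Agent action group.
# The event/response shapes follow the Bedrock Agents Lambda contract;
# the order-status logic is a made-up placeholder, not real business code.
import json

def lambda_handler(event, context):
    # The agent decides which API to call and passes along the parameters
    api_path = event.get("apiPath")
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if api_path == "/orders/status":
        # Placeholder: a real handler would query a database or backend here
        body = {"orderId": params.get("orderId"), "status": "shipped"}
    else:
        body = {"error": f"unknown tool {api_path}"}

    # Wrap the result in the structure Bedrock Agents expects back
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```

The key point is that the model never touches the backend directly; the agent picks the tool, and API Gateway plus Lambda keep the actual execution controlled and auditable.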
It felt like the missing piece for building AI systems that are both powerful and safe to use in production.
Overall, it was a great event that turned abstract AI buzzwords into something practical and tangible. If you've been exploring GenAI on AWS, I'd love to connect and swap notes.