
Abhishek Gupta for AWS

Originally published at community.aws

Build a Serverless GenAI solution with Lambda, DynamoDB, LangChain and Amazon Bedrock

DynamoDB is used as the chat history backend, along with the AWS Lambda Web Adapter for response streaming

In a previous blog, I demonstrated how to use Redis (ElastiCache Serverless, as an example) as a chat history backend for a Streamlit app using LangChain. The app was deployed to EKS and made use of EKS Pod Identity to manage the application Pod's permissions for invoking Amazon Bedrock.

The use case here is a similar one - a chat application. I will switch back to implementing things in Go using langchaingo (I used Python for the previous one) and continue to use Amazon Bedrock. But there are a few unique things you can explore in this blog post:

- Using DynamoDB as the chat history backend for the conversation chain
- Running the application on AWS Lambda with the AWS Lambda Web Adapter, including response streaming
- Deploying the whole solution with the SAM CLI

As always, a diagram helps...

[Architecture diagram]

Deploy using the SAM CLI (Serverless Application Model)

Make sure you have the Amazon Bedrock prerequisites taken care of and the SAM CLI installed.

git clone https://github.com/abhirockzz/chatbot-bedrock-dynamodb-lambda-langchain
cd chatbot-bedrock-dynamodb-lambda-langchain

Run the following commands to build the function and deploy the entire app infrastructure (including the Lambda function, the DynamoDB table, etc.):

sam build 
sam deploy -g
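The -g (guided) flag prompts for the stack name, AWS Region, and other deployment parameters, and offers to save your choices to a samconfig.toml file. Assuming you let it do so, subsequent deployments don't need the flag:

sam build && sam deploy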

Once deployed, you should see the Lambda Function URL in your terminal. Open it in a web browser and start conversing with the chatbot!

[Screenshot: conversing with the chatbot]

Inspect the DynamoDB table to verify that the conversations are being stored (each conversation will end up being a new item in the table with a unique chat_id):

aws dynamodb scan --table-name langchain_chat_history

The Scan operation is used here for demonstration purposes only; using Scan in production is not recommended.
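If you want to look at a single conversation instead, a targeted GetItem is the better fit. A minimal example, assuming chat_id is the table's partition key (a string):

aws dynamodb get-item --table-name langchain_chat_history \
    --key '{"chat_id": {"S": "<chat_id from the scan output>"}}'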

Quick peek at the good stuff....

Here is a sneak peek of the implementation (refer to the complete code here):

// Invoke the chain with streaming: each chunk produced by the model is
// written to the HTTP response as soon as it arrives.
_, err = chains.Call(c.Request.Context(), chain,
    map[string]any{"human_input": message},
    chains.WithMaxTokens(8191),
    chains.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
        c.Stream(func(w io.Writer) bool {
            fmt.Fprintf(w, "%s", chunk) // never pass model output as a format string
            return false                // write this chunk once; the next chunk triggers another Stream call
        })
        return nil
    }))
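For context, here is roughly how such a conversation chain can be wired up with langchaingo: a Bedrock-backed LLM plus buffer memory whose backing store is the DynamoDB chat history implementation. This is a minimal sketch under those assumptions, not the repo's exact code (constructor options and interface details vary across langchaingo versions), so refer to the repository for the real wiring:

import (
    "github.com/tmc/langchaingo/chains"
    "github.com/tmc/langchaingo/llms/bedrock"
    "github.com/tmc/langchaingo/memory"
    "github.com/tmc/langchaingo/schema"
)

// newConversationChain combines a Bedrock-backed LLM with conversation
// buffer memory whose backing store is the supplied chat message history
// (in this app, a DynamoDB-backed implementation).
func newConversationChain(history schema.ChatMessageHistory) (chains.Chain, error) {
    llm, err := bedrock.New() // the model ID and other options are configurable
    if err != nil {
        return nil, err
    }
    mem := memory.NewConversationBuffer(memory.WithChatHistory(history))
    return chains.NewConversation(llm, mem), nil
}

The chain returned here is what the handler above passes to chains.Call.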

Closing thoughts...

I really like the extensibility of LangChain. While langchaingo may not be as popular as the original Python version (I hope it gets there in due time 🤞), it's nice to be able to use it as a foundation and build extensions as required. Previously, I had written about how to use the AWS Lambda Go API Proxy to run existing Go applications on AWS Lambda. The AWS Lambda Web Adapter offers similar functionality, but it has lots of other benefits, including response streaming and the fact that it is language agnostic.
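To give a flavour of what such an extension involves: a chat history backend ultimately just appends messages to, and reads them back from, a DynamoDB item keyed by chat_id. Here is a rough sketch of the write path using the AWS SDK for Go v2 - the type and method names are illustrative, not the repo's actual ones, and a complete extension would also implement the rest of langchaingo's chat message history interface:

package chathistory

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// DynamoDBChatHistory is an illustrative sketch of a DynamoDB-backed chat
// history store (names are mine, not the repo's actual types).
type DynamoDBChatHistory struct {
    Client *dynamodb.Client
    Table  string // e.g. langchain_chat_history
    ChatID string // partition key value for this conversation
}

// AddMessage appends one message to the item for this conversation. The
// list_append/if_not_exists combination makes the same update work for both
// the first and every subsequent message of a chat session.
func (h *DynamoDBChatHistory) AddMessage(ctx context.Context, role, content string) error {
    msg := &types.AttributeValueMemberM{Value: map[string]types.AttributeValue{
        "role":    &types.AttributeValueMemberS{Value: role},
        "content": &types.AttributeValueMemberS{Value: content},
    }}

    _, err := h.Client.UpdateItem(ctx, &dynamodb.UpdateItemInput{
        TableName: aws.String(h.Table),
        Key: map[string]types.AttributeValue{
            "chat_id": &types.AttributeValueMemberS{Value: h.ChatID},
        },
        UpdateExpression: aws.String("SET messages = list_append(if_not_exists(messages, :empty), :msg)"),
        ExpressionAttributeValues: map[string]types.AttributeValue{
            ":msg":   &types.AttributeValueMemberL{Value: []types.AttributeValue{msg}},
            ":empty": &types.AttributeValueMemberL{Value: []types.AttributeValue{}},
        },
    })
    return err
}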

Oh, and one more thing - I also tried a different approach to building this solution using API Gateway WebSocket APIs. Let me know if you're interested, and I would be happy to write it up!

If you want to explore how to use Go for generative AI solutions, you can read up on some of my earlier blogs:

Happy building!
