Tim Schill for AWS Community Builders

Originally published at Medium

How to Build Chatbots with Amazon Bedrock & LangChain

Amazon Bedrock was released on the 28th of September, 2023, and I have been fortunate enough to have had access to it for some time while it was in closed preview.

If you are reading this article, I’m sure you already know about Amazon Bedrock; if not, let’s summarize it quickly. Amazon Bedrock is a fully managed service that gives you access to foundation models (FMs). Being serverless, there is no infrastructure to manage. You still get access to some powerful features (for some models), like fine-tuning with your own data, and agents, which you can think of as tools you create yourself to supercharge your FM with abilities it usually doesn’t have. For example, you could create an agent that queries a database or talks to an external API.

You can also create context-aware applications with the help of Retrieval-Augmented Generation (RAG). We will not dive deeper than that in this article.

The Setup

Getting started with Bedrock is simple, but to utilize its full power, you can add LangChain to the mix. LangChain is a framework that simplifies building applications on top of large language models (LLMs). Its power lies in its ability to “chain,” or combine, multiple components.

In this example, we will create a chatbot with the help of Streamlit, LangChain, and its classes “ConversationChain” and “ConversationBufferMemory.”

First, we create a new Python file. Let’s call it “bedrock.py”.

import os
import boto3
from langchain.chains import ConversationChain
from langchain.llms.bedrock import Bedrock
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

We import a few classes from LangChain. Let’s quickly go through them. Bedrock lets us create an object that specifies which FM we want to use and configures model parameters, authentication, etc. PromptTemplate lets us create prompts we can inject variables into, similar to Python f-strings.

ConversationBufferMemory lets us manage the conversation history (the memory). And finally, ConversationChain ties all these objects together into a chain.
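To make the f-string comparison concrete, here is a minimal sketch (separate from the chatbot code, with made-up values) of how a PromptTemplate fills in its variables:

from langchain.prompts import PromptTemplate

# A template with a single variable, filled in at format time.
template = PromptTemplate(input_variables=["name"], template="Hello, {name}!")
print(template.format(name="Bedrock"))  # prints "Hello, Bedrock!"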

Next we will create the chain function.

def bedrock_chain():
    # Use the AWS profile configured in the environment for authentication.
    profile = os.environ["AWS_PROFILE"]

    # The Bedrock runtime client is what actually invokes the model.
    bedrock_runtime = boto3.client(
        service_name="bedrock-runtime",
        region_name="us-east-1",
    )

    titan_llm = Bedrock(
        model_id="amazon.titan-text-express-v1",
        client=bedrock_runtime,
        credentials_profile_name=profile,
    )
    titan_llm.model_kwargs = {"temperature": 0.5, "maxTokenCount": 700}

    prompt_template = """System: The following is a friendly conversation between a knowledgeable helpful assistant and a customer.
    The assistant is talkative and provides lots of specific details from its context.

    Current conversation:
    {history}

    User: {input}
    Bot:"""
    PROMPT = PromptTemplate(
        input_variables=["history", "input"], template=prompt_template
    )

    # Keep the full conversation history, using the same prefixes as the prompt.
    memory = ConversationBufferMemory(human_prefix="User", ai_prefix="Bot")
    conversation = ConversationChain(
        prompt=PROMPT,
        llm=titan_llm,
        verbose=True,
        memory=memory,
    )

    return conversation

Our “bedrock_chain” function creates our Bedrock object; in our case, we use the Titan Text G1 - Express model. We then build our prompt template. The prompt holds three essential parts: the instruction, the context (history), and the user query (input). We then configure the memory with the help of “ConversationBufferMemory.” And finally, we put all this together by creating a “ConversationChain” object.

And we will end by creating two functions.

def run_chain(chain, prompt):
    # Count the tokens in the prompt before sending it to the model.
    num_tokens = chain.llm.get_num_tokens(prompt)
    return chain({"input": prompt}), num_tokens


def clear_memory(chain):
    # Wipe the conversation history stored in the chain's memory.
    return chain.memory.clear()

The first one, run_chain, will be used when we call our chain from our Streamlit app. And clear_memory is pretty self-explanatory; it empties our history.
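Before we build the UI, you can give the chain a quick smoke test from a Python shell (a minimal sketch, assuming AWS_PROFILE is set and your account has been granted access to the Titan model):

import bedrock

# Build the chain and ask a single question.
chain = bedrock.bedrock_chain()
result, tokens = bedrock.run_chain(chain, "What is Amazon Bedrock?")
print(result["response"], f"({tokens} tokens in the prompt)")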

We now have everything we need to communicate with Amazon Bedrock. In the next step, we will create our Streamlit app.

We start by creating a new file called “app.py” and importing a couple of libraries.

import streamlit as st
import uuid
import bedrock

There is nothing strange here: streamlit is needed for the UI, uuid is used to handle user sessions, and bedrock is the file we created before.

Now, let’s configure our session_state in Streamlit.

USER_ICON = "images/user-icon.png"
AI_ICON = "images/ai-icon.png"

if "user_id" in st.session_state:
    user_id = st.session_state["user_id"]
else:
    user_id = str(uuid.uuid4())
    st.session_state["user_id"] = user_id

if "llm_chain" not in st.session_state:
    st.session_state["llm_app"] = bedrock
    st.session_state["llm_chain"] = bedrock.bedrock_chain()

if "questions" not in st.session_state:
    st.session_state.questions = []

if "answers" not in st.session_state:
    st.session_state.answers = []

if "input" not in st.session_state:
    st.session_state.input = ""

Next, we create a function that draws our top bar and adds a button for clearing the chat, plus an if-statement with the actual clearing functionality.

def write_top_bar():
    col1, col2, col3 = st.columns([2, 10, 3])
    with col2:
        header = "Amazon Bedrock Chatbot"
        st.write(f"<h3 class='main-header'>{header}</h3>", unsafe_allow_html=True)
    with col3:
        clear = st.button("Clear Chat")

    return clear


clear = write_top_bar()

if clear:
    st.session_state.questions = []
    st.session_state.answers = []
    st.session_state.input = ""
    bedrock.clear_memory(st.session_state["llm_chain"])

We can now create the main function for handling input from the user.

def handle_input():
    input = st.session_state.input

    # Fetch our chain from the session state, call it, and keep the result.
    llm_chain = st.session_state["llm_chain"]
    chain = st.session_state["llm_app"]
    result, amount_of_tokens = chain.run_chain(llm_chain, input)
    question_with_id = {
        "question": input,
        "id": len(st.session_state.questions),
        "tokens": amount_of_tokens,
    }
    st.session_state.questions.append(question_with_id)

    st.session_state.answers.append(
        {"answer": result, "id": len(st.session_state.questions)}
    )
    st.session_state.input = ""

The important part is the commented block in the middle of handle_input: that is where we fetch our chain from the session state, call it, and store the result. Next, we create a couple of functions to render the question, the answer, and our history.

And finally, we call all the functions and add our input form.

def write_user_message(md):
    col1, col2 = st.columns([1, 12])

    with col1:
        st.image(USER_ICON, use_column_width="always")
    with col2:
        st.warning(md["question"])
        st.write(f"Tokens used: {md['tokens']}")


def render_answer(answer):
    col1, col2 = st.columns([1, 12])
    with col1:
        st.image(AI_ICON, use_column_width="always")
    with col2:
        st.info(answer["response"])


def write_chat_message(md):
    chat = st.container()
    with chat:
        render_answer(md["answer"])


with st.container():
    for q, a in zip(st.session_state.questions, st.session_state.answers):
        write_user_message(q)
        write_chat_message(a)


st.markdown("---")
input = st.text_input(
    "You are talking to an AI, ask any question.", key="input", on_change=handle_input
)

That’s it; we can now start our application by typing “streamlit run app.py” and start chatting.

Our chatbot built with Streamlit, LangChain and Amazon Bedrock

What’s next?

I recommend taking a deeper look at LangChain if you are not already familiar with it. You can also look at the aws-samples GitHub page; it has some great examples to get you started. For example, you could add Amazon Kendra to the mix: connect it to one of its many sources, like Atlassian Confluence, and set up LangChain to use the Kendra retriever. You would then have a chatbot that can answer questions based on the context it retrieves from your internal Confluence wiki pages.
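As a rough sketch of that idea (assuming you already have a Kendra index, whose id you pass as index_id below, and reusing a Bedrock LLM like the titan_llm from earlier), the Kendra retriever can be wired into a conversational retrieval chain like this:

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.retrievers import AmazonKendraRetriever

# Hypothetical index id; replace it with the id of your own Kendra index.
retriever = AmazonKendraRetriever(index_id="your-kendra-index-id")

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=titan_llm, retriever=retriever, memory=memory
)
print(qa_chain({"question": "What does our wiki say about onboarding?"})["answer"])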

In Conclusion

Amazon Bedrock is super easy to start with; it does not require many lines of code, and with the help of LangChain, you can create some really powerful applications. And if you add agents to the mix, the possibilities are almost limitless.
