Introduction
I've been using OpenWebUI and local Llama models to help me with heavy-lifting tasks like analyzing, classifying, fact-checking, and summarizing documents and personal financial records. It's been a valuable tool, allowing me to keep all that confidential data secured on my local machine. But I've run into one major frustration: my laptop just doesn't have the computing power to handle the larger language models I need for this kind of complex analysis. I was stuck using the smaller 7B models, which still took precious seconds to respond.
I was about to subscribe to a cloud-based service, resigning myself to having my sensitive data leave my local environment. But then I had a thought - what if I could leverage the power of a cloud-based service while still keeping my data secure and private? That's when I realized I could use Amazon Bedrock.
I already knew about Amazon Bedrock and had access to an AWS account. It only recently dawned on me that I could combine Bedrock's advanced foundation models with the privacy-focused approach of OpenWebUI. The idea of tapping into Bedrock's powerful LLM (Large Language Model) capabilities, while still maintaining the local data storage and confidentiality of OpenWebUI, was exactly what I needed to overcome the performance limitations of my laptop and keep my information secure.
Now, instead of being stuck with the limited performance of smaller local models, I can seamlessly integrate OpenWebUI with Amazon Bedrock. This allows me to leverage the wider range of high-performing foundation models available through Bedrock, without storing my sensitive documents, financial records, or other proprietary information in the public cloud.
Although my interactions with Amazon Bedrock happen within my AWS account, all my conversations and data remain locally stored because I'm using a local vector database with OpenWebUI. This ensures my confidential data never leaves my controlled environment.
It's been a game-changer for the way I approach heavy-lifting tasks that require the power of LLMs. That's why I'm excited to share this powerful combination of Amazon Bedrock and OpenWebUI, and how it can benefit you when handling sensitive and proprietary information. Let's dive in.
What are Amazon Bedrock and OpenWebUI?
To dive into the details of how these two tools work together, let's first take a closer look at Amazon Bedrock and OpenWebUI.
Amazon Bedrock is a fully managed service that provides access to a wide range of high-performing foundation models (FMs) from leading AI providers like Anthropic, Cohere, and Stability AI. This allows you to select the best model for your specific use case, whether you're analyzing internal documents, questioning financial records, or processing customer information. Importantly, Amazon Bedrock is designed with a focus on security, privacy, and responsible AI practices, ensuring your sensitive data remains protected.
On the flip side, OpenWebUI is an open-source project that gives you a user-friendly web interface for working with LLMs like LLaMA and GPT. This makes OpenWebUI an ideal companion when handling sensitive information, as it keeps all user data stored locally on your device and prevents any external access or exposure.
By leveraging the capabilities of both Amazon Bedrock and OpenWebUI, you can tap into advanced LLM-powered functionality while prioritizing data security and privacy. Let's explore how these two tools work together in more detail.
What makes them ideal for handling sensitive data?
Now that we've covered what Amazon Bedrock and OpenWebUI are, let's dive into what makes them ideal for handling sensitive data.
OpenWebUI:
- Offline-First Design: OpenWebUI’s offline-first approach helps to isolate user data from network-based threats, making it a strong option for handling sensitive information.
- Strict Confidentiality: OpenWebUI emphasizes "strict confidentiality and no external requests," ensuring that your data remains secure and private.
- Transparent Open-Source: Being open-source, OpenWebUI allows you to audit its codebase, giving you the ability to verify its data handling practices.
Amazon Bedrock:
- Compliance with Standards: Bedrock, as part of AWS, complies with stringent security standards such as FedRAMP, SOC, ISO, and HIPAA, making it suitable for industries with high compliance requirements.
- Secure Model Deployment: Amazon Bedrock leverages isolated accounts to ensure that model providers do not have access to customer data, enhancing security for sensitive information.
- Robust Data Protection: Bedrock is designed to protect customer data with encryption and secure transmission. While data may be transmitted over the public internet, it is safeguarded by AWS’s robust security measures, ensuring data is protected both in transit and at rest.
These security and privacy-focused features of OpenWebUI and Amazon Bedrock make them a powerful combination for handling sensitive data with confidence.
Integrating OpenWebUI with Amazon Bedrock
Now that we've explored the security and privacy features of OpenWebUI and Amazon Bedrock, let's dive into the process of integrating the two tools. To get started, you'll need to set up both OpenWebUI and the Bedrock Access Gateway on your local machine. Once that's done, you can configure OpenWebUI to integrate with Bedrock, allowing you to interact with models like Claude 3 Sonnet and use embedding models like Cohere Embed - all while keeping your sensitive data secure.
But why integrate OpenWebUI with Amazon Bedrock when you could just use local LLMs like LLaMA and still remain secure? There are a few key reasons:
- Access to Larger and More Capable Models: While LLMs like LLaMA are powerful, Amazon Bedrock provides access to an even wider range of leading foundation models, including those from providers like Anthropic, Cohere, and Stability AI. This allows you to leverage more advanced and capable generative AI capabilities.
- Faster Responses with Limited Local Resources: Running LLMs can be resource-intensive, especially on local devices with limited CPU, memory, and GPU capabilities. By integrating with Amazon Bedrock, you can offload the heavy computational workload to the cloud, ensuring fast responses even when your local hardware is constrained.
- Ongoing Model Updates and Improvements: As a fully managed service, Amazon Bedrock handles the maintenance and updates of the foundation models, ensuring you always have access to the latest versions and improvements. This can be challenging to manage on your own when working with local models.
By combining the benefits of OpenWebUI's offline-first approach with the powerful and scalable capabilities of Amazon Bedrock, you can access advanced generative AI capabilities while keeping your sensitive and proprietary data secure.
Let's get started!
To set up OpenWebUI on your local machine, follow these steps:
1. Download and Install OpenWebUI
Head over to the OpenWebUI GitHub repository, clone the repo, and install open-webui by running the commands below:
# This command clones the open-webui repository from GitHub.
git clone https://github.com/open-webui/open-webui.git
# This command changes the current working directory to the cloned repository.
cd open-webui
# This command copies the .env.example file to create a new .env file. The .env file is used to store environment variables for the application.
cp -RPp .env.example .env
# This command installs the required Node.js dependencies for the frontend part of the application.
npm install
# This command builds the frontend code for the application.
npm run build
# This command changes the current working directory to the backend folder.
cd ./backend
# This command activates the Pipenv virtual environment for the backend code.
pipenv shell
# This command installs the required Python dependencies for the backend part of the application.
pip install -r requirements.txt -U
# Start the application
bash start.sh
If everything goes well, you should see something similar to the image below.
| 📝 NOTE |
| --- |
| There are other, much easier ways to install open-webui; go to their documentation page to see the recommended installation option. |
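Before moving on to the next step, it's worth confirming that the server actually came up. A minimal check, assuming the default port 8080 used by start.sh:

```shell
# Print the HTTP status code of the OpenWebUI root page.
# A 200 means the server is up and serving the frontend.
curl -s -o /dev/null -w "%{http_code}\n" http://0.0.0.0:8080
```

If you get a connection error instead, check the terminal running start.sh for errors.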
2. Log in to your local OpenWebUI installation
Open your browser and navigate to http://0.0.0.0:8080. Since this is your first time accessing it, you will need to sign up.
| 📝 NOTE |
| --- |
| This sign-up is only for your local instance of open-webui and is totally unrelated to signing up at https://openwebui.com/. |
3. Download and Install Bedrock Access Gateway
Head over to the bedrock-access-gateway GitHub repository, clone the repo, and install it by running the commands below:
# This command clones the bedrock-access-gateway repository from GitHub.
git clone https://github.com/aws-samples/bedrock-access-gateway.git
# This command changes the current working directory to the src folder inside the cloned repository.
cd bedrock-access-gateway/src
# This command builds the gateway's Docker image using the ECS Dockerfile.
docker build -f Dockerfile_ecs -t bedrock-access-gateway .
# This command runs the gateway container with your AWS credentials and region,
# mapping port 8081 on your machine to port 80 in the container.
docker run -e AWS_ACCESS_KEY_ID="<replace-with-access-key-id>" \
-e AWS_SECRET_ACCESS_KEY="<replace-with-secret-access-key>" \
-e AWS_DEFAULT_REGION="<replace-with-region>" \
-e AWS_REGION="<replace-with-region>" \
-e DEBUG=true \
-d -p 8081:80 \
bedrock-access-gateway
If everything goes well, you should see something similar to the image below.
| 📝 NOTE |
| --- |
| Make sure you have sufficient access to Bedrock and have requested access to the base models you want to use via the Amazon Bedrock console. |
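Before wiring the gateway into OpenWebUI, you can sanity-check both sides from the command line. These commands are a sketch and assume your AWS credentials and region are already configured locally; model availability varies by region.

```shell
# List the Bedrock foundation model IDs your account can see in this region.
aws bedrock list-foundation-models --region us-east-1 \
  --query 'modelSummaries[].modelId' --output text

# Ask the running gateway which models it exposes through its
# OpenAI-compatible API (the default API key is "bedrock").
curl -s http://localhost:8081/api/v1/models \
  -H "Authorization: Bearer bedrock"
```

If a model you requested access to doesn't appear in the first list, double-check the region and your model access settings in the Bedrock console.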
4. Connect OpenWebUI to Amazon Bedrock
This allows you to chat with Bedrock models. Within OpenWebUI, go to "Settings" > "Admin Settings" > "Connections". Here, update the "OpenAI API" section's API Base URL to point to your bedrock-access-gateway's URL, which is http://localhost:8081/api/v1, and update the API Key to bedrock-access-gateway's default API key, which is bedrock.
Then click on the verify connection icon as highlighted in the image below. Upon clicking it, you should see a green popup saying "Server connection verified". Once verified, click the "Save" button.
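You can also exercise the same endpoint OpenWebUI will call with a direct API smoke test. This is a sketch; the model ID below is one example of a Claude 3 Sonnet ID on Bedrock, so substitute any model you've actually been granted access to.

```shell
# Send a one-off chat completion through the gateway's OpenAI-compatible API.
curl -s http://localhost:8081/api/v1/chat/completions \
  -H "Authorization: Bearer bedrock" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```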
| 📝 NOTE |
| --- |
| For added security, store the API key you want to use in SSM Parameter Store and reference the parameter name by defining the API_KEY_PARAM_NAME environment variable when running the Docker container. |
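As a rough sketch of that setup, assuming the AWS CLI is configured and using a hypothetical parameter name of my own choosing (/bedrock-gateway/api-key):

```shell
# Store your chosen API key as an encrypted SSM parameter.
aws ssm put-parameter \
  --name "/bedrock-gateway/api-key" \
  --value "<replace-with-a-strong-api-key>" \
  --type SecureString

# Run the gateway so it reads the key from SSM instead of using the default.
docker run -e AWS_ACCESS_KEY_ID="<replace-with-access-key-id>" \
  -e AWS_SECRET_ACCESS_KEY="<replace-with-secret-access-key>" \
  -e AWS_REGION="<replace-with-region>" \
  -e API_KEY_PARAM_NAME="/bedrock-gateway/api-key" \
  -d -p 8081:80 \
  bedrock-access-gateway
```

Remember to use your new key, rather than the default bedrock, in OpenWebUI's connection settings.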
5. Connect the OpenWebUI Embedding Model Engine to Amazon Bedrock
After connecting OpenWebUI to Amazon Bedrock, the next step is to integrate the embedding model engine. This allows you to use Bedrock's embedding models to enhance document processing. Within OpenWebUI, go to "Settings" > "Admin Settings" > "Documents". Here, update the "Embedding Model Engine" section's API Base URL to point to your bedrock-access-gateway's URL, which is http://localhost:8081/api/v1, and update the API Key to bedrock-access-gateway's default API key, which is bedrock.
Next, you need to pick which embedding model to use by setting the Embedding Model to either cohere.embed-english-v3 or cohere.embed-multilingual-v3. Before you save your changes, make sure you re-import all documents by clicking "Reset Upload Directory" and "Reset Vector Storage", then simply re-upload your existing documents.
Once that is done, click on the "Save" button. With the OpenWebUI-Bedrock embedding integration now set up, you're ready to start leveraging the advanced capabilities of these tools for your document processing needs.
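To confirm the embedding path works end to end, you can hit the gateway's embeddings endpoint directly. A sketch, assuming the default bedrock API key and the English Cohere model selected above:

```shell
# Request a vector embedding for a test sentence through the gateway.
curl -s http://localhost:8081/api/v1/embeddings \
  -H "Authorization: Bearer bedrock" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cohere.embed-english-v3",
    "input": ["A short test sentence."]
  }'
```

A successful response contains a vector of floats in the embedding field, confirming the Cohere model is reachable through Bedrock.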
6. Start Interacting with Bedrock Models
With the OpenWebUI-Bedrock integration now set up, you can start tapping into the powerful capabilities of Amazon Bedrock's foundation models. This includes:
Chatting with Bedrock Models: Within the OpenWebUI interface, you can now engage in conversations with advanced language models like Anthropic's Claude 3. Simply select the desired Bedrock model from the available options and start chatting, all while keeping your sensitive data securely stored on your local machine.
Start chatting once you've chosen your Bedrock model.
Leveraging Bedrock Embedding Models: In addition to language modeling, you can also utilize Bedrock's high-quality embedding models to enhance your document processing workflows. Options like Cohere's Embed-English and Embed-Multilingual are available through the OpenWebUI integration, allowing you to generate rich vector representations of your text data. Whenever you upload new documents to OpenWebUI it should now use Cohere's Embedding model through Amazon Bedrock.
| 📝 NOTE |
| --- |
| You can validate this by checking bedrock-access-gateway's logs as you upload a document. |
Once a document is uploaded, you can start questioning it! To do that, you can reference any document by using the # sign in a new chat window.
Here you can see that a document has been chosen and questioning has begun.
And here you see Claude 3 responding.
By seamlessly connecting OpenWebUI to Amazon Bedrock, you have the best of both worlds - the privacy and security of a local, offline-first tool combined with access to a wide range of advanced generative AI capabilities. This empowers you to handle your sensitive data with confidence while tapping into the cutting-edge functionality provided by Bedrock.
Conclusion
With the OpenWebUI-Bedrock integration set up, you're now ready to start leveraging the powerful capabilities of these two tools to handle your sensitive data with ease. The combination of OpenWebUI's privacy-focused approach and Bedrock's advanced foundation models, along with its robust data protection features, provides a comprehensive solution for tackling your most complex and confidential tasks.
Amazon Bedrock's compliance with stringent security standards like FedRAMP, SOC, ISO, and HIPAA, as well as its secure model deployment and robust data protection mechanisms, ensures that your sensitive data remains safeguarded throughout the process.
I'm confident that this integration will be a game-changer for the way you approach sensitive and confidential data tasks. With the powerful combination of OpenWebUI and Amazon Bedrock at your fingertips, you can tackle even your most complex challenges while maintaining the highest standards of security and privacy. As I'll be using this setup as my daily driver, I'll be sure to report back on the cost-effectiveness of running this configuration for a month. Please let me know if you have any other questions as you get started!