DEV Community

Artur Schneider for AWS Community Builders


PrivateGPT and AWS EC2: A beginner's Guide to AI experimentation

Introduction

In this era of digital transformation, it's hard to miss the wave of artificial intelligence and machine learning that is sweeping across all sectors. As an enthusiast with a curious mind, but with a limited background in AI, I, like many others, was intrigued yet overwhelmed. This fascination led me down the path of exploring AI, specifically the world of large language models (LLMs).

The world of AI may seem daunting, filled with a myriad of complex terms, algorithms, and architectures. However, my goal was to simplify this journey for myself and, in doing so, create an opportunity for others who wish to venture into this domain.

In this quest for simplicity, I stumbled upon PrivateGPT, an easy-to-implement solution that allows individuals to host a large language model on their local machines. Its powerful functionality and ease of use make it an ideal starting point for anyone looking to experiment with AI. What's even more interesting is that it lets you use your own datasets, opening up avenues for unique, personalized AI applications - all without the need for a constant internet connection.

PrivateGPT comes with a default language model named 'gpt4all-j-v1.3-groovy'. However, it does not limit you to this single model: you can experiment with various other open-source LLMs available on HuggingFace. One such model is Falcon 40B, one of the best-performing open-source LLMs available at the time of writing.

In this blog post, I'll guide you through the process of setting up PrivateGPT on an AWS EC2 instance and using your own documents as sources for conversations with the LLM. To make the interaction even more convenient, we will be using a solution that provides an intuitive user interface on top of PrivateGPT.

So, fasten your seatbelts and get ready for a journey into the exciting realm of AI, as we explore and experiment with large language models, all in the comfort of your own private environment.

Configuration

Launching the EC2 Instance

In this section, we will walk through the process of setting up an AWS EC2 instance tailored for running a PrivateGPT instance. We'll take it step by step. This will lay the groundwork for us to experiment with our language models and to use our own data sources.

Let's start by setting up the AWS EC2 instance:

Choosing an Operating System: In our case, we will be using Amazon Linux as our operating system. This is an excellent choice for hosting PrivateGPT due to its seamless integration with AWS services and robust security features. However, PrivateGPT is flexible and can also be hosted on other operating systems such as Windows or Mac.

Selecting Instance Type: For the needs of our task, we require an instance with a minimum of 16 GB of memory. The specific instance type that you choose may also depend on factors like cost and performance. I recommend one of the T3 instances that meets this requirement, such as t3.xlarge (16 GB) or t3.2xlarge (32 GB).

(Screenshot: Selecting Instance Type)

Storage Configuration: After choosing the instance type, we need to add additional storage for the language model and our data. The exact amount of storage you need will depend on the size of the model and your dataset. For instance, the Falcon 40B model alone requires a considerably larger amount of storage.

(Screenshot: Storage Size Selection)

Instance Details: Proceed with the default instance details for the initial setup. These can be modified later based on specific requirements.

Security Group Configuration: To ensure we can access the instance from our client, it is essential to configure the security group appropriately. Add new rules to the security group that allow inbound traffic on ports 80 and 3000 from your client IP address. This will let you access the instance over the internet while restricting access to your own address.

(Screenshot: Adjusting Security Group)

Remember that this setup is primarily for experimental purposes. Whitelisting IP addresses is one way to secure your instance from unwanted public access. However, for more complex or sensitive deployments, you may need a more robust security architecture.
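If you prefer the AWS CLI over the console, the same inbound rules can be added with `aws ec2 authorize-security-group-ingress`. The security group ID and client IP below are placeholders for illustration; substitute your own values.

```shell
# Allow the PrivateGPT UI (port 3000) and HTTP (port 80) from a single client IP.
# sg-0123456789abcdef0 and 203.0.113.10 are placeholders -- use your own security
# group ID and public IP. The /32 CIDR restricts access to that one address.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 \
  --cidr 203.0.113.10/32

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 \
  --cidr 203.0.113.10/32
```

These commands require the AWS CLI to be configured with credentials that are allowed to modify the security group.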

At this point, you've successfully set up your AWS EC2 instance, creating a solid foundation for running PrivateGPT. Let's continue with the setup of PrivateGPT...

Setting up PrivateGPT

Now that we have our AWS EC2 instance up and running, it's time to move to the next step: installing and configuring PrivateGPT. The following sections will guide you through the process, from connecting to your instance to getting your PrivateGPT up and running.

Connecting to the EC2 Instance

Connection Setup: To start with, we need to connect to the EC2 instance. In this case, we will be using AWS Session Manager. Session Manager provides a secure and convenient way to interact with your instances. It allows you to connect to your instance without needing to open inbound ports, manage SSH keys, or use bastion hosts. If you're new to Session Manager, you can refer to the official AWS documentation to set up the required configuration. Alternatively, you could also connect to your instance via SSH if you prefer.
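With Session Manager configured, opening a shell on the instance is a single command. The instance ID below is a placeholder; this also assumes the AWS CLI, the Session Manager plugin, and an instance profile with SSM permissions are in place, as described in the AWS documentation.

```shell
# Open an interactive shell on the instance via Session Manager.
# i-0123456789abcdef0 is a placeholder -- replace it with your instance ID.
# No inbound SSH port or key pair is required for this to work.
aws ssm start-session --target i-0123456789abcdef0
```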

Installing Prerequisites

Once connected, we need to install a few prerequisites:

Git: Start by installing Git, which will allow us to clone the PrivateGPT repository. If you're using Amazon Linux, you can install Git by running the command:

sudo yum install git

Pip: pip is the package installer for Python, which we will need to install the Python packages required by PrivateGPT. On Amazon Linux it is provided by the python3-pip package, which you can install by running:

sudo yum install python3-pip

NPM: npm (Node Package Manager) is used to install Node.js packages. If it's not already installed, you can install it, together with Node.js, by running:

sudo yum install npm
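Before moving on, it can save some debugging time to confirm that all three prerequisites actually landed on the PATH. A small sketch of such a check:

```shell
# Check that each required tool is installed and reachable.
# Prints "<tool>: OK" or "<tool>: MISSING" for git, pip, node, and npm.
for tool in git pip node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```

If any line reports MISSING, rerun the corresponding yum command above before continuing.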

Configuration of PrivateGPT

With the prerequisites installed, we're now ready to set up PrivateGPT:

Cloning the Repository: Clone the PrivateGPT repository to your instance using Git. You can do this with the command:

git clone https://github.com/SamurAIGPT/privateGPT.git

First we start the client. Navigate to the client folder inside the cloned repository and run the following commands:

npm install
npm run dev

Next we install the necessary Python packages and start the Flask application. Change to the server folder and run the following commands:

pip install -r requirements.txt
python privateGPT.py
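One practical note: both commands run in the foreground, so they stop when your Session Manager or SSH session ends. A simple way to keep them alive is to run each in the background with nohup. This is a sketch, not the project's official run procedure, and the paths assume the client and server folders sit inside the cloned privateGPT directory:

```shell
# Start the client in the background; output goes to client.log.
cd privateGPT/client
npm install
nohup npm run dev > ../client.log 2>&1 &

# Start the Flask server in the background; output goes to server.log.
cd ../server
pip install -r requirements.txt
nohup python privateGPT.py > ../server.log 2>&1 &
```

You can watch either log with tail -f to confirm the processes started cleanly.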

By following these steps, you should have a fully operational PrivateGPT instance running on your AWS EC2 instance. Now, you can start experimenting with large language models and using your own data sources for generating text!

Navigating the PrivateGPT User Interface

Now that we've successfully set up the PrivateGPT on our AWS EC2 instance, it's time to familiarize ourselves with its user-friendly interface. The UI is an intuitive tool, making it incredibly easy for you to interact with your language model, upload documents, manage your models, and generate text.

First and foremost, you need to access the PrivateGPT UI. From your client machine, type the public IP address of your AWS EC2 instance into your web browser's address bar, appending :3000 at the end. It should look something like this:

http://your-public-ip:3000

This will navigate you directly to the PrivateGPT interface hosted on your EC2 instance.
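If the page doesn't load, a quick reachability check from your client machine can tell you whether the problem is the server or the security group. This assumes curl is available locally; a 200 status code suggests the UI is being served.

```shell
# Print only the HTTP status code returned by the PrivateGPT UI.
# Replace your-public-ip with your instance's public IP address.
curl -sS -o /dev/null -w "%{http_code}\n" "http://your-public-ip:3000"
```

A connection timeout here usually points back to the security group rules from the earlier step.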

(Screenshot: Entering UI)

Once you've entered the UI, the next step is to download a Large Language Model (LLM). You'll find a button in the UI specifically for this purpose. Clicking this button will commence the download process for the default language model 'gpt4all-j-v1.3-groovy'. Remember, PrivateGPT comes with a default language model, but you also have the freedom to experiment with others, like Falcon 40B from HuggingFace.

With the language model ready, you're now prepared to upload your documents. Select the documents you'd like to use as a source for your LLM. After your documents have been successfully uploaded, the data needs to be ingested by the system. Look for an 'Ingest Data' button within the UI and click it. This action enables the model to consume and process the data from your documents.

(Screenshot: Ingesting Data)

Finally, with all the preparations complete, you're all set to start a conversation with your AI. Use the conversation input box to communicate with the model, and it will respond based on the knowledge it has gained from the ingested documents and its underlying model. (In my example I have generated PDF files from the official AWS documentation.)

(Screenshot: Response from Bot)

And voila! You've now set foot in the fascinating world of AI-powered text generation. You can continue to explore and experiment with different settings and models to refine your understanding and outcomes. Enjoy this exciting journey!

Summary

As we wind up this exploration, it's worth highlighting that while PrivateGPT may not offer the exact capabilities of something like ChatGPT, it provides a robust and secure environment to experiment with large language models, leveraging your own sources of data. From PDFs and HTML files to Word documents and beyond, PrivateGPT offers flexibility in the types of documents you can use as data sources. (For the full list of supported document types, refer to the official PrivateGPT GitHub repository at https://github.com/imartinez/privateGPT.)

What makes PrivateGPT all the more compelling is the constant evolution of open-source large language models. As these models continue to improve, the gap between them and services like ChatGPT is rapidly closing. The added advantage is that you're in control of your own data and infrastructure, providing a level of trust and flexibility that is invaluable in the rapidly evolving AI landscape.

Undoubtedly, the journey into the realm of AI and large language models doesn't end here. With services like AWS SageMaker and open-source models from HuggingFace, the possibilities for experimentation and development are extensive. For those interested in more production-ready solutions, I highly recommend the insightful blog post by Philipp Schmid on using large language models with AWS SageMaker in an AWS environment. You can find his blog post here:

In conclusion, whether you're a seasoned AI practitioner or a curious enthusiast, tools like PrivateGPT offer an exciting playground to dive deeper into the world of AI. So go ahead, set up your PrivateGPT instance, play around with your data and models, and experience the incredible power of AI at your fingertips. Remember, "es lohnt sich" - it's worth it!

Shout out to the creators of PrivateGPT, Ivan Martinez and the team around SamurAIGPT, who give us a great start into the AI world through this simplification.

Top comments (1)

Tom Harvey

Thanks for the post and guide. I think you're linking to two different repos:

In the summary you link to
github.com/imartinez/privateGPT

While the setup you’re cloning:
github.com/SamurAIGPT/privateGPT

As these are different packages, I got a bit lost in digging up the docs.