AI Research Agent with memory using GPT-4o-mini: Step-by-Step Guide.

Do you sometimes feel overwhelmed by too many research papers and struggle to find the right one? You're not alone, my friend. There's so much academic literature out there that it's hard to keep up with the latest discoveries in your field. But what if you could make this process easier and tailor it to your needs? What if you could build an AI Research Agent and make it your personal assistant that finds the most relevant papers for you?

You will build this agent using powerful libraries: Streamlit to create a user-friendly web app, OpenAI's GPT-4o-mini for advanced language understanding, MultiOn to access arXiv and retrieve the latest research data, and Mem0 to provide a personalized memory layer that learns from your preferences. With these tools, you'll be able to navigate academic research like never before.

With this, you'll spend less time searching and more time focusing on your work. In this guide, you'll see that building your own AI research agent with memory, using GPT-4o-mini and a vector database to find relevant research papers based on your interests, is easier than you think.
Even if you're just starting out, you can set up this powerful tool.
Here’s a step-by-step guide to get you started.

Project Environment Setup

When working in Visual Studio Code (VS Code), start by creating a new Python file for your project. It's helpful to keep separate files for the different parts of your project.

Create a new Python application:

To do this, open VS Code and create a new folder:

Step 1

Open VS Code


Step 2

Create a new folder


Step 3

Create a new file called app.py in the newly created folder.


Installing required Python Libraries

First, open your terminal and run the following command:

pip install streamlit openai multion mem0
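The memory layer in this guide uses Qdrant as its vector store, so you'll also need a Qdrant instance running locally before the app can connect to it (the configuration later on points at localhost:6333). One common way to start it, assuming you have Docker installed, is:

docker run -p 6333:6333 qdrant/qdrant

Leave this running in a separate terminal while you work on the app.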

Importing the necessary Libraries

In your Python script, import the following libraries:

import streamlit as st
import os
from openai import OpenAI
from multion.client import MultiOn
from mem0 import Memory
  • Streamlit: Used for building the web app.
  • OpenAI: Utilized for GPT-4o-mini.
  • MultiOn: Accesses arXiv and retrieves data.
  • Mem0: Provides a personalized memory layer.

Setting Up the Streamlit App

Configure the basic layout of your Streamlit app:

st.title("AI Research Agent with Memory 📚")
api_keys = {k: st.text_input(f"{k.capitalize()} API Key", type="password") for k in ['openai', 'multion']}

Initializing services with API Keys

Set up the services by configuring Mem0 with Qdrant as the vector store and initializing MultiOn and OpenAI clients:

if all(api_keys.values()):
    os.environ['OPENAI_API_KEY'] = api_keys['openai']
    config = {
        "vector_store": {
            "provider": "qdrant",
            "config": {"model": "gpt-4o-mini", "host": "localhost", "port": 6333},
        },
    }
    memory = Memory.from_config(config)
    multion = MultiOn(api_key=api_keys['multion'])
    openai_client = OpenAI(api_key=api_keys['openai'])
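Before wiring everything into the Streamlit UI, it can help to confirm that Mem0 can actually reach your local Qdrant instance. The short throwaway script below is a minimal sketch of such a check (the file name, user_id, and stored text are just examples; it assumes Qdrant is running on localhost:6333 and that OPENAI_API_KEY is set in your environment, since Mem0 uses OpenAI embeddings by default):

# memory_check.py - a quick sanity check for the memory layer
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"model": "gpt-4o-mini", "host": "localhost", "port": 6333},
    },
}
memory = Memory.from_config(config)

# store one sample preference and read it back
memory.add("I am mostly interested in large language model papers", user_id="test_user")
print(memory.search("language models", user_id="test_user", limit=1))

If this prints the stored memory back, the vector store is wired up correctly and the app code above will work against it.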

Creating user input and search query fields

Add a sidebar input for the username (stored as user_id, which the memory calls below rely on) and a search query field:

user_id = st.sidebar.text_input("Enter your Username")
search_query = st.text_input("Research paper search query")

Defining a function to process search results with GPT-4o-mini

Create a function to process search results into a readable format:

def process_with_gpt4(result):
    prompt = f"""
    Based on the following arXiv search result, provide a properly structured output in markdown that is readable by the users. Each paper should have a title, authors, abstract, and link.
    Search Result: {result}
    Output Format: Table with the following columns: [{{"title": "Paper Title", "authors": "Author Names", "abstract": "Brief abstract", "link": "arXiv link"}}, ...]
    """
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}], temperature=0.2)
    return response.choices[0].message.content

Implementing paper search functionality

Build the core functionality to search and display research papers:

if st.button('Search for Papers'):
    with st.spinner('Searching and Processing...'):
        relevant_memories = memory.search(search_query, user_id=user_id, limit=3)
        prompt = f"Search for arXiv papers: {search_query}\nUser background: {' '.join(mem['text'] for mem in relevant_memories)}"
        result = process_with_gpt4(multion.browse(cmd=prompt, url="https://arxiv.org/"))
        st.markdown(result)
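One thing to be aware of: the block above only reads from memory, so the user background will stay empty until something is actually stored. If you want each search to gradually teach the agent about your interests, you could append a line like the following at the end of the with st.spinner(...) block (a minimal sketch; the stored wording is just an example):

        # remember this query so future searches can use it as background context
        memory.add(f"Searched for arXiv papers about: {search_query}", user_id=user_id)

With that in place, the memory viewer described next will also have something to show after your first search.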

Adding memory viewing feature

To view your stored memories:

if st.sidebar.button("View Memory"):
    st.sidebar.write("\n".join(f"- {mem['text']}" for mem in memory.get_all(user_id=user_id)))

Running the Application

To see your AI research agent in action, save the code above as app.py (the file you created earlier) and run the following command from your project folder:

streamlit run app.py

This will launch your Streamlit app, where you can search for research papers and manage your personalized memory layer.

And there you have it! You now have the power to tame the beast of academic literature and make research a whole lot easier. With your AI Research Agent by your side, you'll be able to find the perfect papers, stay on top of the latest discoveries, and focus on what really matters - your work.
It's like having your own personal research assistant, minus the coffee breaks. So, what are you waiting for? Dive in, start building, and discover a whole new world of stress-free research.
If you find this guide useful, please share it.
