Afroza Nowshin


Step-by-step Guidelines for Integrating GPT in Your Project: Create an API for Anything Using LangChain and FastAPI

Generative AI has taken the world by storm. With the advent of GPT-3.5 and Bing Chat powered by GPT-4, LLMs are more accessible than ever. You can build your own GPT assistant by obtaining an OpenAI key and following the steps from OpenAI. When you create your account, you get a $5 credit, which is useful for trying the paid GPT services.

Here is a problem with most of the free GPTs.

Free GPTs lack context. This is critical when you are using GPTs to build your chatbot or GPT assistant. Without proper context, these assistants are more prone to hallucinations and out-of-context results.

Fret not; we have an open-source framework called LangChain to save the day. With LangChain, we can create GPT assistants that have context, and we get free access to almost all of the major LLMs. LangChain's tools and APIs simplify the development of LLM-driven applications and virtual agents, and you can build applications that combine multiple LLMs. We can even create custom chains suited to our work, which I intend to cover in future posts.

The workflow at a glance is as follows:

  1. Set up a Python virtual environment
  2. Install the LangChain packages and FastAPI
  3. Set up LangChain to work with the OpenAI LLM or another LLM
  4. Create a prompt for the specific task
  5. Create your own Chain
  6. Create an API

1. Set up a Python virtual environment

First of all, set up a Python virtual environment; see these links: 1 and 2. I developed my GPT-based API with Python 3.8.18, but any Python version >= 3.7 will do.
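If you have not used a virtual environment before, a minimal setup on Linux/macOS looks like this (the environment name gpt-api-env is just an example):

# Create and activate a virtual environment
python3 -m venv gpt-api-env
source gpt-api-env/bin/activate
# On Windows, activate with: gpt-api-env\Scripts\activate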

2. Install the LangChain packages and FastAPI

Now install the basic packages with the following command (python-multipart is included because our endpoint will read form fields, and uvicorn is the server we will run FastAPI with):

pip install langchain langchain-openai fastapi uvicorn python-multipart

3. Set up LangChain to work with the OpenAI LLM or another LLM

Create a Python file and import the libraries:

from fastapi import FastAPI, HTTPException, Form
from langchain_openai import OpenAI as LLMOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from pydantic import BaseModel

# Create the FastAPI application that the route below attaches to
app = FastAPI()

For this post, we will use OpenAI's LLM, which can be set up with the following code:

# Language model for LangChain
# (assumes your OpenAI key is available in the OPENAI_API_KEY environment variable)
llm = LLMOpenAI(temperature=0.7, max_tokens=500, streaming=True, batch_size=50)

The temperature parameter sets the "creativity" level of the GPT response. The value is between 0 and 1: values near 0 give more deterministic, grounded results, while values closer to 1 are more creative but also more prone to hallucination and improper outcomes. Usually, I keep the temperature between 0.6 and 0.7. In addition, streaming=True makes the model return tokens as they are generated, and batch_size controls how many prompts are sent to the API in a single batch when processing multiple inputs.
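For comparison, if you want near-deterministic, repeatable answers (for example, for code-related tasks), you can drop the temperature to 0; the variable name here is only illustrative:

# Near-deterministic configuration for more grounded, repeatable output
deterministic_llm = LLMOpenAI(temperature=0, max_tokens=500)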

4. Create a prompt

We need to set up a prompt. It will take one or more input variables, with which we define the prompt template:

# Prompt with two placeholders that are filled in for every request
prompt_template = PromptTemplate(
    input_variables=['input', 'operation'],
    template="Provide {operation} for the following code:\n\n{input}\n\nOutput:"
)
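To sanity-check what the rendered prompt looks like, you can format the template directly; the sample values below are just an illustration:

# Render the template with concrete values (for inspection only)
print(prompt_template.format(
    input="def add(a, b): return a + b",
    operation="a docstring"
))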

5. Create your own Chain

For input validation, we imported Pydantic's BaseModel; the Input model below describes the expected payload. Then we can create our own Chain by passing our LLM and prompt to the LLMChain class.

class Input(BaseModel):
    input: str
    operation: str

# Factory that builds the chain; the route below calls it on each request
def get_llm_chain():
    return LLMChain(llm=llm, prompt=prompt_template)
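Before wiring the chain into an endpoint, it helps to know what it returns: LLMChain's invoke() gives back a dict that echoes the inputs and stores the generated text under the 'text' key. A quick check with illustrative values:

result = get_llm_chain().invoke({
    'input': 'def add(a, b): return a + b',
    'operation': 'a docstring'
})
print(result['text'])  # the generated completion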

6. Create an API

Finally, finish up the API that wraps the chain. Call the factory function from the last step and run the chain with its invoke() method.

@app.post('/process-input')
async def process(input: str = Form(...), operation: str = Form(...)):
    try:
        # Run the chain with the operation and input filled into the prompt
        result = get_llm_chain().invoke({'input': input, 'operation': operation})

        # LLMChain returns a dict; the generated text is under the 'text' key
        return {'output': result['text']}

    except HTTPException as http_error:
        raise http_error
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error: {str(e)}")
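To try it out, start the server with Uvicorn and send a form-encoded request. The module name main assumes your file is named main.py, which is just an example:

uvicorn main:app --reload

curl -X POST http://127.0.0.1:8000/process-input \
     -F "input=def add(a, b): return a + b" \
     -F "operation=a docstring"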

With these steps, you can build your own custom chatbots and virtual agents, or add features like summarization, by integrating LangChain into the internal layers of your application. I hope this post has helped even just one person out there. I wish to explore LangChain more and share that knowledge with you soon.
