ogoh cyril
Supercharge Your System Development: Leveraging Pre-Trained LLM APIs for Faster and More Efficient Building

Large Language Models, like those developed by OpenAI, have become invaluable tools for companies looking to innovate and grow in today's competitive landscape.

Discover how companies are revolutionizing their workflows by leveraging the power of LLMs from providers such as OpenAI and Meta.

By utilizing pre-trained LLM models and APIs, companies can avoid the significant time and resources required to develop and train their own AI models from scratch. This cost-effective approach lowers barriers to entry, particularly for smaller companies with limited budgets, and allows them to compete on a level playing field with larger competitors.

Funny Meme Of OpenAI

Goals

In this article, I will guide you through the process of constructing a comprehensive system that demonstrates how to utilize and integrate LLM (Large Language Models) effectively. We'll cover:

  1. Consuming OpenAI Docs: We'll start by exploring how to consume OpenAI documentation to create a simple JavaScript code snippet. This will serve as a foundational step in understanding how to interact with OpenAI's APIs and leverage LLM's capabilities.

  2. Building an Express Backend: Next, we'll delve into building an Express backend application and integrating AI functionalities into it. This will involve setting up routes, handling requests, and incorporating LLM to perform specific tasks or provide intelligent responses.

  3. Adding AI to a Client-side Application: Finally, we'll demonstrate how to seamlessly integrate the AI-powered backend into a client-side application. Whether it's a web-based interface, a mobile app, or another type of frontend, we'll show you how to incorporate LLM-powered features to enhance user experiences.

Throughout the article, we'll provide practical examples, code snippets, and step-by-step instructions to help you understand and implement each component effectively. By the end, you'll have a clear understanding of how to consume, build, and integrate LLM into your existing systems, opening up new possibilities for innovation and efficiency.


Consuming OpenAI Docs

Jumping straight into it, here's what you'll need:

  • A trusty PC (obviously!)
  • An account on the OpenAI Platform
  • An API key generated from your OpenAI account, which we'll use to authenticate our requests
  • Node.js installed on your machine (or an online Node.js runtime in your browser)
  • An IDE for coding - I personally recommend Visual Studio Code (VSCode)

With these essentials in place, you're all set to embark on your journey of integrating LLM into your projects. Let's dive in!

Creating An Account On The OpenAI Platform And Getting An API Key

Head over to the OpenAI website (https://platform.openai.com/) using your preferred web browser and create an Account.

Once logged in, navigate to your account settings or dashboard to access your API settings. This is where you'll be able to generate your API key.

OpenAI Platform Page

Locate the option to generate an API key and follow the prompts to generate one. Your API key is a unique identifier that allows you to access OpenAI's APIs and services.

Prompt to create API KEY

Once generated, make sure to keep your API key secure. Treat it like a password and avoid sharing it publicly or with unauthorized individuals.
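Rather than hardcoding the key (as the snippets below do for brevity), a safer pattern is to read it from an environment variable so it never lands in source control. A minimal sketch, assuming you export a variable named OPENAI_API_KEY in your shell before running the script:

```javascript
// Read the API key from the environment instead of hardcoding it.
// OPENAI_API_KEY is an assumed variable name; export it before running:
//   export OPENAI_API_KEY="sk-..."
function getApiKey() {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}

// Usage sketch: const openai = new OpenAI({ apiKey: getApiKey() });
```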

I won't discuss setting up Node and VSCode here, but you can refer to this article for guidance:
Installing npm and Node.js on Windows and Mac

Now that we have almost all our Infinity Stones, let's get to the fun part - building!

Coding And Testing

First, initialize a Node.js project with npm init -y and install the OpenAI JavaScript library with npm install openai.

Then create a file called index.js and build our function:

//index.js

//importing Open AI Lib And Core Functions
const OpenAI = require("openai");

const configuration = {
   apiKey: "REPLACE WITH SECRET KEY"
};

const openai = new OpenAI(configuration);

// Defining a conversation message with roles and content
let message = [
  {
    role: "system",
    content: "You Are A Banker, Be A very Rude One",
  },
  {
    role: "user",
    content: "do I have money in my account ?",
  },
];

async function getChatResponse() {
  try {
    // Making an asynchronous call to create a chat completion
    const response = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-16k-0613",
      messages: message,
      temperature: 1,
      max_tokens: 256,
      top_p: 1,
      frequency_penalty: 0,
      presence_penalty: 0,
    });

    // Logging the response from OpenAI to the console
    console.log(response.choices[0].message.content.toString());
  } catch (error) {
    // Handling errors if any occur during the API call
    console.error("Error occurred:", error);
  }
}

// Calling the function to get the chat response
getChatResponse();

Run node index.js in your console.

Terminal showing Output

You should see a response like this in the console:

"Well, if you bothered to check your account balance before asking such a stupid question, you would already know if you have any money. But of course, why take the effort to be responsible when you can just ask me, right?"
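For reference, here is an abridged sketch of the shape of the completion object the snippet reads; only the fields accessed above are shown, and real responses carry additional metadata such as id, created, and usage:

```javascript
// Abridged sketch of a chat completion response; the snippet above only
// reads choices[0].message.content.
const exampleResponse = {
  model: "gpt-3.5-turbo-16k-0613",
  choices: [
    {
      index: 0,
      message: {
        role: "assistant",
        content: "Well, if you bothered to check your account balance...",
      },
      finish_reason: "stop",
    },
  ],
};

// This mirrors the access pattern used in getChatResponse()
const text = exampleResponse.choices[0].message.content;
console.log(text);
```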

If you experience any errors, they are most likely due to billing issues; check the billing settings on your OpenAI account. Otherwise, well done on consuming the OpenAI API!
View Github For The Code

Official Docs

Building an Express Backend

While consuming OpenAI's documentation provides valuable insights into leveraging AI models, it's only the first step towards unlocking their full potential. Building an Express backend acts as the bridge that connects these powerful AI capabilities with real-world applications. This intermediary layer not only facilitates seamless communication with OpenAI's services but also enables efficient data processing, authentication, and scalability. In this section, we delve into the pivotal role of tying a backend with OpenAI, exploring how it empowers developers to create dynamic and intelligent applications that deliver transformative experiences to end-users.

What Our Backend Should Be Able to Do

  • Authentication: Implementing user authentication to track and manage users' interactions securely.
  • Storage of Messages: Storing user messages and interactions for data analysis and model training.
  • Sending of Messages: Facilitating the exchange of messages between users and the OpenAI model to generate responses effectively.

Diagram of Server-client Implementation

In this system, a user/actor connects to the backend with a client of their choice and sends a prompt. The prompt is received, processed, and stored, then passed to a third party such as OpenAI. The system then receives OpenAI's response, stores it, and returns it to the user/actor.
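The flow above can be sketched as a small pipeline with injectable steps, which also makes it easy to test with stubs. Here, store and askModel are placeholders for the Mongoose and OpenAI calls we build below:

```javascript
// Sketch of the request flow: persist the prompt, forward it to the model,
// persist the reply, return it. store/askModel are injected placeholders.
async function handlePrompt(prompt, { store, askModel }) {
  await store({ body: prompt, fromChat: false }); // save the user's prompt
  const reply = await askModel(prompt);           // call the third party (e.g. OpenAI)
  await store({ body: reply, fromChat: true });   // save the model's reply
  return reply;                                   // hand the reply back to the client
}
```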

More Code

First, initialize a Node project with npm init -y, then install the OpenAI JavaScript library (npm install openai), Express (npm install express cookie-parser cors), and Mongoose, a MongoDB ODM (npm install mongoose).

Setting up Express:

//index.js
var express = require('express');
var cookieParser = require('cookie-parser');
const cors = require('cors')
var http = require('http');

const app = express();
// enabling CORS
app.use(cors({
  origin: '*'
}));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cookieParser());

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  res.status(404).json({
    success: false,
    status: "Resource Not Found",
    error: "404 Content Do Not Exist Or Has Been Deleted",
  });
});

// error handler
app.use(function(err, req, res, next) {
  // no view engine is configured, so respond with JSON instead of render()
  res.status(err.status || 500).json({
    success: false,
    status: "Server Error",
    // only expose error details in development
    error: req.app.get('env') === 'development' ? err.message : "Internal Server Error",
  });
});
var server = http.createServer(app);
/**
 * Listen on provided port, on all network interfaces.
 */
const port = 3000
server.listen(port);
server.on('listening', onListening);
function onListening() {
    var addr = server.address();
    var bind = typeof addr === 'string'
      ? 'pipe ' + addr
      : 'port ' + addr.port;
    console.log('Listening on ' + bind);
  }

module.exports = app;

We need to incorporate a database into our system, and for that, we'll use Mongoose.

To set up MongoDB on your machine, follow these steps:

Download MongoDB: Visit the MongoDB website and download the appropriate version for your operating system.

Install MongoDB: Follow the installation instructions provided for your operating system to install MongoDB on your machine.

Set Up MongoDB: After installation, set up MongoDB by following the configuration steps recommended in the MongoDB documentation.

Start MongoDB: Once configured, start the MongoDB service on your machine. You may need to start it manually or configure it to start automatically on system boot, depending on your operating system.

With MongoDB installed and running, we can proceed to integrate it into our backend using Mongoose for seamless data management.

const mongoose = require("mongoose");
mongoose.connect("mongodb://localhost:27017/llm_api");
// schema

const user = mongoose.model("user", {
  email: {
    type: String,
    required: [true, "Please enter your Email address"],
    trim: true,
    unique: true,
    lowercase: true,
    match: [
      /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/,
      "Please enter a valid Email address",
    ],
  },
  password: {
    type: String,
    required: true,
    // NOTE: stored in plaintext for demo purposes only; hash passwords
    // (e.g. with bcrypt) in any real application
  },

  name: {
    type: String,
    required: true,
  },
});

// Message table or document
const message = mongoose.model("message", {
  user: {
    type: mongoose.SchemaTypes.ObjectId,
    required: true,
    ref: "user",
  },
  body: {
    type: String,
    required: [true, "Body is Required "],
  },
  fromChat: {
    type: Boolean,
    default: false,
    required: [true, "fromChat is required"],
  },
  type: {
    type: String,
    required: [true, "Please provide a type of message"],
    enum: {
      values: ["text", "csv"],
      message: 'Please use a valid Data Type ["text", "csv"]',
    },
  },
});



Building the authentication functions

// cookie options used when setting the auth token cookie
const options = {
  expires: new Date(Date.now() + 10 * 24 * 60 * 60 * 1000),
  httpOnly: true,
};

/**
 * @description Post For Users To Login
 * @route `/login`
 * @access Public
 * @type Post
 */
app.post("/login", async (req, res, next) => {
  const { email, password } = req.body;

  if (!email || !password) {
    return res.status(400).json("Please provide an email and password");
  }

  const data = await user
    .findOne({ email: email.toLowerCase() })
    .select("+password");

  if (!data) {
    return res.status(401).json("Invalid credentials");
  }

  // NOTE: plaintext comparison for demo purposes only; use a hashed
  // comparison (e.g. bcrypt.compare) in production
  const isMatch = data.password == password;

  if (!isMatch) {
    return res.status(401).json("Invalid credentials");
  } else {
    res.status(201).cookie("token", data._id.toString(), options).json({
      success: true,
      status: "success",
      data,
    });
  }
});

/**
 * @description Post For Users To Register
 * @route `/register`
 * @access Public
 * @type Post
 */
app.post("/register", async (req, res, next) => {
  const { name, email, password } = req.body;
  const isUser = await user.findOne({ email: email.toLowerCase() });
  if (isUser) {
    return res.send(`${email} is Assigned to a user sign in instead`);
  }
  const newUser = await user.create({
    name,
    email,
    password,
  });

  res.status(201).cookie("token", newUser._id.toString(), options).json({
    success: true,
    status: "success",
    data: newUser,
  });
});

Our complete code setup should resemble the following:

var express = require("express");
var cookieParser = require("cookie-parser");
const cors = require("cors");
var http = require("http");

const app = express();
const mongoose = require("mongoose");
mongoose.connect("mongodb://localhost:27017/llm_api");
// user table or document
const user = mongoose.model("user", {
  email: {
    type: String,
    required: [true, "Please enter your Email address"],
    trim: true,
    unique: true,
    lowercase: true,
    match: [
      /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/,
      "Please enter a valid Email address",
    ],
  },
  password: {
    type: String,
    required: true,
    // NOTE: stored in plaintext for demo purposes only; hash passwords
    // (e.g. with bcrypt) in any real application
  },

  name: {
    type: String,
    required: true,
  },
});

// Message table or document
const message = mongoose.model("message", {
  user: {
    type: mongoose.SchemaTypes.ObjectId,
    required: true,
    ref: "user",
  },
  body: {
    type: String,
    required: [true, "Body is Required "],
  },
  fromChat: {
    type: Boolean,
    default: false,
    required: [true, "fromChat is required"],
  },
  type: {
    type: String,
    required: [true, "Please provide a type of message"],
    enum: {
      values: ["text", "csv"],
      message: 'Please use a valid Data Type ["text", "csv"]',
    },
  },
});

// enabling CORS
app.use(
  cors({
    origin: "*",
  })
);
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cookieParser());

const options = {
  expires: new Date(Date.now() + 10 * 24 * 60 * 60 * 1000),
  httpOnly: true,
};


const protected = (req, res, next) => {
    let token;
    if (req.cookies.token) {
      token = req.cookies.token;
    }
    if (!token) {
      return res.status(401).json({
        success: false,
        status: "Not Authorized",
        error: "401 Invalid Authorization",
      });
    }
    req.user = {
      _id: token,
    };
    next();
  };

/**
 * @description Post For Users To Login
 * @route `/login`
 * @access Public
 * @type Post
 */
app.post("/login", async (req, res, next) => {
  const { email, password } = req.body;

  if (!email || !password) {
    return res.status(400).json("Please provide an email and password");
  }

  const data = await user
    .findOne({ email: email.toLowerCase() })
    .select("+password");

  if (!data) {
    return res.status(401).json("Invalid credentials");
  }

  // NOTE: plaintext comparison for demo purposes only; use a hashed
  // comparison (e.g. bcrypt.compare) in production
  const isMatch = data.password == password;

  if (!isMatch) {
    return res.status(401).json("Invalid credentials");
  } else {
    res.status(201).cookie("token", data._id.toString(), options).json({
      success: true,
      status: "success",
      data,
    });
  }
});

/**
 * @description Post For Users To Register
 * @route `/register`
 * @access Public
 * @type Post
 */
app.post("/register", async (req, res, next) => {
  const { name, email, password } = req.body;
  const isUser = await user.findOne({ email: email.toLowerCase() });
  if (isUser) {
    return res.send(`${email} is Assigned to a user sign in instead`);
  }
  const newUser = await user.create({
    name,
    email,
    password,
  });

  res.status(201).cookie("token", newUser._id.toString(), options).json({
    success: true,
    status: "success",
    data: newUser,
  });
});

app.get("/messages", protected, async (req, res, next) => {
  const myMessage = await message.find({ user: req.user._id }).sort({
    _id: -1,
  });
  res.status(200).json({
    success: true,
    status: "success",
    prompt: myMessage,
  });
});

//importing Open AI Lib And Core Functions
const OpenAI = require("openai");

const configuration = {
    apiKey: "REPLACE WITH SECRET KEY"
};

const openai = new OpenAI(configuration);

// Builds the chat history from the DB and asks OpenAI for a reply.
// The prompt itself is already stored, so it is re-read with the history.
const resFromChat = async (prompt, user) => {
    const allMessages = await message.find({ user: user }).sort({ _id: 1 });

    let data = [];
    let chat;
    for (let i = 0; i < allMessages.length; i++) {
      const e = allMessages[i];

      if (!e.fromChat) {
        chat = {
          role: "user",
          content: e.body,
        };
      }
      if (e.fromChat) {
        chat = {
          role: "assistant",
          content: e.body,
        };
      }
      data.push(chat);
    }

    let upload = [
      {
        role: "system",
        content:
          "You Are A Banker, Be A very Rude One",
      },
    ];
    upload = upload.concat(data);
    try {
      const response = await openai.chat.completions.create({
        model: "gpt-3.5-turbo-16k-0613",
        messages: upload,
        temperature: 1,
        max_tokens: 256,
        top_p: 1,
        frequency_penalty: 0,
        presence_penalty: 0,
      });

      const data = {
        user,
        body: response.choices[0].message.content.toString(),
        type: "text",
        fromChat: true,
      };

      await message.create(data);
      return data;
    } catch (error) {
      const data = {
        user,
        body: "Error From Server Please Try Again",
        type: "text",
        fromChat: true,
      };

      await message.create(data);
      return data;
    }
  };

/**
 * @description Send new Prompt
 * @route `/messages`
 * @access Protected
 * @type POST
 */
app.post("/messages", protected, async (req, res, next) => {
  const { body, type } = req.body;
  const newMgs = await message.create({
    body,
    type,
    user: req.user._id,
    fromChat: false,
  });

  const data = await resFromChat(body, req.user._id);
  res.status(200).json({
    success: true,
    status: "success",
    prompt: data,
  });
});


// catch 404 and forward to error handler
app.use(function (req, res, next) {
  res.status(404).json({
    success: false,
    status: "Resource Not Found",
    error: "404 Content Do Not Exist Or Has Been Deleted",
  });
});

// error handler
app.use(function (err, req, res, next) {
  // no view engine is configured, so respond with JSON instead of render()
  res.status(err.status || 500).json({
    success: false,
    status: "Server Error",
    // only expose error details in development
    error: req.app.get("env") === "development" ? err.message : "Internal Server Error",
  });
});
var server = http.createServer(app);
/**
 * Listen on provided port, on all network interfaces.
 */
const port = 3000;
server.listen(port);
server.on("listening", onListening);
function onListening() {
  var addr = server.address();
  var bind = typeof addr === "string" ? "pipe " + addr : "port " + addr.port;
  console.log("Listening on " + bind);
}

module.exports = app;

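The for-loop inside resFromChat can be factored into a pure helper that converts stored message documents into the role/content array OpenAI expects, which keeps the mapping easy to unit-test. toChatHistory is a hypothetical name, and the system prompt is the one used above:

```javascript
// Map stored message docs to OpenAI chat messages: user prompts become
// "user" turns, replies flagged fromChat become "assistant" turns, with
// the system prompt prepended.
function toChatHistory(storedMessages, systemPrompt) {
  const history = [{ role: "system", content: systemPrompt }];
  for (const m of storedMessages) {
    history.push({
      role: m.fromChat ? "assistant" : "user",
      content: m.body,
    });
  }
  return history;
}
```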

To run the code, execute node index in your terminal. You can then use Postman as an initial client to interact with the server by sending requests to the endpoints.

POSTMAN

view the full code on github

Adding AI to a Client-side Application

In this final section, we'll augment our LLM Agent with a simple web interface to showcase its functionality in a user-friendly manner. We'll implement both login and registration forms to facilitate user authentication, and we'll design a chat graphical user interface (GUI) for interacting with our LLM Agent.

It's important to note that all endpoints are accessed using native JavaScript fetch requests without relying on any additional frameworks. Furthermore, authentication is cookie-based, making it a lightweight and straightforward solution for our project.

Let's explore the sample code used to bring this web interface to life.

AUTH
This Handles the Auth

      // Function to check if token exists in cookies
      function checkToken() {
        const token = document.cookie
          .split(";")
          .find((cookie) => cookie.trim().startsWith("token="));
        return token ? true : false;
      }

      // Redirect to login page if token doesn't exist
      if (!checkToken()) {
        window.location.href = "./login.html";
      }


ChatGUI
This fetches the messages and renders them in order


      // Fetch messages from an endpoint
      async function fetchMessages() {
        try {
          const response = await fetch("/messages");
          const data = await response.json();

          return data.prompt; // Assuming the API response contains an array of messages
        } catch (error) {
          console.error("Error fetching messages:", error);
          return []; // Return an empty array if an error occurs
        }
      }

      // Function to display messages in the chat box
      async function displayMessages() {
        const messages = await fetchMessages();
        chatBox.innerHTML = ""; // Clear previous messages
        messages.reverse().forEach((message) => {
          const messageElement = document.createElement("div");
          messageElement.textContent = message.body;
          messageElement.classList.add("message");
          if (!message.fromChat) {
            messageElement.classList.add("sender");
          } else {
            messageElement.classList.add("receiver");
          }
          chatBox.appendChild(messageElement);
        });
        // Scroll to bottom of chat box
        chatBox.scrollTop = chatBox.scrollHeight;
      }

      // Call displayMessages function initially to display existing messages
      displayMessages();

      // Event listener for message form submission
      messageForm.addEventListener("submit", async function (event) {
        event.preventDefault(); // Prevent default form submission behavior

        const newMessage = {
          sender: "user", // Assuming the user is sending the message
          body: messageInput.value,
        };

        // Add the new message to the messages array
        // Here, you can also send the new message to the backend if needed

        // Call displayMessages to update chat box with new message
        await displayMessages();

        // Clear message input field
        messageInput.value = "";

        const response = await fetch("/messages", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ type: "text", body: newMessage.body }),
        });
        if (response.ok) {
          await displayMessages();
        }
      });


view the full code on github

Conclusion

Throughout this article, I've provided insights into constructing a comprehensive system to effectively utilize and integrate LLMs into existing workflows. From consuming OpenAI documentation to building an Express backend and integrating AI features into client-side applications, we've outlined practical steps accompanied by code snippets and examples.

As demonstrated, the possibilities are limitless when it comes to leveraging LLMs. Whether it's automating customer support, generating creative content, or powering intelligent assistants, the transformative potential of LLMs is reshaping how businesses operate and engage with their audiences.

Shoutout Veritas 300lvl
Signing out by 4 AM
