DEV Community

Akemnoor Singh
How to Talk to an AI 💻: A Beginner’s Guide to the OpenAI API

Ever wondered how apps talk to ChatGPT? Let’s break down the simple but powerful way you can chat with models like GPT-4.

LLM: A Large Language Model is an algorithm that uses training data to recognize patterns and make predictions or decisions.

We’ve all been amazed by what LLMs can do. But what if you want to build that magic into your own website or application?

The answer is that you can use OpenAI APIs. An API is just a way for different software programs to talk to each other. In this case, it lets our app have a conversation with OpenAI’s powerful models.

Each request to the API consists mainly of:

  • an LLM model name
  • an array of messages (basically an array of objects, each with a role and some content)
  • other optional settings
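As a rough sketch, the request body is just a plain object with those pieces. (The `temperature` field below is one example of an optional setting, not something required by the API.)

```javascript
// The shape of a chat completion request:
// a model name, a messages array, and optional settings.
const requestBody = {
    model: 'gpt-4',
    messages: [
        { role: 'system', content: 'You are a helpful general knowledge expert.' },
        { role: 'user', content: 'Who invented the television?' }
    ],
    temperature: 0.7 // optional: controls how random the output is
}

console.log(requestBody.messages.length) // 2
```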

Let’s Write Some Code!
Okay, let’s see a real example of how to make an API call using JavaScript.

First, you need the official OpenAI library. Then, you set up the client and build your messages array.

```javascript
import OpenAI from 'openai'

const openai = new OpenAI({
    apiKey: 'YOUR_API_KEY', // placeholder — never ship a real key in client-side code
    dangerouslyAllowBrowser: true
})

const messages = [
    {
        role: 'system',
        content: 'You are a helpful general knowledge expert.'
    },
    {
        role: 'user',
        content: 'Who invented the television?'
    }
]

const response = await openai.chat.completions.create({
    model: 'gpt-4', // this automatically picks the best current snapshot of gpt-4
    messages
})

console.log(response)
// we get an object whose message has role: "assistant"
console.log(response.choices[0].message.content)
```

Why dangerouslyAllowBrowser: true?

By default, OpenAI disables access from the browser to protect your secret API key from being exposed to users.

You’re telling the OpenAI client library:

Yes, I understand the risk — go ahead and allow API calls from the browser using this client.

After running this, the API sends back a detailed object. It looks a bit scary, but the part we care about is:

response.choices[0].message.content

Response:

```javascript
// the object that response contains
{
    id: "chatcmpl-8Go69bvmGWV8JHvZ9uxYXSUAimEb8",
    object: "chat.completion",
    created: 1699016517,
    model: "gpt-4-0613",
    choices: [
        {
            index: 0,
            message: {
                role: "assistant",
                content: "The invention of television was the work of many individuals in the late 19th century and early 20th century. However, Scottish engineer John Logie Baird is often associated with creating the first mechanical television. He demonstrated his working device in January 1926 in London. Concurrently in the United States, Philo Farnsworth is credited with inventing the first fully electronic television in the late 1920s."
            },
            finish_reason: "stop"
        }
    ],
    usage: {
        prompt_tokens: 24,
        completion_tokens: 86,
        total_tokens: 110
    }
}
```
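To pull the useful pieces out of an object shaped like that, a small sketch (using a hard-coded, trimmed-down copy of the response above rather than a live API call):

```javascript
// A trimmed-down copy of the response object above,
// used to show how to extract the answer and the token counts.
const response = {
    model: "gpt-4-0613",
    choices: [
        {
            index: 0,
            message: {
                role: "assistant",
                content: "John Logie Baird demonstrated a mechanical television in 1926."
            },
            finish_reason: "stop"
        }
    ],
    usage: { prompt_tokens: 24, completion_tokens: 86, total_tokens: 110 }
}

const answer = response.choices[0].message.content
const tokensUsed = response.usage.total_tokens

console.log(answer)
console.log(`Tokens used: ${tokensUsed}`)
```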

SNAPSHOTS OF LLMs

As an LLM evolves (say, GPT-4), OpenAI defaults to the snapshot with the best performance. (Snapshots are essentially versions of an LLM.)

We can also specify which snapshot of a given model we want to use.

```javascript
const response = await openai.chat.completions.create({
    model: 'gpt-4-1106-preview',
    messages: messages
})
```

As you can see in the response, there is a key named “usage,” and it tells us something about tokens.

Tokens in LLMs

total_tokens = prompt_tokens + completion_tokens

token → a chunk of text of no fixed length

  • tokens cost credits (the price depends on the model)
  • every token needs processing
  • keeping the token count low saves time and money
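The usage numbers from the earlier response illustrate the formula directly:

```javascript
// usage block from the response shown earlier
const usage = { prompt_tokens: 24, completion_tokens: 86, total_tokens: 110 }

// total_tokens is the sum of the prompt and completion counts
const computedTotal = usage.prompt_tokens + usage.completion_tokens
console.log(computedTotal === usage.total_tokens) // true
```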

How to reduce API costs while maximizing output quality is something we will discuss in a future post on prompt engineering.
