Evan Lin

Posted on 2023-12-10 • Originally published at [evanlin.com](http://www.evanlin.com/til-ollama-go-vertexai/)

# [Golang] Quickly Set Up a Free Local ChatGPT with Ollama and Build a LangChainGo Application

![image-20231212012818576](http://www.evanlin.com/images/2022/image-20231212012818576.png)

# Preface

It's been a while since I've written an article about Golang, so I thought I'd describe what I've been reading about LangChain in Go. This article mainly references Eli Bendersky's blog post, [Using Ollama with LangChainGo](https://eli.thegreenplace.net/2023/using-ollama-with-langchaingo/).

In this article, I will also cover the following in more detail:

- What is Ollama and what can it do?
- How to use Ollama?
- How to connect to Ollama from Golang and integrate it with LangChain

Let's get started!

# What is Ollama

![image-20231211231527061](http://www.evanlin.com/images/2022/image-20231211231527061.png)

[Ollama](https://ollama.ai/) is a very convenient tool. In the past, if you wanted to run Llama locally, you needed the following steps:

- [Apply for a download link from Meta AI](https://ai.meta.com/llama/)
- Use [llama.cpp](https://github.com/ggerganov/llama.cpp) to convert the Llama 2 model into a format you can run on macOS
- Run the converted model through [llama.cpp](https://github.com/ggerganov/llama.cpp) to do what you need

With Ollama, the process is much simpler.

### Install Ollama

As described in the [GitHub documentation](https://github.com/jmorganca/ollama), install it with:

```shell
curl https://ollama.ai/install.sh | sh
```


### Download and run the model

After installation, the following command downloads the model (on first run) and starts an interactive chat:

```shell
ollama run llama2
```


Other available models:

| Model | Parameters | Size | Download |
| --- | --- | --- | --- |
| Neural Chat | 7B | 4.1GB | `ollama run neural-chat` |
| Starling | 7B | 4.1GB | `ollama run starling-lm` |
| Mistral | 7B | 4.1GB | `ollama run mistral` |
| Llama 2 | 7B | 3.8GB | `ollama run llama2` |
| Code Llama | 7B | 3.8GB | `ollama run codellama` |
| Llama 2 Uncensored | 7B | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B | 13B | 7.3GB | `ollama run llama2:13b` |
| Llama 2 70B | 70B | 39GB | `ollama run llama2:70b` |
| Orca Mini | 3B | 1.9GB | `ollama run orca-mini` |
| Vicuna | 7B | 3.8GB | `ollama run vicuna` |

```shell
➜ Documents ollama run llama2
pulling manifest
pulling 22f7f8ef5f4c... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 7.0 KB
pulling 7c23fb36d801... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 59 B
pulling 2759286baa87... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 105 B
pulling 5407e3188df9... 100% ▕██████████████████████████████████████████████████████████████████████████████████▏ 529 B
verifying sha256 digest
writing manifest
removing any unused layers
success
```

Asking for a poem (with a typo in the prompt), it misunderstands and offers a story instead:

```
write me a potry
Sure, I'd be happy to write a short story for you! Can you give me some details on what you would like the story to be about? For example,
is it a romance, a mystery, a sci-fi adventure, or something else entirely? And how long would you like the story to be?
```

Asking in Chinese also works, but the answer comes back in English:

```
寫一個笑話給我 (write me a joke)
Of course! Here's a short humorous story for you:

One day, a man walked into a library and asked the librarian, "Do you have any books on Pavlov's dogs and Schrödinger's cat?"

The librarian replied, "It rings a bell, but I'm not sure if it's here or not."
```

Forcing it to reply in Traditional Chinese seems to work, more or less:

```
➜ ollama-go ollama run llama2

寫一首詩給我, reply in zh_tw (write me a poem, reply in zh_tw)
Of course, I'd be happy to write a poem for you! Here it is:

紅色的陽光在天上照亮,
顯示出世界的美好。
每一個人都有自己的story,
filled with laughter, tears, and glory.

在沒有你的時候,我們會繁榮;
在你回來的時候,我們會歡喜。
你是我們的家,是我們的心靈伴侶,
不管 WHERE 你去,我們都会等你回來。

I hope you like it! Let me know if you have any requests or preferences for the poem.
```


### Call Ollama through its local REST API

Conveniently, once Ollama is set up, you can call it not only through the `ollama` CLI but also through a REST API served locally on port 11434.

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "very briefly, tell me the difference between a comet and a meteor",
  "stream": false
}'
```

With `"stream": false`, the reply arrives as a single JSON object:

```json
{"model":"llama2","created_at":"2023-12-11T14:41:36.760949Z","response":"\nSure! Here's the difference between a comet and a meteor:\n\nComets are icy bodies that originate from the outer reaches of the solar system. They are composed of dust, ice, and rock, and they have a long, elliptical orbit around the sun. When a comet approaches the inner solar system, the sun's heat causes the comet to release gas and dust, creating a bright tail that can be seen from Earth.\n\nMeteors, on the other hand, are small rocks or pieces of debris that enter Earth's atmosphere. As they travel through the atmosphere, they burn up due to friction with the air, producing a bright streak of light in the sky, commonly known as a shooting star. The remains of the meteoroid can sometimes survive entry into the atmosphere and land on Earth as a meteorite.\n\nSo, while both comets and meteors are objects in space, the key difference is that comets are icy bodies that originate from outside the solar system, while meteors are small rocks or pieces of debris that originate from within the solar system (primarily from asteroids).","done":true,"context":.......}
```


# Write a simple LangChain and Ollama application

The following program connects to the local Ollama server through [langchaingo](https://github.com/tmc/langchaingo), sends a system message and a human message, and streams the reply to stdout as it is generated:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/schema"
)

func main() {
	// Connect to the llama2 model served by the local Ollama instance.
	llm, err := ollama.NewChat(ollama.WithLLMOptions(ollama.WithModel("llama2")))
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	completion, err := llm.Call(ctx, []schema.ChatMessage{
		schema.SystemChatMessage{Content: "Give a precise answer to the question based on the context. Don't be verbose."},
		schema.HumanChatMessage{Content: "What would be a good company name for a company that makes colorful socks? Give me 3 examples."},
	}, llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
		// Print each chunk as it arrives from the model.
		fmt.Print(string(chunk))
		return nil
	}),
	)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(completion)
}
```
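To build the program, you need a Go module with langchaingo as a dependency. A minimal sketch of the `go.mod` (the module path is illustrative, not prescriptive):

```
module example.com/ollama-go

go 1.21

// populate the require block with: go get github.com/tmc/langchaingo
```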


You can find the complete code on [GitHub](https://github.com/kkdai/ollama-go).
