I fine-tuned my model on a new programming language. You can do it too! 🚀

Nevo David for Gitroom

I have been using OpenAI's GPT-4 for a while now.
I don't have a lot of bad stuff to say about it.
But sometimes, it's not enough.

At Winglang, we wanted to use OpenAI's GPT-4 to answer people's questions based on our documentation.

Your options are:

  • Use the OpenAI Assistant or another vector database with retrieval-augmented generation (RAG). This worked reasonably well since Wing looks like JS, but there were still many mistakes.
  • Pass the entire documentation into the context window, which is super expensive.

Soon enough, we realized that was not going to work.
It's time to host our own LLM.


Your LLM dataset

Before we train our model, we need to create the data it will be trained on - in our case, the Winglang documentation. I will do something pretty simple:

  1. Extract all the URLs from the sitemap, send a GET request to each one, and collect the content.
  2. Parse it: we want to convert all the HTML into readable text.
  3. Run it through GPT-4 to convert the content into a CSV dataset (a sketch of the first two steps follows this list).
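
Here is a minimal sketch of steps 1 and 2, assuming the sitemap lives at https://www.winglang.io/sitemap.xml (that URL is my assumption) and using requests and BeautifulSoup:

import csv
import requests
from bs4 import BeautifulSoup
from xml.etree import ElementTree

# Step 1: pull every page URL out of the sitemap
# (the sitemap URL is an assumption - adjust it for your site)
sitemap_xml = requests.get("https://www.winglang.io/sitemap.xml").text
ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
urls = [loc.text for loc in ElementTree.fromstring(sitemap_xml).iter(ns + "loc")]

# Step 2: fetch each page and strip the HTML down to readable text
pages = []
for url in urls:
    html = requests.get(url).text
    pages.append(BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True))

# Step 3 happens in GPT-4; write whatever Q&A rows it gives you
# into a one-column CSV named "text" (see the format below)
with open("dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])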


Once you finish, save the CSV with a single column named text that contains both the question and the answer. We will use it later. It should look something like this:

text
<s>[INST]How to define a variable in Winglang[/INST] let a = 'Hello';</s>
<s>[INST]How to create a new lambda[/INST] bring cloud; let func = new cloud.Function(inflight () => { log('Hello from the cloud!'); });</s>
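
For context, this is the prompt template Mistral's instruct models are trained with: <s> and </s> mark the start and end of a sequence, and [INST]...[/INST] wraps the user's instruction, with the answer following the closing [/INST]. Fine-tuning on the same format keeps the model's prompting conventions intact.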

Save it on your computer in a new folder called data.


Autotrain your model

My computer is pretty weak, so I have decided to go with a smaller model - 7B parameters: mistralai/Mistral-7B-Instruct-v0.2 (the same model passed to --model in the command below).

There are millions of ways to train a model. We will use Hugging Face AutoTrain and its CLI, without writing any Python code 🚀

When you use AutoTrain from Hugging Face, you can train on your own computer (my approach here) or on their servers (for a fee), which also lets you train larger models.

I have no GPU on my old MacBook Pro M1 (2021). Thank you, Apple 🍎.

Let's install AutoTrain:

pip install -U autotrain-advanced
autotrain setup > setup_logs.txt

Then, all we need to do is run the autotrain command:

autotrain llm \
--train \
--model "mistralai/Mistral-7B-Instruct-v0.2" \
--project-name "autotrain-wing" \
--data-path data/ \
--text-column text \
--lr "0.0002" \
--batch-size "1" \
--epochs "3" \
--block-size "1024" \
--warmup-ratio "0.1" \
--lora-r "16" \
--lora-alpha "32" \
--lora-dropout "0.05" \
--weight-decay "0.01" \
--gradient-accumulation "4" \
--quantization "int4" \
--mixed-precision "fp16" \
--peft
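
A quick note on these flags, since they are what make training feasible on weak hardware: --peft together with the --lora-* options trains small LoRA adapter matrices instead of all 7B weights, --quantization "int4" loads the base model in 4-bit precision to cut memory use, and --gradient-accumulation "4" with --batch-size "1" simulates a batch of 4 without holding it all in memory at once.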

Once it finishes, you will have a new directory called "autotrain-wing" with the fine-tuned model 🚀


Playing with the model

To play with the model, start by running:

pip install transformers torch

Once completed, create a new Python file named invoke.py with the following code:

from transformers import pipeline

# Path to your local fine-tuned model directory
model_path = "./autotrain-wing"

# Load the model and tokenizer from the local directory.
# We fine-tuned a causal language model, so we use the
# "text-generation" pipeline (not "text-classification").
generator = pipeline("text-generation", model=model_path, tokenizer=model_path)

# Prompt in the same format the model was trained on
prompt = "<s>[INST]How to define a variable in Winglang[/INST]"
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])

And then you can run it with the CLI command:

python invoke.py

And you are done 🚀


Keep on working on your LLMs

I am still learning about LLMs.
One thing I realized is that it's not so easy to track changes to your models.

You can't really use Git for this: a model can exceed 100 GB, and Git doesn't handle files that large well.

A better way to do this is with a tool called KitOps.

I think it will soon be a standard in the world of LLMs, so make sure you star the library so you can use it later.

  1. Download the latest KitOps release and install it.

  2. Go to the model folder and run the command to pack your LLM:

    kit pack .
    
  3. You can also pack it with a tag for a registry such as Docker Hub by running

    kit pack . -t [your registry address]/[your repository name]/mymodelkit:latest
    

    💡 To learn how to use Docker Hub, check this
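
One detail the steps above gloss over: kit pack expects a Kitfile manifest in the folder that describes what to package (see the KitOps docs for the format). Also, tagging during pack does not upload anything by itself; the upload is a separate command. A minimal sketch, reusing the article's placeholder registry address:

kit push [your registry address]/[your repository name]/mymodelkit:latest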


⭐️ Star KitOps so you can find it again later ⭐️



I started a new YouTube channel mostly about open-source marketing :)

(like how to get stars, forks, and clients)

If that's something that interests you, feel free to subscribe to it here:
https://www.youtube.com/@nevo-david?sub_confirmation=1

Top comments (29)

Shai Ber

Nice! Is there a way to access the LLM you trained?

Brad Micklea

Very soon KitOps will have a Dev mode command that will make it easy to run the LLM locally and interact with it via prompts or chats, as well as experiment with various parameters. You can star our repo or, better yet, join our Discord: discord.gg/Tapeh8agYy

Sammy Scolling

Can't wait 🤞

Brad Micklea

Good news - KitOps dev mode is here! Now you can run and interact with an LLM locally (no internet or GPUs required) with a single Kit CLI command.
dev.to/kitops/kitops-release-v02-i...

Collapse
 
nevodavid profile image
Nevo David

Awesome!



Nevo David

I will dm you :)

Brad Micklea

You can, KitOps dev mode lets you run and interact with an LLM locally (no internet or GPUs required) with a single Kit CLI command.
dev.to/kitops/kitops-release-v02-i...

Nathan Tarbert

Looks like I need to check out KitOps!

Nevo David

You do!

Morgan

I'll check KitOps out!

Andrew

Interesting, almost no Python code :)

Nevo David

A little bit :)

Jesse Williams

Check out our Discord; we have a lot coming down the pipeline in the next few weeks. discord.gg/Tapeh8agYy

Nevo David

🚀

Ayush Thakur

Very informative blog, Nevo

Nevo David

Thank you so much!

Mathew

I love it!

Nevo David

πŸ™πŸ»

Johny

Awesome!

Nevo David

🚀

Benjamin

Autotrain really simplifies everything.

Nevo David

It is!

mark-friedman

So what were your results? How did your small, but fine-tuned, model do with questions and code generation tasks with your new language compared to your RAG-based approach with GPT-4?

Pablo E. Cabrol

Have you tried Gen App Builder from Google? It lets you use a set of data, with no code, to get natural-language access to your documentation. I'm doing my homework to test it myself. 😊

Matija Sosic

Thanks for sharing - we've actually been thinking about doing something similar for Wasp, our full-stack framework for React & Node.js! (github.com/wasp-lang/wasp)