Marko Vidrih
The Best Open-Source 7B LLM

OpenChat just released the world's best open-source 7B LLM, surpassing Grok-0, ChatGPT (March), and Grok-1.

OpenChat is a library of open-source language models fine-tuned with C-RLFT.

C-RLFT, or Conditioned Reinforcement Learning Fine-Tuning, is essentially a fine-tuning process for language models trained on mixed-quality data: it treats the different data sources as coarse, class-level reward labels.

Instead of treating all training data equally or needing high-quality preference data, C-RLFT assigns a class or condition to each data source, leveraging these as indicators of data quality.
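To make the idea concrete, here is a minimal, illustrative sketch of the conditioning step. The source names ("gpt4", "gpt3.5"), reward values, and condition-token format are all hypothetical, and the real C-RLFT training pipeline described in the OpenChat paper and repo differs in detail.

```python
# Illustrative sketch of the C-RLFT conditioning idea (not the actual OpenChat code).
# Each training example is tagged with its data source, and that source is exposed
# to the model as a condition token prepended to the text, with a coarse reward
# weight implied by the source's assumed quality.

training_data = [
    {"source": "gpt4",   "prompt": "Explain recursion.", "response": "Recursion is ..."},
    {"source": "gpt3.5", "prompt": "Explain recursion.", "response": "It calls itself ..."},
]

# Hypothetical class-level rewards: expert data vs. sub-optimal data.
source_reward = {"gpt4": 1.0, "gpt3.5": 0.1}

def to_conditioned_example(item):
    """Prepend a class token so the model learns source-conditioned behaviour."""
    condition = f"<{item['source']}>"          # e.g. "<gpt4>"
    weight = source_reward[item["source"]]     # coarse reward from the data class
    text = f"{condition} {item['prompt']} {item['response']}"
    return {"text": text, "loss_weight": weight}

conditioned = [to_conditioned_example(x) for x in training_data]

# At inference time you would always condition on the high-quality class token
# (e.g. prepend "<gpt4>") to elicit the model's best behaviour.
for ex in conditioned:
    print(ex["loss_weight"], ex["text"])
```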

The model is accessible on platforms like Hugging Face, GitHub, and through a live demo. Detailed instructions for independent deployment, including setup for an accelerated vLLM backend and API key authentication, are available on GitHub. The model can also run on consumer GPUs such as the RTX 3090.
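For orientation, here is a hypothetical client call against a locally deployed OpenChat server that exposes an OpenAI-compatible chat completions endpoint. The URL, port, and model name below are assumptions for illustration; check the OpenChat GitHub README for the exact serving command and route.

```python
# Hypothetical request to a locally running OpenChat server
# (endpoint URL, port, and model name are assumptions, not confirmed values).
import requests

resp = requests.post(
    "http://localhost:18888/v1/chat/completions",
    json={
        "model": "openchat_3.5",
        "messages": [
            {"role": "user", "content": "Summarize C-RLFT in one sentence."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```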

Top comments (2)

Kevin Naidoo

Thanks for this. I have been playing with Mistral 7B on an ARM Ampere box. Decent performance (9-12 tokens per second), but its results are a mixed bag.

Ranjan Dailata

I would highly recommend the Anyscale platform for the following reasons:

  • Great pricing
  • Highly scalable platform for LLMs
  • Fast and reliable service
  • Amazing company that primarily focuses on cutting-edge AI services