In this article, I am going to review small language models. There has been a lot of hype on social media and other platforms about them: claims that AI is now in our own pocket, that it works without the internet, and that it is private and secure.
At first this seemed interesting, and I was curious, so I started installing them on my local machine. There are a number of ways to run models locally:
- Using Ollama
- Using LMStudio
I preferred Ollama because it felt easiest for me: after installation, we just have to run `ollama run <model_name>`, and that's it, our setup is done.
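As a sketch, the whole workflow looks like this (the `llama3.2` tag is just an example on my part; any model from the Ollama library works):

```shell
# Download a model once (name is an example; pick any from the Ollama library)
ollama pull llama3.2

# Start an interactive chat session in the terminal
ollama run llama3.2

# Or send a single prompt non-interactively
ollama run llama3.2 "Hi"
```

These commands assume the Ollama daemon is installed and running, and the first pull will download several gigabytes of weights.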
Now an instance is fired up in the terminal, ready for prompts.
Everything works well up to this point. But the problem is resource cost: even for a simple 'Hi', the model consumes extensive resources, CPU and RAM alike. In my view, that's not good.
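One way to see this for yourself is to watch the resident memory of the running model process. Ollama ships an `ollama ps` subcommand for this, and on Linux you can also read `/proc` directly. The helper below is a hypothetical sketch (the name `rss_mb` and the `self` default are mine, not part of Ollama), Linux-only:

```python
import re
from pathlib import Path

def rss_mb(pid: str = "self") -> float:
    """Resident set size in MB for a process, read from /proc (Linux only).

    Pass the PID of the ollama server process to see how much RAM a
    loaded model is holding.
    """
    status = Path(f"/proc/{pid}/status").read_text()
    match = re.search(r"VmRSS:\s+(\d+)\s+kB", status)
    return int(match.group(1)) / 1024

# Example: memory of this script itself; swap in ollama's PID to measure the model.
print(f"{rss_mb():.1f} MB resident")
```

Polling this before and after sending a prompt makes the jump in memory use very visible.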