Developers generally give system prompts to the model or fine-tune it to make it sound more human. Studies suggest this problem remains unsolved: AI has vast data and strong reasoning ability, yet it still lacks a human-like sense. Prompting is inefficient, and fine-tuning LLMs demands a lot of computation and time.
So what's the solution? How do we make AI more human-like while keeping computation and training time low?
That is the SLM, yes, you heard right: the Small Language Model. Fine-tuning an SLM is quicker and more efficient. Moreover, instead of fine-tuning billions of parameters, we can update only a small subset of the network's weights using PEFT (Parameter-Efficient Fine-Tuning).
This helps the model give better results on the data we trained it on. So, if the developer wants it to sound more human, it can.
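To see why PEFT is so much cheaper than full fine-tuning, here is a minimal sketch of the parameter arithmetic behind LoRA, one popular PEFT technique. The sizes (`d`, `r`) are hypothetical, chosen only for illustration: LoRA freezes the original weight matrix and trains two small low-rank matrices instead.

```python
# Minimal sketch of LoRA-style PEFT parameter savings (hypothetical sizes).
# A full d x d weight matrix is frozen; only two low-rank adapter
# matrices A (d x r) and B (r x d) are trained.

def full_finetune_params(d):
    """Trainable parameters when updating the full d x d weight matrix."""
    return d * d

def lora_params(d, r):
    """Trainable parameters for a rank-r LoRA adapter on a d x d matrix."""
    return 2 * d * r  # A contributes d*r, B contributes r*d

d, r = 4096, 8  # an example hidden size and a small adapter rank
full = full_finetune_params(d)
lora = lora_params(d, r)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x fewer")
```

With these example numbers, the adapter trains roughly 256 times fewer parameters than full fine-tuning of that single matrix, which is why PEFT fits on modest hardware.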
If you want to read the next part, which focuses on SLM architecture and its uses, kindly follow me for more such content and please support the work.
The comment section is open for any type of question; I will be happy to answer them.
Thank You!!