
Tushar

How can AI answer more like a human?


Developers generally give the model a system prompt or fine-tune it to make its answers more human-like. Yet the problem remains largely unsolved: the model has plenty of data and reasoning ability, but its answers still lack a human touch. Prompting alone is unreliable, and fine-tuning a large language model takes a lot of computation and time.
So what is the solution? How can we make AI answer more like a human while keeping computation low and training time short?

The answer is the SLM, yes, you heard right: the Small Language Model. Fine-tuning an SLM is quicker and more efficient. Moreover, instead of fine-tuning billions of parameters, we can update only a small subset of them using PEFT (Parameter-Efficient Fine-Tuning).

This lets the model give better results on whatever we trained it for, so if the developer wants it to sound more human-like, it will. A rough sketch of what such a fine-tuning run can look like follows below.
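Here is a minimal sketch of PEFT fine-tuning with LoRA adapters on a small language model, using the Hugging Face `transformers` and `peft` libraries. The model name, the dataset file `human_style_replies.jsonl`, and the hyperparameters are illustrative assumptions, not something from this post; adapt them to your own setup.

```python
# Sketch: LoRA-based PEFT fine-tuning of a small language model (SLM).
# Assumptions: TinyLlama as the base SLM and a local JSONL file of
# conversational, human-sounding replies with a "text" field.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small adapter matrices instead of the full parameter set.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# Assumed dataset of human-style replies, one JSON object per line.
dataset = load_dataset("json", data_files="human_style_replies.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-human-style",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the small LoRA adapter is saved, which keeps storage and compute low.
model.save_pretrained("slm-human-style-adapter")
```

Because only the adapter weights are trained and saved, this kind of run fits on a single consumer GPU, which is the low-computation, low-time benefit the article is pointing at.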


If you want to read the next part, which focuses on the SLM architecture and its uses, kindly follow me for more content like this and please support the work.
The comment section is open for questions of any kind; I will be happy to answer them.
Thank you!


