Big AIs learn from examples — small ones stick to gut feeling
Researchers studied how language models learn from examples shown in the prompt right before a question, a setup known as in-context learning.
They found that small models mostly follow their built-in semantic priors: even when the examples flip the labels (say, marking happy text as sad), they still answer by meaning, not by the examples.
Bigger models, however, can change course and follow the examples even when labels are flipped, so they actually learn the new mapping.
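To make the flipped-label setup concrete, here is a minimal sketch of how such few-shot prompts are typically built. The function name, the sentiment labels, and the example texts are illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of the flipped-label setup (illustrative, not the
# paper's exact code). The finished prompt would be sent to a language
# model, which predicts the next token as its label guess.

def build_prompt(examples, query, flip_labels=False):
    """Format few-shot sentiment examples, optionally flipping labels."""
    flip = {"positive": "negative", "negative": "positive"}
    lines = []
    for text, label in examples:
        shown = flip[label] if flip_labels else label
        lines.append(f"Input: {text}\nLabel: {shown}")
    lines.append(f"Input: {query}\nLabel:")  # the model completes this line
    return "\n\n".join(lines)

examples = [
    ("What a wonderful, happy day!", "positive"),
    ("This was a miserable experience.", "negative"),
]

print(build_prompt(examples, "I absolutely loved it.", flip_labels=True))
# With flipped labels, a small model still tends to answer "positive"
# (its semantic prior); a large model tends to answer "negative",
# following the rule implied by the examples.
```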
They also tested labels that were semantically unrelated words, like foo and bar, so that nothing in the label matched the meaning of the text.
Only large models could pick up those arbitrary mappings and use them to answer, which shows that scale matters for genuine learning from context.
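The unrelated-label test can be sketched the same way. The foo/bar mapping, the accuracy helper, and the sample predictions below are hypothetical stand-ins chosen for illustration, not the authors' evaluation code.

```python
# Sketch of the semantically-unrelated-label setup: meaningless tokens
# replace the real labels, so the prompt alone defines the rule.
UNRELATED = {"positive": "foo", "negative": "bar"}

def build_unrelated_prompt(examples, query):
    """Few-shot prompt in which the labels carry no semantic hint."""
    lines = [f"Input: {text}\nLabel: {UNRELATED[label]}"
             for text, label in examples]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

def mapping_accuracy(predictions, gold_labels):
    """Fraction of model answers that follow the foo/bar rule.
    Chance is 50%; per the paper, only large models score well above it."""
    hits = sum(pred == UNRELATED[gold]
               for pred, gold in zip(predictions, gold_labels))
    return hits / len(gold_labels)

preds = ["bar", "foo", "bar"]             # hypothetical model outputs
golds = ["negative", "positive", "positive"]
print(mapping_accuracy(preds, golds))     # ~0.67: followed the rule 2/3 times
```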
Another result: instruction-tuned models (models given extra training to follow instructions) get better both at using prior knowledge and at learning new label rules from context, but the tuning strengthens their reliance on priors more, so they are actually worse at overriding flipped labels.
In short, bigger, well-trained models can adapt to flipped or arbitrary examples, while smaller ones mostly fall back on prior knowledge. That tells us how, and when, AIs can truly learn from the examples you give them.
Read the comprehensive review on Paperium.net:
Larger language models do in-context learning differently