
Jonathan Flower

Posted on • Originally published at blog.jonathanflower.com on

Metacognition is susceptible to stochasticity

Excellent advice from Sam Schillace, Deputy CTO of Microsoft, on building with AI. What does it mean?

metacognition is susceptible to stochasticity

In other words, LLMs are easily confused. Their outputs can be unpredictable. It’s better to offload the planning aspect to code, which can provide more structured and deterministic guidance for the LLM.

Sam Schillace goes on to explain:

the model’s good at thinking, but it’s not good at planning. So you do planning in code.

What excellent guidance on how best to create value with AI models.

In a recent client project, we wondered what the right balance was between relying on the LLM's reasoning ability versus using code to help guide the LLM. When we relied more on the LLM, the AI Agent handled unusual questions better, but its responses were less deterministic. The version that relied on code and predefined prompts, chosen based on the user's objective, was much more predictable.

For instance, we wanted the AI Agent to ask questions about the user's preferences before recommending a product. The model would randomly recommend a product earlier in the conversation than we wanted. What worked best was using code to hold back the product-recommendation prompt until specific conditions were met (such as how many questions the user had answered).
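For the curious, here is a minimal sketch of that pattern in Python. Everything in it is illustrative: the threshold, the prompt text, and the `build_messages` helper are assumptions for this post, not our client's actual code.

```python
# Hypothetical sketch: gate the "recommend a product" instruction behind
# explicit conditions tracked in code, instead of trusting the model to wait.
# MIN_PREFERENCE_ANSWERS, the prompt text, and build_messages are illustrative.

MIN_PREFERENCE_ANSWERS = 3

BASE_SYSTEM_PROMPT = (
    "You are a shopping assistant. Ask one question at a time about the "
    "user's preferences. Do not recommend a product yet."
)

RECOMMEND_PROMPT = (
    "You now have enough information. Recommend one product and explain "
    "how it matches the stated preferences."
)


def build_messages(conversation: list[dict], answers_collected: int) -> list[dict]:
    """Assemble the message list, adding the recommendation instruction
    only once the user has answered enough preference questions."""
    system = BASE_SYSTEM_PROMPT
    if answers_collected >= MIN_PREFERENCE_ANSWERS:
        system = f"{BASE_SYSTEM_PROMPT}\n\n{RECOMMEND_PROMPT}"
    return [{"role": "system", "content": system}, *conversation]
```

Because the gate lives in code, the behavior is deterministic on every run, no matter how eager the model happens to be that day.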

Here is a link to the episode and more of my favorite quotes:

Presenting the AI Engineer World’s Fair — with Sam Schillace, Deputy CTO of Microsoft

Sam Schillace:

This is a little bit of an anthropomorphism and an illusion that we’re having. So like when we look at these models, we think there’s something continuous there.

We’re having a conversation with ChatGPT or whatever, with Azure OpenAI or like, like what’s really happening? It’s a little bit like watching claymation, right? Like when you watch claymation, you don’t think that the clay model is actually really alive. You know that there’s like a bunch of still, disconnected frames that your mind is connecting into a continuous experience.

But what happens is when you’re doing plans and you’re doing these longer running things that you’re talking about, that second level, the metacognition is very vulnerable to that stochastic noise, which is like, I totally want to put this on a bumper sticker that like metacognition is susceptible to stochasticity would be like the great bumper sticker.

So what, these things are very vulnerable to feedback loops when they’re trying to do autonomy, and they’re very vulnerable to getting lost.

So what we’ve learned, to answer your question of how you put all this stuff together, is: the model’s good at thinking, but it’s not good at planning. So you do planning in code. So you have to describe the larger process of what you’re doing in code somehow.

Having that like code exoskeleton wrapped around the model is really helpful, like it keeps the model from drifting off and then you don’t have as many of these vulnerabilities around memory that you would normally have.
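In practice, Sam's "code exoskeleton" looks something like the sketch below: the plan is an ordered list that code walks through, and the model only does the "thinking" inside each step. The step text, the loop shape, and the `call_llm` placeholder are my own assumptions for illustration, not anything from the episode.

```python
# Rough sketch of the "code exoskeleton" idea: the plan lives in code as an
# ordered list of steps, and the model is asked to reason about one step at
# a time. call_llm is a placeholder for whatever model client you use.

PLAN = [
    "Summarize the user's stated preferences.",
    "List three candidate products that fit those preferences.",
    "Pick the single best candidate and justify the choice.",
]


def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this to your model provider.")


def run_plan(user_context: str) -> list[str]:
    """Walk the plan step by step; code decides what happens next,
    and the model only fills in the content of each step."""
    results: list[str] = []
    for step in PLAN:
        prompt = f"{user_context}\n\nTask: {step}\n\nPrevious results: {results}"
        results.append(call_llm(prompt))
    return results
```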

Image credit: DALL-E 3 (I ran with Sam’s suggestion that this would look great as a bumper sticker. I selected the best out of 8 tries. Pretty hilarious how poorly it spells words like metacognition and stochasticity. I was going to avoid including it in the post, but then I realized it illustrates Sam’s point perfectly.)
