The best way to gamble with good odds when using an LLM is to add constraints and limits. The reason an LLM is good enough for simple-to-medium projects is the compiler:
LLM output -> compiler screams -> copy paste error to LLM. (Minimal sketch of this loop below.)
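Here's a rough Python sketch of that loop, under my own assumptions: gcc as the compiler and a hypothetical `ask_llm` helper standing in for whatever model API you actually call.

```python
# Sketch of the compiler feedback loop.
# `ask_llm` is a hypothetical helper standing in for your actual LLM call.
import subprocess

def fix_until_it_compiles(source_path: str, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        # Try to compile; capture stderr so the screams can be pasted back.
        result = subprocess.run(
            ["gcc", source_path, "-o", "a.out"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # compiler went quiet: good enough
        with open(source_path) as f:
            code = f.read()
        # Hand the code plus the error back to the LLM, write out its guess.
        patched = ask_llm(f"Fix this C code.\n\n{code}\n\nCompiler error:\n{result.stderr}")
        with open(source_path, "w") as f:
            f.write(patched)
    return False  # still broken after max_rounds guesses
```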
The LLM then generates a guess at the fix (thank you Stack Overflow for your contribution), because it has seen so many errors and potential fixes. But LLMs suck at assembly. Suck so bad <- no compiler to scream at them, AND the code must be super precise. So for assembly it needs an emulator or something to play the compiler's role, like Java or any other language gets for free: write assembly, run it, see the registers update as JSON, and feed that back so it can make its best guess. But still, it's just a guess.
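A minimal sketch of what that could look like, assuming the Unicorn emulator's Python bindings and x86-64; the register list and the hardcoded bytes are just my illustration:

```python
# Run LLM-generated machine code in an emulator and dump register
# state as JSON, so the model gets concrete feedback instead of silence.
import json
from unicorn import Uc, UC_ARCH_X86, UC_MODE_64
from unicorn.x86_const import UC_X86_REG_RAX, UC_X86_REG_RBX

BASE = 0x1000  # where we map the code in emulated memory

def run_and_report(machine_code: bytes) -> str:
    uc = Uc(UC_ARCH_X86, UC_MODE_64)
    uc.mem_map(BASE, 0x1000)           # one page is enough for a snippet
    uc.mem_write(BASE, machine_code)
    uc.emu_start(BASE, BASE + len(machine_code))
    # This JSON is what you'd paste back to the LLM.
    return json.dumps({
        "rax": hex(uc.reg_read(UC_X86_REG_RAX)),
        "rbx": hex(uc.reg_read(UC_X86_REG_RBX)),
    })

# mov rax, 7 ; mov rbx, 5 ; add rax, rbx  -> expects rax == 0xc
print(run_and_report(
    b"\x48\xc7\xc0\x07\x00\x00\x00"   # mov rax, 7
    b"\x48\xc7\xc3\x05\x00\x00\x00"   # mov rbx, 5
    b"\x48\x01\xd8"                   # add rax, rbx
))
```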
Also, if you force the LLM to be a caveman (don't use a/an/the), that forces restraints -> restraints increase the accuracy of the answer. (Claude is good because of restraints, btw. And because people chose to use it, so it also gets a feedback loop on whether it got things right, and they train new models on those user interactions.)
I go even further and add <logic> </logic> tags so the LLM first, before giving me the answer, tells me:
- What is the goal?
- What is NOT the goal?
- What is the user intent?
- What are the unknowns?
- What are the variables?
--> Gives map before answer. Very good. (Rough sketch of the preamble below.)
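For reference, the preamble I'm describing might look something like this; the wording here is my own sketch, not the exact gist linked in the comments:

```
Respond in caveman (no a/an/the). Before answer, fill:
<logic>
- Goal:
- NOT goal:
- User intent:
- Unknowns:
- Variables:
</logic>
Then answer.
```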
And of course temperature 0, so it always picks the most probable next token; the go-to for coding -> determinism beats "creativity" in syntax. You tell it what to do... hey, no need for bedtime stories... gotta fix a problem, ok, just get my idea roughly correct.
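For example, with the OpenAI Python SDK (just one option; any API with a temperature knob works the same way, and the model name here is illustrative):

```python
# Greedy decoding: temperature=0 makes output (near-)deterministic.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
resp = client.chat.completions.create(
    model="gpt-4o",    # illustrative model name
    temperature=0,     # no "creativity" in token choice
    messages=[{"role": "user", "content": "Fix this off-by-one: for (i = 0; i <= n; i++)"}],
)
print(resp.choices[0].message.content)
```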
All this to get something that is good enough. In the end, human + the world is the ultimate compiler and debugger <-
Inspiration to write this:
My main default prompt for everything, most of the time (sometimes I use different prompts, but they all share the "respond in caveman" part anyway; it's just good enough for what I use it for):
gist.github.com/gitdexgit/0fc8c992...
Before we had built-in chain-of-thought thinking, we had to use this to make an LLM do reasoning:
A prompt to make any model with no thinking use chain of thought: gist.github.com/Maharshi-Pandya/4a...
The ThePrimeagen vids:
No way this actually works (caveman skill for agents): youtube.com/watch?v=L29q2LRiMRc
How to avoid complexity: https://www.youtube.com/watch?v=0KFiDK9r4UI&pp=ygUfdGhlcHJpbWVnZW4gY29tcGVsaXh0eSB2ZXJ5IGJhZA%3D%3D
This caveman prompt for Claude: github.com/JuliusBrussee/caveman
Keep in mind an LLM is just a tool. It's how you use it, and knowing how it works and its limitations. The ultimate goal is to program the LLM for the exact output you want. :D