LLMs are powerful AI models, but they can hallucinate, drift out of context, or simply produce the wrong output. You need to run their output through layers of verification so that only correct data gets through, and the verification layers must feed their results back into the next iteration.
The important question then becomes: when do you stop if it ends up in a never-ending cycle? Do you stop after a certain number of retries and take the best of all the failed outputs, or do you keep running until the output passes a certain threshold?
It very much depends on the problem you are trying to solve.
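The loop described above can be sketched roughly as follows. This is a minimal illustration, not a production implementation: `generate` and `verify` are hypothetical placeholders for your LLM call and your verification layers, and the `threshold`/`max_retries` parameters are assumptions. It combines both stopping strategies: exit early when the score clears the threshold, otherwise stop after a retry cap and return the best failed attempt.

```python
# Sketch of a generate -> verify -> feedback loop with two stopping rules:
# a pass threshold and a retry cap with best-of-failed fallback.

def generate(prompt: str, feedback: list[str]) -> str:
    # Hypothetical LLM call; a real implementation would append the
    # accumulated feedback to the prompt for the next attempt.
    return prompt.upper() if feedback else prompt

def verify(output: str) -> tuple[float, list[str]]:
    # Hypothetical verification layer; returns a score in [0, 1]
    # plus feedback messages for anything that failed the check.
    issues = [] if output.isupper() else ["output must be uppercase"]
    return (0.0 if issues else 1.0), issues

def run(prompt: str, threshold: float = 0.9, max_retries: int = 3) -> str:
    best_output, best_score = "", -1.0
    feedback: list[str] = []
    for _ in range(max_retries):
        output = generate(prompt, feedback)
        score, feedback = verify(output)
        if score >= threshold:
            return output            # passed all verification layers
        if score > best_score:       # remember the best failed attempt
            best_output, best_score = output, score
    return best_output               # cap reached: best-of-failed
```

In practice the `verify` step is where most of the design effort goes: schema validation, grounding checks against source data, or a secondary model acting as a judge can each contribute to the score.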
What are your thoughts and experiences about building iterative agentic patterns?