There is no point, seriously not even one, where you can take the output text for granted. You have to follow each word the model writes, because in between those words there is implicitly a decision the AI took that drifts from the original context.
And by the time you realize it, you are already ten iterations deeper, because you did not push back when you should have.
Each step in the session is a perfect moment for an adversarial audit and an anti-deception check.
There is no fun in context poisoning when you are trying to do serious work. Better to be attentive and put in some extra effort at the start than to try to understand and piece the puzzle back together at the end of a mountain of generated content.
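One way to make that per-step audit concrete is to pin your hard constraints at the start of the session and mechanically re-check every model reply against them, rather than trusting the flow. This is a minimal, hypothetical sketch (the function name and the keyword-matching heuristic are my own illustration, not a real tool); a crude substring check will miss paraphrased drift, but it catches the silent dropping of requirements that the comment describes:

```python
def audit_response(response: str, pinned_constraints: list[str]) -> list[str]:
    """Return the pinned constraints the response no longer mentions.

    A crude drift proxy: any constraint whose key phrase has vanished
    from the model's output gets flagged for a manual push-back now,
    instead of being discovered ten iterations later.
    """
    lowered = response.lower()
    return [c for c in pinned_constraints if c.lower() not in lowered]


# Constraints stated once at the start of the session:
constraints = ["python 3.11", "no external dependencies", "streaming output"]

# A reply that quietly dropped two of them:
reply = "Here is a solution using the requests library with streaming output."
flagged = audit_response(reply, constraints)
print(flagged)  # the constraints this reply silently dropped
```

Running the audit after every single reply, not just at the end, is the point: each flagged constraint is a drifted decision you can challenge while it is still one iteration deep.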