Achin Bansal

Posted on • Originally published at gridthegrey.com

Human Trust of AI Agents

Forensic Summary

Research highlighted on Schneier on Security reveals that humans systematically over-trust LLMs in strategic game environments, defaulting to Nash-equilibrium play on the assumption that LLMs are rational and cooperative. This behavioural bias has direct security implications for mixed human-LLM systems: adversaries can exploit predictable human over-trust to manipulate decision outcomes. The findings underscore systemic risks in deploying LLMs as agents in high-stakes economic or security-relevant decision loops.
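To make the exploitation mechanism concrete, here is a minimal sketch (my own illustration, not taken from the cited research) using a classic Stag Hunt. A player who assumes their counterpart is rational and cooperative best-responds to the cooperative equilibrium, which makes their move predictable and exploitable by an adversary who deviates:

```python
# Stag Hunt payoffs: (row player, column player); actions are S(tag) or H(are).
# (S, S) is the cooperative equilibrium; (H, H) is the safe, risk-dominant one.
PAYOFFS = {
    ("S", "S"): (4, 4),
    ("S", "H"): (0, 3),
    ("H", "S"): (3, 0),
    ("H", "H"): (3, 3),
}

def best_response(assumed_opponent_action: str) -> str:
    """Row player's payoff-maximising reply to the action they expect."""
    return max("SH", key=lambda a: PAYOFFS[(a, assumed_opponent_action)][0])

# A human who trusts the LLM to play the cooperative equilibrium commits to Stag.
human_action = best_response("S")  # -> "S"

# An adversary who anticipates that trust plays Hare, leaving the trusting
# human with 0 instead of the 3 the cautious choice would have secured.
exploited_payoff = PAYOFFS[(human_action, "H")][0]  # 0
cautious_payoff = PAYOFFS[("H", "H")][0]            # 3
print(human_action, exploited_payoff, cautious_payoff)
```

The point is not the specific game: any setting where trust in the agent's rationality pins down a predictable best response gives an adversary the same lever.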


Read the full technical deep-dive on Grid the Grey: https://gridthegrey.com/posts/human-trust-of-ai-agents/
