When the Inmates Run the AI: Make Explanations for People, Not Programmers
We often celebrate tools that can explain themselves, but sometimes the explainers are built for the wrong crowd.
Many systems give answers that please the people who built them, not the people who actually use them.
That’s a problem when users need to trust the system and make informed choices.
If we want better tools, we need to rethink explainable AI so it helps everyday people, not just coders.
Good design means thinking about what users need, what they already know, and what they worry about.
That’s where the social and behavioural sciences come in — they study how people think, why they trust, and how to tell stories that make sense.
Fixing this won’t happen by tweaking code alone; it requires listening to real users, testing with them, and learning from fields outside tech.
Imagine explanations that feel clear and calm, not confusing or scary.
We can get there, if makers stop designing for themselves and start designing for the rest of us.
Read the comprehensive article review on Paperium.net:
Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
🤖 This analysis and review was primarily generated and structured by an AI. The content is provided for informational and quick-review purposes.