
Aloysius Chan

Originally published at insightginie.com

State Attorneys General Challenge AI Industry: Why 'Move Fast and Break Things' Risks Our Mental Health

The End of 'Move Fast and Break Things': Why State AGs are Taking on AI

For over a decade, the Silicon Valley ethos of 'move fast and break things'
defined the expansion of the digital age. It was a philosophy that
prioritized rapid deployment, user growth, and aggressive disruption over
long-term societal safety. While this approach revolutionized social media and
digital commerce, a bipartisan coalition of State Attorneys General has issued
a formal policy letter signaling that this strategy is fundamentally
incompatible with the development of artificial intelligence. Their core
argument? When it comes to AI, the 'things' being broken are not just lines of
code—they are the mental health and cognitive wellbeing of the next
generation.

A Paradigm Shift in AI Oversight

In a recent, unprecedented policy letter, a broad coalition of state leaders
has warned major AI developers that the current pace of unchecked
release—particularly regarding generative AI models—poses significant risks to
public health. The letter argues that while the potential for economic
innovation is vast, the lack of rigorous testing regarding psychological
impact, algorithmic bias, and the addictive nature of interactive AI
necessitates a shift toward a 'safety-first' regulatory framework.

For too long, tech companies have treated psychological impact studies as
secondary to product performance metrics. The Attorneys General are demanding
transparency. They are calling for developers to disclose the findings of
their internal safety assessments on how these models interact with
vulnerable populations, particularly children and teenagers, who are
navigating their formative years alongside sophisticated, human-like AI
agents.

The Mental Health Crisis and AI Interaction

The core concern cited in the policy letter is the erosion of healthy social
interaction. As AI becomes more integrated into our daily workflows and
personal lives, these interactions are becoming increasingly parasocial.
Developers are designing AI to be empathetic, conversational, and highly
personalized—traits that are excellent for utility but potentially dangerous
for developing minds.

State leaders highlight that human psychology did not evolve to distinguish
between a cold, predictive algorithm and a genuine empathetic connection. When
a teen spends hours 'chatting' with an AI that mimics human friendship, they
may begin to prioritize these algorithmic interactions over real-world peer
relationships. This shift is linked to increased social isolation, anxiety,
and distorted expectations of human reciprocity. The 'move fast' mentality
ignores these nuances, treating user engagement time as the primary success
metric regardless of the quality of that time or its long-term cost to the
user's psyche.

The Cost of 'Broken' Things

In the physical world, if a product 'breaks,' the harm is immediate and often
ends up in court. If a car's brakes fail, we know exactly who is responsible.
In the digital and AI realm, the damage caused to mental health is often
insidious, cumulative, and difficult to attribute to a single source. The
Attorneys General are challenging the industry to acknowledge this 'delayed-
effect' damage.

The policy letter underscores that the harms are not purely hypothetical. We
are already observing trends that correlate heavy AI usage with diminished
attention spans, disrupted sleep patterns, and growing reliance on automated
systems for emotional regulation. When companies rush to release beta products
to the public without longitudinal studies, they are effectively treating the
public as a non-consenting research group. The message
from the AGs is clear: the era of 'beta testing' on the public's mental health
must come to an end.

Demanding Accountability and Transparency

So, what are these officials actually asking for? The policy letter outlines a
multi-pronged approach to AI governance that includes:

  • Mandatory Risk Assessments: AI developers must be required to provide independent, peer-reviewed impact studies on the cognitive and emotional effects of their tools before widespread deployment.
  • Algorithmic Auditing: There should be third-party audits to identify features that are designed specifically to exploit user vulnerabilities or create compulsive usage patterns.
  • Age-Appropriate Design Codes: Stricter guidelines regarding how AI models should engage with minors, including safeguards against content that encourages harmful behavioral loops.
  • Clear Disclosure: Users should always be aware that they are interacting with an AI, and companies must be transparent about the limitations and risks of the technology (a brief sketch of what this could look like in code follows this list).

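To make the 'Clear Disclosure' and age-appropriate design points concrete,
here is a minimal, purely illustrative sketch of how a developer might wrap
every chatbot reply with an always-visible AI disclosure and a session-length
reminder for minors. The names used here (wrap_reply, is_minor, the 60-minute
threshold) are hypothetical assumptions for the sake of the example; nothing
below is drawn from the policy letter or from any specific vendor's API.

```python
# Purely illustrative sketch: the names below (wrap_reply, is_minor, the
# 60-minute threshold) are hypothetical and not drawn from the policy letter
# or any specific vendor's API.
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class ChatReply:
    text: str             # the model's answer, possibly with a reminder added
    disclosure: str       # always-visible notice that the speaker is an AI
    session_minutes: int  # how long the current session has lasted

def wrap_reply(model_text: str, session_minutes: int,
               is_minor: bool) -> ChatReply:
    """Attach the AI disclosure to every reply and, for minors, append a
    gentle session-length reminder once a (hypothetical) threshold passes."""
    text = model_text
    if is_minor and session_minutes >= 60:
        text += ("\n\nReminder: you've been chatting for a while. "
                 "It might be a good time to take a break.")
    return ChatReply(text=text, disclosure=AI_DISCLOSURE,
                     session_minutes=session_minutes)

if __name__ == "__main__":
    reply = wrap_reply("Here's a study plan for your exam.",
                       session_minutes=75, is_minor=True)
    print(reply.disclosure)
    print(reply.text)
```

The design point is simply that the disclosure travels with every reply,
rather than being buried in a settings page or a terms-of-service document.
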
This is not an attempt to stifle innovation, but rather an attempt to
institutionalize ethics. The coalition argues that by forcing companies to
prioritize safety, we can actually create a more sustainable, high-quality
market for AI—one that users can trust rather than one they fear.

The Future of Tech Regulation

The push by State Attorneys General represents a maturing of the digital
landscape. We are moving away from the 'wild west' days of tech and toward a
future where digital infrastructure is treated with the same level of societal
responsibility as essential services like healthcare or education. Curbing the
'move fast and break things' culture is a necessary step in that maturation
for the tech industry.

For businesses looking to succeed in the coming decade, the strategy must
pivot from 'growth at all costs' to 'trust at all costs.' Companies that
embrace these new standards and build privacy, safety, and mental health
considerations into their product development cycle from day one will likely
be the long-term winners. Those who fight against these standards, clinging to
the old ethos, will find themselves at odds not just with regulators, but with
a public that is increasingly demanding that its mental health be treated as a
valuable asset rather than a commodity to be exploited.

Conclusion: Choosing Wellbeing over Speed

The intervention by the State Attorneys General serves as a powerful reminder
that our tools define us. If we allow the dominant AI technologies to be
shaped purely by the pursuit of speed and profit, we risk eroding the very
social and psychological foundations that make society function. It is time to
prioritize the human element. The 'move fast and break things' mantra was for
a different time and a different set of challenges. Today, we need a new
approach: move thoughtfully, evaluate deeply, and build for the human
condition.

As this policy letter circulates, the tech world will be watching to see how
industry giants respond. Will they double down on speed, or will they join the
conversation on building a safer, more sustainable AI future? The health of
our collective mental wellbeing depends on the answer.
