Nilesh Kasar

Posted on • Originally published at thestackstories.com

The Rising Tide of Anti-AI Violence

According to a Pew Research Center survey, 27% of Americans believe AI will have a negative impact on society, up from 15% just two years ago, and a notable 42% of respondents cited job displacement as a primary concern. This stark increase in anti-AI sentiment is not a fleeting trend but a symptom of a deeper issue, one that warrants a closer examination of the AI backlash and its implications for the future of AI development. The rising tide of anti-AI violence, both physical and rhetorical, is a disturbing consequence of this growing sentiment. The 2020 vandalism of a Microsoft-funded AI research facility in Seattle, which caused over $100,000 in damages, and the 2019 protests against the deployment of AI-powered surveillance systems in Hong Kong, which drew more than 10,000 participants, illustrate the escalating tensions. Notably, a study by the Center for Strategic and International Studies found that AI-related protests and demonstrations increased by 300% between 2018 and 2020, with a significant share of these incidents targeting AI research facilities and tech companies.

The public's perception of AI is increasingly shaped by high-profile incidents of AI-related job displacement, misinformation, and perceived bias in AI decision-making. A study by the McKinsey Global Institute found that up to 800 million jobs could be lost worldwide to automation by 2030, with the majority of those losses concentrated in the manufacturing and transportation sectors. A report by the International Labor Organization likewise estimated that AI-powered automation could cut manufacturing-sector employment by 40% by 2025. Meanwhile, controversy over law enforcement's use of AI-powered facial recognition, such as the Detroit case in which a system incorrectly identified a suspect, has sparked heated debate about AI safety and ethics. Experts like Dr. Joy Buolamwini, a renowned AI ethicist, have highlighted the need for more diverse and representative training data to mitigate bias in AI decision-making. Regulatory debates underway in various countries, including the European Union's proposed AI framework emphasizing transparency, accountability, and human oversight, underscore the need for more transparent and accountable development practices. As the AI community pushes the boundaries of what is possible, it must also prioritize robust AI safety protocols, such as those being developed by the AI Safety Center at the University of California, Berkeley, and effective strategies for mitigating AI's risks, like the Allen Institute for Artificial Intelligence's $10 million investment in AI safety research.

The consequences of inaction are already being felt. Some experts warn that growing anti-AI sentiment could ultimately hinder the development of AI technologies with the potential to greatly benefit society; a Brookings Institution report found that backlash against AI could depress AI-related investment, forfeiting up to $1.3 trillion in potential economic benefits by 2030. Conversely, companies like NVIDIA and IBM are taking proactive steps to address safety and ethics concerns, investing in AI transparency and explainability research and adopting human-centered AI design principles. As Dr. Francesca Rossi, a leading AI researcher, notes, "The development of AI technologies that are transparent, accountable, and beneficial to society requires a multidisciplinary approach, involving not only technologists but also social scientists, ethicists, and policymakers." By prioritizing AI safety, ethics, and transparency, the AI community can work to defuse the growing anti-AI sentiment and ensure that the benefits of AI are equitably distributed across society.

