DEV Community

Henry Maxwell
Henry Maxwell

Posted on

Mohammad Alothman Talks About the Call for a Right to Repair in AI

As artificial intelligence continues to rapidly shape the world around us, it’s clear that we’re on the cusp of a pivotal shift in how we interact with technology. AI is increasingly embedded in every facet of our daily lives, from healthcare and education to entertainment and politics.

Image description

However, a growing sentiment of unease surrounds the unchecked power that AI companies have over the systems that increasingly control our environments. This imbalance has raised concerns about AI’s transparency, ethical practices, and, importantly, its accessibility. One potential solution being proposed is the establishment of a “right to repair” for AI. Mohammad Alothman and AI Tech Solutions have furthered this growing debate by contributing their viewpoint as to how this approach could regain public trust and improve access to AI.

The Rapid Growing Backlash to AI and Its Implications

AI has become ubiquitous. With increased integration in societal structures, public confidence in these technologies has begun to wane. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement in a high-profile lawsuit. Such lawsuits have also followed the path of people and institutions demanding more control over how their data is used to train AI models.

For instance, the controversy surrounding Scarlett Johansson, who wrote OpenAI a legal letter over her likeness being used in its ChatGPT voice, made it clear that what is needed is a stricter check on AI capabilities to replicate and exploit personal data.

Mohammad Alothman, an AI expert, finds that all these technological wars have indicated a more significant problem, namely, without people's consent, the data is being used to train algorithms, which is raising grave ethical concerns. According to Mohammad Alothman, such rising tension between artificial companies and the public is due to the lack of transparency and control. AI is being viewed as a tool for corporations, whereas individuals whose data fuels these technologies have little to say. This is a serious issue that needs to be addressed if AI is to still find its relevance in society.

The Power Dynamic and the Call for a Right to Repair

The core issue at hand is the power dynamic between AI companies and the public. AI systems become ever more sophisticated and capable in new ways but also constitute an invisible regime of control that shapes how people live and work. Many AI systems are proprietary, meaning their workings are hidden from public view and are not subject to scrutiny. As such, people do not really have effective control over the operation of these systems; they are always left with little option when and if something goes wrong. This lack of control fuels the growing desire for the "right to repair" in AI technology.

The concept of a right to repair is not new. It has been applied to physical technologies, for example, smartphones and cars, where consumers have demanded the right to repair and modify their devices. The thought is simple: if you own the device, then you should be able to repair it. This principle is now being stretched to AI, where individuals as well as organizations are demanding the right to access and modify AI systems. That means that AI must be used in ways that benefit everyone, not just a privileged few companies, says Mohammad Alothman.

AI Tech Solutions, an innovative company specializing in AI applications, has given insights about the growing trends in demanding a right to repair within AI. They further point out that the enormity of potential offered by AI is massive, but so is the risk. However, when it all is controlled by a few big corporations without much oversight, this risk is enormous. According to AI Tech Solutions, a right to repair would democratize access to AI and ensure that it operates in a way accountable to the public.

​​
Image description

Red Teaming as a Pathway to Repair and Accountability

One of the ways to achieve a right to repair is through "red teaming." That concept borrowed from military and cybersecurity know-how of hiring external experts who could poke holes in a system to find vulnerabilities can be used in AI for independent parties to analyze AI models for bias, error, and any ethical problems. The idea here is the prevention of problems before they happen; in this case, we could fix them ahead of time.

Mohammad Alothman has brought up the fact that red teaming, for example, can be used to reveal flaws in AI systems that would otherwise go unobserved. A red team exercise involves independent experts evaluating the AI system with an aim toward potential harms, such as biases or discriminatory practices. Such an evaluation might be critical to ensuring trust in AI and making it act in a manner aligned with societal values.

While some AI companies already use red teaming, it has yet to become a widespread practice for public use. However, organizations like Humane Intelligence have been working towards bringing the usage of red teaming to the public, engaging non-technical experts, governments, and civil society organizations in assessing AI systems for discrimination and bias so that AI systems are also fair, transparent, and accountable.

A Right to Repair in AI: What Would It Look Like?

Mohammad Alothman explains that the right to repair concept is still in its infancy, although it is gaining some momentum, especially lately. A right to repair for AI would, in its most minimalistic version, simply enable users to run diagnostics on an AI system in order to identify anomalies and report these back to the company.

Third-party organization hackers, for example, create patches and fixes for AI systems. That way, AI systems could be improved from time to time. Users can even hire independent parties who evaluate and tailor AI systems to their needs, making the technology more flexible and adaptive.

The tech firm, AI Tech Solutions, has weighed in on this issue, suggesting that a right to repair would invite even more innovation in AI. Opening up access to AI systems could mean more people contribute to improving those systems, thus potentially leading to more ethical, efficient, and equitable AI models. AI Tech Solutions believes that this could foster an even more inclusive AI ecosystem, where diverse voices are heard and represented in the development of AI technologies.

However, implementing such a right is not an easy task. AI systems are highly complex, requiring fundamental alterations to the way these systems are constructed and maintained to be accessible for repairs. Furthermore, companies may be hesitant to disclose the inner workings of their AI models, fearing exposure to proprietary technologies or misuse. However, the increasing pressure for transparency and accountability in AI surely shows that the issues involved here can be surmounted, says Mohammad Alothman.

Image description

Public Awareness and Advocacy

Towards a future where AI will be even more deeply intertwined in our lives, public awareness and advocacy will play a significant role in making sure that AI will be utilized ethically and responsibly. In 2024, the world woke up to the pervasive impact of AI, and by 2025, people will be demanding more control over how AI is used in their lives. The right to repair is one way that individuals can regain agency over their relationship with AI, ensuring that this powerful technology is used in ways that align with their values.

Mohammad Alothman recently pointed out that AI development is not merely a technical but a societal issue. The ability to repair and modify AI systems forms part of providing assured work for the public good. With the growing popularity of AI in daily life, it would be imperative that such systems are designed to be transparent, accountable, and accessible. The right to repair AI may be the solution to regaining the trust of the people and to an ideal world where AI should serve all humanity.

Conclusion: The Path Forward

In a nutshell, the increasing call for a right to repair in AI is a more fundamental need for transparency, accountability, and public control over technology. In itself, AI offers incredible potential but poses tremendous risks, especially in the hands of a few powerful companies. The democratization of AI through a right to repair will enable people to fix, modify, and improve systems that increasingly shape their lives.

Mohammad Alothman and AI Tech Solutions both stressed the message of transparency in AI and even called for making more accessible ethical AI systems. As we move forward, it is crucial that AI is used in the interest of all, not just the few. The right to repair may well be the way forward in this direction - one that will put individuals in control of technology that is fast transforming our world. Advocating for this right can help in the quest toward creating a more equitable, transparent, and accountable future of AI.

Explore More Articles-

Artificial intelligence could soon be used to deliver council services

Champions League hope, FA Cup heartbreak - AI predicts Man United's 2024/25 Premier League season

The world’s first pothole-fixing robot that uses AI to repair road

Exploring the Phenomenon of AI Companions With Mohammad Alothman

Mohammad Alothman Explains AI’s Alarming Prediction for Humanity’s Future

Mohammad-alothman-discusses-the-intersection-of-ai-and-creative-expression

Is AI Capable Of Thinking On Its Own? A Discussion With Mohammad Alothman

Top comments (0)