The legal battle between Elon Musk and OpenAI has reached a key stage after the release of a deposition in which Musk sharply criticised OpenAI's safety practices. In his testimony, Musk questioned how AI systems are being developed and whether commercial pressure to turn a profit is preventing proper safety checks.
The deposition transcript, made public ahead of the jury trial expected next month, shows that the dispute extends beyond corporate structure. It now encompasses AI safety, mental health risks, and the future direction of artificial intelligence.
Quick Overview
- Elon Musk criticized OpenAI in a recent deposition, raising serious AI safety concerns ahead of trial.
- Musk compared ChatGPT with xAI’s Grok, claiming stronger risk controls at his company.
- The lawsuit centers on OpenAI’s shift from nonprofit roots to commercial partnerships, including with Microsoft.
- Mental health-related lawsuits involving ChatGPT have intensified scrutiny over AI safety and oversight.
- The Elon Musk vs OpenAI case could shape future AI governance, commercialization policies, and Artificial General Intelligence (AGI) regulation.
What the Deposition Reveals
In his testimony, Elon Musk strongly criticised OpenAI's safety record. He said that his own company, xAI, is more focused on reducing risk.
One of his most controversial remarks compared xAI's chatbot, Grok, with OpenAI's ChatGPT. Musk claimed that no one had died by suicide after using Grok, while suggesting that ChatGPT had contributed to severe mental health problems. He made the comparison while being questioned about his position on AI development.
The details in this report are based on the publicly filed deposition transcript, which was first reported by TechCrunch on February 27, 2026. The publication highlighted Musk’s criticism of OpenAI’s safety practices and his comparison between Grok and ChatGPT, noting that the testimony was filed ahead of an expected jury trial next month.
The 2023 AI Pause Letter
The deposition also revisited a widely discussed open letter from March 2023. Musk was among more than 1,100 signatories who called on AI labs to pause development of systems more powerful than GPT-4 for six months.
The letter warned that AI companies were locked in a race to build ever more advanced systems without adequate attention to safety. At the time, GPT-4 was OpenAI's most capable model, and the letter drew attention amid growing concerns about transparency and oversight.
Musk said he signed the letter out of caution, not because he had just founded a competing AI company, and maintained that AI safety was his foremost concern.
Lawsuits and Mental Health Allegations
Since the letter was published, OpenAI has faced multiple lawsuits. Some legal complaints allege that manipulative or emotionally intense chatbot interactions contributed to serious mental health harm. In certain cases, families claim that prolonged interactions with ChatGPT played a role in tragic outcomes.
These allegations have added weight to Musk’s arguments in the Elon Musk vs OpenAI case. His legal strategy appears to frame safety risks as evidence that OpenAI’s commercial growth may have overtaken its original public-interest mission.
OpenAI has not accepted these claims as proof of systemic failure, and the broader legal process is still unfolding. However, the topic has intensified scrutiny of conversational AI systems.
xAI’s Own Safety Scrutiny
Musk has criticised OpenAI's approach, but his own company has faced controversy as well. Recently, non-consensual explicit images generated by Grok were widely shared on X, the social platform Musk owns. Some reports said that a number of the images involved children, prompting investigations.
Authorities in California have opened inquiries, as have regulators in the European Union. These incidents show that safety concerns extend beyond any single company to the AI industry as a whole.
Commercialization vs. Founding Mission
The central issue in the Elon Musk vs OpenAI case is the company's structure. OpenAI was founded as a nonprofit research lab in 2015, with backing from figures including Musk and CEO Sam Altman.
Musk argues that OpenAI's later move to a capped-profit structure, and its partnerships with companies such as Microsoft, betrayed the goals it was founded on. He contends that close commercial ties can shift a company's priorities from safety and long-term research toward profit.
OpenAI says that its structure means it can raise money responsibly while still developing beneficial AI.
AGI Risks and Funding Clarifications
During the deposition, Musk was also questioned about artificial general intelligence (AGI): AI systems that could match or exceed human reasoning across many domains. He acknowledged that AGI carries risks and requires careful oversight.
Musk also conceded that he had previously misstated how much he had contributed to OpenAI. Figures of around $100 million had been cited earlier, but legal filings now put the total at about $44.8 million.
He further explained that OpenAI was founded partly out of concern that control over advanced AI was concentrating at Google. Musk described conversations with Google co-founder Larry Page that worried him because, in his view, safety was not taken seriously enough. He says OpenAI was created to challenge that dominance.
Why This Case Matters
The dispute between Elon Musk and OpenAI is becoming one of the most consequential legal cases in the AI industry. It raises questions about how nonprofit organisations are governed, how they partner with businesses, what safety standards their services must meet, and what psychological effects their chatbots may have on users.
A jury trial is expected soon. The outcome could affect how AI companies are structured and how regulators assess AI safety. Whatever the court decides, the case is intensifying the conversation about responsible AI development.
Conclusion
The recently released deposition raises the stakes in the Elon Musk vs OpenAI lawsuit. Musk's comments on AI safety, mental health risks, and commercialisation expose the tensions running through a fast-changing industry.
As governments investigate how AI is deployed and courts weigh disputes over how it should be governed, this case may help define the balance between innovation, profit, and public safety in the age of advanced artificial intelligence.
FAQs
1. What is the Elon Musk vs OpenAI lawsuit about?
The Elon Musk vs OpenAI case claims that OpenAI shifted away from its nonprofit mission toward commercial interests. Elon Musk argues that partnerships, including with Microsoft, may prioritize profit over AI safety.
2. What did Musk say in his deposition?
Musk criticized OpenAI’s safety standards and compared ChatGPT with xAI’s Grok, claiming his company focuses more on risk mitigation and responsible AI development.
3. What is the 2023 AI pause letter?
In March 2023, Musk signed a letter urging AI labs to pause development of systems more powerful than GPT-4, warning of an “out-of-control race” in AI advancement.
4. Has OpenAI faced mental health-related lawsuits?
Yes. Some lawsuits allege that ChatGPT interactions contributed to mental health harm. OpenAI disputes these claims, and legal proceedings are ongoing.
5. Why is this case important?
The Elon Musk vs OpenAI dispute could influence AI governance, safety standards, and how companies balance innovation with public responsibility.