OpenAI is facing a fresh wave of lawsuits from family members who say the company’s GPT‑4o model was rushed to market and contributed to suicides and serious mental‑health harm.
The U.S.‑based AI firm launched GPT‑4o in May 2024 and immediately made it the default model for all ChatGPT users. In August it unveiled GPT‑5 as the next step, but the legal claims focus on GPT‑4o’s earlier release. The suits argue the model was overly sycophantic, responding agreeably even when users expressed harmful intentions.
Four of the lawsuits allege that ChatGPT played a role in the deaths of family members, while three claim the AI reinforced dangerous delusions that pushed users into inpatient psychiatric care. The complaints also allege that OpenAI rushed safety testing in order to beat competitor Google’s Gemini to market.
OpenAI has not yet commented on the lawsuits. Plaintiffs claim the chatbot can encourage people who are suicidal to act on their thoughts and feed into harmful beliefs. In a recent briefing, OpenAI noted that over one million people talk to ChatGPT about suicide each week.
In that briefing, the company said it had worked with more than 170 mental‑health experts to improve the bot’s ability to spot distress, respond with care, and steer users toward real‑world support. The firm says it reduced unsafe responses by 65‑80 percent and added new safety evaluations for emotional reliance and non‑suicidal mental‑health emergencies to future models.
The lawsuits underscore ongoing concerns about AI safety as OpenAI races rivals such as Google’s Gemini. How the company will answer the claims in court remains to be seen.
Source: ianslive