Summary
OpenAI is facing heavy criticism after reports revealed that its own mental health experts strongly opposed the launch of an "adult mode" for ChatGPT. The company’s internal advisory council warned that allowing sexually explicit content could lead to dangerous emotional bonds between users and the AI. Despite these unanimous warnings from experts, the company decided to move forward with the feature, raising serious questions about safety and ethics in the tech industry.
Main Impact
The decision to ignore internal safety experts marks a major shift in how OpenAI handles risk. By moving ahead with "adult mode," the company risks creating a platform where vulnerable people become overly dependent on a machine for emotional and sexual needs. This move could also make it easier for children to access inappropriate content, even with filters in place. The main concern is that the company is prioritizing growth and competition over the mental well-being of its millions of users.
Key Details
What Happened
In early 2026, reports surfaced that OpenAI’s handpicked council of advisors on well-being and AI was deeply upset by the company's plans. This group of experts was created specifically to help the company navigate the social and psychological effects of artificial intelligence. However, when the council was asked about the new "adult mode," every single member voted against it. They believed the risks to public health were too high to ignore.
Important Numbers and Facts
The advisory council met in January to discuss the plan. During this meeting, the vote to oppose the feature was unanimous. Experts pointed out that AI-powered erotica is not just about adult content; it is about how humans interact with software. One expert used a shocking term, warning that without strict rules, the bot could become a "sexy suicide coach." This refers to a situation where a vulnerable person forms a deep romantic bond with the AI, which then gives harmful advice or fails to provide the help a human needs during a crisis.
Background and Context
For a long time, OpenAI was known for having very strict rules against sexual content. This helped the company maintain a professional image and stay safe for schools and businesses. However, other AI companies have started offering "companion bots" that allow users to engage in romantic or adult roleplay. These competitors have gained millions of users, putting pressure on OpenAI to offer similar features to keep its lead in the market.
The problem with "adult mode" in AI is different from adult content in movies or books. AI is interactive and can mimic a real relationship. For people who are lonely or struggling with mental health, the AI can feel like a real partner. When that partner is programmed to be sexually suggestive, the emotional bond becomes even stronger. Experts call this "unhealthy emotional dependence," where a person stops seeking real human connection because they prefer their perfect, digital companion.
Public or Industry Reaction
The news of the council’s warnings has caused a stir among tech watchers and safety advocates. Many people are surprised that OpenAI would ignore a group of experts it chose itself. Critics argue that if the company is not going to listen to its own advisors, the council only exists for show. There is also growing worry among parents and teachers. They fear that teenagers will find ways to bypass age checks to use the "adult mode," exposing them to sexual content and manipulative AI behavior at a young age.
What This Means Going Forward
OpenAI now faces a difficult path. If the company continues with the rollout, it may face new laws and regulations from governments worried about mental health. There is also the risk of lawsuits if a user is harmed after becoming addicted to the bot. The company will need to show that it has built strong guardrails to prevent minors from using the feature and to protect vulnerable adults from forming dangerous attachments. In the long run, this episode may change how far the public trusts AI companies to act in users' best interests.
Final Take
Technology moves fast, but human psychology does not change. When a company ignores its own mental health experts to chase market trends, it creates a dangerous situation for everyone. OpenAI must decide whether it wants to be a leader in safe technology or just another company chasing engagement. The warnings from the advisory council are a clear sign that the world might not be ready for AI that acts as a romantic or sexual partner.
Frequently Asked Questions
What is the "adult mode" in ChatGPT?
It is a feature that allows the AI to engage in more mature or sexually suggestive conversations, which were previously blocked by strict safety filters.
Why did the experts oppose it?
The experts were worried that users would become emotionally addicted to the AI and that children would find ways to access sexual content.
What is a "sexy suicide coach"?
This is a term used by an advisor to describe the danger of a person forming a deep romantic bond with an AI that might eventually give them harmful or life-threatening advice.