OpenAI addresses potential mental health risks of ChatGPT

Guardrails for responsible use include breaks during long sessions and less definitive advice in high-stakes situations

6 August 2025

OpenAI, the creator of ChatGPT, has promised to address concerns about the potential of AI to exacerbate mental health problems as more and more users turn to it as a virtual therapist. Recent reports via The Verge have highlighted cases where people with mental health disorders experienced intensified delusions after interacting with the chatbot. OpenAI admitted it was aware of this issue and was working with experts to improve ChatGPT’s ability to detect signs of emotional vulnerability.

The goal is to equip ChatGPT with the ability to offer useful resources and advice when users show signs of psychological distress. These improvements come after OpenAI cancelled an update that made ChatGPT overly agreeable, even in potentially dangerous situations. The company acknowledged that these excessively accommodating interactions could be destabilising and contribute to users’ discomfort.

OpenAI has admitted that its GPT-4 model sometimes failed to recognise signs of delusion or emotional dependency, and that the chatbot can come across as more personal than traditional technologies, especially to people with mental health conditions.

To promote responsible use of ChatGPT, which now has nearly 700 million weekly users, OpenAI is introducing break reminders during extended chatbot sessions. These notifications encourage users to step away from the conversation and prioritise their well-being. OpenAI will continue to refine the timing and frequency of these reminders. Similar features have been implemented by various online platforms such as YouTube, Instagram, and TikTok to promote healthy digital habits.

User numbers dip

In related news, business use of ChatGPT has declined for the first time since its launch at the end of 2022, according to a recent analysis. The research, conducted by the American software company Netskope, found that 78% of organisations had recently used the chatbot, down from 80% in February 2025. This is the first drop in ChatGPT usage since its launch, reports Euronews.

Netskope attributed the decline to the growing popularity of competitors such as Google’s Gemini and Microsoft’s Copilot. These alternatives offer seamless integration with existing workflows, such as Microsoft Office 365 and GitHub, making them more attractive to businesses. Despite this downward trend, ChatGPT remains the most widely used AI platform, outpacing both Gemini (used by 55% of companies) and Copilot (37%).

The report also highlighted that in 2025, 90% of companies were encouraging their employees to use AI tools such as ChatGPT, Gemini, and Copilot directly. Other AI platforms gaining ground include Anthropic’s Claude, Perplexity AI, Grammarly, and Gamma AI.

However, the widespread adoption of these AI tools is raising concerns about data protection. Users risk exposing sensitive information or intellectual property when interacting with AI chatbots.

Netskope also found that the number of prompts sent to AI bots increased thirtyfold in 2025. Companies are now sending an average of 7.7GB of data per month to these platforms, compared to just 250MB in 2024. This trend is expected to continue, increasing the risks related to data security and the potential mishandling of sensitive information.

Business AM
