
ChatGPT might be making its most frequent users lonelier, according to a joint study conducted by OpenAI and MIT Media Lab. Since launching over two years ago, ChatGPT has become a phenomenon, with over 400 million people using it every week. While the platform is not designed or marketed as an AI companion, a subset of users engage emotionally with ChatGPT, which prompted the study.
The researchers used a two-pronged method for the study. First, they analysed millions of chat conversations and audio interactions with ChatGPT while surveying over 4,000 users on their self-reported behaviour with the bot. Second, the MIT Media Lab recruited 1,000 people for a four-week trial, examining how they interacted with ChatGPT for a minimum of five minutes each day.
While feelings of loneliness and social isolation are influenced by many factors, the study authors concluded that participants who trusted and "bonded" with ChatGPT more were likelier than others to be lonely and to rely on the bot more heavily.
"Overall, higher daily usage - across all modalities and conversation types - correlated with higher loneliness, dependence, and problematic use, and lower socialization," the study highlighted.
'Advantages diminished'
The researchers also conducted an in-depth analysis of users interacting with ChatGPT's Advanced Voice Mode, a speech-to-speech interface. The bot was programmed to interact in two modes: neutral and engaging. In the neutral mode, the bot maintained a flat tone regardless of the user's emotional state, while in the engaging mode, the LLM-powered bot expressed emotion openly.
"Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot," the study stated.
Though the technology is still at a nascent stage, the researchers said the study may help start a conversation about its full impact on users' mental health.
"A lot of what we're doing here is preliminary, but we're trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is," said Jason Phang, an OpenAI safety researcher who worked on the project.
The study comes against the backdrop of OpenAI releasing GPT-4.5, which the company claims is a more intuitive and emotionally intelligent model than its predecessor and competitors.