When the latest version of ChatGPT was released in May, it came with some emotive voices that made the chatbot sound more human than ever.
Listeners called these voices “flirty”, “convincingly human” and “sexy”. Social media users said they “fell in love” with the chatbot.
But on Thursday, OpenAI, the creator of ChatGPT, released a report warning that ChatGPT’s human-like updates could lead users to form an emotional reliance on the chatbot.
“Users might form social relationships with the AI, reducing their need for human interaction – potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report said.
ChatGPT can now answer questions voice-to-voice, with the ability to remember key details and use them to personalize the conversation, OpenAI noted. The effect? Talking to ChatGPT now feels very close to talking to a human being – if that person never judged you, never interrupted you, and never held you accountable for what you said.
This standard of AI interaction could change the way human beings interact with each other and “influence social norms,” the report said.
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: https://t.co/MYHZB79UqN
Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks. pic.twitter.com/uuthKZyzYx
— OpenAI (@OpenAI) May 13, 2024
OpenAI said early testers spoke to the new ChatGPT in ways that suggested they were forming an emotional connection with it. Testers said things like “This is our last day together,” which OpenAI said expressed “shared bonds.”
Meanwhile, experts are asking whether it’s time to reconsider how realistic these voices should be.
“Is it time to stop and consider how this technology affects human interaction and relationships?” Alon Yamin, co-founder and CEO of AI plagiarism checker Copyleaks, told Entrepreneur.
“[AI] should never replace real human interaction,” Yamin added.
OpenAI said more testing over longer periods, along with independent research, could help it better understand the risk.
Another risk highlighted in the report was hallucination, or AI-generated inaccuracies. A human-sounding voice could instill more trust in listeners, leading to less fact-checking and more misinformation.
OpenAI is not the first company to comment on the impact of AI on social interactions. Last week, Meta CEO Mark Zuckerberg said that Meta has seen many users turn to AI for emotional support. The company is also reportedly looking to pay celebrities millions to clone their voices for AI products.
The release of OpenAI’s GPT-4o sparked conversations about AI safety, following the resignations of leading researchers such as former chief scientist Ilya Sutskever.
It also drew criticism from Scarlett Johansson, who called out the company for creating an AI voice that she said sounded “eerily similar” to her own.