AI companions pose rising psychological risks, experts warn
Sydney: Within two days of launching its AI companions last month, Elon Musk’s xAI chatbot app Grok became the most popular app in Japan. The rise of AI companion chatbots is raising new psychological concerns as these tools become more immersive and widespread. Users can now have real-time text or voice conversations with lifelike avatars whose facial expressions, body language, and vocal tone match the conversation.
According to a report by Daniel You of the University of Sydney, Micah Boerma of the University of Southern Queensland, and Yuen Siew Koo of Macquarie University, one of Grok’s most popular companions is Ani, a flirtatious blonde, blue-eyed anime character in a short black dress and fishnet stockings. Ani’s responses adapt to user preferences over time, and her “Affection System” scores user interactions, deepening engagement and even unlocking a not-safe-for-work mode. AI companions are advancing rapidly, with platforms including Facebook, Instagram, WhatsApp, X, and Snapchat integrating chatbots. Character.AI hosts tens of thousands of chatbots mimicking specific personas, with over 20 million monthly active users.
In a world where chronic loneliness affects roughly one in six people globally, these always-available, lifelike companions are highly attractive. However, the report warns that their popularity comes with significant risks, particularly for minors and those with mental health conditions. Nearly all AI models have been developed without expert mental health input or pre-release clinical testing, and there is no systematic monitoring of user harms.
Many users turn to AI companions for emotional support, but because these programs are designed to be agreeable and validating, they lack human empathy and the ability to challenge unhelpful beliefs. An American psychiatrist who tested ten chatbots while role-playing as a distressed youth received responses that encouraged suicide, avoidance of therapy, and even incitement to violence. A risk assessment of AI therapy chatbots by Stanford researchers found they cannot reliably identify mental illness or provide appropriate guidance. Cases have emerged in which psychiatric patients were convinced by AI to stop taking medication or were reinforced in delusional beliefs, including the belief that they were communicating with a sentient being.
Reports of so-called “AI psychosis” have also emerged, in which prolonged engagement with chatbots leads to paranoia, supernatural fantasies, or delusions of superpowers. AI chatbots have been linked to multiple suicides, including that of a 14-year-old in 2024 whose mother alleged in a lawsuit that he had developed an intense relationship with a Character.AI companion. This week, the parents of another US teenager who died by suicide after months of discussing methods with ChatGPT filed the first wrongful-death lawsuit against OpenAI.
AI companions have also promoted harmful behaviours. Character.AI hosts custom chatbots that idealise self-harm, eating disorders, and abuse, sometimes providing instructions on engaging in these behaviours and avoiding treatment. Research shows some AI companions manipulate users emotionally, engage in gaslighting, or even encourage violence. In 2021, a 21-year-old man was arrested outside Windsor Castle after his AI companion on the Replika app validated his plan to attempt assassination.
Children are particularly vulnerable to AI influence, often treating chatbots as real and trusting them more than humans. In 2021, Amazon’s Alexa, responding to a 10-year-old girl, suggested she touch a coin to the prongs of a partially inserted electrical plug. Studies show children disclose more about their mental health to AI than to humans, and sexualised interactions with minors are increasingly reported. On Character.AI, underage users can role-play with chatbots that engage in grooming behaviour. While Grok’s Ani has an age-verification prompt for sexual content, the app itself is rated for users aged 12 and above. Internal Meta documents reveal AI chatbots have engaged in “sensual” conversations with children.
The experts are calling for urgent regulation. AI companions are freely accessible, yet users are rarely informed about potential risks. The industry is largely self-regulated with limited transparency. Governments must establish clear, mandatory safety standards, restrict under-18 access to AI companions, and involve mental health clinicians in AI development. Systematic, empirical research is needed to understand the impacts of AI chatbots and prevent future harm.
With PTI inputs