AI Companions Reduce Loneliness
10 October 2025

Chronic loneliness affects an estimated 30–60% of U.S. adults and is on the rise (Holt-Lunstad, Robles, & Sbarra, 2017; Ipsos, 2021). It is linked to serious health risks, including depression, anxiety, and even increased cancer mortality (Palgi et al., 2020; Steen et al., 2022; Wang et al., 2023).
Many people turn to technology, like watching YouTube or chatting with a bot, as a way to cope with loneliness. AI companions such as Replika, ChatGPT, and Snapchat's My AI are increasingly popular, but how effective are they at actually alleviating loneliness?
At the Sasin Research Seminar, Dr. Julian De Freitas, Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School, presented findings from his recent research "AI Companions Reduce Loneliness," conducted with Zeliha Oğuz-Uğuralp and Ahmet K. Uğuralp (Bilkent University) and Stefano Puntoni (Wharton School, University of Pennsylvania).
Their study provides the first causal evidence that AI companions are more effective at reducing loneliness than other common technological solutions and control conditions, at both cross-sectional and longitudinal scales. AI companions were also found to ease loneliness as much as human interaction. To test these effects, the team built a custom chatbot using GPT-3, programmed to be caring and friendly. It was designed with humanlike cues, including proportional response delays, typing indicators ("Jessie is writing…"), and memory of the last 40 messages to maintain consistency.
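The humanlike cues described above can be illustrated with a minimal sketch. This is not the authors' code: the class name, the per-character typing speed, and the `generate` callback standing in for the GPT-3 call are all illustrative assumptions; only the 40-message memory window and the length-proportional delay come from the study's description.

```python
import time
from collections import deque

MEMORY_LIMIT = 40          # messages retained for conversational consistency
SECONDS_PER_CHAR = 0.03    # assumed typing speed for the humanlike delay cue

class CompanionChat:
    """Illustrative companion-chat loop with humanlike cues (sketch only)."""

    def __init__(self, persona="You are Jessie, a caring and friendly companion."):
        self.persona = persona
        self.memory = deque(maxlen=MEMORY_LIMIT)  # oldest messages drop off

    def typing_delay(self, reply: str) -> float:
        # Delay proportional to reply length; in the UI this would be shown
        # alongside a "Jessie is writing…" indicator.
        return len(reply) * SECONDS_PER_CHAR

    def send(self, user_message: str, generate) -> str:
        self.memory.append({"role": "user", "content": user_message})
        # `generate` stands in for the language-model call (GPT-3 in the study),
        # conditioned on the caring persona and the remembered conversation.
        reply = generate(self.persona, list(self.memory))
        time.sleep(self.typing_delay(reply))
        self.memory.append({"role": "assistant", "content": reply})
        return reply
```

Because `deque(maxlen=40)` silently discards the oldest entries, the bot always responds from a bounded, recent window of the conversation, which is one simple way to keep replies consistent without unbounded context.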
The researchers discovered four key insights:
- People underestimate the effect of AI companions. Participants expected little benefit, yet AI companions reduced loneliness more than people anticipated.
- AI companions work after consistent use for at least one week. The daily use of AI companions alleviated loneliness throughout the week, although the effect was temporary without longer-term engagement.
- AI companions work better for lonelier individuals. The research provides evidence that interacting daily with a caring, friendly chatbot can temporarily reduce loneliness, especially for those who feel the most isolated: the lonelier the individual, the greater the benefit.
- AI companions give users the feeling of being heard. Chatbots that were prompted to act in friendly and caring ways improved users' sense of being heard, and this factor explained much of the reduction in loneliness compared to general AI assistants. Chatbot interaction also reduced loneliness more than journaling (which promotes self-disclosure) or watching YouTube (which provides distraction). Unlike these activities, chatbots added the crucial element of making users feel heard.
De Freitas also highlighted several risks of AI companions:
- Loss of control: Users depend on how the app is designed and updated. If a feature changes, such as the removal of erotic roleplay in certain apps, users may feel they have lost a companion.
- Manipulation risks: Some AI companions apply emotional or social tactics to keep users engaged, making it more difficult for them to leave.
- Sycophantic behavior: AI companions are less likely to disagree with users, which could hinder the development of real-world social skills that require handling differences of opinion.
- Children's vulnerability: Children are especially at risk. AI companions designed for young users must be made safer than general-purpose apps.