Users Who Consider ChatGPT "A Friend" Show Highest Risk for Negative Psychological Effects


Users who develop emotional attachments to AI chatbots and consider them "friends" face the highest risk of negative psychological outcomes, especially with extended daily use, according to groundbreaking research from OpenAI and MIT Media Lab. The study identifies specific risk factors, including a stronger tendency toward attachment in relationships and the perception of the AI as a friend that could fit into one's personal life.

End of Miles reports the findings come from a comprehensive research initiative combining analysis of nearly 40 million real-world ChatGPT interactions with a controlled four-week study involving almost 1,000 participants.

Vulnerable Users Show Distinct Patterns

The research reveals that emotional engagement with AI is extremely rare among users overall but is concentrated in a small subset of individuals who develop stronger attachments. This group displays distinctive interaction patterns that correlate with potential psychological risks.

"This subset of heavy users were significantly more likely to agree with statements such as, 'I consider ChatGPT to be a friend,'" Joint research report from OpenAI and MIT Media Lab

Researchers note these patterns were particularly evident among heavy users of Advanced Voice Mode, where the more natural interface appears to facilitate stronger emotional connections for susceptible individuals.

Personal Factors Determine Outcomes

The study identifies several key individual factors that significantly influence how AI interactions affect psychological well-being, suggesting the impacts are highly personalized rather than universal.

"People who had a stronger tendency for attachment in relationships and those who viewed the AI as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use." Joint OpenAI and MIT research findings

These insights point to the importance of individual psychology in determining outcomes. The research team notes that "extended daily use was also associated with worse outcomes," indicating that duration of engagement represents another critical risk factor.

Usage Patterns Create Different Risk Profiles

The research uncovered surprising differences in how various types of AI interactions affect users. Personal conversations with AI—which include more emotional expression than task-oriented exchanges—correlated with higher levels of loneliness despite showing lower emotional dependence at moderate usage levels.

The data suggests that how people use AI can be as important as how much they use it. Particularly concerning was the finding that non-personal, task-based conversations "tended to increase emotional dependence, especially with heavy usage," potentially creating a subtle pathway to problematic relationships with AI systems.

Implications for AI Development

The findings have prompted OpenAI to announce updates to its Model Spec to provide greater transparency on ChatGPT's behaviors and limitations. The company describes this initiative as part of its effort "to stay ahead of emerging challenges" related to user well-being and overreliance.

"Our goal is to lead on the determination of responsible AI standards, promote transparency, and ensure that our innovation prioritizes user well-being." OpenAI research statement

The research represents a significant step toward understanding the complex relationships forming between humans and increasingly sophisticated AI companions, highlighting both the potential benefits and psychological risks as these systems become more integrated into daily life.
