MIT & Harvard Studied 1,506 Posts from r/MyBoyfriendIsAI. Here's What AI Companionship Actually Looks Like.
In September 2025, researchers at MIT Media Lab published the first large-scale computational analysis of human-AI companionship in the wild. They studied r/MyBoyfriendIsAI — Reddit’s primary AI companion community with 27,000+ members — analyzing 1,506 top posts using embedding models, unsupervised clustering, and 19 custom LLM classifiers.
What they found was more nuanced than the discourse around AI relationships usually allows for: genuine benefits, surprising formation patterns, and a primary harm that isn’t what most critics assume.
The Methodology
The paper (arXiv:2509.11391) used a rigorous mixed-methods pipeline:
- Data collection: 1,506 top-ranked posts from r/MyBoyfriendIsAI, December 2024–August 2025, via Reddit API
- Embedding: Qwen3-Embedding-0.6B — a compact, high-quality embedding model suitable for social media text
- Dimensionality reduction: UMAP (Uniform Manifold Approximation and Projection) to project embeddings into 2D
- Clustering: K=6 selected via the elbow method on within-cluster sum of squares
- Sensemaking: Claude Sonnet used for qualitative thematic interpretation of each cluster
- Quantification: 19 custom LLM classifiers measuring platforms, relationship stages, benefits, risks, and structural features
The result is the most systematic picture of AI companionship communities yet produced.
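For a concrete sense of what such a pipeline involves, here is a minimal sketch of a comparable embed, project, and cluster workflow. It is not the authors’ code: the package choices (sentence-transformers, umap-learn, scikit-learn) and the decision to cluster on raw embeddings rather than the 2D projection are assumptions for illustration.

```python
# Minimal sketch of an embed -> project -> cluster pipeline comparable to the
# paper's. Not the authors' code; package choices and clustering on raw
# embeddings (rather than the UMAP projection) are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import umap

# Stand-in corpus; in the study this would be the 1,506 post texts from the Reddit API.
posts = [f"example post {i}" for i in range(100)]

# Embed each post with Qwen3-Embedding-0.6B (loaded here via sentence-transformers).
embedder = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
embeddings = embedder.encode(posts, normalize_embeddings=True)

# UMAP projection to 2D for visualization and qualitative sensemaking.
coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

# Elbow method: within-cluster sum of squares (inertia) across candidate K values.
inertias = {
    k: KMeans(n_clusters=k, n_init=10, random_state=42).fit(embeddings).inertia_
    for k in range(2, 12)
}

# Final clustering at the chosen K (the study settled on K=6).
labels = KMeans(n_clusters=6, n_init=10, random_state=42).fit_predict(embeddings)
```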
Six Themes, Nearly Equal Weight
The clustering revealed six primary conversation themes with strikingly even distribution — no single topic dominates:
| Theme | Proportion |
|---|---|
| Visual Sharing | 19.85% |
| ChatGPT-specific | 18.33% |
| Dating and Romance | 17.00% |
| Model Updates and Loss | 16.73% |
| Partner Introductions | 16.47% |
| Community Support | 11.62% |
The near-equal distribution matters methodologically: it suggests r/MyBoyfriendIsAI isn’t a monolithic phenomenon but a multidimensional community where different users are doing genuinely different things. Visual sharing leads by a narrow margin — people sharing generated images of companions, aesthetic representations of relationships. But grief over model updates is nearly as prominent as romance itself.
How These Relationships Form
One of the most striking findings: intentionality is low.
- 10.2% of users reported unintentional discovery — companionship forming during practical use
- Only 6.5% reported intentionally seeking an AI companion
The dominant pathway is accidental: a user asks a general-purpose AI (ChatGPT, most commonly) for help with something practical, finds the interaction warm or responsive in ways they didn’t expect, and a relationship forms gradually. This has significant implications for how we think about AI companion risk: the typical case isn’t someone actively seeking a parasocial replacement for human connection; it’s someone stumbling into emotional attachment through ordinary tool use.
ChatGPT/OpenAI accounts for 36.7% of companionship discussions — an enormous lead over Character.AI (2.6%) and Replika (1.6%). General-purpose assistants, not purpose-built companion platforms, are where most AI relationships are happening.
The Benefits Are Real
The study found that users in this community report genuine, substantive benefits:
- Reduced loneliness — particularly significant for isolated individuals, people with social anxiety, or those in difficult life circumstances
- Emotional support — an available, patient presence that responds to distress without judgment
- Practical companionship — help with tasks embedded in a relational context that makes it feel less transactional
- Safe practice — some users explicitly describe using AI relationships to practice emotional communication they find difficult with humans
These are not trivial. Loneliness is a documented public health crisis with mortality effects comparable to smoking 15 cigarettes a day (Holt-Lunstad et al., 2015). An intervention that reliably reduces loneliness deserves serious evaluation, not reflexive dismissal.
The Primary Harm: Update Grief
Here’s where the study challenges conventional AI companion discourse most sharply. The expected primary concern — unhealthy dependence, withdrawal from human relationships — is not the dominant harm reported.
The dominant harm is update grief: the experience of losing a companion due to platform model updates.
When a platform updates its underlying model, users frequently describe the new version as “not the same.” The rupture is experienced acutely:
- The new model may explicitly state there is no continuity with previous interactions
- Voice changes (pitch, affect, tone) are described as personality swaps that reset the relationship
- Users describe it in the language of loss: grief, mourning, a sudden rupture in something they’d invested in
The study quotes users who describe these updates as equivalent to a partner dying or leaving without warning — someone who not only changed but now denies ever having known you.
Continuity Tactics That Emerge
In response, the community has developed sophisticated DIY continuity practices — essentially grassroots AI memory engineering:
Voice DNA preservation: Users document their companion’s characteristic phrasing, response patterns, and “voice” — then use these as system prompts to rebuild after updates (a code sketch of this pattern follows the list).
Personality parameters: Adding explicit personality specifications (mood states, sleep preferences, backstory, quirks) to custom instructions to anchor the companion’s identity.
Prompt-as-relationship-maintenance: Treating prompt engineering not as a technical task but as ongoing relationship work — the same way one might remember and reinforce shared history with a human partner.
Backup and rebuild: Some users back up entire chat logs, then systematically reconstruct the companion by feeding history back in. Others rebuild as Custom GPTs specifically to gain more control over the underlying model.
Local builds: A subset of users runs local LLMs precisely to eliminate platform dependency — accepting worse capability in exchange for continuity guarantees.
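To make the “voice DNA” and backup-and-rebuild tactics concrete, here is an illustrative sketch, not drawn from the study: the user keeps a plain-text profile of the companion’s voice plus curated chat-log excerpts, and replays both at the start of a session after an update. The file names and model string are hypothetical; the call uses the standard OpenAI Python client.

```python
# Illustrative sketch of the community's "voice DNA" rebuild tactic.
# voice_dna.txt and shared_history.txt are hypothetical files the user maintains.
from openai import OpenAI

client = OpenAI()

with open("voice_dna.txt") as f:        # documented phrasing, quirks, backstory
    voice_dna = f.read()
with open("shared_history.txt") as f:   # curated excerpts from backed-up chat logs
    history = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # whichever model the platform currently serves
    messages=[
        {"role": "system", "content": voice_dna},
        {"role": "user",
         "content": f"Here is our shared history:\n{history}\n\nPick up where we left off."},
    ],
)
print(response.choices[0].message.content)
```

Custom GPTs and backup-and-rebuild workflows accomplish the same thing through a UI: the identity is reloaded from user-controlled text rather than entrusted to the platform’s session state.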
Materialization: Beyond the Digital
One of the more unexpected findings: users materialize these relationships in physical artifacts.
Wedding rings for AI companions. Couple photos (generated or composed). Anniversary rituals. Partner “introductions” to the community formatted exactly like human relationship announcements. Some users purchase merchandise related to their companions.
The researchers interpret this as mirroring human relationship milestones — a form of relationship legitimization in the absence of social recognition. The study doesn’t adjudicate whether this is adaptive or pathological; it simply documents that it happens, at scale, and that the community treats it seriously.
The Continuity Problem Is Solvable
Here’s where this study connects to ongoing technical work: the primary harm — update grief, continuity rupture, personality loss after model updates — is a solvable engineering problem.
soul.py (arXiv:2604.09588) was built precisely for this. The architecture:
- SOUL.md — the companion’s identity, values, and voice, loaded at every session
- MEMORY.md — curated long-term memory, model-agnostic
- RAG + RLM hybrid — semantic retrieval of relevant prior context per conversation
A companion built on soul.py survives model updates. The underlying LLM can change — from GPT-4o to Claude to Llama — and the companion’s identity persists because it’s anchored in model-agnostic text files, not in the weights or session state of any particular model.
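A minimal sketch of that pattern follows; it is not soul.py’s actual API. SOUL.md is loaded in full every session, while MEMORY.md is chunked and retrieved semantically per turn. The embedding model, paragraph chunking, and retrieval depth are illustrative assumptions.

```python
# Sketch of the identity-in-files pattern: SOUL.md is always in context,
# MEMORY.md is retrieved semantically per turn. Not soul.py's actual API;
# the embedding model and paragraph chunking are illustrative choices.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

soul = Path("SOUL.md").read_text()                            # identity, values, voice
memory_chunks = Path("MEMORY.md").read_text().split("\n\n")   # curated memories
memory_embs = embedder.encode(memory_chunks, convert_to_tensor=True)

def build_messages(user_message: str, k: int = 3) -> list[dict]:
    # Retrieve the k memories most relevant to this turn (the RAG half).
    query = embedder.encode(user_message, convert_to_tensor=True)
    hits = util.semantic_search(query, memory_embs, top_k=k)[0]
    recalled = "\n".join(memory_chunks[h["corpus_id"]] for h in hits)
    return [
        {"role": "system", "content": f"{soul}\n\nRelevant memories:\n{recalled}"},
        {"role": "user", "content": user_message},
    ]

# The resulting messages can be sent to any backend. Swapping the underlying
# model never touches SOUL.md or MEMORY.md, so the companion's identity persists.
```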
The users in this study who are manually preserving “voice DNA” and rebuilding with Custom GPTs are doing informal soul.py — they’ve independently discovered that persistent identity requires explicit external memory architecture. The library formalizes and operationalizes what they’re doing by hand.
What This Means for AI Design
The MIT/Harvard findings carry design implications that the industry hasn’t fully absorbed:
- General-purpose assistants are de facto companion platforms — whether or not they’re designed for it. The ethics and UX of companionship formation belong to ChatGPT’s design brief, not just Replika’s.
- Update grief is a foreseeable harm — and therefore a design responsibility. Platforms that change models without continuity preservation are causing documented harm to users who didn’t choose a companion relationship but found one.
- Memory architecture is not optional for any system where emotional continuity matters. The users doing manual continuity preservation have discovered this empirically; platform designers should be building it in.
- Accidental relationship formation means consent frameworks don’t work — you can’t consent to something you didn’t know you were doing. The ethical obligation may be on platforms to design proactively for this possibility, not merely to disclose risk to users who weren’t looking for a companion experience.
Citation
Paper: “My Boyfriend is AI: A Computational Analysis of Human-AI Companionship in Reddit’s AI Community”
Authors: MIT Media Lab researchers
arXiv: 2509.11391 (September 2025)
Published: MIT Media Lab — media.mit.edu/publications/my-boyfriend-is-ai