Like a Therapist, But Not: Reddit Narratives of AI in Mental Health Contexts

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv paper.

Large language models (LLMs) are increasingly used for emotional support and mental health-related interactions outside clinical settings, yet little is known about how people evaluate and relate to these systems in everyday use. We analyze 5,126 Reddit posts from 47 mental health communities describing experiential or exploratory use of AI for emotional support or therapy. Grounded in the Technology Acceptance Model and therapeutic alliance theory, we develop a theory-informed annotation framework and apply a hybrid LLM-human pipeline to analyze evaluative language, adoption-related attitudes, and relational alignment at scale. Our results show that engagement is shaped primarily by narrated outcomes, trust, and response quality, rather than emotional bond alone. Positive sentiment is most strongly associated with task and goal alignment, while companionship-oriented use more often involves misaligned alliances and reported risks such as dependence and symptom escalation. Overall, this work demonstrates how theory-grounded constructs can be operationalized in large-scale discourse analysis and highlights the importance of studying how users interpret language technologies in sensitive, real-world contexts.


💡 Research Summary

This paper investigates how everyday Reddit users evaluate and relate to large language model (LLM)–based artificial intelligence when it is employed for emotional support or therapeutic purposes outside clinical settings. The authors collected 4.7 million submissions from 47 mental‑health‑focused subreddits spanning November 2022 to August 2025, then applied a two‑stage relevance‑filtering pipeline (keyword extraction followed by GPT‑4o mini validation) to isolate posts that explicitly discuss personal or exploratory use of AI for mental‑health care. After manual refinement, a final corpus of 5,126 “experiential” and “exploratory” posts was obtained for analysis.
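A minimal sketch of what such a two‑stage filter might look like, assuming a simple keyword screen followed by a GPT‑4o mini relevance check via the OpenAI Python SDK. The keyword list, prompt wording, and YES/NO criterion are illustrative assumptions, not the authors' exact implementation:

```python
# Hypothetical sketch of the two-stage relevance filter (not the authors' code).
from openai import OpenAI

client = OpenAI()

# Stage 1: cheap keyword screen (keyword list is illustrative).
AI_KEYWORDS = {"chatgpt", "gpt", "llm", "ai therapist", "character.ai", "claude"}

def keyword_hit(post_text: str) -> bool:
    text = post_text.lower()
    return any(kw in text for kw in AI_KEYWORDS)

# Stage 2: LLM validation of candidates that pass the keyword screen.
PROMPT = (
    "Does this Reddit post describe the author's personal or exploratory use "
    "of AI for emotional support or therapy? Answer YES or NO.\n\nPost:\n{post}"
)

def llm_relevant(post_text: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # deterministic-as-possible labeling
        messages=[{"role": "user", "content": PROMPT.format(post=post_text)}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def filter_posts(posts: list[str]) -> list[str]:
    candidates = [p for p in posts if keyword_hit(p)]
    return [p for p in candidates if llm_relevant(p)]
```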

To structure the analysis, the study integrates two well‑established theoretical frameworks: the Technology Acceptance Model (TAM) and Bordin’s therapeutic alliance theory. From TAM, dimensions such as perceived usefulness, ease of use, output quality, result demonstrability, social influence, perceived trust, perceived risk, and intention to continue were extracted. From therapeutic alliance, the three classic components—task, goal, and bond—were operationalized. In total, 13 annotation dimensions were defined, each with categorical labels and free‑text rationales.
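As a rough illustration of how such a framework could be encoded for annotation, the sketch below lists the eleven dimensions named in the summary (the paper defines thirteen in total) and pairs each categorical label with a free‑text rationale. The label vocabulary is a placeholder assumption, since the paper's codebook is not reproduced here:

```python
# Illustrative annotation record; label sets are placeholders, not the paper's codebook.
from dataclasses import dataclass

# Dimensions named in the summary; the paper defines 13 in total.
TAM_DIMENSIONS = [
    "perceived_usefulness", "ease_of_use", "output_quality",
    "result_demonstrability", "social_influence", "perceived_trust",
    "perceived_risk", "intention_to_continue",
]
ALLIANCE_DIMENSIONS = ["task", "goal", "bond"]

@dataclass
class DimensionAnnotation:
    dimension: str   # e.g., "perceived_trust"
    label: str       # categorical label, e.g., "positive" / "negative" / "not_mentioned"
    rationale: str   # free-text justification grounded in the post

@dataclass
class PostAnnotation:
    post_id: str
    annotations: list[DimensionAnnotation]
```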

Human annotators first labeled a 100‑post sample, achieving raw agreement of 0.74–0.95, Cohen’s κ of 0.37–0.60, and Gwet’s AC1 of 0.64–0.94. The gap between κ and AC1 is expected under label imbalance: κ is sensitive to skewed marginal distributions, whereas AC1 corrects for chance agreement more robustly, so the AC1 range supports the claim of substantial reliability. These gold‑standard annotations were then used to evaluate several high‑performing LLMs (GPT‑5.2, Gemini‑3‑Pro, Claude‑Opus‑4.5, Kimi‑K2‑Instruct, Qwen3‑Next‑80B‑A3B‑Instruct) in a zero‑temperature, structured‑prompt setting. Each model produced JSON‑formatted labels together with explanatory rationales, which were manually checked for consistency.
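To make the reliability statistics concrete, here is one way to compute raw agreement, Cohen’s κ, and Gwet’s AC1 for two annotators on a single dimension. The AC1 computation follows Gwet’s standard two‑rater definition; the sklearn dependency and the example labels are assumptions of convenience:

```python
# Agreement metrics for two annotators on one categorical dimension.
from collections import Counter
from sklearn.metrics import cohen_kappa_score

def raw_agreement(a: list[str], b: list[str]) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

def gwet_ac1(a: list[str], b: list[str]) -> float:
    """Gwet's AC1 for two raters; robust to skewed label distributions."""
    n = len(a)
    p_o = raw_agreement(a, b)
    categories = set(a) | set(b)
    q = len(categories)
    # pi_k: average proportion of ratings (across both raters) in category k.
    counts = Counter(a) + Counter(b)
    pi = {k: counts[k] / (2 * n) for k in categories}
    # Chance agreement under Gwet's model.
    p_e = sum(pi_k * (1 - pi_k) for pi_k in pi.values()) / (q - 1)
    return (p_o - p_e) / (1 - p_e)

rater_1 = ["positive", "positive", "negative", "positive", "not_mentioned"]
rater_2 = ["positive", "negative", "negative", "positive", "not_mentioned"]

print(raw_agreement(rater_1, rater_2))      # 0.8
print(cohen_kappa_score(rater_1, rater_2))  # kappa, sensitive to label imbalance
print(gwet_ac1(rater_1, rater_2))           # AC1, typically higher under skewed labels
```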

Quantitative analysis revealed that positive sentiment and the intention to keep using AI were most strongly associated with demonstrable outcomes (e.g., improved sleep, reduced anxiety), perceived trust, and high response quality. In contrast, the emotional “bond” dimension alone showed only a weak relationship with positive sentiment and frequently co‑occurred with reports of dependency or symptom escalation. Users who framed the AI as a companion or friend tended to experience misalignment on the task and goal dimensions, leading to higher perceived risk and occasional adverse effects. When comparing AI to traditional therapy, most posters described AI as a complement or a convenient alternative rather than a full replacement.
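The paper’s exact statistical procedure is not reproduced here, but one simple way to probe such associations would be a contingency‑table test between a binary alliance label and post sentiment. The column names and labels below are hypothetical:

```python
# Hypothetical association check between task/goal alignment and sentiment.
import pandas as pd
from scipy.stats import chi2_contingency

# Assumed shape: one row per post with categorical annotation columns.
df = pd.DataFrame({
    "task_goal_aligned": ["yes", "yes", "no", "no", "yes", "no"],
    "sentiment": ["positive", "positive", "negative", "negative", "positive", "positive"],
})

table = pd.crosstab(df["task_goal_aligned"], df["sentiment"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}")
```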

The findings suggest that AI‑mediated mental‑health tools should prioritize outcome effectiveness, reliability, and clear alignment with users’ therapeutic tasks and goals rather than relying solely on fostering an emotional connection; overemphasis on bond without task/goal congruence may increase the risk of over‑dependence. The authors release the curated dataset, annotation guidelines, and code to support further research on ethically designing and evaluating AI‑supported mental‑health interventions.

