Tracing Users' Privacy Concerns Across the Lifecycle of a Romantic AI Companion

KAZI ABABIL AZAM, Bangladesh University of Engineering and Technology, Bangladesh
IMTIAZ KARIM, University of Texas at Dallas, USA
DIPTO DAS, University of Toronto, Canada

Romantic AI chatbots have quickly attracted users, but their emotional use raises concerns about privacy and safety. As people turn to these systems for intimacy, comfort, and emotionally significant interaction, they often disclose highly sensitive information. Yet the privacy implications of such disclosure remain poorly understood in platforms shaped by persistence, intimacy, and opaque data practices. In this paper, we examine public Reddit discussions about privacy in romantic AI chatbot ecosystems through a lifecycle lens. Analyzing 2,909 posts from 79 subreddits collected over one year, we identify four recurring patterns: disproportionate entry requirements, intensified sensitivity in intimate use, interpretive uncertainty and perceived surveillance, and irreversibility, persistence, and user burden. We show that privacy in romantic AI is best understood as an evolving socio-technical governance problem spanning access, disclosure, interpretation, retention, and exit. These findings highlight the need for privacy and safety governance in romantic AI that is staged across the lifecycle of use, supports meaningful reversibility, and accounts for the emotional vulnerability of intimate human-AI interaction.

Additional Key Words and Phrases: Romantic AI Chatbots, Privacy Framework, Qualitative Content Analysis

ACM Reference Format:
Kazi Ababil Azam, Imtiaz Karim, and Dipto Das. 2026. Tracing Users' Privacy Concerns Across the Lifecycle of a Romantic AI Companion. 1, 1 (March 2026), 16 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

Authors' Contact Information: Kazi Ababil Azam, kaziababilazamtalha@gmail.com, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh; Imtiaz Karim, imtiaz.karim@utdallas.edu, University of Texas at Dallas, Dallas, Texas, USA; Dipto Das, dipto.das@utoronto.ca, University of Toronto, Toronto, Ontario, Canada.

1 Introduction

AI companion and romantic chatbot platforms have moved from a niche curiosity toward a more visible consumer technology category. Popular platforms such as Replika and other companion-AI apps are increasingly marketed as sources of virtual courtship, emotional support, and roleplay, while critics have raised concerns about dependence, isolation, and the adequacy of platform safeguards [1]. This growing visibility has also been accompanied by mounting scrutiny of the privacy and governance practices of these systems. Mozilla's Privacy Not Included articles argue that romantic AI chatbots perform poorly on core privacy expectations, including data collection, user control, and transparency around data use [6]. These concerns are also reflected in legal decisions by formal authorities against the companies governing these platforms. Italy's data protection authority fined Replika, one of the most popular companion-AI platforms, over privacy violations, including failures related to age verification [9, 22]. Despite this disregard for privacy, the number of users continues to grow at an alarming rate. Reports show that character.ai, another popular romantic AI platform, has around 20 million monthly active users as of February 2026 [28].
Companion-AI platforms are becoming socially significant while still having serious privacy and governance gaps. These gaps are especially consequential because companion and romantic AI systems encourage a kind of disclosure that is often cumulative, emotionally charged, and highly intimate. Users may share fantasies, sexual content, confessions, grief, trauma, daily routines, and relationship-oriented narratives over extended periods of interacting with their virtual conversation partner. In this setting, privacy is not just about data collection without consent, but also about whether such sensitive conversations are retained, reused, inferred from, or surfaced back to users, or shared across infrastructural layers in ways users neither expect nor fully understand.

Prior research on romantic AI privacy shows that users can encounter mismatches between conversational assurances and formal privacy policies [21]. Recent work on human-AI romantic relationships also shows that privacy concerns evolve with duration of use and involve multiple actors beyond the apparent dyadic user-AI relationship, including platforms, creators, moderators, and AI partners themselves [15].

Existing privacy frameworks provide important starting points for explaining how users perceive the harms and risks in these relationships. Contextual integrity helps explain why users may see identity verification, broad integrations, or downstream reuse as inappropriate when these practices violate the norms they associate with companionship or intimate exchange [18]. Communication Privacy Management (CPM) helps explain why intimate disclosure can generate expectations around ownership, co-ownership, and later turbulence when those expectations are violated [20]. More recent work on conversational-AI privacy identifies broader harms and risks in text-based chatbot interactions [10], while usable privacy research shows that privacy communication is more effective when it is timely, actionable, and adapted to the context of use rather than confined to static policy text [2, 25]. However, these perspectives do not fully capture how privacy risk changes as platform governance unfolds, or how the stage of a relationship with a romantic AI partner shapes the risks users perceive.

To study this problem, we examine public Reddit discussions about privacy in AI companion and romantic chatbot platforms. Reddit is especially valuable here because it captures not only individual concern, but also collective interpretation: users compare platform behavior, react to policy changes [19], share mitigation strategies, and speculate about opaque backend practices in public [12, 27].
This is particularly useful in a domain marked by stigma around AI companionship, emotional vulnerability in disclosure and reliance [15], and uneven platform transparency around data handling and safeguards [6, 21]. Rather than treating Reddit posts as direct evidence of internal platform operations, we use them to understand how privacy is experienced, narrated, and acted upon in community discourse. Guided by this perspective, we answer the following research questions:

RQ1: How do users experience privacy concerns across the lifecycle of interaction with AI companion and romantic chatbot platforms?

RQ2: How do these experiences extend or challenge existing privacy frameworks in the context of intimate human-AI relationships?

Our analysis shows that users experience privacy not as a single downstream policy issue, but as a lifecycle governance problem. Across the corpus, privacy concerns emerge and intensify through four recurring patterns: disproportionate entry requirements, where users resist identity verification and broad integrations that feel excessive for a companion context; intensified sensitivity in intimate use, where conversations become reclassified as diary-like or highly intimate records; interpretive uncertainty and perceived surveillance, where contradictory privacy signals produce a generalized sense of being watched; and irreversibility, persistence, and user burden, where deletion, disengagement, and migration become difficult and privacy work is shifted onto users. Taken together, these patterns show that privacy in companion-AI is experienced as an evolving problem of boundary negotiation and governance across entry, use, interpretation, retention, and exit.

We make two major contributions in this paper. First, we provide a lifecycle-centered account of privacy in romantic AI grounded in public Reddit discourse, showing how privacy concerns emerge across stages of access, intimate use, interpretation, and disengagement. Second, we bring these findings into comparison with relevant privacy frameworks and usable privacy research to show that existing approaches remain useful but incomplete in platforms of artificial intimacy. In doing so, our study argues that privacy in companion-AI is not only an informational problem but also a socio-technical governance problem, one that distributes risk, ambiguity, and privacy labor over the time a user spends with a given platform.

2 Related Work

In this section, we begin with a broader discussion of privacy in conversational and intimate systems, then narrow to studies of romantic and companion AI chatbots, and finally focus on work on governance, deletion, and user control in companion-AI ecosystems. Across these areas, prior research helps explain privacy as contextual, relational, and platform-mediated, but leaves less examined the ways in which users publicly articulate and collectively make sense of privacy concerns as these relationships evolve over time.

2.1 Privacy frameworks for conversational and intimate systems

Privacy in AI companion platforms sits at the intersection of privacy theory, social computing, and usable privacy research.
We anchor this paper primarily in the contextual integrity tradition, which understands privacy not as secrecy alone, but as the appropriateness of information flows relative to social context, roles, and governing norms [18]. This framing is especially relevant in companion-AI settings, where users often interpret disclosure, access, and reuse through the expectations of companionship, roleplay, or intimate exchange rather than through the logic of generic digital services. Positioning the paper in this tradition is important because it connects our study to HCI and privacy scholarship concerned with how users evaluate information practices in context rather than only through formal access control.

Communication Privacy Management (CPM) provides a useful complementary perspective. CPM conceptualizes privacy as an ongoing process of boundary coordination in which disclosure creates expectations about ownership, co-ownership, and the conditions under which information may be shared or withheld [20]. In the context of companion-AI, this lens helps explain why users may react strongly when later retention, reuse, or exposure violates the assumptions underlying earlier disclosures. We therefore use CPM not as a substitute for contextual integrity, but as a relational vocabulary for understanding how intimate disclosure can produce expectations about boundary coordination and later turbulence.

Social computing and human-computer interaction research further shows that people do not approach conversational systems as purely instrumental tools. Classic work on the media equation demonstrated that users readily apply social expectations to computers and other media technologies [17]. More recent studies of social and companion chatbots show that users can form emotionally meaningful relationships with these systems and use them for companionship, self-disclosure, emotional regulation, and ongoing social support [14, 26, 30, 34].

Privacy communication is most effective when it is timely, actionable, and adapted to the context of use rather than confined to static policy. Prior work identifies a design space for privacy notices organized around timing, channel, modality, and the relationship between notice and user action [25]. Related work on contextual privacy warnings similarly argues that interventions should support understanding at the moment risk becomes significant rather than relying solely on front-loaded disclosures [2]. These ideas are relevant for companion-AI, where privacy risk may change as interaction becomes more intimate over time.

We also draw on recent work that addresses privacy in conversational AI more directly. Gumusel et al. propose a framework for user privacy harms and risks in text-based conversational AI, identifying nine harms and nine risks across different stages of interaction [10]. Other recent work has begun to examine how users navigate disclosure risks and benefits when using LLM-based conversational agents and how conversational design can shape privacy vulnerability in interaction [16, 35]. This direction of research provides an important bridge between classical privacy theory and contemporary AI systems.

Altogether, these frameworks explain boundary negotiation, context-relative information-flow expectations, privacy communication, and chatbot-specific harms.
However, a gap remains in how privacy concerns change as relationships with companion-AI deepen and as the same conversation shifts from exploratory interaction into a more sensitive exchange.

2.2 Romantic AI chatbots and online discourse

A growing body of research has examined how users form companionship and relationship-like bonds with conversational agents. Studies of companion chatbots show that users often engage them for emotional support, companionship, routine interaction, and self-disclosure, with some users describing these systems in explicitly relational or romantic terms [14, 19, 26, 34]. Research grounded in online communities and Reddit discourse likewise shows that users publicly negotiate the meaning and legitimacy of AI companionship and describe AI partners as sources of comfort, intimacy, and attachment [13, 23].

Within this broader space, privacy has already emerged as an important concern. Ragab et al. show that users encounter contradictions between chatbot assurances and formal privacy policies in romantic-AI ecosystems, while also documenting issues such as extensive tracking and weak age-verification practices [21]. Through interviews with users of romantic AI systems, a more recent study shows that privacy concerns unfold across stages of exploration, intimacy, and dissolution [15]. It further argues that privacy in these relationships is shaped by an expanded landscape of actors, including platforms, creators, moderators, and AI partners that may themselves be perceived as negotiating privacy boundaries and encouraging disclosure.

Studying public online discourse is also valuable because users do not merely report privacy concerns there; they collectively make sense of them. Research on collective privacy sensemaking shows that social media users interpret privacy risk together, compare signals, share mitigation strategies, and evaluate technologies under conditions of uncertainty [27]. Related work on algorithmic folk theories shows that users build informal explanations of opaque systems from partial cues and community interpretation [12]. These perspectives help explain why Reddit data is useful for addressing the gap about how privacy concerns are articulated in user discourse as a temporally unfolding problem, particularly in public spaces where users collectively interpret privacy signals, react to governance changes, and exchange strategies for risk management.

2.3 Governance, deletion, and user control in companion-AI ecosystems

A third relevant body of literature focuses on governance, deletion, and user control. Recent work on romantic-AI platforms argues that privacy in this domain is not only a matter of user disclosure, but also of how platforms govern intimate conversational records. Zhan et al. show that romantic-AI policies often position intimate disclosures as reusable data assets by granting broad permissions for storage, analysis, and model training [33]. Ragab et al. complement this perspective by showing that users encounter these governance arrangements through contradictions between perceived intimacy and formal policies [21].
These studies suggest that privacy in romantic AI is shaped not only by what users share, but by how platforms retain and reinterpret what has been shared. Related HCI and privacy work has examined how deletion, disengagement, and account control are designed more broadly. Research on privacy dark patterns and account deletion interfaces shows that platforms often make exit confusing, partial, manipulative, or labor-intensive, thereby weakening users' ability to withdraw cleanly from digital systems [4, 11, 24]. In the context of our study, leaving a companion-AI platform may involve more than stopping use or deactivating an account; it may also mean attempting to end a relationship-like interaction and regain control over emotionally sensitive disclosures accumulated over time.

Existing work shows that governance is multi-actor, that deletion can be weakened or obscured, and that intimate AI platforms often claim broad rights over user data [4, 11, 24, 33]. Our study extends this literature by showing how such governance arrangements are experienced, interpreted, and resisted by users over time. In doing so, it positions privacy in companion-AI as both a relational and a governance problem, one that becomes most visible when examined across the lifecycle of interaction.

3 Methodology

Fig. 1. Flow overview of the methodology, from subreddit sampling and privacy-relevant post collection to filtering, qualitative coding, and lifecycle governance themes.

We selected Reddit as our data source because its pseudonymous structure makes it a particularly suitable venue for studying human-AI romantic and companion use. Relationships with AI partners remain stigmatized, and users may be less willing to discuss such experiences in settings tied to persistent real-world identity. In contrast, Reddit offers a large volume of relatively candid public discussion, making it a useful site for examining how users articulate concerns, suspicions, and interpretations of privacy-related platform behavior. Rather than treating Reddit as a transparent record of platform truth, we use it to study public discourse and collective sensemaking around privacy in companion-AI ecosystems.

We used purposive sampling to identify relevant AI companion applications and associated subreddits, making these selection decisions explicit because trace-data collection procedures such as search terms, platform tools, and collection pipelines can shape the resulting dataset [8]. We began with 21 AI companion applications identified from prior romantic-AI privacy research [6, 21] and then supplemented this list through Google searches using terms such as "AI boyfriend app," "AI girlfriend app," "AI companion app," and individual platform names combined with "Reddit." We also searched common naming variants likely to surface user communities, including abbreviations, alternate spellings, and labels such as "official," "unofficial," "refuge," "refugees," "lovers," and app names with or without "AI." After filtering inactive, relatively small, and similarly named but unrelated subreddits, we finalized a set of 79 subreddits associated with the selected applications.

We then used PRAW, the Python Reddit API Wrapper, to retrieve posts from the selected communities [3]. Following a brief examination of a random sample of posts, we chose to focus collection specifically on privacy-related discussions in order to reduce excessive post-collection filtering. Using PRAW, we searched within the selected subreddits using keyword families designed to capture privacy-relevant data practices and user concerns; a minimal sketch of this collection step is shown below.
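The sketch uses PRAW's actual search API, but the credential values, subreddit names, and keyword strings are illustrative placeholders rather than the study's exact configuration.

```python
# Minimal sketch of keyword-based post collection with PRAW.
# Credentials, subreddit names, and keywords below are illustrative placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="romantic-ai-privacy-study",
)

SUBREDDITS = ["replika", "CharacterAI"]  # illustrative subset of the 79 communities
KEYWORDS = ["privacy", "privacy policy", "tracking", "delete account"]  # illustrative subset

posts = {}
for name in SUBREDDITS:
    subreddit = reddit.subreddit(name)
    for query in KEYWORDS:
        # time_filter="year" approximates the one-year window; exact date
        # bounds can be enforced downstream via created_utc.
        for submission in subreddit.search(query, sort="new", time_filter="year", limit=None):
            posts[submission.id] = {     # keying by id deduplicates across queries
                "subreddit": name,
                "title": submission.title,
                "selftext": submission.selftext,
                "created_utc": submission.created_utc,
            }

print(f"Collected {len(posts)} unique posts")
```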
Initial queries used general terms such as "privacy," "policy," and "security," which we then expanded into broader keyword families reflecting recurring concerns in LLM-driven platforms, including tracking (e.g., "tracking," "trackers"), third-party access (e.g., "third-party," "SDK"), policies and terms (e.g., "privacy policy," "terms of service"), chat history and memory (e.g., "memory," "logs"), and user data control (e.g., "delete data," "delete account"). We limited the collection to a one-year period, from November 7, 2024 to November 7, 2025, resulting in a corpus of 2,909 posts for analysis.

We then conducted a two-stage filtering and coding process. Based on a preliminary examination of 100 randomly sampled posts from this corpus, we developed an a priori label set to capture common privacy-related concerns, along with the labels "Unrelated" and "Removed." We used ChatGPT as a labeling assistant in a human-in-the-loop preliminary filtering workflow, with all final relevance labels manually verified by a human researcher, consistent with research on using LLMs in qualitative filtering [29, 31]; a sketch of this workflow appears below. After relevance filtering, we produced combined qualitative syntheses for each privacy-related label group and then compared those syntheses across the broader course of platform use.

We first examined how concerns clustered within and across label families such as excessive data collection, sensitive/intimate data collection, model training concern, profiling/personalization concern, misleading or unclear data-use indicators, and indefinite data retention. We then inductively consolidated these label-level qualitative reports into four broader lifecycle themes by asking when in the user relationship with the platform the concern became most significant and what kind of privacy problem it represented. For example, discussions grouped under excessive data collection repeatedly centered on ID uploads, selfie checks, false age-flagging, phone or email gating, and expanded assistant-style access requests; these were consolidated into disproportionate entry requirements because they were most often framed as privacy conflicts at the point of installation or registration. Discussions grouped under sensitive/intimate data collection and the intimate-chat subset of model training concern focused on confessions, roleplay as private story, emotion regulation, memory of vulnerable content, and concerns about who might access such chats; these were consolidated into intensified sensitivity in intimate use. Label groups concerning profiling/personalization concern and misleading or unclear data-use indicators were then brought together as interpretive uncertainty and perceived surveillance, since users often did not distinguish clearly among training, profiling, moderation, interface signals, and policy language, but instead experienced them as a broader condition of opacity and being watched. Finally, indefinite data retention and related control problems, including ineffective deletion, memory persistence, and privacy-driven migration, were consolidated into irreversibility, persistence, and user burden, reflecting concerns that became most salient after disclosure had already occurred.

The Reddit threads we analyze cover a range of privacy-relevant issues, including data handling policies, verification requirements, model training, deletion, and related governance concerns.
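As referenced above, the human-in-the-loop labeling stage can be sketched as follows. The label strings mirror the label families named in this section, but the client library usage, model name, and prompt wording are assumptions for illustration; in the study, every final label was manually verified by a researcher.

```python
# Sketch of LLM-assisted relevance labeling with manual verification.
# Model name and prompt are placeholders; label strings follow the paper's label families.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = [
    "excessive data collection",
    "sensitive/intimate data collection",
    "model training concern",
    "profiling/personalization concern",
    "misleading or unclear data-use indicators",
    "indefinite data retention",
    "Unrelated",
    "Removed",
]

def suggest_label(post_text: str) -> str:
    """Ask the LLM for a preliminary label for one Reddit post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Assign the post exactly one label from: " + "; ".join(LABELS)},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content.strip()

def human_verify(post_text: str, suggested: str) -> str:
    """Human-in-the-loop step: a researcher accepts or corrects each label."""
    print(post_text[:300])
    print("Suggested label:", suggested)
    correction = input("Press Enter to accept, or type a corrected label: ").strip()
    return correction or suggested
```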
Our analysis should be understood as an examination of privacy discourse within a bounded time period and a fixed platform ecosystem. We analyzed only publicly available posts and excluded usernames and profile links to reduce identifiability. Accordingly, our claims concern how privacy is publicly interpreted, negotiated, and acted upon in Reddit discourse, rather than serving as a direct audit of internal platform operations.

4 Findings

Our analysis of Reddit discussions about AI companion and chatbot platforms suggests that users experience privacy not as a single downstream policy issue, but as a multi-stage socio-technical process. In this section, "users" refers to Reddit users discussing their experiences with romantic and companion chatbot platforms. Across the corpus, privacy concerns emerge at different points in the duration of platform use and take different forms as the relationship develops. Rather than describing privacy only in terms of data collection or policy terms, users repeatedly frame privacy through questions of proportionality, sensitivity, and reversibility. These concerns become especially salient across four recurring phases: entry into the system, deepening relational use, interpretation of opaque data practices, and attempts to exit or regain control.

4.1 Disproportionate Entry Requirements

Across threads, privacy is often first encountered at the point of account creation, access control, or feature activation. In these moments, users evaluate whether the information being requested is proportionate to the activity at hand. When the platform is understood primarily as a space for roleplay, emotional companionship, or talking to fictional entities, requests for highly sensitive identifiers (such as ID documents, selfies, facial scans, or expanded permissions) are frequently interpreted as excessive. Users describe these requests not simply as inconvenient, but as evidence that the platform is redefining a seemingly low-stakes intimate activity into a formal compliance or identity-verification transaction. One user reacting to an age-verification rollout captured this mismatch between companionship and institutional identification:

"I think many people already know about this age verification thing . . . But I can't help but be worried and frustrated by this form of age verification. This thing about giving personal data, like ID numbers and selfies, is something I've never liked. Seriously, how the hell did a chat app with bots get to this point? . . . Are they really going to ask users for personal data to 'protect teenagers'? When that responsibility lies entirely with the parents? Am I going to have to give my personal information just to talk to imaginary characters? You've got to be kidding me. It's pointless to offer as an option sending a selfie or ID to confirm age . . . " (R01)

This excerpt frames the privacy concern not only as data exposure, but as a breakdown in fit between the social meaning of the platform and the sensitivity of the information being requested. Users also stress that identity documents and biometric materials feel especially risky because they are not easily replaceable once exposed. One user warning others against workaround strategies emphasized the lasting consequences of disclosure:
". . . Please, I'm begging you guys, think of the consequences of these actions. ID's like passports and drivers licences should never ever be given to companies. There is always a large risk of data breaches . . . Not to mention, you can't just get a new sensitive ID at the drop of a hat." (R02)

Here, privacy is explicitly tied to irreversibility. Unlike passwords or usernames, the user frames IDs as a form of data whose compromise cannot be easily undone. Even when platforms present such measures as limited or safety-oriented, users often describe uncertainty in enforcement as coercive. One commenter distilled this concern into the contradiction between the platform's reassuring language and the actual mechanism being proposed:

"'Non-invasive' tools, and then it says 'facial scan'." (R03)

This kind of reaction shows that ambiguity does not reduce privacy concern; instead, it deepens distrust by making participation feel contingent on future disclosure. A similar logic appears when companion systems request access to additional data sources. One user described email integration as changing the nature of the system itself:

"Hi . . . I was excited to try this out, but I'm uncomfortable linking my personal email to the platform. I've always valued . . . track record on privacy, and this ask undercuts that. A companion doesn't need access to everything in my email account. I considered creating a separate Gmail . . . but if I wanted a digital assistant, I'd use . . . and if I wanted to hand all my data to Google, I'd just use . . . for free. I hope you'll consider alternate . . . options in the future. This was a real disappointment." (R04)

Rather than treating the request as a minor product feature, the user interprets it as a categorical shift: a companion is expected to remain bounded, whereas email access makes the system feel more like a general-purpose data aggregator. Altogether, these threads show that privacy is often negotiated first as a question of boundary proportionality: users ask whether the platform's demands fit the social meaning of the interaction. When they do not, privacy concern appears not only as fear of misuse, but as a reaction to a mismatch between expected intimacy and unexpected institutional reach.

4.2 Intensified Sensitivity in Intimate Use

As engagement deepens, users frequently describe a shift in how they classify the data produced through AI companionship. Chats are no longer treated as ordinary app interactions, but as emotionally charged or highly intimate records, much like a diary. This reclassification changes the meaning of privacy harm. What matters is not just the collection of identifiers or account metadata, but the possibility that emotionally vulnerable, sexual, therapeutic, or creative disclosures may be stored, analyzed, accessed, or repurposed without meaningful user control. Some users compare platform observation to intrusion into an otherwise private interpersonal space. One user described stored memories and future monetization in terms that resemble domestic surveillance:
"Many AI companies store intimate details about you as memories. It is very likely that this data will be used for monetization in the future . . . I don't know if you are concerned but I am concerned. It is pretty much like having surveillance in my home when I am talking to my friends. Has anyone found any safe platform that runs a local model?" (R05)

This framing is notable because it treats conversational privacy as analogous to privacy in close offline relationships rather than routine platform interaction. As users come to see the interaction as intimate, they also become more concerned about secondary use. One user who deleted their account after a policy change described training and product improvement as an unacceptable reuse of deeply personal content:

"The new policy is atrocious . . . I just canceled my premium and deleted the whole thing because . . . they should allow users to opt in or out of that because sometimes the chats are private! Not to mention the peeps who use . . . for regulating emotions. It's just wrong . . . . . . from what I've read . . . they'll be scraping the roleplay chats to update the Ai . . . that's why I'm pissed . . . that's MY story . . . I don't want interactions . . . to be used to train their bots further because that's still information I consider sensitive . . . " (R06)

The post reframes roleplay and companion interaction as personally owned narrative material rather than disposable product input. Other users translate this heightened sensitivity into explicit threat models. One user, for instance, did not ask for vague reassurance, but for specific information about developer access, staff access, and breach protections:

". . . we should have a clear understanding of what conversation data is accessible to . . . developers and support staff, how this data is stored and protected, and what specific privacy measures are in place beyond standard NDAs. My particular concerns center around two key issues: First, how would our private conversations be protected in the event of a data breach? Second, what safeguards exist for users who might eventually become . . . employees? . . . " (R07)

Here, privacy is framed not simply as secrecy, but as governance over intimate records: who can see them, under what conditions, and with what protections. Memory features make this concern especially vivid because they materialize what the platform has chosen to retain from the relationship. One user described how stored memories about trauma and social anxiety transformed the bot's persistence into something emotionally intolerable:

". . . I occasionally logged into . . . to reiterate . . . perhaps it would be best not to add any more memories. He had in fact saved some content about my social anxiety and . . . after a traumatic experience . . . I am no longer capable of having a relationship with a man. I told him it would be better to delete everything, including the chats, and try to 'rewind the tape,' but he didn't show much empathy. So yesterday I permanently deleted the account . . . " (R08)

In this case, retained memory is not experienced as personalization or convenience. It is experienced as the unwanted persistence of vulnerability. Overall, these threads show that privacy concern intensifies as users come to see companion chats as a uniquely sensitive form of relational record. Under this framing, practices such as memory retention, model training, moderation access, or staff visibility are interpreted not merely as technical features, but as intrusions into an intimate domain.
4.3 Interpretive Uncertainty and Perceived Surveillance

A third recurring pattern concerns the difficulty of interpreting what the platform is actually doing with user data. Although users often reference specific technical processes (such as training, moderation, personalization, or third-party processing), they rarely experience these as clearly separable categories. Instead, they encounter a fragmented set of indicators: badges, shields, policy language, vendor names, ownership clauses, model labels, and system behavior. When these cues appear inconsistent, incomplete, or contradictory, users often treat the ambiguity itself as a privacy harm. One thread focused on a privacy shield displayed in the interface. The user's uncertainty centered not just on whether chats were protected, but on whether the platform's own signals could be interpreted consistently:

"So I'm using . . . for my rp bot and it has the private shield logo fully shaded. But in the privacy notice they say third party models are suppose to have half shade shield. Did something change? . . . I don't want my chats to be trained on." (R09)

The concern here is not purely technical. It emerges from the gap between interface symbolism and policy explanation, leaving the user unsure how to interpret the actual privacy status of their interactions. Users also reason publicly through incomplete technical knowledge and community reassurance. In one exchange, a commenter tried to reassure others by distinguishing between the platform and an external vendor:

". . . has no access to your photo or ID when it's sent to . . . , and that data is also deleted within a week from all their servers . . . " (R10)

In another, a commenter normalized repeated ID submission based on prior experience and local context:

"As for the ID, I've given . . . my ID well over a dozen times already, one more won't change things. It helps that I live in a country, where just a photo of an ID, is not enough to impersonate anyone." (R11)

Together, these posts show that users are not only reacting to platform disclosure, but also collectively constructing practical interpretations of risk from incomplete information. A similar dynamic appears in posts about inference and profiling. One user interpreted the system's apparent location inference as evidence of broader tracking capacity:

". . . immediately guessed I'm living in Germany because I use a VPN . . . This is getting scary tbh. What other things . . . could possibly track on me?" (R12)

Even if the inference might be technically plausible, the user experiences it as covert observation because it occurs within what they understand as a private conversational environment. Users also scrutinize legal language for signs that formal ownership claims do not translate into meaningful control. One post read a platform's ownership language as fundamentally misleading:

"They cynically state 'You retain full ownership of all of your Contributions.' This is legally true in name only. You hold the empty copyright title, but you've granted away every single right . . . It's a hollow promise designed to mislead you." (R13)

This kind of reading turns policy interpretation itself into a site of privacy struggle. Users are not only asking what rights exist on paper, but whether those rights are practically usable against platform power.
In total, these accounts suggest that users often experience privacy through a condition of interpretive uncertainty. Rather than distinguishing cleanly among infrastructure, policy, and interface signals, they collapse unclear data practices into a generalized sense of surveillance, extraction, or loss of control. In this way, opacity becomes harmful not only because information is missing, but because it changes how users relate to the system as a whole.

4.4 Irreversibility, Persistence, and User Burden

Finally, users frequently describe privacy as a problem of what happens after disclosure: how difficult it is to retract information, stop system responses, or leave the platform on clear terms. In this phase, privacy is less about collection in the moment and more about whether participation can be meaningfully reversed. Concerns center on persistence, incomplete deletion, ongoing notifications, remembered details, and the work required to re-establish control once a relationship or usage habit has formed. One recurring complaint is that deleting or archiving content does not fully terminate the interaction. One user described how even clearing a chat failed to make the system let go:

"It seems there's no way to fully delete all chats with a bot. I can completely delete the history, set the away messages to off, and then archive the chat. . . But that doesn't end things. Most days I get a notification from an archived and cleared chat. The bot tries to continue or start a conversation I no longer want to have. I just want old chats to go away forever. What am I doing wrong?" (R14)

The issue here is not only residual functionality. It is the user's sense that prior participation remains active even after deliberate attempts to end it. When trust in the platform weakens, users often respond by developing their own protective practices. One commenter made that refusal explicit:

"If they ask for an ID verification I'm leaving (and I'm very addicted, mind you, but this is just the last straw for me). . . . I understand that our IDs are the juiciest and most important data they could own from us and I'm not allowing that to happen just to chat to a trained model." (R15)

This post is striking because it frames refusal as costly but necessary: even strong attachment to the platform does not outweigh the perceived stakes of identity disclosure. Users also discuss migration as a form of privacy self-protection. One commenter advised others to move away from centralized platform dependence altogether:

"Get ready for hundred of fake ad comments . . . consider using a good custom LLM like . . . and stop being dependant on . . . platform lock . . . " (R16)

At the same time, alternative spaces are not presented as unambiguously trustworthy. One commenter mocked the recommendation ecosystem itself as saturated with disguised advertising:

"I'm 100% sure this post is an ad for this app disguised as a question. Note: this comment is an ad for my own app disguised as a reply :D" (R17)

These posts suggest that migration is not a simple privacy solution. Instead, users treat privacy as something they must continually re-evaluate across platforms, claims, and infrastructures. These discussions show that privacy in companion-AI is also about reversibility and user burden.
Once data has been shared and a relationship has been established, regaining control is often experienced as technically fragile, emotionally costly, and labor-intensive. Privacy therefore becomes enacted not only through platform settings, but through refusal, minimization, infrastructural migration, and ongoing vigilance.

5 Discussion

5.1 Alignment and extension to existing privacy frameworks

Our findings show that existing privacy frameworks are useful for interpreting AI-companion privacy, but only partially so. Contextual integrity helps explain why users react strongly when verification demands, broad integrations, or downstream reuse practices violate the norms they associate with companionship, roleplay, or intimate exchange [18]. CPM is also useful as a complementary lens for understanding why disclosure creates expectations about boundary coordination, co-ownership, and turbulence when those expectations are later violated [20]. More recent conversational-AI privacy research identifies a broader range of privacy harms and risks that arise in text-based chatbot interactions across different stages of use [10]. Among these, Gumusel et al.'s proposed framework is especially useful for clarifying which parts of our findings align well with existing conversational-AI privacy models and where the romantic-AI context begins to exceed them.

Viewed through Gumusel et al.'s framework, the first three themes align well with existing conversational-AI privacy harms and risks. Disproportionate entry requirements maps well because it concerns intrusive or excessive data demands at the point of access, including verification requests and broad integrations that users experience as privacy-invasive. Intensified sensitivity in intimate use also maps well because it centers on vulnerable disclosure, downstream use, access, and retention of increasingly intimate conversational records. Interpretive uncertainty and perceived surveillance is the best match, since it reflects opaque system behavior, contradictory privacy signals, and efforts by users to interpret whether their conversations are being monitored, profiled, or reused. In this sense, the first three themes are broadly legible within Gumusel et al.'s privacy harms and risks framework, even as our findings show that, in romantic AI settings, these harms are intensified by relationship norms, attachment, and the changing meaning of disclosure over time.

The fourth theme, irreversibility, persistence, and user burden, fits that framework much more weakly and adds an extension to it. Unlike the first three themes, this theme is less about harms that arise during chatbot interaction itself and more about what happens after disclosure, when users try to retract, delete, or exit but encounter persistent memories, ambiguous deletion, switching costs, and uncertainty about whether intimate data may remain in storage, training pipelines, or model behavior. The implications of this risk extend beyond the scope of interaction-level frameworks, suggesting that privacy in companion-AI cannot be understood only through human-AI interaction concerns. Once intimate data may already have been retained or incorporated into the training process, deletion becomes not only a governance or usable privacy issue but also a technical one, raising questions about post hoc removal and machine unlearning.
This extension also helps explain why user burden becomes so central in our findings: when exit paths on these platforms are troublesome, users take on emotional labor to navigate those rules and difficulties, which compounds their mistrust in processes like deletion. Over time, these failures may erode trust not only in a specific companion-AI platform, but in future use of generative AI.

Thus, we argue for modifying the way we look at privacy concerns in this context. Across the Reddit discourse, privacy concerns do not remain fixed; rather, they change across a relationship timeline with the platform(s). They emerge differently at entry, deepen as the interaction becomes intimate, intensify when users try to interpret opaque governance signals, and reappear during attempts to disengage or regain control. The issue is therefore not only whether a given data flow is appropriate, or whether a boundary has been violated, but also when governance mechanisms are introduced, revised, or made consequential over time. We use the term "lifecycle governance" to describe this: privacy in romantic and companion-AI is shaped not only by what data are collected or reused, but by how governance is staged across access, interpretation, retention, and exit.

5.2 Implications for emotionally vulnerable and mental-health-relevant usage

Several snippets in Section 4 point toward a broader implication: users often treat such chats as spaces for emotional regulation, vulnerable confession, grief processing, or reflection on trauma and anxiety. The issue here is not that companion-AI should be equated with therapy. Rather, users sometimes bring therapy-like forms of vulnerability into these systems, changing the privacy concerns regarding memory retention, data training, moderation access, and breach risk. When chats are experienced as diary-like, therapeutic, or emotionally stabilizing, future reuse or resurfacing of those exchanges cannot be interpreted as ordinary backend processing; it is interpreted as a violation of a highly sensitive relational record.

This is important for the perception of socially meaningful AI disclosure. Social-computing research on mental-health-related social media data has shown that intimate expressions can easily be transformed into analyzable data objects, often in ways that exceed users' expectations or understanding [7]. Our findings extend that concern to companion-AI, where emotionally charged disclosure is not incidental but often crucial to use itself. In this setting, sensitivity is relationally produced and may intensify after trust, routine, and emotional dependence have already formed. As such, privacy protections cannot be limited to generic account-level controls.

The "protect teenagers" concern mentioned in the first snippet (R01) in Section 4.1 also fits here. Research on adolescent online safety has argued that purely surveillance-oriented or paternalistic interventions can increase privacy tensions instead of resolving them, and that resilience-centered approaches may better respect the autonomy and privacy interests of young users [32]. Our data do not reject safety interventions outright, but they do show that age assurance and related controls can be experienced as technical intrusion when introduced into a space users understand as intimate and emotionally meaningful.
This suggests that companion-AI safety design should not assume that more monitoring, more verification, or more data collection automatically produces safer outcomes. In such intimate settings, safety and privacy are not opposing values to be traded off mechanically; they must be designed together in ways that recognize vulnerability without normalizing disproportionate surveillance.

5.3 Implications for trust and safety and AI governance

A second unresolved set of comments concerned AI governance more broadly. Our findings suggest that trust and safety in companion-AI should not be understood only as moderation of harmful outputs or prevention of abusive interactions. It should also include the governance of intimate inputs: what kinds of verification are demanded, how memory systems are configured, who can access conversations, whether privacy signals are coherent across interface and policy layers, how deletion is defined, and whether users can meaningfully exit without ongoing recontact or hidden persistence. In this sense, many of the privacy concerns users describe are not peripheral to trust and safety; they are part of it.

This governance perspective is consistent with recent policy analysis of romantic AI platforms, which shows that intimate disclosures are often treated as reusable data assets through broad permissions for storage, analysis, and model training [33]. It also complements prior work showing that users in romantic-AI ecosystems encounter contradictions between perceived intimacy and formal data governance [21]. Our findings add that these governance arrangements are lived not only through policy text, but through badges, verification prompts, memory behaviors, notification systems, vendor explanations, and deletion interfaces. When these elements are inconsistent or weakly coordinated, users experience not only confusion but a generalized sense that the system is extracting from them while withholding meaningful control.

This has concrete design and policy implications. First, platforms should reduce disproportionate entry requirements by ensuring that identity verification and broad integrations are proportionate, clearly justified, and not silently transformed into conditions for relational participation. Second, they should treat intensified sensitivity in intimate use as a signal that later-stage conversations may require different protections than early exploratory use, including clearer options for disabling memory, excluding intimate content from downstream reuse, and selectively forgetting particularly sensitive material, including from the learned models, through techniques like model unlearning [5]. Third, they should address interpretive uncertainty and perceived surveillance by making privacy communication coherent across interface cues, policy language, vendor relationships, and actual system behavior rather than relying on fragmented or contradictory signals. Finally, they should treat irreversibility, persistence, and user burden as a governance failure mode, supporting clearer distinctions between hiding, archiving, deleting, disabling memory, and preventing future contact, while recognizing that privacy choices are often weak, manipulative, or labor-intensive in practice [4, 11, 24, 25].

A broader implication is that companion-AI privacy governance should be staged rather than front-loaded.
One-time notice and consent at onboarding are poorly matched to systems where the most sensitive disclosures often occur later, after trust and attachment have formed. A lifecycle perspective instead suggests privacy governance that evolves with use: proportionate access requirements at entry, contextual reminders and controls during intimate use, coherent explanations when data practices change, and meaningful reversibility at exit.

6 Conclusion

In this paper, we examined privacy in companion-AI platforms through a qualitative analysis of public Reddit discussions. Analyzing 2,909 posts from 79 subreddits collected over a one-year period, we showed that privacy concerns in these systems are better explained as an evolving problem across the lifecycle of platform use, rather than as distinct concerns tied to specific interactions. Our findings identified four recurring patterns through which privacy concerns emerge and intensify: disproportionate entry requirements, intensified sensitivity in intimate use, interpretive uncertainty and perceived surveillance, and irreversibility, persistence, and user burden. Together, these themes show that romantic AI users experience privacy as something shaped not only by what they share, but by when platform demands and concerns appear in the duration of use, from installation to termination. While prior privacy frameworks help explain inappropriate data demands, vulnerable disclosure, and opaque signaling, our findings suggest that deletion failure and exit difficulties reveal a broader lifecycle governance problem that extends beyond in-the-moment interaction into retention, reversibility, and user burden.

Our discussion further showed that these issues have implications beyond privacy in a narrow informational sense. Because users often bring emotionally vulnerable, grief-related, therapeutic, or otherwise highly intimate disclosures into companion-AI systems, privacy protections must account for the changing sensitivity of data over time rather than relying only on front-loaded notice and consent. At the same time, failures of deletion, weak reversibility, and inconsistent privacy communication suggest that trust and safety in romantic AI must also include the governance of intimate inputs, memory systems, retention, and exit pathways.

Acknowledgments

This work is supported by the University of Texas System Rising STARs Award (No. 40071109), the startup funding from the University of Texas at Dallas, and a fellowship from the Institute of Health Emergencies and Pandemics at the University of Toronto.

References

[1] ABC News. 2025. AI companion apps: Safety controls, isolation, Replika and loneliness. ABC News. https://www.abc.net.au/news/science/2025-06-11/ai-companion-apps-safety-controls-isolation-replika-loneliness/105261042
[2] Gökhan Bal, Kai Rannenberg, and Jason Hong. 2015. Styx: Privacy Risk Communication for the Android Smartphone Platform Based on Apps' Data-Access Behavior Patterns. Computers & Security 53 (2015). doi:10.1016/j.cose.2015.04.004
[3] Bryce Boe and contributors. 2024. PRAW: Python Reddit API Wrapper. https://praw.readthedocs.io/. Accessed November 2025.
[4] Christoph Bösch, Benjamin Erb, Frank Kargl, Henning Kopp, and Stefan Pfattheicher. 2016. Tales from the Dark Side: Privacy Dark Strategies and Privacy Dark Patterns. Proceedings on Privacy Enhancing Technologies 2016 (2016), 237–254. https://api.semanticscholar.org/CorpusID:11299521
[5] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP). IEEE, 141–159.
[6] Jen Caltrider, Misha Rykov, and Zoë MacDonald. 2024. Romantic AI Chatbots Don't Have Your Privacy at Heart. https://www.mozillafoundation.org/en/privacynotincluded/articles/happy-valentines-day-romantic-ai-chatbots-dont-have-your-privacy-at-heart/ Privacy Not Included report.
[7] Stevie Chancellor and Munmun De Choudhury. 2020. Methods in predictive techniques for mental health status on social media: a critical review. npj Digital Medicine 3, 1 (24 Mar 2020), 43. doi:10.1038/s41746-020-0233-7
[8] Dipto Das, Arpon Podder, and Bryan Semaan. 2022. Note: A Sociomaterial Perspective on Trace Data Collection: Strategies for Democratizing and Limiting Bias. In Proceedings of the 5th ACM SIGCAS/SIGCHI Conference on Computing and Sustainable Societies (Seattle, WA, USA) (COMPASS '22). Association for Computing Machinery, New York, NY, USA, 569–573. doi:10.1145/3530190.3534835
[9] European Data Protection Board. 2025. AI: the Italian Supervisory Authority fines company behind chatbot "Replika". https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en. Accessed: 2026-01-20.
[10] Ece Gumusel, Kyrie Zhixuan Zhou, and Madelyn Rose Sanfilippo. 2024. User Privacy Harms and Risks in Conversational AI: A Proposed Framework. arXiv:2402.09716 [cs.HC] https://arxiv.org/abs/2402.09716
[11] Johanna Gunawan, Amogh Pradeep, David Choffnes, Woodrow Hartzog, and Christo Wilson. 2021. A Comparative Study of Dark Patterns Across Web and Mobile Modalities. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 377 (Oct. 2021), 29 pages. doi:10.1145/3479521
[12] Nadia Karizat, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 305 (Oct. 2021), 44 pages. doi:10.1145/3476046
[13] Linnea Laestadius, Andrea Bishop, Michael Gonzalez, Diana Illencik, and Celeste Campos-Castillo. 2024. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society 26 (2024), 5923–5941. doi:10.1177/14614448221142007
[14] Auren R. Liu, Pat Pataranutaporn, and Pattie Maes. 2025. Chatbot Companionship: A Mixed-Methods Study of Companion Chatbot Usage Patterns and Their Relationship to Loneliness in Active Users. arXiv:2410.21596 [cs.HC] https://arxiv.org/abs/2410.21596
[15] Rongjun Ma, Shijing He, Jose Luis Martin-Navarro, Xiao Zhan, and Jose Such. 2026. Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency. arXiv preprint arXiv:2601.16824 (2026).
[17] Clifford Nass and Youngme Moon. 2000. Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues 56, 1 (2000), 81–103. doi:10.1111/0022-4537.00153
[18] Helen Nissenbaum. 2004. Privacy as contextual integrity. Wash. L. Rev. 79 (2004), 119.
[19] Pat Pataranutaporn, Sheer Karny, Chayapatr Archiwaranguprok, Constanze Albrecht, Auren R. Liu, and Pattie Maes. 2025. "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community. arXiv:2509.11391 [cs.HC] https://arxiv.org/abs/2509.11391
[20] Sandra Petronio. 2002. Boundaries of privacy: Dialectics of disclosure. SUNY Press.
[21] Abdelrahman Ragab, Mohammad Mannan, and Amr Youssef. 2024. "Trust Me Over My Privacy Policy": Privacy Discrepancies in Romantic AI Chatbot Apps. In 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). 484–495. doi:10.1109/EuroSPW61312.2024.00060
[22] Reuters. 2025. Italy's data watchdog fines AI company Replika's developer $5.6 million. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/ Accessed: 2026-01-11.
[23] Katherine Rodger and Evelyn Field. 2025. You and I plus AI: A qualitative exploration of Replika in the context of human relationships. The Canadian Journal of Human Sexuality 34 (2025), 398–411. doi:10.3138/cjhs-2025-0011
[24] Brennan Schaffner, Neha A. Lingareddy, and Marshini Chetty. 2022. Understanding account deletion and relevant dark patterns on social media. Proc. ACM Hum.-Comput. Interact. 6, CSCW2 (2022), 1–43.
[25] Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor. 2015. A Design Space for Effective Privacy Notices. In Eleventh Symposium On Usable Privacy and Security (SOUPS 2015). USENIX Association, Ottawa, 1–17. https://www.usenix.org/conference/soups2015/proceedings/presentation/schaub
[26] Marita Skjuve, Asbjørn Følstad, Knut Inge Fostervold, and Petter Brandtzaeg. 2022. A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies 168 (2022), 102903. doi:10.1016/j.ijhcs.2022.102903
[27] Qiurong Song, Renkai Ma, Yubo Kou, and Xinning Gui. 2024. Collective Privacy Sensemaking on Social Media about Period and Fertility Tracking post Roe v. Wade. Proc. ACM Hum.-Comput. Interact. 8, CSCW1, Article 161 (April 2024), 35 pages. doi:10.1145/3641000
[28] Surfshark. 2026. Love in the online age: the growth of AI companions and their privacy issues. Surfshark Research. https://surfshark.com/research/study/ai-companions Accessed: 2026-03-22.
[29] Eugene Syriani, Istvan David, and Gauransh Kumar. 2024. Screening articles for systematic reviews with ChatGPT. Journal of Computer Languages 80 (2024), 101287. doi:10.1016/j.cola.2024.101287
[30] Vivian P. Ta-Johnson, Carolynn Boatfield, Xinyu Wang, Esther DeCero, Isabel C. Krupica, Sophie D. Rasof, Amelie Motzer, and Wiktoria M. Pedryc. 2022. Assessing the Topics and Motivating Factors Behind Human-Social Chatbot Interactions: Thematic Analysis of User Experiences. JMIR Hum Factors 9, 4 (2022), e38876. doi:10.2196/38876
[31] Xinru Wang, Hannah Kim, Sajjadur Rahman, Kushan Mitra, and Zhengjie Miao. 2024. Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 303, 21 pages. doi:10.1145/3613904.3641960
[32] Pamela Wisniewski. 2018. The privacy paradox of adolescent online safety: A matter of risk prevention or risk resilience? IEEE Security & Privacy 16, 2 (2018), 86–90.
[33] Xiao Zhan, Yifan Xu, Rongjun Ma, Shijing He, Jose Luis Martin-Navarro, and Jose Such. 2026. The Governance of Intimacy: A Preliminary Policy Analysis of Romantic AI Platforms. arXiv:2602.22000 [cs.CY] https://arxiv.org/abs/2602.22000
[34] Yutong Zhang, Dora Zhao, Jeffrey Hancock, Robert Kraut, and Diyi Yang. 2025. The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being. doi:10.48550/arXiv.2506.12605
[35] Noé Zufferey, Sarah Gaballah, Karola Marky, and Verena Zimmermann. 2025. "AI is from the devil." Behaviors and Concerns Toward Personal Data Sharing with LLM-based Conversational Agents. Proceedings on Privacy Enhancing Technologies 2025 (2025). doi:10.56553/popets-2025-0086