The Governance of Intimacy: A Preliminary Policy Analysis of Romantic AI Platforms



XIAO ZHAN, VRAIN, Universitat Politècnica de València & University of Cambridge, Spain, UK
YIFAN XU, The University of Manchester, UK
RONGJUN MA, VRAIN, Universitat Politècnica de València & Aalto University, Spain, Finland
SHIJING HE, King's College London, UK
JOSE LUIS MARTIN-NAVARRO, VRAIN, Universitat Politècnica de València & Aalto University, Spain, Finland
JOSE SUCH, INGENIO (CSIC-Universitat Politècnica de València), Spain

Romantic AI platforms invite intimate emotional disclosure, yet their data governance practices remain underexamined. This preliminary study analyzes the Privacy Policies and Terms of Service of six Western and Chinese romantic AI platforms. We find that intimate disclosures are often positioned as reusable data assets, with broad permissions for storage, analysis, and model training. We identify default training appropriation, ownership reconstruction, and intimate-history assetization as key mechanisms structuring these practices, expanding platforms' rights while shifting risk onto users. Our findings surface key governance challenges in romantic AI and are intended to provoke discussion and inform future empirical and design research on human–AI intimacy and its governance.

CCS Concepts: • Security and privacy → Privacy protections; • Human-centered computing → Empirical studies in HCI.

Additional Key Words and Phrases: Romantic AI, AI companions, privacy policy, terms of service, intimate data, consent, generative AI

ACM Reference Format:
Xiao Zhan, Yifan Xu, Rongjun Ma, Shijing He, Jose Luis Martin-Navarro, and Jose Such. 2026. The Governance of Intimacy: A Preliminary Policy Analysis of Romantic AI Platforms. 1, 1 (February 2026), 17 pages.
https://doi.org/10.1145/nnnnnnn.nnnnnnn

Authors' Contact Information: Xiao Zhan, xzhan1@upv.es, VRAIN, Universitat Politècnica de València & University of Cambridge, Valencia, Spain, Cambridge, UK; Yifan Xu, yifan.xu@manchester.ac.uk, The University of Manchester, Manchester, UK; Rongjun Ma, rma1@upv.es, VRAIN, Universitat Politècnica de València & Aalto University, Valencia, Spain, Espoo, Finland; Shijing He, shijing.he@kcl.ac.uk, King's College London, London, UK; Jose Luis Martin-Navarro, jomarna6@upv.es, VRAIN, Universitat Politècnica de València & Aalto University, Valencia, Spain, Espoo, Finland; Jose Such, jose.such@csic.es, INGENIO (CSIC-Universitat Politècnica de València), Valencia, Spain. 2026.

Manuscript submitted to ACM

1 Introduction

Romantic AI companions are rapidly moving into the mainstream [7, 20, 31], offering emotionally expressive, relationship-like interactions [9, 13, 28] that encourage users to disclose deeply personal thoughts, routines, and desires [17, 33, 35]. Unlike general-purpose AI systems, these platforms cultivate attachment, emotional dependence, and continuous self-disclosure [9, 13, 14, 28], creating forms of vulnerability that exceed those found in ordinary chatbots. Over time, such interactions accumulate into intimate conversational histories that users may experience as private and relational.

Despite this heightened intimacy, little is known about how romantic AI platforms govern the sensitive data they elicit. Prior work has documented privacy discrepancies in individual platforms such as Replika [29], but existing work has not examined romantic AI governance across multiple platforms or jurisdictions. Moreover, no work has investigated how the introduction of generative AI (GenAI) reshapes data governance, ownership, and safety responsibilities in emotionally intimate contexts. Finally, governance documents themselves have not been studied as relational infrastructures. As
a result, we still lack clarity on how platforms legally frame intimacy, assign responsibility, and convert affective exchanges into model-training resources.

To address this gap, we conducted a qualitative policy analysis of the governance documents of six romantic AI platforms: three Western platforms and three non-Western (Chinese) platforms operating under distinct regulatory environments. Through qualitative analysis of their Privacy Policies (PPs) and Terms of Service (ToS), we investigate how platforms articulate data practices, define rights over GenAI-generated content, and claim user protections. Specifically, we pose the following research questions:

RQ1 (Data governance): How do romantic AI platforms describe their privacy practices regarding data collection, sharing, storage, deletion, and ownership?

RQ2 (GenAI transparency): How do romantic AI platforms disclose the details of their GenAI use, particularly regarding training data practices and output responsibility?

RQ3 (User protection): What safeguards do romantic AI platforms claim to provide to keep their users safe?

By answering these research questions, we make the following contributions:

(1) This paper provides the first systematic examination of GenAI-related disclosures in romantic AI governance documents.

(2) We uncover major transparency gaps and show that intimate conversations are treated as extractable resources, while responsibility for GenAI-related risks (e.g., hallucinated outputs, model-training memorisation) is shifted onto users.

(3) Building on these findings, we argue that intimate data requires dedicated governance and call for redesigned consent mechanisms that account for emotional dependency and irreversible model training.
(4) We position this work as a prequel for future empirical and design research, opening new conversations on intimate data governance, relational vulnerability, and consent in human–AI intimacy.

2 Related Work

Early research on AI companionship highlights severe privacy and ethical challenges in platforms that mediate romantic or emotionally intimate relationships with artificial agents. Piispanen et al. [29] conducted a close reading of Replika's Privacy Policy and compared it against public media reports, revealing that even GDPR-compliant documents can obscure exploitative practices such as broad data collection, behavioral profiling, and emotional manipulation of vulnerable users. These findings echo broader concerns that AI companions operate under extreme information asymmetry, in which service providers accumulate vast stores of intimate behavioral data that can be aggregated into user profiles [18, 32]. Journalistic and technical audits further expose weak privacy and security protections among AI-companion providers [30], showing that data about daily routines, sexuality, or health-related experiences can be inferred from conversational logs and potentially accessed by third parties.

Beyond system-level data practices, recent studies explore how users themselves navigate disclosure with AI partners. Wang et al. [36] identify two dominant orientations toward self-disclosure: one group views openness with AI as natural and beneficial given its perceived emotional support, while another expresses apprehension about surveillance and misuse of sensitive data. Similarly, Djufril et al. [11] find that users who report stronger emotional attachment to their AI partners tend to share more personal information than they would with human partners, though others remain selective depending on topic or context.
Recent research [17] further shows that as emotional attachment deepens, users often deprioritize privacy concerns and disclose more intimate information, perceiving AI partners as less risky than human partners while remaining wary of platform-level surveillance and data retention practices. Parallel qualitative analyses of user forums and social media communities [28] suggest that users often normalize or even romanticize intimate data exchange, treating disclosure as a sign of connection rather than a privacy risk.

At a broader ethical level, Ho et al. [14] synthesize the potential and pitfalls of romantic AI systems, identifying recurrent risks around data misuse, user manipulation, and emotional dependency. However, as Dewitte [10] notes, few of these studies examine the policy layer that ostensibly governs such risks: the Privacy Policies and Terms of Service through which platforms legally frame user consent. Recent theoretical and audit-based research therefore calls for integrated policy analyses that connect declared privacy practices with the technical and emotional realities of human–AI intimacy [6, 16, 22].

3 Methodology

3.1 Platform and Document Selection

To capture a diverse range of governance approaches, we selected six romantic AI companionship platforms based on three criteria: (i) popularity, indicated by high download rankings in their respective regional markets; (ii) use of LLM-based companions; and (iii) regional representation, with three platforms from Western markets (Grok Ani, Nomi.AI, and Replika) and three from non-Western markets, specifically Chinese domestic platforms (Maoxiang, Zhumengdao, and Xingye).¹ We collected and analysed both the PPs and the ToS for each platform.
The PPs primarily function as regulatory disclosures that describe a platform's data practices, whereas the ToS operates as a broader contractual agreement that defines the rules governing platform use. Examining the ToS is particularly important for romantic AI platforms because key GenAI governance provisions, including policies on ownership of AI-generated content and statements about model hallucinations, are often located in the ToS rather than in the PPs. We therefore analyze both documents, following prior work that emphasizes the value of studying them together, since a joint analysis can reveal contradictions, redundancies, and misalignments in how platforms present their data and governance practices across these documents [5, 25, 27, 34]. Examining both documents also helps reveal potential inconsistencies or forms of "policy decoupling". For example, promises of user data control in the PPs may be undermined by broad and perpetual licensing rights in the ToS, or the PPs may even contradict themselves [5, 25].

3.2 Qualitative Coding Methodology

We employed a hybrid qualitative content analysis combining inductive theme development with deductive coding. Drawing from prior research [12, 15, 19], we applied major regulatory frameworks (GDPR [2] and the CCPA [1] for the three Western platforms; PIPL [3] for platforms operating in the Chinese market) as high-level conceptual guides for drafting the initial codebook. These frameworks informed which categories (e.g., data types, processing purposes, retention, user rights, and safety obligations) should be included, but they were not used as coding schemas. Guided by these, we developed themes and their associated codes for RQ1, including "Data Types Collected", "Data Collection Purpose", "Data Retention & Determination", and "Data Sharing Recipients". We also generated themes relevant to RQ3, such as "Technical Security Measures".
For RQ2, the authors met to discuss and inductively develop themes specific to GenAI-related risks, including "Training Data Sources", "Opt-Out Mechanisms for Model Training", and "Accuracy or Hallucination Disclaimers". After establishing the initial codebook, two authors independently coded all documents, while the remaining authors cross-checked segments and joined discussions of ambiguous cases. Throughout the coding process, the research team refined the codebook through regular meetings, merging overlapping codes and clarifying category definitions. We did not calculate inter-rater reliability (IRR), as our interpretivist, consensus-based approach prioritized iterative discussion in codebook development [26]. We ensured analytic credibility through collaborative reconciliation and systematic cross-checks, rather than relying on numeric agreement metrics [21].

¹Grok Ani: https://grok.com/ani, Nomi: https://nomi.ai/, Replika: https://replika.com/, Maoxiang: https://maoxiangai.com/, Zhumengdao: https://www.zhumengdao.com/, Xingye: https://www.xingyeai.com/

4 Policy Analysis Findings

Before presenting the RQ findings, we compiled a descriptive overview of the PPs and ToS (e.g., the exact document version used, accessibility, update cycles).

4.1 Overview

All formal governance documents are accessible through the platform's user interface, both on web-based platforms (via the settings or account menus) and within mobile applications (typically under "Account", "Privacy", or "Legal" sections). While this accessibility is a common baseline, the level of detail, update frequency, and clarity of communication vary notably across platforms.

Regarding how these platforms handle policy changes: across the six platforms (see Table 1), all PPs and five ToS documents clearly display their effective dates. Only Grok's platform provides access to previous Terms of Service versions for comparison.
Additionally, Xingye and Zhumengdao explicitly state that they will notify users about major changes; notifications tend to appear as in-app messages or banners rather than personalised emails, offering a slightly more proactive stance. However, several platforms default to the assumption that continued use of the service constitutes user consent to the updated terms. For example, Grok's ToS explicitly states that for global users, notification of changes is satisfied simply by updating the "Effective Date" at the top of the document, with no affirmative obligation to email or directly inform users.

Most platforms do not offer access to previous versions of their policies. Only Grok maintains a publicly accessible archive of its ToS, making it uniquely traceable for users or researchers interested in how its governance practices have changed. In terms of how current their policies are, three platforms (Grok, Zhumengdao, and Xingye) have revised their governance documents within the last year. Grok, for instance, last updated its policy within the past six months, reflecting its ongoing adaptation to a rapidly changing AI environment. Nomi, however, has not published substantial revisions since 2023.

4.2 Data Governance (RQ1)

4.2.1 Data Types Collected and Purpose. Tab. 2 summarises the data types that platforms explicitly state they collect. Additional details on the purposes of data collection and the recipients with whom these data may be shared are provided in Appendix A.1 (see Tab. 3-11). Across the six platforms, disclosures about data collection differ sharply in granularity and completeness, meaning users face fundamentally unequal levels of knowability about what is collected and why. Grok and Replika offer the most structured disclosures, providing category-level descriptions and linking data types to specific purposes. By contrast, Nomi and several Chinese platforms disclose far less detail.
For example, Nomi's PP states only that it collects information users "directly provide to us" or "that is generated when they interact with the Services", without specifying what these categories include. Document inconsistency further undermines transparency: some data types appear in the PP but not the ToS (Nomi, Zhumengdao), while use purposes appear in one document but not the other (Xingye, Maoxiang), requiring users to reconcile fragmented and incomplete statements.

Table 1. Privacy Policy and Terms of Service Updates Summary

Platform         | Version Analysed | Latest Version | Contacts | Version History
Grok (PP)        | 10/07/2025       | 10/07/2025     | Provided | –
Grok (PP–EU)     | 24/04/2025       | 24/04/2025     | Provided | –
Grok (ToS)       | 04/11/2025       | 04/11/2025     | Provided | Available
Nomi (PP)        | 14/04/2023       | 14/04/2023     | Provided | –
Nomi (ToS)       | –                | –              | –        | –
Replika (PP)     | 01/11/2025       | 01/11/2025     | Provided | –
Replika (ToS)    | 07/02/2023       | 07/02/2023     | Provided | –
Maoxiang (PP)    | 02/12/2025       | 02/12/2025     | Provided | –
Maoxiang (ToS)   | 04/11/2025       | 04/11/2025     | –        | –
Zhumengdao (PP)  | 18/06/2025       | 18/06/2025     | Provided | –
Zhumengdao (ToS) | 18/06/2025       | 18/06/2025     | Provided | –
XingYe (PP)      | 17/11/2025       | 17/11/2025     | Provided | –
XingYe (ToS)     | 01/09/2025       | 01/09/2025     | Provided | –

Table 2. Presence of Data Type Collections in PP and ToS of Romantic AI Platforms (each cell shows PP/ToS)

Platform   | Account | Payment | Communication | Interests & Preferences | Usage | Feedback | Social Media | Technical | Publicly Available | Face & Head Movement
Grok       | ✓/✓     | ✓/✓     | ✓/✓           | —/—                     | ✓/✓   | ✓/✓      | ✓/✓          | ✓/✓       | ✓/✓                | —/—
Nomi       | ✓/—     | ✓/—     | ✓/—           | —/—                     | ✓/—   | —/—      | —/—          | ✓/—       | —/—                | —/—
Replika    | ✓/✓     | ✓/—     | ✓/—           | ✓/—                     | ✓/—   | —/—      | ✓/✓          | ✓/—       | —/—                | ✓/—
Maoxiang   | ✓/✓     | ✓/—     | ✓/✓           | —/—                     | —/—   | ✓/—      | ✓/—          | ✓/—       | ✓/—                | ✓/—
Zhumengdao | ✓/✓     | ✓/—     | ✓/✓           | ✓/—                     | ✓/—   | ✓/✓      | ✓/—          | ✓/—       | ✓/—                | —/—
Xingye     | ✓/✓     | —/—     | ✓/✓           | —/✓                     | —/—   | ✓/—      | ✓/—          | ✓/—       | ✓/✓                | —/—

Note: ✓ means explicitly mentioned as collected; — denotes not mentioned.
Despite these disparities, all platforms name the same three core data types somewhere in their documents: account, communication, and technical data. Several also reserve rights to ingest "publicly available information" or feature-dependent data such as payment details, extending profiling beyond the romantic interface. Stated purposes similarly range from specific to highly open-ended. All platforms justify processing communication data for service provision or "security, integrity, and legal compliance", yet most simultaneously authorise broad secondary uses such as improvement and analytics. Some (e.g., Replika and Maoxiang) explicitly include marketing as a purpose; by contrast, Grok explicitly excludes it, and a majority omit any mention entirely. This leaves users unable to determine whether marketing occurs or is simply undisclosed. These selective disclosures create asymmetric transparency: platforms name data types but withhold clarity about which purposes apply and how intimate data flows through internal processes.

4.2.2 Data Sharing and Third-Party Access. All platforms acknowledge that user data may be shared with third parties, though transparency varies substantially. Grok and Replika offer the clearest breakdowns in their PPs, identifying categories such as service providers, affiliates, and law-enforcement authorities. Alarmingly, Grok's ToS grants the platform a license to share user content for "any purpose", directly conflicting with its PPs. Replika's PP similarly distinguishes between vendors supporting operations and entities receiving data for analytics or research. However, other platforms adopt less granular approaches. Nomi provides only a high-level statement that information may be shared as needed to provide "core functionality", without naming specific recipients. Chinese platforms generally list typical categories such as affiliated companies, service providers, and government or regulatory bodies.
Despite these disclosures, no platform offers detailed criteria for when sharing is triggered, how frequently access occurs, or what technical controls limit third-party processing. As a result, while the existence of third-party access pathways is clear, the scope and conditions of such access remain broadly defined.

4.2.3 Retention & Deletion. Across the six platforms, retention and deletion policies share a common structure but vary significantly in specificity. Most rely on broad clauses such as retaining data "for as long as necessary" or "as required by law". Nomi is the only service to promise immediate erasure, stating that data will be "immediately deleted and cannot be recovered" (Nomi ToS). Others provide more conditional timelines: Grok (PP) describes deletion requests entering a processing queue, while Maoxiang (PP) and Zhumengdao (PP) cite statutory log-retention requirements applicable in their jurisdictions. Replika (PP) offers detailed retention periods for different data categories but, unlike Grok, does not specify an operational deletion window. Across all platforms, this user-oriented framing coexists with broad ToS provisions allowing providers to remove content or terminate accounts at their discretion. As a result, formal deletion rights are paired with expansive platform authority over data removal and account shutdown.

4.2.4 Ownership: IP Rights and Human Review. Across all six platforms, ownership language signals user control but is substantially limited by broad licensing and review rights. Western platforms allow users to retain copyright while simultaneously claiming sweeping licenses that undermine exclusivity: for example, Replika (ToS) requires a "perpetual, irrevocable, and sublicensable" license, and Nomi (ToS) permits the use of user content in "financing, sale, [or] transfer" of the service, effectively treating intimate chat histories as corporate assets.
Chinese platforms likewise deploy an "acknowledge-and-appropriate" structure, with Xingye (ToS) further coupling its license with stringent user liability clauses. Regardless of these formal claims, all Western platforms reserve rights for "authorised personnel" to access or review content, such as Grok's review for "improving product features" or investigating misuse (Grok PP). These provisions make clear that ownership, whether nominally held by users or claimed by platforms, does not prevent internal access for moderation, safety, or development.

4.3 Disclosure of GenAI Use (RQ2)

We examined how platforms disclose the specific mechanics of their LLMs, particularly regarding training data sources and liability for GenAI-generated outputs.

4.3.1 Training Data Disclosure & Use. Only Grok provides any information about pre-training sources, noting that its models use "publicly available information on the internet" (Grok PP). Other platforms avoid disclosing external data provenance and state only that user-generated content may be used to "train", "optimise", or "improve" their systems (Replika PP; Nomi PP; Maoxiang PP; Xingye PP; Zhumengdao PP).

4.3.2 Opt-Out Mechanisms. Only three platforms offer any way to refuse model-training use. Grok (PP) allows users to disable training use directly through a settings toggle; Zhumengdao (PP) recognises a right to object but requires users to "contact us [. . . ] to request the withdrawal", making the process more labour-intensive. Maoxiang (PP) offers both opt-out approaches. All other platforms that state they use user content for training provide no opt-out mechanism, meaning user interactions are treated as default training inputs with no procedural control. However, a fundamental contradiction exists: the perpetual usage rights secured in these platforms' IP clauses (§4.2.4) nullify the very purpose of their offered opt-out mechanisms.

4.3.3 Disclaimers & Allocation of Responsibility.
Four platforms (Grok, Maoxiang, Zhumengdao, and Xingye) explicitly limit responsibility for AI reliability, though with different levels of specificity. Zhumengdao (PP) issues the strongest warning, requiring users to "independently verify" outputs, "especially regarding numbers, time, and factual descriptions", and cautioning that accuracy cannot be "100 percent guaranteed". Maoxiang (PP) likewise states that outputs are "for reference only" and that users bear all consequences arising from reliance on their "authenticity, accuracy, or reliability", while Grok and Xingye use broader warranty disclaimers in their ToS that emphasise the AI service may be interrupted, erroneous, or fail to meet expectations and that errors need not be fully corrected. Across these documents, hallucination and other failures are framed as risks that users, rather than platforms, must ultimately absorb.

4.3.4 Mandatory AI Labelling. All three Chinese platforms commit to labelling AI-generated content in accordance with China's 2023 AIGC Interim Measures [4]. Their terms state that providers may add "labels" or watermarks to outputs and that users must clearly mark AI-generated content when sharing it and may not remove such labels (e.g., Xingye ToS; Maoxiang ToS; Zhumengdao ToS). None of the Western platforms mentions comparable requirements, underscoring a regulatory divide in how transparency obligations are defined across regions.

4.4 Platform-Claimed Safeguards (RQ3)

4.4.1 Age Safety & Protection of Minors. All six platforms formally claim to protect minors, but the strength and specificity of age-related safeguards vary considerably.
Chinese platforms consistently frame minor protection as a platform-led governance responsibility, employing real-name verification to identify suspected underage users and applying mandatory restrictions accordingly. Maoxiang implements the most stringent regime: once identified as a minor, an account enters a restricted mode that "cannot like, create, comment, share, top-up, or consume", and real-name verified minors "cannot exit minor mode until they turn 18". In contrast, English-language platforms offer only minimal, declarative age safeguards. Replika and Nomi simply state that their services are for adults and may delete underage accounts, without describing any detection or enforcement. Grok allows users as young as 13 and openly warns that its outputs may include "sexual situations, violence, [and] crude humour", yet provides no technical restrictions or minor-protection mode beyond a basic reporting channel.

4.4.2 Technical Security Measures. For security measures, Grok provides the most minimal commitments: its PP states only that the company "implements commercially reasonable technical, administrative, and organisational measures", without offering any concrete mechanisms or explanations. Nomi likewise discloses almost no technical safeguards; its ToS even cautions that data transmissions "may be unencrypted", making it the only platform to openly acknowledge the possibility of plaintext transfer. By contrast, the other platforms describe more concrete, system-level safeguards.
Across their privacy policies, they collectively reference measures such as "SSL encryption, secure servers with firewalls, and role-based access control" (Replika PP), "encrypted storage, access-permission controls, and breach-notification mechanisms" (Xingye PP), "industry-standard encryption and independent encrypted storage of sensitive data" (Maoxiang PP), and "encrypted storage and transmission, strict access controls, and incident-response procedures" (Zhumengdao PP). Together, these disclosures present a notably more specific and multi-layered security posture, in sharp contrast to the minimal, generic assurances offered by Grok and Nomi.

4.4.3 Content Safety & Governance. Across all six platforms, providers uniformly claim that they will intervene against unlawful or inappropriate content. Except for Nomi, every platform states that, upon detecting a violation, it may delete or block content, suspend or terminate accounts or services, and cooperate with authorities, including reporting severe cases. For example, Zhumengdao (PP) explicitly notes that it may "report relevant information to competent authorities in accordance with the law", and Grok (ToS) states that it will "cooperate with law enforcement". What differs is the level of specificity: Maoxiang is the only platform that names an actual review pathway, noting that content may be processed by "third-party content review providers", whereas the others simply assert that they conduct review without describing how it operates. Nomi offers the least detailed regime. Its ToS merely states that it may remove "unlawful, defamatory, harassing, abusive, or otherwise objectionable" content and terminate accounts, without further elaboration on moderation processes or enforcement mechanisms.
5 Discussion

Building on our findings (§4.2), platforms do not treat intimacy as a distinct category of sensitive data; instead, romantic disclosures are processed through the same pipelines as routine technical or account data, enabling broad reuse and retention with minimal constraints. Intimacy is further endangered by opaque data practices and, as our findings (§4.2.2, §4.2.4) show, the commercial extraction of user data far beyond users' reasonable expectations. This reveals a deeper governance problem: platform policies systematically strip intimacy of its relational significance, recasting disclosures as assets rather than components of a private relationship.

Our findings identify three mechanisms that make such commercial extraction possible (§4.2.4, §4.3.1, §4.3.2). Default training appropriation renders all romantic interactions available for model optimization unless users actively object (an option most platforms do not provide). Ownership reconstruction operates through expansive licenses or outright claims of corporate ownership, ensuring platforms retain downstream rights over affective histories regardless of users' expectations of confidentiality. Intimate-history assetization, most visible in Nomi's allowance of content use during financing or sale, transforms the accumulated traces of emotionally charged relationships into transferable corporate property. Together, these mechanisms illustrate how platforms convert relational exchanges into durable computational and economic resources.

Even where platforms nominally grant users ownership or control, intimate conversations lack meaningful privacy protections. Nomi's ToS even acknowledges that data transmissions "may be unencrypted", exposing disclosures to interception (§4.4.2). Western platforms reserve broad rights for "authorised personnel" to access user content, and Maoxiang delegates review to third-party contractors (§4.2.4).
These practices reveal that user ownership is largely symbolic: operational control resides with platform employees and external reviewers. The premise of a private, one-to-one relationship with an AI partner is thus incompatible with infrastructures that treat intimacy as content to be inspected, moderated, and repurposed.

Our preliminary analysis identifies two structural barriers limiting meaningful user control and raising key governance challenges. First, a temporal mismatch: consent is obtained at registration, long before emotional attachment forms, even though attachment reliably increases the depth and intimacy of disclosure [33, 35]. Users often misunderstand or overlook policy terms at the point of registration [8, 23, 24], and as emotional bonds deepen, they may disclose increasingly sensitive information [17]. Yet the consent governing these disclosures remains fixed, granted before users experience the system's relational dynamics. This temporal mismatch is further reinforced by a second structural constraint: irreversibility. Even where platforms offer opt-out mechanisms, these typically operate only prospectively; intimate disclosures already incorporated into model training cannot meaningfully be withdrawn (§4.3.2). At the same time, liability disclaimers shift responsibility for GenAI-generated harm to users (§4.3.3). This creates a structural imbalance in which platforms benefit from progressive emotional vulnerability while users retain neither the ability to retract data nor recourse against downstream uses.

Limitations and Future Work. As a preliminary study, this work has several limitations that warrant further development.
First, future research should examine a broader range of romantic AI platforms, expanding not only the sample size but also the diversity of regional contexts in which these systems operate. Second, some of the governance mechanisms identified here, such as broad licensing clauses and liability disclaimers, also appear in general-purpose AI services and other digital applications (e.g., smart home platforms). Our analysis does not systematically compare romantic AI governance with that of non-romantic AI systems, as our focus was on understanding how governance operates within intimacy-oriented contexts. Future research could therefore conduct cross-domain policy comparisons to determine whether romantic AI introduces substantively distinct governance dynamics or instead amplifies existing platform practices through relational dependency. Third, future work should explore how to better support users' understanding of governance risks and potential "policy traps" through clearer and more contextualised policy cues. Fourth, empirical research is needed to examine how users interpret and negotiate these governance conditions in practice, while traceability studies should compare platforms' stated policies with their actual data practices. Finally, an interpretive question remains concerning how to understand the finding that platforms abstract relational interactions into technical and economic assets. One explanation is that companies view these systems primarily as technical tools and therefore apply standard governance models without fully addressing their relational implications. Yet, intentional or not, this framing overlooks the distinctive characteristics of intimate relationships.
Because these systems are designed to cultivate emotional dependency, governance approaches that fail to recognise this specificity warrant critical scrutiny. Future work will be necessary to examine this interpretive tension with additional empirical evidence, particularly by investigating how designers understand the relational implications of the systems they build.

6 Conclusion

This study examined Privacy Policies and Terms of Service across six romantic AI platforms to assess transparency and risk distribution. Our findings reveal platforms designed for emotional attachment while contractually treating intimate disclosures as commercial assets subject to training, transfer, and corporate control. Across the platforms examined, three recurring mechanisms enable this extraction: default training appropriation, ownership reconstruction, and intimate-history assetization, sustained by consent obtained before dependency forms and rendered effectively irreversible after integration. We also recommend future work examining user perceptions, auditing the policy-practice gap, and co-designing consent mechanisms accounting for progressive dependency. Recognising relational asymmetry as a basis for data protection warrants serious consideration in efforts to prevent the commodification of human vulnerability.

Acknowledgments

We thank the anonymous reviewers for their constructive feedback, which has helped us improve this work at its early stage. This research was supported by the INCIBE's strategic SPRINT (Seguridad y Privacidad en Sistemas con Inteligencia Artificial) C063/23 project with funds from the EU-NextGenerationEU through the Spanish government's Plan de Recuperación, Transformación y Resiliencia, by the Generalitat Valenciana under grant CIPROM/2023/23, and by grant PID2023-151536OB-100 funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.

References

[1] 2018. California Consumer Privacy Act (CCPA).
Retrieved Nov 12, 2025 from https://oag.ca.gov/privacy/ccpa
[2] 2018. General Data Protection Regulation (GDPR). Retrieved Oct 12, 2025 from https://gdpr-info.eu/
[3] 2021. Personal Information Protection Law of the People's Republic of China. Retrieved Oct 12, 2025 from http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm
[4] 2023. Interim Measures for the Management of Generative Artificial Intelligence Services 2023. Retrieved Oct 12, 2025 from https://www.gov.cn/zhengce/zhengceku/202307/content_6891752.htm
[5] Benjamin Andow, Samin Yaseer Mahmud, Wenyu Wang, Justin Whitaker, William Enck, Bradley Reaves, Kapil Singh, and Tao Xie. 2019. PolicyLint: investigating internal privacy policy contradictions on Google Play. In 28th USENIX Security Symposium (USENIX Security 19). 585–602.
[6] Anonymous. 2025. Harmful Traits of AI Companions. arXiv preprint arXiv:2511.14972 (2025).
[7] David Batty. 2025. 'She helps cheer me up': the people forming relationships with AI chatbots. Retrieved Oct 28, 2025 from https://www.theguardian.com/technology/2025/apr/15/she-helps-cheer-me-up-the-people-forming-relationships-with-ai-chatbots
[8] Lorrie Faith Cranor and Florian Schaub. 2020. How to (In)Effectively Convey Privacy Choices with Icons and Link Text. In 2020 USENIX Conference on Privacy Engineering Practice and Respect (PEPR 20).
[9] Mehul Reuben Das. 2023. AI Groom: US woman creates AI bot, marries it and starts family, calls 'him' the perfect husband. Retrieved Oct 28, 2025 from https://www.firstpost.com/world/us-woman-creates-ai-bot-marries-it-and-starts-family-calls-him-the-perfect-husband-12693012.html
[10] Pierre Dewitte. 2024. Better alone than in bad company: Addressing the risks of companion chatbots through data protection by design. Computer Law & Security Review 54 (2024), 106019.
[11] Ray Djufril, Jessica R Frampton, and Silvia Knobloch-Westerwick. 2025. Love, marriage, pregnancy: Commitment processes in romantic relationships with AI chatbots. Computers in Human Behavior: Artificial Humans 4 (2025), 100155.
[12] Shijing He, Xuchen Wang, Yaxiong Lei, Chi Zhang, Ruba Abu-Salma, and Jose Such. 2026. Investigating Bystander Privacy in Chinese Smart Home Apps. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. ACM, 1–24.
[13] Stuart Heritage. 2025. 'I felt pure, unconditional love': the people who marry their AI chatbots. Retrieved Oct 28, 2025 from https://www.theguardian.com/tv-and-radio/2025/jul/12/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots
[14] Jerlyn QH Ho, Meilan Hu, Tracy X Chen, and Andree Hartanto. 2025. Potential and pitfalls of romantic Artificial Intelligence (AI) companions: A systematic review. Computers in Human Behavior Reports 19 (2025), 100715.
[15] Yousra Javed, Elham Al Qahtani, and Mohamed Shehab. 2021. Privacy policy analysis of banks and mobile money services in the Middle East. Future Internet 13, 1 (2021), 10.
[16] Theodore Kouros. 2024. Digital Mirrors: AI Companions and the Self. Societies 14, 10 (2024), 200. https://www.mdpi.com/2075-4698/14/10/200
[17] Rongjun Ma, Shijing He, Jose Luis Martin-Navarro, Xiao Zhan, and Jose Such. 2026. Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. ACM, 1–24.
[18] Lisa Mekioussa Malki, Ina Kaleva, Dilisha Patel, Mark Warner, and Ruba Abu-Salma. 2024. Exploring Privacy Practices of Female mHealth Apps in a Post-Roe World. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. ACM, 1–24. https://dl.acm.org/doi/10.1145/3613904.3642834
[19] Lisa Mekioussa Malki, Ina Kaleva, Dilisha Patel, Mark Warner, and Ruba Abu-Salma. 2024.
Exploring privacy practices of female mHealth apps in a post-Roe world. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–24.
[20] Neil McArthur. 2025. More people are considering AI lovers, and we shouldn't judge. Retrieved Oct 28, 2025 from https://theconversation.com/more-people-are-considering-ai-lovers-and-we-shouldnt-judge-260631
[21] Nora McDonald, Sarita Schoenebeck, and Andrea Forte. 2019. Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human-Computer Interaction 3, CSCW, 1–23.
[22] Niloofar Mireshghallah, Maria Antoniak, Yoav More, Yejin Choi, and Golnoosh Farnadi. 2024. Trust No Bot: Discovering Personal Disclosures in Human–LLM Conversations in the Wild. arXiv preprint arXiv:2407.11438 (2024). https://arxiv.org/abs/2407.11438
[23] Midas Nouwens, Ilaria Liccardi, Michael Veale, David Karger, and Lalana Kagal. 2020. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[24] Jonathan A Obar and Anne Oeldorf-Hirsch. 2020. The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society 23, 1 (2020), 128–147.
[25] Ehimare Okoyomon, Nikita Samarin, Primal Wijesekera, Amit Elazari Bar On, Narseo Vallina-Rodriguez, Irwin Reyes, Álvaro Feal, Serge Egelman, et al. 2019. On the ridiculousness of notice and consent: Contradictions in app privacy policies. In Workshop on Technology and Consumer Protection (ConPro 2019), in conjunction with the 39th IEEE Symposium on Security and Privacy.
[26] Anna-Marie Ortloff, Matthias Fassl, Alexander Ponticello, Florin Martius, Anne Mertens, Katharina Krombholz, and Matthew Smith. 2023. Different researchers, different results?
analyzing the inuence of researcher experience and data type during qualitative analysis of an interview and survey study on security advice. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems . 1–21. [27] Przemysław Pałka and Marco Lippi. 2021. Big data analytics, online terms of service and privacy policies. In Research handb ook on big data law . Edward Elgar Publishing, 115–134. [28] Pat Pataranutaporn, Sheer Karny , Chayapatr Archiwaranguprok, Constanze Albrecht, A uren R Liu, and Pattie Maes. 2025. " My Boyfriend is AI": A Computational Analysis of Human- AI Companionship in Reddit’s AI Community . arXiv preprint arXiv:2509.11391 (2025). [29] Joni-Roy Piispanen, Tinja Myllyviita, Ville V akkuri, and Rebekah Rousi. 2024. Smoke Screens and Scapegoats: The Reality of General Data Pr otection Regulation Compliance–Privacy and Ethics in the Case of Replika AI. arXiv preprint arXiv:2411.04490 (2024). [30] Abdelrahman Ragab, Mohammad Mannan, and Amr Y oussef. 2024. “Trust Me Over My Privacy Policy”: Privacy Discrepancies in Romantic AI Chatbot Apps. In 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) . IEEE, 484–495. Manuscript submitted to ACM The Governance of Intimacy: A Preliminar y Policy Analysis of Romantic AI Platforms 11 [31] Neil Sahota. 2025. AI Companions: Popular , Personal, and Increasingly Problematic . Retrieved Oct 28, 2025 from https://ww w .neilsahota.com/ai- companions- popular- personal- and- increasingly- problematic/ [32] Y ashothara Shanmugarasa, Ming Ding, Chamikara Mahawaga Arachchige, and Thierry Rakotoarivelo. 2025. Sok: The privacy paradox of large language models: Advancements, privacy risks, and mitigation. In Proceedings of the 20th ACM Asia Conference on Computer and Communications Security . 425–441. [33] Marita Skjuve, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. 2021. My chatb ot companion-a study of human-chatbot relationships. 
International Journal of Human-Computer Studies 149 (2021), 102601.
[34] Shikha Soneji, Mitchell Hoesing, Sujay Koujalgi, and Jonathan Dodge. 2024. Demystifying Legalese: An Automated Approach for Summarizing and Analyzing Overlaps in Privacy Policies and Terms of Service. arXiv preprint arXiv:2404.13087 (2024).
[35] Vivian Ta, Caroline Griffith, Carolynn Boatfield, Xinyu Wang, Maria Civitello, Haley Bader, Esther DeCero, and Alexia Loggarakis. 2020. User experiences of social support from companion chatbots in everyday contexts: thematic analysis. Journal of Medical Internet Research 22, 3 (2020), e16235.
[36] Xuetong Wang, Ching Christie Pang, and Pan Hui. 2025. 'My Dataset of Love': A Preliminary Mixed-Method Exploration of Human-AI Romantic Relationships. 9, 7, Article CSCW351 (Oct. 2025), 34 pages. doi:10.1145/3757532

A Appendix

A.1 Data Practices Details

In this section, we present detailed definitions of the data types used, along with tables analysing their collection, usage, and disclosure.

• Account Data (Table 3): Information provided by the user when registering for, logging into, or maintaining an account on a romantic AI platform, such as name, phone number, email address, date of birth, and login credentials.
• Payment Data (Table 4): Information related to financial transactions on the platform, including payment method details, subscription information, and purchase history, whether processed directly by the platform or via third-party payment services.
• Communication Data (Table 5): Content generated through interactions between the user and the AI, including user-provided inputs (e.g., text messages, voice recordings, images, or uploaded files) and AI-generated outputs (e.g., responses, dialogue, or role-play content), as well as the resulting conversation history.
• Interests & Preferences Data (Table 6): Data reflecting a user's likes, dislikes, chosen conversation topics, interaction styles, or usage habits, including explicit selections and inferred preferences.
• Usage Data (Table 7): Records describing how a user interacts with the platform, such as clicks, page views, navigation paths, session duration, feature usage, and other activity logs.
• Feedback Data (Table 8): Information provided by the user about their experience with the service, such as ratings, reviews, survey responses, bug reports, or improvement suggestions.
• Social Media Data (Table 9): Information obtained through a user's connection to or interaction with external social media platforms, such as profile information, usernames, social connections, or content shared via linked accounts.
• Technical & Location Data (Table 10): Automatically collected device- and network-related information, such as IP address, device identifiers, operating system, browser type, application version, network information, and approximate location data.
• Publicly Available Data (Table 11): Information that is publicly accessible or obtained from public sources, such as publicly available online content or profiles.

In each table, the first two columns after the platform name indicate whether and how the data are collected; the seven columns from Maintain Service to Marketing indicate Use/Process purposes; and the seven columns from Service Providers to Public indicate Disclose/Share recipients.

Table 3. Account Data Collection, Usage, and Disclosure Analysis

APP | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | No | Yes | –
Grok (PP-EU) | Yes | Direct | Yes | Yes | – | – | Yes | Yes | Yes | Yes | – | Yes | – | – | – | –
Grok (ToS) | Yes | Direct | Yes | – | – | Yes | Yes | Yes | – | – | – | Yes | – | – | – | –
Nomi (PP) | Yes | Direct | Yes | – | – | – | – | – | – | Yes | – | – | – | – | Yes | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | – | Yes | –
Replika (ToS) | Yes | Direct | Yes | – | – | Yes | – | Yes | – | – | – | – | – | – | Yes | –
Maoxiang (PP) | Yes | Direct | Yes | – | – | Yes | Yes | Yes | – | – | – | Yes | Yes | – | – | Yes
Maoxiang (ToS) | Yes | Direct | – | – | – | – | – | Yes | – | – | – | Yes | – | – | – | –
Zhumengdao (PP) | Yes | Direct | Yes | Yes | – | Yes | Yes | Yes | – | – | – | – | – | – | Yes | Yes
Zhumengdao (ToS) | Yes | Direct | – | Yes | – | Yes | Yes | Yes | – | – | – | Yes | – | – | – | –
XingYe (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | – | – | Yes | – | – | – | –
XingYe (ToS) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | Yes | – | – | Yes | – | – | – | –

Table 4. Payment Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Through 3rd party | Yes | – | – | – | Yes | Yes | – | Yes | Yes | Yes | – | – | – | –
Grok (PP-EU) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | Yes | – | – | – | – | – | –
Grok (ToS) | Yes | Through 3rd party | Yes | – | – | – | – | Yes | – | Yes | – | Yes | – | – | – | –
Nomi (PP) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Through 3rd party | Yes | Yes | – | – | Yes | Yes | – | Yes | Yes | Yes | Yes | – | Yes | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | – | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | Yes | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Through 3rd party | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 5. Communication Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | – | Yes | Yes | Yes | Yes | – | – | –
Grok (PP-EU) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | – | Yes | – | – | – | – | – | –
Grok (ToS) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | – | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Nomi (PP) | Yes | Direct | Yes | – | Yes | – | – | – | – | Yes | – | – | – | – | Yes | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | – | – | Yes | Yes | – | – | No | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Direct | Yes | – | Yes | – | – | Yes | – | Yes | – | – | – | – | Yes | –
Maoxiang (ToS) | Yes | Direct | Yes | – | Yes | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Direct | Yes | Yes | Yes | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | Yes | Direct | – | Yes | – | – | – | – | – | – | – | Yes | – | – | No | –
XingYe (PP) | Yes | Direct | Yes | Yes | Yes | – | Yes | – | Yes | – | Yes | Yes | – | – | – | –
XingYe (ToS) | Yes | Direct | Yes | Yes | Yes | – | Yes | – | Yes | – | Yes | Yes | – | – | – | –

Table 6. Interests and Preferences Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Grok (PP-EU) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Grok (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | – | Yes
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Direct | – | – | Yes | – | – | – | Yes | Yes | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | Yes | Direct | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (ToS) | Yes | – | – | Yes | Yes | – | Yes | – | Yes | Yes | – | – | – | – | – | –

Table 7. Usage Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | No | Yes | Yes | Yes | Yes | No | – | –
Grok (PP-EU) | Yes | Direct | Yes | Yes | – | – | Yes | Yes | Yes | Yes | – | – | – | – | – | –
Grok (ToS) | Yes | Direct | Yes | – | Yes | – | – | – | Yes | – | – | – | – | – | Yes | –
Nomi (PP) | Yes | Direct | Yes | – | Yes | – | – | – | – | Yes | – | – | – | – | Yes | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | – | Yes | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Direct | Yes | – | – | – | Yes | – | – | – | – | – | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Direct | – | – | – | – | Yes | Yes | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 8. Feedback Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Direct | Yes | Yes | Yes | No | Yes | Yes | – | Yes | Yes | Yes | Yes | – | No | –
Grok (PP-EU) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | Yes | Yes | – | – | – | – | – | –
Grok (ToS) | Yes | Direct | – | – | Yes | – | – | – | Yes | – | – | – | – | – | – | –
Nomi (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Direct | – | Yes | – | Yes | – | Yes | – | – | – | Yes | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Direct | Yes | Yes | – | Yes | – | – | Yes | – | – | – | – | – | – | –
Zhumengdao (ToS) | Yes | Direct | Yes | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | Yes | Direct | Yes | – | Yes | Yes | – | – | Yes | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 9. Social Media Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Through 3rd party | Yes | Yes | Yes | No | Yes | Yes | – | Yes | Yes | Yes | Yes | – | Yes | –
Grok (PP-EU) | Yes | Through 3rd party | – | – | – | – | – | – | Yes | – | – | – | – | Yes | – | –
Grok (ToS) | Yes | Through 3rd party | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (ToS) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Through 3rd party | – | Yes | Yes | – | Yes | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | Yes | Through 3rd party | – | Yes | Yes | – | Yes | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 10. Technical Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Direct | Yes | Yes | Yes | Yes | Yes | Yes | – | Yes | Yes | Yes | Yes | – | Yes | –
Grok (PP-EU) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | Yes | Yes | – | – | – | – | – | –
Grok (ToS) | Yes | Direct | Yes | Yes | – | – | Yes | – | – | – | – | – | – | – | – | –
Nomi (PP) | Yes | Direct | Yes | – | Yes | – | – | – | – | Yes | – | – | – | – | Yes | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Direct | Yes | Yes | Yes | – | Yes | Yes | Yes | Yes | Yes | Yes | Yes | – | Yes | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Direct | Yes | – | Yes | – | Yes | Yes | Yes | Yes | Yes | – | – | – | Yes | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Direct | Yes | Yes | – | – | Yes | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | Yes | Direct | Yes | Yes | Yes | – | Yes | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 11. Publicly Available Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | Yes | Through 3rd party | Yes | – | Yes | – | Yes | Yes | – | Yes | Yes | Yes | Yes | – | – | –
Grok (PP-EU) | Yes | Through 3rd party | – | – | Yes | – | – | – | – | – | – | – | – | – | – | –
Grok (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Through 3rd party | Yes | – | – | – | Yes | Yes | – | Yes | – | Yes | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | Yes | Through 3rd party | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | Yes | Through 3rd party | Yes | Yes | – | – | Yes | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –

Table 12. Face and Head Movement Data Collection, Usage, and Disclosure Analysis

Platform | Collect? | Collect Method | Maintain Service | User Support | Improve/Research | Communicate | Security Integrity | Legal Purposes | Marketing | Service Providers | Business Transfers | Legal Authorities | Related Companies | Advertisers | Other 3rd Parties | Public
Grok (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Grok (PP-EU) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Grok (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Nomi (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Replika (PP) | Yes | Through 3rd party | Yes | – | – | – | – | – | – | – | – | – | – | – | No | –
Replika (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Maoxiang (PP) | Yes | Direct & Through 3rd party | Yes | – | – | – | – | – | – | Yes | – | – | – | – | – | –
Maoxiang (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
Zhumengdao (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (PP) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
XingYe (ToS) | – | – | – | – | – | – | – | – | – | – | – | – | – | – | – | –
