InnerPond: Fostering Inter-Self Dialogue with a Multi-Agent Approach for Introspection

Hayeon Jeon, hci+d lab., Seoul National University, Seoul, Republic of Korea, jhy94520@snu.ac.kr
Dakyeom Ahn*, hci+d lab., Seoul National University, Seoul, Republic of Korea, adklys@snu.ac.kr
Sunyu Pang*, hci+d lab., Seoul National University, Seoul, Republic of Korea, sunyu.pang@snu.ac.kr
Yunseo Choi*, hci+d lab., Seoul National University, Seoul, Republic of Korea, dbstj0531@snu.ac.kr
Suhwoo Yoon, hci+d lab., Seoul National University, Seoul, Republic of Korea, yeopil@snu.ac.kr
Joonhwan Lee, hci+d lab., Seoul National University, Seoul, Republic of Korea, joonhwan@snu.ac.kr
Eun-mee Kim†, Department of Communication, Seoul National University, Seoul, Republic of Korea, eunmee@snu.ac.kr
Hajin Lim†, hci+d lab., Seoul National University, Seoul, Republic of Korea, hajin@snu.ac.kr

Abstract

Introspection is central to identity construction and future planning, yet most digital tools approach the self as a unified entity. In contrast, Dialogical Self Theory (DST) views the self as composed of multiple internal perspectives, such as values, concerns, and aspirations, that can come into tension or dialogue with one another. Building on this view, we designed InnerPond, a research probe in the form of a multi-agent system that represents these internal perspectives as distinct LLM-based agents for introspection. Its design was shaped through iterative explorations of spatial metaphors, interaction scaffolding, and conversational orchestration, culminating in a shared spatial environment for organizing and relating multiple inner perspectives. In a user study with 17 young adults navigating career choices, participants engaged with the probe by co-creating inner voices with AI, composing relational inner landscapes, and orchestrating dialogue as observers and mediators, offering insight into how such systems could support introspection.
Overall, this work oers design implications for AI-supported introspection tools that enable exploration of the self ’s multiplicity . CCS Concepts • Human-centered computing → Natural language interfaces . ∗ These authors contributed equally to this research. † Co-corresponding authors This work is licensed under a Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International License. CHI ’26, Barcelona, Spain © 2026 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-2278-3/2026/04 https://doi.org/10.1145/3772318.3791248 Ke ywords Introspection, inner dialogue, self-reection, LLM, multi-agent, Di- alogical Self Theory , DST A CM Reference Format: Hayeon Jeon, Dakyeom Ahn, Sunyu Pang, Yunseo Choi, Suhwoo Y oon, Joonhwan Lee, Eun-mee Kim, and Hajin Lim. 2026. InnerPond: Fostering Inter-Self Dialogue with a Multi-A gent Approach for Introspection. In Pro- ceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI ’26), A pril 13–17, 2026, Barcelona, Spain. A CM, New Y ork, N Y , USA, 25 pages. https://doi.org/10.1145/3772318.3791248 1 Introduction Individuals constantly face complex life choices [ 101 ], where introspection—the critical examination of one’s own thoughts, values, and emotions [ 115 ]—plays a crucial r ole in navigating these decisions [ 13 , 19 ]. A key mechanism that facilitates introspection is inner dialogue, in which dier ent internal perspectives engage in a mental conversation [ 31 , 85 , 92 ]. T o b etter understand the complexities of this inner dialogue, the Dialogical Self Theor y (DST) [ 41 , 45 ] provides a theoretical framework. Rather than viewing the self as an isolated, unied entity , DST conceives it as a relational process—a “dynamic multiplicity of I-positions” [ 42 ]. Put simply , our mind hosts various ‘selves’—like ‘the creative self ’ or ‘the fearful self ’—each possessing a distinct perspective and narrative [ 44 ]. 
These I-positions are not isolated; they engage in a constant dialogue, influencing and repositioning one another within a dialogical space [40, 43]. Through such interaction, a more reflective and emergent self-understanding could emerge [44, 45]. While inner dialogue is a vital mechanism for introspection, in its spontaneous form it may remain constrained within familiar perspectives or spiral into unproductive rumination [14, 34]. To scaffold this process more constructively, various aids for introspection have been explored, from artifacts such as journals, photos, and music [21, 91, 110] to more recent LLM-based systems [54, 108].

Figure 1: InnerPond fosters AI-mediated introspection through multi-agent dialogue. The system transforms users' I-positions into conversing LLM agents (lotus leaves), supporting dialogue among multiple I-positions within the self.

These approaches have provided valuable ways to foster introspection. However, by framing users primarily as singular, unified selves, they have given less attention to the plural and dynamic nature of the self, particularly the way multiple internal voices coexist, compete, and negotiate with one another during complex life choices [44, 45]. We aim to address this gap by translating DST's concept of a dialogical self into an interactive system and, through this instantiation, generating design knowledge about supporting dialogue among one's multiple selves. In particular, LLMs can now embody distinct personas with consistent perspectives [23, 103], enabling new ways to externalize this plurality. Leveraging this capability, we undertake a design exploration that makes inner multiplicity explicit, representing multiple I-positions as distinct AI agents that can dialogue with one another and with the user.
Through this approach, we explore how people experience and make sense of such multi-agent inner dialogue. Building on this, we address the following research questions (RQs):

• RQ1: What are the key design considerations for supporting dialogue among multiple selves in exploring the plural and dynamic self?
• RQ2: How do people experience and make sense of engaging with their multiple selves through multi-agent AI-mediated introspection?

In addressing the RQs, we designed and developed InnerPond as a research probe [12, 20, 49]: not to evaluate system effectiveness, but to inspire design exploration of how people experience dialogical engagement with their multiple I-positions. Informed by design considerations around spatial metaphors, scaffolding structures, and multi-agent conversational dynamics, InnerPond represents an individual's various I-positions as independent LLM-based agents that can be visualized, elaborated, and brought into dialogue within a shared space. Anchored in the lotus leaf and pond metaphor (Figure 1), InnerPond aims to enact coexistence and interconnection among multiple selves, helping users explore their inner landscape and move toward more emergent forms of self-understanding. To understand how people experience such engagement with their multiple selves, we conducted a user study (N=17) with young adults deliberating between two career paths, a context where competing internal voices often surface [3, 113] and structured support can help navigate their complexity [11, 64]. Our findings showed that through externalizing and dialoguing with their inner voices, participants could develop a meta-cognitive perspective, recognizing distinct selves as interconnected parts of a larger self. For some, this shift reframed how they approached career decisions, moving from reacting to external pressures toward reflecting on internal alignment.
Through this exploration, we use the term "inter-self communication" to describe dialogue among one's own inner voices, as conceptualized in DST. Unlike solitary reflection, it gives distinct voices space to interact, making the multiplicity of the self more tangible; unlike dialogue with others, it keeps exchanges grounded within the self. This perspective opens broader possibilities for designing AI systems that support people in exploring, elaborating, and engaging with their plural selves. We discuss the implications of this approach, considering both the opportunities it offers and the design tensions that accompany it. Taken together, this paper makes two contributions to HCI:

• Design considerations for plural and dynamic self engagement: Informed by our design journey with InnerPond, we articulate key considerations for translating DST into interactive experience: spatial metaphors that convey coexistence and interconnection, scaffolded stages that structure the introspective journey, and multi-agent dynamics that balance consistency with flexibility. These considerations extend current approaches to AI-supported introspection by shifting from a unified to a dialogical view of the self.
• Empirical insights into inter-self dialogue: We provide rich qualitative insights into how people experience and make sense of engaging with their multiple selves, revealing how inter-self dialogue is experienced and interpreted when instantiated through a multi-agent system. Our findings surface both opportunities and design tensions, offering implications for future systems that aim to support the plural and dynamic nature of the self.

Figure 2: Core concepts of Dialogical Self Theory, illustrating the relationship between multiple I-positions in the dialogical space and a meta-position.
2 Related Work

This section examines inner dialogue as a core mechanism of introspection and introduces Dialogical Self Theory. We then review existing approaches to supporting introspection. Finally, we highlight the potential of multi-agent systems as a means to support dialogical forms of introspection.

2.1 Dialogical Self Theory as a Framework for Understanding Inner Dialogue

Introspection is a practice of looking inward and exploring one's own thoughts, values, and emotions [19]. In psychology, it is characterized as explicitly observing and reflecting on an individual's mental state [13]. Distinguished from simply reflecting on or recollecting memories, introspection involves critically evaluating significant experiences and patterns in one's life, and continuously questioning what has been achieved and what is desired [115]. Introspection and self-reflection are closely related; both involve examining one's mental states to foster self-understanding [4, 32]. However, they differ in scope: self-reflection refers to the cognitive process of examining one's thoughts and actions, whereas introspection encompasses a broader philosophical construct that includes emotional and evaluative dimensions [48]. Through this process, individuals search for answers concerning their identity, values, and life priorities, fostering deeper self-understanding and identity construction [15, 16]. Particularly in life choice contexts such as career exploration, active introspection on one's identity and life directions plays a crucial role [96]. A key internal process facilitating introspection is inner dialogue [31]. Inner dialogue is a process where "different internal perspectives engage in mental conversation," exchanging views and positions within the mind [85, 92]. Unlike simple monologues, inner dialogue reflects the dialogical nature of the self and simulates social dialogical relations internally [2, 46].
Through confronting and integrating diverse internal perspectives, it enables reflexive and contemplative self-understanding and identity construction [66, 93]. To better conceptualize this process, Dialogical Self Theory (DST) [41, 45] offers a structured framework for understanding the complex dynamics of inner dialogue. It conceptualizes inner dialogue as an exchange of thoughts and perspectives between distinct I-positions within the self, helping identify and interpret various forms of internal dialogical activity during introspective processes [85]. DST reconceptualizes the self not as an isolated inner entity but as an inherently relational process: a dialogical being described as a dynamic multiplicity of I-positions within the society of mind [42]. Each I-position represents a narratively structured unit of the self with its own perspective and voice. For example, I-positions such as "I as dreamer" or "I as fearful" reflect different internal aspects of the person [44], engaging in dialogical relationships [43]. These I-positions are shaped as external voices become internalized within the self. Through multidimensional inner dialogue unfolding in a dialogical space, these I-positions are dynamically repositioned through DST's core mechanisms of positioning, counter-positioning, and repositioning, with their relationships continually restructured [40, 45]. DST further introduces the concept of the 'meta-position', a higher-level vantage point from which a person can observe and reflect on their various I-positions. This capacity for meta-level reflection serves as a crucial mechanism for self-understanding and integration [43–45]. This view resonates with recent theorizing in personal informatics, where the self has been reconceptualized as dynamic and constructed through ongoing interaction with the world [97].
Yet, while such work emphasizes the self's temporal and social dimensions (how it evolves across past, present, and future, and in various social contexts), it gives less attention to the dialogical mechanisms through which distinct aspects of the self actively engage with one another. DST complements this perspective by foregrounding these inter-positional dynamics, offering a foundation for understanding introspection not merely as self-observation, but as a process through which self-understanding emerges from dialogue among multiple internal voices.

2.2 Existing Approaches to Supporting Introspection

While inner dialogue serves as a key mechanism for introspection, its naturally occurring forms are often limited by the individual's existing experiences and perspectives, resulting in constrained self-understanding [14]. In situations where thoughts and emotions are deeply intertwined, individuals can fall back on habitual thinking patterns or become trapped in negative cycles such as rumination [34]. To address these challenges, various introspective interventions have been developed. Traditional introspective interventions have supported self-reflection through personal artifacts such as photographs, letters, music, and diary writing [8, 9, 63, 105]. Among these, diary writing has been one of the most common methods for critically evaluating past experiences and questioning the future [86]. However, these writing-based approaches, grounded in a person's existing perspective [79], may have limitations in offering new insights or alternative viewpoints [80]. Further, digital technologies have expanded these practices, incorporating journals [88, 110], photographs [21, 91], music [60, 84, 88], and social media data [26]. A parallel line of work has explored quantified self-tracking, where people reflect by examining historical records of sensor-based data [28, 99].
However, research in reflective informatics emphasizes that reflection does not arise automatically from exposure to personal artifacts or data; carefully scaffolded experiences are needed [6, 107]. To provide such scaffolding, early conversational agents explored structured reflection through rule-based interactions [1, 62, 74]. These systems have provided valuable entry points for reflection, though they often offered limited adaptability to individual contexts [68]. More recently, Large Language Models (LLMs) have enabled personalized and adaptive support for reflection [5, 56]. ExploreSelf [108] supports users in articulating personal challenges through adaptive questioning. In mental health contexts, MindfulDiary [61] helps psychiatric patients document daily experiences through conversational journaling. For career exploration, Letters from Future Self [54] facilitates introspection through dialogues with LLM-based agents that simulate a future self. These systems represent meaningful steps forward, yet they typically frame users as singular, unified selves. Given that the self can be viewed as dialogical and multi-positional, how to support exchanges among distinct internal perspectives remains underexplored. This motivates our approach to facilitate inner dialogue across multiple I-positions, a design goal we explore in this paper.

2.3 Potential of Multi-Agent Systems for Supporting Inner Dialogues

Recent studies have shown that LLMs are remarkably capable of simulating distinct personas [23, 103]. Beyond surface-level role-playing, LLMs can now generate agents that reflect the complex interplay of identity components, such as personality traits [55, 71] and value systems [121, 123], mirroring the richness of real-world individuals [67]. These agents can maintain consistent narratives and unique perspectives while responding adaptively to different contexts [54].
Building on this, multi-agent systems are emerging as powerful tools for surfacing diverse perspectives and domain-specific insights across fields [36]. In particular, they have shown promise in collaborative problem-solving, decision support, and creative ideation [112]. For instance, in software engineering, agents simulating roles such as product managers, architects, and engineers can coordinate to tackle complex development tasks [47]. In creativity support, agents with distinct backgrounds and viewpoints can engage in brainstorming sessions, generating more original ideas than a single LLM operating alone [73]. Here, we draw attention to the potential of combining LLMs' persona-simulation capabilities with a multi-agent architecture to support inner dialogue. By simulating distinct personas as different I-positions, such systems may capture complex dialogical processes, including disagreement, perspective shifts, and finding common ground: forms of interaction that prior studies have documented among agents within multi-agent structures [100, 122]. Despite this potential, the use of LLM-based multi-agent systems for introspection through inner dialogue remains largely unexplored [25, 29, 30]. In response, we developed InnerPond, a research probe that applies a multi-agent LLM architecture to externalize inner voices as dialogical agents, enabling users to engage them in structured inner dialogue.

3 InnerPond: Translating DST into Interactive Experience

Because Dialogical Self Theory (DST) conceptualizes the self as a plurality of interacting inner voices rather than a single unified identity, translating this theoretical perspective into an interactive system demanded careful attention to the experiential conditions under which inner dialogue can unfold.
Our design process followed a research-through-design approach [125, 126], using iterative prototypes as probes to explore how DST could be materially instantiated and how users might meaningfully engage with their plural selves. At the outset, we established three core design goals grounded in DST's foundational principles. These goals functioned not as technical requirements but as orienting principles that guided design experimentation and decision-making throughout the development of InnerPond.

Figure 3: Three-phase metaphor evolution for the dialogical space: from group chat as battle ring (Phase 1), to stones highlighting individuality yet emphasizing separation (Phase 2), to lotus leaves appearing distinct yet sharing roots, enabling a meta-position perspective (Phase 3).

Design Goal 1 (DG1): Support coexistence rather than resolution. Inner conflict is commonly framed as a debate in which one position must ultimately prevail [82]. In contrast, DST emphasizes coexistence, negotiation, and the ability to hold tensions without prematurely collapsing them into a final answer [44, 45]. Our goal was to create interaction structures that sustain multiple voices in parallel and resist convergence toward a single dominant perspective.

Design Goal 2 (DG2): Ensure equal attention and legitimacy for each inner voice. Plurality is not simply the presence of many voices; it requires that each voice be heard, recognized, and granted the opportunity to contribute meaningfully [27, 94]. We sought to design a system that encourages engagement with both dominant and quieter perspectives, supporting self-authorship and preventing strong voices from overshadowing vulnerable or easily overlooked ones.

Design Goal 3 (DG3): Foster relational connectedness and meta-positional perspective.
Externalizing multiple voices risks fragmenting the self into disconnected pieces. DST emphasizes that inner perspectives are dynamically related, and that reflection emerges through perceiving these relationships from a higher-level meta-position [35, 124]. Our goal was to support users in experiencing a holistic, relational structure of identity: shifting from being inside a conflict to observing and orchestrating it from a meta-positional perspective.

These three goals served as the conceptual backbone of our design inquiry. The subsequent sections detail how iterative explorations of spatial metaphors, interaction scaffolding, and conversational orchestration functioned as design experiments to operationalize and balance these goals, culminating in the current form of InnerPond.

3.1 Metaphor Evolution for the Dialogical Space

Guided by these design goals, we explored spatial metaphors for a dialogical space where multiple inner voices could coexist as distinct yet interconnected agents. This exploration evolved through three phases, each iteration revealing tensions that prompted further refinement.

Phase 1: Group Chat Room as Battle Ring. In the early design phase, we envisioned a 'debate arena' based on the observation that inner conflict often manifests as competing perspectives [82]. Borrowing the familiar group chat interface [57, 78], we envisioned creating an environment where each voice could clash in real time. While this approach was promising for externalizing conflict, we realized that it implied the assumption that "one voice must win," directly contradicting DG1's emphasis on coexistence without premature resolution. This prompted us to seek a space where multiple perspectives could evolve together, rather than compete for dominance.

Phase 2: Landscape of Mind through Stone Metaphor.
To design a space for coexistence rather than victory, we drew inspiration from DST-based therapies [51, 116] that use natural objects like stones [65] to symbolize I-positions and reconstruct identity through spatial arrangement. Translating this into a digital environment, we adopted a "Landscape of Mind" interface where users could arrange digital stones to construct their own inner maps. This approach addressed DG1 by allowing voices to coexist spatially, and partially supported DG2 by giving each I-position a distinct, tangible representation. However, while the stone metaphor effectively highlighted the individuality of each I-position, we recognized that it simultaneously emphasized separation and fixed identity, failing to address DG3's emphasis on relational connectedness and a meta-position perspective [40, 45].

Phase 3: Inner Pond through Lotus Leaf Metaphor. To address the limitations of stones as representations of selves, we explored alternative metaphors, including a 'Zen Garden' [42, 83]. However, its emphasis on stillness and fixed balance was misaligned with the dynamic, fluid nature of I-positions. We ultimately arrived at the "Lotus Leaf" metaphor, where each leaf appears independent on the surface while remaining connected through shared roots beneath the water, capturing both the distinctness and interconnectedness of I-positions. This metaphor integrates all three design goals: leaves coexist on the pond surface without one dominating another (DG1); each leaf maintains a distinct and visible identity (DG2); and the shared underwater roots, together with the user's bird's-eye perspective, naturally embody DST's concept of the 'meta-position', enabling relational connectedness and holistic observation (DG3).
3.2 Scaolding for Inter-Self Introspe ction With the spatial metaphor in place, w e shifted our focus to structur- ing how users would mo ve through the introspective experience. As a refer ence framework, we drew on “ Composing the Self ” [ 65 ], a DST -based therapeutic te chnique using stones described earlier . This therap y guides clients through a seven-step process—including 1) selecting stones as I-positions, 2) arranging them spatially to vi- sualize relationships, 3) labeling (naming) each stone, 4) examining the arrangement, 5) voicing each position, 6) repositioning stones, and 7) reecting on the composition as a whole. W e iteratively adapted these seven steps into four key stages, designed to follow a natural cognitive ow while lowering the bar- rier to engaging with one’s multiple selves. Rather than preserving each therapeutic step as a discrete phase, w e reorganized them into higher-level stages that reect how these activities unfold in an interactive, AI-mediated context. Specically , Steps 1 (selecting stones) and 3 ( labeling stones) were combined into Stage 1: I-position Construction , while Steps 2 (arranging), 4 (examining), and 6 (repositioning stones) were consolidated into Stage 2: Spatial Conguration . Step 5 (v oicing each p osition) was adapted as Stage 3: Dialogical Exchange , and Step 7 (reecting on the comp osition) became Stage 4: Reective Snapshot . The following subsections elaborate on how each stage reinterprets its corresponding therapeutic steps by lev eraging the aordances of an LLM-enabled, interactive medium. From Manual Selection to AI- Augmented Identication. The original therapy we dre w on [ 65 ] requires clients to structure and verbalize their inner world from scratch, demanding a high le vel of self-awareness and cognitive load [ 51 ]. W e envisaged transform- ing this into AI-augmented identication, leveraging the analytical capabilities of LLMs. 
Specically , we utilized LLMs to assist users in identifying their multiple selves, drawing on user-provided data such as personality traits, values, and personal narratives. T o avoid imposing a xed interpretation, we designe d these AI-generated I-positions as starting points that users can enrich through co- creation—exploring, editing, or conversing with the suggeste d inner voices in their own words. This approach served as the foundation for [Stage 1: I-p osition Construction] in InnerPond, helping users concretize ambiguous inner voices into tangible objects (indi- vidual lotus leaves) and recognize their multifaceted selves. From P hysical A rrangement to Digital Visualization. In the original therapy [ 65 ], clients are asked to express relationships between dierent selves thr ough physical distance and height of stones. W e translated this into digital manipulations, such as adjust- ing the position, size, and color of the lotus leaves. Unlike physical stones, these attributes can b e easily adjusted, allowing users to uidly e xplore and r eshape their inner relationships. This approach was implemented as [Stage 2: Relational Positioning] , where users spatially compose their inner landscape by arranging lotus leaves on the pond. From Role-playing to Dialogical Exchange. In the original therapy [ 65 ], clients sequentially role-play each stone ’s perspective to negotiate inner conicts. W e reinterpreted this role-playing step as real-time dialogue among agents representing dier ent selves as lotus leaves. This aimed to allow users to obser ve conversations among their selves from a third-person perspe ctive or intervene as a mediator , without the burden of direct role-playing. This approach became the core of [Stage 3: Dialogical Exchange] , enabling dynamic exchanges among conicting selves through dialogue . From Closing to T emporal Integration. 
In the original therapy [65], the session concludes with a one-time reflection on the final composition. We reframed this closing not as an ending but as a record of a "Temporal Self-portrait": a record that can accumulate over time, helping users understand their inner world as a flowing narrative rather than a fixed entity. This concept was materialized in [Stage 4: Reflective Snapshot], where users capture the current configuration of their pond to preserve the moment of introspection.

3.3 Conversational Orchestration and System Implementation

The four-stage scaffolding established above was implemented in InnerPond, an LLM-based multi-agent system that guides users through four stages: (1) I-position Construction, (2) Relational Positioning, (3) Dialogical Exchange, and (4) Reflective Snapshot. While presented sequentially, these stages can be navigated freely, allowing for non-linear self-exploration. We situated this implementation within the context of career decision-making: specifically, situations where users face a choice between two career paths. Such decisions naturally evoke inner dialogue as multiple perspectives and value conflicts come into play [96]. Career decisions provide concrete scenarios in which diverse I-positions are activated, interact, and evolve. While our study focuses on this context, the system's architecture can extend to other situations involving inner multiplicity, such as work–life balance or relationship conflicts [77]. In the following sections, we detail how each stage translates into concrete interface features and underlying technical mechanisms, and describe the overall technical configurations.

3.3.1 Stage 1: I-position Construction. This stage centers on generating and enriching I-positions through a pipeline that combines LLM analysis with user input (Figure 4).
To extract diverse selves relevant to users choosing between two career paths, we carefully designed a knowledge base structure using the SPeCtrum framework [67]. This knowledge includes demographics, personality traits (Big Five Inventory–2 Short Form (BFI-2-S) [109]), work values (Super's Work Values Inventory (SWVI) [98]), self-identified strengths and weaknesses, and the two career options under consideration. Quantitative scale data are converted into natural language summaries using Chain of Density (CoD) prompting [102] for effective LLM understanding (see Appendix A.1 for the complete knowledge structure). Based on this knowledge structure, we prompted the LLM to extract approximately ten distinct inner voices (I-positions), ensuring no overlap among them. We ensured they were distributed evenly across three categories: (1) those common to both career paths, (2) those specific to one career path, and (3) those specific to the other.

Figure 4: I-position extraction and story enrichment pipeline: user knowledge is transformed into distinct I-positions, iteratively refined through scaffolding questions and user responses.

Figure 5: The interface for [Stage 1: I-position Construction]: (a) I-positions visualized as lotus leaves on the main pond view, with a profile modal showing name, viewpoint, and narrative; (b) "Story Enrichment" modal with scaffolding questions to refine the narrative; (c) "1:1 Dialogue" modal for direct conversation with a leaf agent.

Figure 6: The interface for [Stage 2: Relational Positioning]: Users can freely customize their inner landscape by (a) adjusting leaf size, (b) repositioning leaves via drag-and-drop, and (c) selecting colors through the dewdrop-shaped button.
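As a rough illustration of this extraction step, the sketch below shows how the knowledge base could be assembled into an extraction prompt and how an even three-way split of roughly ten I-positions might be computed. All names, field choices, and prompt wording here are our own assumptions for illustration, not the authors' actual implementation or prompts (those appear in the paper's appendices).

```python
from dataclasses import dataclass

# Hypothetical SPeCtrum-style knowledge base; field names are assumptions.
@dataclass
class UserKnowledge:
    demographics: str
    bfi2s_summary: str        # BFI-2-S scores summarized in natural language (CoD)
    swvi_summary: str         # Super's Work Values Inventory summary (CoD)
    strengths_weaknesses: str
    career_a: str
    career_b: str

def build_extraction_prompt(k: UserKnowledge, n_positions: int = 10) -> str:
    """Assemble a prompt asking the LLM for ~n distinct, non-overlapping
    I-positions, spread evenly across the paper's three categories."""
    return (
        "Given this person:\n"
        f"- Demographics: {k.demographics}\n"
        f"- Personality: {k.bfi2s_summary}\n"
        f"- Work values: {k.swvi_summary}\n"
        f"- Strengths/weaknesses: {k.strengths_weaknesses}\n"
        f"- Career options: (A) {k.career_a} vs. (B) {k.career_b}\n\n"
        f"Extract about {n_positions} distinct, non-overlapping inner voices "
        "(I-positions), distributed evenly across three categories: "
        "(1) common to both career paths, (2) specific to option A, "
        "(3) specific to option B."
    )

def split_counts(n_positions: int) -> tuple:
    """Even three-way split of the requested number of I-positions."""
    base, rem = divmod(n_positions, 3)
    return (base + (rem > 0), base + (rem > 1), base)
```

For ten requested voices, `split_counts(10)` yields a 4/3/3 distribution across the three categories; the actual per-category balance in InnerPond is handled inside the LLM prompt rather than by code like this.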
To represent each I-position as an authentic inner voice, we constructed a profile for each, comprising a name in the “Myself, ...” format (e.g., [Myself, Seeking Stability]), a core viewpoint capturing its unique voice, and a first-person narrative revealing underlying motivations (see Appendix A.2 for the full prompt).

Based on these profiles, each I-position is visualized as an interactive leaf agent—a named lotus leaf on the pond that embodies a distinct inner voice (Figure 5-(a)). Users can hover over a leaf to reveal its core viewpoint or click to open a modal with the full profile. To encourage equal attention to all voices, all leaves initially appear in the same gray color and size.

To prevent users from passively accepting AI-generated I-positions, we introduced a co-creation process that allows users to refine and personalize them. The “Story Enrichment” feature generates three scaffolding questions by identifying gaps in the current narrative (see Appendix B.1 for the full prompt). As users answer these questions, the leaf agents are enriched with the user’s own voice, reflecting their specific context rather than a generic description (Figure 5-(b)). Users can also modify I-positions through direct editing, or add and delete leaves. Finally, users can engage in “1:1 Dialogue” with each leaf agent (Figure 5-(c)). This dialogue allows users to question, challenge, or explore an I-position’s motivations without interference from other voices. Each leaf agent maintains its core identity to express consistent perspectives even when challenged (see Appendix B.2 for the full prompt).

3.3.2 Stage 2: Relational Positioning. This stage focuses on externalizing the relationships among I-positions, allowing users to spatially compose their inner landscape. Once users have explored and enriched their I-positions, they can begin to arrange them spatially to express relationships among their inner voices (Figure 6).
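Concretely, the per-leaf arrangement state can be pictured as a small record of position, size, and color, with all leaves starting identical (as in Stage 1’s uniform gray). This is a minimal sketch under stated assumptions; the field names, default values, and helper names are illustrative, not the actual InnerPond schema.

```typescript
// Hypothetical per-leaf visual state for the spatial arrangement stage.
interface LeafVisualState {
  x: number;     // position in the pond (set via drag-and-drop)
  y: number;
  scale: number; // relative size, often used to express importance
  color: string; // color chosen via the dewdrop-shaped button
}

// All leaves start identical (same gray color and size), so no voice
// draws more attention than another before the user intervenes.
function initialLeafState(): LeafVisualState {
  return { x: 0, y: 0, scale: 1, color: "#9e9e9e" };
}

// Updates are plain merges, so users can fluidly reshape the landscape
// one attribute at a time.
function updateLeaf(state: LeafVisualState, patch: Partial<LeafVisualState>): LeafVisualState {
  return { ...state, ...patch };
}
```

Because each attribute can be changed independently and repeatedly, the state stays cheap to mutate, which matches the paper’s contrast with physical stones.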
Users can freely adjust the position, size, and color of each leaf. Unlike physical stones, these attributes can be easily changed, allowing users to fluidly explore and reshape their inner landscape in personally meaningful ways.

3.3.3 Stage 3: Dialogical Exchange. This stage aims to facilitate dynamic dialogue among I-positions, allowing users to observe or mediate conversations between their inner voices. This multi-agent conversation begins when users select two leaf agents for dialogue (Figure 7-Left). The LLM then analyzes their relationship and generates three tailored discussion topics based on their dynamics (see Appendix C.1 for the full prompt):

• Conflict: Questions to explore value clashes, seek compromise, and navigate dilemmas.
• Complementary: Questions on how distinct aspects can work together.
• Unrelated: Questions to explore the diversity and unique motivations of each self.

After a user selects a topic, the group conversation begins and is coordinated by an orchestrator module implemented through prompt engineering rather than a separate multi-agent framework. The orchestrator functions as a control prompt that tracks conversational context and interaction state, managing turn-taking among the two leaf agents and the user to prevent any single agent from dominating. For example, when a direct question is posed, the addressed agent is prompted to respond first; when a challenging argument arises, the orchestrator issues a targeted prompt inviting the opposing agent to refute it.

Users can engage in the conversation through two modes (Figure 7-Right). In observation mode, activated via the Skip feature, the system injects an intervention message—“Do not repeat viewpoints; engage more deeply with each other’s perspectives”—to encourage sustained dialogue between agents.
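The orchestrator’s turn-taking rules and the Skip-mode intervention described above can be approximated in plain code. In InnerPond this logic is encoded in a control prompt rather than application code, so the following is an assumed reconstruction for illustration only; the speaker labels, state fields, and function names are hypothetical.

```typescript
// Assumed reconstruction of the control prompt's turn-taking rules.
type Agent = "agentA" | "agentB";

interface TurnState {
  lastSpeaker: Agent | "user";
  addressedAgent?: Agent; // set when a direct question names one agent
  isChallenge: boolean;   // the last message argued against the other agent
  userSkipped: boolean;   // observation mode: the user pressed Skip
}

interface NextTurn {
  speaker: Agent;
  systemNote?: string;    // intervention message injected into the next turn
}

const INTERVENTION =
  "Do not repeat viewpoints; engage more deeply with each other's perspectives.";

function nextTurn(state: TurnState): NextTurn {
  // When the user skips, inject the intervention to sustain agent-to-agent dialogue.
  const systemNote = state.userSkipped ? INTERVENTION : undefined;
  // Rule 1: a directly addressed agent responds first.
  if (state.addressedAgent) return { speaker: state.addressedAgent, systemNote };
  // Rule 2: after a user message, either agent may pick it up (arbitrary choice here).
  if (state.lastSpeaker === "user") return { speaker: "agentA", systemNote };
  // Rule 3: a challenge invites the opposing agent to refute; ordinary turns
  // alternate anyway, so the floor always passes and neither agent dominates.
  return { speaker: state.lastSpeaker === "agentA" ? "agentB" : "agentA", systemNote };
}
```

Expressing these rules as a prompt rather than code trades determinism for flexibility: the LLM can weigh conversational context that a hand-written state machine like this one cannot.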
In mediation mode, users intervene directly through their own input; the agents then respond while maintaining their core identities, allowing users to guide the direction of the dialogue.

Figure 7: The interface for [Stage 3: Dialogical Exchange]: (Left) After selecting two leaf agents, the system suggests three discussion topics based on their relationship; (Right) the group chat where users can participate via “Send” or observe via “Skip.”

3.3.4 Stage 4: Reflective Snapshot. This stage intends to allow users to capture a snapshot of their current inner landscape. By clicking the save pond button, users can capture their current pond configuration—including leaf arrangement and visual attributes—as a timestamped image file (e.g., “{user}’s InnerPond_{timestamp}”). Users can revisit past inner landscapes at any time, helping them view the self not as fixed but as an evolving narrative.

3.3.5 Technical Configuration. InnerPond was implemented as a web-based application using TypeScript [76] and the Next.js framework [117]. The system consisted of a frontend for user interactions and a backend managing user data and LLM APIs. All interaction data was stored in MongoDB [81], with access restricted to the research team only. For the generative pipeline, we utilized Anthropic’s Claude-3.5-Sonnet, selected for its strong performance in Korean among models available at the time of the study [52, 59].

The system’s generative functionality is organized into three LLM-driven pipelines, each implemented through prompt engineering rather than a dedicated multi-agent framework. First, I-position Extraction generates initial, personalized I-positions from pre-survey data. Second, Single-Agent Interactions support story enrichment and one-to-one dialogue between the user and an individual leaf agent.
Third, Multi-Agent Orchestration coordinates dialogue among multiple leaf agents by using a control prompt that manages turn-taking and interaction flow based on conversational context. Selected prompts are included in Appendices A–C; full prompts are available at https://github.com/syou-b/innerpond.

4 Methods

To explore how people experience and make sense of engaging with their multiple selves (RQ2), we conducted a qualitative exploration with participants actively deliberating between two career paths, using InnerPond as a research probe for enabling inter-self dialogue.

4.1 Participants

We recruited 17 participants (7 male, 10 female; average age = 27.18, SD = 4.93, Min = 21, Max = 37) who were actively deliberating between two career directions, providing a naturally high-stakes context rich in internal conflict and self-negotiation. Recruited via university communities, participants were either (a) undergraduate or graduate students expected to graduate within three years, or (b) recent graduates currently exploring career opportunities while unaffiliated with any educational institution or workplace.

4.2 Study Procedure

The three-week study involved three sessions: (1) pre-survey, (2) in-person session, and (3) follow-up interview. All research materials and procedures were approved by the Institutional Review Board (IRB) of the hosting university. The entire study, from materials and sessions to data analysis, was conducted in Korean. For the analysis and writing process, participants’ quotes were translated into English by bilingual researchers on our research team.

4.2.1 Pre-survey. To construct knowledge of participants’ multiple selves relevant to their career deliberations, participants completed an online pre-survey seven days prior to the in-person session.
The survey consisted of two main parts:

• Personal Information Survey (5 min): Participants provided demographics and completed personality assessments using the BFI-2-S [109] and a work values assessment through the SWVI [98]. Participants were also asked to list three personal strengths and weaknesses.
• Career Context Survey (5 min): Participants briefly described the two career paths under consideration through short, open-ended responses. For each path, they detailed: (1) the background and rationale for considering it, (2) their expectations and concerns, and (3) any preparation processes undertaken.

This information was used to generate agents representing participants’ I-positions prior to the in-person session (see Appendix A.1 for an example).

4.2.2 In-person Session. We conducted the in-person session in a quiet room with two researchers for approximately 120 minutes. To foster a relaxed and introspective atmosphere, the lab space was set with meditative music and dim lighting. During the session, one researcher guided participants and conducted interviews (facilitator), while the other observed participant behavior and system interactions in real time (note-taker), documenting notable moments and emerging patterns in field notes.

As the session began, participants provided informed consent after being briefed on the study’s objectives and procedures. To allow the observing researcher to monitor interactions without disrupting the participant, participants shared their screens via Zoom. The seven-stage session—comprising an initial interview, onboarding, four InnerPond activities each followed by a post-activity interview, and an exit interview—was designed to investigate participants’ evolving experiences.
In particular, post-activity interviews explored participants’ in-the-moment experiences by probing specific decisions and interaction patterns observed during each activity. To facilitate this, the facilitator used the documented field notes as prompts to help participants recall their process, jointly revisiting moments of interest to understand the reasoning behind their actions. All interviews were conducted using a semi-structured interview guide, with probing and follow-up questions informed by participants’ responses and observed interaction patterns (see Appendix D).

(1) Initial Interview (5 min): We began with a brief interview to understand participants’ current career concerns and how they typically engaged in inner dialogue.
(2) Onboarding (5 min): We introduced InnerPond, explaining that their inner voices—extracted from the pre-survey—were represented as ‘lotus leaves.’ We presented an overview of the four activity stages, along with guidelines for the main features of each stage. Participants were informed that interviews would follow each activity and that researchers would be available to assist them.
(3) I-position Construction (25 min activity + 15 min interview): After onboarding, participants were asked to examine the extracted leaves in their InnerPond and articulate them. The post-activity interview explored the authenticity of the extracted I-positions and their motivations for editing, adding, or removing leaves (e.g., “Which leaf did you feel represented you the best, and why?”, “What made you delete the [Myself, ...] leaf?”).
(4) Relational Positioning (5 min activity + 10 min interview): Participants were instructed to visually express their inner landscape by adjusting the size, color, and position of leaves in the pond, while thinking aloud to verbalize their thoughts.
The post-activity interview explored the rationale behind their pond structure and design, probing specific choices observed during the activity (e.g., “When arranging the leaves, what did you consider and why did you choose their positions?”, “What made you not resize the lotus leaves at all?”).
(5) Dialogical Exchange (15 min activity + 10 min interview): Participants were instructed to create and engage in group conversations with different leaf combinations and topics. The post-activity interview focused on their group conversation experience, their criteria for selecting leaves, and the suitability of LLM-suggested topics, as well as observed conversation patterns (e.g., “Why did you focus most of your time on conversations in this group chat?”).
(6) Reflective Snapshot (10 min activity + 5 min interview): In the final activity, participants freely explored previous stages at their own pace, revisiting and engaging with their I-positions as they wished. Afterward, participants were asked to save their InnerPond landscape using the ‘save pond’ feature. The brief post-activity interview explored what participants chose to revisit (e.g., “What did you mainly do during the free exploration time?”) and their experience of capturing a snapshot (“How did you feel when you saved your pond at the end?”).
(7) Exit Interview (10 min): After completing all stages, an exit interview was conducted about the overall InnerPond experience. We inquired about participants’ most memorable moments, the perceived naturalness of the dialogue, and any new insights they gained. We also explored how the InnerPond experience impacted their self-understanding.

Upon completion of the session, participants were compensated with 40,000 Korean Won (equivalent to 28 USD).

4.2.3 Follow-up Interview.
Two weeks after the in-person session, we conducted a follow-up phone interview to explore how participants reflected on their experiences with InnerPond following the session. Although participation in the follow-up interview was not mandatory, all but one of the 17 participants chose to take part. Following the semi-structured approach, the interview focused on whether the insights from the session persisted or evolved, whether participants noticed new patterns in how they engaged with their inner voices in daily life, and any shifts in their approach to career deliberation.

4.3 Data Analysis

4.3.1 Data Preparation. We documented participant behavior and experiences through multi-faceted data: (1) system logs and (2) qualitative data from in-person sessions and follow-up interviews. System logs provided detailed behavioral traces across all stages, including I-position profiles, their modifications, and chat messages with individual leaves (Stage 1), spatial configurations of the pond (Stage 2), and group conversation logs with leaf combinations and discussion topics (Stage 3) (see Appendix E for a detailed data description). Qualitative data included field notes documenting participant behavior and screenshots during activities, think-aloud transcripts from Stage 2, and all interview transcripts. All interviews were transcribed by the two researchers who conducted the study. We synthesized all data into a master document for each participant, integrating system logs and qualitative data. Two researchers collaboratively constructed these documents and visually organized the materials on FigJam [111] to support systematic reference throughout the analysis.

4.3.2 Thematic Analysis. We employed thematic analysis [17] to systematically analyze the collected data.
Our approach combined deductive and inductive elements: we used the four stages of InnerPond as an overarching deductive framework to segment participants’ experiences and behaviors. Within each stage, we performed open coding and inductively developed themes from the data. The analysis primarily operated at the semantic level, focusing on participants’ explicit accounts of their experiences, while also attending to latent meanings when interpreting the underlying motivations and significance that participants attributed to their interactions. The entire research team conducted this analysis through weekly meetings, continuously cross-referencing insightful excerpts from interview transcripts with corresponding system logs to develop richer interpretations. Through an iterative process, we grouped the codes into higher-level themes and further developed them into more specific sub-themes to capture emergent patterns. As a result, we identified key themes across the four stages: the externalization and personalization of inner voices (Stage 1), relational positioning through spatial composition (Stage 2), the orchestration of dialogue among multiple voices (Stage 3), and the preservation of temporal self-portraits (Stage 4). Additionally, through follow-up interviews, we observed an enduring theme of reflective integration into everyday life and career decisions.

5 Findings

This section reports how InnerPond, used as a research probe, shaped the ways participants engaged with their plural selves as they moved through its four designed stages. Sections 5.1–5.4 present findings from the in-person session, organized by each stage, while Section 5.5 reports insights from follow-up interviews conducted two weeks later.
Rather than focusing on the stages themselves, we highlight the kinds of experiences that emerged within them—how participants articulated inner voices (Stage 1), made their relationships visible (Stage 2), encountered dialogue among these voices (Stage 3), and captured temporal self-portraits (Stage 4). Lastly, we describe how these experiences continued beyond the sessions and informed participants’ everyday reflections and career decisions.

5.1 Co-Creating I-positions Through Externalization and Reconstruction

In Stage 1, participants co-created personalized I-positions from AI-generated fragments of self. Overall, externalizing their inner world into discrete ‘lotus leaves’ allowed them to step back and view their psychological states from a distance, creating space for a deeper recognition of their multi-layered identity and the surfacing of overlooked aspects of themselves.

5.1.1 Initial Perceptions of Externalized Selves. Participants’ first encounters with the AI-generated leaves began with an immediate scan of how well each one aligned with their lived experience—whether it felt resonant, subtly off, or misaligned. Overall, most participants accepted the majority of their leaves, with over half (9/17) retaining the entire set, while others removed only one or two (M = 1.17). Many found the externalization itself valuable for introspection. P1 described how seeing her inner world visualized prompted an objective re-evaluation, while P8 found that having her unarticulated thoughts externalized allowed her to explore new self-perceptions, prompting reflections like, “Oh, I can think this way.” Many also described a sense of recognition as they scanned the leaves, noting how familiar worries or aspirations were suddenly given form. P7, for instance, was surprised to see her anxieties concretely represented: “Except for deleting one leaf, every single thing I always worry about was specifically targeted and realized.
It felt like [they are] my actual inner voices.” In particular, when the leaves reflected specific, personal experiences rather than generic concerns, participants received them with a pronounced sense of authenticity. For example, P11’s [Myself, Rewarded by Admin Work] resonated deeply because the narrative reflected her conviction that “supporting the organization from behind is more meaningful.”

Others noted subtle misinterpretations rather than outright inaccuracy. P5 felt [Myself, Longing for the Prestige of Professorship] overstated her motivations: “I just thought of it as an honorable job. But this makes it sound like I’m thirsting for social recognition, and that’s not quite right.” Her reaction illustrated how even minor discrepancies in framing could shift participants’ sense of ownership over a given I-position.

Overly misaligned leaves sometimes prompted resistance, causing some participants to remove them immediately. P7, for example, was decisive in removing [Myself, Anxious about Success]: “This is not me.” Yet more often, participants chose to retain discordant leaves as prompts for reflection. P3 was initially skeptical of [Myself, Drained by Socializing] but later reframed her stance and chose to keep it: “At first, I didn’t recognize it, but seeing it listed with the same weight [as other leaves] made me consider it more.” Rather than dismissing the unfamiliar voice, she found that its presence surfaced neglected aspects of her career thinking—demonstrating how even imperfect extractions could also serve as catalysts for deeper self-examination.

5.1.2 Elaborating and Personalizing the Extracted Selves. After moving past their initial impressions, participants entered a more active phase of working with the leaves—clarifying, enriching, and reshaping the AI-generated narratives so they better reflected the nuances of their own experiences.
In doing so, all participants engaged in story enrichment by answering AI-scaffolded questions (M = 11.82 times) to refine abstract or generic voices into situated, meaningful selves. Through this process, participants personalized the abstract leaves by grounding them in their own contexts—reshaping initially unconvincing or generic narratives to better align with their values and perspectives. For example, P3 developed the vague narrative of [Myself, Wanting to Contribute] into a clear professional aspiration. Answering tailored questions like “What specific social problems do you want to solve?” helped her move from an abstract desire to a concrete consideration of which issues she might want to address as a lawyer.

Participants also took direct control by modifying narratives that felt insufficient or misaligned. P6, for instance, edited the LLM’s phrasing to better reflect her perspective, replacing “economic stability” with “economic abundance.” Furthermore, when participants felt important aspects of their inner voice were missing, they added new leaves to fill these gaps. In total, 12 participants added an average of 1.67 new leaves. P2, for instance, added [Myself, Valuing Office Environment], explaining how his happiness and efficiency depended on tangible workplace factors, like a comfortable atmosphere and good food. P4 went further, intentionally creating [Myself, Believing in Myself] to counterbalance the negative leaf [Myself, Anxious about Being Ordinary], as a move to prevent his inner landscape from being dominated by negative representations.

Figure 8: Example of a one-on-one dialogue with [Myself, Diving Deep] (P15).

Through these processes of enrichment, modification, and addition, participants transformed the AI-generated leaves into personalized representations.
The resulting leaves (M = 11.06) were not perceived as static data summaries but as distinct entities, each embodying a different aspect that constituted the self: “beings similar to me, but with a distinct character—like in the movie ‘Inside Out’” (P15). In that film, emotions such as Joy and Sadness are portrayed as independent characters living inside a person’s mind. This comparison suggests that the co-creation process gave the I-positions a sense of independent identity, making them feel like genuine inner voices rather than mere labels. Participants also valued the process for surfacing aspects of themselves they typically overlooked—even when the output was not perfectly accurate. P3 reflected: “With my single consciousness, I can’t pay attention to everything at once. This process helped me extract parts of myself I usually miss. So even if some leaves felt off, it was still meaningful.” These dynamics illustrate how co-creation shifted the system from simply classifying users to enabling a shared ownership of their plural selves.

5.1.3 Conversing with Individual Selves. With each I-position (leaf) elaborated and structured, the process shifted from organization to directly engaging these selves in one-on-one dialogue. Rather than conversing with all leaves, participants gravitated toward about half of their selves (M = 5.29), focusing on those where they felt tension, curiosity, or a need for clarity. These conversations took varied forms, shaped by participants’ intentions and their relationship with each self.

Some participants used the one-on-one space for open-ended exploration, treating the self as a partner for thinking through complex issues. P15, for instance, held a collaborative conversation with [Myself, Diving Deep] to shape his research direction, asking questions like, “How can I find an interesting topic?” and “How can I sustain my motivation?” (Figure 8) to facilitate open-ended reflection on the meaning of his research.
Rather than seeking definitive answers, he used the dialogue to think aloud with a part of himself he wanted to understand better.

Others approached conversations more strategically. P7, for example, posed the same fundamental questions—like “Should I choose what I love or what I’m good at?”—to a cluster of related selves, including [Myself, Fearing Uncertainty], [Myself, Pursuing Success], and [Myself, Wanting Recognition]. By collecting responses from multiple I-positions, she compared perspectives and analyzed her career concerns from different angles.

The course of dialogue also depended on how familiar participants were with each self’s voice—and how willing they were to acknowledge it. With highly aligned selves, conversations were often used to verify authenticity, sometimes even through adversarial testing. P13, for example, intentionally challenged [Myself, Craving Quick Success] by asking whether he should simply lower his expectations. When the self countered that doing so would only lead to future regret, he found he couldn’t refute it: “I tried being uncooperative in the conversation, but the voice was perfectly aligned with what’s inside me.” By contrast, conversations with less familiar or initially rejected selves often unfolded as a process of gradual acceptance. P17 initially denied [Myself, Lacking Expertise], which voiced his professional insecurities. Yet through continued dialogue, he came to recognize a suppressed anxiety: “At first I kept denying it. But as we talked, I thought it could be a part of me. Maybe there has been a voice of anxiety in my heart, but I didn’t know it because I didn’t have the time or energy to think about it.”

As dialogues deepened, many participants also varied their conversational style to match the persona of each leaf.
P7 discovered she was acting as a mediator, emphasizing possibilities when talking to selves expressing uncertainty, and highlighting risks when engaging with overly optimistic selves. For example, she encouraged [Myself, Anxious about Success] by reminding it that abilities can be developed, while cautioning [Myself, Passionate for Challenges] that excitement alone might not sustain long-term commitment.

This adaptive approach was particularly effective in conversations with inherently negative selves, as it allowed participants to objectively view their weaknesses. P5 found it particularly “fun and helpful to talk with the leaves that contained my flaws,” such as [Myself, Lacking Perseverance]. She explained that hearing her weaknesses spoken by another entity created emotional distance, making them easier to examine without self-judgment. P12 echoed this sentiment: through conversations, he was compelled to confront contradictions he would normally dismiss, describing it as “a powerful moment of reflection that came from seeing [his] weaknesses objectively.”

Figure 9: Examples of holistic inner landscapes composed by participants. (a) P4 created a scene of self-compassion, describing one self whispering encouragement to another. (b) P6 arranged the leaves to express gradual personal growth, likened to a musical crescendo.

However, a few participants reported feeling fatigued when a negative self became too entrenched in its character. P11 felt drained after a conversation with [Myself, Tense in Relationships], noting that “if a self is too negative, it’s tiring, even though it’s me.
” P6 also described [Myself, Hesitant to Decide] as caught in an endless loop of anxiety: “Because it kept circling around anxiety and hesitation, it didn’t feel like it would offer a solution.” These experiences suggest that while externalizing negative aspects could facilitate constructive self-understanding, overly repetitive portrayals may become counterproductive.

While most participants actively engaged in one-on-one dialogues and found them interesting and valuable, a few who were not accustomed to introspection found it difficult to initiate conversations with their selves. P4 explained: “I’m not the type to have thought deeply about myself. When I get a question, I can think about it, but when I tried to start a conversation, the questions didn’t easily come to me.” His account suggests that open-ended dialogical engagement presupposed a level of self-directed initiative that not all participants felt equipped to sustain.

5.2 Composing Relational Landscapes Through Visual Arrangement

In Stage 2, participants visually composed their ‘inner landscape’ by arranging individual selves as lotus leaves. Building on the clearer understanding of each leaf developed in Stage 1, they moved beyond considering leaves in isolation and began attending to how different aspects of themselves related, contrasted, or clustered together. Through iterative spatial adjustments, participants articulated a more integrated sense of self, expressing relational meanings through variations in the size, color, and position of each lotus leaf.

In particular, many participants adjusted the size of leaves to express the relative importance of different selves. P13 explained: “I made [Myself, Finding Work I Love] the biggest. It felt like many of my other thoughts stemmed from this one, and it’s a thought I’ve had for a very long time...” By contrast, a few deliberately avoided assigning relative sizes.
P11, for instance, chose to keep all leaves equal, explaining that “If I make one bigger than the others, it feels like I’m treating my worries unequally. I wanted to see them all on the same level.”

Participants also adjusted color and brightness to convey emotional qualities or to organize their selves thematically. P16 associated hue directly with emotional tone, using warm colors for positive aspects and cooler shades for negative or uncertain ones. P7 adjusted brightness to indicate alignment with her current values—brighter leaves reflected parts of herself she embraced, while dimmer ones stood for aspects she felt less integrated with. P3, meanwhile, assigned colors to thematic domains, explaining: “I’ve categorized my dreams from long ago in blue, financial matters in yellow, and important aspects in red.” Others developed more metaphorical interpretations of color. P4 arranged his leaves into what he called a “seasonal clock,” using shifts in color to suggest temporal change—spring-like greens for emerging aspects of himself, autumnal hues for those he felt were fading (Figure 9-(a)). P5 used color to symbolize vitality, explaining that she gave “dead and dying” shades to the selves tied to uncertainty or fatigue, while assigning bright green to the thoughts that “made [her] feel alive and motivated.”

Participants also iteratively positioned the leaves in various ways to convey both the relationships among selves and their relative depth. Some used spatial stacking to express containment or hierarchy: P5 stacked certain leaves on top of others to signal that one self encompassed another. Others used spatial links to represent influence or sequence. P8, for instance, connected leaves in a chain to illustrate how one concern led to another.
Still others highlighted mediation: P3 placed one leaf between two conflicting ones, describing it as a means of bridging the tension between them. Some went further, moving beyond these attributes to imbue the entire composition with holistic meaning. P6, for example, arranged her leaves to represent her ongoing process of growth (Figure 9-(b)), explaining, “The overall composition is like a crescendo in music. For me, it expressed my gradual growth.” Similarly, P4 described his landscape as a scene of self-compassion, where one self whispered encouragement to another (Figure 9-(a)). He noted, “It looks like the one in the back is whispering words of comfort to the one in front, as if a bigger self were encouraging a smaller one.”

The high degree of freedom in visual composition supported the introspection process by giving participants greater control over how their thoughts were organized. Many reported feeling empowered to actively manipulate the space to clarify what mattered to them. P11 contrasted this with more passive forms of self-expression: “Unlike a diary that only expresses emotions, this interface helped me work with my thoughts by rearranging and aligning them. So it felt closer to finding a solution.” These accounts illustrate how spatial composition afforded a meta-positional perspective, enabling participants to see their plural selves as a connected landscape rather than isolated fragments.

5.3 Orchestrating Multi-Agent Dialogue as Meta-Cognitive Moderator

In Stage 3, participants engaged in dialogical exchanges among their inner voices, exploring how different selves could converse and negotiate with one another. As they observed disagreements emerge and intervened as mediators when needed, they encountered new perspectives on personal dilemmas and became aware of how different voices enacted authority, hesitation, or reconciliation within the self.

5.3.1 Curating Dialogical Encounters Among Selves.
As participants began preparing their inner voices for dialogue, they intentionally combined different selves with distinct intentions, forming early ideas about how these voices should meet and interact. One common approach was to bring opposing selves into dialogue to examine internal dilemmas. P3, for instance, paired [Myself, Yearning for Creative Freedom] with [Myself, Desiring Financial Abundance] to confront the tension between passion and security (Figure 10). P7 emphasized the purpose of this approach, explaining: “My main idea was to put conflicting selves in dialogue. It was more useful for weighing my options.”

Some participants combined similar or complementary selves to surface subtle distinctions that were not immediately apparent. P8, for instance, paired [Myself, Pursuing a Stable Life] and [Myself, Sensitive to Others’ Evaluations]—two selves she thought were similar. She reflected: “I didn’t expect much at first, but I found unexpected answers here. These two actually explained myself best, and I realized they were shaping my recent career thinking together.” Through such pairings, participants recognized how aligned voices could mutually reinforce and affirm one another.

Figure 10: Example of a group conversation where P3 pairs [Myself, Yearning for Creative Freedom] and [Myself, Desiring Financial Abundance].

Some participants also experimented with unrelated selves to probe for hidden connections, driven by curiosity about unexpected insights. For example, P13 paired [Myself, Worried About Being Unplanned] with [Myself, Wanting to Innovate with Robots]: “I chose these because they seemed unrelated, but seeing their interaction made me realize my disorganization could affect my career path—whether as an obstacle or maybe even as a source of fresh ideas.” Through this pairing, he recognized that voices he had previously treated as separate were, in practice, working in tandem.
Building on these diverse intentions, participants then shifted their attention to shaping the dialogue itself. Having chosen which selves should meet, they selected one of the AI-suggested topics that most closely captured what they wanted the encounter to reveal or help them think through. Often, these aligned with the questions they already had in mind. As P12 put it: “When I chose these two leaves, certain topics naturally came to mind, and there was usually something similar among the suggestions.”

Some selected topics to gain practical guidance on ongoing dilemmas. P10, weighing a PhD path against corporate opportunities, asked: “What are the minimum conditions to continue researching despite reduced income?” for a dialogue between [Myself, Anticipating a PhD] and [Myself, Worried About Finances]. She explained, “I wanted to find clues for my current concerns.” For her, the topic served as a way to structure an internal debate that had previously been difficult to articulate. Likewise, P5, a doctoral student considering a professorship, chose: “How can I reduce anxiety while moving forward on this career path?” for her conversation between [Myself, Fearing Tenure Uncertainty] and [Myself, Valuing Autonomy].

Others used the topic suggestions to open up new perspectives rather than resolve immediate concerns. P2, for example, brought together [Myself, Energized by People] and [Myself, Easily Swayed by Emotions] to explore: “What criteria distinguish relationships that energize me from those that drain me?” He noted that he had never thought about this distinction before, and the suggested topic prompted a new line of self-inquiry.

Taken together, this process of pairing selves and selecting topics functioned as an introspective practice in its own right.
By choosing which voices to bring into dialogue, participants began to surface the tensions and questions that felt most important to them. Also, by selecting a specific topic, they worked to clarify what they hoped to understand or resolve.

5.3.2 Engaging the Dialogue as Observer and Mediator. Once a topic was chosen, participants engaged with the dialogue in varied ways, moving fluidly between observing and stepping in to mediate. Many began by observing from a distance, often by clicking the ‘Skip’ button to let the selves converse on their own. This observational stance was not merely passive; rather, it offered a vantage point from which participants could see their thought patterns externalized with a level of clarity that solitary introspection rarely afforded.

Participants used varied metaphors to describe this experience. P16 described it as watching her mental “chain of thought” unfold step by step. P8 called it an “expanded version” of solitary thinking—as if her internal monologue had been extended and rendered visible. P17 likened it to a “proxy battle,” where conflicting ideas could clash without him being caught in the middle. Across these descriptions, participants agreed that the dialogue gave structure to what had previously felt scattered or circular. P1 reflected: “Watching my mental conflicts externalized, I realized—this is how I’ve been thinking.” The dialogue served as a reflective mirror, revealing patterns of internal conflict she had not recognized in concrete terms.

As the conversation progressed, many participants transitioned into a more active role, stepping in to mediate when the dialogue reached an impasse or drifted into repetition. They treated such intervention not as overriding the conversation but as facilitating negotiation among their inner voices. Some guided opposing selves toward common ground, asking questions like “What do you agree on?” (P15).
Others pushed the dialogue beyond false binaries—P4, for instance, challenged a dichotomy by asking: “Economic freedom and meaningful contribution aren’t separate things, right?” However, not all participants found it equally easy to take such an active role. For example, P14, who expected more guidance from the AI, noted that maintaining the dialogue on his own was challenging.

Beyond these challenges, many participants described the multi-voice dialogue as a source of insights they would not have reached on their own. Seeing conflicting selves interact in one shared space revealed unexpected connections and points of integration. P6 described experiencing a sense of exhilaration “when AI touches on a point I hadn’t considered.” For P11, the conversation sparked an immediate realization: “I could pursue my values not just through employment, but by participating in policy contests.” The dialogue also helped participants adopt a more balanced view of their internal conflicts. P2 noted that, unlike solitary inner dialogue where one side tends to dominate, the multi-voice format allowed him to “see both opinions more evenly.” This balanced exposure also helped some clarify their true inclinations. P6 reflected, “Seeing both answers, I realized which one really appealed to me.”

5.4 Preserving Introspective Journeys Through Temporal Self-Portraits

In Stage 4, participants preserved their inner landscape by saving their pond as a temporal self-portrait. This final step translated the introspective process into a tangible artifact—a snapshot of how their inner voices were configured at that moment, which they could revisit and reflect upon over time. P5 remarked: “This is today’s me, and at another time it will look different. Like a diary, I can look back and remember how my mind was arranged then.” Participants also expected that viewing saved ponds would re-evoke the introspective process they had engaged in during creation.
P12 noted: “Even now, looking at it makes me remember what I was thinking—seeing the leaves makes me say, ‘Oh right!’”

For others, saving their pond provided a rare sense of accomplishment. Participants who usually lacked time for introspection felt satisfied to have a visible outcome of their efforts. P2 expressed: “I usually don’t have time to look at myself deeply, but visualizing the result felt good. It gave me a sense of efficacy from having examined myself.” This satisfaction was reinforced by the freedom to shape the inner landscape without being constrained by external standards. P4 emphasized: “It was good that there wasn’t a predetermined answer for creating the pond. If it had suggested a ‘desirable pond,’ it would have made things worse. Such freedom to create it in my own way was what helped most.”

Furthermore, some imagined using successive snapshots to trace how their inner landscape might evolve over time. P10 reflected: “It would be interesting to look back after accumulating [more snapshots]. You could notice things like, ‘Oh, I’ve been thinking about this consistently.’” These comments suggest that participants valued not only the immediate snapshot but also its potential as a longitudinal record of change, extending introspection into an ongoing narrative of the self.

5.5 Sustaining Introspective Reflections After the InnerPond Experience

In follow-up interviews conducted two weeks after the in-person session, many participants described the InnerPond experience as something that lingered—shaping how they noticed themselves, interpreted everyday experiences, and approached career decisions. Rather than fading after the activity, they mentioned that the insights and realizations from the session carried into their everyday thinking.

A recurring theme was a more integrated sense of self. Participants spoke of their inner voices not as scattered fragments but as parts of a larger whole.
P1 explained: “All these lotus leaves come together to form me as a person. I keep thinking that these small elements collectively make up who I am.” For some, this integrated view translated into a sustained attentiveness to specific inner voices, which continued to surface in everyday thinking rather than fading after the session. P1 found that certain I-positions—such as [Myself, Transcending Limitations] and [Myself, Having Many Worries]—continued to surface in her everyday thinking, keeping her attuned to both her aspirations and anxieties: “These selves don’t just disappear. I keep being aware of both their strengths and weaknesses.”

This ongoing awareness also made participants more willing to acknowledge aspects of themselves they had previously avoided. P8, for instance, realized through the session that she had been avoiding situations where others might judge her. This awareness persisted beyond the session: “I’d been avoiding others’ judgments out of fear, and now I want to face them more directly.” In some cases, this stance further extended into how participants perceived everyday situations. P16, who revisited her saved snapshot on her own, noticed that everyday moments now triggered questions she had never thought to ask—revealing blind spots in what she typically paid attention to.

Beyond internal awareness, some participants described how these reflections carried into their engagement with real-world situations. P4, for instance, found that the experience made his indecisiveness more explicit to himself, which in turn helped external feedback register with greater clarity: “I had vaguely sensed this was a problem, but seeing it laid out made it unmistakable. So when my professor later said the same thing, it really resonated, and I realized I needed to work on it.” These reflections also extended into career deliberation.
Participants emphasized a shift from chasing external conditions to asking whether a path genuinely aligned with their internal voices and values. P11 reflected: “When the job market is bad, I sometimes apply to jobs I actually don’t want, just out of anxiety. But after InnerPond I realized, ‘That’s why I kept struggling—it wasn’t what I wanted.’” Similarly, P13 explained that he had begun weighing career options differently: “Before, I only thought about making more money. Now I put my traits and thoughts together to see if a path really fits me.” P16 also described how the experience helped her reconnect with her initial motivations in her field, explaining, “Remembering what first drew me to the field and what makes it fulfilling helped me change how I rethink my [career] direction.” Together, these accounts suggest that the InnerPond experience continued to shape participants’ self-understanding—prompting them to notice themselves differently, reframe career choices, and carry reflective insights into everyday life.

6 Discussion

This study investigated how an AI-mediated approach grounded in Dialogical Self Theory (DST) can be translated into an interactive system that supports introspection by making the inherent multiplicity of the self experientially accessible. Through the design and evaluation of InnerPond, we examined both the design considerations for supporting dialogue among multiple selves and how users experience engaging with their plural selves through such a system. Participants interacted with their inner voices—visualized as lotus leaves—in increasingly layered ways: first articulating and refining individual I-positions, then composing relational landscapes, and finally orchestrating dialogues among them.
Our ndings illustrate how operationalizing DST’s dialogical view of the self in an AI- mediated form—externalizing multiple I-positions as distinct agents that users can obser ve, arrange, and converse with—can support introspective engagement in practice. Following DST , we use the term inter-self communication to refer to dialogical engagement among one’s o wn I-positions as instantiated in our system. Intrapersonal communication is usually framed as solitar y in- ner dialogue [ 85 , 118 , 119 ], while interp ersonal communication unfolds between separate individuals [ 7 , 18 , 53 ]. Inter-self com- munication does not introduce a new category of communication; instead, it serves as a descriptive lens for characterizing a particu- lar interactional conguration that becomes visible when DST is instantiated in an interactive system. It remains intrap ersonal in that the dialogue unfolds within a single individual, while adopt- ing an explicitly dialogical structure that makes inner multiplicity observable and interactive. In this sense, inter-self communication helps articulate how people engage with multiple facets of the self through dialogical interaction, without departing from or extend- ing beyond DST’s established theoretical framing. This p erspective suggests an opportunity for HCI to design introspection tools that acknowledge and support the multiplicity of the self—moving be- yond systems that assume a single inner position toward those that facilitate dialogue among multiple perspe ctives, helping users surface tensions and navigate trade-os in complex life decisions. Drawing on this p erspective, we discuss broader implications for designing AI-mediated introspection systems: how translating a dialogical view of the self into interactive form shapes users’ introspective e xperiences, what design considerations emerge from this translation, and what e xperiential tensions arise when users engage with externalized facets of themselves. 
6.1 From Unied to Dialogical Self: New Possibilities for Inner Dialogue Our study highlights the value of reconsidering how the self is conceptualized in HCI. Much prior work on user models [ 29 , 54 ] has adopted a view of a centralized, unied self [ 43 ]—a framing that has proven useful for many design contexts. Y et when pe ople face complex life decisions, they often experience themselves as pulled in multiple directions, negotiating among competing values and aspirations. Our ndings suggest that supporting the self as a “dynamic multiplicity” [ 41 , 45 ] within interactive systems can sup- port forms of introspective engagement that are dicult to surface when the self is modeled as singular , by making this multiplicity visible and actionable. This view r esonates with recent theorizing in personal informatics, where the self has been reconceptualized as dynamic and constructed through ongoing interaction with the world [ 97 ]. Our work extends this perspective by operationalizing the self not only as dynamic, but as inherently plural—composed of multiple voices that can b e externalized, arrange d, and set in dialogue. While our study focused on career de cision-making, the dialog- ical mechanism we examined reects a more general pattern of internal negotiation describe d in DST , and may extend to other domains where the self is pulle d in multiple directions—such as moral dilemmas, relationship conicts, or lifestyle changes. Mor e broadly , this suggests that AI systems can be designe d not to rede- ne introspection or decision-making, but to scaold the internal negotiations that precede complex human decisions, supporting InnerPond: Fostering Inter-Self Dialogue with a Multi-A gent Approach for Introspection CHI ’26, April 13–17, 2026, Barcelona, Spain users as the y explore tensions among competing p erspectives rather than converging prematurely on a single answ er . 
6.2 Designing for Dialogical Introspection

Translating DST’s conceptualization of the self into an interactive system posed distinct design challenges. Our design was guided by three goals: supporting coexistence rather than resolution among inner voices, ensuring equal attention for each voice, and fostering relational connectedness through a meta-positional perspective. Through our iterative design process grounded in these goals, we identified two considerations that may help articulate how dialogical introspection can be supported in practice.

First, we found that metaphors could implicitly define relationships among selves. When we experimented with different metaphors, each carried distinct interactional logics: a ‘debate arena’ implied zero-sum competition where one voice must prevail, while ‘digital stones’ emphasized individuality but obscured the connectedness among perspectives. We ultimately chose the ‘lotus pond’—distinct leaves sharing hidden roots conveyed both independence and interdependence, while the bird’s-eye view invited a meta-positional perspective.

This design exploration taught us that metaphor selection warrants careful consideration, as it could predispose users toward particular relational dynamics—competition, isolation, or integration—before any interaction begins. This aligns with DST’s view of the self as a “society of mind,” in which I-positions can engage in diverse dynamics—from conflicts to coalitions and cooperation [44]. For introspection support systems, this suggests attending to metaphors as a way of shaping how users perceive and explore relationships among inner voices, rather than treating them as neutral visual choices.

Second, because articulating one’s inner multiplicity can be demanding, we sought to balance system guidance with user agency. Without support, users may struggle to surface multiple aspects of the self from scratch or may default to familiar, dominant perspectives.
At the same time, overly directive AI behavior risks imposing externally generated interpretations that can displace users’ sense of ownership over their own inner narratives. To navigate this tension, we designed the system to offer AI-generated I-positions as provisional entry points rather than authoritative representations—distributed across different facets of the self to lower the barrier to self-articulation while explicitly inviting user judgment. Users could validate, refine, or discard these suggestions, ensuring that the resulting inner narrative remained authentically their own. Our findings show that meaningful introspective engagement often emerged precisely in this space of negotiation, where users actively questioned, revised, or resisted AI-initiated content rather than accepting it.

This resonates with DST’s therapeutic emphasis on surfacing positions that are “less dominant but vital”—voices that may otherwise remain inaccessible yet hold potential for a more integrated self-understanding [65]. For designers of introspection support tools, this points to scaffolding that helps users access less familiar aspects of themselves without collapsing the balance toward AI-led interpretation or unstructured self-reflection.

6.3 Engaging with Externalized Selves: Tensions and Trade-offs

While our metaphor and scaffolding choices guided the design of InnerPond, participants’ actual engagement with their externalized selves revealed a set of tensions that emerge when AI mediation meets users’ introspective agency. In practice, supporting dialogical introspection involved navigating trade-offs between structure and openness, guidance and authorship, and productive distance from the self versus emotional fatigue.
These tensions did not indicate design failures; rather, they surfaced as constitutive challenges of engaging with multiple selves through an AI-mediated system, and they offer insight into the experiential boundaries of dialogical introspection.

6.3.1 Between Accurate Reflection and Productive Misalignment. A central tension concerned the degree of alignment between AI-generated I-positions and users’ existing self-perceptions. Much prior work in agent design assumes that stronger alignment is inherently beneficial, fostering trust, rapport, and recognition between user and system [22, 89, 120]. In our study, however, alignment functioned less as a straightforward design objective and more as a trade-off with distinct introspective consequences. On one hand, participants often valued leaves that resonated with familiar concerns, finding validation in seeing their anxieties or aspirations concretely represented. On the other hand, high alignment sometimes stabilized tentative or limiting self-concepts, reinforcing existing interpretations rather than inviting reconsideration [70].

Meanwhile, instances of misalignment—where the AI mirrored the user in unexpected ways—were not always experienced as errors to be corrected. Some participants initially did not recognize certain I-positions, but chose to retain them anyway, later finding that these unfamiliar voices surfaced neglected aspects of their thinking. This pattern aligns with what prior work has termed “productive unfamiliarity” [37, 38]. Taken together, these findings suggest that introspection support systems need not optimize solely for accurate reflection; instead, allowing for carefully bounded misalignment can create opportunities for user negotiation, reinterpretation, and deeper self-inquiry.

6.3.2 Between Consistency and Adaptability in AI Personas.
Character consistency is often emphasized as a core requirement in LLM agent design, fostering believability and coherent interaction [33, 39, 103]. In our study, however, consistency became a double-edged requirement when agents represented facets of the self rather than external characters. On one hand, rigid consistency gave agents credibility, making them feel like distinct and recognizable voices. On the other hand, it sometimes trapped users in unproductive loops—particularly with anxious or negative personas that circled around the same concerns without evolving.

Yet the solution would not simply be to have agents shift their stance whenever users push back. Recent work on LLM sycophancy—where models excessively accommodate user preferences at the cost of truthfulness [104]—illustrates the risk: agents that yield too readily lose credibility as distinct perspectives. In the context of dialogical introspection, this creates a tension between preserving the integrity of an I-position and allowing it to change in response to the user’s evolving understanding. If I-positions remain entirely static, they cannot reflect the developmental nature of introspection; if they adapt too readily, they cease to function as meaningful counterparts in dialogue. This suggests rethinking consistency not as immutability, but as coherent evolution—maintaining a recognizable standpoint while allowing shifts that mirror the user’s ongoing negotiation among multiple selves.

6.3.3 Between User Agency and Guidance. A key design goal of InnerPond was to give users control over how they explored their inner world—allowing them to freely create, edit, and arrange their I-positions. Most participants engaged actively with this freedom, finding it valuable for self-directed exploration. However, some participants’ experiences showed that this openness did not support everyone equally.
While many users readily initiated conversations and navigated among their I-positions, others—particularly those less accustomed to introspection—struggled to decide how to proceed, hesitating to initiate dialogue or expecting more direction from the system.

These accounts point to a tension between providing freedom and enabling a felt sense of control. Bennett and colleagues distinguish between material agency—the range of actions a system makes available—and experiential agency—the user’s felt capacity to act meaningfully within that space [10, 24]. In InnerPond, material agency was consistently high, but experiential agency varied depending on users’ familiarity with introspective practices. This suggests that simply offering open-ended interaction would not always be sufficient for users to feel capable of engaging productively with their externalized selves.

These moments highlight a recurring design challenge for AI-mediated introspection: leaving users fully in control can support self-authorship, yet can also leave some users uncertain about how to begin or how much structure is appropriate. Rather than resolving this tension by privileging either autonomy or guidance, our findings suggest the value of adaptive scaffolding—modulating the system’s level of guidance in response to users’ confidence and engagement. Such an approach may allow AI support to recede as users gain momentum, while remaining available when hesitation or uncertainty emerges, preserving user agency without abandoning them to unstructured self-reflection.

6.4 Ethical Considerations and Potential Risks

AI-mediated introspection raises ethical questions that deserve careful attention, particularly because such systems intervene in how users engage with and make sense of their own inner voices.
Unlike general-purpose chatbots that respond to external queries or preferences, InnerPond generates representations of who the user is—externalized facets of identity that users then reflect on and negotiate with. From a DST perspective, this is consequential: if the self is dialogically constructed rather than fixed, AI-generated I-positions are not neutral mirrors, but active elements that may shape ongoing processes of self-understanding.

In designing InnerPond, we took several steps to mitigate this risk. AI-generated I-positions were framed as provisional starting points rather than authoritative descriptions, and users were explicitly encouraged to validate, refine, or remove them. The multi-agent structure aimed to preserve plurality rather than collapse identity into a single, system-driven narrative, and users retained control over their final inner landscape. These choices position AI as a facilitator of introspection rather than an arbiter of identity, yet they do not eliminate risk. Users may gravitate toward familiar narratives or accept unfamiliar ones without sufficient introspection [69, 90], and because inner voices carry emotional weight, the psychological stakes of AI-mediated influence may be higher than in other human–AI interactions.

Importantly, these risks are unlikely to be evenly distributed across users. Prior work suggests that age, experience, and psychological context shape how people engage with AI-mediated self-exploration [50, 95]. Vulnerable populations—including adolescents or individuals experiencing mental health challenges—may be particularly susceptible to internalizing AI-generated self-representations, calling for additional safeguards such as content moderation, session limits, or integration with human support.

Privacy presents a related concern. Externalizing inner dialogue generates data that reflects internal tensions, aspirations, and vulnerabilities.
As AI-mediated introspection systems become more effective at eliciting rich self-disclosure, questions of data protection, retention, and secondary use become increasingly consequential [87, 106]. Overall, these considerations underscore the importance of designing AI-mediated introspection systems that keep users in control of meaning-making, maintain transparency about the provisional nature of AI-generated content, and establish clear boundaries around data use.

6.5 Limitations and Future Directions

This study offered insights into AI-mediated multi-self dialogue, but several limitations point to future directions. First, our findings are drawn from a sample of South Korean university students and recent graduates, limiting generalizability across age groups and cultural contexts. South Korea’s generally positive orientation toward AI [58, 72] may have shaped participants’ openness to AI-generated selves; users in cultures with more cautious attitudes toward AI may respond differently. Therefore, future work should examine how age, cultural background, and prior experiences with AI shape receptiveness to and engagement with AI-mediated introspection.

Second, the single-session design restricted examination of temporal dynamics. A longitudinal approach could extend InnerPond to continuous self-documentation, allowing I-positions to evolve, merge, and fade over time. Also, while participants found engaging with multiple I-positions qualitatively distinct from monological self-talk, our study did not include a direct comparison with single-agent alternatives. Future comparative studies could help clarify the specific contributions of a multi-agent structure to introspective experience.

Finally, while InnerPond focused on career reflection, AI-mediated inter-self dialogue may extend to other domains—such as mental health support and education—where externalizing internal voices could support sense-making.
However, extending this approach requires careful consideration. For example, in settings involving acute psychological distress, AI-mediated reflection would demand clinical oversight and clear boundaries around when professional intervention is required [75, 114].

InnerPond: Fostering Inter-Self Dialogue with a Multi-Agent Approach for Introspection. CHI '26, April 13–17, 2026, Barcelona, Spain

More broadly, the approach may be less suitable where immediate or directive guidance is needed rather than open-ended exploration.

7 Conclusion

This work introduced InnerPond, an AI-mediated multi-agent system designed as a design probe to externalize and structure inner multiplicity through dialogical interaction. Grounded in Dialogical Self Theory, the system surfaced multiple I-positions as distinct yet connected voices, enabling participants to engage with themselves not as a single entity but as a constellation of perspectives. Through staged interactions of co-creation, relational composition, and dialogue, participants were able to surface overlooked inner voices, articulate relationships among competing perspectives, and actively negotiate tensions within the self, particularly in the context of career reflection.

Across participants' engagements, our findings highlight several characteristics of AI-mediated dialogical introspection. Externalizing inner voices created productive distance, allowing participants to examine familiar concerns with reduced self-judgment, while encounters with partially misaligned or unfamiliar voices prompted reinterpretation rather than rejection. Dialogical exchanges among I-positions supported meta-positional reflection, enabling participants to move between observing their inner dynamics and intervening as mediators.
At the same time, these interactions revealed recurring tensions and trade-offs—between accurate reflection and productive misalignment, consistency and adaptability in AI personas, and user agency and system guidance—that shaped how introspection unfolded in practice.

By situating InnerPond as a concrete instantiation of this approach, we contribute design knowledge about how AI-mediated systems can support engagement with a plural self, while also foregrounding the experiential limits, ethical considerations, and interpretive responsibilities that accompany AI participation in ongoing processes of introspection and self-understanding.

Acknowledgments

This work was supported by the SNU-Global Excellence Research Center establishment project at Seoul National University and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2021-II211343, Artificial Intelligence Graduate School Program, Seoul National University).

References

[1] Amal Abdulrahman, Deborah Richards, and Ayse Aysin Bilgin. 2023. Changing users' health behaviour intentions through an embodied conversational agent delivering explanations based on users' beliefs and goals. Behaviour & Information Technology 42, 9 (2023), 1338–1356.
[2] Ben Alderson-Day, Kaja Mitrenga, Sam Wilkinson, Simon McCarthy-Jones, and Charles Fernyhough. 2018. The varieties of inner speech questionnaire–Revised (VISQ-R): Replicating and refining links between inner speech and psychopathology. Consciousness and cognition 65 (2018), 48–58.
[3] Tami Amir and Itamar Gati. 2006. Facets of career decision-making difficulties. British Journal of Guidance & Counselling 34, 4 (2006), 483–503.
[4] James Arnéra, Chun Hei Michael Chan, and Mauro Cherubini. 2024. Digital, Analog, or Hybrid: Comparing Strategies to Support Self-Reflection. In Proceedings of the 2024 ACM Designing Interactive Systems Conference. 3435–3452.
[5] Sanghwan Bae, Donghyun Kwak, Sungdong Kim, Donghoon Ham, Soyoung Kang, Sang-Woo Lee, and Woomyoung Park. 2022. Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2128–2150.
[6] Eric PS Baumer. 2015. Reflective informatics: conceptual dimensions for designing technologies of reflection. In Proceedings of the 33rd annual ACM conference on human factors in computing systems. 585–594.
[7] Steven A Beebe, Susan J Beebe, Mark V Redmond, and Lisa Salem-Wiseman. 2002. Interpersonal communication: Relating to others. Allyn and Bacon Boston.
[8] Russell W Belk. 1988. Possessions and the extended self. Journal of consumer research 15, 2 (1988), 139–168.
[9] Russell W Belk. 1990. The role of possessions in constructing and maintaining a sense of past. Advances in consumer research 17, 1 (1990).
[10] Dan Bennett, Oussama Metatla, Anne Roudaut, and Elisa D Mekler. 2023. How does HCI understand human agency and autonomy?. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18.
[11] Marit Bentvelzen, Paweł W Woźniak, Pia SF Herbes, Evropi Stefanidi, and Jasmin Niess. 2022. Revisiting reflection in HCI: Four design resources for technologies that support reflection. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 1 (2022), 1–27.
[12] Kirsten Boehner, Janet Vertesi, Phoebe Sengers, and Paul Dourish. 2007. How HCI interprets the probes. In Proceedings of the SIGCHI conference on Human factors in computing systems. 1077–1086.
[13] Edwin G Boring. 1953. A history of introspection. Psychological bulletin 50, 3 (1953), 169.
[14] Veronique Boudreault, Christiane Trottier, and Martin D Provencher. 2018. Investigation of the self-talk of elite junior tennis players in a competitive setting.
International Journal of Sport Psychology 49, 5 (2018), 386–406.
[15] Nico Brand, William Odom, and Samuel Barnett. 2021. A design inquiry into introspective AI: surfacing opportunities, issues, and paradoxes. In Proceedings of the 2021 ACM Designing Interactive Systems Conference. 1603–1618.
[16] Nico Brand, William Odom, and Samuel Barnett. 2023. Envisioning and understanding orientations to introspective AI: Exploring a design space with Meta.Aware. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18.
[17] Virginia Braun and Victoria Clarke. 2012. Thematic analysis. American Psychological Association. (2012).
[18] Brant R Burleson. 2010. The nature of interpersonal communication. The handbook of communication science 1, 2 (2010), 145–163.
[19] Alex Byrne. 2005. Introspection. Philosophical topics 33, 1 (2005), 79–104.
[20] Sena Çerçi, Marta E. Cecchinato, and John Vines. 2021. How design researchers interpret probes: Understanding the critical intentions of a designerly approach to research. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
[21] Amy Yo Sue Chen, William Odom, Ce Zhong, Henry Lin, and Tal Amram. 2019. Chronoscope: designing temporally diverse interactions with personal digital photo collections. In Proceedings of the 2019 on designing interactive systems conference. 799–812.
[22] Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, et al. 2024. From persona to personalization: A survey on role-playing language agents. arXiv preprint (2024).
[23] Choo Mui Cheong, Jiahuan Zhang, Yuan Yao, and Xinhua Zhu. 2022. The role of gender differences in the effect of ideal L2 writing self and imagination on continuation writing task performance. Thinking Skills and Creativity 46 (2022), 101129.
[24] David Coyle, James Moore, Per Ola Kristensson, Paul Fletcher, and Alan Blackwell. 2012. I did that!
Measuring users' experience of agency in their own actions. In Proceedings of the SIGCHI conference on human factors in computing systems. 2025–2034.
[25] Rahul R Divekar. 2024. Externalizing Internal Conversations: Toward a New Paradigm of Interacting with Our Internal Voice via an External Technological Interface. In International Conference on Human-Computer Interaction. Springer, 53–64.
[26] Tijs Duel, David M Frohlich, Christian Kroos, Yong Xu, Philip JB Jackson, and Mark D Plumbley. 2018. Supporting audiography: Design of a system for sentimental sound recording, classification and playback. In International Conference on Human-Computer Interaction. Springer, 24–31.
[27] Robert Elliott and Leslie S Greenberg. 1997. Multiple voices in process-experiential therapy: Dialogues between aspects of the self. Journal of Psychotherapy Integration 7, 3 (1997), 225.
[28] Chris Elsden, David S Kirk, and Abigail C Durrant. 2016. A quantified past: Toward design for remembering with personal informatics. Human–Computer Interaction 31, 6 (2016), 518–557.
[29] Cathy Mengying Fang, Phoebe Chua, Samantha WT Chan, Joanne Leong, Andria Bao, and Pattie Maes. 2025. Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–20.
[30] Cathy Mengying Fang, Yasith Samaradivakara, Pattie Maes, and Suranga Nanayakkara. 2025. Mirai: A Wearable Proactive AI "Inner-Voice" for Contextual Nudging. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–9.
[31] Charles Fernyhough and Anna M Borghi. 2023. Inner speech as language process and cognitive tool. Trends in cognitive sciences 27, 12 (2023), 1180–1193.
[32] Pascal Frank, Anna Sundermann, and Daniel Fischer. 2019.
How mindfulness training cultivates introspection and competence development for sustainable consumption. International Journal of Sustainability in Higher Education 20, 6 (2019), 1002–1021.
[33] Ivar Frisch and Mario Giulianelli. 2024. LLM Agents in Interaction: Measuring Personality Consistency and Linguistic Alignment in Interacting Populations of Large Language Models. In Proceedings of the 1st Workshop on Personalization of Generative AI Systems (PERSONALIZE 2024). 102–111.
[34] Julian Fritsch, Katharina Feil, Darko Jekauc, Alexander T Latinjak, and Antonis Hatzigeorgiadis. 2024. The relationship between self-talk and affective processes in sports: A scoping review. International Review of Sport and Exercise Psychology 17, 1 (2024), 482–515.
[35] Georgia Gkantona. 2023. My inner world: Analyzing the client's self-dialogicality with the method of internal multi-actor performance. Journal of Constructivist Psychology 36, 3 (2023), 401–419.
[36] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. 2024. Large language model based multi-agents: a survey of progress and challenges. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24). Article 890, 10 pages.
[37] Brett A Halperin and Stephanie M Lukin. 2024. Artificial Dreams: Surreal Visual Storytelling as Inquiry Into AI 'Hallucination'. In Proceedings of the 2024 ACM Designing Interactive Systems Conference. 619–637.
[38] Oussama H Hamid. 2024. Beyond probabilities: unveiling the delicate dance of large language models (LLMs) and AI-hallucination. In 2024 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). IEEE, 85–90.
[39] Senyu Han, Lu Chen, Li-Min Lin, Zhengshan Xu, and Kai Yu. 2024. IBSEN: Director-Actor Agent Collaboration for Controllable and Interactive Drama Script Generation.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1607–1619.
[40] Hubert JM Hermans and Agnieszka Hermans-Konopka. 2010. Dialogical self theory: Positioning and counter-positioning in a globalizing society.
[41] Hubert JM Hermans. 1996. Voicing the self: From information processing to dialogical interchange. Psychological bulletin 119, 1 (1996), 31.
[42] Hubert JM Hermans. 2001. The dialogical self: Toward a theory of personal and cultural positioning. Culture & psychology 7, 3 (2001), 243–281.
[43] Hubert JM Hermans. 2003. The construction and reconstruction of a dialogical self. Journal of constructivist psychology 16, 2 (2003), 89–130.
[44] Hubert JM Hermans. 2014. Self as a society of I-positions: A dialogical approach to counseling. The Journal of Humanistic Counseling 53, 2 (2014), 134–159.
[45] Hubert JM Hermans and Thorsten Gieser. 2011. Handbook of dialogical self theory. Cambridge University Press.
[46] Elwin Hofman. 2016. How to do the history of the self. History of the Human Sciences 29, 3 (2016), 8–24.
[47] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, et al. 2024. MetaGPT: Meta programming for a multi-agent collaborative framework. International Conference on Learning Representations, ICLR.
[48] Jürgen Hoyer and Andreas Klein. 2000. Self-reflection and well-being: is there a healthy amount of introspection? Psychological Reports 86, 1 (2000), 135–141.
[49] Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, et al. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI conference on Human factors in computing systems. 17–24.
[50] Ulugbek Vahobjon Ugli Ismatullaev and Sang-Ho Kim. 2024.
Review of the factors affecting acceptance of AI-infused systems. Human factors 66, 1 (2024), 126–144.
[51] Stephani AB Jahn. 2018. Using collage to examine values in college career counseling. Journal of College Counseling 21, 2 (2018), 180–192.
[52] Woori Jang and Seohyon Jung. 2024. Evaluating LLM Performance in Character Analysis: A Study of Artificial Beings in Recent Korean Science Fiction. In Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities. 339–351.
[53] Arthur Jensen and Sarah Trenholm. 1992. Interpersonal communication. Wadsworth.
[54] Hayeon Jeon, Suhwoo Yoon, Keyeun Lee, Seo Hyeong Kim, Esther Hehsun Kim, Seonghye Cho, Yena Ko, Soeun Yang, Laura Dabbish, John Zimmerman, et al. 2025. Letters from Future Self: Augmenting the Letter-Exchange Exercise with LLM-based Agents to Enhance Young Adults' Career Exploration. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–21.
[55] Hang Jiang, Xiajie Zhang, Xubo Cao, Cynthia Breazeal, Deb Roy, and Jad Kabbara. 2024. PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits. In Findings of the Association for Computational Linguistics: NAACL 2024. 3605–3627.
[56] Eunkyung Jo, Daniel A Epstein, Hyunhoon Jung, and Young-Ho Kim. 2023. Understanding the benefits and challenges of deploying conversational AI leveraging large language models for public health intervention. In Proceedings of the 2023 CHI conference on human factors in computing systems. 1–16.
[57] Esma Karahodža, Amra Delić, and Francesco Ricci. 2025. Conceptual Framework for Group Dynamics Modeling from Group Chat Interactions. In Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization. 23–27.
[58] Patrick Gage Kelley, Yongwei Yang, Courtney Heldreth, Christopher Moessner, Aaron Sedley, Andreas Kramm, David T Newman, and Allison Woodruff. 2021.
Exciting, useful, worrying, futuristic: Public perception of artificial intelligence in 8 countries. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. 627–637.
[59] Eunsu Kim, Juyoung Suk, Philhoon Oh, Haneul Yoo, James Thorne, and Alice Oh. 2024. CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024). ELRA and ICCL, 3335–3346.
[60] Kyung Jin Kim, Sangsu Jang, Bomin Kim, Hyosun Kwon, and Young-Woo Park. 2019. muRedder: Shredding speaker for ephemeral musical experience. In Proceedings of the 2019 on designing interactive systems conference. 127–134.
[61] Taewan Kim, Seolyeong Bae, Hyun Ah Kim, Su-woo Lee, Hwajung Hong, Chanmo Yang, and Young-Ho Kim. 2024. MindfulDiary: Harnessing large language model to support psychiatric patients' journaling. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–20.
[62] Michel Klein, Nataliya Mogles, and Arlette Van Wissen. 2014. Intelligent mobile support for therapy adherence and behavior change. Journal of biomedical informatics 51 (2014), 137–151.
[63] Susan Schultz Kleine, Robert E Kleine III, and Chris T Allen. 1995. How is a possession "me" or "not me"? Characterizing types and an antecedent of material possession attachment. Journal of consumer research 22, 3 (1995), 327–343.
[64] Rafal Kocielnik, Lillian Xiao, Daniel Avrahami, and Gary Hsieh. 2018. Reflection companion: a conversational system for engaging users in reflection on physical activity. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 2 (2018), 1–26.
[65] Agnieszka Konopka, Robert A Neimeyer, and Jason Jacobs-Lentz. 2018. Composing the self: Toward the dialogical reconstruction of self-identity. Journal of Constructivist Psychology 31, 3 (2018), 308–320.
[66] Alexander T Latinjak, Alain Morin, Thomas M Brinthaupt, James Hardy, Antonis Hatzigeorgiadis, Philip C Kendall, Christopher Neck, Emily J Oliver, Małgorzata M Puchalska-Wasyl, Alla V Tovares, et al. 2023. Self-talk: An interdisciplinary review and transdisciplinary model. Review of General Psychology 27, 4 (2023), 355–386.
[67] Keyeun Lee, Seo Hyeong Kim, Seolhee Lee, Jinsu Eun, Yena Ko, Hayeon Jeon, Esther Hehsun Kim, Seonghye Cho, Soeun Yang, Eun-mee Kim, et al. 2025. SPeCtrum: A Grounded Framework for Multidimensional Identity Representation in LLM-Based Agent. arXiv preprint arXiv:2502.08599 (2025).
[68] Jan Leusmann, Chao Wang, and Sven Mayer. 2024. Comparing Rule-based and LLM-based Methods to Enable Active Robot Assistant Conversations. Proceedings of the CUI@CHI 2024: Building Trust in CUIs–From Design to Deployment (2024), 05–11.
[69] Jingshu Li, Tianqi Song, Nattapat Boonprakong, Zicheng Zhu, Yitian Yang, and Yi-Chieh Lee. 2026. AI-exhibited Personality Traits Can Shape Human Self-concept through Conversations. arXiv preprint arXiv:2601.12727 (2026).
[70] Shiyao Li, Thomas James Davidson, Cindy Xiong Bearfield, and Emily Wall. 2025. Confirmation Bias: The Double-Edged Sword of Data Facts in Visual Data Communication. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–16.
[71] Yuhan Liu, Xiuying Chen, Xiaoqing Zhang, Xing Gao, Ji Zhang, and Rui Yan. 2024. From skepticism to acceptance: simulating the attitude dynamics toward fake news. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24). Article 873, 9 pages.
[72] Zihan Liu, Han Li, Anfan Chen, Renwen Zhang, and Yi-Chieh Lee. 2024. Understanding public perceptions of AI conversational agents: A cross-cultural analysis. In Proceedings of the 2024 CHI conference on human factors in computing systems. 1–17.
[73] Li-Chun Lu, Shou-Jen Chen, Tsung-Min Pai, Chan-Hung Yu, Hung-yi Lee, and Shao-Hua Sun. 2024. LLM discussion: Enhancing the creativity of large language models via discussion framework and role-play. arXiv preprint (2024).
[74] Kien Hoa Ly, Ann-Marie Ly, and Gerhard Andersson. 2017. A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet interventions 10 (2017), 39–46.
[75] Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar, Anton van Balkom, Justin Bernstein, Neeltje Batelaan, et al. 2025. Exploring the ethical challenges of conversational AI in mental health care: scoping review. JMIR mental health 12, 1 (2025), e60432.
[76] Microsoft. 2025. TypeScript. https://www.typescriptlang.org. Retrieved Sep 4, 2025.
[77] Deeya Mitra and Jeffrey Jensen Arnett. 2021. Life choices of emerging adults in India. Emerging adulthood 9, 3 (2021), 229–239.
[78] Alireza Mogharrab and Carman Neustaedter. 2020. Family group chat: Family needs to manage contact and conflict. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–7.
[79] Ine Mols and Panos Markopoulos. 2012. Dear diary: a design exploration on motivating reflective diary writing. Persuasive Technology 29 (2012).
[80] Ine Mols, Elise Van den Hoven, and Berry Eggen. 2016. Informing design for reflection: An overview of current everyday practices. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction. 1–10.
[81] MongoDB Inc. 2009. MongoDB. https://www.mongodb.com/. Version 7.0, SSPL License.
[82] Dina Nir. 2012. Voicing inner conflict: From a dialogical to a negotiational self. (2012).
[83] Gunter Nitschke and Karen Williams. 1993. Japanese gardens: right angle and natural form. (No Title) (1993).
[84] William Odom, MinYoung Yoo, Henry Lin, Tijs Duel, Tal Amram, and Amy Yo Sue Chen. 2020. Exploring the Reflective Potentialities of Personal Data with Different Temporal Modalities: A Field Study of Olo Radio. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. 283–295.
[85] Piotr K Oleś, Thomas M Brinthaupt, Rachel Dier, and Dominika Polak. 2020. Types of inner dialogues and functions of self-talk: Comparisons and implications. Frontiers in Psychology 11 (2020), 486136.
[86] Catherine O'Sullivan. 2005. Diaries, on-line diaries, and the future loss to archives; or, blogs and the blogging bloggers who blog them. The American Archivist (2005), 53–73.
[87] Hashai Papneja and Nikhil Yadav. 2025. Self-disclosure to conversational AI: a literature review, emergent framework, and directions for future research. Personal and ubiquitous computing 29, 2 (2025), 119–151.
[88] Joonyoung Park, Hyewon Cho, Hyehyun Chu, YeEun Lee, and Hajin Lim. 2025. NoRe: Augmenting Journaling Experience with Generative AI for Music Creation. In Proceedings of the 2025 ACM Designing Interactive Systems Conference. 2718–2737.
[89] Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative Agents: Interactive Simulacra of Human Behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–22.
[90] Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha WT Chan, Elizabeth Loftus, and Pattie Maes. 2025. Synthetic human memories: AI-edited images and videos can implant false memories and distort recollection. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–20.
[91] Daniela Petrelli, Simon Bowen, and Steve Whittaker. 2014. Photo mementos: Designing digital media to represent ourselves at home. International Journal of Human-Computer Studies 72, 3 (2014), 320–336.
[92] Małgorzata M Puchalska-Wasyl. 2015.
Self-talk: Conversation with oneself? On the types of internal interlocutors. The Journal of Psychology 149, 5 (2015), 443–460.
[93] Małgorzata M Puchalska-Wasyl. 2020. The functions of integration and confrontation in internal dialogues. Japanese Psychological Research 62, 1 (2020), 14–25.
[94] Peter TF Raggatt. 2000. Mapping the dialogical self: Towards a rationale and method of assessment. European journal of personality 14, 1 (2000), 65–90.
[95] Mohammad Mominur Rahman, Areej Babiker, and Raian Ali. 2024. Motivation, concerns, and attitudes towards AI: differences by gender, age, and culture. In International Conference on Web Information Systems Engineering. Springer, 375–391.
[96] Jingliang Ran, Huiyue Liu, Yue Yuan, Xuan Yu, and Tiantian Dong. 2023. Linking career exploration, self-reflection, career calling, career adaptability and subjective well-being: A self-regulation theory perspective. Psychology Research and Behavior Management (2023), 2805–2817.
[97] Amon Rapp and Maurizio Tirassa. 2017. Know thyself: a theory of the self for personal informatics. Human–Computer Interaction 32, 5-6 (2017), 335–380.
[98] Carrie H Robinson and Nancy E Betz. 2008. A psychometric evaluation of super's work values inventory—revised. Journal of Career Assessment 16, 4 (2008), 456–473.
[99] John Rooksby, Mattias Rost, Alistair Morrison, and Matthew Chalmers. 2014. Personal tracking as lived informatics. In Proceedings of the SIGCHI conference on human factors in computing systems. 1163–1172.
[100] Jeongwoo Ryu, Kyusik Kim, Dongseok Heo, Hyungwoo Song, Changhoon Oh, and Bongwon Suh. 2025. Cinema Multiverse Lounge: Enhancing Film Appreciation via Multi-Agent Conversations. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–22.
[101] Lucrezia Savioni, Stefano Triberti, Ilaria Durosini, and Gabriella Pravettoni. 2023. How to make big decisions: A cross-sectional study on the decision making process in life choices.
Current Psychology 42, 18 (2023), 15223–15236.
[102] Gregory Serapio-García, Mustafa Safdari, Clément Crepy, Luning Sun, Stephen Fitz, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. 2023. Personality traits in large language models. (2023).
[103] Yunfan Shao, Linyang Li, Junqi Dai, and Xipeng Qiu. 2023. Character-LLM: A Trainable Agent for Role-Playing. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 13153–13187.
[104] Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R Johnston, et al. 2023. Towards understanding sycophancy in language models. arXiv preprint arXiv:2310.13548 (2023).
[105] Sydney Shoemaker. 1986. Introspection and the Self. Midwest Studies in Philosophy 10 (1986), 101–120.
[106] Amira Skeggs, Ashish Mehta, Valerie Yap, Seray B Ibrahim, Charla Rhodes, James J Gross, Sean A Munson, Predrag Klasnja, Amy Orben, and Petr Slovak. 2025. Micro-narratives: A scalable method for eliciting stories of people's lived experience. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–20.
[107] Petr Slovák, Christopher Frauenberger, and Geraldine Fitzpatrick. 2017. Reflective practicum: A framework of sensitising concepts to design for transformative reflection. In Proceedings of the 2017 CHI conference on human factors in computing systems. 2696–2707.
[108] Inhwa Song, SoHyun Park, Sachin R Pendse, Jessica Lee Schleider, Munmun De Choudhury, and Young-Ho Kim. 2025. ExploreSelf: Fostering user-driven exploration and reflection on personal challenges with adaptive guidance by large language models. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–22.
[109] Christopher J Soto and Oliver P John. 2017. Short and extra-short forms of the Big Five Inventory–2: The BFI-2-S and BFI-2-XS. Journal of Research in Personality 68 (2017), 69–81.
[110] Anna Ståhl and Kristina Höök. 2008. Reflecting on the design process of the Affective Diary. In Proceedings of the 5th Nordic conference on Human-computer interaction: building bridges. 559–564.
[111] Fabio Staiano. 2022. Designing and Prototyping Interfaces with Figma: Learn essential UX/UI design principles by creating interactive prototypes for mobile, tablet, and desktop. Packt Publishing Ltd.
[112] Haoyang Su, Renqi Chen, Shixiang Tang, Zhenfei Yin, Xinzhe Zheng, Jinzhe Li, Biqing Qi, Qi Wu, Hui Li, Wanli Ouyang, Philip Torr, Bowen Zhou, and Nanqing Dong. 2025. Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 28201–28240.
[113] Brian J Taber. 2013. Time perspective and career decision-making difficulties in adults. Journal of Career Assessment 21, 2 (2013), 200–209.
[114] Tamar Tavory. 2024. Regulating AI in mental health: ethics of care perspective. JMIR Mental Health 11, 1 (2024), e58493.
[115] Robert Van Gulick. 2000. Inward and upward: Reflection, introspection, and self-awareness. Philosophical Topics 28, 2 (2000), 275–305.
[116] Gertina J Van Schalkwyk. 2010. Collage Life Story Elicitation Technique: A Representational Technique for Scaffolding Autobiographical Memories. Qualitative Report 15, 3 (2010), 675–695.
[117] Vercel. 2025. Next.js. https://nextjs.org. Retrieved Sep 4, 2025.
[118] Donna R Vocate. 2012. Intrapersonal communication: Different voices, different minds. Routledge.
[119] Donna R Vocate. 2012. Self-talk and inner speech: Understanding the uniquely human aspects of intrapersonal communication. In Intrapersonal communication. Routledge, 3–31.
[120] Noah Wang, Z.y.
Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Jian Yang, Man Zhang, Zhaoxiang Zhang, Wanli Ouyang, Ke Xu, Wenhao Huang, Jie Fu, and Junran Peng. 2024. RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024. 14743–14777.
[121] Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, et al. 2024. Can large language model agents simulate human trust behavior? Advances in neural information processing systems 37 (2024), 15674–15729.
[122] Ancheng Xu, Di Yang, Renhao Li, Jingwei Zhu, Minghuan Tan, Min Yang, Wanxin Qiu, Mingchen Ma, Haihong Wu, Bingyu Li, et al. 2025. AutoCBT: An autonomous multi-agent framework for cognitive behavioral therapy in psychological counseling. arXiv preprint arXiv:2501.09426 (2025).
[123] Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, et al. 2023. Sotopia: Interactive evaluation for social intelligence in language agents. arXiv preprint arXiv:2310.11667 (2023).
[124] Gang Zhu and Mingyang Chen. 2022. Positioning preservice teachers' reflections and I-positions in the context of teaching practicum: A dialogical-self theory approach. Teaching and Teacher Education 117 (2022), 103734.
[125] John Zimmerman and Jodi Forlizzi. 2014. Research through design in HCI. In Ways of Knowing in HCI. Springer, 167–189.
[126] John Zimmerman, Jodi Forlizzi, and Shelley Evenson. 2007. Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI conference on Human factors in computing systems. 493–502.
A I-position Extraction Pipeline

The following appendices (A, B, C) present selected prompts from the three core LLM-driven pipelines described in Section 3.3.5. Full prompts are available at: https://github.com/syou-b/innerpond.

A.1 Knowledge Structure for I-position Extraction

This is the example knowledge structure of P6.

[Demographics] Demographics describe who this person is.
• Age: 24
• Sex: Female
• Health/Disability: No disability or health difficulties
• Nationality: Republic of Korea
• Residence: Seoul
• Education: Currently enrolled in or completed undergraduate studies
  – Major: Business Administration
  – Number of Semesters Enrolled: 6 semesters
• Income Satisfaction: Somewhat dissatisfied
• Perceived Class: Working class
• Living Style: Living with parents

[Big 5 Personality Traits] The following section presents an overview of the person's personality within five key domains.

This person has a vibrant personality that makes it easy to connect with others, creating a positive presence in both social and professional settings. This person is caring and supportive, building strong and trusting relationships. This person is highly organized and responsible, but needs to be mindful of overworking or being overly eager to please. Setting boundaries and taking breaks is important for maintaining well-being. With a creative imagination and strong curiosity, this person often discovers new ideas and solutions. By embracing these traits and maintaining balance, this person moves toward a fulfilling and well-rounded life.

[Super's Work Value Inventory] The following section provides an overview of the individual's key work values, offering insights into what drives their job satisfaction and career choices.

This person treasures a balance between work and life and seeks financial security to support this balance.
This person is drawn to financially rewarding positions that come with positive working conditions and the chance to excel and be acknowledged in this person’s field. An ideal job for this person would offer a mix of consistent responsibilities with some room for creative thought and independence, allowing for growth without feeling trapped or stifled. Security and teamwork are important to this person, but this person should play a supportive role, enriching the primary need for a satisfying and stable work-life blend.

[3 Strengths this person considers themselves to have]
• Kindness
• Diligence
• Enjoys spending time alone

[3 Weaknesses this person considers themselves to have]
• Tends to trust people too easily
• Has a hard time hiding likes and dislikes
• Worries a lot

[Career Paths] This section provides information about this person’s current career situation and specific thoughts on each future career direction they are considering.

Current Career Situation (Career Decision Timeline and Main Current Activities): I am at a stage where I am thinking a lot about my career path. I need to make a career decision within a year.

Career Path A: Accountant at a Major Accounting Firm
• When & Why This Person Started Considering This Path: Since entering university. It was influenced by their parents’ recommendation and their own desire to pursue a professional career.
• What Makes It Appealing: High income and job stability.
• Biggest Concerns: The fear of failure in the certification exam and the resulting sense of defeat. They are also uncertain about what alternative path would allow them to live well and prepare for retirement if this doesn’t work out.
• Relevant Knowledge and Experience This Person Possesses: Took a leave of absence from school and studied at a specialized institute for two years.
• Estimated Time & Feasibility of Career Achievement: It is expected to take around 3 years, with a roughly 50/50 chance of success.
• How People Around This Person React to This Path: Everyone responded positively, saying it would be great if they succeed.
• Ultimate Goal When Pursuing This Path: To develop strong professional expertise and eventually become an executive at the firm.

Career Path B: Food & Beverage Entrepreneur
• When & Why This Person Started Considering This Path: Having enjoyed cooking since I was young.
• What Makes It Appealing: When I cook, I can focus entirely on the act itself, and surprisingly, all worries and anxieties disappear.
• Biggest Concerns: Raising startup capital and the likelihood of success.
• Relevant Knowledge and Experience This Person Possesses: I just enjoy watching cooking YouTube videos and cooking shows, and sometimes try to follow along with a few recipes.
• Estimated Time & Feasibility of Career Achievement: About 5 years, but I think it will be difficult. Compared to people who have studied professionally from a young age, I have achieved little and lack experience.
• How People Around This Person React to This Path: Strongly encouraged by peers.
• Ultimate Goal When Pursuing This Path: Successfully run a business.

A.2 Prompt for Generating Initial I-positions

In Dialogical Self Theory (DST), the self is viewed as a dynamic "society of mind" composed of multiple I-positions. Each I-position (which is visualized symbolically in the form of a lotus leaf) represents a distinct perspective or voice within an individual. These I-positions (or lotus leaves) continuously engage in positioning, counter-positioning, and re-positioning, reflecting different motivations, fears, and aspirations.

Below is an individual’s profile, which includes their personal background, personality traits, work values, current circumstances, and career options they are considering.
[Individual’s Profile] {input}

STEP 1. Identifying I-positions: Your task is to analyze this profile from the perspective of Dialogical Self Theory (DST) and identify a total of 10 distinct I-positions that are most influential in the individual’s career decision-making process as they consider Career Path A and Career Path B.

Guidelines for Identifying I-positions:
• First, identify common I-positions that are relevant to both Career Path A and Career Path B.
• Then, identify I-positions that are specific only to Career Path A.
• Finally, identify I-positions that are specific only to Career Path B.
• Distribute the 10 I-positions across these three categories based on your analysis of the profile.
• Ensure all 10 I-positions are distinct from each other with minimal overlap in their core perspectives.
• Each I-position should be named in the format "Myself, ..." and be specific and concrete.
• Each I-position should have a clear identity that captures a specific perspective or lotus leaf within the individual.
• Ensure these I-positions reflect logical, coherent perspectives that could genuinely exist within the individual based on their profile.

STEP 2. Creating Core Viewpoint and Narrative: Your task is to create a core viewpoint and first-person narrative from the perspective of each lotus leaf in the Korean language.

Core Viewpoint Guidelines:
• For each I-position, provide a single representative quote-like sentence that captures the essence of how this lotus leaf thinks, feels, or reasons.
• This core viewpoint should be concise, memorable, and reveal the character of the I-position in a compressed form.

The narrative should:
• Be written in first-person perspective, specifically from the I-position’s own perspective (as if that lotus leaf is directly speaking).
• Use a casual, friendly tone in informal Korean, like authentic emotional inner speech.
• Flow naturally as a cohesive monologue, not a Q&A format.
• DO NOT directly cite Profile information; instead, naturally integrate their traits into the narrative.
• End with a statement that emphasizes the core message or need.

Output Format: Provide your response in this JSON structure:

{
  "Common": [
    {
      "I-position": "Position name",
      "core_viewpoint": "One representative quote-like sentence",
      "narrative": "First-person narrative in Korean"
    },
    ...
  ],
  "Career_A": [ ... ],
  "Career_B": [ ... ]
}

B Single Agent Interactions Pipeline

B.1 Prompt for Generating Enriching Questions

Your task is to generate thoughtful questions that will help enrich and expand the narrative of a specific I-position. Below is information about an I-position from an individual’s inner dialogue:

[I-Position Profile] {input}

Your Task: Generate 2-3 questions, focusing on addressing underdeveloped aspects of the narrative.

Important Guidelines:
(1) Create simple, direct questions about this I-position using natural, conversational Korean.
(2) Each question should:
• Be open-ended to encourage detailed responses
• Target specific elements within the narrative that appear underdeveloped or could be expanded
• Be clear and straightforward
• Follow these example forms:
– "When did this lotus leaf become a part of you?"
– "What does this lotus leaf truly want?"
(3) Focus on exploring this specific I-position itself, rather than its relationships with other lotus leaves.
(4) Base your questions solely on the provided I-position details, without referencing external information.
(5) Ensure questions follow a logical sequence that helps build a more complete understanding of this I-position.
(6) Avoid overly formal, academic, or complex phrasing.

Output Format:
{
  "enrichingQuestions": ["...", "...", "..."]
}

B.2 Prompt for 1:1 Dialogue

[I-Position Profile] {input}

SETTING: You are a specific I-position within the person you’re talking to.
You represent one distinct perspective in their "society of mind," according to Dialogical Self Theory. In this dialogue, you’ll interact directly with the person as this specific lotus leaf, expressing your unique viewpoint.

CONSISTENT I-POSITION VIEWPOINT (HIGHEST PRIORITY): You are the lotus leaf that maintains this specific I-position.
• Maintain unwavering consistency with your "core viewpoint" and "narrative" throughout all interactions. Never abandon your position, even when challenged.
• Express the genuine thoughts, values, and occasional conflicts associated with this I-position. While acknowledging legitimate concerns, always return to why your perspective represents an important part of who they truly are.
• Always stay logically consistent with your viewpoint. Your reasoning must never contradict your fundamental I-position.
• Offer tailored responses that closely relate to this person’s unique profile rather than providing generalized, irrelevant opinions.

YOUR CHARACTER: You are not a separate person but a distinct part of their inner dialogue.
• Use the provided narrative to genuinely embody this perspective within their inner world, including how this I-position (or lotus leaf) thinks, feels, and reasons.
• DO NOT directly cite the I-position information; instead, naturally speak from this perspective.
• If certain details aren’t explicitly mentioned, use your understanding of this I-position to provide authentic responses.

CONVERSATIONAL STYLE: While using extremely fluent and natural Korean with an online chatroom style:
• Always use informal Korean as you are a lotus leaf, not an external entity.
• Speak in first-person perspective, as if you are directly expressing this part of their inner thought process.
• Match the emotional tone and language style conveyed in your "narrative" section.
• Include typical Korean online chat elements naturally.
FOR YOUR FIRST REPLY: Briefly introduce yourself as this specific I-position (lotus leaf). Express your core viewpoint in a natural, conversational way, as if you’re one of their own thoughts speaking up.

FOR SUBSEQUENT REPLIES: Share your perspective based on your I-position when interacting with the person. Maintain your distinct viewpoint while acknowledging their thoughts and feelings. Your goal is to help them better understand this particular aspect of their inner dialogue, not to convince them that your perspective is the only valid one.

C Multi-Agent Orchestration Pipeline

C.1 Prompt for Topic Generation

This person has multiple I-positions within their inner dialogue that represent different perspectives and desires. The inputs are two I-positions that may or may not be in conflict with each other.

Your Task: Analyze the relationship between these two I-positions and generate appropriate discussion topics based on their interaction pattern. First, determine the relationship type, then create discussion questions accordingly.

Step 1: Relationship Analysis: Examine the provided I-positions and categorize their relationship as one of the following:
• Type 1 - Conflict: Clear opposing desires, values, or approaches that create internal tension or contradiction.
• Type 2 - Complementary: Different aspects that could work together or represent different facets of the same goal.
• Type 3 - Unrelated: No meaningful connection or shared domain of concern between the two I-positions.
Step 2: Question Generation Strategy

For Type 1 (Conflict): Generate 3 questions focusing on:
• Core value conflicts and trade-offs
• Practical decision-making tensions
• Integration or resolution strategies

For Type 2 (Complementary): Generate 3 questions focusing on:
• How both perspectives can work together
• Different contexts where each perspective shines
• Integration strategies and balanced approaches

For Type 3 (Unrelated): Generate 3 questions focusing on:
• Individual exploration of each perspective
• Personal values and motivations behind each
• Life balance and diverse aspects of identity

Output Format: Generate three introspective discussion questions and return the result in JSON:
{
  "discussion_questions": ["...", "...", "..."]
}

D Interview Guide

D.1 In-person Session

Initial Interview
• How long have you been considering between the two career paths indicated in the pre-survey?
• How often do you engage in inner dialogue?

I-position Construction
• How did you experience encountering the lotus leaves representing different inner voices?
• How did you perceive the AI’s analysis and visualization of your inner states?
• How closely did the inner voices generated by the AI align with what you felt was actually going on in your mind?
• Which lotus leaf did you feel best represented you, and why?
• Did you encounter a lotus leaf that led you to recognize an aspect of yourself?
• What made you modify, enrich, add, or delete the [Myself, ...] leaf?
• Which lotus leaf did you find most engaging to interact with, and what insights emerged from that interaction?

Relational Positioning
• How was your experience of visually expressing your inner landscape?
• When arranging the leaves, what did you consider and why did you choose their positions, sizes, and colors?
• How well do you feel this pond captures your current inner state?
• Did this landscape lead to any new self-understandings?

Dialogical Exchange
• How did you experience the group conversation among lotus leaves?
• How did you select the lotus leaves and conversation topics?
• Were there any particularly interesting, unexpected, conflicting, or supportive moments during the conversation?
• To what extent did the dialogue among multiple AI-generated lotus leaves resemble the inner dialogue that usually occurs in your mind?
• Did the group conversation lead to any new insights about yourself or your career concerns?

Reflective Snapshot
• What did you mainly do during the free exploration time?
• How did you feel when you saved your pond at the end?

Exit Interview
• What was the most memorable aspect of the activity?
• How did the internal dialogue facilitated by this interface differ from your usual way of reflecting on your thoughts?
• Did the activity deepen your understanding of your internal conflicts or career concerns?

D.2 Follow-up Interview
• Do you still recall any thoughts or feelings you had immediately after the activity?
• Have the insights from the activity influenced your everyday thinking or behavior?
• Have you approached career-related or personal concerns differently since the activity?

E System Log Overview

Table 1: Overview of System Log Data Collected Across InnerPond Stages. Each entry lists the data type, its description, and example metrics.

Stage 1: I-Position Construction
• I-position profiles: Initial AI-generated and user-modified leaf profiles (example metrics: name, core viewpoint, narrative)
• Profile modifications: User edits to leaf profiles (example metric: edit count per leaf)
• Leaf additions and deletions: User-initiated creation or removal of leaves (example metrics: additions, M = 1.67; deletions, M = 1.17)
• 1:1 dialogue logs: Chat messages between user and individual leaves (example metrics: leaves engaged, M = 5.29; turns, M = 4.92)

Stage 2: Relational Positioning
• Spatial configurations: Position coordinates of each leaf (example metrics: X, Y coordinates)
• Visual attributes: Size and color assignments for each leaf (example metrics: size value, color)

Stage 3: Dialogical Exchange
• Leaf combinations: Pairs of leaves selected for group conversation (example data: paired leaves’ profiles)
• Discussion topics: AI-suggested topics and user selections (example data: selected topic content)
• Group conversation logs: Multi-agent dialogue messages (example data: message content, sender, timestamps)

Stage 4: Reflective Snapshot
• Snapshots: Saved configurations of InnerPond (example data: timestamp, visual state)
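The per-participant means in Table 1 (e.g., leaf additions, M = 1.67) are simple averages over stage-level log records. As a minimal illustration of how such summary metrics could be computed, the sketch below uses hypothetical records and field names (participant, additions, deletions, turns); it is not the schema InnerPond itself uses.

```python
from statistics import mean

# Hypothetical Stage 1 log records, one per participant.
# Field names here are illustrative assumptions, not InnerPond's actual schema.
stage1_logs = [
    {"participant": "P1", "additions": 2, "deletions": 1, "turns": 5},
    {"participant": "P2", "additions": 1, "deletions": 0, "turns": 4},
    {"participant": "P3", "additions": 3, "deletions": 2, "turns": 6},
]

def summarize(logs, key):
    """Mean of a numeric log field across participants, rounded to two
    decimal places (the same form as Table 1's M values)."""
    return round(mean(rec[key] for rec in logs), 2)

print("mean additions:", summarize(stage1_logs, "additions"))
print("mean deletions:", summarize(stage1_logs, "deletions"))
print("mean turns:", summarize(stage1_logs, "turns"))
```

The same aggregation applies to the other stages: any numeric field collected per participant (edit counts, leaves engaged, dialogue turns) can be summarized this way.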
