"Shall We Dig Deeper?": Designing and Evaluating Strategies for LLM Agents to Advance Knowledge Co-Construction in Asynchronous Online Discussions

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Asynchronous online discussions enable diverse participants to co-construct knowledge beyond individual contributions. This process ideally evolves through sequential phases, from superficial information exchange to deeper synthesis. However, many discussions stagnate in the early stages. Existing AI interventions typically target isolated phases, lacking mechanisms to progressively advance knowledge co-construction, and the impacts of different intervention styles in this context remain unclear and warrant investigation. To address these gaps, we conducted a design workshop to explore AI intervention strategies (task-oriented and/or relationship-oriented) throughout the knowledge co-construction process, and implemented them in an LLM-powered agent capable of facilitating progression while consolidating foundations at each phase. A within-subject study (N=60) involving five consecutive asynchronous discussions showed that the agent consistently promoted deeper knowledge progression, with different styles exerting distinct effects on both content and experience. These findings provide actionable guidance for designing adaptive AI agents that sustain more constructive online discussions.


💡 Research Summary

This paper tackles the persistent problem that many asynchronous online discussions—despite their potential for collective knowledge building—remain trapped in early, surface‑level stages. Existing AI interventions typically focus on a single phase, or emphasize task‑oriented guidance or relationship‑oriented support in isolation; they lack a systematic, process‑orchestrated approach that can shepherd discussions through the full trajectory of knowledge co‑construction.

The authors first adopt a refined four‑phase model of knowledge co‑construction (initiation, exploration, negotiation, co‑construction) derived from the Interaction Analysis Model and related collaborative‑knowledge frameworks. For each phase they define concrete “sufficiency criteria” (e.g., idea sharing, divergence exploration, meaning negotiation, synthesis) that can be operationalized for quantitative coding.
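The phase model above lends itself to a simple data encoding. The sketch below is purely illustrative (the names and the one-criterion-per-phase mapping are assumptions, not the authors' coding scheme): it shows how per‑phase sufficiency criteria could be operationalized so that a thread's current depth can be coded quantitatively.

```python
# Hypothetical encoding of the four-phase knowledge co-construction model.
# Each phase has sufficiency criteria; a discussion has "reached" a phase
# once that phase's criteria (and all earlier phases') are satisfied.

PHASES = ["initiation", "exploration", "negotiation", "co-construction"]

# One illustrative criterion per phase; the paper defines richer sets.
SUFFICIENCY_CRITERIA = {
    "initiation": {"idea_sharing"},
    "exploration": {"divergence_exploration"},
    "negotiation": {"meaning_negotiation"},
    "co-construction": {"synthesis"},
}

def current_phase(observed_moves: set) -> str:
    """Return the deepest phase whose criteria are met in order,
    given the set of discourse moves coded in the thread so far."""
    reached = PHASES[0]
    for phase in PHASES:
        if SUFFICIENCY_CRITERIA[phase] <= observed_moves:  # subset check
            reached = phase
        else:
            break  # phases are sequential; stop at the first unmet one
    return reached
```

Under this encoding, a thread that has only shared ideas is still in initiation, while one that has also explored divergent views is coded as having reached exploration.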

A design workshop with twelve active contributors to online knowledge forums and three AI designers generated four distinct AI intervention styles, mapped onto the Task‑Relationship (TR) framework:

  1. Telling – predominantly task‑oriented, providing clear prompts, summaries, and structural guidance.
  2. Selling – blends task and relationship cues, reframing user contributions and offering counter‑arguments to deepen discussion.
  3. Participating – relationship‑focused, the agent behaves as an equal peer, contributing knowledge alongside humans to build trust and egalitarian collaboration.
  4. Delegating – relationship‑oriented but minimally task‑supportive, encouraging users to take the lead.

These styles were encoded as phase‑specific intervention rules for a large‑language‑model (LLM) powered agent (based on GPT‑4). The agent continuously analyses incoming comments, infers the current phase and discourse moves (e.g., idea proposal, elaboration, challenge), and then generates an intervention matching the selected style. Crucially, the agent also performs “foundation reinforcement” in the exploration phase, summarising and preserving earlier contributions so that later phases can build on a stable knowledge base.
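The control loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the style prompts are paraphrased from the summary, `generate` is a placeholder standing in for the GPT‑4 call, and all identifiers are hypothetical.

```python
# Illustrative sketch of the agent's intervention pipeline: given the
# inferred phase and a configured style, compose a style-specific
# instruction, adding "foundation reinforcement" during exploration.

STYLE_PROMPTS = {
    "telling": "Summarize the thread so far and pose a structured next question.",
    "selling": "Reframe the latest contribution and offer one counter-argument.",
    "participating": "Contribute your own substantive idea as an equal peer.",
    "delegating": "Briefly encourage participants to take the lead.",
}

def generate(instruction: str, thread: list) -> str:
    # Placeholder for an LLM API call (e.g., GPT-4 with the thread as context).
    return f"[agent] {instruction}"

def intervene(style: str, thread: list, phase: str) -> str:
    """Compose a phase-aware, style-specific intervention."""
    instruction = f"Phase: {phase}. {STYLE_PROMPTS[style]}"
    if phase == "exploration":
        # Foundation reinforcement: preserve earlier contributions so
        # later phases can build on a stable knowledge base.
        instruction += " First, summarize and preserve the ideas raised so far."
    return generate(instruction, thread)
```

In the real system, `generate` would also receive the inferred discourse moves (idea proposal, elaboration, challenge) so the model can tailor its wording to the latest comments.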

To evaluate the impact of these styles, a within‑subject study recruited 60 participants who each took part in five consecutive discussion threads (six participants per thread). The conditions were the four AI styles plus a human‑only baseline, presented in counterbalanced order. Data collection comprised (a) full discussion logs, (b) phase‑level coding scores, (c) post‑task surveys measuring satisfaction, trust, perceived agency, and cognitive load, and (d) semi‑structured interviews.

Quantitative findings show that all three active styles (Telling, Selling, Participating) significantly increased the proportion of threads advancing to deeper phases compared with the baseline. The Participating style achieved the highest rate of reaching the final co‑construction phase (≈68%). The Delegating style, while not disruptive, failed to promote deeper engagement. Survey results reveal that participants rated Participating and Selling highest on perceived usefulness, trust, and overall experience; Telling was praised for efficiency but was noted to create a sense of distance between human and AI.

Qualitative analysis of interview data indicates that when the agent actively contributed arguments (Selling) or co‑authored content (Participating), human‑to‑human feedback loops intensified, leading to richer elaborations and more frequent negotiation moves. Conversely, overly directive interventions (Telling) sometimes suppressed spontaneous peer interaction, while overly passive ones (Delegating) left participants without sufficient scaffolding.

The authors discuss several design implications:

  • Phase‑adaptive interventions are essential; agents should reinforce foundations early and shift styles as the discussion matures (e.g., start with Telling, transition to Selling, culminate with Participating).
  • Balancing task and relationship dimensions yields the best outcomes; pure task‑oriented or pure relationship‑oriented approaches each have trade‑offs.
  • Transparency and control are critical; users should be aware of the agent’s role and be able to adjust its level of involvement to preserve autonomy.

In conclusion, the study provides empirical evidence that LLM‑driven agents, equipped with phase‑tailored, style‑varying interventions, can meaningfully advance the depth and quality of knowledge co‑construction in asynchronous online discussions. The findings offer concrete guidance for building adaptive AI facilitators in educational, professional, and community platforms, highlighting the importance of integrating both task guidance and relational participation to sustain constructive, collaborative knowledge building.

