ClassAid: A Real-time Instructor-AI-Student Orchestration System for Classroom Programming Activities
Generative AI is reshaping education, but it also raises concerns about instability and overreliance. In programming classrooms, we aim to leverage its feedback capabilities while reinforcing the educator’s role in guiding student-AI interactions. We developed ClassAid, a real-time orchestration system that integrates TA Agents to provide personalized support and an AI-driven dashboard that visualizes student-AI interactions, enabling instructors to dynamically adjust TA Agent modes. Instructors can configure the Agent to provide technical feedback (direct coding solutions), heuristic feedback (hint-based guidance), automatic feedback (autonomously selecting technical or heuristic support), or silent operation (no AI support). We evaluated ClassAid along three dimensions: (1) the TA Agents’ performance, (2) feedback from 54 students and one instructor during a classroom deployment, and (3) interviews with eight educators. Results demonstrate that dynamic instructor control over AI enables effective real-time personalized feedback, and we derive design implications for integrating AI into authentic educational settings.
💡 Research Summary
ClassAid is a real‑time orchestration platform that lets instructors dynamically control AI‑driven teaching‑assistant (TA) agents during in‑class programming activities. The system consists of two core components: (1) a student‑facing TA agent that monitors each learner’s code submissions and queries, diagnoses their current metacognitive state, and selects an appropriate feedback response, and (2) an instructor dashboard that visualizes all student‑AI interactions, highlights struggling learners, and allows instant mode switching for individual students or the whole class.
Four feedback modes are supported. “Technical” mode delivers concrete code solutions; “Heuristic” mode offers high‑level hints that encourage problem‑solving; “Automatic” mode lets the agent decide between technical and heuristic feedback based on real‑time performance metrics (error frequency, progress speed, etc.); “Silent” mode disables AI assistance entirely. The agent’s decision pipeline is grounded in formative and dynamic assessment theories and follows a six‑stage process: (a) capture interaction, (b) infer metacognitive level, (c) compare current work with prior logs, (d) diagnose obstacles, (e) select the most pedagogically aligned feedback from a curated repository, and (f) deliver the response.
The dashboard aggregates these pipelines into heat‑maps, timelines, and alerts, giving instructors a “situational awareness” view that was missing from prior tools such as CodeAid, SPHERE, or VizProg. Instructors can start a session with all agents in Heuristic mode to promote independent thinking, then switch low‑performing students to Technical mode, and finally enable Automatic mode for the whole class as the activity progresses.
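The session flow described above (class-wide default with per-student exceptions) amounts to a small routing table. The sketch below is an assumption about how such orchestration could be structured; `ModeRouter` and its methods are illustrative names, not ClassAid's actual API.

```python
class ModeRouter:
    """Hypothetical dashboard-side mode assignment: one class-wide
    default plus per-student overrides set by the instructor."""

    def __init__(self, default: str = "heuristic"):
        self.default = default
        self.overrides: dict[str, str] = {}

    def set_class_mode(self, mode: str) -> None:
        # A class-wide switch also clears earlier per-student exceptions.
        self.default = mode
        self.overrides.clear()

    def set_student_mode(self, student_id: str, mode: str) -> None:
        self.overrides[student_id] = mode

    def mode_for(self, student_id: str) -> str:
        return self.overrides.get(student_id, self.default)

# The workflow from the text: start everyone in Heuristic mode,
# move a lagging student to Technical, then enable Automatic class-wide.
router = ModeRouter("heuristic")
router.set_student_mode("s07", "technical")
router.set_class_mode("automatic")
```

Keeping overrides separate from the default makes the "switch low-performing students" step a constant-time dictionary write, which matters when the instructor is reacting to dashboard alerts mid-class.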
Evaluation was conducted in three parts. (1) Feedback quality was measured on 30 beginner‑level programming tasks. Technical mode achieved the highest correctness (≈92%) but also the highest risk of over‑reliance; Heuristic mode was less accurate (≈78%) yet improved problem‑solving time and motivation; Automatic mode balanced accuracy (≈85%) and reliance (≈45%). (2) A live classroom deployment with 54 university students and one instructor demonstrated that instructors could identify lagging learners within seconds, adjust modes on the fly, and maintain a smooth instructional flow. Post‑session surveys reported a mean satisfaction of 4.3/5 for real‑time feedback, a trust rating of 3.9/5 for the AI, and low concern (2.1/5) about excessive dependence. (3) Semi‑structured interviews with eight programming educators highlighted three perceived benefits: transparency of AI behavior, ability to deliver individualized support, and preservation of instructor authority. Concerns raised included AI error handling, potential dashboard overload, and the additional cognitive load of frequent mode changes.
Key contributions are: (i) a six‑stage, theory‑driven TA‑agent framework that operationalizes instructor diagnostic reasoning; (ii) the ClassAid orchestration system that couples real‑time analytics with instructor‑controlled AI feedback; (iii) empirical evidence that dynamic instructor‑AI collaboration yields effective, personalized support without undermining the teacher’s role; and (iv) design implications for future human‑AI co‑orchestration tools in education.
Limitations include the inherent instability of large language models (occasionally generating incorrect or biased output), the dashboard’s reliance on observable metrics that may not fully capture learners’ metacognitive states, and the risk of increased instructor workload during rapid mode switching. Future work will explore automated error‑checking, richer metacognitive indicators, and adaptive alert mechanisms to reduce teacher burden.
Overall, ClassAid demonstrates that AI can move from a passive answer‑generator to an active, instructor‑supervised partner, offering a practical pathway for scaling high‑quality, real‑time feedback in large‑scale programming courses.