AI Sensing and Intervention in Higher Education: Student Perceptions of Learning Impacts, Affective Responses, and Ethical Priorities
AI technologies that sense student attention and emotions to enable more personalised teaching interventions are increasingly promoted, but they raise pressing questions about student learning, well-being, and ethics. In particular, students’ perspectives on AI sensing and intervention in learning are often overlooked. We conducted an online mixed-method experiment with Australian university students (N=132), presenting video scenarios that varied by whether sensing was used (in-use vs. not-in-use), sensing modality (gaze-based attention detection vs. facial-based emotion detection), and intervention source (digital device vs. teacher). Participants also completed pairwise ranking tasks to prioritise six core ethical concerns. Findings revealed that students valued targeted intervention but responded negatively to AI monitoring, regardless of sensing modality. Students preferred system-generated hints over teacher-initiated assistance, citing concerns about learning agency and social embarrassment. Students’ ethical considerations prioritised autonomy and privacy, followed by transparency, accuracy, fairness, and learning beneficence. We advocate designing customisable, socially sensitive, non-intrusive systems that preserve student control, agency, and well-being.
💡 Research Summary
This paper investigates university students’ perceptions of AI systems that sense attention and emotions and deliver personalised learning interventions, focusing on both affective responses and ethical priorities. The authors conducted a two-stage online mixed-method study with 132 Australian university students. In Stage 1, participants viewed six video scenarios that systematically varied along three dimensions: (1) whether sensing was employed (sensing vs. no-sensing), (2) the sensing modality (gaze-based attention detection vs. facial-based emotion detection), and (3) the form of intervention (system-generated hints vs. teacher-mediated assistance). After each scenario, participants rated expected learning impact and affective reactions on five-point Likert scales and provided open-ended comments.
The quantitative and qualitative analyses converged on three key findings. First, any form of AI monitoring elicited negative affect (anxiety, discomfort, embarrassment) and lowered perceived learning benefits, regardless of whether the system used gaze or facial cues. The lack of a significant difference between the two modalities suggests that the mere presence of surveillance, rather than the specific data type, drives student unease. Second, students strongly preferred system-generated assistance over teacher-initiated help. They cited a greater sense of agency, more privacy, and reduced fear of peer judgment when hints arrived automatically, whereas teacher involvement was associated with public evaluation and social embarrassment. Third, when sensing and intervention were combined, perceived learning efficacy dropped further compared with non-sensing conditions, indicating that monitoring can undermine the motivational advantages of personalised support.
Stage 2 explored ethical value trade-offs using pairwise comparison tasks across six core concerns identified in the AIEd ethics literature: autonomy, privacy, transparency, accuracy, fairness, and learning beneficence. The rankings revealed a clear hierarchy: autonomy and privacy were the top priorities, followed by transparency, accuracy, fairness, and finally learning beneficence. Participants emphasised the need for dynamic consent, control over their data, and the ability to opt in or out of sensing at any time.
Based on these findings, the authors contribute three major insights to HCI and AIEd research. (1) They foreground affective and pedagogical dimensions that are often omitted from effectiveness‑centric evaluations, demonstrating that emotional well‑being and perceived agency are critical determinants of technology acceptance. (2) They provide empirical evidence of how students prioritise ethical values, offering a transferable methodology for value‑sensitive design in other sensitive domains. (3) They propose five concrete design recommendations: minimise or make sensing optional; enable student‑triggered, lightweight, and customisable interventions; redesign teacher‑mediated support to preserve student agency (e.g., private feedback channels); implement dynamic consent and transparent data‑use dashboards; and establish continuous model auditing to ensure accuracy and fairness.
In sum, while AI‑enabled sensing and intervention hold promise for personalised education, this study shows that without careful attention to privacy, autonomy, and affective impact, such systems risk alienating the very learners they aim to help. Human‑centred, non‑intrusive, and controllable designs are essential for aligning technological capabilities with student values and fostering sustainable adoption in higher education.