Teaching AI Interactively: A Case Study in Higher Education
JENNIFER M. REDDIG, Georgia Institute of Technology, USA
SCOTT MOON, Independent Researcher, USA
KAITLYN CRUTCHER, Georgia Institute of Technology, USA
CHRISTOPHER J. MACLELLAN, Georgia Institute of Technology, USA

Introductory artificial intelligence (AI) courses present significant learning challenges due to abstract concepts, mathematical complexity, and students' diverse technical backgrounds. While active and collaborative pedagogies are often recommended, implementation can be difficult at scale due to large class sizes and the intensive design effort required of instructors. This paper presents a quasi-experimental case study examining the redesign of in-class instructional time in a university-level Introduction to Artificial Intelligence course. Inspired by CS Unplugged approaches, we redesigned the summer offering, integrating embodied, unplugged simulations, collaborative programming labs, and structured reflection to provide students with a first-person perspective on AI decision-making. We maintained identical assignments, exams, and assessments as the traditional lecture-based offering. Using course evaluation data, final grade distributions, and post-course interviews, we examined differences in student engagement, experiences, and traditional learning outcomes. Quantitative results show that students in the redesigned course reported higher attendance, stronger agreement that assessments measured their understanding, and greater overall course effectiveness, despite no significant differences in final grades or self-reported learning. Qualitative findings indicate that unplugged simulations and collaboration fostered a safe, supportive learning environment that increased engagement and confidence with AI concepts. These results highlight the importance of in-class instructional design in improving students' learning experiences without compromising rigor.
Additional Key Words and Phrases: AI Education, Higher Education, CS Unplugged

Authors' Contact Information: Jennifer M. Reddig, jreddig3@gatech.edu, Georgia Institute of Technology, Atlanta, Georgia, USA; Scott Moon, scottqmoon@gmail.com, Independent Researcher, Atlanta, Georgia, USA; Kaitlyn Crutcher, kcrutcher3@gatech.edu, Georgia Institute of Technology, Atlanta, Georgia, USA; Christopher J. MacLellan, cmaclell@gatech.edu, Georgia Institute of Technology, Atlanta, Georgia, USA.

1 Introduction

Artificial intelligence (AI) has become a core component of modern computing curricula, with introductory AI courses often serving as students' first sustained encounter with probabilistic reasoning, search, learning algorithms, and ethical considerations surrounding intelligent systems. These courses play a critical role in shaping how students understand what AI is, how it works, and whether they view the field as accessible and relevant to their goals. However, AI concepts can be conceptually dense and mathematically demanding, creating barriers that can discourage engagement and undermine students' confidence, particularly for those with less technical experience. As a result, how AI is taught has significant implications for student learning, persistence, and identity formation within the field.

Active learning provides a promising pathway for enhancing student engagement and understanding. In higher education AI courses, active learning most often takes the form of project-, problem-, and game-based learning, where students develop substantial AI artifacts to apply course concepts in realistic contexts [7, 23, 25, 37]. Studies of game- and simulation-based AI instruction demonstrate that students can become highly engaged while they build mental models of AI algorithms [13]. However, these approaches primarily emphasize out-of-class project work and final deliverables. Relatively little work has examined how changes to in-class instruction can meaningfully shape students' experiences in AI courses. In-class instruction is where students first encounter new ideas, ask questions, and decide whether AI feels accessible and is something they can see themselves pursuing further. CS Unplugged is a promising approach for making computing concepts engaging and accessible. Interactive, hands-on, and unplugged activities can act as complementary strategies for supporting conceptual grounding, but their role in shaping student engagement in higher-education AI courses is underexamined.
In this paper, we present a quasi-experimental mixed-methods case study of a redesigned introductory AI course that emphasized interactive, unplugged, and guided in-class activities, while keeping course content, assignments, and assessments consistent with a lecture-based baseline offering. The redesign focused on changing how students engaged with AI concepts during class time. We examine how this redesign influenced students' course experiences, engagement, and confidence, alongside traditional measures of academic performance. Specifically, we seek to answer the following research questions:

RQ1: What differences are observed in students' course evaluations and academic outcomes following an active redesign of a lecture-based AI course?
RQ2: What elements of the redesigned course support student engagement?
RQ3: How did the redesign contribute to students' growth in confidence with AI concepts and skills?

Using course evaluation data and student interviews, we find that while grades and self-reported learning outcomes did not differ significantly between the two course offerings, students in the redesigned course reported higher attendance, greater perceived course effectiveness, and increased confidence in their ability to understand and work with AI concepts. Qualitative findings illuminate how interactive activities, a supportive learning environment, and a visible sense of progress contributed to these outcomes. This work contributes empirical evidence that changing how students engage with AI concepts during class time can meaningfully improve their perceptions of out-of-class assignments, making coursework feel more purposeful and aligned with learning goals. This improved alignment supported gains in confidence, enabling students to see themselves as capable of understanding and working with AI techniques.
By prioritizing the student experience, this study offers design insights for instructors seeking to make technically demanding AI content more accessible and engaging without sacrificing academic rigor.

2 Background

Artificial intelligence is a challenging and technically demanding subject in higher education, due to its reliance on advanced mathematics, algorithmic reasoning, and programming skills. Introductory AI and machine learning courses typically require substantial prior preparation in areas such as discrete mathematics, calculus, linear algebra, and probability, creating significant cognitive demands even for students with formal prerequisites [26]. Students consistently identify mathematical concepts as primary sources of difficulty, alongside challenges in translating theory into implementation, and describe these courses as overwhelming and time-intensive across levels of prior experience [32]. These technical demands are further compounded by math anxiety, which has been shown to interfere with working memory and problem-solving ability, thereby exacerbating difficulty in domains like AI that require sustained multi-step reasoning [2]. These challenges can negatively impact students' self-efficacy and persistence, particularly when students lack confidence in their mathematical or technical abilities [1]. Anxiety and self-efficacy are closely linked, and increased experience in computing is associated with reduced anxiety and greater confidence over time [8]. The learning environment can also strongly shape student engagement, persistence, and identity. Large-scale AI courses delivered in lecture-based formats limit individualized support and foster competitive classroom climates, which can discourage collaboration and make students hesitant to ask questions or seek help [3].
Students may be reluctant to seek help for fear of appearing less capable, particularly in introductory courses where perceptions of difficulty and belonging are still forming. Anxiety and self-concept shape how students interpret struggle and whether they view difficulty as a signal of growth or as evidence that they do not belong [24]. Students' decisions to drop courses are rarely driven by ability alone, but rather by an accumulation of challenges related to confidence, motivation, comfort level, and perceived fit within computing [16, 28]. A student's vision of themselves as a legitimate participant in computing unfolds through classroom interactions and instructional practices rather than performance alone [27].

Active learning approaches support engagement and conceptual understanding, particularly for abstract and technically challenging topics like computer science [11, 12]. One well-established approach is CS Unplugged, which uses technology-free, collaborative activities to make abstract computing concepts concrete through physical and embodied experiences [5]. Unplugged activities can lower barriers to entry, increase motivation, and support conceptual understanding, especially for students with limited prior technical experience [4, 14]. Recent research has extended unplugged approaches to AI education, demonstrating their use for introducing concepts such as agent behavior, semantic networks, and facial recognition through embodied analogies and interactive simulations [18-20]. Students and instructors naturally rely on embodied metaphors and gesture when explaining core computing concepts, suggesting that physical interaction can support how these ideas are conceptualized and communicated [22, 34]. Embodied learning environments can enhance attention, memory, and conceptual understanding by making intermediate reasoning visible and reducing cognitive load [36].
Importantly, unplugged activities are most effective when integrated into instructional sequences that transition students from conceptual exploration to guided programming practice [21]. In this work, we draw on these principles to evaluate unplugged and hands-on activities that we designed to support engagement while explicitly bridging embodied understanding to code-based implementation.

3 Methodology

This study is an exploratory pilot focused on understanding students' experiences with embodied, unplugged activities teaching artificial intelligence in higher education. The course, Introduction to Artificial Intelligence, is a general survey course of AI techniques and has four main topics: Search, Reasoning with Uncertainty, Probabilistic Reasoning through Time, and Machine Learning. The Spring offering of Introduction to Artificial Intelligence typically enrolls approximately 300 students in each lecture section and is taught over a 16-week semester. During the assigned class time, students receive traditional lecture-based instruction, and the instructional team hosts drop-in office hours during the week for students to ask questions and get help on assignments.

Fig. 1. Quasi-experimental study design comparing the Spring 2025 (control) and Summer 2025 (intervention) offerings of the undergraduate artificial intelligence course. Both semesters used identical units, programming projects, comparable exams, and student surveys. The Summer offering replaced traditional lecture-based instruction with active, unplugged, and collaborative instruction, and interviews were conducted following the Summer semester.

Table 1. Comparison of the Spring and Summer offerings of the course used in the quasi-experimental study. Across semesters, course topics, projects, and exams remained the same.
The primary differences were enrollment size, semester length, and the use of active, unplugged instruction and in-class TA support during the Summer offering.

Feature              | Spring        | Summer
Enrollment           | ~300          | ~50
Semester length      | 16 weeks      | 12 weeks
In-class instruction | Lecture-based | Active & unplugged
Projects & exams     | Same          | Same
TA involvement       | Office hours  | Office hours + in-class support
Exit tickets         | No            | Yes

There are four in-depth coding projects used as a summative assessment for each main topic. Students program in Python using Jupyter notebooks to implement a selection of fundamental algorithms. After each coding project, students reflect on the algorithm's use, benefits, and potential issues through Socratic Mind [15]. The course has a midterm and final exam, with new questions written by the instructional team every semester.

We used a quasi-experimental between-semester design comparing the traditional Spring offering with a redesigned Summer offering. The goal was to isolate the impact of redesigned in-class instruction. Across semesters, the course topics, programming projects, exams, and out-of-class expectations remained the same. The primary difference was the use of active, embodied, and collaborative instructional methods during class time in the Summer offering. The Summer offering was taught by an instructor who had served as a teaching assistant for the Spring semester and was therefore familiar with the course structure, materials, and assessments. A new teaching assistant joined the instructional team during the Summer offering. Figure 1 shows the quasi-experimental design comparing Spring and Summer semesters. Table 1 describes the differences between the two course offerings.

For our intervention, we targeted the 2025 summer semester. The summer semester typically enrolls approximately 50 students and is taught in 12 weeks instead of 16. Though there are fewer students in the summer session, the ratio of instructional team members to students remains the same.
In addition to hosting drop-in office hours, the TAs attended the assigned lecture section to assist with the new instructional methods.

3.1 Instructional Design

During the summer semester, we modified the in-class lecture time to focus on a wide variety of active and interactive instructional methods. To introduce each topic, students participated in an unplugged simulation like the ones described by Reddig et al. [30]. The simulation served as an accessible entry point to AI decision-making and its mathematical formalization. Students then constructed the simulation in a Jupyter notebook, scaffolding the Python programming skills that would be needed for the summative coding projects. For more in-depth exploration of the intricacies of each algorithm, the instructional team developed collaborative coding labs that allowed students to experiment with parameters and train an AI agent. At the end of each class session, students completed an exit ticket in the form of a One Minute Paper (OMP) [35]. Students were asked to describe what they now know about the day's topic, how what they did during class helped them learn each topic, and what questions they have now (points of confusion, clarification requests, musings about real-world applications, connections to other topics, etc.). The OMP not only allowed students to reflect on their own learning and meaning-making during class, but also helped the instructional team be aware of what the students were taking away from each session. The instructional team used the OMP submissions to revise the next session's lesson plan to match the students' actual progression through the course content.

As an example, consider the lesson on reinforcement learning.
Students had previously learned value and policy iteration in a prior session, so they were expected to have knowledge of the Bellman equation and Q-values. The traditional lecture-based lesson plan involved explaining model-free and temporal difference learning first as context for Q-learning, followed by the algorithm and using the Bellman equation to estimate the utility of each action. The instructor then showed several videos and explained the role of each parameter in the algorithm and how, through repetition and exploration, Q-learning converges to an optimal policy.

In the redesigned version, students still came into class with prior knowledge of value and policy iteration. Class started with students playing an online first-person maze game that required them to reset when they ran into a terminal state, thereby beginning the class session with the idea of learning through repetition and converging on an optimal policy. Then the instructor provided a short lesson moving the students from value iteration with a global perspective to Q-learning with a first-person perspective. Students then participated in an unplugged simulation in which each student represented one exploration episode through a hidden maze, relying on information left by previous students to either explore new states or exploit current estimates. In the simulation, they made Q-value updates using a simplified Bellman equation to make the mental computation easier. This embodied activity is analogous to showing a video in the traditional session, except students have an active role in thinking through how their choices and computations affect the overall success of the algorithm (see Figure 2). Having completed a practical example of Q-learning, the instructor then introduced the full update equation and explored how the algorithm handles stochasticity in a plugged simulation.
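To make the mechanics concrete, the kind of simplified tabular Q-update the students performed can be sketched in plain Python. This is an illustrative reconstruction, not the course's actual materials: the one-dimensional corridor maze, reward value, and parameter settings below are all hypothetical.

```python
import random

# Hypothetical 1-D corridor "maze": states 0..4, where state 4 is terminal
# and yields a reward of +1. Each simulated "student" episode updates a
# shared Q-table, mirroring the collaborative unplugged activity.
ACTIONS = ["left", "right"]
TERMINAL, REWARD = 4, 1.0

def step(state, action):
    """Deterministic transition for this sketch: move one cell, clamped to the corridor."""
    nxt = max(state - 1, 0) if action == "left" else min(state + 1, TERMINAL)
    return nxt, (REWARD if nxt == TERMINAL else 0.0)

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Simplified Bellman update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    best_next = 0.0 if s_next == TERMINAL else max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def run_episode(Q, epsilon=0.2):
    """One first-person walk through the maze: explore with probability epsilon,
    otherwise exploit the estimates left behind by earlier episodes."""
    s = 0
    while s != TERMINAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        q_update(Q, s, a, r, s_next)
        s = s_next

random.seed(0)
Q = {(s, a): 0.0 for s in range(TERMINAL) for a in ACTIONS}
for _ in range(500):  # 500 episodes, i.e., 500 "students" walking the maze
    run_episode(Q)

# With repetition, the greedy policy settles on moving right toward the goal.
greedy_policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(TERMINAL)}
print(greedy_policy)
```

Early episodes wander because the Q-table is empty, just as the first students through the hidden maze must explore; later episodes exploit the accumulated estimates, which is the explore/exploit trade-off the simulation makes tangible.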
Students recreated the algorithm with a visualization of the training sequence and could even play through an episode to experience how the GridWorld slip probabilities impacted the optimal action, as well as how the value of each parameter, like the epsilon-decay rate and the future reward discount, affected Q-value population and convergence. After imparting a practical sense of how and why the algorithm works, the instructor introduced the concepts of model-free and temporal difference learning, working through the theoretical details and using Q-learning as an example. The lesson plan used the unplugged simulation as an accessible entry point to the concept and grounded the theoretical and technical details in a concrete example.

Fig. 2. In the unplugged simulation, students collaboratively build the Q-table. In the left grid, students have no information about the utility of each action choice, so they need to prioritize exploration. In the right grid, the actions for moving up or down have a high utility, so students want to exploit the existing information.

To compare the impact of the new instructional approach, we used the same summative coding assessments and exam structure as the Spring semester. All out-of-class work remained the same. Class sessions were held twice a week for one hour and 45 minutes. Each member of the instructional team hosted two drop-in office hour sessions each week.

Table 2. Two-Day Lesson Structure

Day 1:
- Warm-up prompt to discuss with a neighbor (~2 min)
- Unplugged simulation activity (~30 min)
- Activity de-brief and reflection with small groups, then full class (~10 min)
- Connect activity to mathematical formalization (~10 min)
- Plugged reconstruction of the simulation (~30 min)
- Reflect on how the AI performed compared to unplugged simulation (~10 min)
- One-Minute Paper exit ticket (~2 min)

Day 2:
- Warm-up sample problem to activate prior knowledge (~5 min)
- Review previous session (~10 min)
- Generalized AI technique mini-lecture (~20 min)
- Collaborative programming lab (~45 min)
- Reflect on applicability of AI technique (~10 min)
- One-Minute Paper exit ticket (~2 min)

Table 2 shows the structure for a typical week's lesson plan. The exams were given in weeks 6 and 12. Students were given approximately two weeks to work on each programming assignment, which were due in weeks 3, 5, 8, and 10.

3.2 Course Evaluation

After completion of the course, we invited students to complete the institution's end-of-course survey so we could compare previous semesters to the redesigned course. The survey asked students to self-report their preparedness to take the course at the start, how well the assignments measured their knowledge, the percentage of assignments completed, and the percentage of classes attended. It also asked students to rate the amount they learned from the course and the overall course effectiveness. Each survey item is on a 5-point Likert scale, except the two percentage questions, which are on a 6-point Likert scale. We compared the scores from the Spring and Summer semesters using an ordinal regression. We also conducted a chi-square test of independence to examine whether the final grade distributions differed between terms.

In addition to survey and grade data, we invited the students to participate in a retrospective semi-structured interview about their experience learning artificial intelligence.
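As a sketch of the grade-distribution comparison, a chi-square test of independence can be run with SciPy. The counts below are hypothetical placeholders sized roughly like the ~300- and ~50-student terms, not the study's data; only the shape of the analysis is meant to match the description above.

```python
from scipy.stats import chi2_contingency

# Hypothetical final-grade counts (A, B, C, other) for the two terms --
# placeholder values only, for illustration.
grade_table = [
    [150, 90, 40, 20],  # Spring
    [ 27, 14,  6,  3],  # Summer
]

chi2, p, dof, expected = chi2_contingency(grade_table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# When expected cell counts are low (likely in the smaller Summer term),
# adjacent categories can be collapsed before re-running the test.
collapsed = [[150, 90, 60], [27, 14, 9]]
chi2_c, p_c, dof_c, _ = chi2_contingency(collapsed)
print(f"collapsed: chi2({dof_c}) = {chi2_c:.2f}, p = {p_c:.3f}")
```

The `expected` array returned by `chi2_contingency` is what flags the low-cell-count problem that motivates collapsing categories in the first place.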
We conducted only post-course interviews, after all graded materials were submitted and final scores released, so that student responses would not be influenced by their desire to achieve a particular grade. This approach allowed for in-depth reflection on the perceived impact of the instructional design and offered insight into how students engaged with and interpreted their learning experiences. As a pilot implementation, this study aims to inform future iterations of the design by analyzing learner perspectives. This study was reviewed and approved by the authors' Institutional Review Board. All participants provided informed consent prior to participation.

We conducted interviews with eleven students (4 female, 7 male), asking them to recall and reflect on their time in class. We asked questions like "How would you describe what artificial intelligence is, now that you've completed the course?", "Thinking back, was there an activity or assignment that was particularly memorable or helpful?", "How did the activities affect your engagement with the material?", and "If you could suggest one improvement for the way this course is taught in the future, what would it be?". The goal of the interviews was to have students reflect on their outcomes and experiences, with the intent of using their feedback to improve future iterations of the course. We conducted the interviews via Zoom and recorded them after obtaining verbal consent. After transcribing the interviews manually from the recordings, we cleaned and anonymized each transcript before analysis and subsequently deleted the recordings.

Fig. 3. Qualitative Data Analysis Process

We used a two-cycle coding process to analyze student interview transcripts to better understand the student experience, detailed in Figure 3 [31]. In the first cycle, the research team built a preliminary codebook based on RQ2 and RQ3.
The initial codebook included 12 codes like 'Social Interaction' and 'In-class Activity' grouped into 5 categories. Two independent coders then coded 3 of the 11 transcripts individually with the preliminary deductive codebook. After reviewing the results, several additional codes emerged inductively about the classroom environment, like 'Instructor Behaviors' and 'Accessible'. We also revised the codebook to merge the codes 'Active Learning' and 'Passive Learning' into one code, 'Comparing Learning Environments', since students did not mention a passive learning activity unless they were comparing it to the active learning strategies in the course. The revised codebook included 16 codes grouped into 5 categories. With the updated codebook, we coded the remaining eight transcripts. We then reviewed the coded data and applied an inductive process to identify patterns and themes across codes and categories [6].

Table 3. Ordinal logistic regression coefficients estimating the effect of semester (Spring vs. Summer) on student survey ratings of course experiences. Coefficients are log-odds estimates from proportional odds models. Positive values indicate higher odds of selecting higher response categories for Summer relative to Spring. Significant differences exist between the two semesters for attendance, measuring knowledge, and overall effectiveness.

Survey Item                          | Coefficient | Std. Error | Wald z | p-value
Student preparedness to take subject | 0.375       | 0.569      | 0.659  | .510
Percentage of homework completed     | 0.680       | 1.091      | 0.624  | .533
Amount learned in course             | 0.774       | 0.556      | 1.391  | .164
Percentage of classes attended       | 1.924       | 0.582      | 3.308  | .001*
Assignments measured knowledge       | 1.440       | 0.601      | 2.398  | .016*
Overall course effectiveness         | 1.217       | 0.622      | 1.957  | .050*
We identified four themes within the data that students attributed to their engagement with the course: Actively Constructing Knowledge, a Safe and Supportive Learning Environment, a Sense of Progress, and Increased Confidence.

4 Results

4.1 Quantitative Data

We conducted an ordinal regression to examine whether the different instructional strategies predicted student ratings on the end-of-semester course evaluation survey. Semester was a categorical predictor, with Spring as the reference category since it represents the typical instructional strategy. Semester was the only predictor in the ordinal regression, and an ordinal model was created for each survey item to examine the impact of Semester. Differences in student scores are represented in Figure 4, and regression coefficients for all items are listed in Table 3.

Responses to three of the six survey items did not significantly differ between the two semesters. Students reported that they were comparably prepared to take the class, Wald z = 0.66, p = .51. Students also completed a similar percentage of all homework assignments, Wald z = 0.62, p = .53. Finally, at the end of the course, students reported learning a comparable amount in both semesters, Wald z = 1.39, p = .16. The instructional differences between the two semesters did not significantly predict student preparedness, homework completed, or amount learned.

There were statistically significant differences between the two course sessions on the remaining three questions. When asked what percentage of classes they were physically present for, student answers were significantly different, Wald z = 3.31, p < .001. Students in the summer session had 6.8 times higher odds of reporting higher attendance than students in the spring session (β = 1.92, SE = 0.58). We also see significant differences in how students perceived the assignments, Wald z = 2.40, p = .016.
Though students in both semesters completed the same projects and exams, students in the summer semester were approximately four times more likely to favorably report the degree to which the exams, homework, and other assignments measured their knowledge and understanding (β = 1.44, SE = 0.60). Finally, there is also a significant difference in the student ratings for overall course effectiveness, Wald z = 1.96, p = .05. Students in the summer session had 3.4 times higher odds of agreeing that the course was overall effective (β = 1.22, SE = 0.62).

A series of chi-square tests of independence were conducted to examine whether final grade distributions differed between Spring and Summer terms. Across multiple specifications, including collapsed grade categories to address low expected cell counts, no statistically significant association was observed between academic term and final grade distribution, χ²(5) = 7.53, p = .34. A Fisher's exact test conducted as a sensitivity analysis yielded consistent results (p = .286). These findings indicate that final grade outcomes were broadly comparable across terms despite differences in instructional format. We note that the analysis was based on aggregated grade distributions rather than individual-level performance data, and therefore conclusions are limited to overall outcome patterns. In addition, the class sizes between the two semesters differed greatly, and there may not have been enough data to reveal a significant effect.

Fig. 4. Stacked proportional bar charts showing student responses to survey items on assignment effectiveness, perceived learning, course effectiveness, preparedness, attendance, and homework completion, comparing Spring and Summer semesters. Statistically significant responses are marked with an asterisk (*).
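The odds ratios quoted above follow directly from exponentiating the log-odds coefficients in Table 3 (values rounded here); a quick check:

```python
import math

# Rounded log-odds coefficients from Table 3 (Summer relative to Spring).
coefficients = {
    "Percentage of classes attended": 1.924,
    "Assignments measured knowledge": 1.440,
    "Overall course effectiveness":   1.217,
}

# exp(beta) converts a proportional-odds coefficient into an odds ratio:
# the multiplicative change in the odds of selecting a higher response
# category in the Summer term relative to Spring.
odds_ratios = {item: math.exp(beta) for item, beta in coefficients.items()}
for item, oratio in odds_ratios.items():
    print(f"{item}: OR = {oratio:.2f}")
```

To one decimal place these recover the 6.8, roughly 4, and 3.4 times higher odds reported in the results.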
4.2 Qualitative Data

The first theme derived from student interviews was Actively Constructing Knowledge. Students enjoyed all the ways the class sessions helped them engage with the content in a hands-on manner. They appreciated the unplugged simulation activities, and often described their favorite activity as an example of a time they felt engaged. The activities allowed students to physically visualize the algorithm and play through small details in a hands-on manner ("The in class activities really broke concepts down to their core components and allowed us to be the components and perform actions, the way that they would be performed by an AI agent."). While learning the mathematics behind an AI algorithm can be challenging, the in-class instruction helped break down the challenge into manageable pieces. The coding labs helped students transition from conceptual understanding to practical implementation, in a low-stakes way that prepared them to complete the graded programming assignments. Students described learning as an active process of building understanding through doing rather than passively receiving information. Being physically and cognitively involved in class helped students engage deeply with advanced content.

Students also repeatedly commented on all the ways the instructional team created a Safe, Supportive Learning Environment. The unplugged activities designed by the instructional team provided a step-by-step breakdown of each algorithm, which clarified conceptual misunderstandings and helped students understand what their code implementation should do line-by-line. Collaboration during the in-class activities helped students build connections, make friends, and find someone to share with and learn together ("I was able to have chances to talk to the people around me and they were able to help me understand concepts I didn't understand and I was able to share things with them").
The constant presence of the instructor and TAs made it very easy to receive timely, personal support. The instructional team valued questions and encouraged students to reach out for help, and followed up by providing detailed feedback and explanations. Students also found attending class enjoyable, perhaps more so because the classroom culture welcomed questions and acknowledged confusion as part of the learning process ("I actually enjoyed the content because I felt like I was doing something fun with each activity and I also felt it was a more safe environment for me to be able to ask questions."). This safe and supportive atmosphere helped students engage deeply with complex content while feeling comfortable asking for help and learning through uncertainty.

Students wanted to put in effort every class because they felt a Sense of Progress. They felt that attending class was worthwhile because they were going to get something valuable and meaningful each day ("Seeing something come to life and with an actual lab or some type of activity that made you feel you weren't just learning it, you were using it and it had a purpose... It made me feel the time coming to class was worth it, which is the only reason I attended so many classes versus staying at home."). Students also saw content from other courses, like Data Structures and Probability, applied in new ways. Connecting AI concepts to other classes made both courses feel more practical. This cumulative sense of progress reinforced the value of attending class and motivated sustained engagement throughout the semester.

Finally, one of the biggest takeaways for students was their Increased Confidence. Students tackled many challenging concepts: formulas with many variables, plenty of math operations, learning to code in Python for the first time, and demystifying black-box AI methods like neural networks.
Breaking each concept down into manageable pieces through the unplugged simulations and guided programming labs made the content accessible with an easy entry point, so students could see that AI methods are attainable ("And I guess like how, I don't want to say how simple it is because it really isn't that simple, but it's a lot less scary than I thought it was."). Discussing, using, and coding AI every day in class helped students build their confidence as AI engineers, especially those who had attempted and dropped the class in a prior semester. Students' attitudes shifted from thinking AI is inaccessible and complicated to visualizing themselves as AI engineers one day ("[I used to say] 'I don't think I would ever become an engineer in this. I don't think I have the skill set. I don't think this is ever doable'. And it got pulled down to earth really quickly and I realized that I want to do it now."). By engaging daily with challenging concepts in a supportive, hands-on environment, students gained confidence in both their technical abilities and their capacity to learn difficult material.

5 Discussion

5.1 What differences are observed in students' course evaluations and academic outcomes following the redesign?

Students reported similar levels of preparedness and work completed across semesters. Our intervention course yielded higher reported attendance. In the interviews, students repeatedly described class as "worth attending" because they made visible progress every day. Doing activities that applied AI concepts conceptually and programmatically kept students engaged in a hands-on manner and sustained that engagement throughout the semester. Making AI algorithms observable and manipulable can increase students' willingness to engage consistently with challenging material.
Of particular interest is our result that students were more likely to report that the assignments measured their knowledge, despite the fact that the assignments were unchanged between semesters. Paired with the interviews, we speculate that because students were more engaged and felt the challenge was manageable, they were more willing to see the assignments as an accurate reflection of their progress. Students also remarked that the in-class coding labs helped them prepare to tackle the more involved summative coding projects. Overhauling a course does not require changing every assignment to improve the student experience. Making class time feel more meaningful can have cascading effects on other parts of the course. Small, evidence-based changes to everyday classroom practices can produce large improvements in student motivation, confidence, and learning without requiring major course redesign [17].

These results point to a shift in how students engaged with the course rather than a change in traditional outcome measures. The redesign led to meaningful increases in attendance, engagement, and confidence while maintaining comparable performance on exams and projects. This pattern suggests that the intervention changed how students approached challenging material, not the level of challenge itself. Students were more willing to ask questions, collaborate with peers, and take risks during class. The unplugged and in-class activities acted as a conceptual scaffold, helping students break down complex AI ideas into manageable steps and build confidence before tackling larger assessments. Fink's Taxonomy of Significant Learning [10] emphasizes not only foundational knowledge and application, but also integration, learning how to learn, and the human dimension.
The collaborative and interactive structure of the redesigned course encouraged students to connect conceptual, mathematical, and programming perspectives, reflect on their own problem-solving processes, and learn from their peers. These experiences supported students in developing confidence, persistence, and a sense of efficacy when approaching unfamiliar AI problems. By making AI algorithms observable and manipulable in collaborative settings, the redesign fostered deeper engagement and greater confidence in working with rigorous material while preserving academic outcomes.

5.2 What elements of the redesigned course support student engagement?

Students described the in-class activities and labs as making class time engaging and worthwhile. They were visualizing, interacting with, and applying the content instead of passively listening. They were active participants and co-collaborators in their learning, required to reflect on how each activity related to AI techniques. This engagement was supported by how new concepts were introduced and reinforced throughout each lesson cycle. Rather than relying on extended in-class lectures, students encountered key ideas through a sequence of short mini-lectures, unplugged simulations, collaborative coding labs, and iterative reflection. This structure allowed students to arrive at activities with sufficient conceptual grounding while continuing to refine and formalize their understanding during and after the hands-on work. Despite similar preparedness and homework completion across semesters, students were more likely to attend class when instructional time was perceived as valuable and interactive. Students also emphasized the accessibility of the instructional team. The instructional team encouraged questions, normalized struggle, and reframed mistakes as safe learning experiences.
This sense of safety lowered barriers to help-seeking and allowed students to remain engaged when they encountered difficulty. As a result, students sustained engagement beyond the point where they would "smash [their] head into the computer and hope it worked". Rather than disengaging in response to frustration, students described persisting through challenges with the support of peers and the instructional team.

Students' sense of progress also motivated sustained engagement. Students felt engaged when learning was purposeful for immediate tasks, rather than only deferred to future assessments. Students were more likely to report that those assessments measured their knowledge. A strong alignment between instruction and assessment may have reinforced students' sense of progress. Students could see how the daily activities contributed to the larger learning goals and for-credit assessments.

5.3 How did the redesign contribute to students' growth in confidence with AI concepts and skills?

Students come into the course with a limited or opaque view of AI. They use words like 'black box' and 'magic', and talk about the field's reputation for being challenging. Through the first-person active learning activities, complex topics begin to feel 'solidified' or 'brought down to earth'. With the topic 'broken down', each individual piece feels surmountable, and the constant support from the instructional team and fellow students in an environment of fun kept the momentum going. The sense of continual progress led to higher confidence and motivation. Students also reported feeling more comfortable discussing and explaining AI methods. This takeaway was especially important for students with limited prior programming experience, students who had not taken a CS class before, and students who had previously withdrawn from the course.
Students who initially doubted their ability to succeed and learn AI were able to not only acquire new skills, but also improve their self-efficacy. They changed their self-image into someone who could have meaningful conversations with AI professionals, even expressing a desire to continue their AI studies beyond what they had previously imagined.

5.4 Implications for Instructional Design

This course redesign demonstrates the importance of making the time students spend in class meaningful. Students need to find value in attending lecture sessions beyond earning an attendance grade or passively receiving content to keep them coming to every session. Using a variety of active learning techniques and centering students during discussions helps keep students engaged through the session and the semester ("It wasn't just a lecture where I just had to sit and listen the entire time..., even though the class lecture was longer, almost two hours, I was able to just focus the entire two hours, which I never have been before in like other classes."). Even though assignments, exams, and out-of-class expectations were unchanged, students were significantly more likely to attend class when in-class activities provided tangible progress and immediate payoff. This pattern aligns with expectancy-value theory, which suggests that students are more motivated to engage in tasks when they perceive them as both valuable and directly connected to their success [38, 39].

Fig. 5. Conceptual model of the instructional design. Complex AI topics are decomposed through active learning activities and supported by social interaction, enabling repeated experiences of progress that build student confidence.
Attendance should not be motivated by course policies or incentives, but by whether instructional time offers learning experiences that students perceive as distinct from what they could achieve on their own. It also appears that tightly coupling in-class and out-of-class activities amplifies the meaning of both. Unplugged simulations, collaborative coding labs, and structured reflection were explicitly connected to the summative programming assignments students completed outside of class. As a result, students reported that the assignments better measured their understanding than the delivery of the exact same assignments in the prior semester. The alignment helped students see coursework as a learning trajectory rather than a collection of disconnected tasks, reinforcing a sense of progress and purpose throughout the semester. For instructors looking to redesign their course, consider how to bring in-class and out-of-class activities closer together to strengthen the instructional through-line ("I think when the activity was tightly coupled with the topic was when it was the most engaging."). Changing how students learn can be just as important as what they produce.

Proactive instructional behaviors can lower the barrier to help-seeking. When students have larger challenges with learning content or completing assignments, it can be vital to demonstrate that any question is welcome by regularly soliciting questions, checking in with students, and normalizing points of friction. By initiating support rather than waiting for students to self-identify as struggling, the instructional team signaled approachability and care, helping students feel comfortable asking for help before difficulties became overwhelming.
By being proactive, we were able to build trust and sustained engagement, particularly in a technically demanding course where hesitation to seek help can lead to frustration, disengagement, and leaving the subject ("I was like I'm not doing [an AI concentration] anymore after this class, but I'm glad I stuck with it because I am 100% more comfortable with AI and I really want to continue it now."). All of these elements worked together to create a productive learning environment. Figure 5 illustrates how the facets of the learning experience interact to support student success. We took a complex concept in AI, filtered it through several active learning activities, and broke it down into manageable components. We supported each component through social interaction and collaboration, providing scaffolding as students worked through conceptual and technical challenges. Students experienced repeated successes in completing these smaller steps, which created a visible sense of progress and ultimately contributed to their confidence in their own abilities.

5.5 Limitations

This study has several limitations that should be considered when interpreting the results. First, the study used a quasi-experimental between-semester design rather than a randomized controlled design. The Spring and Summer offerings differed not only in instructional approach, but also in enrollment size and semester length. The Summer offering enrolled approximately 50 students over 12 weeks, compared to approximately 300 students over a 16-week Spring semester. These structural differences may have influenced student engagement, classroom dynamics, and help-seeking behaviors independent of the instructional redesign. Second, the redesigned course was taught by an instructor who had previously served as a teaching assistant for the Spring offering. Differences in instructor experience, teaching style, or rapport with students may have contributed to the observed outcomes.
Although the course materials, projects, and content were held constant across semesters, instructor effects cannot be fully separated from the impact of the instructional redesign. Third, several data sources rely on self-reported measures, including course evaluations and retrospective interviews. Self-reported data are subject to response bias, recall bias, and social desirability effects [29]. While interviews provided rich insight into student experiences, only eleven students participated, limiting the breadth of perspectives captured. Fourth, this work represents a single-course case study at one institution. The findings may not generalize to other institutional contexts, student populations, or AI course structures. In particular, the redesigned course benefited from strong teaching assistant involvement during class time, which may not be available in all instructional settings. Finally, this study should be understood as a pilot implementation intended to inform future iterations and larger-scale investigations. Future work should examine replication across multiple semesters, instructors, and institutions, and explore how this instructional model can be adapted for larger enrollments.

5.6 Scaling Up

This work raises natural questions about how the new instructional methods could be applied to larger enrollments. The intervention was implemented in a summer offering of approximately 50 students, while the traditional lecture-based course typically enrolls approximately 300 students per section. Scaling this model therefore requires careful consideration of instructional staffing, classroom logistics, and activity design. One important factor in the redesigned course was the active presence of teaching assistants during class sessions. Teaching assistants provided real-time support, answered questions, and helped facilitate collaborative activities.
We believe that with greater involvement from teaching assistants, the active and student-centered instruction described in this paper can scale to the larger Spring sessions. Prior work [9, 33] provides an example of how TAs can be trained and utilized to scale up collaborative learning in large classes. With appropriate training and clearly defined facilitation roles, TAs can support subgroups of students in the collaborative, unplugged simulations and surface group misconceptions and areas of confusion for the instructor to address more broadly. Coordinating classroom management between TAs and the instructor would allow each student to receive individualized attention from at least one member of the instructional team and allow the instructor to concentrate in-class support where it is most needed. Short mini-lectures combined with structured group tasks could replace extended lecture segments without requiring a complete overhaul of course materials or assessments. Many of the unplugged simulations and collaborative programming labs were intentionally designed to be flexible in the number of students per group. In a large-enrollment setting, modifying the group size and using teaching assistants to organize and facilitate peer collaboration can help extend the instructional team's reach and make learning feel personal and individual. The use of frequent formative feedback, such as One-Minute Papers (OMPs), may become even more valuable at scale. Aggregated student reflections can provide instructors with rapid insight into common misconceptions and guide targeted adjustments to subsequent class sessions. Reshaping feedback mechanisms to better suit aggregation, like surveys, polls, and quizzes, would make it easier for instructors to respond to the student experience. Alternatively, using TAs to collect, aggregate, and interpret OMPs would let the instructional team receive qualitative responses about student progress.
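To make the aggregation idea concrete, the sketch below shows one minimal way a TA might tally One-Minute Paper responses against a list of course topics to surface the most common points of confusion. Everything here (the topic list, the sample responses, and the `summarize_omps` helper) is hypothetical illustration, not the course's actual tooling.

```python
# Sketch (hypothetical, for illustration): aggregating One-Minute Paper
# free-text responses so recurring "muddiest points" surface quickly.
from collections import Counter

def summarize_omps(responses, topics):
    """Count how often each course topic is mentioned across responses."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for topic in topics:
            if topic in lowered:
                counts[topic] += 1
    return counts.most_common()

responses = [
    "Still confused about backpropagation and the chain rule.",
    "Minimax made sense; backpropagation did not.",
    "Unsure how alpha-beta pruning changes the search order.",
]
topics = ["backpropagation", "minimax", "alpha-beta", "bayes"]
print(summarize_omps(responses, topics))
# Topics mentioned most often rise to the top, flagging them for review.
```

A simple keyword tally like this scales to hundreds of responses; richer approaches (clustering or LLM-assisted summarization) could substitute without changing the workflow the text describes.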
5.7 Lessons Learned

As a pilot, this course redesign provides insight into how future iterations could be strengthened.

5.7.1 Tighten simulation-to-formalization alignment. The most successful unplugged simulations closely mirrored the structure of the underlying mathematics. When the mechanics of the simulation reflected the actual update equations or algorithmic steps, students were able to transition more smoothly to formal notation and code implementation. In contrast, when a simulation abstracted or simplified the mathematics too heavily, students sometimes struggled to map their experiential understanding onto the symbolic formulation. Future iterations will prioritize tighter structural alignment between simulation design and the targeted algorithm. The closer the correspondence between the embodied experience and the mathematical model, the more seamless the transition to formal derivation and programming becomes.

5.7.2 Clarify the transition from experience to formalization. While unplugged simulations provided accessible entry points into complex AI algorithms, students benefited most when the transition to formal mathematics was made explicit. Making this bridge visible helps students better articulate how interactive activities connect to theory. Future iterations will place greater emphasis on systematically annotating the mapping between the simulation and the algorithm. Rather than assuming students will infer how the physical experience connects to symbolic notation, instructional explanations will more deliberately trace how each step of the simulation reflects the structure of the algorithm.

5.7.3 Strengthen explicit connections to assessments. Students benefited from additional structured time to apply key frameworks and tools, like pgmpy or PyTorch, in ways that closely resemble project tasks.
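As one illustration of the kind of scaffolded in-class rehearsal this could look like (a hypothetical exercise, not the course's actual lab), students might run a single gradient step on a tiny PyTorch model and inspect what the optimizer changed before tackling the full project:

```python
# Hypothetical in-class rehearsal exercise: one SGD step on a tiny model,
# so students can inspect parameters before and after the update and
# reason about what optimizer.step() actually changed.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(2, 1)                     # tiny model: y = Wx + b
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])        # two toy training examples
y = torch.tensor([[1.0], [2.0]])

w_before = model.weight.detach().clone()          # snapshot for comparison

loss_before = torch.nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss_before.backward()                            # populate .grad fields
optimizer.step()                                  # one parameter update

loss_after = torch.nn.functional.mse_loss(model(x), y)
print(loss_before.item(), loss_after.item())      # with a small step, loss typically decreases
print(w_before, model.weight.detach())            # weights visibly changed
```

Pausing on a single update like this mirrors the project's cognitive demands (reading losses, checking gradients, reasoning about learning rates) at a fraction of the stakes.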
Giving students chances to construct derivations, interpret algorithm behavior, debug model implementations, and reason about parameter choices during class can reduce the gap between exploratory activity and high-stakes assessment. When in-class practice mirrors the cognitive and technical demands of exams and projects, students can build fluency and confidence in applying concepts independently. Future offerings will therefore incorporate targeted rehearsal problems and scaffolded coding challenges that reflect the structure and expectations of summative assessments.

6 Conclusion

This study demonstrates that redesigning instructional time can meaningfully improve students' experiences in introductory AI courses without compromising academic rigor. By replacing primarily lecture-based class sessions with interactive, unplugged, and collaborative activities, while keeping the assignments and learning objectives unchanged, we observed higher student attendance, stronger perceptions that assessments measured understanding, and greater overall course effectiveness, alongside comparable grades and self-reported learning outcomes. Embodied activities, a close alignment between in-class and out-of-class work, and proactive instructional support built engagement, confidence, and a sense of progress with challenging AI concepts in our students. These results show that in-class instructional design plays a critical role in shaping how students interpret difficulty, value coursework, and see themselves as capable learners in AI.

Acknowledgments

This project is supported by the National Science Foundation under Grant No. 2247790 and Grant No. 2112532. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

[1] Becky Allen, Andrew Stephen McGough, and Marie Devlin. 2021.
Toward a Framework for Teaching Artificial Intelligence to a Higher Education Audience. ACM Trans. Comput. Educ. 22, 2, Article 15 (Nov. 2021), 29 pages. doi:10.1145/3485062
[2] Mark H. Ashcraft. 2002. Math Anxiety: Personal, Educational, and Cognitive Consequences. Current Directions in Psychological Science 11, 5 (2002), 181–185. doi:10.1111/1467-8721.00196
[3] Lecia J. Barker and Kathy Garvin-Doxas. 2004. Making Visible the Behaviors that Influence Learning Environment: A Qualitative Exploration of Computer Science Classrooms. Computer Science Education 14, 2 (2004), 119–145. doi:10.1080/08993400412331363853
[4] Ali Battal, Gülgün Afacan Adanır, and Yasemin Gülbahar. 2021. Computer science unplugged: A systematic literature review. Journal of Educational Technology Systems 50, 1 (2021), 24–47.
[5] Tim Bell and Jan Vahrenhold. 2018. CS unplugged—how is it used, and does it work? In Adventures between lower bounds and higher altitudes: essays dedicated to Juraj Hromkovič on the occasion of his 60th birthday. Springer, 497–521.
[6] Andrea J Bingham and Patricia Witkowsky. 2021. Deductive and inductive approaches to qualitative data analysis. Analyzing and interpreting qualitative data: After the interview 1 (2021), 133–146.
[7] Vitor Augusto Menten De Barros, Henrique Mohallem Paiva, and Victor Takashi Hayashi. 2023. Using PBL and agile to teach artificial intelligence to undergraduate computing students. IEEE Access 11 (2023), 77737–77749.
[8] Eileen Doyle, Ioanna Stamouli, and Meriel Huggard. 2005. Computer anxiety, self-efficacy, computer experience: An investigation throughout a computer science degree. In Proceedings Frontiers in Education 35th Annual Conference. IEEE, S2H–3.
[9] Ray Essick, Matthew West, Mariana Silva, Geoffrey L Herman, and Emma Mercier. 2016. Scaling-up collaborative learning for large introductory courses using active learning spaces, TA training, and computerized team management. In 2016 ASEE Annual Conference & Exposition.
[10] L Dee Fink. 2013. Creating significant learning experiences: An integrated approach to designing college courses. John Wiley & Sons.
[11] Qiang Hao, Bradley Barnes, Ewan Wright, and Eunjung Kim. 2018. Effects of Active Learning Environments and Instructional Methods in Computer Science Education. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (Baltimore, Maryland, USA) (SIGCSE '18). Association for Computing Machinery, New York, NY, USA, 934–939. doi:10.1145/3159450.3159451
[12] Geoffrey L. Herman and Sushmita Azad. 2020. A Comparison of Peer Instruction and Collaborative Problem Solving in a Computer Architecture Course. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE '20). Association for Computing Machinery, New York, NY, USA, 461–467. doi:10.1145/3328778.3366819
[13] Philip Hingston, Barbara Combes, and Martin Masek. 2006. Teaching an undergraduate AI course with games and simulation. In Technologies for E-Learning and Digital Entertainment: First International Conference, Edutainment 2006, Hangzhou, China, April 16-19, 2006. Proceedings 1. Springer, 494–506.
[14] Wendy Huang and Chee-Kit Looi. 2021. A critical review of literature on "unplugged" pedagogies in K-12 computer science and computational thinking education. Computer Science Education 31, 1 (2021), 83–111.
[15] Jui-Tse Hung, Christopher Cui, Diana M Popescu, Saurabh Chatterjee, and Thad Starner. 2024. Socratic Mind: Scalable Oral Assessment Powered By AI. In Proceedings of the Eleventh ACM Conference on Learning@Scale. 340–345.
[16] Päivi Kinnunen and Lauri Malmi. 2006. Why students drop out CS1 course?. In Proceedings of the second international workshop on Computing education research. 97–108.
[17] James M Lang. 2021. Small teaching: Everyday lessons from the science of learning.
John Wiley & Sons.
[18] Hansol Lim, Wookhee Min, Jessica Vandenberg, Veronica Cateté, and Bradford Mott. 2024. Unplugged K-12 AI Learning: Exploring Representation and Reasoning with a Facial Recognition Game. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. 23285–23293. Issue 21.
[19] Annabel Lindner, Stefan Seegerer, and Ralf Romeike. 2019. Unplugged Activities in the Context of AI. In International conference on informatics in schools: Situation, evolution, and perspectives. Springer, 123–135.
[20] Duri Long, Jonathan Moon, and Brian Magerko. 2021. Unplugged assignments for K-12 AI education. AI Matters 7, 1 (July 2021), 10–12. doi:10.1145/3465074.3465078
[21] Ruizhe Ma, Ismaila Temitayo Sanusi, Vaishali Mahipal, Joseph E Gonzales, and Fred G Martin. 2023. Developing machine learning algorithm literacy with novel plugged and unplugged approaches. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1. 298–304.
[22] Andrew Manches, Peter E McKenna, Gnanathusharan Rajendran, and Judy Robertson. 2020. Identifying embodied metaphors for computing education. Computers in Human Behavior 105 (2020), 105859.
[23] Z. Markov, I. Russell, T. Neller, and S. Coleman. 2005. Enhancing undergraduate AI courses through machine learning projects. In Proceedings Frontiers in Education 35th Annual Conference. T3E–21. doi:10.1109/FIE.2005.1611941
[24] Luis Morales-Navarro, Michael T Giang, Deborah A Fields, and Yasmin B Kafai. 2024. Connecting beliefs, mindsets, anxiety and self-efficacy in computer science learning: An instrument for capturing secondary school students' self-beliefs. Computer Science Education 34, 3 (2024), 387–413.
[25] Davy Tsz Kit Ng, Min Lee, Roy Jun Yi Tan, Xiao Hu, J Stephen Downie, and Samuel Kai Wah Chu. 2023. A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies 28, 7 (2023), 8445–8501.
[26] Rose Niousha, Dev Ahluwalia, Michael Wu, Lisa Zhang, and Narges Norouzi. 2024. Mapping the Pathways: A Comparative Analysis of AI/ML/DS Prerequisite Structures in R1 Institutions in the United States. In 2024 IEEE Frontiers in Education Conference (FIE). 1–9. doi:10.1109/FIE61694.2024.10893290
[27] Anne-Kathrin Peters and Detlef Rick. 2014. Identity development in computing education: theoretical perspectives and an implementation in the classroom. In Proceedings of the 9th workshop in primary and secondary computing education. 70–79.
[28] Andrew Petersen, Michelle Craig, Jennifer Campbell, and Anya Tafliovich. 2016. Revisiting why students drop CS1. In Proceedings of the 16th Koli Calling International Conference on Computing Education Research. 71–80.
[29] Philip M Podsakoff, Scott B MacKenzie, Jeong-Yeon Lee, and Nathan P Podsakoff. 2003. Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology 88, 5 (2003), 879.
[30] Jennifer M Reddig, Scott Moon, Kaitlyn Crutcher, and Christopher J MacLellan. 2026. AI Unplugged: Embodied Interactions for AI Literacy in Higher Education. In Proceedings of The 16th Symposium on Educational Advances in Artificial Intelligence (EAAI '26). Presented at EAAI 2026; proceedings forthcoming.
[31] Johnny Saldaña. 2021. The coding manual for qualitative researchers. (2021).
[32] Naaz Sibia, Amber Richardson, Alice Gao, Andrew Petersen, and Lisa Zhang. 2025. Student Perspectives on the Challenges in Machine Learning. In Proceedings of the 30th ACM Conference on Innovation and Technology in Computer Science Education V. 1 (Nijmegen, Netherlands) (ITiCSE 2025). Association for Computing Machinery, New York, NY, USA, 9–15. doi:10.1145/3724363.3729107
[33] Mariana Silva, Philipp Hieronymi, Matthew West, Nicolas Nytko, Akshit Deshpande, Jer-Chin Chuang, and Sascha Hilgenfeldt. 2022.
Innovating and modernizing a Linear Algebra class through teaching computational skills. In 2022 ASEE Annual Conference & Exposition.
[34] Amber Solomon, Miyeon Bae, Betsy DiSalvo, and Mark Guzdial. 2020. Embodied representations in computing education: How gesture, embodied language, and tool use support teaching recursion. (2020).
[35] David R Stead. 2005. A review of the one-minute paper. Active Learning in Higher Education 6, 2 (2005), 118–131.
[36] Jaclynn V Sullivan. 2018. Learning and embodied cognition: A review and proposal. Psychology Learning & Teaching 17, 2 (2018), 128–143.
[37] Manuel Vargas, Tabita Nunez, Miguel Alfaro, Guillermo Fuertes, Sebastian Gutierrez, Rodrigo Ternero, Jorge Sabattin, Leonardo Banguera, Claudia Duran, and Maria Alejandra Peralta. 2020. A project based learning approach for teaching artificial intelligence to undergraduate students. Int. J. Eng. Educ 36, 6 (2020), 1773–1782.
[38] Allan Wigfield. 1994. Expectancy-value theory of achievement motivation: A developmental perspective. Educational Psychology Review 6, 1 (1994), 49–78.
[39] Allan Wigfield, Katherine Muenks, and Jacquelynne S Eccles. 2021. Achievement motivation: What we know and where we are going. Annual Review of Developmental Psychology 3 (2021), 87–111.