Barriers that Programming Instructors Face While Performing Emergency Pedagogical Design to Shape Student-AI Interactions with Generative AI Tools
Generative AI (GenAI) tools are increasingly pervasive, pushing instructors to redesign how students use GenAI tools in coursework. We conceptualize this work as emergency pedagogical design: reactive, indirect efforts by instructors to shape student-AI interactions without control over commercial interfaces. To understand practices of lead users conducting emergency pedagogical design, we conducted interviews (n=13) and a survey (n=169) of computing instructors. These instructors repeatedly encountered five barriers: fragmented buy-in for revising courses; policy crosswinds from non-prescriptive institutional guidance; implementation challenges as instructors attempt interventions; assessment misfit as student-AI interactions are only partially visible to instructors; and lack of resources, including time, staffing, and paid tool access. We use these findings to present emergency pedagogical design as a distinct design setting for HCI and outline recommendations for HCI researchers, academic institutions, and organizations to effectively support instructors in adapting courses to GenAI.
💡 Research Summary
The paper introduces the concept of “emergency pedagogical design” to describe the reactive, ad‑hoc efforts that computing instructors are undertaking to shape how students interact with rapidly emerging generative AI (GenAI) tools such as ChatGPT, Claude, and Gemini. Unlike traditional curriculum redesign, emergency pedagogical design occurs without control over commercial AI interfaces and under time pressure, mirroring the “emergency remote teaching” response to the COVID‑19 pandemic but with the expectation that AI will remain a permanent fixture in education.
To investigate this phenomenon, the authors conducted a mixed‑methods study. They performed semi‑structured interviews with 13 full‑time computing instructors who had already modified course materials, assignments, or infrastructure to incorporate GenAI. Participants were recruited through purposive and snowball sampling to ensure they had concrete implementation experience, rather than experience limited to writing policy statements. The interview protocol drew on the critical incident technique and cognitive walkthroughs, asking instructors to screen‑share a memorable assignment and explain the student‑AI interaction they intended, their motivations, the implementation challenges they encountered, and how students reacted.
In parallel, a broader survey of 169 computing instructors—including faculty from minority‑serving institutions (MSIs) and historically Black colleges and universities (HBCUs)—captured quantitative patterns of attitudes, practices, and perceived barriers. The study was conducted in mid‑2025, roughly 2.5 years after the release of high‑profile tools like ChatGPT and GitHub Copilot, providing a snapshot of early‑stage adaptation before best practices have coalesced.
Analysis of interview transcripts and survey responses revealed five recurrent barriers that instructors face when performing emergency pedagogical design:
- Fragmented buy‑in for course revision – Institutional consensus on AI integration is lacking. Faculty within the same department or university often hold divergent views, making it difficult to achieve coordinated curriculum changes.
- Policy crosswinds from non‑prescriptive guidance – University‑level AI policies are typically vague, leaving instructors to interpret and apply them locally. This ambiguity is especially pronounced across public vs. private and research‑intensive vs. teaching‑focused institutions.
- Implementation challenges – Technical integration of GenAI tools with existing Learning Management Systems (LMS), the need to teach effective prompting, and limited real‑time technical support create a substantial workload for instructors.
- Assessment misfit – Student‑AI interactions are only partially observable, complicating the measurement of learning outcomes. Automatic grading systems and human evaluators often conflict, and distinguishing AI‑generated code from a student's conceptual understanding remains an open problem.
- Lack of resources – Time, staffing, and paid access to advanced AI services are scarce. Instructors must allocate additional hours for design, testing, and monitoring, yet many institutions do not provide dedicated personnel or budget for these activities.
The authors argue that these barriers constitute a distinct design setting for Human‑Computer Interaction (HCI) research. They propose three sets of recommendations:
For HCI researchers: Develop tools that support the full lifecycle of emergency pedagogical design, including prompt scaffolding, transparent logging of student‑AI interactions, and assessment‑aligned analytics; a minimal logging sketch appears after these recommendations. Design interventions should respect the limited control instructors have over commercial AI APIs and should be adaptable to diverse institutional contexts.
For academic institutions: Create clear, actionable AI usage policies that balance openness with academic integrity, and foster cross‑departmental coalitions to achieve unified buy‑in. Allocate dedicated staff (e.g., instructional designers or AI‑learning technologists) and budget lines for paid AI subscriptions, ensuring equitable access across faculty.
For funding agencies and AI vendors: Offer grants or subsidized licenses targeted at educational pilots, and involve educators early in the product development cycle to surface pedagogical requirements.
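To make the first recommendation more concrete, the sketch below shows one possible shape for "transparent logging of student‑AI interactions": a thin wrapper under the instructor's control, placed in front of whatever commercial model a course uses. The wrapper, the `logged_ai_call` function, and the JSONL log format are illustrative assumptions made for this summary, not artifacts described in the paper.

```python
# Hypothetical sketch: instructor-controlled logging around any GenAI backend.
# Uses only the standard library so it runs without paid API access.
import json
import time
from pathlib import Path
from typing import Callable

LOG_PATH = Path("student_ai_interactions.jsonl")  # assumed log location

def logged_ai_call(student_id: str, prompt: str,
                   model_call: Callable[[str], str]) -> str:
    """Send `prompt` to a GenAI backend via `model_call` and append the
    exchange to a JSONL log that the instructor can review later."""
    response = model_call(prompt)  # e.g. a call into a commercial model's API
    record = {
        "timestamp": time.time(),
        "student_id": student_id,
        "prompt": prompt,
        "response": response,
    }
    with LOG_PATH.open("a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return response

if __name__ == "__main__":
    # Stand-in backend so the sketch is self-contained.
    fake_model = lambda p: f"[model reply to: {p}]"
    print(logged_ai_call("student_042", "Explain Python list comprehensions.", fake_model))
```

Because such a wrapper only intercepts calls rather than altering the model itself, it works within the limited control instructors have over commercial AI APIs while still producing an interaction record that assessment‑aligned analytics could consume.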
Finally, the paper reflects on lessons from emergency remote teaching—such as the importance of rapid prototyping, community support structures, and flexible assessment models—and suggests that similar mechanisms can mitigate the turbulence of emergency pedagogical design. By framing the current response to GenAI as an emergent, systematic design practice, the study positions instructors as “lead users” whose lived experiences can forecast broader educational needs and guide future HCI innovations.
In sum, the work contributes (1) a novel conceptualization of emergency pedagogical design, (2) an empirically grounded taxonomy of barriers faced by computing instructors, and (3) actionable guidance for researchers, institutions, and industry to support sustainable, AI‑enhanced teaching practices.