Exploring Student Perception on Gen AI Adoption in Higher Education: A Descriptive Study
Harpreet Singh, Jaspreet Singh, Satwant Singh
h.singh@tees.ac.uk, J.singh@tees.ac.uk, singh.satwant61@yahoo.in
Rupinder Singh, Shamim Ibne Shahid, Mohammad Hassan Tayarani Najaran
rupinder@outlook.in, s.shahid2@herts.ac.uk, m.tayaraninajaran@herts.ac.uk

Abstract: The rapid proliferation of Generative Artificial Intelligence (GenAI) is reshaping pedagogical practices and assessment models in higher education. While institutional and educator perspectives on GenAI integration are increasingly documented, the student perspective remains comparatively underexplored. This study examines how students perceive, use, and evaluate GenAI within their academic practices, focusing on usage patterns, perceived benefits, and expectations for institutional support. Data were collected through a questionnaire administered to 436 postgraduate Computer Science students at the University of Hertfordshire and analysed using descriptive methods. The findings reveal a Confidence–Competence Paradox: although more than 60% of students report high familiarity with tools such as ChatGPT, daily academic use remains limited and confidence in effective application is only moderate. Students primarily employ GenAI for cognitive scaffolding tasks, including concept clarification and brainstorming, rather than fully automated content generation. At the same time, respondents express concerns regarding data privacy, reliability of AI-generated information, and the potential erosion of critical thinking skills. The results also indicate strong student support for integrating AI literacy into curricula and programme Knowledge, Skills, and Behaviours (KSBs). Overall, the study suggests that universities should move beyond a policing approach to GenAI and adopt a pedagogical framework that emphasises AI literacy, ethical guidance, and equitable access to AI tools.
Keywords: AI and education (AIeD), GenAI in education, AI in higher education

1. Introduction

The rise of generative artificial intelligence (GenAI) and its associated tools (e.g., ChatGPT, Gemini, Claude) over the past few years has marked a transformative moment in the history of higher education. Successive technological advances have reshaped the sector: physical textbooks gave way to digital resources, the internet enabled online learning platforms, and smartphones and apps made learning flexible. Today, GenAI systems such as ChatGPT extend this trajectory by introducing tools that not only mediate but also co-create knowledge. It is therefore important to study the impact of GenAI within the broader context of digital transformation, examining how students, as primary stakeholders, perceive and navigate its growing influence.

1.1 Digital Transformation in Higher Education

Higher education has gone through four major stages of digital transformation. The first stage began in the 1990s, when internet access and digitised resources expanded online teaching and learning. Learning Management Systems (LMSs), such as Blackboard and Moodle, became central tools for content delivery, communication, and course management (Al-Emran et al., 2018; Brynjolfsson and McAfee, 2014). The second stage followed with Web 2.0 and MOOCs, where platforms like Coursera and edX widened access and encouraged network-based, participatory learning (Hollands and Tirthali, 2014; Siemens, 2005). The third stage appeared in the 2010s through early AI applications, including adaptive learning and automated feedback, but it also raised concerns about bias, fairness, and educator autonomy (Ifenthaler and Yau, 2020; Williamson and Eynon, 2020).
The current fourth stage started with ChatGPT in 2022 and continued with tools such as Gemini, Claude, and DeepSeek, which can generate original text and images and have intensified debates on authorship, originality, and academic integrity (Hu, 2023; Luo, 2024b; Kasneci et al., 2023).

1.2 AI as a Disruptive Force

Generative AI (GenAI) is a subclass of AI that enables the development of models capable of generating original and contextually relevant content such as text, images, and multimedia (Brown et al., 2020). These models are trained on vast amounts of data from the Internet using large-scale computational infrastructure, including high-performance data centres. GenAI has become a disruptive force in education due to its capability to provide personalized tutoring, instant explanations, and adaptive feedback to learners. It also supports multilingual learners by providing accessible materials (Ng et al., 2024), while educators benefit from assistance in rubric generation, assessment design, and content summarization (Sousa and Cardoso, 2025). However, these capabilities also introduce risks. Academic integrity is challenged as conventional assessment models struggle to evaluate AI-generated work (Ardito, 2024). Concerns about biased or inaccurate outputs also raise issues of reliability and misinformation (Bender et al., 2021). In addition, GenAI poses challenges for educators regarding job displacement and shifting professional roles (Ivanov et al., 2024). Institutional responses to GenAI remain uneven. Some universities have banned AI tools and classify their use as academic misconduct (Heaven, 2023), while others advocate their constructive and ethical integration in education (Russell Group, 2023). International organisations such as UNESCO and the OECD have proposed human-centred governance frameworks that align innovation with accountability and transparency (UNESCO, 2023; OECD, 2023).
1.3 Expectations and Institutional Responsibilities

Following the disruptive dynamics discussed in Section 1.2, student expectations are becoming more specific. Many students now treat GenAI as part of normal academic and professional practice, and they expect clear, consistent guidance on acceptable use (Kelly et al., 2023; Arowosegbe, Rahman and Parker, 2024). When institutional rules are unclear, students report anxiety about misconduct and uncertainty about boundaries (Kelly et al., 2023; Arowosegbe, Rahman and Parker, 2024; Yusuf et al., 2024).

Students generally favour guided integration rather than strict prohibition. Evidence indicates support for ethical boundaries, transparent policy communication, and practical direction on how GenAI can be used within disciplines (Luo, 2024a; Weng et al., 2024). This expectation also extends to curriculum design: students increasingly want AI literacy and wider digital competence embedded as core employability skills (Kelly et al., 2023; Chiu, 2024a). Institutional responsibility therefore goes beyond policy statements. Students expect educators to act as facilitators, supported by workshops, seminars, and other structured forms of training that translate policy into practice (Weng et al., 2024; Sousa and Cardoso, 2025). Overall, the literature suggests that effective governance depends on a balanced approach: clear rules, discipline-specific guidance, and practical learning support that reduces uncertainty while encouraging responsible use (Luo, 2024a; Yusuf et al., 2024).

Although research on generative artificial intelligence (GenAI) in education is expanding rapidly, several important gaps persist. Existing studies remain largely confined within national contexts, offering limited cross-regional or comparative perspectives that would enable broader generalization of findings (Ivanov et al., 2024).
Moreover, much of the current scholarship privileges institutional and educator viewpoints, leaving student experiences and expectations underrepresented in analyses of GenAI adoption (Weng et al., 2024; Luo, 2024a). Institutional frameworks addressing GenAI also tend to be fragmented, often focusing separately on ethical, pedagogical, or policy dimensions rather than integrating these into coherent, system-wide strategies (UNESCO, 2023; Russell Group, 2023). In addition, while innovative assessment models have been proposed to respond to GenAI's impact on academic integrity, there remains a lack of empirical evidence validating their pedagogical effectiveness or student acceptability (Roe et al., 2024). Addressing these shortcomings calls for research that centres student perspectives, situates them within the historical evolution of educational technology, and develops actionable frameworks for institutional implementation.

1.4 Research Gap and Contribution

Despite growing awareness of GenAI among university students, existing research provides limited empirical insight into how this awareness translates into actual academic practice. In this paper, we analyse survey data from 436 postgraduate Computer Science students at the University of Hertfordshire to examine how students perceive and use generative artificial intelligence (GenAI) in academic contexts. This study makes the following contributions:

• It identifies a Utility Gap, showing a mismatch between students' perception of GenAI as essential and their relatively low frequency of academic use.

• The findings reveal a False Peak of Familiarity, where apparent fluency with GenAI masks gaps in critical competence and ethical understanding.

• Using both multiple-choice and open-text responses, the study uncovers a Social Desirability Bias in reporting AI-assisted writing practices.
• It also highlights an Algorithmic Paywall, where reliance on premium AI tools may create new forms of digital inequality in higher education.

2. Literature Review

2.1 Awareness and Exposure

Prior research on tertiary-level students' awareness of AI suggests that over 80% are familiar with tools such as ChatGPT; however, deeper literacy in responsible and critical use remains limited (Mitevska Petrusheva and Idrizi, 2023). Similarly, although awareness appears high across disciplines, students often overestimate their competence, particularly in applying GenAI tools ethically (Kelly et al., 2023). This mismatch between familiarity and responsible engagement remains a central theme in recent research.

Recent studies (Xia et al., 2024) show that students frequently use GenAI for brainstorming, drafting, and feedback, yet often struggle to evaluate output accuracy. The same pattern appears in the U.S. (Klimova et al., 2025), where ChatGPT is widely used for understanding complex concepts and structuring assignments, but concerns persist about overreliance and reduced critical thinking. Evidence from the UAE further suggests that adoption is shaped by trust, effort expectancy, and hedonic motivation (Shuhaiber et al., 2025). Across disciplines, use cases vary (e.g., coding in STEM, writing support in humanities), and perceptions of reliability also differ. A multicultural view indicates that awareness is high globally, but exposure is context-dependent (Yusuf et al., 2024). In Southeast Asia, students report lower confidence in independent use and prefer blended approaches where AI feedback is mediated by human guidance (Roe et al., 2024). In Hong Kong, students recognize personalization benefits but remain cautious about creativity and privacy (Chan, 2024).
A multi-university policy analysis similarly shows stronger guideline development in the Global North, while Global South institutions face equity and infrastructure constraints (Jin et al., 2025).

Studies capturing student voices also show both enthusiasm and caution. UK students report broad awareness and use for academic writing but uncertainty about ethical boundaries and institutional expectations (Arowosegbe, Alqahtani and Oyelade, 2024). U.S. students report frequent use for clarification tasks alongside concerns about privacy and reduced independence (Klimova et al., 2025). This rapid normalization of GenAI in everyday study practices (Kurtz et al., 2024) can, without deliberate strategy, widen inequality and encourage instrumental rather than critical use (Francis et al., 2025).

The ethical dimension is increasingly central. Heightened awareness without guidance risks normalizing AI-assisted plagiarism and weakening academic integrity (Kovari, 2025). Reported concerns include plagiarism, superficial learning, and diminished creativity (Klimova et al., 2025), while perceived risk also shapes adoption intentions, reinforcing the need for transparency and institutional trust-building (Shuhaiber et al., 2025). Effective exposure therefore requires digital literacies such as fact-checking, bias recognition, and critical engagement (Kasneci et al., 2023).

The institutional environment plays a pivotal role. Framing GenAI primarily as a threat to originality can limit constructive engagement (Luo, 2024a). By contrast, many European and U.S. universities are moving toward balanced guidance that permits use while stressing AI literacy and ethics (Weng et al., 2024; Christ-Brendemühl, 2025).
Faculty-focused evidence supports this direction: many teachers support integration but call for clearer guidance, and German guidelines increasingly frame GenAI opportunities as outweighing risks (Bender et al., 2021; Christ-Brendemühl, 2025). Taken together, awareness is widespread, but responsible exposure depends on proactive literacy initiatives, scaffolded policy, and cross-disciplinary support.

2.2 AI Literacy in the Curriculum

As GenAI becomes increasingly embedded in higher education, students and educators alike recognise AI literacy as a fundamental component of future-ready curricula. AI literacy extends beyond technical skills and includes ethical, critical, and creative dimensions that support meaningful engagement with AI tools. It has been described as an "essential digital literacy" for navigating a world where intelligent technologies increasingly mediate learning and professional practice (Bender, 2024). Ethical and responsible AI use also requires evaluative judgment, bias detection, and critical interpretation of AI-generated content (Yusuf et al., 2024). Without explicit instruction in these areas, students may over-rely on AI or misuse it in ways that undermine learning integrity.

Empirical studies therefore highlight the importance of embedding AI literacy into formal instruction rather than treating it as optional. In a large-scale Australian survey of 1,135 university students, most respondents reported limited confidence in using GenAI tools and called for structured learning activities to build these competencies (Kelly et al., 2023). The study also argues that students should be explicitly taught appropriate use of generative AI through discipline-specific learning activities. Evidence from Hong Kong similarly shows that students value AI for idea generation and personalised learning, while expecting educators to provide explicit guidance on responsible use (Chan and Hu, 2023).
Practical frameworks have also been proposed, including the AI Assessment Integration Framework and the Six Assessment Redesign Pivotal Strategies (SARPS), to embed AI literacy outcomes into teaching, learning, and assessment. Developing AI literacy in curricula also aligns with institutional and policy priorities. Both educators and students need support through professional development to ensure consistent understanding of AI tools, their ethical implications, and pedagogical integration (Ayyoub et al., 2025). From a policy perspective, AI literacy is also positioned as a core learning outcome in higher education, with emphasis on equitable access to training and a culture of critical engagement (Chan, 2024). Overall, integrating AI literacy across curricula helps students become informed, ethical, and innovative thinkers who can co-work with AI in academic and professional contexts.

2.3 Student Expectations of Educators and Institutions in GenAI Integration

The rise of GenAI in higher education (HE) has reshaped what students expect from educators and institutions. Students consistently call for explicit, transparent, and fair policies on AI use, because ambiguity increases anxiety around academic misconduct (Arowosegbe, Rahman and Parker, 2024). This concern is echoed in policy reviews showing that universities often frame GenAI as a threat to originality without fully addressing its potential as a collaborative academic tool (Luo, 2024a).

Students therefore expect more nuanced guidance that distinguishes unethical use from legitimate academic support. Evidence suggests that students value discipline-specific rules and accessible support such as workshops and consultations (Weng et al., 2024).
They also expect AI to be integrated into curricula and assessment rather than excluded: students in Hong Kong and Australia report valuing AI for brainstorming and personalised support, while still expecting assessment redesign that protects fairness, integrity, creativity, and critical thinking (Chan and Hu, 2023; Kelly et al., 2023). In this context, GenAI is seen as potentially supporting self-regulated learning, but only when institutions redesign assessment in line with these new affordances (Xia et al., 2024; Weng et al., 2024). Students likewise expect a shift from easily automated traditional tasks toward authentic and interdisciplinary assessment for AI-rich workplaces (Chiu, 2024b; Sousa and Cardoso, 2025; Khlaif et al., 2025).

Another central expectation is structured development of AI literacy. Students emphasize that universities should provide formal opportunities to learn responsible AI use, not just tool familiarity. AI literacy is framed as an essential digital literacy spanning technical, ethical, and critical dimensions, including evaluative judgment, bias detection, and critical interpretation of outputs (Bender, 2024; Yusuf et al., 2024). Conceptual and policy work similarly stresses integrating AI literacy across higher education through curriculum and assessment frameworks, alongside equitable access to training (Chan, 2024).

Students also expect institutions to reduce inequity in access. Reliance on commercial AI platforms can deepen digital divides, especially where cost and infrastructure constraints limit adoption. Comparative evidence shows that students expect universities to play an equalising role by providing institutional licences, subsidised access, and support tailored to varied levels of digital competence (Sousa and Cardoso, 2025; Zubair et al., 2025). Without such measures, AI integration risks widening existing disparities.
At the same time, students do not view AI as a replacement for educators. They expect educators to act as mentors who model responsible and critical AI use. Evidence from Vietnam and Singapore suggests that students value AI-generated feedback most when paired with instructor commentary, indicating a preference for a human–AI partnership (Roe et al., 2024). Related work also notes expectations that educators demonstrate ethical engagement with AI, supported by professional development for staff in AI-rich contexts (Francis et al., 2025; Ayyoub et al., 2025).

Students further expect institutions to protect academic integrity through constructive and transparent systems rather than purely punitive enforcement. Many students recognise plagiarism and misuse risks, but remain sceptical about detector fairness and reliability (Ardito, 2024; Roe et al., 2024). Concerns include inconsistency, possible bias against non-native English writers, and vulnerability to circumvention tools (Ogunleye et al., 2024; Singh et al., 2023). Overall, students expect institutions to provide clear policy, fair assessment reform, educator readiness, ethical guidance, and equitable access. Their expectations point to supportive governance that enables responsible use while preserving academic integrity (Weng et al., 2024; Chiu, 2024b).

2.4 Institutional Role in the Integration of AI Tools in Higher Education

The emergence of GenAI has required universities to move quickly from ad hoc responses to coordinated institutional strategies that combine governance, staff support, and pedagogical redesign (Nikolic et al., 2024; Weng et al., 2024; Xia et al., 2024). Because GenAI can complete many conventional written tasks, higher education institutions face growing pressure to update academic-integrity frameworks without reducing policy to prohibition alone (Luo, 2024a; Rudolph et al., 2023; Weng et al., 2024).
At policy level, the literature shows that many institutional rules remain too generic for current GenAI use cases (Luo, 2024a; Nikolic et al., 2024). Universities often frame GenAI primarily as a threat to originality, but this framing can overlook collaborative and technology-mediated forms of contemporary knowledge production (Luo, 2024a; Wang et al., 2024). More effective approaches combine clear disclosure and attribution requirements with culturally sensitive implementation, supported by integrated governance models such as the AI Ecological Education Policy Framework (Chan and Hu, 2023; Khlaif et al., 2024; Weng et al., 2024; Yusuf et al., 2024).

Beyond governance, institutions must invest in educator development and assessment reform. Evidence indicates that teaching staff often lack sufficient training, particularly in AI literacy, assessment literacy, and the ethical limitations of GenAI outputs (Ayyoub et al., 2025; Kurtz et al., 2024; Nikolic et al., 2024; Wang et al., 2024). In parallel, curricula and assessment need to shift toward higher-order outcomes and authentic tasks, including approaches where students critique AI-generated outputs, while avoiding overreliance on unreliable AI-detection systems (Francis et al., 2025; Khlaif et al., 2024, 2025; Perkins et al., 2024; Weng et al., 2024; Xia et al., 2024).

3. Methodology

The data for this study were collected from students through a questionnaire on institutional AI policy. The participants were students enrolled in the May 2025 session at the University of Hertfordshire under the module Research Methods in Computer Science. The data were collected from 436 Computer Science students.

Overview of the module Research Methods in Computer Science: This is a postgraduate-level module, weighted at 30 credits, focusing on research methods in computer science.
The teaching structure comprises a two-hour lecture session delivered weekly over an academic term of ten weeks.

Learning Outcomes: Upon successful completion of this module, students will be able to:

• Discern and classify a comprehensive spectrum of research methodologies applicable to complex issues within the domain of computer science.

• Comprehend and contextualise the practical application of these methods, specifically in relation to an advanced Master's dissertation or project.

• Critically appraise and implement a diverse set of planning and execution strategies that are essential for undertaking a substantial, self-directed research programme at the advanced Master's level.

Evaluation and Intervention: The intervention utilised was a multiple-choice, unmarked quiz administered via the Canvas learning management system. The core component of this intervention was a questionnaire designed to capture student familiarity, perspectives, and expectations on institutional AI policy.

Questionnaire Design: The instrument comprised 27 questions targeting various dimensions of student interaction with GenAI tools. These aspects included:

• Students' familiarity with GenAI tools and patterns of academic use.

• Self-reported confidence in using GenAI effectively, as a proxy for perceived competence.

• Perceived benefits and risks of GenAI use in academic work.

• Expectations for assessment-related guidance.

• Perceptions of institutional communication and policy clarity regarding GenAI.

• Preferences for curriculum integration and training provision.

• Perceived consistency of staff engagement and encouragement to explore GenAI across modules.

• Views on equity and access to tools.
Response Format: Most items used closed-response formats (e.g., multiple-choice selections and 5-point Likert-type agreement/satisfaction scales), supplemented by selected open-text prompts (labelled as '.b' sub-questions) to capture qualitative explanations.

Demographic Variables: The study intentionally excluded demographic variables to obtain a view of GenAI and institutional policies that is agnostic to personal characteristics. This approach was facilitated by the controlled student group, as all participants were enrolled in the same M.Sc. programme.

Full Instrument: The complete questionnaire is provided in Appendix A.

Data Collection: Data collection was conducted through the Canvas VLE. While the survey contained 27 questions, the platform's format necessitated splitting multi-part questions into individual numbered entries. To maintain clarity in Section 4, follow-up qualitative prompts are identified by the suffix '.b' and paired with their primary question. As a result, certain numerical gaps exist in the sequence (3, 6, 17, 21, and 27) to account for these integrated question pairs.

Anonymity and Confidentiality: To ensure candid and unhesitating responses, the questionnaire was fully anonymised. No demographic data that could potentially reveal the individual identity of any student was collected.

Data Processing: Following collection, the raw data were exported to a comma-separated values (CSV) file. Python was then used to conduct the required statistical analysis.

Sample Size: The analysis covered the responses received from all 436 students who participated in the study.

4. Survey Results and Analysis

This section presents the main findings from the student survey on generative AI use in higher education. We first report the descriptive results for each question and then interpret what these patterns suggest about student familiarity, confidence, concerns, and expectations of institutional support.
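As an illustration of the descriptive processing described in the Methodology, the percentage distribution for a single closed-response item can be computed from the exported data along these lines. This is a minimal sketch on synthetic values; the variable names and the ten-response sample are hypothetical, not taken from the actual instrument or dataset.

```python
from collections import Counter

# Synthetic stand-in for one exported questionnaire column
# (5-point Likert responses); the real data come from the CSV export.
responses = [4, 5, 3, 4, 2, 4, 3, 5, 4, 1]

counts = Counter(responses)
n = len(responses)

# Percentage of respondents selecting each option,
# the quantity reported throughout Section 4.
distribution = {option: round(100 * counts[option] / n, 1)
                for option in sorted(counts)}
print(distribution)
```

The same tally, applied per question over all 436 records, yields the bar-chart percentages reported below.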
Together, these results highlight key areas where universities can strengthen AI literacy, guidance, and equitable access.

Q1: How familiar are you with generative AI tools?

[Figure 1: Student familiarity with generative AI tools on a 1–5 scale, where 1 represents no familiarity and 5 represents extreme familiarity. Responses: 1 = 4.1%, 2 = 7.8%, 3 = 27.4%, 4 = 44.1%, 5 = 16.6%.]

Figure 1 shows that over 60% of respondents reported being "Very" or "Extremely" familiar with generative AI. At first glance, this suggests a digitally native group that has easily adopted these tools. However, this high level of reported familiarity contrasts with the moderate confidence levels expressed in later questions. This gap indicates that many students may be confusing "usage familiarity," such as the ability to prompt a chatbot, with "critical familiarity," which involves understanding limitations like hallucinations and bias. The data therefore point to a "False Peak" (Kim et al., 2025) of familiarity among students, where self-reported fluency likely hides gaps in practical and critical competence. In addition, the 27.4% of students who identified themselves as "Moderately familiar" form an important risk group. These students possess sufficient knowledge to utilise generative AI for academic tasks, but may lack the deeper understanding required to use it responsibly. From a university perspective, this overconfidence is concerning (Mah and Groß, 2024) because it can lead to complacency. Students who believe they already understand AI well are less likely to participate in voluntary training, which can create a competence trap where institutions assume a level of skill that is not actually present.
Therefore, it is important for universities to help students accurately recognize their own level of familiarity with AI, as this awareness can encourage them to engage in training and capacity-building initiatives.

Q2 & Q2(b): Which GenAI tools do you have experience with?

[Figure 2: Various GenAI tools that students are familiar with. ChatGPT 33.3%, Gemini 23.5%, Copilot 20.4%, DeepSeek 12.7%, Claude 7.3%, Other 2.7%, None 0.1%.]

[Figure 3: Word cloud of generative AI tools identified by students in open-text responses.]

Figure 2 shows that ChatGPT is the most widely used generative AI tool, reported by 33.3% of respondents. This is followed by Gemini at 23.5% and Copilot at 20.4%. Smaller but still notable shares of students report using DeepSeek at 12.7% and Claude at 7.3%, while only a very small number indicate no experience with generative AI tools. This pattern suggests that students are not relying on a single dominant system but are instead using multiple AI platforms. Their choices often appear to be shaped by specific academic tasks or by how well the tools are integrated into familiar software environments. The accompanying word cloud in Fig. 3 supports this interpretation, showing frequent references to AI tools that are embedded within larger ecosystems such as Google and Microsoft, as well as the growing visibility of answer-engine platforms like Perplexity and Grok. The strong presence of these externally adopted tools points to the rise of Shadow IT, where students independently use AI systems outside institutional provision or oversight. These findings indicate a fragmented AI usage landscape in which students act as multi-model users. From an institutional perspective, this fragmentation makes standardisation more difficult and raises concerns about equity, as differences in model quality, access to paid features, and platform integration may lead to unequal academic advantages among students.
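One reading note, offered as an interpretation: the Q2 tool shares in Figure 2 sum to exactly 100%, which suggests they are percentages of all selections, whereas the Q5 task percentages later in this section sum to well over 100% (percentages of respondents, who could tick several options). The two multi-select conventions can be sketched as follows; the answer lists and tool names here are synthetic and purely illustrative.

```python
from collections import Counter

# Synthetic multi-select answers: each respondent may tick several tools.
answers = [
    ["ChatGPT", "Gemini"],
    ["ChatGPT"],
    ["Copilot", "ChatGPT", "Gemini"],
    ["Copilot"],
]

n_respondents = len(answers)
selections = [tool for row in answers for tool in row]
counts = Counter(selections)

# Convention 1: percent of all selections (totals ~100%).
pct_of_selections = {t: round(100 * c / len(selections), 1)
                     for t, c in counts.items()}

# Convention 2: percent of respondents (total can exceed 100%).
pct_of_respondents = {t: round(100 * c / n_respondents, 1)
                      for t, c in counts.items()}

print(pct_of_selections)
print(pct_of_respondents)
```

Keeping the two denominators distinct matters when comparing charts: under the second convention a tool used by most respondents can show a far larger figure than under the first.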
Q4: How frequently do you use AI tools for academic purposes?

[Figure 4: Student frequency of GenAI use for academic purposes. Daily 6.7%; Frequently (multiple times a week) 16.8%; Regularly (once a week) 20.3%; Occasionally (once a month) 26.3%; Rarely (1–2 times per semester) 22.8%; Never (0 times per semester) 7.1%.]

Figure 4 shows that only 6.7% of respondents report daily academic use of AI tools. A further 16.8% indicate frequent use, defined as multiple times per week, and 20.3% report regular use (once a week). In contrast, a large share of students fall into lower-frequency categories, including occasional use at 26.3%, rare use at 22.8%, and no use at 7.1%. Although awareness of AI tools is widespread, daily use remains limited, with most students using them only occasionally or rarely. This pattern challenges the common narrative of AI "addiction" and instead suggests that, for many students, AI functions as a problem-solving tool used only when needed rather than as a regular part of their academic routine. Such infrequent use is likely influenced by concerns about academic integrity (Pudasaini et al., 2024). Students may deliberately limit their use to avoid detection by monitoring systems, which in turn prevents them from developing the practical skills needed for confident and responsible use of AI. From a pedagogical perspective, this reflects a lack of effective integration. If AI were embedded in teaching and learning as a tutor or research assistant, usage would be more consistent. Instead, the low frequency of use suggests that students view AI as a shortcut for specific challenges rather than as an ongoing support for learning. This points to a clear "Utility Gap" between the high level of familiarity students report and the low frequency with which they actually use these tools (Kim et al., 2025; Arowosegbe, Alqahtani and Oyelade, 2024).
Figure 5: Various ways AI supports student learning and academic tasks.

Figure 6: Word cloud of ways GenAI supports students in academic tasks.

Q5 & Q5(b): In what ways do AI tools support your learning?

Reporting multiple responses, Fig. 5 shows that the most common uses of AI tools are related to understanding or clarifying concepts at 69.0% and generating ideas or brainstorming at 64.0%. These are followed by summarising texts or articles at 52.1% and improving time efficiency at 47.2%. Writing-related activities, such as essays or reports, are reported by a smaller share of respondents at 36.9%. Similarly, programming or coding tasks are reported by 42.0% of students, and translation tasks by 40.1%. Overall, the data support the "Cognitive Scaffolding" hypothesis and challenge the common assumption among faculty that students mainly use AI to generate essays from scratch. The high frequency of concept clarification and brainstorming suggests that students often use AI as a personal tutor to support learning and address gaps in formal instruction. However, the open-text responses in Fig. 6 present a more complex picture. In these responses, terms such as essays, reports, and writing tasks appear prominently. The difference between the multiple-choice results, where writing ranks lower, and the open-text responses suggests the presence of "Social Desirability Bias" in the survey. Students may underreport their use of AI for writing when asked directly, but are more open about it when given space to respond freely.
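The word clouds used for the open-text items are, at bottom, term-frequency counts over the free-text answers after stopword removal. The sketch below shows that underlying step; the example answers, the stopword list, and the `term_frequencies` helper are all invented for illustration and are not the study's materials.

```python
import re
from collections import Counter

# Illustrative open-text answers; the actual survey responses are not public.
answers = [
    "I use it for essays and reports",
    "mostly writing tasks and summarising reports",
    "essays, coding help, and brainstorming",
]

# A tiny ad hoc stopword list for the example; real pipelines use larger lists.
STOPWORDS = {"i", "it", "for", "and", "the", "use", "mostly", "help"}

def term_frequencies(texts, top_n=5):
    """Count content words across free-text answers: the basis of a word cloud."""
    words = (w for t in texts for w in re.findall(r"[a-z]+", t.lower()))
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

print(term_frequencies(answers))
```

In a rendered word cloud each term's font size is scaled to its count, which is why "essays" and "reports" would dominate this toy example just as writing-related terms dominate Fig. 6.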
This indicates that while cognitive scaffolding is the main reported use, a significant level of text generation is also taking place. Universities therefore need to clearly distinguish between acceptable practices such as editing and support, and unacceptable practices such as ghostwriting. Many students appear to view AI as a "Super-Editor" that helps refine their ideas, a distinction that is often overlooked in strict, zero-tolerance plagiarism policies.

Figure 7: Student confidence in using AI tools on a scale of 1–5, where 1 represents not at all confident and 5 represents extreme confidence.

Figure 8: Student views on GenAI being essential for their academic success on a scale of 1–5, where 1 represents not at all essential and 5 represents absolutely essential.

Q7: To what extent do you feel confident in your ability to use AI tools effectively?

Figure 7 shows that student confidence in using AI tools for academic tasks follows a bell-shaped pattern. The largest group of respondents report being Moderately confident at 36.2%, followed by Very confident at 26.7%. Smaller proportions identify as Slightly confident at 19.1%, Not at all confident at 9.7%, and Extremely confident at 8.3%. When these confidence levels are compared with students' views on the importance of AI for their future, a clear "Confidence–Competence Paradox" becomes visible. Although students see AI as essential, their confidence in using it effectively peaks at a moderate level (Kim et al., 2025). This pattern reflects a state of "Conscious Incompetence" (Kim et al., 2025; Asio, 2024), where students are aware that their knowledge and skills are incomplete.
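Likert items like those in Figures 7 and 8 are summarised by converting per-level counts into percentages and, optionally, a mean rating. The sketch below shows that arithmetic; the counts are invented to roughly resemble the Figure 7 distribution and are not the study's data, and `likert_summary` is an illustrative helper, not the authors' code.

```python
# Illustrative counts for a 1-5 Likert item (1 = not at all, 5 = extremely
# confident); invented to total 436, the study's sample size.
counts = {1: 42, 2: 84, 3: 158, 4: 116, 5: 36}

def likert_summary(counts):
    """Percentage per level plus the mean rating for a 1-5 Likert item."""
    n = sum(counts.values())
    pct = {level: round(100 * c / n, 1) for level, c in counts.items()}
    mean = sum(level * c for level, c in counts.items()) / n
    return pct, round(mean, 2)

pct, mean = likert_summary(counts)
print(pct, mean)
```

A mean near the midpoint with the mode at level 3, as here, is exactly the "peaks at a moderate level" shape the paper describes; reporting the full percentage distribution rather than the mean alone preserves that shape.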
This awareness can contribute to anxiety, as students feel pressure to master a technology that universities often describe as central to future academic and professional success, while receiving limited guidance on how to do so. From an institutional perspective, this finding signals an important opportunity. The dominance of moderate confidence suggests that students are realistic about their limitations and open to learning. However, without structured support, this uncertainty may lead to avoidance of AI tools or to hidden and potentially unethical forms of use.

Q8: How essential are AI tools to your academic success?

Figure 8 shows that a large majority of respondents view generative AI tools as essential to their academic success. About 30.1% describe them as Moderately essential, 28.0% as Very essential, and 11.7% as Absolutely essential. Together, these groups account for nearly 70% of the sample. In contrast, smaller shares of students consider AI tools to be Slightly essential at 22.9% or Not at all essential at 7.3%. When a majority of students categorize AI tools as essential, these systems move beyond the role of optional support and become part of core academic infrastructure (Luo et al., 2025; Liang et al., 2025), similar to internet access or digital learning platforms. Such reliance also creates vulnerability, as students become dependent on third-party corporate tools that may change in availability, cost, or performance over time. In addition, the perception of AI as "absolutely essential" suggests that many current assessment tasks focus on skills that AI can easily perform, such as basic summarisation or synthesis. This raises important concerns for curriculum design and assessment practices. If assessments primarily measure skills that are easily replicated by AI, students may feel compelled to rely on these tools to remain competitive.
Universities therefore need to rethink assessment strategies to place greater emphasis on human-centered skills, including critical thinking, judgment, and originality, rather than on tasks that can be readily automated by AI systems.

Figure 9: Student views of the resources and guidance the University provided on GenAI, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Figure 10: Student-rated levels of awareness regarding the University's official AI assessment policies, on a scale of 1–5, where 1 represents not informed and 5 represents extremely informed.

Q9: My university has provided sufficient resources or guidance.

Figure 9 shows a mixed assessment of institutional support for effective use of AI. A total of 57.1% of respondents agree that their university has provided sufficient resources or guidance, including 36.9% who agree and 20.2% who strongly agree. At the same time, a notable proportion of students express uncertainty or dissatisfaction. About 28.2% select a neutral response, while 14.7% report disagreement, including 8.0% who disagree and 6.7% who strongly disagree. This pattern suggests that although students are aware of institutional policies, many are not fully satisfied with the support provided to help them comply with those policies. The findings point to a "Policing over Pedagogy" approach (Pudasaini et al., 2024; Mah and Groß, 2024), in which the institution has been effective in communicating restrictions on AI use but less effective in offering practical guidance on appropriate and productive use. The sizeable share of neutral and negative responses indicates that students clearly distinguish between being given permission and being given support.
While policies and warnings are visible, students perceive a lack of investment in practical resources such as licensed tools, prompt libraries, and clear technical guidance. This gap between regulation and support risks creating a climate of frustration, where students feel that institutional efforts are focused more on managing risk and liability than on actively developing their AI-related skills for the future.

Q10: How well-informed are you about official policies?

Figure 10 shows a high level of student awareness of institutional policies related to the use of AI in assessment. About 39.6% of respondents report being Very informed, and a further 29.7% identify as Extremely informed. Together, these groups account for 69.3% of the sample. Smaller proportions describe themselves as Moderately informed at 18.7%, Slightly informed at 7.1%, or Not informed at all at 4.8%. This unusually high level of policy awareness suggests that the university has been effective in communicating its rules on AI use. However, this apparent "Compliance Victory" (Mah and Groß, 2024) may come at a cost. High awareness of restrictive policies often reflects a focus on control and surveillance rather than on learning and guidance. In this context, students may understand what is prohibited but receive little clarity on how AI can be used ethically and productively. This highlights a key distinction between notification and education. Knowing the consequences of misuse does not equip students with the skills needed to use AI responsibly. While the institution appears to have successfully managed risk, the findings suggest that it has not yet given equal attention to supporting innovation and ethical engagement with AI tools.

Figure 11: Student participation levels in university-provided workshops on GenAI use in learning.

Q11: Have you participated in any university-provided training?

Figure 11 shows that 32.1% of respondents have taken part in university-provided training on generative AI, including 21.1% who attended once and 10.8% who participated multiple times. In contrast, 48.9% of students report that they have not attended any training but would like to do so, while 12.8% are not aware that such training exists. Only 6.4% state that they are not interested in participating. These results reveal a clear "Latent Demand Crisis" (Kim et al., 2025; Mah and Groß, 2024; Luo et al., 2025), marked by a wide gap between students who have received training and those who want access to it. This gap is unlikely to reflect low motivation, as interest levels are high across the sample. Instead, it suggests problems related to accessibility, visibility, or scheduling of training opportunities. When institutional support does not meet student demand, learners may turn to informal online sources for guidance, where information is often unregulated and may encourage unethical practices. The findings therefore indicate a missed opportunity for universities to shape responsible and effective AI use. With a large group of students actively seeking guidance, improving the reach and design of AI training programs represents a practical and immediate step toward strengthening academic support and skill development.

Figure 12: Student agreement regarding the explicit inclusion of AI skills in programme KSBs, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Figure 13: Student views on the integration of AI literacy into the academic curriculum, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Figure 12 indicates strong student support for the formal inclusion of AI-related skills within programme Knowledge, Skills, and Behaviours (KSBs). A total of 63.8% of respondents express agreement, including 47.0% who agree and 16.8% who strongly agree. In comparison, 24.4% of students remain neutral, while only a small proportion report disagreement (6.0% disagree and 5.8% strongly disagree). This clear student mandate for formalizing AI skills in KSBs reflects underlying employability anxiety (Liang et al., 2025; Deep et al., 2025). Students increasingly view AI literacy as an essential professional competence rather than a short-term academic aid. Their demand for formal inclusion signals a desire for universities to recognise and validate the time spent developing these skills. By seeking this recognition, students aim to legitimise learning that currently occurs outside the formal curriculum and to ensure that their degrees communicate relevant technical competencies to employers. However, this expectation presents a challenge for universities, as formal inclusion requires reliable assessment practices, and many institutions currently lack clear frameworks or expertise to evaluate AI-related practices such as interactive use of chat-based tools.

Q13: Should AI literacy be formally integrated into your curriculum?

Figure 13 shows strong support for the formal integration of AI literacy into the academic curriculum. A total of 63.3% of respondents express agreement, including 44.7% who agree and 18.6% who strongly agree. In contrast, 23.6% of students select a neutral position, while a relatively small proportion report disagreement (7.3% disagree and 5.7% strongly disagree).
Similar to the mandate for inclusion in Knowledge, Skills, and Behaviours, this level of agreement reflects a broader fear of academic and professional obsolescence. Students increasingly perceive degrees without AI training as outdated and believe this places them at a disadvantage compared to graduates from institutions that have already adapted. Their call for formal integration (Liang et al., 2025; Luo et al., 2025) also reveals a mismatch in expectations. Students operate within the fast pace of technological change, while universities function within slower cycles of accreditation and curriculum approval. This difference in timelines creates tension between student demand and institutional capacity for reform. In addition, students may not fully recognise that formal integration requires formal assessment, which could result in them being evaluated on skills they have largely developed independently.

Figure 14: Student satisfaction levels with university communication and transparency on GenAI integration in courses, on a scale of 1–5, where 1 represents very dissatisfied and 5 represents very satisfied.

Figure 15: Student views regarding lecturers' encouragement of AI use in the curriculum.

Q14: How satisfied are you with the university's communication?

Figure 14 shows generally high levels of satisfaction with institutional communication and transparency regarding AI use. A total of 64.4% of respondents report being satisfied, including 46.7% who are satisfied and 17.7% who are very satisfied. In comparison, 27.1% of students select a neutral response, while only a small minority express dissatisfaction (5.3% dissatisfied and 3.2% very dissatisfied). This high level of satisfaction reflects what can be described as a "transparency placebo" (Nguyen, 2025; Kim et al., 2025).
Students feel reassured because clear communication reduces uncertainty around acceptable and unacceptable uses of AI, particularly in relation to academic misconduct. However, satisfaction with communication should not be interpreted as satisfaction with learning support. While the university has effectively reduced ambiguity anxiety by clarifying rules and boundaries, this outcome represents a defensive achievement rather than an educational one. It demonstrates success in policy communication and risk management, but provides limited insight into the quality or availability of pedagogical guidance that would enable students to use AI as a meaningful learning tool.

Q15: Do lecturers actively encourage the exploration of AI?

Figure 15 shows a divided pattern in students' perceptions of lecturer encouragement to explore and use AI tools within the curriculum. A total of 49.4% of respondents report encouragement, including 38.8% who agree and 10.6% who strongly agree. At the same time, 35.1% of students select a neutral position, while 15.6% express disagreement (11.5% disagree and 4.1% strongly disagree). This uneven distribution points to an inconsistency crisis in institutional practice. The high level of neutrality suggests that a significant proportion of lecturers are avoiding engagement with AI-related discussions altogether. In the current educational context, such neutrality effectively functions as discouragement, as it leaves students without clear guidance.

Figure 16: Student preferences for Generative AI training topics.
As a result, students may rely on informal or hidden forms of AI use that fall outside transparent academic practice. This inconsistency undermines institutional equity, since students in different courses or sections may receive conflicting messages about acceptable AI use. Such variation increases student anxiety and creates conditions for future disputes over assessment fairness, particularly when access to and guidance on learning tools are not applied consistently across the institution.

Figure 17: Word cloud showing student preferences for Generative AI training topics.

Figure 18: Student expectations for instructor-led guidance on AI usage in assessments, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Q16 & Q16(b): Which types of AI training would be most beneficial?

Figure 16, which reports multiple responses, shows that the ethical and responsible use of AI tools is the most frequently selected training priority, identified by 63.8% of respondents. This is followed by interest in research support at 60.1%, data analysis using AI at 54.1%, and coding or programming assistance at 50.9%. Smaller but still substantial proportions of students express interest in using AI for academic writing (42.7%) and for time and task management (44.3%). The prominence of ethical and responsible use as the top priority suggests a strong safety-first orientation among students. Rather than reflecting a demand for abstract ethical discussion, this preference appears to signal a need for clear guidance on acceptable use and on how to avoid academic misconduct. Students are seeking precise boundaries that allow them to use AI tools with confidence. The word cloud in Fig. 17 reinforces this interpretation by emphasising terms such as structure, clarity, and paraphrasing, which point to practical concerns about improving written work.
This pattern indicates that students view AI primarily as a tool for refining and editing academic output. At the same time, strong demand for research support and data analysis suggests a desire to reduce routine technical tasks, enabling students to concentrate on higher-level analysis and synthesis.

Q18: I expect course tutors to provide clear guidance.

Figure 18 shows strong student expectations for explicit guidance on the use of AI in assessments. A total of 65.2% of respondents express agreement, including 45.6% who agree and 19.4% who strongly agree. In comparison, 22.1% of students select a neutral response, while only a small minority report disagreement (7.6% disagree and 5.3% strongly disagree). This high level of agreement indicates that students perceive guidance on AI use as a basic academic entitlement rather than optional support. Students show low tolerance for ambiguity and increasingly treat assessment rubrics as formal agreements that define acceptable practice. When instructions on AI use are not clearly stated, students report feeling uncertain and exposed to risk. This pattern reflects a broader flight from ambiguity (Pudasaini et al., 2024; Kim et al., 2025), which places pressure on universities to provide highly detailed and standardised assessment guidance. In the absence of explicit instructions, students may interpret later allegations of misconduct as unfair, using the lack of prior guidance as a mitigating factor in academic integrity processes.

Q19: In which parts of the curriculum would you like integration the most?

Figure 19 shows that students express the strongest preference for integrating AI tools within research methods and dissertation modules, selected by 26.3% of respondents. This is followed by time management and productivity at 20.6% and lab work or technical exercises at 15.9%.
Figure 19: Student-identified priorities for the embedding of AI competencies in the curriculum.

Lower levels of preference are reported for lectures and content delivery (14.1%), group work or collaborative learning (12.7%), and especially assessment and feedback (10.4%). The strong preference for AI integration in research methods and dissertation modules (Luo et al., 2025; Liang et al., 2025) reflects students' desire for support in managing the most demanding component of their degree. The dissertation requires sustained independent work, complex reading, and methodological planning, and students appear to view AI as a tool to reduce the workload associated with these processes. This trend presents a significant challenge for universities, as the dissertation has traditionally served as evidence of independent research capability. Integrating AI into this space raises questions about how academic independence and originality should be defined. At the same time, the relatively low interest in AI use for assessment and feedback suggests that students prefer AI as a support tool rather than an evaluative authority, indicating limited trust in automated judgment within academic assessment.

Q20 & Q20(b): In what ways might GenAI negatively affect your experience?

Figure 20: Student concerns regarding the negative impact of Generative AI on the academic experience.

Figure 21: Word cloud of student-reported negative impacts of Generative AI on the academic experience.

Figure 22: Student views on whether reliance on AI reduces opportunities to demonstrate their own abilities, on a scale of 1–5, where 1 represents not at all and 5 represents extremely.

Figure 20 shows that students report several major concerns about the potential negative impact of generative AI on their academic experience. Reporting multiple responses to this question, the most frequently cited concern is the risk of over-reliance and reduced critical thinking, selected by 56.1% of respondents. This is followed by data privacy and ethical concerns at 53.6%. Significant proportions of students also identify unclear rules related to plagiarism or academic misconduct (43.9%) and concerns about low-quality or inaccurate AI outputs (43.0%). A smaller but notable group highlights insufficient guidance or training from lecturers or the university (33.7%). Only 14.7% of respondents indicate that they do not believe AI tools have a negative effect on their academic experience. These findings suggest a high level of student awareness of what can be described as cognitive atrophy, with reduced critical thinking emerging as the primary concern.

Figure 23: Student assessment of lecturer preparedness regarding GenAI for teaching and assessments.
Students appear to recognise that while AI increases efficiency, it can also discourage deeper engagement unless limits are in place. Qualitative responses in Fig. 21 support this interpretation, as reduced critical thinking is repeatedly mentioned alongside privacy and ethical concerns. This pattern indicates that students are not rejecting AI use, but are instead seeking institutional structures, such as assessment designs that require active reasoning, to counterbalance over-reliance. In addition, strong concern about data privacy points to a critical awareness of the risks associated with commercial AI platforms, challenging assumptions that students are unconcerned about how their data are used.

Q22: Has reliance on AI reduced opportunities to demonstrate ability?

Figure 22 shows that a majority of students believe increased reliance on AI tools reduces opportunities to demonstrate their own academic abilities. In total, 63.4% of respondents report a moderate to extreme impact, including 38.4% who select 'moderately', 19.2% 'very much', and 5.8% 'extremely'. In comparison, 22.7% report only a slight impact, while 13.9% indicate no impact at all. These responses suggest that students perceive AI use as flattening differences in academic performance. This process, described as a homogenization of merit (Kofinas et al., 2025; Pudasaini et al., 2024), appears to raise the baseline level of work while reducing the visibility of exceptional performance. High-achieving students may feel that their originality and personal voice are less distinguishable when polished and well-structured outputs can be easily produced with AI assistance. As a result, the signalling value of grades may be weakened if average-quality work becomes easier to generate.
This finding indicates that current assessment practices may unintentionally favour AI-like characteristics such as clarity, structure, and grammatical accuracy over less refined but original human thinking, contributing to a sense of credential inflation in which high grades feel less strongly earned.

Q23: How well prepared are your lecturers?

Figure 23 shows mixed student perceptions of lecturer preparedness to use AI tools effectively in teaching and assessment. A total of 63.0% of respondents view their lecturers as at least moderately prepared, including 32.3% who select moderately prepared and 30.7% who report well prepared. In contrast, 24.5% of students perceive lower levels of readiness, with 13.9% describing lecturers as slightly prepared and in need of further training, and 10.6% viewing them as not prepared at all. Only 12.5% of respondents consider lecturers to be very well prepared to confidently guide students in AI use. These findings highlight a clear competence gap (Mah and Groß, 2024; Kim et al., 2025), as nearly one quarter of students question their lecturers' readiness in this area. In higher education, academic authority is closely linked to perceived expertise, and doubts about technological competence can weaken this authority. The large proportion of moderate ratings suggests that students believe lecturers are managing the transition to AI rather than actively leading it. This perception may be influenced by a digital native bias, where students assess preparedness based on visible technical fluency rather than deeper pedagogical understanding, potentially mistaking difficulties with interfaces for a lack of insight into the broader educational implications of AI.

Figure 24: Student attitudes toward the mandatory disclosure of AI-generated content in assessments, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Figure 25: Student expectations regarding the institutional responsibility to provide equitable access to GenAI tools, on a scale of 1–5, where 1 represents strongly disagree and 5 represents strongly agree.

Q24: Should students be required to declare AI-generated content?

Figure 24 shows mixed but generally cautious support for requiring students to declare AI-generated content in academic submissions. A total of 46.1% of respondents express agreement, including 29.3% who agree and 16.8% who strongly agree. At the same time, a large proportion of students select a neutral response (35.0%), while a smaller minority report disagreement (10.8% disagree and 8.1% strongly disagree). Support for mandatory declaration reflects what can be described as an "honesty box" approach (Pudasaini et al., 2024). Students appear to view declaration as a protective administrative measure that allows them to legitimise their AI use and reduce anxiety about potential misconduct. However, this approach raises an enforceability paradox. Students who use AI in limited or permitted ways are more likely to declare their use, while those who rely on AI in inappropriate ways may be least willing to self-report. As a result, the policy risks capturing information mainly from compliant students while failing to address more serious misuse. The large neutral group further suggests that many students are indifferent to the mechanism of declaration itself and are primarily concerned with whether the process provides a sense of security rather than ensuring substantive academic integrity.

Q25: Should universities ensure equal access to GenAI tools?
In Figure 25, the data reveal a significant consensus regarding the institutional responsibility to bridge the digital divide in artificial intelligence. A clear majority of students support the mandate for equal access, with 36.2% agreeing and 20.5% strongly agreeing, resulting in a combined positive sentiment of 56.7%. In contrast, opposition is minimal, with only 11.5% of respondents expressing disagreement (7.4% disagreeing and 4.1% strongly disagreeing). A notable 31.8% of the cohort remained neutral, a trend that may reflect "Privilege Blindness" among those who already possess the financial means to access premium tools. This majority agreement underscores the "Algorithmic Paywall" crisis discussed in (Mah and Groß, 2024; Kim et al., 2025). Students recognize that a "Two-Tier" system has emerged, where wealthier students access superior models while others must rely on inferior free versions. By demanding equal access, students are arguing that AI is no longer a luxury good but has become essential academic infrastructure. For the university, this presents a significant budgetary challenge: complying with this mandate requires massive investment, yet failing to do so effectively endorses a "pay-to-win" system where financial capital directly translates into academic advantage (Mah and Groß, 2024; Kim et al., 2025).

Q26 & Q26(b): What would improve awareness of AI policies?

Figure 26: Student-preferred methods for improving student awareness of AI guidelines.
Figure 27: Word cloud of preferred methods for improving student awareness of AI guidelines.

Figure 26 shows student preferences for how institutions should improve awareness of AI guidelines. These responses indicate that students are moving beyond passive forms of communication, such as policy documents or email notices, and instead favour active and timely forms of learning. Preferences for mandatory quizzes and integration into coursework suggest a desire for structured confirmation of acceptable practice. Students appear to seek formal assurance that they understand institutional rules, which they believe will protect them from future accusations of academic misconduct. The word cloud shown in Fig. 27, which highlights terms such as explanations, directly, and coursework, supports this interpretation by showing that students want clear, applied guidance rather than general statements. Overall, the findings suggest that students value practical, embedded instruction on AI use that provides both clarity and a sense of procedural security.

5. Discussion

This study makes a clear pedagogical contribution by showing that student engagement with Generative AI in higher education is shaped less by basic awareness and more by guidance, confidence, and academic framing. Although students report high familiarity with GenAI, actual academic use is lower and confidence in effective use remains moderate. This gap suggests that technical fluency does not automatically produce critical or responsible use. The findings therefore support treating AI literacy as a core area of learning rather than a peripheral digital skill (Kelly et al., 2023; Kasneci et al., 2023).
The reported uses of GenAI (concept clarification, brainstorming, and summarising) indicate that many students use AI as a cognitive support tool rather than as a direct substitute for their own work. Pedagogically, this matters because it shifts the focus from prohibition to structured integration. The key issue is not only whether students use AI, but how institutions help them use it for judgment, verification, and deeper understanding.

A second contribution is the evidence of a mismatch between student demand and institutional provision. Strong support for AI literacy in the curriculum, high interest in training, and expectations for tutor guidance suggest that students see GenAI as part of mainstream learning, not an external add-on. The findings also highlight the need for explicit structures for ethical practice, evaluative thinking, and appropriate academic use (Yusuf et al., 2024; Chan and Hu, 2023).

The study is also relevant to assessment design. Student concerns about over-reliance, reduced critical thinking, privacy risks, and incorrect outputs indicate that the challenge is not only AI use itself, but how to assess independent reasoning in AI-enabled contexts. These results align with calls for assessment approaches that prioritise process, justification, and transparency over product-only evaluation (Xia et al., 2024).

Finally, the findings show that pedagogical change depends on institutional readiness as well as student readiness. Mixed perceptions of lecturer preparedness and concerns about unequal tool access suggest that effective implementation requires both staff development and equity-focused infrastructure. Overall, the study identifies practical gaps that institutions must address if GenAI is to be implemented as an educationally meaningful practice rather than treated as a purely administrative control problem.

6.
Conclusion

The integration of GenAI into higher education is no longer a prospective shift but a present reality. This study of 436 Computer Science postgraduates reveals that students increasingly view GenAI as essential academic infrastructure, comparable to internet access. However, a significant gap exists between student awareness and institutional support. While the university has successfully communicated its restrictive policies, leading to high levels of awareness, it has been less effective in providing the practical, licensed resources and pedagogical training students actively seek.

The data highlight several critical areas for institutional reform:

• From restriction to integration: Students do not view AI as a replacement for educators but as part of a mentor-led human-AI partnership. There is a clear mandate for AI literacy to be embedded directly into curricula and formalized in KSBs to ensure graduates are workplace-ready.

• Addressing the confidence-competence paradox: Institutions must help students move beyond usage familiarity toward critical familiarity, ensuring they can identify hallucinations, bias, and inaccuracies.

• Ensuring equity: To prevent a two-tier education system, universities must mitigate the Algorithmic Paywall by providing equitable access to premium GenAI models for all students, regardless of financial standing.

Ultimately, the goal of higher education in the AI era should not be the detection of AI use, but the cultivation of higher-order human skills (critical thinking, ethical judgment, and originality) that GenAI cannot replicate. By moving from a policing to a pedagogical approach, institutions can empower students to become informed, ethical, and innovative co-creators in an AI-saturated world.

Limitations of the Study

• The study is based on a single institutional setting, targeting postgraduate computer science students, which may affect its generalizability to other disciplines.
• The study is based on self-reported data, and participants' responses could be influenced by various biases, including social desirability bias or uncertainty about institutional norms related to AI use.

• The study is descriptive in nature, establishing patterns related to familiarity, usage, and perceptions of Generative AI, but not testing causal relationships between variables.

Future Research Directions

• Future studies could be conducted in cross-disciplinary and cross-institutional settings to establish differences in AI usage and perceptions across varied academic contexts.

• Mixed-methods studies, including interviews, observations, or analyses of learning analytics, could offer deeper insights into students' actual AI usage in their academic practices.

• Longitudinal studies are needed as Generative AI becomes increasingly integrated into higher education, affecting students' capabilities, perceptions, and pedagogical approaches over time.

References

Al-Emran, M., Elsherif, H. M. and Shaalan, K. (2018). Investigating attitudes towards the use of mobile learning in higher education, Computers in Human Behavior, 56, 93–102.

Ardito, C. G. (2024). Generative AI detection in higher education assessments, New Directions for Teaching and Learning, 2024, 1–18.

Arowosegbe, A., Alqahtani, J. S. and Oyelade, T. (2024). Perception of generative AI use in UK higher education, Frontiers in Education, 9, 1463208.

Arowosegbe, J., Rahman, S. and Parker, L. (2024). Students' perceptions and concerns on generative AI use in higher education: A UK survey, British Journal of Educational Technology, 55, 742–759.

Asio, J. M. R. (2024). AI literacy, self-efficacy, and self-competence among college students: Variances and interrelationships among variables, MOJES: Malaysian Online Journal of Educational Sciences, 12(3), 44–60.

Ayyoub, A. M., Khlaif, Z. N., Shamali, M., Abu Eideh, B., Assali, A., Hattab, M. K., Barham, K. A.
and Bsharat, T. R. K. (2025). Advancing higher education with GenAI: Factors influencing educator AI literacy, Frontiers in Education, 10, 1530721.

Bender, E. (2024). Awareness of artificial intelligence as an essential digital literacy: ChatGPT and GenAI in the classroom, Digital Education Review, 45, 77–94.

Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big?, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), pp. 610–623.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Amodei, D. and et al. (2020). Language models are few-shot learners, Advances in Neural Information Processing Systems, 33, 1877–1901.

Brynjolfsson, E. and McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, W. W. Norton.

Chan, C. K. Y. (2024). Generative AI in Higher Education: The ChatGPT Effect, Taylor & Francis.

Chan, C. K. Y. and Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education, International Journal of Educational Technology in Higher Education, 20.

Chiu, T. K. F. (2024a). Future research recommendations for transforming higher education with generative AI, Computers and Education: Artificial Intelligence, 6, 100197.

Chiu, T. K. F. (2024b). Generative artificial intelligence and the redesign of assessment in higher education, Assessment & Evaluation in Higher Education, 49, 223–238.

Christ-Brendemühl, S. (2025). Leveraging generative AI in higher education: An analysis of opportunities and challenges addressed in university guidelines, European Journal of Education, 60, e12891.

Deep, P. D., Edgington, W. D., Ghosh, N. and Rahaman, M. S. (2025).
Evaluating the effectiveness and ethical implications of AI detection tools in higher education, Information, 16(10), 905.

Francis, N. J., Jones, S. and Smith, D. P. (2025). Generative AI in higher education: Balancing innovation and integrity, British Journal of Biomedical Science, 81, 14048.

Heaven, W. D. (2023). ChatGPT is everywhere. Here's where it came from, MIT Technology Review. Retrieved from https://www.technologyreview.com/2023/01/26/1067824/chatgpt-is-everywhere/.

Hollands, F. M. and Tirthali, D. (2014). MOOCs: Expectations and reality, Technical report, Columbia University, Teachers College, Center for Benefit-Cost Studies of Education.

Hu, M. (2023). ChatGPT as the fastest-growing app in history: Implications for higher education, Computers and Education: Artificial Intelligence, 4, 100149.

Ifenthaler, D. and Yau, J. Y.-K. (2020). Utilising learning analytics for study success: Reflections on current empirical findings, Research and Practice in Technology Enhanced Learning, 15, 1–13.

Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A. and Al-Alawi, A. N. (2024). Drivers of generative AI adoption in higher education through the lens of the theory of planned behaviour, Technology in Society, 77, 102521.

Jin, Y., Yan, L., Echeverria, V., Gašević, D. and Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines, Computers and Education: Artificial Intelligence, 8, 100348.

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Kasneci, G. and et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education, Learning and Individual Differences, 103, 102274.

Kelly, A., Sullivan, M. and Strampel, K. (2023).
Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines, Journal of University Teaching & Learning Practice, 20 (Article 12).

Khlaif, Z. N., Al-Abed, W. A., Salama, N. and Abu Eideh, B. (2025). Redesigning assessments for AI-enhanced learning: A framework for educators in the generative AI era, Education Sciences, 15, 174.

Khlaif, Z. N., Khlaif, B., Ayoub, S., Ayyoub, A. M. and Hassan, R. A. (2024). University teachers' views on the adoption and integration of generative AI tools for student assessment in higher education, Education Sciences, 14, 1090.

Kim, J., Klopfer, M., Grohs, J. R., Eldardiry, H., Weichert, J., Cox, L. A. and Pike, D. (2025). Examining faculty and student perceptions of generative AI in university courses, Innovative Higher Education, pp. 1–33.

Klimova, B., Bachmann, P. and Frutos-Bencze, D. (2025). The use of ChatGPT in academia: Perspectives of higher education students, Cogent Education, 12, 2508216.

Kofinas, A. K., Tsay, C. H.-H. and Pike, D. (2025). The impact of generative AI on academic integrity of authentic assessments within a higher education context, British Journal of Educational Technology.

Kovari, A. (2025). Ethical use of ChatGPT in education—best practices to combat AI-induced plagiarism, Frontiers in Education, 9, 1465703.

Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G. and Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities, Education Sciences, 14, 503.

Liang, J., Stephens, J. M. and Brown, G. T. (2025). A systematic review of the early impact of artificial intelligence on higher education curriculum, instruction, and assessment, Frontiers in Education, Vol. 10, Frontiers Media SA, p. 1522841.

Luo, J. (2024a).
A critical review of GenAI policies in higher education assessment: A call to reconsider the "originality" of students' work, Assessment & Evaluation in Higher Education, 49, 651–664.

Luo, J., Zheng, C., Yin, J. and Teo, H. H. (2025). Design and assessment of AI-based learning tools in higher education: A systematic review, International Journal of Educational Technology in Higher Education, 22(1), 42.

Luo, T. (2024b). Academic integrity in the era of generative AI: A comparative policy analysis of global universities, Journal of Higher Education Policy and Management, 46, 257–274.

Mah, D.-K. and Groß, N. (2024). Artificial intelligence in higher education: Exploring faculty use, self-efficacy, distinct profiles, and professional development needs, International Journal of Educational Technology in Higher Education, 21(1), 58.

Mitevska Petrusheva, K. and Idrizi, E. (2023). AI technologies and learning: Tertiary level students' awareness and perceptions, International Journal of Emerging Technologies in Learning, 18, 86–99.

Ng, A., Tang, C. and Chan, L. (2024). Equity and accessibility in AI-supported learning: A review of generative AI in higher education, Computers & Education, 206, 104977.

Nguyen, K. V. (2025). The use of generative AI tools in higher education: Ethical and pedagogical principles, Journal of Academic Ethics, pp. 1–21.

Nikolic, S., Wentworth, I., Sheridan, L., Moss, S., Duursma, E., Jones, R. A., Ros, M. and Middleton, R. (2024). A systematic literature review of attitudes, intentions and behaviours of teaching academics pertaining to AI and generative AI (GenAI) in higher education: An analysis of GenAI adoption using the UTAUT framework, Australasian Journal of Educational Technology, 40, 56–75.

OECD (2023). OECD Digital Education Outlook 2023: Pushing the Frontiers with AI, Blockchain, and Robots, OECD Publishing.

Ogunleye, B., Zakariyyah, K. I., Ajao, O., Olayinka, O.
and Sharma, H. (2024). Higher education assessment practice in the era of generative AI tools, 1, 1.

Perkins, M., Roe, J., Binh, H. V., Postma, D., Hickerson, D., McGaughran, J. and Khuat, H. Q. (2024). GenAI detection tools, adversarial techniques and implications for inclusivity in higher education, arXiv.

Pudasaini, S., Miralles-Pechuán, L., Lillis, D. and Llorens Salvador, M. (2024). Survey on AI-generated plagiarism detection: The impact of large language models on academic integrity, Journal of Academic Ethics, pp. 1–34.

Roe, J., Perkins, M. and Ruelle, D. (2024). Is GenAI the future of feedback? Understanding student and staff perspectives on AI in assessment, Intelligent Technologies in Education, 5, 1–19.

Rudolph, J., Tan, S. and Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?, Journal of Applied Learning and Teaching, 6.

Russell Group (2023). Principles on the use of generative AI tools in education, https://russellgroup.ac.uk/news/principles-on-use-of-generative-ai-tools-in-education.

Shuhaiber, A., Kuhail, M. A. and Salman, S. (2025). ChatGPT in higher education—a student's perspective, Computers in Human Behavior Reports, 17, 100565.

Siemens, G. (2005). Connectivism: A learning theory for the digital age, International Journal of Instructional Technology and Distance Learning, 2, 3–10.

Singh, H., Tayarani-Najaran, M.-H. and Yaqoob, M. (2023). Exploring computer science students' perception of ChatGPT in higher education: A descriptive and correlation study, Education Sciences, 13(9), 924.

Sousa, M. J. and Cardoso, A. (2025). Generative AI in higher education: Teaching innovation, inclusion, and assessment reform, Education and Information Technologies, 30, 85–102.

UNESCO (2023).
Guidance for Generative AI in Education and Research, United Nations Educational, Scientific and Cultural Organization. URL: https://unesdoc.unesco.org/ark:/48223/pf0000386797

Wang, H., Dang, A., Wu, Z. and Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities' policies, resources, and guidelines, Computers and Education: Artificial Intelligence, 7, 100326.

Weng, X., Xia, Q., Gu, M., Rajaram, K. and Chiu, T. K. F. (2024). Assessment and learning outcomes for generative AI in higher education: A scoping review on current research status and trends, Australasian Journal of Educational Technology, 40, 37–55.

Williamson, B. and Eynon, R. (2020). Historical threads, missing strands, and future directions in AI in education, Learning, Media and Technology, 45, 217–235.

Xia, Q., Weng, X., Ouyang, F., Lin, T. T. J. and Chiu, T. K. F. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education, International Journal of Educational Technology in Higher Education, 21, 40.

Yusuf, A., Pervin, N. and Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives, International Journal of Educational Technology in Higher Education, 21, 21.

Zubair, M., Satti, S. A., Ahmad, I., Dahdoul, N., Al-Zubeidi, A. and Alsalhi, N. S. (2025). Determinants of student adoption of generative AI in higher education, The Electronic Journal of e-Learning, 23, 16–30.