Development and Validation of a Faculty Artificial Intelligence Literacy and Competency (FALCON-AI) Scale for Higher Education
The integration of artificial intelligence (AI) in higher education underscores the growing importance of faculty AI literacy and competency across teaching, research, and service. Existing AI literacy instruments, however, primarily target the general public, students, or K-12 teachers, and therefore lack the role-embedded indicators and psychometric validation needed for scalable assessment among university faculty. Grounded in the Critical Tech-resilient Literacies (CTRL) framework, this study develops and validates the Faculty Artificial Intelligence Literacy and Competency (FALCON-AI) Scale as a concise and practically deployable tool for higher education contexts. Using a theory-driven development process, we generated an initial pool of 43 items mapped to three literacies (functional, evaluative, and ethical literacy) and situated them across four faculty work domains (general, teaching, research, service/administration), creating a 3 x 4 framework. Content validation was conducted through structured interviews with four subject-matter experts, supplemented by a GPT-based reviewer to triangulate ratings of clarity, relevance, and necessity, yielding refined 39 items for pilot testing. Pilot testing involved 269 valid responses, which were analyzed using confirmatory factor analysis (CFA). CFA evaluated the theoretically specified structure, followed by item reduction to minimize respondent burden while preserving content coverage. The final 23-item FALCON-AI demonstrated good model fit for the AI Literacy x Faculty Work measurement and strong reliability. This study presents a validated FALCON-AI scale with good reliability and validity, offering a refined practical instrument for assessing faculty AI in higher education.
💡 Research Summary
**
The paper addresses the pressing need for a validated instrument that can assess artificial intelligence (AI) literacy and competency specifically among university faculty, a group whose responsibilities span teaching, research, and administrative service. Existing AI literacy scales target the general public, students, or K‑12 educators and therefore lack the role‑embedded constructs and psychometric rigor required for higher‑education contexts. To fill this gap, the authors adopt the Critical Tech‑Resilient Literacies (CTRL) framework, which delineates three interrelated literacies—functional, evaluative, and ethical—and integrates them with four faculty work domains (general, teaching, research, service) to create a 3 × 4 matrix.
Item generation began with a comprehensive literature review and expert workshops, yielding an initial pool of 43 statements that map each literacy to each work domain. Content validation involved structured interviews with four subject‑matter experts and a parallel review by a GPT‑4‑based AI reviewer. Experts rated each item on clarity, relevance, and necessity; the AI reviewer provided triangulated scores. Based on this process, four items were eliminated, resulting in a 39‑item pilot instrument.
The pilot study collected 269 valid responses from faculty members at several U.S. and Korean universities. Data cleaning removed incomplete or inattentive responses. Confirmatory factor analysis (CFA) was then employed to test the hypothesized 12‑factor structure (3 literacies × 4 domains). Initial fit indices (CFI = 0.94, TLI = 0.93, RMSEA = 0.05) indicated acceptable model fit, but several items displayed high cross‑loadings and contributed to respondent fatigue. The authors applied a systematic item‑reduction strategy, considering standardized loadings, item‑total correlations, and content coverage. Sixteen items were removed, producing a final 23‑item scale that preserves representation across all literacies and domains.
Reliability analyses showed strong internal consistency for the overall scale (Cronbach’s α = 0.91) and for each literacy dimension (α = 0.89–0.92). Convergent validity was supported by average variance extracted (AVE > 0.70) for each factor, and discriminant validity was confirmed through the Fornell‑Larcker criterion. Criterion‑related validity was examined by correlating scale scores with self‑reported AI usage frequency; moderate positive correlations (r = 0.34–0.48) demonstrated that higher literacy scores align with greater AI engagement.
The authors argue that the FALCON‑AI instrument fills a critical methodological void, enabling institutions to assess faculty AI readiness, design targeted professional development, and evaluate the impact of AI‑focused workshops or policy interventions. Its concise format reduces respondent burden while maintaining comprehensive coverage of functional, evaluative, and ethical competencies across faculty duties.
Limitations include the convenience sampling limited to a handful of institutions in two countries, which may restrict generalizability across cultural and institutional contexts. The reliance on self‑report data introduces potential social desirability bias, and the rapid evolution of AI tools may render some items outdated without periodic revision. Future work should expand the sample to a broader, more diverse set of universities, incorporate objective usage metrics (e.g., log data from AI platforms), and conduct longitudinal studies to track literacy development over time.
In conclusion, the study presents a rigorously developed and validated 23‑item FALCON‑AI scale grounded in the CTRL framework, offering a practical, psychometrically sound tool for measuring faculty AI literacy and competency in higher education.
Comments & Academic Discussion
Loading comments...
Leave a Comment