Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI

Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Problem-Based Learning (PBL) has significantly impacted biomedical engineering (BME) education since its introduction in the early 2000s, effectively enhancing critical thinking and real-world knowledge application among students. With biomedical engineering rapidly converging with artificial intelligence (AI), integrating effective AI education into established curricula has become challenging yet increasingly necessary. Recent advancements, including AI’s recognition by the 2024 Nobel Prize, have highlighted the importance of training students comprehensively in biomedical AI. However, effective biomedical AI education faces substantial obstacles, such as diverse student backgrounds, limited personalized mentoring, constrained computational resources, and difficulties in safely scaling hands-on practical experiments due to privacy and ethical concerns associated with biomedical data. To overcome these issues, we conducted a three-year (2021-2023) case study implementing an advanced PBL framework tailored specifically for biomedical AI education, involving 92 undergraduate and 156 graduate students from the joint Biomedical Engineering program of Georgia Institute of Technology and Emory University. Our approach emphasizes collaborative, interdisciplinary problem-solving through authentic biomedical AI challenges. The implementation led to measurable improvements in learning outcomes, evidenced by high research productivity (16 student-authored publications), consistently positive peer evaluations, and successful development of innovative computational methods addressing real biomedical challenges. Additionally, we examined the role of generative AI both as a teaching subject and an educational support tool within the PBL framework. Our study presents a practical and scalable roadmap for biomedical engineering departments aiming to integrate robust AI education into their curricula.


💡 Research Summary

The paper presents a comprehensive, implementation‑ready framework that integrates generative artificial intelligence (GenAI) into problem‑based learning (PBL) for biomedical engineering (BME) education. Recognizing that traditional PBL demands extensive faculty expertise, continuous curriculum updates, and faces scalability issues—especially when dealing with privacy‑sensitive biomedical data—the authors propose a modular approach that treats GenAI both as a subject of study and as a supportive tool for knowledge synthesis and coding.

A three‑year case study (2021‑2023) was conducted jointly by Georgia Institute of Technology and Emory University, involving 248 students (92 undergraduates and 156 graduate students). Participants worked in interdisciplinary teams on authentic biomedical AI challenges, ranging from predictive modeling of disease outcomes to the development of robust, reproducible pipelines for medical imaging analysis. The curriculum was structured into four sequential phases: (1) Problem Formation – students receive real‑world briefs and curated datasets; (2) AI‑Supported Knowledge Inquiry – GenAI is used for literature summarization, code scaffolding, and reflective prompts, under strict disclosure, source‑anchoring, verification, and version‑logging policies; (3) Problem Solving – teams design experiments, implement models, produce model cards, ethical risk assessments, and reproducibility reports; (4) Presentation – written reports, oral presentations, and peer reviews consolidate learning.

Key innovations include:

  • Guardrails for GenAI use: mandatory logging of prompts and model versions, citation of source material for every AI‑generated output, instructor verification before submission, and explicit ethical/privacy checks.
  • Balanced team formation algorithm: teams are composed to equalize expertise across biology, medicine, computer science, and engineering, while also accounting for prior AI experience.
  • Multi‑dimensional assessment: beyond traditional exams, the authors evaluate peer‑review scores, rubric‑based project rubrics (technical soundness, clinical relevance, ethical considerations, reproducibility), code reproducibility metrics, and scholarly output.

Outcomes were striking. The cohort produced 16 peer‑reviewed student‑authored publications, a rate several times higher than typical BME PBL programs. Survey data indicated significant gains in confidence using AI tools, perceived equity of learning opportunities, and satisfaction with team collaboration. Objective metrics showed average code reproducibility of 92 % and model performance comparable to state‑of‑the‑art benchmarks in the selected biomedical domains.

To facilitate adoption, the authors release a replication package containing syllabi, weekly milestones, detailed rubrics, team‑formation procedures, and ready‑to‑use GenAI prompt and disclosure templates. This package enables other institutions to deploy the framework with minimal additional development effort.

The study concludes that embedding GenAI within a structured PBL environment can simultaneously address resource constraints, enhance equity, and produce tangible research outputs. It offers a scalable roadmap for BME departments seeking to embed robust AI education, recommending future work on longitudinal tracking of graduates, expansion to diverse clinical problem sets, and continuous alignment with evolving AI policy and regulation.


Comments & Academic Discussion

Loading comments...

Leave a Comment