Automating the Identification of Feedback Quality Criteria and the CanMEDS Roles in Written Feedback Comments Using Natural Language Processing

Overview
Publisher: Ubiquity Press
Specialty: Medical Education
Date: 2023 Dec 25
PMID: 38144670
Abstract

Introduction: Manually analysing the quality of large volumes of written feedback comments is time-consuming and demands extensive resources and human effort. Therefore, this study aimed to explore whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria and the seven CanMEDS roles in written feedback comments.

Methods: A set of 2,349 labelled feedback comments from five healthcare educational programs in Flanders, Belgium (specialist medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass-multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles.
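
To make the training setup concrete, the sketch below shows how a Dutch BERT model could be fine-tuned for multiclass-multilabel sentence classification with the Hugging Face transformers library. The BERTje checkpoint name (GroNLP/bert-base-dutch-cased) is its public Hugging Face identifier; the label names, hyperparameters and example sentence are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch: fine-tuning BERTje for multilabel classification of feedback
# sentences. Label names, hyperparameters and the toy example are assumptions,
# not the study's exact setup. RobBERT could be swapped in the same way
# (e.g. the pdelobelle/robbert-v2-dutch-base checkpoint).
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

QUALITY_CRITERIA = ["criterion_1", "criterion_2", "criterion_3", "criterion_4"]  # hypothetical label names

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")  # BERTje
model = AutoModelForSequenceClassification.from_pretrained(
    "GroNLP/bert-base-dutch-cased",
    num_labels=len(QUALITY_CRITERIA),
    problem_type="multi_label_classification",  # one sigmoid logit per label, BCE loss
)

class SentenceDataset(Dataset):
    """Pairs each feedback sentence with a multi-hot label vector."""
    def __init__(self, sentences, labels):
        self.enc = tokenizer(sentences, truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor(labels, dtype=torch.float)
    def __len__(self):
        return self.labels.size(0)
    def __getitem__(self, i):
        return {k: v[i] for k, v in self.enc.items()} | {"labels": self.labels[i]}

# Toy training loop over one dummy sentence; the study used 12,452 labelled sentences.
train = SentenceDataset(["De student communiceert duidelijk."], [[0.0, 1.0, 0.0, 0.0]])
loader = DataLoader(train, batch_size=8, shuffle=True)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    out = model(**batch)   # problem_type above makes this BCEWithLogitsLoss
    out.loss.backward()
    optim.step()
    optim.zero_grad()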

Results: The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro average F1-scores of 0.73 and 0.76, respectively. The models predicting the presence of the seven CanMEDS roles attained macro average F1-scores of 0.71 with BERTje and 0.72 with RobBERT.
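
For context, a macro average F1-score is the unweighted mean of the per-label F1-scores, so each criterion or role counts equally regardless of how often it occurs. A minimal sketch, with made-up gold labels and thresholded predictions:

# Macro F1 over multilabel predictions; the arrays are fabricated for illustration.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 1, 0]])  # gold multi-hot labels
y_pred = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 1, 0]])  # model output after thresholding
print(f1_score(y_true, y_pred, average="macro", zero_division=0))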

Discussion: The results showed that a state-of-the-art LLM can identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated using an LLM, saving time and resources.
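
To illustrate the automation the authors describe: once fine-tuned, such a model can score new comments without human raters. A hedged inference sketch; the checkpoint path "./finetuned-bertje", the example sentence and the 0.5 decision threshold are illustrative assumptions.

# Sketch of automated scoring with a fine-tuned checkpoint (path is hypothetical).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("./finetuned-bertje")
model = AutoModelForSequenceClassification.from_pretrained("./finetuned-bertje")
model.eval()

sentence = "Je rapportage was volledig, maar werk aan je tijdsplanning."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits.squeeze(0))  # one probability per label
present = [i for i, p in enumerate(probs) if p > 0.5]  # indices of labels judged present
print(present)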

Citing Articles

Leveraging Narrative Feedback in Programmatic Assessment: The Potential of Automated Text Analysis to Support Coaching and Decision-Making in Programmatic Assessment.

Nair B, Moonen-van Loon J, van Lierop M, Govaerts M. Adv Med Educ Pract. 2024; 15:671-683. PMID: 39050116. PMC: 11268569. DOI: 10.2147/AMEP.S465259.
