Purpose: The objective structured clinical examination (OSCE) assesses clinical competence in health sciences education. There is little research regarding the reliability and validity of using an OSCE during the transition from undergraduate to graduate medical education. The goal of this study was to estimate, using generalizability theory, the reliability of a unique 2-rater entrustable professional activity (EPA)-based OSCE format for the transition to internship.
Method: During the 2018 to 2022 academic years, 5 cohorts of interns (n = 230) at the University of Iowa Hospitals and Clinics participated in a 6-station OSCE delivered during orientation. Univariate and multivariate generalizability studies (G studies) were conducted on the scores generated by the 3 cases in the orientation OSCE that shared the 2-rater format. This analysis was supplemented with an associated decision study (D study).
Results: The univariate G study for the cases that used a simulated patient and a faculty rater demonstrated that this OSCE generated a moderately reliable score with 3 cases. The D study showed that increasing the OSCE to 12 cases yielded a mean score reliable enough (G = 0.76) for making high-stakes normative decisions regarding remediation and readiness to practice. The universe score correlation between the 2 types of raters was 0.398. The faculty ratings displayed a larger proportion of universe (true) score variance and yielded a more reliable score (G = 0.433) compared with the standardized patient ratings (G = 0.337).
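The D-study projection reported above can be illustrated with the Spearman-Brown step-up relation that underlies decision studies in generalizability theory. A minimal sketch follows, using the faculty-rater coefficient (G = 0.433 at 3 cases) from the Results; the underlying variance components are not reported in this abstract, so this is an illustration of the projection logic, not a reproduction of the authors' analysis.

```python
# Illustrative D-study projection via the Spearman-Brown relation.
# Assumption: only the number of cases (the facet of generalization)
# is varied; variance components are folded into the coefficient.

def project_g(g_k: float, k: int, n: int) -> float:
    """Project a generalizability coefficient observed with k cases to n cases."""
    g_1 = g_k / (k - (k - 1) * g_k)        # back out the single-case coefficient
    return n * g_1 / (1 + (n - 1) * g_1)   # step up to n cases

g_12 = project_g(0.433, k=3, n=12)
print(round(g_12, 2))  # ~0.75, in line with the reported G = 0.76
```

Quadrupling the number of cases from 3 to 12 lifts a moderate coefficient into the range conventionally accepted for high-stakes decisions, which is the rationale behind the D study's recommendation.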
Conclusions: This study provides insight into the development of an EPA-based OSCE. The univariate G study demonstrated that, when using the 2 rater types, this assessment could generate a moderately reliable score with 3 cases. The multivariate G study showed that the 2 types of raters assessed different aspects of clinical skills and that faculty raters were more reliable.
Copyright © 2024 the Association of American Medical Colleges.