A comparison of the American Board of Anesthesiology's in-person and virtual objective structured clinical examinations

Wei Xu, Huaping Sun, Stacie G. Deiner, Ann E. Harman, Robert S. Isaak, Mark T. Keegan

Research output: Contribution to journal › Article › peer-review

Abstract

Background: The American Board of Anesthesiology's Objective Structured Clinical Examination (OSCE), a component of its initial certification process, had been administered in person in a dedicated assessment center from its launch in 2018 until March 2020. Due to the COVID-19 pandemic, a virtual format of the exam was piloted in December 2020 and administered in 2021. This study aimed to compare candidate performance, examiner grading severity, and scenario difficulty between the two formats of the OSCE.

Methods: The Many-Facet Rasch Model was used to estimate candidate performance, examiner grading severity, and scenario difficulty for the in-person and virtual OSCEs separately. The virtual OSCE was equated to the in-person OSCE through common examiners and common scenarios. An independent-samples t-test was used to compare candidate performance, and partially overlapping samples t-tests were applied to compare examiner grading severity and scenario difficulty between the two formats.

Results: The in-person (n = 3235) and virtual (n = 2934) first-time candidates were comparable in age, sex, race/ethnicity, and whether they graduated from a U.S. medical school. The virtual scenarios (n = 35; mean ± SD, 0.21 ± 0.38 logits) were more difficult than the in-person scenarios (n = 93; 0.00 ± 0.69; Welch's partially overlapping samples t-test, p = 0.01). There were no statistically significant differences in examiner severity (n = 390, −0.01 ± 0.82 vs. n = 304, −0.02 ± 0.93; Welch's partially overlapping samples t-test, p = 0.81) or candidate performance (2.19 ± 0.93 vs. 2.18 ± 0.92; Welch's independent-samples t-test, p = 0.83) between the in-person and virtual OSCEs.

Conclusions: Our retrospective analyses of first-time OSCEs found comparable candidate performance and examiner grading severity between the in-person and virtual formats, despite the virtual scenarios being more difficult. These results provide assurance that the virtual OSCE functioned reasonably well in a high-stakes setting.
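The candidate-performance comparison in the abstract is a Welch's independent-samples t-test, which does not assume equal variances between the two groups. The sketch below illustrates that test with SciPy on synthetic data drawn to match the reported group sizes, means, and standard deviations; it is not the authors' analysis, and the Rasch-model estimation and the partially overlapping samples t-tests (which handle shared examiners/scenarios across formats) are not reproduced here.

```python
# Illustrative Welch's independent-samples t-test (synthetic data only).
# Group sizes/means/SDs mimic the reported candidate-performance figures
# (in-person: n = 3235, 2.19 ± 0.93; virtual: n = 2934, 2.18 ± 0.92, in logits).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
in_person = rng.normal(loc=2.19, scale=0.93, size=3235)  # synthetic ability estimates
virtual = rng.normal(loc=2.18, scale=0.92, size=2934)

# equal_var=False selects Welch's test (unequal-variance form)
t_stat, p_value = stats.ttest_ind(in_person, virtual, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With effectively identical group means, a run like this typically yields a large p-value, consistent with the reported non-significant difference (p = 0.83) in the actual study.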

Original language: English
Article number: 111258
Journal: Journal of Clinical Anesthesia
Volume: 91
DOIs
State: Published - Dec 2023
Externally published: Yes

Keywords

  • OSCE examinee performance
  • OSCE examiner grading severity
  • OSCE scenario difficulty
  • Objective Structured Clinical Examination
  • Virtual OSCE
