Studies have repeatedly shown that the clinical performance of students and residents falls short of faculty expectations. Because evaluation methods substantially influence education, better clinical evaluation methods can help improve poor performance. This study evaluated a standardized method of measuring clinical performance in which trained actual and simulated patients were organized in a multiple-station format, allowing efficient testing of examinees on 17 cases in less than four hours. Specific checklists completed by the patients and predetermined scoring protocols yielded reliable data and reduced faculty time. Data from 204 students in three clerkships were consistent with previous research showing case specificity and substantial case-to-case variability. As a group, however, the students' overall total scores were very similar. This suggests that clinical education is inconsistent and that a profile of an examinee's performance is more accurate than a single overall score. The validity of this standardized clinical examination was supported by significant but moderate correlations with faculty ratings of ward performance and with the medicine subtest of the National Board of Medical Examiners examination, Part II. Direct per-student costs were $21.00. This standardized, objective examination of clinical skills is feasible for use in training programs and provides reliable and valid data on clinical performance not available through typical methods.