23 OSCEs, OSATS and more

Standardized testing of skills permits program and individual evaluation with targeted feedback and training. Bias is easy to introduce; rater training is important.

OSCEs – objective structured clinical examination

OSCEs are widely used to assess clinical skills and are considered a practical, valid and reliable approach to skills assessment. They are, however, costly and resource intensive, requiring extensive set-up, multiple trained examiners and considerable time, and no cheaper alternative has been found. OSCEs have advantages beyond assessment, including driving student learning and focusing attention on important clinical skills, and their results can help direct both individual learners and training programs. Global rating scales have been shown to be as reliable as checklists for scoring and may reduce the time and training required of assessors.

Cut-off scores and pass/fail decisions remain challenging. Yousef et al compared standard setting methods and found the borderline regression method to have the strongest convergent validity evidence; in this method, examinees' checklist scores are regressed on examiners' global ratings, and the cut score is the checklist score predicted at the borderline rating. Cluster analysis using the mean method may also be useful.
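The borderline regression method can be sketched in a few lines. This is a minimal illustration, not the exact procedure from Yousef et al: the rating scale, scores and the choice of 2 as the "borderline" rating below are hypothetical examples.

```python
# Hedged sketch: borderline regression standard setting for one OSCE station.
# The rating scale, example scores, and borderline point are illustrative.

def borderline_regression_cut(global_ratings, checklist_scores, borderline_rating=2.0):
    """Regress checklist scores on global ratings (ordinary least squares)
    and return the predicted checklist score at the borderline rating."""
    n = len(global_ratings)
    mean_x = sum(global_ratings) / n
    mean_y = sum(checklist_scores) / n
    sxx = sum((x - mean_x) ** 2 for x in global_ratings)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(global_ratings, checklist_scores))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * borderline_rating

# Example: global ratings from 1 (clear fail) to 5 (excellent) and checklist
# percentages for ten examinees at a single station.
ratings = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
scores = [35, 48, 52, 60, 63, 66, 72, 75, 85, 88]
cut = borderline_regression_cut(ratings, scores, borderline_rating=2.0)
print(round(cut, 1))  # station-level pass mark on the checklist scale
```

Because the cut score is anchored to the whole regression line rather than only to the borderline candidates, it uses data from every examinee, which is part of its appeal for small cohorts.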

OSATS – objective structured assessment of technical skills

Beard et al described the OSATS and its use in evaluating surgical skills in 2011. Based on a systematic review by Hatala et al, OSATS are useful for formative feedback, and scoring is reasonable for high-stakes assessment. However, OSATS scores have not yet been linked to clinical performance, so their use in high-stakes decisions may not be warranted.

Live or later

Assessment in the clinical skills laboratory can be difficult to manage amid competing demands. Williamson et al demonstrated the value of GoPro cameras and scoring rubrics for evaluating veterinary students' abdominal wall closure from recorded video. The amount of examiner time involved was not discussed.

Borderline Pass

Students on the edge of the pass/fail mark in high-stakes exams can be challenging; most examiners will err on the side of passing the student. Shulruf et al described a method in which original grades are assigned as distinction, pass, borderline or fail. Station difficulty is then evaluated and used to create a per-station grade modifier. When grades were recalculated with these modifiers, a more distinct pass/fail line was obtained.

Resources

Shulruf B et al. Borderline grades in high stakes clinical examinations: resolving examiner uncertainty. BMC Medical Education 2018;18:272.

Williamson JA et al. Evaluation of a method to assess digitally recorded surgical skills of novice veterinary students. Veterinary Surgery 2018;47:378–384.

Hatala R et al. Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence. Advances in Health Sciences Education 2015;20:1149–1175.

Yousef N et al. Standard setting methods for pass/fail decisions on high-stakes objective structured clinical examinations: a validity study. Teaching and Learning in Medicine 27(3):280–291.

Read EK et al. The use of global rating scales for OSCEs in veterinary medicine. PLoS ONE 10(3):e0121000.

Pugh D et al. Progress testing: is there a role for the OSCE? Medical Education 2014;48:623–631.

A Guide to Assessment in Veterinary Medicine (2014) – a great place to start.

Khan KZ et al. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part II: Organisation & Administration. Medical Teacher 2013;35:e1447–e1463.

Beard JD et al. Assessing the surgical skills of trainees in the operating theatre: a prospective observational study of the methodology. Health Technology Assessment 2011;15(1).
