25 Organizing training

While a few programs are able to start from scratch, most make small or large tweaks over time. This can lead to drift away from plans and best practices. A taskforce that meets regularly may help keep the vision of the program alive and functional. The following are questions a taskforce or program director should consider:

What should we teach and what should we assess?

What are the crucial skills required of our graduates? Have they changed over the past year? Are we able to teach all of them well, or should we decrease the number of skills? Do we have the capacity to increase the number of skills and improve the performance of our graduates? Should some skills be required of certain groups but not all? Do we have to assess all of the skills taught?

Who gets the training?

A scoping review by Brydges et al of internal medicine resident training suggested that training programs are variable, often less than effective, and do not follow ideal models. The authors suggest that making real, consistent progress will be expensive and challenging to implement. It may be necessary to ensure all trainees have cognitive competence (understanding procedural indications, limitations, risks, etc.) while targeting procedural competence toward those trainees most likely to actually perform the skills in the future.

How do we want to teach the skills?

Should we emphasize simulations, laboratories or experiential training? How much time should be allotted to each skill set and for practice? These choices also change the “cost” of the program in terms of supplies, manpower and instructor skills, not to mention time available in the program for student learning.

Greater success has been linked to a range of difficulty levels, opportunities for repetitive practice, distributed practice, interactivity, multiple learning strategies, individualized learning, mastery learning, feedback and clinical variation. How many of these does the program contain? Could more be added?

How do we want to connect the components?

Will the training sessions be cohort based, competency based or something else? Do the later sessions depend on earlier training? How will we ensure all learners are ready for the next level? Is there any evidence that they are not? What can we do to better prepare the learner and inform instructors on both ends?

Can we create similar rubrics for assessment? Can we use similar platforms for debriefing, feedback and/or instruction across the sessions?

How can we keep learners engaged?

Who is teaching, how and when?

Will the sessions be primarily run by faculty? A simulation center? A combination? Do we need standardized clients or patients? Peer or near peer instructors?

Can we convince instructors to teach similar methods for a skill? What is the “correct” way to perform each task? Or which one will we use for assessment?

How will we recruit and/or pay for instructor time? Can good simulation or laboratory teaching minimize faculty time later?

How will instructors be trained in teaching? How will we identify if an instructor is effective? Are there ways to enhance learning and retention without adding resources?

How are we doing?

What are the successes and gaps in our program? Who could help us determine the answer to that question? What needs to be adjusted?

Do we want progress testing? Repeat testing supports retention and readiness for progression, but it requires that someone track what is happening across the various sessions to connect the dots. It also means the assessment must match what was taught and emphasized previously, or it leaves room for variation. Who gets to decide?

What is getting in the way?

Many groups can resist change, leaving the training program less than optimal. Administration may resent the costs, faculty may resent the time, and students may feel overwhelmed or undersupported. What are the likely pain points of the program? How can those be addressed?

Resources

Brydges R et al. Core Competencies or a Competent Core? A Scoping Review and Realist Synthesis of Invasive Bedside Procedural Skills Training in Internal Medicine. Acad Med. 2017;92:1632–1643.

Quilici AP et al. Faculty perceptions of simulation programs in healthcare education. Int J Med Educ. 2015;6:166–171.

Reedy GB. Using Cognitive Load Theory to Inform Simulation Design and Practice. Clin Simul Nurs. 2015;11:355–360.

Paige JT et al. Debriefing 101: training faculty to promote learning in simulation-based training. Am J Surg. 2015;209:126–131.

Cook DA et al. Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Med Teach. 2013;35:e844–e875.

Chiniara G et al. Simulation in healthcare: A taxonomy and a conceptual framework for instructional design and media selection. Med Teach. 2013;35:e1380–e1395.

Schaefer JJ III et al. Instructional Design and Pedagogy Science in Healthcare Simulation. Simul Healthc. 2011;6(7).

Brydges R et al. Coordinating Progressive Levels of Simulation Fidelity to Maximize Educational Benefit. Acad Med. 2010;85:806–812.
