Background: Historically, a “competent sonographer” has been defined as someone who has performed a predetermined number of ultrasound studies. The number is usually rather low; in Emergency Medicine this cutoff has been twenty-five scans per modality, a threshold supported by national societies including the American College of Emergency Physicians. However, physicians who do the majority of the bedside training know that performing an arbitrary number of scans does not reflect actual attainment of “competency.” A more objective method of assessment, based on outcomes rather than numbers, is needed.
Objectives: To describe a new method of competency assessment based on three separate outcome categories: comprehension, confidence, and insight.
Methods: A total of 50 fourth-year medical students participated in our one-month Emergency Ultrasound elective between 2012 and 2016. Each student was given a 50-question test at the beginning of the elective to determine baseline comprehension of the course material. At the end of the test, the students were asked to rate their confidence in their answers on a 100-mm visual analog scale. To measure insight, they were also asked to state how many of the fifty questions they thought they had answered correctly. During the 4-week rotation the students met with an ultrasound fellowship-trained faculty member twice a week. On each of these days the students received a didactic lecture, performed supervised scanning within the University of Texas School of Medicine Center for Clinical Ultrasound Education, and also performed supervised scanning on patients within the Emergency Department. Two days each week the students performed independent scanning sessions in the Emergency Department and recorded all of their studies for review by the course instructors. At the conclusion of the rotation, the same outcome-based competency test was administered to each student. Values for student comprehension (total number of correct answers), confidence (visual analog measurement in mm), and insight (difference between comprehension and the student's estimated number of correct answers) were obtained for the pretest and posttest, then compared using paired t-tests. All testing was two-sided with a significance level of 5%. SAS version 9.4 for Windows was used throughout.
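The analysis itself was performed in SAS 9.4; as a minimal illustrative sketch only, the following Python/SciPy code shows how the three outcome measures and the paired pre/post comparisons described above could be computed. The variable names and the synthetic scores are hypothetical and do not reflect the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50  # students in the elective

# Synthetic, illustrative per-student scores (the study used the recorded test data).
comprehension_pre = rng.integers(20, 35, n)                        # correct answers out of 50, pretest
comprehension_post = comprehension_pre + rng.integers(8, 16, n)    # correct answers out of 50, posttest
estimated_pre = np.clip(comprehension_pre - rng.integers(0, 15, n), 0, 50)   # student's self-estimate, pretest
estimated_post = np.clip(comprehension_post - rng.integers(-2, 6, n), 0, 50) # student's self-estimate, posttest
confidence_pre = rng.uniform(10, 40, n)                            # visual analog scale mark in mm, pretest
confidence_post = np.clip(confidence_pre + rng.uniform(35, 65, n), 0, 100)   # VAS mark in mm, posttest

def paired_change(pre, post):
    """Return the mean pre-to-post change, its 95% CI, and the paired t-test p-value."""
    diff = np.asarray(post, float) - np.asarray(pre, float)
    mean, sem = diff.mean(), stats.sem(diff)
    ci = stats.t.interval(0.95, df=len(diff) - 1, loc=mean, scale=sem)
    p = stats.ttest_rel(post, pre).pvalue
    return mean, ci, p

# Insight = actual number of correct answers minus the student's own estimate;
# positive values mean the student underestimated their performance.
insight_pre = comprehension_pre - estimated_pre
insight_post = comprehension_post - estimated_post

print(paired_change(comprehension_pre, comprehension_post))  # comprehension gain
print(paired_change(confidence_pre, confidence_post))        # confidence gain (mm)
print(paired_change(insight_pre, insight_post))              # change in insight
```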
Results: Comprehension increased by a mean of 11.9 points (95% CI, 10.8 to 13.1; p<0.001). Confidence increased by a mean of 49.1 mm (95% CI, 43.4 to 54.8; p<0.001). Mean student insight was 7.5 on the pretest (95% CI, 5.6 to 9.3; p<0.001) and 0.62 on the posttest (95% CI, −0.82 to 2.1; p<0.001). The variance decreased from pretest to posttest (pre 42.9, post 25.8; p=0.078), and the posttest mean was significantly closer to zero than the pretest mean (pre 7.5, post 0.62), indicating better insight. In each case the mean insight was greater than zero, showing that students on average underestimated their number of correct answers.
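The abstract does not name the test behind the variance comparison; one plausible reconstruction, sketched below purely as an assumption, is a two-sided F-test on the reported pre/post insight variances with 50 students per measurement.

```python
from scipy import stats

# Pre/post insight variances and sample size as reported in the abstract.
var_pre, var_post, n = 42.9, 25.8, 50

# Assumed reconstruction: two-sided F-test for equality of the two variances.
f_stat = var_pre / var_post           # ratio of the larger to the smaller variance
df = n - 1
p_one_sided = stats.f.sf(f_stat, df, df)
p_two_sided = 2 * min(p_one_sided, 1 - p_one_sided)
print(f"F = {f_stat:.2f}, two-sided p = {p_two_sided:.3f}")  # close to the reported p=0.078
```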
Conclusions: Our new competency model allowed for objective determination of three specific aspects of student ultrasound competency. Using this model, we were able to show that students objectively improved in comprehension, confidence, and insight. Using this system, it may be possible to define specific learner outcome profiles that educators can use as a guide during training. It may also allow educators to tailor training to individual learner weaknesses and improve learner outcomes at the end of training.
Keywords: Use of ultrasound in undergraduate medical education, point-of-care ultrasound in general clinical practice, patient safety