Endoscopy 2014; 46(09): 735-744
DOI: 10.1055/s-0034-1365463
Original article
© Georg Thieme Verlag KG Stuttgart · New York

Development and initial validation of an endoscopic part-task training box

Christopher C. Thompson 1,2, Pichamol Jirapinyo 1,2, Nitin Kumar 1, Amy Ou 1, Andrew Camacho 1, Balazs Lengyel 3, Michele B. Ryan 1

1 Division of Gastroenterology, Brigham and Women’s Hospital, Boston, Massachusetts, USA
2 Harvard Medical School, Boston, Massachusetts, USA
3 Department of Radiology, Brigham and Women’s Hospital, Boston, Massachusetts, USA

Publication History

submitted 03 September 2013

accepted after revision 11 March 2014

Publication Date:
25 April 2014 (online)

Background and study aims: There is currently no objective and validated methodology available to assess the progress of endoscopy trainees or to determine when technical competence has been achieved. The aims of the current study were to develop an endoscopic part-task simulator and to assess scoring system validity.

Methods: Fundamental endoscopic skills were determined via kinematic analysis, literature review, and expert interviews. Simulator prototypes and scoring systems were developed to reflect these skills. Validity evidence for content, internal structure, and response process was evaluated.

Results: The final training box consisted of five modules (knob control, torque, retroflexion, polypectomy, and navigation and loop reduction). A total of 5 minutes was permitted per module, with extra points for early completion. Content validity index (CVI)-realism was 0.88, CVI-relevance was 1.00, and CVI-representativeness was 0.88, giving a composite CVI of 0.92. Overall, 82 % of participants considered the simulator to be capable of differentiating between ability levels, and 93 % thought the simulator should be used to assess ability prior to performing procedures in patients. Inter-item assessment revealed correlations from 0.67 to 0.93, suggesting that tasks were sufficiently correlated to assess the same underlying construct while each task remained independent. Each module represented 16.0 % – 26.1 % of the total score, suggesting that no module contributed disproportionately to the composite score. Average box scores were 272.6 and 284.4 (P = 0.94) when performed sequentially, and the average score for all participants was 297.6 with proctor 1 and 308.1 with proctor 2 (P = 0.94), suggesting reproducibility and minimal error associated with test administration.
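The reported composite CVI is consistent with a simple mean of the three CVI dimensions. As a minimal sketch, assuming equal weighting of the three dimensions (the abstract does not state the aggregation method), the arithmetic can be reproduced as:

```python
# Per-dimension content validity indices reported in the abstract.
cvi = {"realism": 0.88, "relevance": 1.00, "representativeness": 0.88}

# Assumed aggregation: unweighted mean of the three dimensions,
# rounded to two decimal places as reported.
composite_cvi = round(sum(cvi.values()) / len(cvi), 2)
print(composite_cvi)  # 0.92
```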

Conclusion: A part-task training box and scoring system were developed to assess fundamental endoscopic skills, and validity evidence regarding content, internal structure, and response process was demonstrated.