Competency assessment: a journey of lifelong learning
Referring to Khan R et al. p. 847–856
Awareness of quality during training in endoscopy has increased tremendously over the past decades, largely because we are moving away from minimum threshold numbers as surrogate markers of competence. Our focus has shifted to procedural competency, which is reached when a trainee can independently, successfully, and repeatedly complete all tasks that are required for a specific type of procedure. This is the point at which certification can be obtained and independent practice can commence.
“If we document progression over time and repeatedly perform formative assessment, we literally see the incline of the trainees’ learning curves and thresholds being reached for key performance measures.”
This procedural competence entails more than just technical performance; the cognitive and integrative skills that are required to select the right patient for the correct indication and to decide on patient management are just as important. The road toward procedural competence is the training phase, during which we, as trainers, have to make sure that our trainees are exposed to and master all aspects of these endoscopic technical and nontechnical skills (ENTS). Validated assessment tools are needed to document both this formative learning phase and the summative phase, which is commonly regarded as the point of certification. Assessment tools have been developed to help mentors assess trainees in a structured, more unified, and objective manner, and they are indispensable in our training curricula.
Some of us wonder whether we could use virtual reality simulators to assess competency in endoscopy; after all, a computer is highly objective, so a given performance should always generate the same assessment score, reducing rater bias. This has been studied in colonoscopy and esophagogastroduodenoscopy, although no studies have been performed regarding endoscopic retrograde cholangiopancreatography (ERCP) simulator assessment. Unfortunately, it turns out that performance parameters derived from simulators do not correlate with scores given by blinded experts. It seems that our current simulators lack the discriminative power to assess performance and determine competence levels in patient-based endoscopy.
ERCP is among the most complex and challenging procedures in gastrointestinal endoscopy. It carries a high risk of complications, so high quality performance is essential. These risks are inseparable from ERCP, but they tend to increase in the patients who need the procedure the least or when the indication seems questionable. This once more stresses the importance of ENTS and their focus during training.
It is extremely important that we move away from threshold numbers and train our trainee endoscopists to a level where independent practice is justified. Having said that, a few questions remain.
1. What level of competency would indicate that independent practice can commence? In the recent past, both the British Society of Gastroenterology and the American Society for Gastrointestinal Endoscopy have recommended a common bile duct (CBD) cannulation success rate of 80 %–85 % after completion of ERCP training. A recent quality improvement initiative published by the European Society of Gastrointestinal Endoscopy (ESGE) states that a competent endoscopist should be able to successfully cannulate the CBD with native papillary anatomy in at least 90 % of cases. If this is expected of an independently practising endoscopist, it makes sense that it should also be a target for certification after training. It is remarkable, therefore, that the same ESGE has issued a position statement on ERCP training in which a CBD cannulation rate of ≥ 80 % is upheld, increasing up to 90 % only after a “mentored period of independent practice”. The reason for this approach is debatable.
2. Do we have the means to assess and document competency development and the procedural competence that marks the end point of training and the starting point of independent practice? In this issue of Endoscopy, the systematic review by Khan et al. on the validity of our currently available ERCP assessment tools suggests that we do. Validity evidence supporting our ERCP assessment tools is essential because it proves that the tools we use actually document and measure what they were designed for. This systematic review shows that three assessment tools demonstrate excellent validity evidence to support their use in formative assessment during ERCP training: the Bethesda ERCP Skills Assessment Tool (BESAT), the ERCP Direct Observation of Procedural Skills Tool (ERCP DOPS), and The Endoscopic Ultrasound (EUS) and ERCP Skills Assessment Tool (TEESAT). These tools have not, however, been validated for summative assessment, which raises the question of whether they can be used for certification. The answer is not that simple. In a final exam scenario, a student gets a single chance to demonstrate the skills acquired over the entire training period. The ERCP case might be too easy or completely impossible; both pose serious challenges for good assessment if there is no way to compensate for that. At the very least, the assessment tool should be as objective and reproducible as possible and rule out any rater bias. In this regard, I would argue that the ERCP DOPS is probably the most useful tool for summative assessment. The TEESAT seems to lack internal structure, and the BESAT focuses too much on technical skills, although its video-based assessment might be useful in summative assessment.
Another way of looking at this issue is to question whether summative assessment is really necessary. If we document progression over time and repeatedly perform formative assessment, we literally see the incline of the trainees’ learning curves and thresholds being reached for key performance measures. Then, in independent practice we can continue the same lifelong monitoring. After all, in many ways, competency assessment is a journey of lifelong learning.
Article published online:
13 July 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
- 1 Walsh CM. In-training gastrointestinal endoscopy competency assessment tools: types of tools, validation and impact. Best Pract Res Clin Gastroenterol 2016; 30: 357-374
- 2 Ekkelenkamp VE, Koch AD, de Man RA. et al. Training and competence assessment in GI endoscopy: a systematic review. Gut 2016; 65: 607-615
- 3 Cotton PB. Endoscopic retrograde cholangiopancreatography: maximizing benefits and minimizing risks. Gastrointest Endosc Clin N Am 2012; 22: 587-599
- 4 British Society of Gastroenterology ERCP Working Party. ERCP – the way forward. A standards framework. Available at: https://www.bsg.org.uk/wp-content/uploads/2019/12/ERCP-%E2%80%93-The-Way-Forward-A-Standards-Framework-1.pdf (Accessed: 05/24/2023)
- 5 Baron TH, Petersen BT, Mergener K. et al. Quality indicators for endoscopic retrograde cholangiopancreatography. Am J Gastroenterol 2006; 101: 892-897
- 6 Domagk D, Oppong KW, Aabakken L. et al. Performance measures for ERCP and endoscopic ultrasound: a European Society of Gastrointestinal Endoscopy (ESGE) Quality Improvement Initiative. Endoscopy 2018; 50: 1116-1127
- 7 Johnson G, Webster G, Boškoski I. et al. Curriculum for ERCP and endoscopic ultrasound training in Europe: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. Endoscopy 2021; 53: 1071-1087
- 8 Khan R, Homsi H, Gimpaya N. et al. Validity evidence for observational ERCP competency assessment tools: a systematic review. Endoscopy 2023; 55: 847-856