Endoscopy 2019; 51(11): 1013-1014
DOI: 10.1055/a-1012-8514
Editorial
© Georg Thieme Verlag KG Stuttgart · New York

“To measure is to know” certainly applies to ERCP training

Referring to Siau K et al. p. 1017–1026
Arjun D. Koch
Department of Gastroenterology and Hepatology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands

Publication Date: 29 October 2019 (online)

Awareness of the quality of endoscopic procedures has increased over recent decades. This has led to the development of numerous quality programs and the definition of performance measures to monitor and improve outcomes. Endoscopic retrograde cholangiopancreatography (ERCP) is considered one of the most complex and challenging procedures in gastrointestinal endoscopy. It carries a high risk of complications, and high-quality performance is therefore crucial. Attaining competence in this procedure clearly requires extensive training. The questions that immediately arise are: how much training is necessary, and what level of performance is acceptable for subsequent unsupervised independent practice? To answer these questions, the performance of trainees has to be monitored and assessed. This not only generates an impression of progress and of the extent to which certain end points have been met, but also creates awareness of areas that need improvement. It opens a feedback loop that can be used to intervene in the learning curves of individual trainees, and is a good example of a Plan-Do-Check-Act (PDCA) cycle. In other words, it is hard to improve when one does not reflect on performance or, as the old adage says, “to measure is to know.”

Regarding procedural competence, both the British Society of Gastroenterology and the American Society for Gastrointestinal Endoscopy recommend a common bile duct cannulation success rate of 80%–85% on completion of ERCP training [1] [2]. This number has no real scientific basis but has been adopted by many national societies.

However, some things are easier said than done. It is quite remarkable that, to date, competence in many countries and societies is merely assumed on the basis of the performance of a minimum number of procedures – a threshold number that leads to certification. How odd is this? There is not a single country in the world that issues a driver’s license simply because a person has completed a minimum of, say, 20 lessons. Why should this be different for medical procedures that can potentially harm patients? A given number of ERCP procedures certainly reflects a degree of exposure to the technique, but it reveals nothing about the performance level of the individual trainee.

This is largely explained by the fact that it is not easy to capture the relevant quality measures for ERCP performance in a single assessment form. ERCP is a very diverse procedure that can be carried out for a number of indications with very different therapeutic interventions. It is therefore hard to define common end points with which to objectively document quality outcomes and to express these in a learning curve or in comparisons with other trainees. The quality measure that comes closest to a common end point is probably “procedural success,” because it is the desired end point of every procedure; it is, however, a so-called composite end point, and its component parameters differ according to the indication for the ERCP. Although successful cannulation is a sine qua non for any subsequent therapeutic intervention in ERCP, it is a surrogate marker for competence and provides no objective information on specific therapeutic aspects such as sphincterotomy or stent placement. It does, however, seem to reflect the corresponding learning curves for therapeutic interventions [3], and in that regard it can be used to monitor overall progress, expressed as a steadily rising learning curve.

As deep cannulation is a prerequisite for any subsequent therapeutic intervention, “correct positioning” in front of the papilla is another key to success, as advocated by many expert trainers [4] and supported by recent findings in a Japanese study [5]. In this context, every single ERCP procedure contributes, to some extent, to the overall learning process, even if not every therapeutic intervention is performed during each of them. Therapies such as sphincterotomy can be regarded as an “add-on” for an already experienced trainee who is able to achieve a correct and stable position and to cannulate the common bile duct. Still, ideally we should be able to objectively assess and document competency for every important aspect of an ERCP. This holds not only for its technical execution but also for the other important domains: pre-procedural quality indicators such as the appropriateness of the indication, procedural indicators of appropriate decision making, and post-procedural outcomes (i.e. documentation and complications).

The ERCP direct observation of procedural skills (DOPS) assessment tool presented in the study by Siau et al. [6] in this issue of Endoscopy is the first of its kind to deconstruct all the necessary competencies within each of six domains (pre-procedure, intubation and positioning, cannulation and imaging, execution of selected therapy, post-procedure, and endoscopic non-technical skills). Competency is reached when a trainee can successfully complete all tasks independently, without any verbal or physical assistance. This is the point at which independent practice can commence and certification is justified. The authors have done a great job in demonstrating that this is indeed a scientifically valid assessment tool that can be used both during training and up to the point of certification. In my opinion, this is an assessment form that many national societies will in all likelihood want to adopt in their ERCP training curricula.

With all this said, I am somewhat surprised and disappointed that, in their final conclusion, the authors fall back once again on suggesting a minimum number of procedures at which assessment and certification can be triggered. The authors hold the key to an ERCP training curriculum that is tailored to the needs of individual trainees. A normal statistical distribution implies that if the average number of procedures needed is 300, some trainees will need more time to reach the desired competency level, while others will reach the same end point after fewer procedures; so why not use this tool to optimize available capacity while accommodating the needs of all trainees? Nobody will deny that numbers matter; an individual’s competency is ultimately shaped by the number of ERCPs they have carried out. But procedural competence is a status that cannot be inferred simply from a prespecified number of procedures performed.
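
To make the statistical argument concrete, here is a minimal sketch of the distributional point, assuming purely for illustration that the number of procedures a trainee needs to reach competence is normally distributed with a mean of 300 (the hypothetical figure used above) and a standard deviation of 75; neither parameter is a finding from the study by Siau et al.

```python
# Illustrative sketch only: assumes procedures-to-competence ~ Normal(300, 75).
# The mean echoes the hypothetical figure in the text; the standard deviation
# is an assumption made for illustration, not data from Siau et al.
from scipy.stats import norm

dist = norm(loc=300, scale=75)

# Share of trainees already competent at a fixed 300-procedure threshold:
print(f"Competent by 300 procedures: {dist.cdf(300):.0%}")      # 50%

# Share still short of competence even after 400 procedures:
print(f"Still not competent at 400:  {1 - dist.cdf(400):.0%}")  # ~9%

# Number of procedures by which 90% of trainees would be competent:
print(f"90th percentile:             {dist.ppf(0.90):.0f}")     # ~396
```

Under these assumed numbers, a fixed threshold of 300 would leave half of trainees waiting beyond the point of competence, while roughly one in eleven would still not be competent after 400 procedures – which is precisely the argument for competence-based rather than number-based certification.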