Yearb Med Inform 2009; 18(01): 23-31
DOI: 10.1055/s-0038-1638633
Original Article
Georg Thieme Verlag KG Stuttgart

STARE-HI - Statement on Reporting of Evaluation Studies in Health Informatics

J. Talmon 1, E. Ammenwerth 2, J. Brender 3, N. de Keizer 4, P. Nykänen 5, M. Rigby 6

1   Center for Research, Innovation, Support and Policy-CRISP, Maastricht University, Maastricht, The Netherlands
2   UMIT-University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria
3   Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
4   Department of Medical Informatics, Academic Medical Center, Amsterdam, The Netherlands
5   Department of Computer Sciences, University of Tampere, Tampere, Finland
6   Centre for Health Planning and Management, Keele University, Keele, United Kingdom

Publication History

Publication Date:
07 March 2018 (online)

Summary

Objective Development of guidelines for publication of evaluation studies of Health Informatics applications.

Methods An initial list of issues to be addressed in reports on evaluation studies was drafted based on experiences as editors and reviewers and as authors of systematic reviews, taking into account guidelines for reporting of medical research. This list was discussed in several rounds by an increasing number of experts in Health Informatics evaluation, during conferences and by e-mail.

Results A set of STARE-HI principles to be addressed in papers describing evaluations of Health Informatics interventions is presented. These principles cover the formulation of title and abstract, introduction (e.g. scientific background, study objectives), study context (e.g. organizational setting, system details), methods (e.g. study design, outcome measures), results (e.g. study findings, unexpected observations), and discussion and conclusion; this section-level grouping is sketched as a simple checklist after the summary.

Conclusion A comprehensive list of principles relevant for properly describing Health Informatics evaluations has been developed. When manuscripts submitted to Health Informatics journals and general medical journals adhere to these principles, readers will be better positioned to place the studies in a proper context and judge their validity and generalisability. STARE-HI may also be used for study planning and hence positively influence the quality of evaluation studies in Health Informatics. We believe that better publication of (both quantitative and qualitative) evaluation studies is an important step toward the vision of evidence-based Health Informatics.

Limitations This study is based on experiences from editors, reviewers, authors of systematic reviews and readers of the scientific literature. The applicability of the principles has not been evaluated in real practice. Shortcomings in the principles will only emerge once authors start to use them for reporting.
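
As an illustration of the Results above, the section-level grouping of the principles can be sketched as a small checklist that flags which sections a manuscript draft has not yet addressed. This is a minimal sketch in Python, assuming item names paraphrased from this summary; it is not the authoritative STARE-HI checklist, whose full item list and wording are defined in the statement itself.

# Illustrative sketch only: section headings and example items are
# paraphrased from the summary above, not the full STARE-HI item list.
STARE_HI_SECTIONS = {
    "title and abstract": ["title", "abstract"],
    "introduction": ["scientific background", "study objectives"],
    "study context": ["organizational setting", "system details"],
    "methods": ["study design", "outcome measures"],
    "results": ["study findings", "unexpected observations"],
    "discussion and conclusion": ["discussion", "conclusion"],
}

def missing_items(covered):
    """Return, per section, the example items a draft has not yet addressed."""
    return {
        section: [item for item in items if item not in covered]
        for section, items in STARE_HI_SECTIONS.items()
        if any(item not in covered for item in items)
    }

# A draft that states its background, objectives and design but omits
# the study context, results and discussion:
print(missing_items({"title", "abstract", "scientific background",
                     "study objectives", "study design", "outcome measures"}))

Used this way, the grouping doubles as a planning aid: an author can check a draft against each section before submission, which is the study-planning use the Conclusion suggests.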

 