physioscience 2025; 21(S 03): S13-S14
DOI: 10.1055/s-0045-1812377
Abstracts
Präsentationen/Presentations
PS 7

Basel, We Have a Problem. Reviews in Physiotherapy Often Miss the Goal: Incomplete and Flawed

Authors

  • R Hilfiker

    1   Physiotherapie Tschopp & Hilfiker, Glis, Switzerland

    2   Physiotherapie Tschopp & Hilfiker, Forschung in der Privatpraxis, Glis, Switzerland


Background In 2024, 713 systematic reviews related to physiotherapy were published. Yet many of these reviews offer limited value, not only for clinicians but also for researchers. Common problems include the omission of relevant studies, poorly defined or misaligned comparisons that do not match the stated PICO framework, and vague or inadequate descriptions of interventions. Frequently, comparisons combine treatments with differing goals, intensities, or delivery formats, making it unclear what is actually being tested. As a result, reviews often fail to provide meaningful guidance for clinical decision-making.

Methods We systematically analyzed a sample of physiotherapy-related reviews, focusing on inclusion and exclusion criteria, search strategies, comparison structures, and the quality of intervention descriptions. Based on our findings, we developed a checklist for clinicians and researchers to identify common problems such as incomplete study inclusion and flawed comparisons. To illustrate these issues in a tangible and engaging way, we will present three real-world examples of problematic reviews during the science slam.

Results A substantial proportion of reviews failed to include all relevant studies, while others included studies that did not meet their own eligibility criteria. Comparisons between intervention and control groups were often inappropriate or lacked clarity. Intervention descriptions were frequently too vague to enable clinical application or replication.

Conclusions Clinicians should exercise caution when drawing conclusions from systematic reviews in physiotherapy. Researchers must improve the clarity and consistency of intervention reporting to enhance clinical utility. While AI tools may help identify review shortcomings, they cannot yet solve deeper methodological flaws. A broader discussion is needed to define what high-quality, practice-relevant reviews should look like – covering clearer reporting standards, sound comparisons, and more complete evidence. Systematic reviews are not rocket science – but they should offer more than the experience of watching a Banksy self-destruct or standing in front of an abstract painting with no title or context: striking at first glance, but ultimately leaving you puzzled, unsure what it means or how to use it. Let’s build reviews that inform, not frustrate.


Conflict of interest

The author has also contributed to the problem by publishing a few suboptimal reviews.

Publication History

Article published online:
23 October 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany