Abstract
Objective Sample size calculations play a central role in risk-factor study design because
sample size affects study interpretability, costs, hospital resources and staff time.
We demonstrate the effect of misclassified control groups on the power of risk-association
tests, showing that even small misclassification rates in the control group can reduce
test power. Consequently, sample size calculations that ignore misclassification may
underpower studies.
Study Design This was a simulation study based on the designs of published orthopaedic risk-factor
studies. We retained each published design but simulated the data to include known
proportions of misclassified affected subjects in the control group. The simulated
data were used to calculate the power of a risk-association test. We calculated power
for several study designs and misclassification rates and compared the results with
a reference model.
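The simulation approach described above can be sketched as follows. This is an illustrative Monte Carlo sketch only: the group sizes, exposure probabilities and test choice (a chi-square test on a 2x2 table) are hypothetical placeholders, not the published designs or the authors' actual procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n_cases=80, n_controls=80, exposure_case=0.4,
                    exposure_control=0.2, misclass_rate=0.0,
                    alpha=0.05, n_sim=1000):
    """Estimate the power of a chi-square test of exposure vs. disease
    status when a fraction of 'controls' are actually affected subjects.
    All parameter values are illustrative, not from the source study."""
    rejections = 0
    for _ in range(n_sim):
        # True cases carry the case exposure probability.
        cases = rng.random(n_cases) < exposure_case
        # A misclass_rate fraction of the control group is secretly
        # affected, so those subjects also carry the case exposure rate.
        hidden_case = rng.random(n_controls) < misclass_rate
        p_exposure = np.where(hidden_case, exposure_case, exposure_control)
        controls = rng.random(n_controls) < p_exposure
        # 2x2 table: rows = case/control label, cols = exposed/unexposed.
        table = np.array([[cases.sum(), n_cases - cases.sum()],
                          [controls.sum(), n_controls - controls.sum()]])
        _, pval, _, _ = stats.chi2_contingency(table)
        if pval < alpha:
            rejections += 1
    return rejections / n_sim
```

Running `simulated_power(misclass_rate=0.0)` versus, say, `simulated_power(misclass_rate=0.3)` shows the power loss the abstract reports: contaminating the control group with affected subjects shrinks the apparent exposure difference and so fewer simulated studies reach significance.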
Results Treating unlabelled subjects as disease-negative always reduced statistical power
compared with the reference power, and the power loss increased with the misclassification
rate. For this study, power could be restored to 80% by increasing the sample
size by a factor of 1.1 to 1.4.
Conclusion Researchers should exercise caution when calculating sample sizes for risk-factor
studies and consider adjusting for estimated misclassification rates.
Keywords
risk factor - case–control - power - sample size - veterinary orthopaedics