Background: Observational epidemiologic studies often yield different results on the same question. In this article, we explain how this comes about.
Methods: This review is based on publications retrieved by a selective search in PubMed and the Web of Science. We draw on the international literature, simulation studies of sampling error, and a quantitative bias analysis of fictitious data to demonstrate why the results of epidemiologic studies are often uncertain and why it is therefore generally necessary to perform more than one study on any particular question.
Results: Sampling error, imprecise measurements, alternative but equally appropriate methods of data analysis, and features of the populations being studied are common reasons why studies on the same question can yield different results. Simulation studies illustrate that effect estimates such as relative risks or odds ratios can deviate markedly from the true value because of sampling error, i.e., by chance alone. Quantitative bias analysis shows how strongly effect estimates can be distorted by misclassification of exposures or outcomes. Finally, illustrative examples show that different but equally appropriate methods of data analysis can lead to divergent study results.
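The following is a minimal sketch, not taken from the article, of the kind of simulation referred to above: a hypothetical cohort-type setting with an assumed true relative risk, baseline risk, and sample size, in which many studies are simulated to show how far individual relative-risk estimates can scatter from the true value by chance alone.

```python
# Sketch of a sampling-error simulation (all parameter values are assumptions,
# not figures from the article): repeat a small two-group study many times and
# observe how the estimated relative risk varies around the true value.
import numpy as np

rng = np.random.default_rng(42)

true_rr = 1.5          # assumed true relative risk
p_unexposed = 0.10     # assumed risk in the unexposed group
p_exposed = true_rr * p_unexposed
n_per_group = 200      # assumed sample size per group
n_studies = 1000       # number of simulated studies

estimates = []
for _ in range(n_studies):
    cases_exposed = rng.binomial(n_per_group, p_exposed)
    cases_unexposed = rng.binomial(n_per_group, p_unexposed)
    if cases_exposed == 0 or cases_unexposed == 0:
        continue  # skip the rare simulated study with no cases in a group
    rr_hat = (cases_exposed / n_per_group) / (cases_unexposed / n_per_group)
    estimates.append(rr_hat)

estimates = np.array(estimates)
print(f"true RR = {true_rr}")
print(f"median estimated RR = {np.median(estimates):.2f}")
print(f"2.5th to 97.5th percentile of estimates: "
      f"{np.percentile(estimates, 2.5):.2f} to {np.percentile(estimates, 97.5):.2f}")
```

With these assumed values, individual simulated studies can easily yield relative risks well below or well above 1.5, even though every study samples from the same underlying population.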
Conclusion: This review explains the main reasons why epidemiologic study results can be heterogeneous. Quantitative bias analyses and sensitivity analyses with alternative data evaluation strategies can help explain divergent results on one and the same question.
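As a companion to the conclusion, the sketch below shows one simple form of quantitative bias analysis: correcting an observed 2×2 table for nondifferential exposure misclassification under assumed sensitivity and specificity of the exposure measurement. The counts and the sensitivity/specificity values are hypothetical and serve only to illustrate the direction and size of the distortion.

```python
# Sketch of a simple quantitative bias analysis (hypothetical data and assumed
# sensitivity/specificity): back-calculate "true" exposure counts from observed
# counts and compare the observed and misclassification-corrected odds ratios.
def correct_for_misclassification(a, b, c, d, sens, spec):
    """a/b: exposed/unexposed cases; c/d: exposed/unexposed non-cases."""
    def corrected_exposed(exposed_obs, total):
        # observed exposed = sens * true_exposed + (1 - spec) * (total - true_exposed)
        return (exposed_obs - (1 - spec) * total) / (sens - (1 - spec))

    a_true = corrected_exposed(a, a + b)
    c_true = corrected_exposed(c, c + d)
    b_true = (a + b) - a_true
    d_true = (c + d) - c_true

    or_observed = (a * d) / (b * c)
    or_corrected = (a_true * d_true) / (b_true * c_true)
    return or_observed, or_corrected

# Hypothetical study: 100 exposed and 200 unexposed cases, 150 exposed and
# 550 unexposed non-cases; exposure measured with assumed 85% sensitivity
# and 95% specificity.
obs_or, corr_or = correct_for_misclassification(100, 200, 150, 550, sens=0.85, spec=0.95)
print(f"observed OR  = {obs_or:.2f}")   # about 1.83
print(f"corrected OR = {corr_or:.2f}")  # about 2.12
```

In this illustrative case the nondifferential misclassification biases the observed odds ratio toward the null, so two studies measuring exposure with different accuracy could report noticeably different effect estimates for the same underlying association.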