Background: We examined whether, and to what extent, different strategies for defining and incorporating the quality of included studies affect the results of metaanalyses of diagnostic accuracy.
Methods: We evaluated the methodological quality of 487 diagnostic-accuracy studies in 30 systematic reviews with the QUADAS (Quality Assessment of Diagnostic-Accuracy Studies) checklist. We applied 3 strategies that varied both in the definition of quality and in the statistical approach used to incorporate the quality-assessment results into metaanalyses. We compared the magnitudes of diagnostic odds ratios, the widths of their confidence intervals, and changes in a hypothetical clinical decision across strategies.
Results: Depending on which of the 2 definitions of quality we applied, only 70 or 72 of the 487 studies were classified as "high quality"; this small number was partly attributable to poor reporting of quality items. None of the strategies for accounting for differences in quality systematically produced accuracy estimates that were less optimistic than those from metaanalyses that ignored quality. Limiting the review to high-quality studies considerably reduced the number of included studies in all reviews, resulting in wider confidence intervals. In 18 reviews, adjusting for quality would have led to a different decision about the usefulness of the test.
Conclusions: Although reporting the results of quality assessment of individual studies is necessary in systematic reviews, readers should be wary of claims that differences in methodological quality have been accounted for. Obstacles to adjusting for quality in metaanalyses include poor reporting of design features and patient characteristics and the relatively small number of studies in most diagnostic reviews.