Evidence of bias and variation in diagnostic accuracy studies
Rutjes A
Writing – Original Draft Preparation
2006-01-01
Abstract
BACKGROUND: Studies with methodologic shortcomings can overestimate the accuracy of a medical test. We sought to determine and compare the direction and magnitude of the effects of a number of potential sources of bias and variation in studies on estimates of diagnostic accuracy.

METHODS: We identified meta-analyses of the diagnostic accuracy of tests through an electronic search of the databases MEDLINE, EMBASE, DARE and MEDION (1999-2002). We included meta-analyses with at least 10 primary studies without preselection based on design features. Pairs of reviewers independently extracted study characteristics and original data from the primary studies. We used a multivariable meta-epidemiologic regression model to investigate the direction and strength of the association between 15 study features and estimates of diagnostic accuracy.

RESULTS: We selected 31 meta-analyses with 487 primary studies of test evaluations. Only 1 study had no design deficiencies. The quality of reporting was poor in most of the studies. We found significantly higher estimates of diagnostic accuracy in studies with nonconsecutive inclusion of patients (relative diagnostic odds ratio [RDOR] 1.5, 95% confidence interval [CI] 1.0-2.1) and retrospective data collection (RDOR 1.6, 95% CI 1.1-2.2). The estimates were highest in studies that had severe cases and healthy controls (RDOR 4.9, 95% CI 0.6-37.3). Studies that selected patients based on whether they had been referred for the index test, rather than on clinical symptoms, produced significantly lower estimates of diagnostic accuracy (RDOR 0.5, 95% CI 0.3-0.9). The variance between meta-analyses of the effect of design features was large to moderate for type of design (cohort v. case-control), the use of composite reference standards and the use of differential verification; the variance was close to zero for the other design features.

INTERPRETATION: Shortcomings in study design can affect estimates of diagnostic accuracy, but the magnitude of the effect may vary from one situation to another. Design features and clinical characteristics of patient groups should be carefully considered by researchers when designing new studies and by readers when appraising the results of such studies. Unfortunately, incomplete reporting hampers the evaluation of potential sources of bias in diagnostic accuracy studies.
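The abstract reports its findings as relative diagnostic odds ratios (RDORs). As a minimal illustration of what these quantities are (the counts below are hypothetical and not taken from the paper), the diagnostic odds ratio of a single study summarizes its 2x2 accuracy table, and an RDOR is the ratio of DORs between two groups of studies:

```python
# Illustrative sketch, not the paper's analysis: the diagnostic odds ratio
# (DOR) of one 2x2 accuracy table, and a relative DOR (RDOR) comparing two
# study designs. All counts below are invented for demonstration.

def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """DOR = (TP * TN) / (FP * FN); higher values mean better apparent discrimination."""
    return (tp * tn) / (fp * fn)

# Hypothetical counts for two studies of the same test:
# a design prone to bias (e.g. severe cases vs. healthy controls) ...
dor_biased = diagnostic_odds_ratio(tp=90, fp=5, fn=10, tn=95)
# ... and a design closer to clinical practice.
dor_clinical = diagnostic_odds_ratio(tp=80, fp=15, fn=20, tn=85)

# RDOR > 1 means the first design yields higher apparent accuracy.
rdor = dor_biased / dor_clinical
print(round(dor_biased, 1), round(dor_clinical, 1), round(rdor, 1))
```

In the paper's meta-epidemiologic regression, the RDOR plays this role across groups of primary studies: an RDOR of 1.6 for retrospective data collection, for instance, means that studies with that feature produced diagnostic odds ratios about 60% higher on average.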