The measurement of effort and performance validity is essential for computerized testing, where direct supervision is often reduced. Clinical validation of an Automated Neuropsychological Assessment Metrics-Performance Validity Index (ANAM-PVI) was examined by converting ANAM test scores into a common metric based on their relative infrequency in an outpatient clinic sample with presumed good effort. Optimal ANAM-PVI cut-points were determined using receiver operating characteristic (ROC) curve analyses and an a priori specificity of 90%. Sensitivity and specificity were examined in available validation samples (controls, simulators, and neurorehabilitation patients). ANAM-PVI scores differed between groups, with simulators scoring highest. ROC curve analysis indicated excellent discriminability of ANAM-PVI scores ≥5 for detecting simulators versus controls (area under the curve = 0.858; odds ratio for detecting suboptimal performance = 15.6), but the same cut-point produced a 27% false-positive rate in the clinical sample. When specificity in the clinical sample was set at 90%, sensitivity decreased (68%) but remained consistent with other embedded effort measures. Results support the ANAM-PVI as an embedded effort measure and demonstrate the value of sample-specific cut-points in groups with cognitive impairment. Examination of different cut-points indicates that clinicians should choose sample-specific cut-points based on the sensitivity and specificity rates most appropriate for their patient population, with higher cut-points for patients expected to have severe cognitive impairment (e.g., dementia or severe acquired brain injury).
Keywords: ANAM; Computer-based testing; Neuropsychology; Performance Validity Testing; Simulators.
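The specificity-anchored cut-point selection described in the abstract can be illustrated with a short sketch. The Python example below uses scikit-learn and fabricated score distributions; the function name select_cutpoint, the min_specificity parameter, and all data are hypothetical illustrations, not the study's actual analysis code or results. It shows one way an operating point might be chosen on a ROC curve so that specificity stays at or above an a priori 90% floor, reporting AUC and an odds ratio for the chosen cut-point.

```python
# A minimal sketch, assuming hypothetical score arrays and scikit-learn;
# an illustration of specificity-anchored cut-point selection, not the
# authors' actual analysis code or data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def select_cutpoint(control_scores, simulator_scores, min_specificity=0.90):
    """Choose the cut-point (score >= threshold flags suboptimal effort) with
    the highest sensitivity among thresholds meeting the a priori specificity
    floor; report AUC and an odds ratio at the chosen operating point."""
    scores = np.concatenate([control_scores, simulator_scores])
    labels = np.concatenate([np.zeros(len(control_scores)),    # 0 = presumed good effort
                             np.ones(len(simulator_scores))])  # 1 = simulated suboptimal effort
    fpr, tpr, thresholds = roc_curve(labels, scores)
    auc = roc_auc_score(labels, scores)

    specificity = 1.0 - fpr
    valid = specificity >= min_specificity       # operating points meeting the floor
    best = int(np.argmax(tpr * valid))           # max sensitivity among valid points

    # 2x2 cell counts at the chosen cut-point, used for the odds ratio.
    n_ctrl, n_sim = len(control_scores), len(simulator_scores)
    tp, fn = tpr[best] * n_sim, (1.0 - tpr[best]) * n_sim
    fp, tn = fpr[best] * n_ctrl, (1.0 - fpr[best]) * n_ctrl
    odds_ratio = (tp * tn) / (fp * fn) if fp > 0 and fn > 0 else float("inf")

    return {"cutpoint": float(thresholds[best]),
            "sensitivity": float(tpr[best]),
            "specificity": float(specificity[best]),
            "auc": float(auc),
            "odds_ratio": float(odds_ratio)}

# Illustrative call with fabricated, hypothetical score distributions.
rng = np.random.default_rng(0)
controls = rng.poisson(1.5, size=200)    # stand-in for the good-effort clinic sample
simulators = rng.poisson(6.0, size=60)   # stand-in for instructed simulators
print(select_cutpoint(controls, simulators))
```

Fixing the specificity floor first and then maximizing sensitivity mirrors the a priori 90% specificity criterion noted in the abstract; relaxing the floor raises sensitivity at the cost of more false positives, which is the same trade-off the abstract reports between the simulator-derived and clinic-derived cut-points.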