Objectives: To measure inter-expert and intra-expert agreement in sleep spindle scoring, and to quantify how many experts are needed to build a reliable sleep spindle scoring dataset.
Methods: The EEG dataset comprised 400 randomly selected 115-s segments of stage 2 sleep from 110 sleeping subjects in the general population (age: 57±8, range: 42-72 years). To assess expert agreement, a total of 24 Registered Polysomnographic Technologists (RPSGTs) scored spindles in a subset of the EEG dataset at a single electrode location (C3-M2). Intra-expert and inter-expert agreement were calculated as F1-scores, Cohen's kappa (κ), and intraclass correlation coefficients (ICCs).
Results: We found an average intra-expert F1-score agreement of 72±7% (κ: 0.66±0.07). The average inter-expert agreement was 61±6% (κ: 0.52±0.07). The amplitude and frequency of individual spindles were estimated with higher reliability than spindle duration. The reliability of sleep spindle scoring can be improved by using qualitative confidence scores rather than a dichotomous yes/no scoring system.
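The abstract does not give the formulas behind these agreement measures; as a minimal illustration (not the paper's actual pipeline), the two pairwise metrics can be computed from two raters' binary, per-sample spindle labels as follows. The rater labels below are invented toy data:

```python
# Sketch: pairwise scoring agreement between two raters, expressed as an
# F1-score and as Cohen's kappa. Labels are binary per-sample decisions
# (1 = scored as spindle, 0 = non-spindle); the data here is illustrative.

def f1_score(a, b):
    """F1 of rater b's labels, treating rater a's labels as the reference."""
    tp = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)  # both score spindle
    fp = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)  # only b scores spindle
    fn = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)  # only a scores spindle
    if 2 * tp + fp + fn == 0:
        return 0.0
    return 2 * tp / (2 * tp + fp + fn)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary label sequences."""
    n = len(a)
    p_obs = sum(1 for x, y in zip(a, b) if x == y) / n  # observed agreement
    pa = sum(a) / n                                     # rater a's positive rate
    pb = sum(b) / n                                     # rater b's positive rate
    p_exp = pa * pb + (1 - pa) * (1 - pb)               # agreement expected by chance
    if p_exp == 1.0:
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: ten samples scored by two hypothetical raters.
rater_a = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
rater_b = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1]
print(round(f1_score(rater_a, rater_b), 2))      # → 0.8
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.6
```

Note that kappa discounts the agreement expected by chance, which is why a 61% raw F1 overlap and a κ of 0.52 can coexist in the reported results.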
Conclusions: We estimate that 2-3 experts are needed to build a spindle scoring dataset with 'substantial' reliability (κ: 0.61-0.8), and 4 or more experts are needed to build a dataset with 'almost perfect' reliability (κ: 0.81-1).
Significance: Spindle scoring is a critical part of sleep staging, and spindles are believed to play an important role in development, aging, and diseases of the nervous system.
Keywords: Agreement; Electroencephalography; Event detection; Inter-expert; Inter-rater; Intra-expert; Intra-rater; Polysomnography; Reliability; Sleep scoring; Sleep spindles; Sleep staging.
Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.