Introduction: Extraction of Doppler-based measurements from feto-placental Doppler images is crucial for identifying vulnerable newborns prenatally. However, this process is time-consuming, operator-dependent, and prone to errors.
Methods: To address this, our study introduces an artificial intelligence (AI)-enabled workflow for automating feto-placental Doppler measurements from four sites (i.e., Umbilical Artery (UA), Middle Cerebral Artery (MCA), Aortic Isthmus (AoI), and Left Ventricular Inflow and Outflow (LVIO)), involving classification and waveform delineation tasks. Our approach was developed using data from a low- and middle-income country and validated on a dataset from a high-income country, showcasing its potential for standardized and accurate analysis across varied healthcare settings.
Results: The classification of Doppler views was approached through three distinct blocks: (i) a Doppler velocity amplitude-based model with an accuracy of 94%, (ii) two Convolutional Neural Networks (CNNs) with accuracies of 89.2% and 67.3%, and (iii) Doppler view- and dataset-dependent confidence models to detect misclassifications with an accuracy higher than 85%. The extraction of Doppler indices utilized Doppler view-dependent CNNs coupled with post-processing techniques. Results yielded a mean absolute percentage error of 6.1 ± 4.9% (n = 682), 1.8 ± 1.5% (n = 1,480), 4.7 ± 4.0% (n = 717), and 3.5 ± 3.1% (n = 1,318) for the magnitude location of the systolic peak in LVIO, UA, AoI, and MCA views, respectively.
Conclusions: The developed models proved highly accurate in classifying Doppler views and extracting essential measurements from Doppler images. The integration of this AI-enabled workflow holds significant promise for reducing the manual workload and enhancing the efficiency of feto-placental Doppler image analysis, even for non-trained readers.
Keywords: artificial intelligence; convolutional neural networks; deep learning; feto-placental Doppler; ultrasound view classification; ultrasound waveform delineation.
© 2024 Aguado, Jimenez-Perez, Chowdhury, Prats-Valero, Sánchez-Martínez, Hoodbhoy, Mohsin, Castellani, Testa, Crispi, Bijnens, Hasan and Bernardino.