Background: Cardiovascular diseases (CVD) cause 19 million deaths each year and cost nations billions of dollars. Surrogate biomarkers are established methods for CVD risk stratification; however, manual inspection is costly, cumbersome, and error-prone. Contemporary artificial intelligence (AI) tools for segmentation and risk prediction, including older deep learning (DL) networks, employ simple merge connections, which may cause loss of semantic information and hence reduced accuracy.
Methodology: We hypothesize that DL networks enhanced with attention mechanisms can segment more accurately than older DL models. An attention mechanism concentrates on relevant features, helping the model better understand and interpret images. This study proposes MultiNet 2.0 (AtheroPoint, Roseville, CA, USA), in which two attention-based networks segment the lumen from common carotid artery (CCA) ultrasound images and predict CVD risk.
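The attention mechanism described above can be illustrated with a minimal additive attention gate of the kind used in attention-based segmentation networks (e.g., Attention U-Net). This is a sketch under stated assumptions: the weight shapes, names, and gating form here are illustrative and are not the actual MultiNet 2.0 architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, w_psi):
    """Additive attention gate over 2-D feature maps (illustrative).

    x     : skip-connection features from the encoder, shape (C, H, W)
    g     : gating features from the decoder, shape (C, H, W)
    W_x   : (C_int, C) weight matrix acting as a 1x1 convolution on x
    W_g   : (C_int, C) weight matrix acting as a 1x1 convolution on g
    w_psi : (C_int,) weights collapsing to a single attention map

    Returns x re-weighted by a per-pixel attention coefficient in (0, 1),
    so irrelevant regions are suppressed before the merge connection.
    """
    C, H, W = x.shape
    xf = x.reshape(C, -1)                         # flatten to (C, H*W)
    gf = g.reshape(C, -1)
    q = np.maximum(W_x @ xf + W_g @ gf, 0.0)      # ReLU(W_x x + W_g g)
    alpha = sigmoid(w_psi @ q).reshape(1, H, W)   # attention map, (0, 1)
    return x * alpha                              # attended skip features
```

Because the attention coefficients lie in (0, 1), the gated skip features are always attenuated copies of the originals; only spatial locations the gate deems relevant pass through at near-full strength into the decoder merge.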
Results: The database consisted of 407 ultrasound CCA images, left and right sides, taken from 204 patients. Two experts delineated borders on the 407 images, generating two ground truths (GT1 and GT2). The results substantially outperformed those of contemporary models: the lumen dimension (LD) errors against GT1 and GT2 were 0.13±0.08 mm and 0.16±0.07 mm, respectively, the lowest among comparable systems. The AUCs for detecting low-, moderate-, and high-risk patients from stenosis data were 0.88, 0.98, and 1.00 for GT1, and 0.93, 0.97, and 1.00 for GT2, respectively. The system can be fully adopted for clinical practice within the AtheroEdge™ model by AtheroPoint, Roseville, CA, USA.
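The per-class AUCs above are one-vs-rest ROC areas. As a minimal sketch of how such a value can be computed from predicted risk scores, the following uses the rank-based Mann-Whitney U formulation of AUC; the function name and inputs are illustrative assumptions, not the study's actual evaluation pipeline.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic (one-vs-rest, illustrative).

    labels : iterable of 0/1 (1 = the risk class being detected)
    scores : predicted scores, higher meaning more likely positive
    """
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    i = 0
    while i < n:
        # Group tied scores and assign them their average 1-based rank.
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0
        for k in range(i, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        i = j
    n_pos = sum(lab for _, lab in pairs)
    n_neg = n - n_pos
    # U statistic normalized by the number of positive/negative pairs.
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
```

For perfectly separated scores this returns 1.0 (as for the high-risk class here), and 0.5 for scores that carry no class information.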
Keywords: AtheroEdge; Attention-based deep learning; Common carotid artery; Stenosis; Ultrasound.
Copyright © 2024 Elsevier Ltd. All rights reserved.