Objectives: We sought to test the hypothesis that a novel 2-dimensional echocardiographic image analysis system using artificial intelligence-learned pattern recognition can rapidly and reproducibly calculate ejection fraction (EF).
Background: Echocardiographic EF by manual tracing is time-consuming, and visual assessment is inherently subjective.
Methods: We studied 218 patients (72 female), including 165 with abnormal left ventricular (LV) function. Auto EF uses pattern recognition trained on a database of >10,000 human EF tracings to automatically locate and track the LV endocardium in routine grayscale digital cineloops and to calculate EF in 15 s. Auto EF results were independently compared with EF by manually traced biplane Simpson's rule, with visual EF, and, in a subset, with magnetic resonance imaging (MRI).
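For reference, biplane Simpson's rule (method of disks) estimates LV volume from paired apical 4- and 2-chamber endocardial tracings, and EF follows from the end-diastolic and end-systolic volumes; the expressions below are the standard formulas, not values or methods restated from this study:

\[ V = \frac{\pi}{4}\sum_{i=1}^{20} a_i\, b_i\, \frac{L}{20}, \qquad \mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}} \times 100\% \]

where \(a_i\) and \(b_i\) are the diameters of the i-th disk in the two orthogonal apical views, \(L\) is the LV long-axis length, and EDV and ESV are the end-diastolic and end-systolic volumes.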
Results: Auto EF was feasible in 200 of the 218 consecutive patients (92%); of these measurements, 77% were completely automated and 23% required manual editing. Auto EF correlated well with manual EF (r = 0.98; 6% limits of agreement) and required less time per patient (48 ± 26 s vs. 102 ± 21 s; p < 0.01). Auto EF correlated well with visual EF by expert readers (r = 0.96; p < 0.001), but interobserver variability was lower for Auto EF than for visual EF (3.4 ± 2.9% vs. 9.8 ± 5.7%; p < 0.001). Visual EF by novice readers was less accurate (r = 0.82; 19% limits of agreement) and improved with trainee-operated Auto EF (r = 0.96; 7% limits of agreement). Auto EF also correlated with MRI EF (n = 21; r = 0.95; 12% limits of agreement) but underestimated absolute volumes (r = 0.95; bias of -36 ± 27 ml overall).
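The limits of agreement quoted above are assumed to follow the usual Bland-Altman convention (an assumption about the reporting convention, not restated from the study's methods):

\[ \mathrm{LoA} = \bar{d} \pm 1.96\, s_d \]

where \(\bar{d}\) is the mean difference between the two methods and \(s_d\) is the standard deviation of those differences.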
Conclusions: Auto EF automatically calculates EF with results comparable to manual biplane Simpson's rule and MRI, with less variability than visual EF, and has clinical potential.