Point clouds are a widely used format for storing information in a memory-efficient and easily manipulable representation. However, research on applying deep learning to point cloud mapping and subsequent organ reconstruction is limited. In particular, current methods for left atrium (LA) visualization using point clouds recorded from clinical mapping during cardiac ablation are proprietary and remain difficult to validate. Many clinics rely on additional imaging such as MRI/CT to improve the accuracy of LA mapping. In this study, for the first time, we proposed a novel deep learning framework for the automatic 3D surface reconstruction of the LA directly from point clouds acquired via widely used clinical mapping systems. The backbone of our framework is a 30-layer 3D fully convolutional neural network (CNN). The architecture contains skip connections that perform multi-resolution processing to maximize information extraction from the point clouds and ensure high-resolution predictions by combining features at different receptive-field levels. We used large kernels with increased receptive fields to address the sparsity of the point clouds. Residual blocks and activation normalization were further implemented to improve feature learning on sparse inputs. Owing to its lightweight design with low-depth layers, our CNN required approximately 10 s per patient. Independent testing on two cross-modality clinical datasets showed excellent Dice scores of 93% and surface-to-surface distances below 1 pixel. Overall, our study may provide a more efficient, cost-effective 3D LA reconstruction approach for use during ablation procedures and potentially lead to improved treatment of cardiac diseases.
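To illustrate the architectural ingredients named above (residual blocks with large 3D kernels, activation normalization, and skip connections that fuse features across resolutions), the following is a minimal PyTorch-style sketch. It is not the authors' 30-layer network: the layer counts, channel widths, kernel sizes, and the use of `InstanceNorm3d` as a stand-in for the paper's activation normalization are illustrative assumptions only.

```python
# Minimal sketch (not the authors' implementation) of a 3D residual block with large
# kernels and a two-resolution encoder-decoder with a skip connection, assuming PyTorch.
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    """3D residual block with an enlarged receptive field (kernel size is an assumption)."""
    def __init__(self, channels, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size, padding=pad),
            nn.InstanceNorm3d(channels),  # stand-in for the paper's activation normalization
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size, padding=pad),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut helps feature learning on sparse voxelized inputs.
        return self.act(x + self.body(x))


class TinyLASegNet(nn.Module):
    """Toy two-resolution encoder-decoder with one skip connection (not the 30-layer CNN)."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), ResidualBlock3D(base))
        self.down = nn.Conv3d(base, base * 2, 3, stride=2, padding=1)   # halve resolution
        self.enc2 = ResidualBlock3D(base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)       # restore resolution
        self.dec = nn.Sequential(ResidualBlock3D(base * 2), nn.Conv3d(base * 2, 1, 1))

    def forward(self, x):
        s1 = self.enc1(x)                        # high-resolution features
        s2 = self.enc2(self.down(s1))            # coarse features with a larger receptive field
        up = self.up(s2)
        return self.dec(torch.cat([up, s1], 1))  # skip connection fuses both resolutions


if __name__ == "__main__":
    # A sparse binary 3D volume standing in for a voxelized mapping point cloud (shape is illustrative).
    vol = torch.zeros(1, 1, 64, 64, 64)
    vol[0, 0, 20:44, 20:44, 20:44] = (torch.rand(24, 24, 24) > 0.95).float()
    print(TinyLASegNet()(vol).shape)  # -> torch.Size([1, 1, 64, 64, 64]) per-voxel logits
```

The sketch assumes the point cloud has been rasterized into a sparse occupancy volume and that the network predicts a per-voxel map from which the LA surface can be extracted; both assumptions go beyond what the abstract states and are included only to make the skip-connection and residual-block ideas concrete.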
Keywords: 3D surface reconstruction; convolutional neural network; left atrium; point cloud; sparse data.
Copyright © 2022 Xiong, Stiles, Yao, Shi, Nalar, Hawson, Lee and Zhao.