A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling

Mach Learn Multimodal Interact. 2017;10541:220-228. doi: 10.1007/978-3-319-67389-9_26. Epub 2017 Sep 7.

Abstract

In this paper, we investigate whether MR prostate segmentation performance can be improved by providing only one-point labeling information in the prostate region. To achieve this goal, we ask the physician to first click one point inside the prostate region, and we present a novel segmentation method that simultaneously integrates boundary detection results with patch-based prediction. In particular, since the clicked point belongs to the prostate, we first generate location-prior maps under two basic assumptions: (1) a voxel closer to the clicked point is more likely to be a prostate voxel, and (2) a voxel separated from the clicked point by more boundaries is less likely to be a prostate voxel. We apply the Canny edge detector and obtain two location-prior maps along the horizontal and vertical directions, respectively. The obtained location-prior maps, together with the original MR images, are then fed into a multi-channel fully convolutional network to perform patch-based prediction. With the resulting prostate-likelihood map, we employ a level-set method to obtain the final segmentation. We evaluate our method on 22 MR images collected from 22 different patients, with manual delineations provided as the ground truth. The experimental results not only show the promising performance of our method but also demonstrate that one-point labeling can largely improve the results where pure patch-based prediction fails.
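
The construction of the location-prior maps can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it operates on a single 2D slice, uses the Canny detector from scikit-image, and assumes an exponential decay for the distance term and a multiplicative penalty per crossed boundary, since the abstract does not specify the exact functional forms. The function name `location_prior_maps` and the parameters `sigma` and `decay` are hypothetical.

```python
import numpy as np
from skimage import feature


def location_prior_maps(image_slice, click_rc, sigma=2.0, decay=0.9):
    """Sketch of the two location-prior maps (horizontal and vertical).

    The prior for a voxel decreases with (1) its distance to the clicked
    point and (2) the number of Canny edges crossed when moving from the
    clicked point to the voxel along the horizontal or vertical direction.
    The exact decay functions are assumed, not taken from the paper.
    """
    edges = feature.canny(image_slice, sigma=sigma)      # boundary map
    rows, cols = image_slice.shape
    rr, cc = np.mgrid[0:rows, 0:cols]
    r0, c0 = click_rc

    # Assumption (1): prior decays with Euclidean distance to the click.
    dist = np.sqrt((rr - r0) ** 2 + (cc - c0) ** 2)
    dist_prior = np.exp(-dist / max(rows, cols))

    # Assumption (2): penalize each boundary crossed along a straight
    # horizontal (row-wise) or vertical (column-wise) path to the click.
    cum_h = np.cumsum(edges, axis=1)
    crossings_h = np.abs(cum_h - cum_h[:, c0][:, None])
    cum_v = np.cumsum(edges, axis=0)
    crossings_v = np.abs(cum_v - cum_v[r0, :][None, :])

    horiz_prior = dist_prior * decay ** crossings_h
    vert_prior = dist_prior * decay ** crossings_v
    return horiz_prior, vert_prior
```

In this sketch, the two returned maps would serve as additional input channels alongside the original MR slice for the multi-channel fully convolutional network described above.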