Simultaneous Co-segmentation of Tumors in PET-CT Images Using Deep Fully Convolutional Networks.
Med Phys. 2018 Dec 8.
Authors: Zhong Z, Kim Y, Plichta K, Allen BG, Zhou L, Buatti J, Wu X
PURPOSE: To investigate the use and efficiency of 3-D deep fully convolutional networks (DFCNs) for simultaneous tumor co-segmentation on dual-modality PET-CT images of non-small cell lung cancer (NSCLC).
METHODS: We used DFCN co-segmentation for NSCLC tumors in PET-CT images, exploiting both the CT and PET information. The proposed DFCN-based co-segmentation method consists of two coupled 3D-UNets with an encoder-decoder architecture, which communicate with each other to share complementary information between PET and CT. The weighted average of sensitivity and positive predictive value (denoted as Score), Dice similarity coefficients (DSCs), and average symmetric surface distances (ASSDs) were used to assess the performance of the proposed approach on 60 PET-CT image pairs. A Simultaneous Truth and Performance Level Estimation (STAPLE) consensus of three expert physicians' delineations was used as the reference. The proposed DFCN framework was compared to three graph-based co-segmentation methods.
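The DSC named above measures volumetric overlap between a predicted segmentation and a reference. As a minimal illustrative sketch (not the authors' evaluation code), it can be computed from two binary masks as follows, using toy 2-D arrays in place of 3-D volumes:

```python
import numpy as np

def dice_similarity(pred, ref):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (perfect agreement with the reference).
    """
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    # Convention: two empty masks are treated as perfect agreement
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 2-D masks standing in for 3-D tumor segmentations
pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
ref  = np.array([[1, 1, 0],
                 [0, 1, 0]])
print(round(dice_similarity(pred, ref), 3))  # 2*2/(3+3) -> 0.667
```

The same formula applies unchanged to the 3-D voxel masks used in the study; only the array shapes differ.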
RESULTS: Strong agreement with the STAPLE references was observed for the proposed DFCN co-segmentation on the PET-CT images. The average DSCs on CT and PET were 0.861 ± 0.037 and 0.828 ± 0.087, respectively, using DFCN, compared to 0.638 ± 0.165 and 0.643 ± 0.141, respectively, using the graph-based co-segmentation method. The proposed DFCN co-segmentation using both PET and CT also outperformed the deep learning method using either PET or CT alone.
CONCLUSIONS: The proposed DFCN co-segmentation outperforms existing graph-based segmentation methods and shows promise for further integration with quantitative multi-modality imaging tools in clinical trials.
PMID: 30537103 [PubMed – as supplied by publisher]