A k-Space Model of Movement Artefacts: Application to Segmentation Augmentation and Artefact Removal.
IEEE Trans Med Imaging. 2020 Mar 5.
Authors: Shaw R, Sudre CH, Varsavsky T, Ourselin S, Cardoso MJ
Patient movement during the acquisition of magnetic resonance imaging (MRI) data can cause unwanted image artefacts. These artefacts may degrade the quality of clinical diagnosis and cause errors in automated image analysis. In this work, we present a method for generating realistic motion artefacts from artefact-free magnitude MRI data for use in deep learning frameworks, increasing training appearance variability and ultimately making machine learning algorithms such as convolutional neural networks (CNNs) more robust to the presence of motion artefacts. By modelling patient movement as a sequence of randomly generated, ‘demeaned’, rigid 3D affine transforms, we resample artefact-free volumes and combine them in k-space to generate motion-artefact data. We show that augmenting the training of semantic segmentation CNNs with these artefacts yields models that generalise better and perform more reliably in the presence of artefact data, at negligible cost to their performance on clean data. We also show that models trained with artefact augmentation segment real-world test-retest image pairs more robustly. We further demonstrate that our augmentation model can be used to learn to retrospectively remove certain types of motion artefact from real MRI scans. Finally, we show that measures of uncertainty obtained from motion-augmented CNN models reflect the presence of artefacts and can thus provide relevant information to ensure the safe usage of deep learning extracted biomarkers in a clinical pipeline.
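The augmentation idea described above can be sketched in a few lines: draw a handful of small random rigid transforms, demean them so the average pose stays centred, resample the clean volume under each transform, and stitch contiguous k-space segments from the resampled volumes together before inverting the FFT. This is a minimal illustrative sketch, not the authors' implementation; the function names, the number of motion states, and the choice of contiguous blocks along a single phase-encode axis are all assumptions.

```python
# Illustrative sketch of motion-artefact simulation via k-space mixing.
# Assumptions (not from the paper): 4 motion states, contiguous k-space
# blocks along axis 0, small rotations/translations, linear resampling.
import numpy as np
from scipy.ndimage import affine_transform


def random_rigid_transforms(n, max_rot_deg=3.0, max_trans_vox=2.0, rng=None):
    """Draw n small random rigid 3D transforms and 'demean' them so the
    mean transform over the acquisition is approximately the identity."""
    rng = np.random.default_rng(rng)
    angles = rng.uniform(-max_rot_deg, max_rot_deg, size=(n, 3))
    shifts = rng.uniform(-max_trans_vox, max_trans_vox, size=(n, 3))
    angles -= angles.mean(axis=0)  # demean rotations
    shifts -= shifts.mean(axis=0)  # demean translations
    return angles, shifts


def _rotation_matrix(ax, ay, az):
    """Compose rotations about x, y, z (angles in degrees)."""
    ax, ay, az = np.deg2rad([ax, ay, az])
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx


def _resample(vol, angles, shift):
    """Apply one rigid transform (rotation about the volume centre,
    then a translation) with linear interpolation."""
    R = _rotation_matrix(*angles)
    centre = (np.array(vol.shape) - 1) / 2.0
    offset = centre - R @ centre + np.asarray(shift)
    return affine_transform(vol, R, offset=offset, order=1, mode="nearest")


def simulate_motion(vol, n_states=4, rng=None):
    """Corrupt a clean volume by filling each k-space segment from a
    differently-posed copy of the volume, then inverting the FFT."""
    angles, shifts = random_rigid_transforms(n_states, rng=rng)
    k = np.zeros(vol.shape, dtype=complex)
    # Contiguous blocks along the first axis stand in for groups of
    # phase-encode lines acquired while the subject held each pose.
    bounds = np.linspace(0, vol.shape[0], n_states + 1).astype(int)
    for i in range(n_states):
        moved = _resample(vol, angles[i], shifts[i])
        k_i = np.fft.fftshift(np.fft.fftn(moved))
        k[bounds[i]:bounds[i + 1]] = k_i[bounds[i]:bounds[i + 1]]
    # Magnitude image, matching the magnitude-MRI setting in the paper.
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k)))


# Usage: corrupt a toy cube phantom.
vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 1.0
corrupted = simulate_motion(vol, n_states=4, rng=0)
```

Because the transforms are demeaned, the simulated subject is on average in the reference pose, so the corrupted image stays aligned with the clean one and the pair can be used directly for augmentation or for training artefact-removal models.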
PMID: 32149627 [PubMed – as supplied by publisher]