Deep learning for improving ZTE MRI images in free breathing

Research output: Contribution to journal › Article › Academic › peer-review



INTRODUCTION: Despite growing interest in lung MRI, its broader use in a clinical setting remains challenging. Several factors limit the image quality of lung MRI, such as the extremely short T2 and T2* relaxation times of the lung parenchyma and cardiac and breathing motion. Zero Echo Time (ZTE) sequences are sensitive to short-T2 and short-T2* species, paving the way to improved "CT-like" MR images. To overcome the motion limitation, a retrospectively respiratory-gated version of ZTE (ZTE4D), which can obtain images in 16 different respiratory phases during free breathing, was developed. Initial ZTE4D results, however, have shown motion artifacts. To improve image quality, deep learning with fully convolutional neural networks (FCNNs) has been proposed. CNNs have been widely used in MR imaging, but they have not yet been applied to improving free-breathing lung imaging. Our proposed pipeline facilitates clinical work with patients who have difficulty with, or are unable to perform, breath-holding, or when gating techniques are inefficient due to an irregular respiratory pace.

MATERIALS AND METHODS: After IRB approval and signed informed consent, free-breathing ZTE4D and breath-hold ZTE3D images were obtained from 10 healthy volunteers on a 1.5 T MRI scanner (GE Healthcare Signa Artist, Waukesha, WI). The ZTE4D acquisition captured all 16 phases of the respiratory cycle. For the ZTE breath-hold scans, subjects were instructed to hold their breath at 5 different inflation levels ranging from full expiration to full inspiration. The dataset of ZTE breath-hold images from the 10 volunteers was split into 8 volunteers for training, 1 for validation and 1 for testing. In total, 800 ZTE breath-hold images were constructed by adding Gaussian noise and applying image transformations (translations, rotations) to imitate the effect of motion over the respiratory cycle and the blurring from varying diaphragm positions, as they appear in ZTE4D. These sets were used to train an FCNN model to remove the artificially added noise and transformations from the ZTE breath-hold images and reproduce the original image quality. Mean squared error (MSE) was used as the loss function. The remaining 2 healthy volunteers' ZTE4D images were used to test the model and qualitatively assess the predicted images.
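The abstract does not publish the augmentation code; a minimal sketch of the described degradation step (Gaussian noise plus a random in-plane translation and rotation, here via NumPy and SciPy) might look as follows. The function name, parameter ranges, and noise level are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def degrade_slice(img, rng, noise_sigma=0.02, max_shift=5.0, max_angle=3.0):
    """Simulate ZTE4D-like motion artifacts on a breath-hold slice.

    Applies a random translation and rotation, then adds Gaussian
    noise. The clean input `img` serves as the training target for
    the FCNN (MSE loss); the returned image is the network input.
    NOTE: shift/angle/noise ranges are assumed, not from the paper.
    """
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)
    angle = rng.uniform(-max_angle, max_angle)
    out = shift(img, (dy, dx), mode="nearest")               # translation
    out = rotate(out, angle, reshape=False, mode="nearest")  # rotation
    out = out + rng.normal(0.0, noise_sigma, img.shape)      # Gaussian noise
    return out.astype(np.float32)

# Build one degraded/clean training pair from a toy breath-hold slice
rng = np.random.default_rng(0)
clean = np.zeros((64, 64), dtype=np.float32)
clean[20:44, 20:44] = 1.0          # toy "anatomy" patch
noisy = degrade_slice(clean, rng)
mse = float(np.mean((noisy - clean) ** 2))
```

Repeating this with fresh random draws per slice yields the kind of 800-image paired training set the methods describe.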

RESULTS: Our model obtained an MSE of 0.09% on the training set and 0.135% on the validation set. When tested on unseen data, the images predicted by our model showed improved contrast of the pulmonary parenchyma against air-filled regions (airways or air trapping). The SNR of the lung parenchyma improved by a factor of 1.98, and the CNR between lung and blood, which indicates the visibility of the intrapulmonary vessels, improved by 4.2%. Our network was able to reduce ghosting artifacts, such as those from diaphragm movement, as well as blurring, and to enhance image quality.
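The abstract does not state how SNR and CNR were measured; a common ROI-based formulation (mean signal over the standard deviation of background air) is sketched below. The ROI placement and these exact formulas are assumptions, not the authors' protocol.

```python
import numpy as np

def snr(signal_roi, background_roi):
    """SNR: mean tissue signal over the std of a background-air ROI."""
    return float(np.mean(signal_roi) / np.std(background_roi))

def cnr(roi_a, roi_b, background_roi):
    """CNR between two tissues (e.g. lung parenchyma vs. blood)."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(background_roi))

# Toy patches standing in for parenchyma, vessel, and background air
rng = np.random.default_rng(1)
lung = 0.3 + 0.05 * rng.standard_normal(100)
blood = 0.8 + 0.05 * rng.standard_normal(100)
air = 0.05 * rng.standard_normal(100)

snr_lung = snr(lung, air)
cnr_lung_blood = cnr(lung, blood, air)
```

Comparing such metrics on matched ROIs before and after denoising gives improvement factors like the 1.98x SNR gain reported above.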

DISCUSSION: Free-breathing 3D and 4D lung imaging with MRI is feasible, but its quality is not yet acceptable for clinical use. It can, however, be improved with deep learning techniques. Our FCNN improves the visual image quality and reduces artifacts of free-breathing ZTE4D. Our main goal was to remove ghosting artifacts from the ZTE4D images and thereby improve their diagnostic quality. On visual inspection, the diaphragm contour became sharper, with less blurring of the anatomical structures and lung parenchyma.

CONCLUSION: With FCNNs, the image quality of free-breathing ZTE4D lung MRI can be improved, enabling better visualization of the lung parenchyma in different respiratory phases.

Original language: English
Journal: Magnetic Resonance Imaging
Publication status: E-pub ahead of print - 18 Jan 2023

Bibliographical note

Copyright © 2023. Published by Elsevier Inc.


