Sinogram Inpainting with Generative Adversarial Networks and Shape Priors
Abstract
1. Introduction
1.1. Related Work
1.2. Utilising Object Shape Information
2. Proposed Approach
2.1. Generative Adversarial Networks for Sinogram Inpainting with Edge Information
2.1.1. Deep Convolutional GAN and U-Net—The pix2pix Architecture
2.1.2. Adapted Loss Function
2.2. Encoding the Shape Prior
Architectural Details
3. Datasets
3.1. Preparing the SophiaBeads Training Data
3.2. Synthetic Training Data
3.3. GAN Training
4. Results
4.1. Estimation Performance on the SophiaBeads Dataset
4.2. Effect of Inconsistencies between the Two Modalities on the Synthetic Dataset
5. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
XCT | X-ray Computed Tomography
CAD | Computer-Aided Design
(DC)GAN | (Deep Convolutional) Generative Adversarial Network
CNN | Convolutional Neural Network
References
Method | 30° Missing | 60° Missing | 90° Missing
---|---|---|---
(1) Replacing missing projections with CAD data | 22.5 | 19.2 | 17.1
(2) Replacing missing projections with CAD data, with attenuation-value scaling | 23.1 | 21.3 | 19.7
(3) Linear interpolation | 23.6 | 22.6 | 20.0
(4) Interpolation using a U-Net | 9.9 | 8.4 | 7.7
(5) Interpolation using a U-Net and CAD data | 16.8 | 12.9 | 11.0
(6) Interpolation using a GAN | 7.0 | 3.82 | 2.0
(7) Interpolation using pix2pix and CAD data | 29.1 | 27.8 | 27.0
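Row (3) above fills the missing angular wedge by linear interpolation between the nearest measured projections. The snippet below is a minimal sketch of such a baseline, assuming a parallel-beam sinogram stored as an (angles × detector bins) NumPy array with a single contiguous block of missing views; the array layout, function name and gap indices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interpolate_missing_views(sinogram, first_missing, last_missing):
    """Fill a contiguous block of missing projection views by linearly
    blending the last measured view before the gap with the first
    measured view after it."""
    filled = sinogram.copy()
    before = sinogram[first_missing - 1]                      # last measured view before the gap
    after = sinogram[(last_missing + 1) % sinogram.shape[0]]  # first measured view after the gap
    n_gap = last_missing - first_missing + 1
    for k in range(n_gap):
        w = (k + 1) / (n_gap + 1)                             # blending weight grows across the gap
        filled[first_missing + k] = (1.0 - w) * before + w * after
    return filled

# Example: a 180-view sinogram (one view per degree) with a 30-degree wedge missing.
sino = np.random.rand(180, 256)
completed = interpolate_missing_views(sino, first_missing=75, last_missing=104)
```

The U-Net, GAN and pix2pix rows replace this fixed blending rule with a learned mapping from the incomplete sinogram, optionally combined with the CAD-derived shape prior, to the completed sinogram.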
Method | 30° Missing | 60° Missing | 90° Missing
---|---|---|---
Reconstruction from untreated (incomplete) sinograms | 69.1 | 66.4 | 63.8
Reconstruction corresponding to Table 1 (2) | 61.3 | 58.2 | 56.3
Reconstruction corresponding to Table 1 (3) | 61.2 | 61.8 | 59.4
Reconstruction corresponding to Table 1 (4) | 52.7 | 51.4 | 51.4
Reconstruction corresponding to Table 1 (5) | 60.4 | 56.5 | 55.1
Reconstruction corresponding to Table 1 (6) | 53.4 | 51.4 | 50.9
Reconstruction corresponding to Table 1 (7) | 69.6 | 67.9 | 66.5
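The rows above report reconstruction quality for images computed from the completed sinograms of Table 1. The sketch below only illustrates such an evaluation step, assuming parallel-beam data, filtered back-projection from scikit-image, and a root-mean-square error against a reference reconstruction; the metric actually reported in the table and the paper's own reconstruction pipeline are not reproduced here, and the function name is hypothetical.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_and_compare(completed_sinogram, reference_image, theta):
    """Reconstruct an image from a completed sinogram with filtered
    back-projection and compare it with a reference reconstruction.

    completed_sinogram : (n_detector_bins, n_angles) array, one column per view
    reference_image    : square reference reconstruction (e.g., from fully sampled data)
    theta              : projection angles in degrees, one per column
    """
    recon = iradon(completed_sinogram, theta=theta, filter_name="ramp",
                   output_size=reference_image.shape[0])      # match the reference grid
    rmse = np.sqrt(np.mean((recon - reference_image) ** 2))   # stand-in quality measure
    return recon, rmse
```

In practice, the same routine would be run once per completed sinogram from rows (2) to (7), and once on the raw incomplete sinogram to obtain the untreated baseline in the first row.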
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Valat, E.; Farrahi, K.; Blumensath, T. Sinogram Inpainting with Generative Adversarial Networks and Shape Priors. Tomography 2023, 9, 1137-1152. https://doi.org/10.3390/tomography9030094