Editorial

Special Issue: Artificial Intelligence in Advanced Medical Imaging

by Huang Jia 1, Qingliang Jiao 1,* and Ming Liu 2

1 Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
2 Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(12), 1229; https://doi.org/10.3390/bioengineering11121229
Submission received: 19 November 2024 / Accepted: 29 November 2024 / Published: 5 December 2024
(This article belongs to the Special Issue Artificial Intelligence in Advanced Medical Imaging)

1. Introduction

Medical imaging is of great significance in modern medicine and is a crucial part of medical diagnosis. Doctors can use imaging techniques such as X-ray [1], CT [2], MRI [3], and ultrasound [4] to obtain internal images of the human body. In early disease diagnosis, medical imaging can reveal hidden lesions such as tumors at an early stage [5], increasing the likelihood of successful treatment. In surgical planning, effective features [6] segmented from medical images provide surgeons with information on the location and size of a lesion and its relationship with the surrounding tissue, helping them plan a safe and effective surgical path and reducing surgical risks and complications. During treatment, medical imaging can monitor the therapeutic effect in real time and provide a basis for adjusting the treatment plan.
Deep learning [7,8,9,10] is widely used in the field of medical imaging [11]. It can process massive medical image datasets [12], combining image restoration [13], denoising [14], and other techniques to mine valuable information. Its image classification [15] ability can accurately distinguish between normal and diseased tissue images, helping doctors screen key cases. In image segmentation [16], it can precisely delineate the boundaries of organs and lesions [17], which is of great significance for surgical planning and radiotherapy positioning. In addition, by analyzing time-series medical images, it can predict disease progression and provide a basis for preventive measures. Deep learning tools are trained on large amounts of labeled data [18] and continually improve in diagnostic accuracy and reliability [19]; they can relieve the burden on doctors, advance medical imaging technology, and improve the efficiency and quality of medical diagnosis [20].
We first introduce several notable studies on artificial intelligence in advanced medical imaging. Then, we review the work published in this Special Issue. Finally, we provide a brief summary and introduce our new Special Issue.

2. Related Work

In this section, we briefly review excellent studies related to the scope of this Special Issue. Specifically, we cover medical image processing, learning methods based on deep learning, and multi-modality medical image fusion.

2.1. Medical Image Processing

Medical image processing can be formulated as an image-to-image transformation problem, in which deep learning models learn the high-dimensional nonlinear relationship between image inputs and outputs. Scholars have designed many different network architectures around one key question: how can this relationship be learned effectively?
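To make this formulation concrete, a generic supervised objective can be written as follows. This is a textbook-style sketch rather than a formulation taken from any specific paper cited in this section:

```latex
% A network f_theta maps a degraded input x (e.g., a low-dose CT slice)
% to an estimate of the clean target y; training minimizes an empirical
% pixel-wise loss L (e.g., the L1 or L2 norm) over N paired examples.
\hat{y} = f_\theta(x), \qquad
\theta^\ast = \arg\min_\theta \frac{1}{N} \sum_{i=1}^{N}
  \mathcal{L}\bigl(f_\theta(x_i),\, y_i\bigr)
```

The architectures surveyed below differ mainly in how they parameterize the mapping f.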
The convolutional neural network (CNN) is a popular backbone network that has been used for medical image denoising [21,22], enhancement [23,24,25], and super-resolution [26,27,28,29]. For example, the authors of [23] reported a residual network-based method for texture enhancement in low-dose CT images. The authors of [28] used a residual network for parallel MRI reconstruction; this method utilizes the correlation between the real and imaginary parts of an MRI image and considers the relevant term of the k-space data, which allows it to automatically capture MR image correlations across channels. The authors of [30] proposed a network architecture combining a residual network, an attention mechanism, and U-Net for CT image reconstruction. Zhang et al. [31] designed a squeeze attention module that uses the global spatial information of the input to obtain global descriptors for MRI super-resolution.
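As an illustration of these building blocks, the following PyTorch sketch combines residual learning with squeeze-style channel attention in the spirit of [23,31]. It is a minimal re-implementation for exposition, not the authors' released code, and the layer sizes are assumptions:

```python
# Minimal residual block with squeeze-and-excitation-style channel attention.
import torch
import torch.nn as nn

class SqueezeAttention(nn.Module):
    """Global average pooling yields a per-channel descriptor ('squeeze'
    of global spatial information), which then gates the feature map."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                               # excitation weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                           # reweight channels

class ResidualSEBlock(nn.Module):
    """Conv -> ReLU -> Conv -> channel attention, with an identity skip."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            SqueezeAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                         # residual learning

# Usage: one feature map of a hypothetical CT denoiser.
block = ResidualSEBlock(64)
feat = torch.randn(1, 64, 128, 128)
print(block(feat).shape)                                # torch.Size([1, 64, 128, 128])
```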
The Transformer, originally a backbone network in natural language processing, has recently gained widespread interest in computer vision due to its powerful global information modeling ability [32]. Zou et al. [33] designed a multi-scale deformable Transformer to capture and fuse significant auxiliary information, which can improve the quality of knee MR images. Wu et al. [34] showed that the Swin Transformer performs well in MRI reconstruction, and it has also been used for super-resolution CT imaging [35]. To reduce the complexity of the Transformer, Korkmaz et al. [36] used cross-attention Transformers, which maintain linear complexity with respect to the feature map size in MRI reconstruction. To incorporate local information, the authors of [37] reported a hybrid CNN–Transformer network for ultrasound image segmentation, in which the CNN learns local information and the Transformer extracts global features.
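The hybrid design described in [37] can be sketched as follows: a convolution extracts local features, and a self-attention layer over the flattened spatial grid models global context. This is a schematic sketch under stated assumptions, not the published HCTNet architecture; the dimensions are illustrative:

```python
# Hybrid CNN-Transformer block: local convolution + global self-attention.
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(                     # CNN branch: local features
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.global_ctx = nn.TransformerEncoderLayer(   # global features
            d_model=channels, nhead=heads,
            dim_feedforward=2 * channels, batch_first=True,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.local(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # B x (H*W) x C token sequence
        tokens = self.global_ctx(tokens)                # self-attention over all positions
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feat = torch.randn(1, 64, 32, 32)
print(HybridCNNTransformer()(feat).shape)               # torch.Size([1, 64, 32, 32])
```

Flattening the feature map into a token sequence is what gives the attention layer its global receptive field, at quadratic cost in the number of pixels, which is exactly the complexity that the linear-attention variant of [36] targets.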
In addition to the CNN and the Transformer, the multilayer perceptron (MLP) [38] also shows potential as a backbone network for biomedical image processing. Pan et al. [39] demonstrated that the MLP-Mixer is a powerful tool for CT image segmentation. However, the MLP still requires further development for medical imaging applications.
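A minimal MLP-Mixer block, the backbone family used in [39], alternates a token-mixing MLP (across spatial patches) with a channel-mixing MLP. The patch count and hidden widths below are illustrative assumptions, not values from the cited paper:

```python
# One MLP-Mixer block: token mixing followed by channel mixing.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_tokens: int, dim: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(                 # mixes across spatial tokens
            nn.Linear(n_tokens, n_tokens * 2), nn.GELU(),
            nn.Linear(n_tokens * 2, n_tokens),
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(               # mixes across channels
            nn.Linear(dim, dim * 2), nn.GELU(),
            nn.Linear(dim * 2, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor: # x: B x tokens x dim
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))

tokens = torch.randn(1, 196, 256)                       # e.g., 14 x 14 patches, dim 256
print(MixerBlock(196, 256)(tokens).shape)               # torch.Size([1, 196, 256])
```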

2.2. Learning Method Based on Deep Learning

Deep learning is a typical data-driven method that requires a large amount of data to learn the relationship between input and output. However, it is often unrealistic to collect large amounts of paired data for medical image processing. To address this problem, a variety of learning strategies have been applied in medical image processing.
It is worth noting that the image denoising task is special in this respect: many studies on self-supervised learning have shown that denoising networks can be trained without clean references. Lehtinen et al. introduced the pioneering Noise2Noise network [40]. They trained the network with one noisy image as the input and another noisy image as the target, where the two noisy images are independent samples of the same underlying ground truth. Many self-supervised medical image denoising methods have been developed on the basis of this study. Wu et al. [41] used Noise2Noise as an image prior and fine-tuned the network weights iteratively at testing time. Yuan et al. designed the Half2Half model [42], which can reduce noise in CT images. Jung et al. [43] developed a deep network based on Noise2Noise to improve the quality of MR images.
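The Noise2Noise training idea [40] can be sketched in a few lines: the network is fit from one noisy realization to another of the same scene, so no clean reference is ever used as a target. The toy network and synthetic noise below are placeholders for illustration; in practice, the noisy pairs come from repeated acquisitions:

```python
# Noise2Noise sketch: train noisy input -> noisy target, no clean labels.
import torch
import torch.nn as nn

net = nn.Sequential(                                    # toy denoiser
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# 'clean' is used only to synthesize the two noisy views in this toy example;
# the training loop itself never sees it.
clean = torch.rand(8, 1, 64, 64)
for step in range(100):
    noisy_in = clean + 0.1 * torch.randn_like(clean)    # observation 1
    noisy_tgt = clean + 0.1 * torch.randn_like(clean)   # independent observation 2
    loss = loss_fn(net(noisy_in), noisy_tgt)            # no clean reference used
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the noise in the target is zero-mean and independent of the input, the expected loss is minimized by the same network that would be learned from clean targets.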
Self-supervised methods are less frequently used for other medical image processing tasks, whereas semi-supervised methods are widely applied in medical image segmentation. For example, Shi et al. [44] developed a semi-supervised segmentation method based on uncertainty estimation and a separate self-training strategy and achieved superior performance in CT and MR image segmentation. Lei et al. [45] used an adversarial consistency training strategy to capture the nonlinear relationships between labeled and unlabeled data and designed a bidirectional attention model based on dynamic convolution for medical image segmentation. Voulodimos et al. [46] combined a few-shot learning strategy with U-Net for COVID-19 infected area segmentation in CT images. Furthermore, momentum contrastive learning has also demonstrated high performance in few-shot COVID-19 diagnosis from chest CT images [47].
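A schematic version of such a semi-supervised objective, in the spirit of [44,45], combines a supervised term on labeled scans with a consistency term that penalizes disagreement between predictions for perturbed views of unlabeled scans. The toy model, perturbation, and weighting below are assumptions, not the cited methods:

```python
# Semi-supervised segmentation loss: supervised term + consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Conv2d(1, 2, 3, padding=1)                   # toy 2-class segmenter

def semi_supervised_loss(x_lab, y_lab, x_unlab, lam=0.1):
    sup = F.cross_entropy(model(x_lab), y_lab)          # labeled term
    noisy_view = x_unlab + 0.05 * torch.randn_like(x_unlab)
    p1 = F.softmax(model(x_unlab), dim=1)
    p2 = F.softmax(model(noisy_view), dim=1)
    cons = F.mse_loss(p1, p2)                           # unlabeled consistency term
    return sup + lam * cons

x_lab = torch.randn(2, 1, 64, 64)
y_lab = torch.randint(0, 2, (2, 64, 64))
x_unlab = torch.randn(4, 1, 64, 64)
print(semi_supervised_loss(x_lab, y_lab, x_unlab).item())
```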

2.3. Multi-Modality Medical Image Fusion

Generally, it is impossible to extract all of the required information from a single imaging modality, which limits clinical precision and efficiency. Hence, multi-modal medical image fusion, which combines information from various modalities in a single image, is valuable for clinical use.
Liang et al. [48] developed a multi-layer concatenation fusion network, which connects the feature maps of each CNN layer with the input map, and they used down-sampling to supplement the spatial information. Zhao et al. [49] used a generative adversarial network to implement medical image fusion, applying a dense block to refine intermediate-layer feature maps and prevent information loss. Zhang et al. [50] designed a self-supervised framework based on contrastive auto-encoding and convolutional information exchange for a CT and MR image fusion task. Wang et al. [51] showed that a weakly supervised learning framework is also a powerful tool for medical image fusion. Lakshmi et al. [52] proposed an adaptive MRI-PET image fusion method; they used a residual network to enhance the resolution of the PET image, and adaptive total variation was used to decompose the input images. Xiao et al. [53] reported a Siamese pyramid network for PET and CT image fusion, and they used structural similarity and the L1 norm to build the loss function. Medical image fusion based on deep learning has been widely developed; see, for example, [54,55].
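As an illustration of the SSIM-plus-L1 loss design mentioned for [53], the following sketch penalizes the fused image for departing, structurally and pixel-wise, from both source modalities. It uses a simplified global SSIM rather than the windowed version, and the weighting is an assumption:

```python
# Illustrative fusion loss: structural similarity + L1 fidelity to both sources.
import torch

def global_ssim(a: torch.Tensor, b: torch.Tensor, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def fusion_loss(fused, pet, ct, alpha=0.5):
    # Structural term: keep the fused image similar to both sources.
    ssim_term = 2.0 - global_ssim(fused, pet) - global_ssim(fused, ct)
    # L1 term: pixel-level fidelity to both modalities.
    l1_term = (fused - pet).abs().mean() + (fused - ct).abs().mean()
    return ssim_term + alpha * l1_term

pet, ct = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused = (pet + ct) / 2                                  # placeholder fusion result
print(fusion_loss(fused, pet, ct).item())
```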

3. Special Issue Articles

The Special Issue titled “Artificial Intelligence in Advanced Medical Imaging” contains 15 articles, which we summarize below.
Rahman et al. [56] proposed a CT image segmentation method for the liver and tumors, combining U-Net and a residual network to segment the liver and assess the region of interest (ROI).
The red dot system draws on radiographers’ expertise in identifying anomalies to help distinguish radiological abnormalities and manage them before the radiologist’s report is issued. Oglat et al. [57] explored whether radiographer reports are necessary and discussed whether any benefits can be highlighted to encourage health authorities worldwide to allow radiographers to write clinical reports.
Liu et al. [58] proposed an end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify abnormal areas and diagnose the histological subtypes of renal cell carcinoma.
Ali et al. [59] developed a framework that combines the Internet of Things (IoT) and deep learning to identify lung cancer. The method deploys a multi-layered non-local Bayes model to manage the early diagnosis process. The Internet of Medical Things can help determine effective predictive factors through the use of sensors and image processing techniques.
The same authors also identified lung cancer via a deep learning IoT system [60]. Lung cancer-related information is obtained from the IoT and classified as benign or malignant, and Particle Swarm Optimization is used to optimize the proposed method.
Diffuse optical tomography is a powerful non-invasive tool for detecting breast cancer. Hauptman et al. [61] proposed a machine learning method based on extreme gradient boosting (XGBoost) to detect tumors in inhomogeneous breasts, using genetic programming to improve its performance.
To detect abdominal hemorrhage, Park et al. [62] applied deep learning to CT imaging. They developed a cascade model to detect small lesions; in their experiments, the method achieved 93.22% sensitivity and 99.60% specificity.
Deformable medical image registration is a critical task in clinical applications. Zou et al. [63] proposed a patient-specific method for deformable lung CT image registration, which decomposes a large deformation field into multiple continuous intermediate fields. In addition, the trained network can estimate the deformation field without directly relying on any intermediary images.
Dunn et al. [64] showed that an incremental multiple-resolution residual network is useful for lung cancer subtype prediction. They believe this study can help close the gap between algorithmic research and clinical application.
Qureshi et al. [65] evaluated the performance of four classical deep learning architectures, namely U-Net, U-NeXt, DeepLabV3+, and ConResNet, for extraocular muscle (EOM) segmentation. They also assessed the results using estimated EOM centroid locations, which are crucial for 3D biomechanical modeling and for the clinical management of strabismus.
For the diagnosis of Alzheimer’s disease, Illakiya et al. [66] designed an adaptive hybrid attention network that includes enhanced non-local attention and coordinate attention. They also proposed an adaptive feature aggregation module to fuse local and non-local feature maps.
Hernandez-Torres et al. [67] compared the performance of five deep learning object detection models with varying architectures and numbers of trainable parameters. Their experiments showed that the YOLOv7-tiny object detection model had the best mean average precision and inference time, and it was selected as the optimal model for this application.
Gu et al. [68] applied deep transfer learning to medical image classification and proposed the “Real-World Feature Transfer Learning” framework. The authors used models pre-trained on large-scale general datasets, such as ImageNet, as feature extractors to classify pneumonia X-ray images. They also carried out extensive experiments, for example, converting grayscale images into RGB format by replicating the grayscale channel. The results show that this method performed excellently across multiple metrics and outperformed models trained from scratch, strongly demonstrating the effectiveness of transfer learning using general image data.
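The recipe described in [68] can be sketched as follows: replicate the grayscale channel so the image matches the three-channel input of an ImageNet-pretrained backbone, freeze the backbone as a feature extractor, and train a new classification head. The choice of ResNet-18 and the two-class head are illustrative assumptions, not details from the cited paper:

```python
# Transfer-learning sketch: frozen pretrained backbone + new head,
# with grayscale X-rays replicated into three channels.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                             # freeze feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 2)     # e.g., pneumonia vs. normal

gray = torch.rand(4, 1, 224, 224)                       # grayscale X-ray batch
rgb = gray.repeat(1, 3, 1, 1)                           # replicate channel -> RGB format
logits = backbone(rgb)                                  # only the new fc layer trains
print(logits.shape)                                     # torch.Size([4, 2])
```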
Zhang et al. [69] put forward an impulsive aggression prediction model that integrates physiological and facial expression information from video images, aiming to address the deficiencies of existing methods in predicting aggressive behavior. The model acquires subjects’ physiological parameters and facial expression information through imaging photoplethysmography (IPPG) and facial recognition technology and utilizes a random forest classifier to predict impulsive aggression. Experiments demonstrated that the model reaches an accuracy of 89.39%.
Rabbat et al. [70] used deep learning to automatically segment the levator ani muscle in 3D endovaginal ultrasound images. The training data were prepared through multiple preprocessing steps, and U-Net and its variant models were used for training and testing. The results show that the U-Net model performed excellently in terms of mean Intersection over Union and the Dice similarity coefficient and that it is superior to existing methods.

4. Conclusions and Prospect

The contributions in this Special Issue titled “Artificial Intelligence in Advanced Medical Imaging” take readers on a journey through topical research activities in deep learning and high-quality medical imaging, including deep learning-based image processing, deep learning methods, and multi-modal medical image fusion. We hope that this Special Issue provides readers with inspiration to further develop this field. Finally, we encourage readers to consider contributing to our new Special Issue (Artificial Intelligence in Advanced Medical Imaging—2nd Edition), and we look forward to further research on this topic.

Author Contributions

Conceptualization, Q.J.; validation, Q.J.; writing—original draft preparation, H.J.; writing—review and editing, Q.J. and H.J.; supervision, H.J. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

M.L., H.J., and Q.J. would like to express their deep gratitude to the Editor for her work on this Special Issue. They would also like to thank all the contributing authors and reviewers.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Perez, A.M.M.M.; Poletti, M.E. Characterization of digital systems used in medical X-ray imaging. Radiat. Phys. Chem. 2022, 200, 110307. [Google Scholar] [CrossRef]
  2. Guo, S.L.; Wang, H.; Agaian, S.; Han, L.N.; Song, X.W. LRENet: A location-related enhancement network for liver lesions in CT images. Phys. Med. Biol. 2024, 69, 035019. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, S.; Jiang, Z.W.; Yang, H.L.; Li, X.R.; Yang, Z.C. MRI-Based Medical Image Recognition: Identification and Diagnosis of LDH. Comput. Intell. Neurosci. 2022, 2022, 5207178. [Google Scholar] [CrossRef] [PubMed]
  4. Rehman, H.U.; Shuaib, M.; Ismail, E.A.; Li, S. Enhancing medical ultrasound imaging through fractional mathematical modeling of ultrasound bubble dynamics. Ultrason. Sonochem. 2023, 100, 106603. [Google Scholar] [CrossRef]
  5. Radhika, P.; Sofia Bobby, J.; Francis, S.V.; Femina, M.A. Towards accurate diagnosis: Exploring knowledge distillation and self-attention in multimodal medical image fusion. J. Exp. Theor. Artif. Intell. 2024, 1–30. [Google Scholar] [CrossRef]
  6. Zhi, Z.; Qing, M. Intelligent medical image feature extraction method based on improved deep learning. Technol. Health Care 2021, 29, 363–379. [Google Scholar] [CrossRef]
  7. Jiao, Q.L.; Guo, X.W.; Liu, M.; Kong, L.Q.; Hui, M.; Dong, L.Q.; Zhao, Y.J. Deep learning baseline correction method via multi-scale analysis and regression. Chemom. Intell. Lab. Syst. 2023, 235, 104779. [Google Scholar] [CrossRef]
  8. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  9. Jiao, Q.L.; Liu, M.; Ning, B.; Zhao, F.F.; Dong, L.Q.; Kong, L.Q.; Hui, M.; Zhao, Y.J. Image Dehazing Based on Local and Non-Local Features. Fractal Fract. 2022, 6, 262. [Google Scholar] [CrossRef]
  10. Bao, C.; Cao, J.; Ning, Y.Q.; Cheng, Y.; Hao, Q. Rega-Net: Retina Gabor Attention for Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6004905. [Google Scholar] [CrossRef]
  11. Adegun, A.A.; Viriri, S.; Ogundokun, R.O. Deep Learning Approach for Medical Image Analysis. Comput. Intell. Neurosci. 2021, 2021, 6215281. [Google Scholar] [CrossRef]
  12. Sun, H.Q.; Liu, Z.; Wang, G.Z.; Lian, W.M.; Ma, J. Intelligent Analysis of Medical Big Data Based on Deep Learning. IEEE Access 2019, 7, 142022–142037. [Google Scholar] [CrossRef]
  13. Hephzibah, R.; Anandharaj, H.C.; Kowsalya, G.; Jayanthi, R.; Chandy, D.A. Review on Deep Learning Methodologies in Medical Image Restoration and Segmentation. Curr. Med. Imaging 2023, 19, 844–854. [Google Scholar] [CrossRef]
  14. El-Shafai, W.; Abd El-Nabi, S.; El-Rabaie, E.M.; Ali, A.M.; Soliman, N.F.; Algarni, A.D.; Abd El-Samie, F.E. Efficient Deep-Learning-Based Autoencoder Denoising Approach for Medical Image Diagnosis. Comput. Mater. Contin. 2022, 70, 6107–6125. [Google Scholar] [CrossRef]
  15. Alqahtani, T.M. Big Data Analytics with Optimal Deep Learning Model for Medical Image Classification. Comput. Syst. Sci. Eng. 2023, 44, 1433–1449. [Google Scholar] [CrossRef]
  16. Xie, Q.S.; Li, Y.X.; He, N.J.; Ning, M.N.; Ma, K.; Wang, G. Unsupervised Domain Adaptation for Medical Image Segmentation by Disentanglement Learning and Self-Training. IEEE Trans. Med. Imaging 2024, 43, 4–14. [Google Scholar] [CrossRef]
  17. Zhuang, M.; Chen, Z.; Wang, H.; Tang, H.; He, J.; Qin, B.; Yang, Y.; Jin, X.; Yu, M.; Jin, B.; et al. Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 379–394. [Google Scholar] [CrossRef]
  18. Chen, J.G.; Yang, N.; Pan, Y.H.; Liu, H.L.; Zhang, Z.L. Synchronous Medical Image Augmentation framework for deep learning-based image segmentation. Comput. Med. Imaging Graph. 2023, 104, 102161. [Google Scholar] [CrossRef]
  19. Fakhouri, H.N.; Alawadi, S.; Awaysheh, F.M.; Alkhabbas, F.; Zraqou, J. A cognitive deep learning approach for medical image processing. Sci. Rep. 2024, 14, 4539. [Google Scholar] [CrossRef]
  20. Wang, C.T.; Zhang, J.H.; Liu, S.Y. Medical Ultrasound Image Segmentation with Deep Learning Models. IEEE Access 2023, 11, 10158–10168. [Google Scholar] [CrossRef]
  21. Hou, R.Z.; Li, F. IDPCNN: Iterative denoising and projecting CNN for MRI reconstruction. J. Comput. Appl. Math. 2022, 406, 113973. [Google Scholar] [CrossRef]
  22. Xiong, Z.; Han, Z.F.; Hong, S.G.; Han, X.L.; Cui, X.Y.; Wang, A.H. Artifact and Detail Attention Generative Adversarial Networks for Low-Dose CT Denoising. IEEE Trans. Med. Imaging 2021, 40, 3901–3918. [Google Scholar] [CrossRef]
  23. Xia, K.J.; Zhou, Q.H.; Jiang, Y.Z.; Chen, B.; Gu, X.Q. Deep residual neural network based image enhancement algorithm for low dose CT images. Multimedia Tools Appl. 2022, 81, 36007–36030. [Google Scholar] [CrossRef]
  24. Liu, J.Y.; Tian, Y.L.; Duzgol, C.; Akin, O.; Agildere, A.M.; Haberal, K.M.; Coskun, M. Virtual contrast enhancement for CT scans of abdomen and pelvis. Comput. Med. Imaging Graph. 2022, 100, 102094. [Google Scholar] [CrossRef] [PubMed]
  25. Chen, C.; Raymond, C.; Speier, W.; Jin, X.Y.; Cloughesy, T.F.; Enzmann, D.; Ellingson, B.M.; Arnold, C.W. Synthesizing MR Image Contrast Enhancement Using 3D High-Resolution ConvNets. IEEE Trans. Biomed. Eng. 2023, 70, 401–412. [Google Scholar] [CrossRef]
  26. Lyu, Q.; Shan, H.M.; Wang, G. MRI Super-Resolution with Ensemble Learning and Complementary Priors. IEEE Trans. Comput. Imaging 2020, 6, 615–624. [Google Scholar] [CrossRef]
  27. Zeng, W.; Peng, J.; Wang, S.S.; Liu, Q.G. A comparative study of CNN-based super-resolution methods in MRI reconstruction and its beyond. Signal Process. Image Commun. 2020, 81, 115701. [Google Scholar] [CrossRef]
  28. Wang, S.S.; Ke, Z.W.; Cheng, H.T.; Jia, S.; Ying, L.S.; Zheng, H.R.; Liang, D. DIMENSION: Dynamic MR imaging with both k-space and spatial prior knowledge obtained via multi-supervised network training. NMR Biomed. 2022, 35, e4131. [Google Scholar] [CrossRef]
  29. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 491–503. [Google Scholar] [CrossRef]
  30. Qiao, Z.W.; Du, C.C. RAD-UNet: A Residual, Attention-Based, Dense UNet for CT Sparse Reconstruction. J. Digit. Imaging 2022, 35, 1748–1758. [Google Scholar] [CrossRef]
  31. Zhang, Y.L.; Li, K.; Li, K.P.; Fu, Y. MR Image Super-Resolution with Squeeze and Excitation Reasoning Attention Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar] [CrossRef]
  32. Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. [Google Scholar] [CrossRef] [PubMed]
  33. Zou, B.; Ji, Z.; Zhu, C.; Dai, Y.; Zhang, W.; Kui, X. Multi-scale deformable transformer for multi-contrast knee MRI super-resolution. Biomed. Signal Process. Control 2023, 79, 104154. [Google Scholar] [CrossRef]
  34. Wu, Z.L.; Liao, W.B.; Yan, C.; Zhao, M.S.; Liu, G.W.; Ma, N.; Li, X.S. Deep learning based MRI reconstruction with transformer. Comput. Methods Programs Biomed. 2023, 233, 107452. [Google Scholar] [CrossRef] [PubMed]
  35. Hu, J.H.; Zheng, S.Z.; Wang, B.; Luo, G.X.; Huang, W.Q.; Zhang, J. Super-Resolution Swin Transformer and Attention Network for Medical CT Imaging. BioMed Res. Int. 2022, 2022, 4431536. [Google Scholar] [CrossRef] [PubMed]
  36. Korkmaz, Y.; Özbey, M.; Cukur, T. MRI Reconstruction with Conditional Adversarial Transformers. In Proceedings of the 5th International Workshop on Machine Learning for Medical Image Reconstruction, Singapore, 22 September 2022. [Google Scholar] [CrossRef]
  37. He, Q.Q.; Yang, Q.J.; Xie, M.H. HCTNet: A hybrid CNN-transformer network for breast ultrasound image segmentation. Comput. Biol. Med. 2023, 155, 106629. [Google Scholar] [CrossRef]
  38. Tu, Z.Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; Li, Y.X. MAXIM: Multi-Axis MLP for Image Processing. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar] [CrossRef]
  39. Pan, S.; Chang, C.; Wang, T.; Wynne, J.; Hu, M.; Lei, Y.; Liu, T.; Patel, P.; Roper, J.; Yang, X. Abdomen CT multi-organ segmentation using token-based MLP-Mixer. Med. Phys. 2022, 50, 3027–3038. [Google Scholar] [CrossRef]
  40. Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning Image Restoration without Clean Data. arXiv 2018, arXiv:1803.04189. [Google Scholar]
  41. Wu, D.F.; Kim, K.; Li, Q.Z. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med. Phys. 2021, 48, 7657–7672. [Google Scholar] [CrossRef]
  42. Yuan, N.M.; Zhou, J.; Qi, J.Y. Half2Half: Deep neural network based CT image denoising without independent reference data. Phys. Med. Biol. 2020, 65, 215020. [Google Scholar] [CrossRef]
  43. Jung, W.; Lee, H.-S.; Seo, M.; Nam, Y.; Choi, Y.; Shin, N.-Y.; Ahn, K.-J.; Kim, B.-S.; Jang, J. MR-self Noise2Noise: Self-supervised deep learning–based image quality improvement of submillimeter resolution 3D MR images. Eur. Radiol. 2023, 33, 2686–2698. [Google Scholar] [CrossRef]
  44. Shi, Y.H.; Zhang, J.; Ling, T.; Lu, J.W.; Zheng, Y.F.; Yu, Q.; Qi, L.; Gao, Y. Inconsistency-Aware Uncertainty Estimation for Semi-Supervised Medical Image Segmentation. IEEE Trans. Med. Imaging 2022, 41, 608–620. [Google Scholar] [CrossRef] [PubMed]
  45. Lei, T.; Zhang, D.; Du, X.G.; Wang, X.; Wan, Y.; Nandi, A.K. Semi-Supervised Medical Image Segmentation Using Adversarial Consistency Learning and Dynamic Convolution Network. IEEE Trans. Med. Imaging 2023, 42, 1265–1277. [Google Scholar] [CrossRef] [PubMed]
  46. Voulodimos, A.; Protopapadakis, E.; Katsamenis, I.; Doulamis, A.; Doulamis, N. A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images. Sensors 2021, 21, 2215. [Google Scholar] [CrossRef] [PubMed]
  47. Chen, X.C.; Yao, L.N.; Zhou, T.; Dong, J.M.; Zhang, Y. Momentum contrastive learning for few-shot COVID-19 diagnosis from chest CT images. Pattern Recognit. 2021, 113, 107826. [Google Scholar] [CrossRef] [PubMed]
  48. Liang, X.C.; Hu, P.Y.; Zhang, L.G.; Sun, J.G.; Yin, G.S. MCFNet: Multi-Layer Concatenation Fusion Network for Medical Images Fusion. IEEE Sensors J. 2019, 19, 7107–7119. [Google Scholar] [CrossRef]
  49. Zhao, C.; Wang, T.F.; Lei, B.Y. Medical image fusion method based on dense block and deep convolutional generative adversarial network. Neural Comput. Appl. 2021, 33, 6595–6610. [Google Scholar] [CrossRef]
  50. Zhang, Y.; Nie, R.C.; Cao, J.D.; Ma, C.Z. Self-Supervised Fusion for Multi-Modal Medical Images via Contrastive Auto-Encoding and Convolutional Information Exchange. IEEE Comput. Intell. Mag. 2023, 18, 68–80. [Google Scholar] [CrossRef]
  51. Wang, L.F.; Liu, Y.; Mi, J.; Zhang, J. MSE-Fusion: Weakly supervised medical image fusion with modal synthesis and enhancement. Eng. Appl. Artif. Intell. 2023, 119, 105744. [Google Scholar] [CrossRef]
  52. Lakshmi, A.; Rajasekaran, M.P.; Jeevitha, S.; Selvendran, S. An Adaptive MRI-PET Image Fusion Model Based on Deep Residual Learning and Self-Adaptive Total Variation. Arab. J. Sci. Eng. 2022, 47, 10025–10042. [Google Scholar] [CrossRef]
  53. Xiao, N.; Yang, W.T.; Qiang, Y.; Zhao, J.J.; Hao, R.; Lian, J.H.; Li, S. PET and CT Image Fusion of Lung Cancer with Siamese Pyramid Fusion Network. Front. Med. 2022, 9, 792390. [Google Scholar] [CrossRef]
  54. Zhou, T.; Cheng, Q.R.; Lu, H.L.; Li, Q.; Zhang, X.X.; Qiu, S. Deep learning methods for medical image fusion: A review. Comput. Biol. Med. 2023, 160, 106959. [Google Scholar] [CrossRef] [PubMed]
  55. Azam, M.A.; Khan, K.B.; Salahuddin, S.; Rehman, E.; Khan, S.A.; Khan, M.A.; Kadry, S.; Gandomi, A.H. A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics. Comput. Biol. Med. 2022, 144, 105253. [Google Scholar] [CrossRef] [PubMed]
  56. Rahman, H.; Bukht, T.F.N.; Imran, A.; Tariq, J.; Tu, S.; Alzahrani, A. A Deep Learning Approach for Liver and Tumor Segmentation in CT Images Using ResUNet. Bioengineering 2022, 9, 368. [Google Scholar] [CrossRef] [PubMed]
  57. Oglat, A.A.; Fohely, F.; Al Masalmeh, A.; Al Jbour, I.; Al Jaradat, L.; Athamnah, S.I. Attitudes toward the Integration of Radiographers into the First-Line Interpretation of Imaging Using the Red Dot System. Bioengineering 2023, 10, 71. [Google Scholar] [CrossRef]
  58. Liu, J.Y.; Yildirim, O.; Akin, O.; Tian, Y.L. AI-Driven Robust Kidney and Renal Mass Segmentation and Classification on 3D CT Images. Bioengineering 2023, 10, 116. [Google Scholar] [CrossRef]
  59. Ali, Y.H.; Chinnaperumal, S.; Marappan, R.; Raju, S.K.; Sadiq, A.T.; Farhan, A.K.; Srinivasan, P. Multi-Layered Non-Local Bayes Model for Lung Cancer Early Diagnosis Prediction with the Internet of Medical Things. Bioengineering 2023, 10, 138. [Google Scholar] [CrossRef]
  60. Ali, Y.H.; Chooralil, V.S.; Balasubramanian, K.; Manyam, R.R.; Raju, S.K.; Sadiq, A.T.; Farhan, A.K. Optimization System Based on Convolutional Neural Network and Internet of Medical Things for Early Diagnosis of Lung Cancer. Bioengineering 2023, 10, 320. [Google Scholar] [CrossRef]
  61. Hauptman, A.; Balasubramaniam, G.M.; Arnon, S. Machine Learning Diffuse Optical Tomography Using Extreme Gradient Boosting and Genetic Programming. Bioengineering 2023, 10, 382. [Google Scholar] [CrossRef]
  62. Park, Y.J.; Cho, H.S.; Kim, M.N. AI Model for Detection of Abdominal Hemorrhage Lesions in Abdominal CT Images. Bioengineering 2023, 10, 502. [Google Scholar] [CrossRef]
  63. Zou, J.; Liu, J.; Choi, K.S.; Qin, J. Intra-Patient Lung CT Registration through Large Deformation Decomposition and Attention-Guided Refinement. Bioengineering 2023, 10, 562. [Google Scholar] [CrossRef]
  64. Dunn, B.; Pierobon, M.; Wei, Q. Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis. Bioengineering 2023, 10, 690. [Google Scholar] [CrossRef] [PubMed]
  65. Qureshi, A.; Lim, S.; Suh, S.Y.; Mutawak, B.; Chitnis, P.V.; Demer, J.L.; Wei, Q. Deep-Learning-Based Segmentation of Extraocular Muscles from Magnetic Resonance Images. Bioengineering 2023, 10, 699. [Google Scholar] [CrossRef] [PubMed]
  66. Illakiya, T.; Ramamurthy, K.; Siddharth, M.V.; Mishra, R.; Udainiya, A. AHANet: Adaptive Hybrid Attention Network for Alzheimer’s Disease Classification Using Brain Magnetic Resonance Imaging. Bioengineering 2023, 10, 714. [Google Scholar] [CrossRef] [PubMed]
  67. Hernandez-Torres, S.I.; Hennessey, R.P.; Snider, E.J. Performance Comparison of Object Detection Networks for Shrapnel Identification in Ultrasound Images. Bioengineering 2023, 10, 807. [Google Scholar] [CrossRef] [PubMed]
  68. Gu, C.; Lee, M. Deep Transfer Learning Using Real-World Image Features for Medical Image Classification, with a Case Study on Pneumonia X-ray Images. Bioengineering 2024, 11, 406. [Google Scholar] [CrossRef]
  69. Zhang, B.R.; Dong, L.Q.; Kong, L.Q.; Liu, M.; Zhao, Y.J.; Hui, M.; Chu, X.H. Prediction of Impulsive Aggression Based on Video Images. Bioengineering 2023, 10, 942. [Google Scholar] [CrossRef]
  70. Rabbat, N.; Qureshi, A.; Hsu, K.T.; Asif, Z.; Chitnis, P.; Shobeiri, S.A.; Wei, Q. Automated Segmentation of Levator Ani Muscle from 3D Endovaginal Ultrasound Images. Bioengineering 2023, 10, 894. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
