Search Results (3,094)

Search Parameters:
Keywords = image retrieval

25 pages, 11935 KiB  
Article
Identifying the Spatial Coverage of Informal Settlements in Addis Ababa, Ethiopia, for Better Management and Policy Directions
by Melaku Eticha Taye, Elias Yitbarek Alemayehu and Mintesnot Woldeamanuel
Urban Sci. 2025, 9(4), 99; https://doi.org/10.3390/urbansci9040099 - 26 Mar 2025
Viewed by 60
Abstract
Addis Ababa, the capital city of Ethiopia and one of the fastest-expanding cities in Africa, is undergoing rapid urbanization, which has led to acute housing scarcity and the growth of informal settlements. The growth of informal settlements seems unstoppable and needs appropriate policy direction to create a sustainable city. Despite the significance of the challenges posed by informal settlements, their spatial coverage is not well documented. The aim of this research is to identify the spatial coverage of informal settlements after the restructuring of the city's boundary. This study reviewed the existing literature and various spatial data, including citywide line maps, the land use plan, cadastral data, and other spatial maps collected from different sources, including city sectoral offices. Furthermore, observation and interviews with experts in the field were conducted to better understand the context of informal settlements. The data were analyzed in ArcGIS 10.8 to identify the locations of informal settlements by overlaying those datasets and verifying the results with field observation in selected areas using recent satellite images. The results show that about 50 percent of the settlements are informal. It was also revealed that the existing data are fragmented, inconsistent, and difficult to access or retrieve. Informal settlements thus remain a critical and growing issue under rapid urbanization. The results can be used for academic research, devising appropriate policy directions, and decision-making for sustainable development.
(This article belongs to the Special Issue Sustainable Urbanization, Regional Planning and Development)
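
A minimal sketch of the kind of overlay analysis described in the abstract above, using geopandas in place of ArcGIS 10.8; the file names, layer contents, and the rule "built-up area without a cadastral parcel is a candidate informal settlement" are illustrative assumptions, not the study's exact workflow.

```python
import geopandas as gpd

# Hypothetical inputs: built-up footprints and formally registered cadastral parcels.
built_up = gpd.read_file("built_up_areas.gpkg")
cadastre = gpd.read_file("cadastral_parcels.gpkg")

# Align coordinate reference systems before overlaying.
cadastre = cadastre.to_crs(built_up.crs)

# Built-up areas that do not fall on registered parcels are flagged as candidate
# informal settlements; field verification would follow, as in the study.
informal = gpd.overlay(built_up, cadastre, how="difference")

share_informal = informal.area.sum() / built_up.area.sum()
print(f"Candidate informal share of built-up area: {share_informal:.1%}")
```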

18 pages, 6321 KiB  
Article
Enhanced Imaging in Scanning Transmission X-Ray Microscopy Assisted by Ptychography
by Shuhan Wu, Zijian Xu, Ruoru Li, Sheng Chen, Yingling Zhang, Xiangzhi Zhang, Zhenhua Chen and Renzhong Tai
Nanomaterials 2025, 15(7), 496; https://doi.org/10.3390/nano15070496 - 26 Mar 2025
Viewed by 66
Abstract
Scanning transmission X-ray microscopy (STXM) is a direct imaging technique with nanoscale resolution. However, its resolution is limited by the spot size on the sample, i.e., by the manufacturing technique of the focusing element. As an emerging high-resolution X-ray imaging technique, ptychography uses highly redundant data from overlapping scans, together with phase retrieval algorithms, to simultaneously reconstruct a high-resolution sample image and a probe function. In this study, we designed an accurate reconstruction strategy to obtain the probe spot with vibration effects eliminated, and developed an image enhancement technique for STXM by combining the reconstructed probe with a deconvolution algorithm. This approach significantly improves the resolution of STXM imaging and can overcome the limitation imposed by the focal spot on STXM resolution when the scanning step size is near or below the spot size, while the data processing time is much shorter than that of ptychography. Both simulations and experiments show that this approach can be applied to STXM data at different energies and different scan steps using the same focal spot retrieved via ptychography.
(This article belongs to the Special Issue New Advances in Applications of Nanoscale Imaging and Nanoscopy)
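
A minimal sketch of the probe-based deconvolution idea described above, assuming a probe intensity already retrieved by ptychography and using scikit-image's Richardson–Lucy routine as a stand-in for the authors' enhancement algorithm; the Gaussian probe and random image are placeholders.

```python
import numpy as np
from skimage import restoration

def enhance_stxm(stxm_image: np.ndarray, probe_intensity: np.ndarray,
                 num_iter: int = 30) -> np.ndarray:
    """Deconvolve an STXM map with the focal-spot intensity retrieved by ptychography."""
    # Normalize the probe so it acts as a point-spread function.
    psf = probe_intensity / probe_intensity.sum()
    # Richardson-Lucy iteratively sharpens the image blurred by the finite spot size.
    return restoration.richardson_lucy(stxm_image, psf, num_iter=num_iter)

# Hypothetical usage with a Gaussian stand-in for the retrieved probe.
yy, xx = np.mgrid[-16:16, -16:16]
probe = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
blurred = np.random.default_rng(0).random((128, 128))
sharpened = enhance_stxm(blurred, probe)
```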

31 pages, 8488 KiB  
Review
The Rise of the Brown–Twiss Effect
by David Charles Hyland
Photonics 2025, 12(4), 301; https://doi.org/10.3390/photonics12040301 - 25 Mar 2025
Viewed by 60
Abstract
Despite the simplicity of their flux-collecting hardware, robustness to misalignments, and immunity to seeing conditions, Intensity Correlation Imaging arrays using the Brown–Twiss effect to determine two-dimensional images have been burdened with very long integration times. The root cause is that the essential phase retrieval algorithms must use image-domain constraints, and the traditional signal-to-noise calculations do not account for these. Thus, the conventional formulations are not efficient estimators. Recently, the long integration times have been emphatically removed by a sequence of papers. This paper reviews the previous theoretical work that removes the long integration times, making Intensity Correlation Imaging a practical and inexpensive method for high-resolution astronomy.
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)
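
A minimal sketch of phase retrieval with image-domain constraints, the ingredient the review identifies as the root of the long integration times; this is the generic Fienup error-reduction loop with a support and non-negativity constraint, not the efficient estimator developed in the cited papers.

```python
import numpy as np

def error_reduction(fourier_magnitude: np.ndarray, support: np.ndarray,
                    n_iter: int = 200, seed: int = 0) -> np.ndarray:
    """Recover an image from its Fourier magnitude using support and positivity constraints."""
    rng = np.random.default_rng(seed)
    image = rng.random(fourier_magnitude.shape) * support
    for _ in range(n_iter):
        # Fourier-domain constraint: keep the current phase, impose the measured magnitude.
        spectrum = np.fft.fft2(image)
        spectrum = fourier_magnitude * np.exp(1j * np.angle(spectrum))
        image = np.fft.ifft2(spectrum).real
        # Image-domain constraints: zero outside the support, non-negative inside.
        image = np.where(support & (image > 0), image, 0.0)
    return image

# Hypothetical usage: magnitude of a small test object, rectangular support.
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0
mag = np.abs(np.fft.fft2(obj))
supp = np.zeros((64, 64), dtype=bool); supp[20:44, 24:40] = True
recovered = error_reduction(mag, supp)
```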

14 pages, 861 KiB  
Review
High-Resolution Vessel Wall Images and Neuropsychiatric Lupus: A Scoping Review
by Bruno L. D. Matos, Luiz F. M. Borella, Fernanda Veloso Pereira, Danilo Rodrigues Pereira, Simone Appenzeller and Fabiano Reis
Diagnostics 2025, 15(7), 824; https://doi.org/10.3390/diagnostics15070824 - 25 Mar 2025
Viewed by 196
Abstract
Background: Systemic lupus erythematosus (SLE) is a multisystem autoimmune disorder. Neuropsychiatric manifestations are frequently observed and are associated with increased morbidity and reduced quality of life. Magnetic resonance imaging (MRI) is the neuroimaging procedure of choice for investigation. High-resolution vessel wall imaging (HRVWI) is a neuroimaging methodology that allows active mapping of pathophysiological processes involving brain vessel walls. Methods: To exemplify the importance of HRVWI and its usefulness in patients with SLE, we carried out a scoping review (following PRISMA guidelines) using the PubMed and Embase databases. Results: We retrieved 10 studies that utilized HRVWI in neuropsychiatric SLE, including a total of 69 patients. The majority, 84% (58/69), were women, with ages ranging between 16 and 80 years (average 38.4 years). Approximately 46.3% (32/69) of patients had white matter lesions in the brain at the time of investigation, and 77% (53/69) had normal magnetic resonance angiography. Treatment with immunosuppressants led to the resolution of the majority of the findings. Conclusions: Imaging plays an important role in investigating neuropsychiatric SLE. HRVWI analysis is gaining more importance, with its ability to identify inflammation even if angiographic MRI sequences (3D TOF) are normal, allowing the institution of early immunosuppressant treatment and resolution of symptoms.
(This article belongs to the Special Issue Diagnosis and Management of Systemic Lupus Erythematosus)

27 pages, 10045 KiB  
Article
Vision-Language Models for Autonomous Driving: CLIP-Based Dynamic Scene Understanding
by Mohammed Elhenawy, Huthaifa I. Ashqar, Andry Rakotonirainy, Taqwa I. Alhadidi, Ahmed Jaber and Mohammad Abu Tami
Electronics 2025, 14(7), 1282; https://doi.org/10.3390/electronics14071282 - 24 Mar 2025
Viewed by 211
Abstract
Scene understanding is essential for enhancing driver safety, generating human-centric explanations for Automated Vehicle (AV) decisions, and leveraging Artificial Intelligence (AI) for retrospective driving video analysis. This study developed a dynamic scene retrieval system using Contrastive Language–Image Pretraining (CLIP) models, which can be optimized for real-time deployment on edge devices. The proposed system outperforms state-of-the-art in-context learning methods, including the zero-shot capabilities of GPT-4o, particularly in complex scenarios. By conducting frame-level analyses on the Honda Scenes Dataset, which contains about 80 h of annotated driving videos capturing diverse real-world road and weather conditions, our study highlights the robustness of CLIP models in learning visual concepts from natural language supervision. The results also showed that fine-tuning CLIP models such as ViT-L/14 (Vision Transformer) and ViT-B/32 significantly improved scene classification, achieving a top F1-score of 91.1%. These results demonstrate the system's ability to deliver rapid and precise scene recognition, meeting the critical requirements of advanced driver assistance systems (ADASs). This study shows the potential of CLIP models to provide scalable and efficient frameworks for dynamic scene understanding and classification. Furthermore, this work lays the groundwork for advanced autonomous vehicle technologies by fostering a deeper understanding of driver behavior, road conditions, and safety-critical scenarios, marking a significant step toward smarter, safer, and more context-aware autonomous driving systems.
(This article belongs to the Special Issue Intelligent Transportation Systems and Sustainable Smart Cities)
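
A minimal sketch of zero-shot scene classification with an off-the-shelf CLIP checkpoint through the Hugging Face transformers API; the scene labels and frame path are hypothetical, and the study itself fine-tunes CLIP rather than relying on zero-shot scores.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical driving-scene labels and video frame.
labels = ["a highway in heavy rain", "a clear urban intersection",
          "a construction zone at night", "a snowy rural road"]
frame = Image.open("frame_000123.jpg")

inputs = processor(text=labels, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores
probs = logits.softmax(dim=-1).squeeze(0)
print(labels[int(probs.argmax())], float(probs.max()))
```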

20 pages, 14467 KiB  
Article
Backward Integration of Nonlinear Shallow Water Model: Part 2: Vortex Merger
by Wen-Yih Sun
Atmosphere 2025, 16(4), 365; https://doi.org/10.3390/atmos16040365 - 24 Mar 2025
Viewed by 138
Abstract
Using the same initial condition for the vortex merger, the backward integrations of height and vorticity from Sun's shallow water model mirror the forward integrations. The system becomes very stable after a long forward or backward integration (integration time > 120). If we integrate forward in time from the initial t = t0 to t = tf and then use the result as the initial condition to integrate backward from tf to t0, the difference between the backward integration and the original system at t = t0 increases as the integration time increases. The difference is small if tf is less than 40, and the backward integration can retrieve the original initial pattern well. If tf > 60, the difference becomes noticeable and the backward integration cannot retrieve the original pattern, because of the accumulation of the small differences generated at each time step between the forward and backward numerical calculations. The reversible time span can be extended using implicit schemes or more elaborate time-stepping schemes.
(This article belongs to the Section Biosphere/Hydrosphere/Land–Atmosphere Interactions)
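
A minimal sketch of the forward-then-backward integration experiment described above, applied to the Lorenz system with RK4 rather than to the shallow water model: integrating forward to tf and then backward with a negative time step recovers the initial state well for small tf, and the difference grows as tf increases.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(state, dt, n_steps):
    """Classical RK4 integration; a negative dt integrates backward in time."""
    for _ in range(n_steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

x0 = np.array([1.0, 1.0, 1.0])
dt = 0.001
for tf in (1.0, 5.0, 20.0):
    n = int(tf / dt)
    forward = rk4(x0, dt, n)              # integrate forward to t = tf
    back = rk4(forward, -dt, n)           # integrate backward to t = 0
    print(tf, np.linalg.norm(back - x0))  # the difference grows with tf
```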

21 pages, 12241 KiB  
Article
A Social Assistance System for Augmented Reality Technology to Redound Face Blindness with 3D Face Recognition
by Wen-Hau Jain, Bing-Gang Jhong and Mei-Yung Chen
Electronics 2025, 14(7), 1244; https://doi.org/10.3390/electronics14071244 - 21 Mar 2025
Viewed by 227
Abstract
The objective of this study is to develop an Augmented Reality (AR) visual aid system to help patients with prosopagnosia recognize faces in social situations and everyday life. The primary contribution of this study is the use of 3D face models as the basis of data augmentation for facial recognition, which has practical applications in the various social situations that patients with prosopagnosia encounter. The study comprises the following components: First, the affordances of active stereoscopy and stereo cameras were combined. Second, deep learning was employed to reconstruct a detailed 3D face model in real time from the 3D point cloud and the 2D image; data were also captured from seven angles of the subject's face to improve recognition accuracy for profile views and a range of dynamic interactions. Third, the data derived from the previous steps were entered into a convolutional neural network (CNN), which generated a 128-dimensional feature vector. Next, the system used Structured Query Language (SQL) to compute and compare Euclidean distances, matching the smallest Euclidean distance to the corresponding name; the tagged face data were then projected onto the AR lenses. The findings of this study show that the AR system achieves a face recognition robustness of more than 99%. This method offers higher practical value than traditional 2D face recognition methods for large-pose 3D face recognition in day-to-day life.
(This article belongs to the Special Issue Real-Time Computer Vision)
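
A minimal sketch of the matching step described above: a 128-dimensional query embedding is compared against enrolled embeddings by Euclidean distance, and the nearest identity under a threshold is returned. A plain dictionary stands in for the SQL database, and the embeddings and threshold are illustrative.

```python
import numpy as np

def identify(query: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the enrolled name with the smallest Euclidean distance, or None if too far."""
    names = list(gallery)
    dists = np.linalg.norm(np.stack([gallery[n] for n in names]) - query, axis=1)
    best = int(dists.argmin())
    return names[best] if dists[best] < threshold else None

# Hypothetical 128-d embeddings of the kind produced by the CNN described above.
rng = np.random.default_rng(1)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.01 * rng.normal(size=128)
print(identify(probe, gallery, threshold=5.0))  # -> "alice"
```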

23 pages, 7045 KiB  
Article
Optimizing Text Recognition in Mechanical Drawings: A Comprehensive Approach
by Javier Villena Toro and Mehdi Tarkian
Machines 2025, 13(3), 254; https://doi.org/10.3390/machines13030254 - 20 Mar 2025
Viewed by 121
Abstract
The digitalization of engineering drawings is a pivotal step toward automating and improving the efficiency of product design and manufacturing systems (PDMSs). This study presents eDOCr2, a framework that combines traditional OCR and image processing to extract structured information from mechanical drawings. It segments drawings into key elements, such as information blocks, dimensions, and feature control frames, achieving a text recall of 93.75% and a character error rate (CER) below 1% in a benchmark with drawings from different sources. To improve semantic understanding and reasoning, eDOCr2 integrates vision language models (Qwen2-VL-7B and GPT-4o) after segmentation to verify, filter, or retrieve information. This integration enables PDMS applications such as automated design validation, quality control, and manufacturing assessment. The code is available on GitHub.
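
A minimal sketch of the character error rate (CER) metric quoted above, computed as Levenshtein edit distance divided by reference length; the sample dimension callout is a hypothetical drawing annotation, not taken from the benchmark.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits needed per reference character."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Hypothetical OCR output for a dimension callout on a drawing.
print(cer("Ø12.5 ±0.1", "Ø12.5 ±0.l"))  # one wrong character out of ten -> 0.1
```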

13 pages, 5778 KiB  
Article
Single-Shot Wavefront Sensing in Focal Plane Imaging Using Transformer Networks
by Hangning Kou, Jingliang Gu, Jiang You, Min Wan, Zixun Ye, Zhengjiao Xiang and Xian Yue
Optics 2025, 6(1), 11; https://doi.org/10.3390/opt6010011 - 20 Mar 2025
Viewed by 165
Abstract
Wavefront sensing is an essential technique in optical imaging, adaptive optics, and atmospheric turbulence correction. Traditional wavefront reconstruction methods, including the Gerchberg–Saxton (GS) algorithm and phase diversity (PD) techniques, are often limited by issues such as low inversion accuracy, slow convergence, and the presence of multiple possible solutions. Recent developments in deep learning have led to new methods, although conventional CNN-based models still face challenges in effectively capturing global context. To overcome these limitations, we propose a Transformer-based single-shot wavefront sensing method, which directly reconstructs wavefront aberrations from focal plane intensity images. Our model integrates a Normalization-based Attention Module (NAM) into the CoAtNet architecture, which strengthens feature extraction and leads to more accurate wavefront characterization. Experimental results in both simulated and real-world conditions indicate that our method achieves a 4.5% reduction in normalized wavefront error (NWE) compared to ResNet34, suggesting improved performance over conventional deep learning models. Additionally, by leveraging Walsh function modulation, our approach resolves the multiple-solution problem inherent in phase retrieval techniques. The proposed model achieves high accuracy, fast convergence, and simplicity in implementation, making it a promising solution for wavefront sensing applications.
(This article belongs to the Section Engineering Optics)
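
A minimal sketch of the focal-plane forward model that single-shot methods of this kind invert: a phase aberration over the pupil produces the focal-plane intensity image through a Fourier transform. The circular aperture and the defocus-plus-astigmatism aberration are illustrative, not the paper's configuration.

```python
import numpy as np

n = 256
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rr = np.hypot(xx, yy)
aperture = (rr <= 1.0).astype(float)              # circular pupil

# Illustrative low-order aberration: defocus plus astigmatism (radians of phase).
phase = 2.0 * (2 * rr**2 - 1) + 1.0 * (xx**2 - yy**2)

pupil = aperture * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2  # focal-plane intensity image
psf /= psf.sum()

# A network such as the one above maps `psf` back to the aberration it was formed from.
print(psf.shape, float(psf.max()))
```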

21 pages, 4753 KiB  
Article
Evaluation of Scale Effects on UAV-Based Hyperspectral Imaging for Remote Sensing of Vegetation
by Tie Wang, Tingyu Guan, Feng Qiu, Leizhen Liu, Xiaokang Zhang, Hongda Zeng and Qian Zhang
Remote Sens. 2025, 17(6), 1080; https://doi.org/10.3390/rs17061080 - 19 Mar 2025
Viewed by 210
Abstract
With the rapid advancement of unmanned aerial vehicles (UAVs) in recent years, UAV-based remote sensing has emerged as a highly efficient and practical tool for environmental monitoring. In vegetation remote sensing, UAVs equipped with hyperspectral sensors can capture detailed spectral information, enabling precise monitoring of plant health and the retrieval of physiological and biochemical parameters. A critical aspect of UAV-based vegetation remote sensing is the accurate acquisition of canopy reflectance. However, due to the mobility of UAVs and the variation in flight altitude, the data are susceptible to scale effects, where changes in spatial resolution can significantly impact the canopy reflectance. This study investigates the spatial scale issue of UAV hyperspectral imaging, focusing on how varying flight altitudes influence atmospheric correction, viewing geometry, and canopy heterogeneity. Using hyperspectral images captured at different flight altitudes over a Chinese fir forest stand, we propose two atmospheric correction methods: one based on a uniform grey reference panel at the same altitude and another based on altitude-specific grey reference panels. The reflectance spectra and vegetation indices, including NDVI, EVI, PRI, and CIRE, were computed and analyzed across altitudes. The results show significant variations in vegetation indices at lower altitudes, with NDVI and CIRE exhibiting the largest changes between 50 m and 100 m, due to the heterogeneous forest canopy structure and near-infrared scattering. For instance, NDVI increased by 18% from 50 m to 75 m and stabilized above 100 m, while the standard deviation decreased by 32% from 50 m to 250 m, indicating reduced heterogeneity effects. Similarly, PRI exhibited notable increases at lower altitudes, attributed to changes in viewing geometry, canopy shadowing, and soil background proportions, stabilizing above 100 m. Above 100 m, the impact of canopy heterogeneity diminished and variations in vegetation indices became minimal (<3%), although viewing geometry effects persisted. These findings emphasize that conducting UAV hyperspectral observations at altitudes of at least 100 m minimizes scale effects, ensuring more consistent and reliable data for vegetation monitoring. The study highlights the importance of standardized atmospheric correction protocols and optimal altitude selection to improve the accuracy and comparability of UAV-based hyperspectral data, contributing to advances in vegetation remote sensing and carbon estimation.
(This article belongs to the Section Forest Remote Sensing)
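
A minimal sketch of the vegetation indices analysed above, computed with their standard formulations from reflectance at generic band positions; in practice the bands would be resampled from the UAV hyperspectral cube, and the per-pixel reflectances below are synthetic.

```python
import numpy as np

def vegetation_indices(blue, red, rededge, nir, r531, r570):
    """NDVI, EVI, PRI, and CIRE from reflectance arrays (standard formulations)."""
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    pri = (r531 - r570) / (r531 + r570)
    cire = nir / rededge - 1.0               # red-edge chlorophyll index
    return ndvi, evi, pri, cire

# Hypothetical per-pixel reflectances for a small canopy patch.
rng = np.random.default_rng(2)
refl = {b: rng.uniform(lo, hi, size=(10, 10)) for b, (lo, hi) in {
    "blue": (0.02, 0.05), "red": (0.03, 0.06), "rededge": (0.15, 0.25),
    "nir": (0.35, 0.50), "r531": (0.04, 0.07), "r570": (0.05, 0.08)}.items()}
ndvi, evi, pri, cire = vegetation_indices(**refl)
print(float(ndvi.mean()), float(cire.mean()))
```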

23 pages, 10486 KiB  
Article
A Preliminary Assessment of the VIIRS Cloud Top and Base Height Environmental Data Record Reprocessing
by Qian Liu, Xianjun Hao, Cheng-Zhi Zou, Likun Wang, John J. Qu and Banghua Yan
Remote Sens. 2025, 17(6), 1036; https://doi.org/10.3390/rs17061036 - 15 Mar 2025
Viewed by 246
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite has continuously provided global environmental data records (EDRs) for more than a decade since its launch in 2011. Recently, the VIIRS cloud EDRs have been reprocessed with a unified and consistent algorithm for selected periods, to minimize or remove the inconsistencies caused by the different versions of retrieval algorithms and input VIIRS sensor data records (SDRs) used in different periods of the operational EDRs. This study conducts the first simultaneous quality and accuracy assessment of the reprocessed Cloud Top Height (CTH) and Cloud Base Height (CBH) products against both the operational VIIRS EDRs and the corresponding cloud height measurements from the active sensors of NASA's CloudSat-CALIPSO system. In general, the reprocessed CTH and CBH EDRs show strong similarities and correlations with CloudSat-CALIPSO, with coefficients of determination (R²) reaching 0.82 and 0.77, respectively. Additionally, the reprocessed VIIRS cloud height products demonstrate significant improvements in retrieving high-altitude clouds and in sensitivity to cloud height dynamics. They outperform the operational products in capturing very high CTHs exceeding 15 km and exhibit CBH probability patterns more closely aligned with the CloudSat-CALIPSO measurements. This preliminary assessment enhances the applicability of these remote sensing products for atmospheric and climate research, allowing for more accurate cloud measurements and advancing environmental monitoring efforts.
(This article belongs to the Special Issue Satellite-Based Climate Change and Sustainability Studies)
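
A minimal sketch of the coefficient-of-determination comparison reported above, treating CloudSat-CALIPSO heights as the reference; the matched height arrays are synthetic placeholders.

```python
import numpy as np

def r_squared(reference: np.ndarray, retrieved: np.ndarray) -> float:
    """Coefficient of determination of retrieved heights against reference heights."""
    ss_res = np.sum((reference - retrieved) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical matched cloud-top heights (km) from the two sources.
rng = np.random.default_rng(3)
calipso_cth = rng.uniform(1.0, 16.0, size=1000)
viirs_cth = calipso_cth + rng.normal(0.0, 1.5, size=1000)
print(round(r_squared(calipso_cth, viirs_cth), 2))
```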

31 pages, 15757 KiB  
Article
Single Fringe Phase Retrieval for Translucent Object Measurements Using a Deep Convolutional Generative Adversarial Network
by Jiayan He, Yuanchang Huang, Juhao Wu, Yadong Tang and Wenlong Wang
Sensors 2025, 25(6), 1823; https://doi.org/10.3390/s25061823 - 14 Mar 2025
Viewed by 163
Abstract
Fringe projection profilometry (FPP) is a measurement technique widely used in the field of 3D reconstruction. However, it suffers from phase shift and reduced fringe modulation depth when measuring translucent objects, leading to decreased measurement accuracy. To reduce the impact of surface scattering effects on the wrapped phase during actual measurement, we propose a single-frame phase retrieval method named GAN-PhaseNet to improve the subsequent measurement accuracy for translucent objects. The network primarily relies on a generative adversarial network framework, with significant enhancements in the generator network, including the U-net++ architecture, ResNet101 as the backbone for feature extraction, and a multilevel attention module that fully exploits the high-level features of the source image. The ablation and comparison experiments show that the proposed method delivers superior phase retrieval, not only matching the accuracy of the conventional method on objects with no or slight scattering but also achieving the lowest errors on objects with severe scattering when compared with other phase retrieval convolutional neural networks (CDLP, Unet-Phase, and DCFPP). Under varying noise levels and fringe frequencies, the proposed method demonstrates excellent robustness and generalization capabilities. It can be applied to computational imaging techniques in the fringe projection field, introducing new ideas for the measurement of translucent objects.
(This article belongs to the Special Issue Convolutional Neural Network Technology for 3D Imaging and Sensing)
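
For context, a minimal sketch of the conventional N-step phase-shifting computation of the wrapped phase, the quantity that single-frame networks such as GAN-PhaseNet estimate from one fringe image; the four-step fringe stack below is synthetic.

```python
import numpy as np

def wrapped_phase(frames: np.ndarray) -> np.ndarray:
    """Standard N-step phase shifting: frames has shape (N, H, W) with shifts 2*pi*n/N."""
    n = frames.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), frames, axes=1)
    den = np.tensordot(np.cos(shifts), frames, axes=1)
    return np.arctan2(-num, den)  # wrapped phase in (-pi, pi]

# Synthetic 4-step fringe images of a smooth test phase.
h, w = 64, 64
_, xx = np.mgrid[0:h, 0:w]
true_phase = 0.002 * (xx - w / 2) ** 2
frames = np.stack([0.5 + 0.5 * np.cos(2 * np.pi * xx / 8 + true_phase + 2 * np.pi * k / 4)
                   for k in range(4)])
phi = wrapped_phase(frames)
```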

12 pages, 612 KiB  
Review
Recent Advances in the Diagnosis and Management of Retrograde Ejaculation: A Narrative Review
by Charalampos Konstantinidis, Athanasios Zachariou, Evangelini Evgeni, Selahittin Çayan, Luca Boeri and Ashok Agarwal
Diagnostics 2025, 15(6), 726; https://doi.org/10.3390/diagnostics15060726 - 14 Mar 2025
Viewed by 344
Abstract
Retrograde ejaculation (RE) is a condition where the forward expulsion of seminal fluid is impaired, leading to infertility and psychological distress in affected individuals. This narrative review examines the etiology, pathophysiology, diagnosis, and management of RE, emphasizing its impact on male fertility. RE may result in the partial or complete absence of the ejaculate. Causes of RE include anatomical, neurological, pharmacological, and endocrine factors, with common triggers such as diabetes, spinal cord injury, and prostate surgery. Diagnosis primarily involves the patient history, a laboratory analysis of post-ejaculatory urine samples, and advanced imaging techniques. Management strategies for RE include pharmacological interventions, surgical approaches, and assisted reproductive technologies (ARTs). Sympathomimetic and parasympatholytic agents have demonstrated some success but are limited by side effects and variability in outcomes. ARTs, particularly with sperm retrieved from post-ejaculatory urine, offer a viable alternative for conception, with techniques such as urine alkalization and advanced sperm processing showing promising results. Despite these advancements, treatment efficacy remains inconsistent, with many studies relying on small sample sizes and lacking robust clinical trials. Future research should focus on refining diagnostic tools, optimizing ART protocols, and developing minimally invasive treatments. By addressing these gaps, healthcare providers can improve fertility outcomes and the quality of life for patients with RE.
(This article belongs to the Section Clinical Diagnosis and Prognosis)

9 pages, 2599 KiB  
Case Report
Spontaneous Intracranial Hypotension in a Patient with Systemic Lupus Erythematosus and End-Stage Renal Failure: A Case Report and a Literature Review
by Konstantinos Paterakis, Alexandros Brotis, Adamantios Kalogeras, Maria Karagianni, Theodosios Spiliotopoulos, Christina Arvaniti, Argiro Petsiti, Marianna Vlychou, Efthimios Dardiotis, Eleni Arnaoutoglou and Kostas N. Fountas
Brain Sci. 2025, 15(3), 296; https://doi.org/10.3390/brainsci15030296 - 12 Mar 2025
Viewed by 135
Abstract
Background and Objectives: End-stage renal failure (ESRF) patients are at an increased risk of various neurological complications, particularly after hemodialysis. The current case report describes a rare presentation of spontaneous intracranial hypotension (SIH) in a patient with ESRF caused by systemic lupus erythematosus (SLE). Methods: We present our case report and performed a systematic literature search in PubMed, Scopus, and Dimensions for the accompanying literature review. Results: A total of 296 unique articles were identified, and their full texts were retrieved; however, only one case report was relevant to our study and is summarized below. Conclusions: This case report describes a rare presentation of SIH in a young patient with ESRF due to SLE. Diagnostic imaging revealed extensive subdural and epidural fluid collections in the brain and spinal cord, respectively, along with a few T2 FLAIR hyperintensities in the right thalamus, left cerebellar hemisphere, and right occipital gyrus that subsequently resolved. The treatment approach involved high-dose intravenous steroids, surgical evacuation of the cranial subdural collections, and epidural blood patches to seal the presumed dural defect.
(This article belongs to the Section Neurosurgery and Neuroanatomy)

30 pages, 34873 KiB  
Article
Text-Guided Synthesis in Medical Multimedia Retrieval: A Framework for Enhanced Colonoscopy Image Classification and Segmentation
by Ojonugwa Oluwafemi Ejiga Peter, Opeyemi Taiwo Adeniran, Adetokunbo MacGregor John-Otumu, Fahmi Khalifa and Md Mahmudur Rahman
Algorithms 2025, 18(3), 155; https://doi.org/10.3390/a18030155 - 9 Mar 2025
Viewed by 384
Abstract
The lack of extensive, varied, and thoroughly annotated datasets impedes the advancement of artificial intelligence (AI) for medical applications, especially colorectal cancer detection. Models trained with limited diversity often display biases, especially when applied to disadvantaged groups. Generative models (e.g., DALL-E 2, Vector-Quantized Generative Adversarial Network (VQ-GAN)) have been used to generate images, but not colonoscopy data, for intelligent data augmentation. This study developed an effective method for producing synthetic colonoscopy image data, which can be used to train advanced medical diagnostic models for robust colorectal cancer detection and treatment. Text-to-image synthesis was performed using fine-tuned visual large language models (LLMs). Stable Diffusion and DreamBooth with Low-Rank Adaptation produce images that look authentic, with an average Inception score of 2.36 across three datasets. The validation accuracies of the classification models Big Transfer (BiT), Fixed Resolution Residual Next Generation Network (FixResNeXt), and Efficient Neural Network (EfficientNet) were 92%, 91%, and 86%, respectively; Vision Transformer (ViT) and Data-Efficient Image Transformers (DeiT) reached 93%. Secondly, for the segmentation of polyps, ground truth masks were generated using the Segment Anything Model (SAM), and five segmentation models (U-Net, Pyramid Scene Parsing Network (PSNet), Feature Pyramid Network (FPN), Link Network (LinkNet), and Multi-scale Attention Network (MANet)) were adopted. FPN produced excellent results, with an Intersection over Union (IoU) of 0.64, an F1 score of 0.78, a recall of 0.75, and a Dice coefficient of 0.77, demonstrating strong performance in both segmentation accuracy and overlap metrics, with particularly robust balanced detection capability as shown by the high F1 score and Dice coefficient. This highlights how AI-generated medical images can improve colonoscopy analysis, which is critical for early colorectal cancer detection.
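
A minimal sketch of the overlap metrics reported above, Intersection over Union (IoU) and the Dice coefficient, for binary polyp masks; the masks are toy arrays rather than model outputs.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Intersection-over-Union and Dice coefficient for boolean segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Toy masks: the predicted polyp region is shifted slightly against the ground truth.
gt = np.zeros((64, 64), dtype=bool); gt[20:40, 20:40] = True
pr = np.zeros((64, 64), dtype=bool); pr[24:44, 22:42] = True
print(iou_and_dice(pr, gt))
```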
