J. Imaging, Volume 8, Issue 6 (June 2022) – 25 articles

Cover Story: Digital holography is well suited to measuring modifications of an object. The method refers to digital holographic interferometry, where the phase change between two states of the object is of interest. However, the phase images are corrupted by speckle decorrelation noise. In this paper, we address the question of de-noising in holographic interferometry when phase data are polluted with speckle noise. We present a new database of phase fringe images for the evaluation of de-noising algorithms in digital holography. In this database, the simulated phase maps present characteristics, such as the size of the speckle grains and the noise level of the fringes, which can be controlled by the generation process.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 13983 KiB  
Review
Nonlinear Reconstruction of Images from Patterns Generated by Deterministic or Random Optical Masks—Concepts and Review of Research
by Daniel Smith, Shivasubramanian Gopinath, Francis Gracy Arockiaraj, Andra Naresh Kumar Reddy, Vinoth Balasubramani, Ravi Kumar, Nitin Dubey, Soon Hock Ng, Tomas Katkus, Shakina Jothi Selva, Dhanalakshmi Renganathan, Manueldoss Beaula Ruby Kamalam, Aravind Simon John Francis Rajeswary, Srinivasan Navaneethakrishnan, Stephen Rajkumar Inbanathan, Sandhra-Mirella Valdma, Periyasamy Angamuthu Praveen, Jayavel Amudhavel, Manoj Kumar, Rashid A. Ganeev, Pierre J. Magistretti, Christian Depeursinge, Saulius Juodkazis, Joseph Rosen and Vijayakumar Anand
J. Imaging 2022, 8(6), 174; https://doi.org/10.3390/jimaging8060174 - 20 Jun 2022
Cited by 16 | Viewed by 3309
Abstract
Indirect-imaging methods involve at least two steps, namely optical recording and computational reconstruction. The optical-recording process uses an optical modulator that transforms the light from the object into a typical intensity distribution. This distribution is numerically processed to reconstruct the object’s image corresponding to different spatial and spectral dimensions. There have been numerous optical-modulation functions and reconstruction methods developed in the past few years for different applications. In most cases, a compatible pair of the optical-modulation function and reconstruction method gives optimal performance. A new reconstruction method, termed nonlinear reconstruction (NLR), was developed in 2017 to reconstruct the object image in the case of optical-scattering modulators. Over the years, it has been revealed that the NLR can reconstruct an object’s image modulated by axicons, bifocal lenses and even exotic spiral diffractive elements, which generate deterministic optical fields. NLR thus appears to be a universal reconstruction method for indirect imaging. In this review, the performance of NLR is investigated for many deterministic and stochastic optical fields. Simulation and experimental results for different cases are presented and discussed. Full article
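The correlation step behind NLR can be illustrated with a short NumPy sketch: the Fourier magnitudes of the point spread function (PSF) and object recordings are raised to tunable powers o and r while the phases are kept. This is an illustrative sketch of the general idea, not the authors' implementation; with o = r = 1 it reduces to an ordinary matched-filter cross-correlation.

```python
import numpy as np

def nlr_reconstruct(i_psf, i_obj, o=0.0, r=1.0):
    """NLR-style reconstruction sketch: correlate the object intensity
    recording with the PSF recording, raising the Fourier magnitudes to
    tunable powers o and r while keeping the Fourier phases."""
    f_psf = np.fft.fft2(i_psf)
    f_obj = np.fft.fft2(i_obj)
    spectrum = (np.abs(f_psf) ** o * np.exp(-1j * np.angle(f_psf))
                * np.abs(f_obj) ** r * np.exp(1j * np.angle(f_obj)))
    return np.abs(np.fft.ifft2(spectrum))

# A shifted copy of the PSF plays the role of a point object:
# the reconstruction should peak at the shift.
rng = np.random.default_rng(0)
psf = rng.random((64, 64))
obj = np.roll(psf, (5, 7), axis=(0, 1))
rec = nlr_reconstruct(psf, obj, o=1.0, r=1.0)
peak = np.unravel_index(np.argmax(rec), rec.shape)
```

Tuning (o, r) away from (1, 1) is what makes the reconstruction nonlinear; the best pair depends on the modulator, as the review discusses.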

24 pages, 747 KiB  
Article
No-Reference Quality Assessment of Authentically Distorted Images Based on Local and Global Features
by Domonkos Varga
J. Imaging 2022, 8(6), 173; https://doi.org/10.3390/jimaging8060173 - 19 Jun 2022
Cited by 5 | Viewed by 2792
Abstract
With the development of digital imaging techniques, image quality assessment methods are receiving more attention in the literature. Since distortion-free versions of camera images in many practical, everyday applications are not available, the need for effective no-reference image quality assessment algorithms is growing. Therefore, this paper introduces a novel no-reference image quality assessment algorithm for the objective evaluation of authentically distorted images. Specifically, we apply a broad spectrum of local and global feature vectors to characterize the variety of authentic distortions. Among the employed local features, the statistics of popular local feature descriptors, such as SURF, FAST, BRISK, or KAZE, are proposed for NR-IQA; other features are also introduced to boost the performance of the local features. The proposed method was compared to 12 other state-of-the-art algorithms on popular and accepted benchmark datasets containing RGB images with authentic distortions (CLIVE, KonIQ-10k, and SPAQ). The introduced algorithm significantly outperforms the state-of-the-art in terms of correlation with human perceptual quality ratings. Full article
(This article belongs to the Section Image and Video Processing)

13 pages, 2081 KiB  
Article
Evaluation of STIR Library Adapted for PET Scanners with Non-Cylindrical Geometry
by Viet Dao, Ekaterina Mikhaylova, Max L. Ahnen, Jannis Fischer, Kris Thielemans and Charalampos Tsoumpas
J. Imaging 2022, 8(6), 172; https://doi.org/10.3390/jimaging8060172 - 18 Jun 2022
Cited by 2 | Viewed by 2116
Abstract
Software for Tomographic Image Reconstruction (STIR) is an open source C++ library used to reconstruct single photon emission tomography and positron emission tomography (PET) data. STIR has an experimental scanner geometry modelling feature to accurately model detector placement. In this study, we test and improve this new feature using several types of data: Monte Carlo simulations and measured phantom data acquired from a dedicated brain PET prototype scanner. The results show that the new geometry class applied to non-cylindrical PET scanners improved spatial resolution, uniformity, and image contrast. These are directly observed in the reconstructions of small features in the test quality phantom. Overall, we conclude that the revised “BlocksOnCylindrical” class will be a valuable addition to the next STIR software release with adjustments of existing features (Single Scatter Simulation, forward projection, attenuation corrections) to “BlocksOnCylindrical”. Full article

12 pages, 944 KiB  
Article
HFM: A Hybrid Feature Model Based on Conditional Auto Encoders for Zero-Shot Learning
by Fadi Al Machot, Mohib Ullah and Habib Ullah
J. Imaging 2022, 8(6), 171; https://doi.org/10.3390/jimaging8060171 - 16 Jun 2022
Cited by 6 | Viewed by 1820
Abstract
Zero-Shot Learning (ZSL) is related to training machine learning models capable of classifying or predicting classes (labels) that are not involved in the training set (unseen classes). A well-known problem in Deep Learning (DL) is the requirement for large amounts of training data. Zero-Shot Learning is a straightforward approach that can be applied to overcome this problem. We propose a Hybrid Feature Model (HFM) based on conditional autoencoders for training a classical machine learning model on pseudo training data generated by two conditional autoencoders (given the semantic space as a condition): (a) the first autoencoder is trained with the visual space concatenated with the semantic space and (b) the second autoencoder is trained with the visual space as an input. Then, the decoders of both autoencoders are fed by the test data of the unseen classes to generate pseudo training data. To classify the unseen classes, the pseudo training data are combined to train a support vector machine. Tests on four different benchmark datasets indicate that the proposed method achieves promising results compared to the current state-of-the-art in both standard Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) settings. Full article
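The generate-then-classify pipeline can be sketched schematically as follows. This is a toy illustration under strong assumptions: the two trained conditional decoders are stood in by known linear attribute-to-feature maps, and a nearest-centroid rule replaces the SVM.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, d_sem, d_vis = 4, 5, 16
attrs = rng.normal(size=(n_classes, d_sem))   # semantic vector per unseen class
w_true = rng.normal(size=(d_vis, d_sem))      # ground-truth attribute-to-visual map

# Stand-ins for the two trained conditional decoders (assumed already learned).
decoder_a = lambda s: w_true @ s
decoder_b = lambda s: w_true @ s + 0.05 * rng.normal(size=d_vis)

# Generate pseudo training data for each unseen class from its attributes,
# combining the outputs of both decoders.
pseudo_x, pseudo_y = [], []
for c in range(n_classes):
    for dec in (decoder_a, decoder_b):
        for _ in range(20):
            pseudo_x.append(dec(attrs[c]) + 0.1 * rng.normal(size=d_vis))
            pseudo_y.append(c)
pseudo_x, pseudo_y = np.array(pseudo_x), np.array(pseudo_y)

# Train a simple classifier on the pseudo data (nearest centroid as an
# SVM stand-in) and classify a test sample from unseen class 2.
centroids = np.array([pseudo_x[pseudo_y == c].mean(axis=0)
                      for c in range(n_classes)])
test_x = w_true @ attrs[2] + 0.1 * rng.normal(size=d_vis)
pred = np.argmin(((centroids - test_x) ** 2).sum(axis=1))
```

The point of the sketch is only the data flow: unseen-class attributes go through the decoders to make pseudo samples, and a classifier trained on those samples labels real unseen-class features.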
(This article belongs to the Special Issue Unsupervised Deep Learning and Its Applications in Imaging Processing)

11 pages, 2393 KiB  
Article
A Deep-Learning Model for Real-Time Red Palm Weevil Detection and Localization
by Majed Alsanea, Shabana Habib, Noreen Fayyaz Khan, Mohammed F. Alsharekh, Muhammad Islam and Sheroz Khan
J. Imaging 2022, 8(6), 170; https://doi.org/10.3390/jimaging8060170 - 15 Jun 2022
Cited by 11 | Viewed by 2919
Abstract
Background and motivation: Over the last two decades, particularly in the Middle East, the Red Palm Weevil (RPW, Rhynchophorus ferrugineus) has proved to be the most destructive pest of palm trees across the globe. Problem: The RPW has caused considerable damage to various palm species. The early identification of the RPW is a challenging task for good date production, since early identification can prevent palm trees from being affected by the RPW. This is one of the reasons why the use of advanced technology will help in the prevention of the spread of the RPW on palm trees. Many researchers have worked on finding an accurate technique for the identification, localization and classification of the RPW pest. This study aimed to develop a model that uses a deep-learning approach to identify and discriminate between the RPW and other insects living in palm tree habitats. Researchers had not previously applied deep learning to the classification of red palm weevils. Methods: In this study, a region-based convolutional neural network (R-CNN) algorithm was used to detect the location of the RPW in an image by building bounding boxes around it. A CNN algorithm was applied in order to extract the features to enclose with the bounding boxes—the selection target. In addition, these features were passed through the classification and regression layers to determine the presence of the RPW with a high degree of accuracy and to locate its coordinates. Results: As a result of the developed model, the RPW can be quickly detected with a high accuracy of 100% in infested palm trees at an early stage. In the Al-Qassim region, which has thousands of farms, the model sets the path for deploying an efficient, low-cost RPW detection and classification technology for palm trees. Full article
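Localization quality in detection pipelines like the one above is conventionally scored with the intersection-over-union (IoU) between predicted and ground-truth bounding boxes; a minimal, library-free sketch (not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.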
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)

9 pages, 872 KiB  
Article
Mobile-PolypNet: Lightweight Colon Polyp Segmentation Network for Low-Resource Settings
by Ranit Karmakar and Saeid Nooshabadi
J. Imaging 2022, 8(6), 169; https://doi.org/10.3390/jimaging8060169 - 14 Jun 2022
Cited by 4 | Viewed by 2228
Abstract
Colon polyps, small clumps of cells on the lining of the colon, can lead to colorectal cancer (CRC), one of the leading types of cancer globally. Hence, early automatic detection of these polyps is crucial in the prevention of CRC. The deep learning models proposed for the detection and segmentation of colorectal polyps are resource-consuming. This paper proposes a lightweight deep learning model for colorectal polyp segmentation that achieves state-of-the-art accuracy while significantly reducing the model size and complexity. The proposed deep learning autoencoder model employs a set of state-of-the-art architectural blocks and optimization objective functions to achieve the desired efficiency. The model is trained and tested on five publicly available colorectal polyp segmentation datasets (CVC-ClinicDB, CVC-ColonDB, EndoScene, Kvasir, and ETIS). We also performed ablation testing on the model to test various aspects of the autoencoder architecture. We performed the model evaluation by using most of the common image-segmentation metrics. The backbone model achieved a DICE score of 0.935 on the Kvasir dataset and 0.945 on the CVC-ClinicDB dataset, improving the accuracy by 4.12% and 5.12%, respectively, over the current state-of-the-art network, while using 88 times fewer parameters, 40 times less storage space, and being computationally 17 times more efficient. Our ablation study showed that the addition of ConvSkip in the autoencoder slightly improves the model’s performance, but the improvement was not statistically significant (p-value = 0.815). Full article
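The DICE score reported above is the standard overlap metric for segmentation masks; a minimal NumPy version for binary masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); eps avoids division by zero."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two half-plane masks overlapping in one quadrant: Dice = 0.5.
a = np.zeros((10, 10), bool); a[:5, :] = True
b = np.zeros((10, 10), bool); b[:, :5] = True
score = dice(a, b)
```

A Dice of 1.0 means perfect overlap with the ground-truth mask; 0.0 means no overlap.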
(This article belongs to the Topic Medical Image Analysis)

19 pages, 3494 KiB  
Article
Estimating Muscle Activity from the Deformation of a Sequential 3D Point Cloud
by Hui Niu, Takahiro Ito, Damien Desclaux, Ko Ayusawa, Yusuke Yoshiyasu, Ryusuke Sagawa and Eiichi Yoshida
J. Imaging 2022, 8(6), 168; https://doi.org/10.3390/jimaging8060168 - 13 Jun 2022
Cited by 3 | Viewed by 2433
Abstract
Estimation of muscle activity is very important as it can be a cue to assess a person’s movements and intentions. If muscle activity states can be obtained through non-contact measurement, for example through visual measurement systems, muscle activity data can support various fields of study. In the present paper, we propose a method to predict human muscle activity from skin surface strain. This requires us to obtain a 3D reconstruction model with a high relative accuracy. The problem is that reconstruction errors due to noise on raw data generated in a visual measurement system are inevitable. In particular, the independent noise between frames in the time series makes it difficult to accurately track the motion. In order to obtain more precise information about the human skin surface, we propose a method that introduces a temporal constraint into the non-rigid registration process. We can achieve more accurate tracking of shape and motion by constraining the point cloud motion over the time series. Using surface strain as input, we build a multilayer perceptron artificial neural network for inferring muscle activity. In the present paper, we investigate simple lower limb movements to train the network. As a result, we successfully achieve the estimation of muscle activity via surface strain. Full article
(This article belongs to the Special Issue Advances in Body Scanning)

13 pages, 745 KiB  
Article
Secure Image Encryption Using Chaotic, Hybrid Chaotic and Block Cipher Approach
by Nirmal Chaudhary, Tej Bahadur Shahi and Arjun Neupane
J. Imaging 2022, 8(6), 167; https://doi.org/10.3390/jimaging8060167 - 10 Jun 2022
Cited by 17 | Viewed by 3766
Abstract
Secure image transmission is one of the most challenging problems in the age of communication technology. Millions of people use and transfer images for either personal or commercial purposes over the internet. One way of achieving secure image transmission over the network is through encryption techniques that convert the original image into a non-understandable or scrambled form, called a cipher image, so that even if an attacker gains access to the cipher image, they are not able to retrieve the original image. In this study, chaos-based image encryption and block cipher techniques are implemented and analyzed for image encryption. The Arnold cat map alone and in combination with a logistic map are used as the native chaotic and hybrid chaotic approaches, respectively, whereas the advanced encryption standard (AES) is used as the block cipher approach. The chaotic and AES methods are applied to encrypt images and are subjected to measures of different performance parameters, such as peak signal to noise ratio (PSNR), number of pixels change rate (NPCR), unified average changing intensity (UACI), and histogram and computation time analysis, to measure the strength of each algorithm. The results show that the hybrid chaotic map has better NPCR and UACI values, which makes it more robust to differential attacks or chosen plain text attacks. The Arnold cat map is computationally efficient in comparison to the other two approaches. However, AES has a lower PSNR value (7.53 to 11.93) and more variation between the histograms of original and cipher images, thereby indicating that it is more resistant to statistical attacks than the other two approaches. Full article
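A minimal sketch of two of the pieces discussed above: the Arnold cat map as a pixel-scrambling permutation, and the NPCR/UACI metrics used to compare two cipher images. Illustrative only; the iteration counts, key handling, and diffusion stages of the actual schemes are omitted.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image:
    pixel (x, y) moves according to (x + y, x + 2y) mod N.
    The map is a bijection, so it permutes pixels without
    changing the histogram."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def npcr(c1, c2):
    """Number of pixels change rate between two images, in percent."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified average changing intensity for 8-bit images, in percent."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
scrambled = arnold_cat(img, iterations=3)
```

Because the cat map only permutes pixels, it scrambles spatial structure but leaves the histogram untouched, which is why it is usually paired with a diffusion step (here, the logistic map) in the hybrid scheme.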
(This article belongs to the Special Issue Visualisation and Cybersecurity)

21 pages, 7407 KiB  
Article
Computational Analysis of Correlations between Image Aesthetic and Image Naturalness in the Relation with Image Quality
by Quyet-Tien Le, Patricia Ladret, Huu-Tuan Nguyen and Alice Caplier
J. Imaging 2022, 8(6), 166; https://doi.org/10.3390/jimaging8060166 - 9 Jun 2022
Cited by 2 | Viewed by 2040
Abstract
The main purpose of this paper is the study of the correlations between Image Aesthetic (IA) and Image Naturalness (IN) and the analysis of the influence of IA and IN on Image Quality (IQ) in different contexts. The first contribution is a study about the potential relationships between IA and IN. For that study, two sub-questions are considered. The first one is to validate the idea that IA and IN are not correlated to each other. The second one is about the influence of IA and IN features on Image Naturalness Assessment (INA) and Image Aesthetic Assessment (IAA), respectively. Secondly, it is obvious that IQ is related to IA and IN, but the exact influence of IA and IN on IQ has not been evaluated. Besides that, the context impact on those influences has not been clarified, so the second contribution is to investigate the influence of IA and IN on IQ in different contexts. The results obtained from rigorous experiments prove that although there are moderate and weak correlations between IA and IN, they are still two different components of IQ. It also appears that viewers’ IQ perception is affected by some contextual factors, and the influence of IA and IN on IQ depends on the considered context. Full article
(This article belongs to the Section AI in Imaging)

11 pages, 24296 KiB  
Article
Deep Learning Network for Speckle De-Noising in Severe Conditions
by Marie Tahon, Silvio Montrésor and Pascal Picart
J. Imaging 2022, 8(6), 165; https://doi.org/10.3390/jimaging8060165 - 9 Jun 2022
Cited by 6 | Viewed by 1886
Abstract
Digital holography is well suited to measuring modifications of an object. The method refers to digital holographic interferometry, where the phase change between two states of the object is of interest. However, the phase images are corrupted by speckle decorrelation noise. In this paper, we address the question of de-noising in holographic interferometry when phase data are polluted with speckle noise. We present a new database of phase fringe images for the evaluation of de-noising algorithms in digital holography. In this database, the simulated phase maps present characteristics, such as the size of the speckle grains and the noise level of the fringes, which can be controlled by the generation process. Deep neural network architectures are trained with sets of phase maps having differentiated parameters according to these features. The performances of the new models are evaluated with a set of test fringe patterns whose characteristics are representative of severe conditions in terms of input SNR and speckle grain size. For this, four metrics are considered: the PSNR, the phase error, the perceived quality index and the peak-to-valley ratio. Results demonstrate that training the models with phase maps having a diversity of noise characteristics improves their efficiency, robustness and generality on phase maps with severe noise. Full article
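Of the four metrics, the PSNR is the simplest to reproduce; a minimal NumPy version is given below, where taking the peak value as the 2π phase range is an assumption of this sketch, not necessarily the paper's convention:

```python
import numpy as np

def psnr(ref, test, data_range=2 * np.pi):
    """Peak signal-to-noise ratio in dB between a reference and a
    de-noised estimate; data_range is the peak signal value (here
    defaulting to the 2*pi span of a wrapped phase map)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range signal gives an MSE of 0.01 and hence a PSNR of 20 dB.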
(This article belongs to the Special Issue Digital Holography: Development and Application)

27 pages, 12444 KiB  
Article
Fabrication of Black Body Grids by Thick Film Printing for Quantitative Neutron Imaging
by Martin Wissink, Kirk Goldenberger, Luke Ferguson, Yuxuan Zhang, Hassina Bilheux, Jacob LaManna, David Jacobson, Michael Kass, Charles Finney and Jonathan Willocks
J. Imaging 2022, 8(6), 164; https://doi.org/10.3390/jimaging8060164 - 8 Jun 2022
Cited by 1 | Viewed by 2187
Abstract
Neutron imaging offers deep penetration through many high-Z materials while also having high sensitivity to certain low-Z isotopes such as ¹H, ⁶Li, and ¹⁰B. This unique combination of properties has made neutron imaging an attractive tool for a wide range of material science and engineering applications. However, measurements made by neutron imaging or tomography are generally qualitative in nature due to the inability of detectors to discriminate between neutrons which have been transmitted through the sample and neutrons which are scattered by the sample or within the detector. Recent works have demonstrated that deploying a grid of small black bodies (BBs) in front of the sample can allow for the scattered neutrons to be measured at the BB locations and subsequently subtracted from the total measured intensity to yield a quantitative transmission measurement. While this method can be very effective, factors such as the scale and composition of the sample, the beam divergence, and the resolution and construction of the detector may require optimization of the grid design to remove all measurement biases within a given experimental setup. Therefore, it is desirable to have a method by which BB grids may be rapidly and inexpensively produced such that they can easily be tailored to specific applications. In this work, we present a method for fabricating BB patterns by thick film printing of Gd₂O₃ and evaluate the performance with variation in feature size and number of print layers with cold and thermal neutrons. Full article
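The scatter-subtraction idea behind the BB grid can be sketched as follows: pixels behind the black bodies see only scattered neutrons, so a smooth background fitted to those pixels can be subtracted from the whole image. In this sketch a least-squares plane stands in for the smooth interpolation used in practice, and the synthetic numbers are purely illustrative.

```python
import numpy as np

def scatter_corrected(measured, bb_mask):
    """Fit a plane to the intensities behind the black bodies
    (bb_mask True), where only scattered neutrons are detected,
    and subtract the fitted background from the whole image."""
    ys, xs = np.nonzero(bb_mask)
    a = np.column_stack([xs, ys, np.ones_like(xs)]).astype(float)
    coef, *_ = np.linalg.lstsq(a, measured[ys, xs], rcond=None)
    yy, xx = np.mgrid[0:measured.shape[0], 0:measured.shape[1]]
    return measured - (coef[0] * xx + coef[1] * yy + coef[2])

# Synthetic example: uniform transmission of 0.7 plus a planar scatter
# field; behind the BBs only the scatter contribution is recorded.
yy, xx = np.mgrid[0:32, 0:32]
scatter = 0.1 + 0.002 * xx + 0.001 * yy
bb_mask = np.zeros((32, 32), bool)
bb_mask[4:6, 4:6] = True
bb_mask[20:22, 8:10] = True
bb_mask[10:12, 25:27] = True
measured = np.where(bb_mask, scatter, 0.7 + scatter)
corrected = scatter_corrected(measured, bb_mask)
```

With the planar background removed, the remaining signal outside the BBs is the transmitted component alone, which is what makes the measurement quantitative.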
(This article belongs to the Special Issue Computational Methods for Neutron Imaging)

27 pages, 2559 KiB  
Article
Graphical Image Region Extraction with K-Means Clustering and Watershed
by Sandra Jardim, João António and Carlos Mora
J. Imaging 2022, 8(6), 163; https://doi.org/10.3390/jimaging8060163 - 8 Jun 2022
Cited by 15 | Viewed by 3769
Abstract
With a wide range of applications, image segmentation is a complex and difficult preprocessing step that plays an important role in automatic visual systems; its accuracy impacts not only the segmentation results but also, directly, the effectiveness of the follow-up tasks. Despite the many advances achieved in the last decades, image segmentation remains a challenging problem, particularly the segmentation of color images, due to the diverse inhomogeneities of color, textures and shapes present in the descriptive features of the images. In trademark graphic image segmentation, beyond these difficulties, we must also take into account the high noise and low resolution which are often present. Trademark graphic images can also be very heterogeneous with regard to the elements that make them up, which can be overlapping and under varying lighting conditions. Due to the immense variation encountered in corporate logos and trademark graphic images, it is often difficult to select a single method for extracting relevant image regions in a way that produces satisfactory results. Many of the hybrid approaches that integrate the Watershed and K-Means algorithms involve processing very high quality and visually similar images, such as medical images, meaning that either approach can be tweaked to work on images that follow a certain pattern. Trademark images are totally different from each other and are usually fully colored. Our system overcomes this difficulty because it is a generalized implementation designed to work in most scenarios, through the use of customizable parameters, and is completely unbiased toward any image type. In this paper, we propose a hybrid approach to Image Region Extraction that focuses on automated region proposal and segmentation techniques. In particular, we analyze popular techniques such as K-Means Clustering and Watershedding and their effectiveness when deployed in a hybrid environment to be applied to a highly variable dataset.
The proposed system consists of a multi-stage algorithm that takes as input an RGB image and produces multiple outputs, corresponding to the extracted regions. After preprocessing steps, a K-Means function with random initial centroids and a user-defined value for k is executed over the RGB image, generating a gray-scale segmented image, to which a threshold method is applied to generate a binary mask, containing the necessary information to generate a distance map. Then, the Watershed function is performed over the distance map, using the markers defined by the Connected Component Analysis function that labels regions on 8-way pixel connectivity, ensuring that all regions are correctly found. Finally, individual objects are labelled for extraction through a contour method, based on border following. The achieved results show adequate region extraction capabilities when processing graphical images from different datasets, where the system correctly distinguishes the most relevant visual elements of images with minimal tweaking. Full article
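The first stage of the pipeline described above, K-Means with random initial centroids over pixel values, can be sketched in a few lines. This is a self-contained toy K-Means for illustration, not the implementation used by the authors:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means: random initial centroids drawn from the data,
    then alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Squared distance of every pixel to every centroid.
        d = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if np.any(labels == c):  # skip empty clusters
                centroids[c] = pixels[labels == c].mean(0)
    return labels, centroids

# Two well-separated pixel values should end up in two pure clusters.
pts = np.array([[0.0, 0.0]] * 5 + [[10.0, 10.0]] * 5)
labels, centroids = kmeans(pts, k=2)
```

In the full pipeline the resulting label map is thresholded into a binary mask, whose distance map then seeds the Watershed stage.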
(This article belongs to the Special Issue Image Segmentation Techniques: Current Status and Future Directions)

14 pages, 1984 KiB  
Article
Ultraviolet Fluorescence Photography—Choosing the Correct Filters for Imaging
by Jonathan Crowther
J. Imaging 2022, 8(6), 162; https://doi.org/10.3390/jimaging8060162 - 7 Jun 2022
Cited by 4 | Viewed by 3399
Abstract
Ultraviolet (UV) fluorescence is a valuable tool for the imaging of a wide range of subjects. Like all imaging techniques, the key to success depends on the correct choice of equipment and approach. In fluorescence photography, a filter is placed in front of the camera lens to block unwanted short-wavelength light from entering the camera, which would otherwise compromise the image. However, some filters exhibit fluorescence under UV light and therefore have the potential to produce a color cast on the image. Filters also vary in how well they block unwanted light. A range of commonly used optical filters was assessed for fluorescence under UV light, and their optical transmission between 250 nm and 800 nm was measured. Finally, a simple method is described that enables researchers to determine the fluorescence of the filters that they are using or wish to use for their work. The results indicate that the filters tested demonstrated a wide range of fluorescence under UV light and varying degrees of UV blocking. Some filters tested had equivalent or reduced fluorescence compared to the Schott KV-418, a widely used but, unfortunately, no longer manufactured UV-blocking filter for fluorescence photography. Full article
(This article belongs to the Special Issue Spectral Imaging for Cultural Heritage)

11 pages, 1579 KiB  
Article
Detection Rate and Variability in Measurement of Mandibular Incisive Canal on Cone-Beam Computed Tomography: A Study of 220 Dentate Hemi-Mandibles from Italy
by Andrea Borghesi, Diego Di Salvo, Pietro Ciolli, Teresa Falcone, Marco Ravanelli, Davide Farina and Nicola Carapella
J. Imaging 2022, 8(6), 161; https://doi.org/10.3390/jimaging8060161 - 7 Jun 2022
Viewed by 2546
Abstract
The mandibular incisive canal (MIC) is a small bony channel located in the interforaminal region; it represents the anterior continuation of the mandibular canal. Cone-beam computed tomography (CBCT) is the most commonly utilized radiological technique for assessing the MIC. The main purpose of this study was to evaluate the detectability and variability in measurements of the MIC on CBCT. A total of 220 dentate hemi-mandibles were retrospectively selected for this study. For each hemi-mandible, the detectability, diameter, and distance of the MIC from anatomical landmarks (cortical plates and tooth apices) were evaluated in consensus by two observers. The analysis was performed at four different levels (first premolar, canine, lateral incisor, and central incisor) and was repeated after one month. The variability of MIC measurements was expressed as the coefficient of repeatability (CR), obtained from the Bland–Altman analysis. The MIC detection rate reduced from the first premolar to the central incisor (from 82.3% to 0.5%). The CR of MIC measurements (diameter and distances from anatomical landmarks) was ≤0.74 mm. Although the MIC is difficult to detect in a non-negligible percentage of cases, the limited variability in measurements confirms that CBCT is an effective technique for the assessment of the MIC. Full article
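The coefficient of repeatability from a Bland–Altman analysis is commonly computed as 1.96 × the standard deviation of the paired differences between the two measurement sessions; the sketch below follows that common convention, which may differ slightly from the constant used in the paper:

```python
import numpy as np

def coefficient_of_repeatability(m1, m2):
    """Bland-Altman coefficient of repeatability: 1.96 x the standard
    deviation of the paired differences between two measurement
    sessions (sample standard deviation, ddof=1)."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return 1.96 * d.std(ddof=1)

# Two sessions measuring the same quantity (e.g., MIC diameter in mm).
session_1 = np.array([0.0, 2.0, 0.0, 2.0])
session_2 = np.array([1.0, 1.0, 1.0, 1.0])
cr = coefficient_of_repeatability(session_1, session_2)
```

A small CR (here the paper reports ≤ 0.74 mm) means repeated measurements rarely differ by more than that amount.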
(This article belongs to the Special Issue New Frontiers of Advanced Imaging in Dentistry)
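The coefficient of repeatability reported above has a simple closed form: with paired measurements from two sessions, 95% of repeat differences are expected to fall within ±CR. A minimal sketch of the Bland–Altman CR (the 1.96 × SD-of-differences form; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def coefficient_of_repeatability(session1, session2):
    """Bland-Altman coefficient of repeatability (CR) for paired repeated
    measurements: 95% of differences between the two sessions are expected
    to fall within +/- CR. Some texts instead use 2.77 x within-subject SD."""
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    return 1.96 * np.std(d, ddof=1)
```

For the study above, `session1`/`session2` would hold the first and repeat MIC measurements (diameters or landmark distances, in mm) for each hemi-mandible.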

18 pages, 2021 KiB  
Article
A Brief Survey on No-Reference Image Quality Assessment Methods for Magnetic Resonance Images
by Igor Stępień and Mariusz Oszust
J. Imaging 2022, 8(6), 160; https://doi.org/10.3390/jimaging8060160 - 4 Jun 2022
Cited by 9 | Viewed by 3542
Abstract
No-reference image quality assessment (NR-IQA) methods automatically and objectively predict the perceptual quality of images without access to a reference image. Therefore, due to the lack of pristine images in most medical image acquisition systems, they play a major role in supporting the examination of resulting images and may affect subsequent treatment. Their usage is particularly important in magnetic resonance imaging (MRI) characterized by long acquisition times and a variety of factors that influence the quality of images. In this work, a survey covering recently introduced NR-IQA methods for the assessment of MR images is presented. First, typical distortions are reviewed and then popular NR methods are characterized, taking into account the way in which they describe MR images and create quality models for prediction. The survey also includes protocols used to evaluate the methods and popular benchmark databases. Finally, emerging challenges are outlined along with an indication of the trends towards creating accurate image prediction models. Full article
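As a concrete (toy) example of the no-reference idea, a quality score can be computed from image statistics alone, with no pristine reference. The variance-of-Laplacian blur score below is a classic hand-crafted NR feature, shown only to illustrate the setting; it is not one of the surveyed MRI-specific methods:

```python
import numpy as np

def laplacian_variance(img):
    """Toy no-reference sharpness score: variance of a 4-neighbour discrete
    Laplacian. Blur suppresses high frequencies, which lowers the score."""
    img = np.asarray(img, dtype=float)
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4.0 * img)
    return float(lap.var())
```

The MRI-specific methods in the survey replace such single features with richer descriptors and learned quality models, but the interface is the same: image in, scalar quality estimate out.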

18 pages, 651 KiB  
Review
Attention-Setting and Human Mental Function
by Thomas Sanocki and Jong Han Lee
J. Imaging 2022, 8(6), 159; https://doi.org/10.3390/jimaging8060159 - 1 Jun 2022
Cited by 3 | Viewed by 2446
Abstract
This article provides an introduction to experimental research on top-down human attention in complex scenes, written for cognitive scientists in general. We emphasize the major effects of goals and intention on mental function, measured with behavioral experiments. We describe top-down attention as an open category of mental actions that initiates particular task sets, which are assembled from a wide range of mental processes. We call this attention-setting. Experiments on visual search, task switching, and temporal attention are described and extended to the important human time scale of seconds. Full article
(This article belongs to the Special Issue Human Attention and Visual Cognition)

43 pages, 26516 KiB  
Article
Atmospheric Correction for High-Resolution Shape from Shading on Mars
by Marcel Hess, Moritz Tenthoff, Kay Wohlfarth and Christian Wöhler
J. Imaging 2022, 8(6), 158; https://doi.org/10.3390/jimaging8060158 - 1 Jun 2022
Cited by 1 | Viewed by 2083
Abstract
Digital Elevation Models (DEMs) of planet Mars are crucial for many remote sensing applications and for landing site characterization of rover missions. Shape from Shading (SfS) is known to work well as a complementary method to greatly enhance the quality of photogrammetrically obtained DEMs of planetary surfaces with respect to the effective resolution and the overall accuracy. In this work, we extend our previous lunar shape and albedo from shading framework by embedding the Hapke photometric reflectance model in an atmospheric model such that it is applicable to Mars. Compared to previous approaches, the proposed method is capable of directly estimating the atmospheric parameters from a given scene without the need for external data, and assumes a spatially varying albedo. The DEMs are generated from imagery of the Context Camera (CTX) onboard the Mars Reconnaissance Orbiter (MRO) and are validated for clear and opaque atmospheric conditions. We analyze the necessity of using atmospheric compensation depending on the atmospheric conditions. For low optical depths, the Hapke model without an atmospheric component is still applicable to the Martian surface. For higher optical depths, atmospheric compensation is required to obtain good quality DEMs. Full article
(This article belongs to the Special Issue Photometric Stereo)
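The dependence on optical depth can be illustrated with a schematic two-term radiance model: the surface signal is attenuated along the sun-to-surface and surface-to-camera paths, while an additive path-radiance (haze) term grows with τ. This is only a sketch of why compensation becomes necessary at high optical depth; it is not the Hapke-based atmospheric model used in the paper, and all names are assumptions:

```python
import numpy as np

def apparent_radiance(surface, tau, mu_sun, mu_view, haze):
    """Toy two-term model: surface term attenuated by the two-way
    transmission, plus an additive atmospheric (path) term.
    mu_sun / mu_view: cosines of the incidence and viewing angles."""
    transmission = np.exp(-tau / mu_sun - tau / mu_view)
    return surface * transmission + haze * (1.0 - np.exp(-tau))
```

At τ = 0 the observed radiance equals the surface term, consistent with the paper's finding that a purely surface (Hapke) model remains applicable in clear conditions.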

18 pages, 1345 KiB  
Article
Embedded Quantitative MRI T1ρ Mapping Using Non-Linear Primal-Dual Proximal Splitting
by Matti Hanhela, Antti Paajanen, Mikko J. Nissi and Ville Kolehmainen
J. Imaging 2022, 8(6), 157; https://doi.org/10.3390/jimaging8060157 - 31 May 2022
Cited by 3 | Viewed by 1785
Abstract
Quantitative MRI (qMRI) methods allow reducing the subjectivity of clinical MRI by providing numerical values on which diagnostic assessment or predictions of tissue properties can be based. However, qMRI measurements typically take more time than anatomical imaging due to requiring multiple measurements with varying contrasts for, e.g., relaxation time mapping. To reduce the scanning time, undersampled data may be combined with compressed sensing (CS) reconstruction techniques. Typical CS reconstructions first reconstruct a complex-valued set of images corresponding to the varying contrasts, followed by a non-linear signal model fit to obtain the parameter maps. We propose a direct, embedded reconstruction method for T1ρ mapping. The proposed method capitalizes on a known signal model to directly reconstruct the desired parameter map using a non-linear optimization model. The proposed reconstruction method also allows directly regularizing the parameter map of interest and greatly reduces the number of unknowns in the reconstruction, which are key factors in the performance of the reconstruction method. We test the proposed model using simulated radially sampled data from a 2D phantom and 2D Cartesian ex vivo measurements of a mouse kidney specimen. We compare the embedded reconstruction model to two CS reconstruction models and, in the Cartesian test case, also to the direct inverse fast Fourier transform. The T1ρ RMSE of the embedded reconstructions was reduced by 37–76% compared to the CS reconstructions when using undersampled simulated data, with the reduction growing with larger acceleration factors. The proposed, embedded model outperformed the reference methods on the experimental test case as well, especially providing robustness with higher acceleration factors. Full article
(This article belongs to the Special Issue The Present and the Future of Imaging)
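For context, the conventional two-step pipeline that the embedded method replaces ends with a pixelwise monoexponential fit, S(TSL) = S0·exp(−TSL/T1ρ). A minimal log-linear version of that fit is sketched below (illustrative names; the paper instead folds this signal model directly into a non-linear primal-dual reconstruction):

```python
import numpy as np

def fit_t1rho(tsl, signal):
    """Fit S(TSL) = S0 * exp(-TSL / T1rho) by least squares on log(S).
    tsl: spin-lock times; signal: magnitudes at one pixel.
    Returns (S0, T1rho)."""
    tsl = np.asarray(tsl, dtype=float)
    slope, intercept = np.polyfit(tsl, np.log(np.asarray(signal, dtype=float)), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)
```

Running this fit independently at every pixel is what makes the conventional approach sensitive to per-contrast reconstruction errors, which is the motivation for the embedded formulation.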

25 pages, 16364 KiB  
Article
Low-Cost Probabilistic 3D Denoising with Applications for Ultra-Low-Radiation Computed Tomography
by Illia Horenko, Lukáš Pospíšil, Edoardo Vecchi, Steffen Albrecht, Alexander Gerber, Beate Rehbock, Albrecht Stroh and Susanne Gerber
J. Imaging 2022, 8(6), 156; https://doi.org/10.3390/jimaging8060156 - 31 May 2022
Cited by 1 | Viewed by 2620
Abstract
We propose a pipeline for synthetic generation of personalized Computed Tomography (CT) images, with a radiation exposure evaluation and a lifetime attributable risk (LAR) assessment. We perform a patient-specific performance evaluation for a broad range of denoising algorithms (including the most popular deep learning denoising approaches, wavelets-based methods, methods based on Mumford–Shah denoising, etc.), focusing both on assessing the capability to reduce the patient-specific CT-induced LAR and on computational cost scalability. We introduce a parallel Probabilistic Mumford–Shah denoising model (PMS) and show that it markedly outperforms the compared common denoising methods in denoising quality and cost scaling. In particular, we show that it allows an approximately 22-fold robust patient-specific LAR reduction for infants and a 10-fold LAR reduction for adults. Using a normal laptop, the proposed algorithm for PMS allows cheap and robust (with a multiscale structural similarity index >90%) denoising of very large 2D videos and 3D images (with over 10⁷ voxels) that are subject to ultra-strong noise (Gaussian and non-Gaussian) for signal-to-noise ratios far below 1.0. The code is provided for open access. Full article
(This article belongs to the Topic Medical Image Analysis)

19 pages, 5841 KiB  
Article
Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning
by Kyriakos D. Apostolidis and George A. Papakostas
J. Imaging 2022, 8(6), 155; https://doi.org/10.3390/jimaging8060155 - 30 May 2022
Cited by 12 | Viewed by 2654
Abstract
In the past years, Deep Neural Networks (DNNs) have become popular in many disciplines such as Computer Vision (CV), and the evolution of hardware has helped researchers to develop many powerful Deep Learning (DL) models to deal with several problems. One of the most important challenges in the CV area is Medical Image Analysis. However, adversarial attacks have proven to be an important threat to vision systems by significantly reducing the performance of the models. This paper brings to light a different side of digital watermarking, as a potential black-box adversarial attack. In this context, apart from proposing a new category of adversarial attacks named watermarking attacks, we highlight a significant problem: the massive use of watermarks, for security reasons, seems to pose significant risks to vision systems. For this purpose, a moment-based local image watermarking method is implemented on three modalities, Magnetic Resonance Images (MRI), Computed Tomography (CT-scans), and X-ray images. The introduced methodology was tested on three state-of-the-art CV models, DenseNet201, DenseNet169, and MobileNetV2. The results revealed that the proposed attack achieved over 50% degradation of the models' performance in terms of accuracy. Additionally, MobileNetV2 was the most vulnerable model, and the modality with the biggest reduction was CT-scans. Full article
(This article belongs to the Special Issue Intelligent Strategies for Medical Image Analysis)

18 pages, 5861 KiB  
Article
Evaluating the Influence of ipRGCs on Color Discrimination
by Masaya Ohtsu, Akihiro Kurata, Keita Hirai, Midori Tanaka and Takahiko Horiuchi
J. Imaging 2022, 8(6), 154; https://doi.org/10.3390/jimaging8060154 - 28 May 2022
Viewed by 1897
Abstract
To investigate the influence of intrinsically photosensitive retinal ganglion cells (ipRGCs) on color discrimination, it is necessary to create two metameric light stimuli (metameric ipRGC stimuli) with the same amount of cone and rod stimulation, but different amounts of ipRGC stimulation. However, since the spectral sensitivity functions of cones and rods overlap with those of ipRGCs in a wavelength band, it has been difficult to independently control the amount of stimulation of ipRGCs only. In this study, we first propose a method for calculating metameric ipRGC stimulation based on the orthogonal basis functions of human photoreceptor cells. Then, we clarify the controllable range of metameric ipRGC stimulation within a color gamut. Finally, to investigate the color discrimination by metameric ipRGC stimuli, we conduct subjective evaluation experiments on 24 chromaticity coordinates using a multispectral projector. The results reveal a correlation between differences in the amount of ipRGC stimulation and differences in color appearance, indicating that ipRGCs may influence color discrimination. Full article
(This article belongs to the Special Issue Advances in Color Imaging)
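The constraint the stimuli must satisfy is linear: if a matrix A maps primary weights to L, M, S cone and rod responses, any change of weights in the null space of A leaves cones and rods fixed while (generically) changing the ipRGC response. A small linear-algebra sketch of that idea follows; it is not the paper's orthogonal-basis construction, and all names are illustrative:

```python
import numpy as np

def metameric_directions(A, tol=1e-10):
    """Columns of the result span the null space of A, where A[i, j] is the
    response of receptor class i (L, M, S cones, rods) to projector primary j.
    Shifting primary weights along these directions keeps cone and rod
    responses constant, so only the remaining (ipRGC) response can change."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T
```

A multispectral projector with more primaries than cone-plus-rod constraints (here 7 > 4) guarantees a non-trivial null space, which is why such hardware is needed to vary ipRGC stimulation independently.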

19 pages, 704 KiB  
Review
A Structured and Methodological Review on Vision-Based Hand Gesture Recognition System
by Fahmid Al Farid, Noramiza Hashim, Junaidi Abdullah, Md Roman Bhuiyan, Wan Noor Shahida Mohd Isa, Jia Uddin, Mohammad Ahsanul Haque and Mohd Nizam Husen
J. Imaging 2022, 8(6), 153; https://doi.org/10.3390/jimaging8060153 - 26 May 2022
Cited by 33 | Viewed by 7268
Abstract
Researchers have recently focused their attention on vision-based hand gesture recognition. However, due to several constraints, achieving an effective vision-driven hand gesture recognition system in real time has remained a challenge. This paper aims to uncover the limitations faced in image acquisition through the use of cameras, image segmentation and tracking, feature extraction, and gesture classification stages of vision-driven hand gesture recognition in various camera orientations. This paper surveys research on vision-based hand gesture recognition systems from 2012 to 2022, with the goal of identifying areas that are improving and those that need further work. We used specific keywords to find 108 articles in well-known online databases. In this article, we put together a collection of the most notable research works related to gesture recognition. We suggest different categories for gesture recognition-related research, with subcategories, to create a valuable resource in this domain. We summarize and analyze the methodologies in tabular form. After comparing similar types of methodologies in the gesture recognition field, we have drawn conclusions based on our findings. Our research also examined how well vision-based systems recognize hand gestures in terms of recognition accuracy. Identification accuracy varies widely, from 68% to 97%, with an average of 86.6%. The limitations considered comprise multiple interpretations of gestures and complex, non-rigid hand characteristics. In comparison to current research, this paper is unique in that it discusses all types of gesture recognition techniques. Full article
(This article belongs to the Special Issue Advances in Human Action Recognition Using Deep Learning)

16 pages, 692 KiB  
Article
Coded DNN Watermark: Robustness against Pruning Models Using Constant Weight Code
by Tatsuya Yasui, Takuro Tanaka, Asad Malik and Minoru Kuribayashi
J. Imaging 2022, 8(6), 152; https://doi.org/10.3390/jimaging8060152 - 26 May 2022
Cited by 3 | Viewed by 2308
Abstract
Deep Neural Network (DNN) watermarking techniques are increasingly being used to protect the intellectual property of DNN models. Basically, DNN watermarking is a technique to insert side information into the DNN model without significantly degrading the performance of its original task. A pruning attack is a threat to DNN watermarking, wherein the less important neurons in the model are pruned to make it faster and more compact. As a result, removing the watermark from the DNN model is possible. This study investigates a channel coding approach to protect DNN watermarking against pruning attacks. The channel model differs completely from conventional models involving digital images. Determining the suitable encoding methods for DNN watermarking remains an open problem. Herein, we present a novel encoding approach using constant weight codes to protect the DNN watermarking against pruning attacks. The experimental results confirmed that the robustness against pruning attacks could be controlled by carefully setting two thresholds for binary symbols in the codeword. Full article
(This article belongs to the Special Issue Intelligent Media Processing)
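A constant weight code is simply the set of length-n binary words with exactly w ones; because every codeword carries the same number of 1-symbols, the detector knows how many strong symbols to expect even after pruning perturbs the embedded weights. Enumerating such a codebook is straightforward (an illustrative sketch, not the authors' encoder):

```python
from itertools import combinations

def constant_weight_codewords(n, w):
    """All binary words of length n with Hamming weight exactly w."""
    words = []
    for ones in combinations(range(n), w):
        word = [0] * n
        for i in ones:
            word[i] = 1
        words.append(tuple(word))
    return words
```

The codebook size C(n, w) bounds the payload: one codeword can carry at most floor(log2 C(n, w)) message bits.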

19 pages, 3297 KiB  
Article
Bladder Wall Segmentation and Characterization on MR Images: Computer-Aided Spina Bifida Diagnosis
by Rania Trigui, Mouloud Adel, Mathieu Di Bisceglie, Julien Wojak, Jessica Pinol, Alice Faure and Kathia Chaumoitre
J. Imaging 2022, 8(6), 151; https://doi.org/10.3390/jimaging8060151 - 25 May 2022
Viewed by 1698
Abstract
(1) Background: Segmentation of the bladder wall's inner and outer boundaries on Magnetic Resonance Images (MRI) is a crucial step for the diagnosis and the characterization of the bladder state and function. This paper proposes an optimized system for the segmentation and the classification of the bladder wall. (2) Methods: For each image of our data set, the region of interest corresponding to the bladder wall was extracted using LevelSet contour-based segmentation. Several features were computed from the extracted wall on T2 MRI images. After an automatic selection of the sub-vector containing the most discriminant features, two supervised learning algorithms were tested using a bio-inspired optimization algorithm. (3) Results: The proposed system based on the improved LevelSet algorithm proved its efficiency in bladder wall segmentation. Experiments also showed that the Support Vector Machine (SVM) classifier, optimized by the Gray Wolf Optimizer (GWO) and using a Radial Basis Function (RBF) kernel, outperforms the Random Forest classification algorithm on the set of selected features. (4) Conclusions: A computer-aided optimized system based on segmentation and characterization of the bladder wall on MRI images for classification purposes is proposed. It can be significantly helpful for radiologists as part of spina bifida studies. Full article
(This article belongs to the Section Medical Imaging)
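The Gray Wolf Optimizer used here for hyperparameter tuning is a population method in which candidate solutions move toward the three current best ("alpha", "beta", "delta") wolves under a decaying exploration coefficient. A minimal toy version is sketched below on a generic objective rather than the paper's SVM cross-validation loss; all names and defaults are illustrative:

```python
import numpy as np

def gwo_minimize(f, dim, lo, hi, n_wolves=12, iters=80, seed=0):
    """Toy Grey Wolf Optimizer: each wolf is attracted to the three best
    positions found so far; the coefficient `a` decays from 2 to 0,
    shifting the swarm from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        alpha, beta, delta = X[order[:3]]        # advanced indexing copies
        a = 2.0 * (1.0 - t / iters)
        for i in range(n_wolves):
            candidate = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                step = (2.0 * a * r1 - a) * np.abs(2.0 * r2 * leader - X[i])
                candidate += leader - step
            X[i] = np.clip(candidate / 3.0, lo, hi)
    best = min(X, key=f)
    return best, f(best)
```

For the study above, `f` would be a cross-validated error of the RBF-kernel SVM as a function of its hyperparameters (e.g. C and gamma); any smooth objective works for illustration.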

22 pages, 14687 KiB  
Article
IR Reflectography, Pulse-Compression Thermography, MA-XRF, and Radiography: A Full-Thickness Study of a 16th-Century Panel Painting Copy of Raphael
by Tiziana Cavaleri, Claudia Pelosi, Marco Ricci, Stefano Laureti, Francesco Paolo Romano, Claudia Caliri, Bernadette Ventura, Stefania De Blasi and Marco Gargano
J. Imaging 2022, 8(6), 150; https://doi.org/10.3390/jimaging8060150 - 24 May 2022
Cited by 6 | Viewed by 2331
Abstract
The potential of any multi-analytical and non-invasive approach to the study of cultural heritage, both for conservation and scientific investigation purposes, is gaining increasing interest, and it was tested in this paper, focusing on the panel painting Madonna della Tenda (Musei Reali, Turin), identified as a 16th-century copy of the painting by Raffaello Sanzio. As a part of a broader diagnostic campaign carried out at the Centro Conservazione e Restauro, La Venaria Reale in Turin, Italy, the potential of the combination of X-ray radiography, pulse-compression thermography, macro X-ray fluorescence, and IR reflectography was tested to investigate the wooden support and all the preparatory phases for the realization of the painting. The results of the optical microscopy and SEM/EDS analyses on a multi-layered micro-sample were used for a precise comparison, integration, and/or confirmation of what was suggested by the non-invasive techniques. Particularly, the radiographic and thermographic techniques allowed for an in-depth study of a hole, interestingly present on the panel’s back surface, detecting the trajectory of the wood grain and confirming the presence of an old wood knot, as well as of a tau-shaped element—potentially a cracked and unfilled area of the wooden support—near the hollow. The combination of radiography, macro X-ray fluorescence, Near Infrared (NIR), and Short Wave Infrared (SWIR) reflectography allowed for an inspection of the ground layer, imprimitura, engravings, and underdrawing, not only revealing interesting technical-executive aspects of the artwork realization, but also highlighting the advantages of an integrated reading of data obtained from the different analytical techniques. Full article
(This article belongs to the Special Issue Spectral Imaging for Cultural Heritage)
