Intelligent Processing on Image and Optical Information

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (15 January 2020) | Viewed by 76545

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Information

Dear Colleagues,

Intelligent image and optical information processing has contributed significantly to the recent epoch of artificial intelligence and smart cars. Information acquired by various imaging techniques is of tremendous value, and intelligent analysis of this information is necessary to make the best use of it.

This special issue focuses on the vast range of intelligent processing of image and optical information acquired by various imaging methods. Images are commonly formed via visible light; three-dimensional information is acquired by multi-view imaging or digital holography; infrared, terahertz, and millimeter waves are good resources in non-visible environments. Synthetic aperture radar and radiographic or ultrasonic imaging serve military, industrial, and medical regimes. The objectives of intelligent processing range from the refinement of raw data to the symbolic representation and visualization of the real world, achieved through unsupervised or supervised learning based on statistical and mathematical models or computational algorithms.

Intelligent processing of image and optical information is involved in a wide variety of research fields such as video surveillance, biometric recognition, non-destructive testing, medical diagnosis, robotic sensing, compressed sensing, autonomous driving, and three-dimensional scene reconstruction, among others. The latest technological developments will be shared through this special issue. We invite researchers and investigators to contribute their original research or review articles to this special issue.

Prof. Dr. Seokwon Yeom
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent image processing 
  • Machine and robot vision
  • Optical information processing
  • IR, THz, MMW, SAR image analysis 
  • Bio-medical image analysis 
  • Three-dimensional information processing 
  • Image detection, recognition, and tracking 
  • Segmentation and feature extraction 
  • Image registration and fusion 
  • Image enhancement and restoration

Published Papers (20 papers)


Editorial

4 pages, 150 KiB  
Editorial
Special Issue on Intelligent Processing on Image and Optical Information
by Seokwon Yeom
Appl. Sci. 2020, 10(11), 3911; https://doi.org/10.3390/app10113911 - 5 Jun 2020
Viewed by 1421
Abstract
Intelligent image and optical information processing have paved the way for the recent epoch of new intelligence and information era [...] Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)

Research

19 pages, 17230 KiB  
Article
Multifocus Image Fusion Using a Sparse and Low-Rank Matrix Decomposition for Aviator’s Night Vision Goggle
by Bo-Lin Jian, Wen-Lin Chu, Yu-Chung Li and Her-Terng Yau
Appl. Sci. 2020, 10(6), 2178; https://doi.org/10.3390/app10062178 - 23 Mar 2020
Cited by 3 | Viewed by 3080
Abstract
This study proposed the concept of sparse and low-rank matrix decomposition to address the need for automated inspection of aviator's night vision goggles (NVG) when inspecting equipment availability. First, the automation requirements include a motor-driven mechanism for the focus knob of the NVGs and image capture using cameras to achieve autofocus. Traditionally, passive autofocus involves first computing the sharpness of each frame and then using a search algorithm to quickly find the sharpest focus. In this study, the concept of sparse and low-rank matrix decomposition was adopted to achieve autofocus calculation and image fusion. Image fusion can solve the multifocus problem caused by mechanism errors. Experimental results showed that the sharpest image frame and its nearby frame can be fused to resolve minor errors possibly arising from the image-capture mechanism. In this study, seven samples and 12 image-fusion indicators were employed to compare the proposed image fusion method against fusion based on variance calculated in a discrete cosine transform domain (with and without consistency verification) and structure-aware image fusion. Experimental results showed that the proposed method was superior to the other methods; the proposed autofocus was also compared with the normalized gray-level variance sharpness results reported in the literature to verify its accuracy. Full article
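The sparse-and-low-rank idea behind the autofocus and fusion computation can be illustrated with a minimal alternating scheme (a simplified stand-in under assumed parameters, not the authors' exact algorithm): a truncated SVD recovers the low-rank part, and soft-thresholding collects the sparse residual.

```python
import numpy as np

def sparse_lowrank_split(X, rank=2, sparse_thresh=0.1, n_iter=25):
    """Split matrix X into a low-rank part L and a sparse part S (X ~ L + S)
    by alternating truncated-SVD and soft-thresholding steps."""
    L = np.zeros_like(X, dtype=float)
    S = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        # Low-rank update: best rank-r approximation of the residual X - S
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
        # Sparse update: soft-threshold the residual X - L
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
    return L, S
```

On a synthetic rank-1 background plus a single bright outlier, the outlier lands in the sparse term while the background stays in the low-rank term.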

11 pages, 3622 KiB  
Article
Unsupervised Generation and Synthesis of Facial Images via an Auto-Encoder-Based Deep Generative Adversarial Network
by Jeong gi Kwak and Hanseok Ko
Appl. Sci. 2020, 10(6), 1995; https://doi.org/10.3390/app10061995 - 14 Mar 2020
Cited by 4 | Viewed by 2118
Abstract
The processing of facial images is an important task, because it is required for a large number of real-world applications. As deep-learning models evolve, they require a huge number of images for training. In reality, however, the number of images available is limited. Generative adversarial networks (GANs) have thus been utilized for database augmentation, but they suffer from unstable training, low visual quality, and a lack of diversity. In this paper, we propose an auto-encoder-based GAN with an enhanced network structure and training scheme for database (DB) augmentation and image synthesis. Our generator and decoder are divided into two separate modules that each take input vectors for low-level and high-level features; these input vectors affect all layers within the generator and decoder. The effectiveness of the proposed method is demonstrated by comparing it with baseline methods. In addition, we introduce a new scheme that can combine two existing images without the need for extra networks, based on the auto-encoder structure of the discriminator in our model. We add a novel double-constraint loss to make the encoded latent vectors equal to the input vectors. Full article

23 pages, 6662 KiB  
Article
Boundary Matching and Interior Connectivity-Based Cluster Validity Analysis
by Qi Li, Shihong Yue, Yaru Wang, Mingliang Ding, Jia Li and Zeying Wang
Appl. Sci. 2020, 10(4), 1337; https://doi.org/10.3390/app10041337 - 16 Feb 2020
Cited by 3 | Viewed by 2244
Abstract
The evaluation of clustering results plays an important role in clustering analysis. However, the existing validity indices are limited to a specific clustering algorithm, clustering parameter, and assumption in practice. In this paper, we propose a novel validity index to solve the above problems based on two complementary measures: boundary point matching and interior point connectivity. Firstly, when any clustering algorithm is performed on a dataset, we extract all boundary points for the dataset and its partitioned clusters using a nonparametric metric, and the measure of boundary point matching is computed. Secondly, the interior point connectivity of both the dataset and all the partitioned clusters is measured. The proposed validity index can evaluate different clustering results on a dataset obtained from different clustering algorithms, which the existing validity indices cannot do. Experimental results demonstrate that the proposed validity index can evaluate clustering results obtained by an arbitrary clustering algorithm and find the optimal clustering parameters. Full article

12 pages, 1336 KiB  
Article
Zebrafish Larvae Phenotype Classification from Bright-field Microscopic Images Using a Two-Tier Deep-Learning Pipeline
by Shang Shang, Sijie Lin and Fengyu Cong
Appl. Sci. 2020, 10(4), 1247; https://doi.org/10.3390/app10041247 - 13 Feb 2020
Cited by 11 | Viewed by 4023
Abstract
Classification of different zebrafish larvae phenotypes is useful for studying the environmental influence on embryo development. However, the scarcity of well-annotated training images and fuzzy inter-phenotype differences hamper the application of machine-learning methods in phenotype classification. This study develops a deep-learning approach to address these challenging problems. A convolutional network model with compressed separable convolution kernels is adopted to address the overfitting issue caused by insufficient training data. A two-tier classification pipeline is designed to improve the classification accuracy based on fuzzy phenotype features. Our method achieved an average accuracy of 91% over all the phenotypes and a maximum accuracy of 100% for some phenotypes (e.g., dead and chorion). We also compared our method with state-of-the-art methods on the same dataset; our method obtained an accuracy improvement of up to 22% over the existing method. This study offers an effective deep-learning solution for classifying difficult zebrafish larvae phenotypes based on very limited training data. Full article

15 pages, 13323 KiB  
Article
Detecting Green Mold Pathogens on Lemons Using Hyperspectral Images
by Yuriy Vashpanov, Gwanghee Heo, Yongsuk Kim, Tetiana Venkel and Jung-Young Son
Appl. Sci. 2020, 10(4), 1209; https://doi.org/10.3390/app10041209 - 11 Feb 2020
Cited by 8 | Viewed by 3994
Abstract
Hyperspectral images in the spectral wavelength range of 500 nm to 650 nm are used to detect green mold pathogens, which are parasitic on the surface of lemons. The images reveal that the spectral range of 500 nm to 560 nm is appropriate for detecting the early stage of development of the pathogen on the lemon, because the spectral intensity is proportional to the degree of infection. Within this range, it was found that the dominant spectral wavelengths of the fresh lemon and the green mold pathogen are 580 nm and 550 nm, respectively, with 550 nm being the most sensitive for detecting the pathogen with spectral imaging. The spectral intensity ratio of the infected lemon to the fresh one in the spectral range of 500 nm to 560 nm increases with the increasing degree of infection; therefore, the ratio can be used to effectively estimate the degree of lemon infection by green mold pathogens. The results also show that a sudden decrease of the spectral intensity at the dominant spectral wavelength of the fresh lemon, together with the neighboring spectral wavelengths, can be used to classify fresh and contaminated lemons. The spectral intensity ratio for discriminating the fresh lemon from the infected one is calculated as 1.15. Full article
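The ratio-based discrimination described above can be sketched in a few lines. The 1.15 threshold comes from the abstract; the band-averaging helper and function names are illustrative assumptions, not the authors' code.

```python
def infection_ratio(test_band, fresh_band):
    """Mean 500-560 nm band intensity of the test fruit divided by the
    mean band intensity of a fresh reference lemon."""
    return (sum(test_band) / len(test_band)) / (sum(fresh_band) / len(fresh_band))

def classify_lemon(test_band, fresh_band, threshold=1.15):
    """Label a lemon 'infected' when the band-intensity ratio reaches the
    1.15 discrimination value reported in the paper, else 'fresh'."""
    return "infected" if infection_ratio(test_band, fresh_band) >= threshold else "fresh"
```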

20 pages, 1862 KiB  
Article
An Effective Optimization Method for Machine Learning Based on ADAM
by Dokkyun Yi, Jaehyun Ahn and Sangmin Ji
Appl. Sci. 2020, 10(3), 1073; https://doi.org/10.3390/app10031073 - 5 Feb 2020
Cited by 94 | Viewed by 7189
Abstract
A machine is taught by finding the minimum value of the cost function induced by the learning data. Unfortunately, as the amount of learning increases, the non-linear activation function in the artificial neural network (ANN), the complexity of the artificial intelligence structures, and the cost function's non-convex complexity all increase. We know that a non-convex function has local minima, and that the first derivative of the cost function is zero at a local minimum. Therefore, methods based on gradient descent optimization do not undergo further change when they fall into a local minimum, because they are based on the first derivative of the cost function. This paper introduces a novel optimization method to make machine learning more efficient; in other words, we construct an effective optimization method for non-convex cost functions. The proposed method solves the problem of falling into a local minimum by adding the cost function to the parameter update rule of the ADAM method. We prove the convergence of the sequences generated by the proposed method and demonstrate its superiority by numerical comparison with gradient descent (GD), ADAM, and AdaMax. Full article
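For reference, the baseline ADAM update rule that the paper modifies can be sketched as follows. This is standard ADAM only; the paper's cost-function-augmented rule is not reproduced here, and the hyperparameters are the usual defaults, not the authors' settings.

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Minimize a function given its gradient using the standard ADAM rule."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first moment (running mean of gradients)
    v = np.zeros_like(x)   # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias correction
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

On a simple convex function such as f(x) = (x - 3)^2 the iterate settles near the minimizer x = 3, which is the behavior the paper's modification aims to preserve while escaping local minima of non-convex costs.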

16 pages, 19657 KiB  
Article
Image Completion with Hybrid Interpolation in Tensor Representation
by Rafał Zdunek and Tomasz Sadowski
Appl. Sci. 2020, 10(3), 797; https://doi.org/10.3390/app10030797 - 22 Jan 2020
Cited by 7 | Viewed by 2263
Abstract
The issue of image completion has been developed considerably over the last two decades, and many computational strategies have been proposed to fill in missing regions of an incomplete image. When the incomplete image contains many small irregular missing areas, a good alternative seems to be the matrix or tensor decomposition algorithms that yield low-rank approximations. However, this approach requires heuristic rank adaptation techniques, especially for images with many details. To tackle the obstacles of low-rank completion methods, we propose to model the incomplete images with overlapping blocks of Tucker decomposition representations, where the factor matrices are determined by a hybrid of Gaussian radial basis function and polynomial interpolation. The experiments, carried out for various image completion and resolution up-scaling problems, demonstrate that our approach considerably outperforms the baseline and state-of-the-art low-rank completion methods. Full article

18 pages, 7422 KiB  
Article
Error Resilience for Block Compressed Sensing with Multiple-Channel Transmission
by Hsiang-Cheh Huang, Po-Liang Chen and Feng-Cheng Chang
Appl. Sci. 2020, 10(1), 161; https://doi.org/10.3390/app10010161 - 24 Dec 2019
Cited by 11 | Viewed by 1844
Abstract
Compressed sensing is well known among existing schemes for its superior performance in lossy compression. Conventional research aims to reach a larger compression ratio at the encoder with acceptable quality of the reconstructed images at the decoder. This implies pursuing compression performance under error-free transmission between the encoder and the decoder. Beyond compression performance, we applied block compressed sensing to digital images for robust transmission. For transmission over lossy channels, error propagation or data loss can be expected, and protection mechanisms for compressed sensing signals are required to guarantee the quality of the reconstructed images. We propose transmitting compressed sensing signals over multiple independent channels for robust transmission. By introducing correlations with multiple-description coding, an effective means of error-resilient coding, errors induced in the lossy channels can be effectively alleviated. Simulation results demonstrate the applicability and superior performance of the approach, showing the effectiveness of protecting compressed sensing signals. Full article

13 pages, 2664 KiB  
Article
Feature Extraction with Discrete Non-Separable Shearlet Transform and Its Application to Surface Inspection of Continuous Casting Slabs
by Xiaoming Liu, Ke Xu, Peng Zhou and Huajie Liu
Appl. Sci. 2019, 9(21), 4668; https://doi.org/10.3390/app9214668 - 1 Nov 2019
Cited by 2 | Viewed by 2223
Abstract
A new feature extraction technique called DNST-GLCM-KSR (discrete non-separable shearlet transform-gray-level co-occurrence matrix-kernel spectral regression) is presented according to the direction and texture information of surface defects of continuous casting slabs with complex backgrounds. The discrete non-separable shearlet transform (DNST) is a new multi-scale geometric analysis method that provides excellent localization properties and directional selectivity. The gray-level co-occurrence matrix (GLCM) is a texture feature extraction technology. We combine DNST features with GLCM features to characterize defects of the continuous casting slabs. Since the combined feature is high-dimensional and redundant, the kernel spectral regression (KSR) algorithm was used to remove redundancy. The low-dimensional features obtained, together with label data, were input to a support vector machine (SVM) for classification. The proposed scheme was tested on samples collected from an industrial continuous casting slab production line, including cracks, scales, lighting variation, and slag marks. The test results show that the scheme can improve the classification accuracy to 96.37%, which provides a new approach for surface defect recognition of continuous casting slabs. Full article
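The GLCM step can be sketched directly in NumPy. This is a generic co-occurrence computation with one texture feature (Haralick contrast), not tied to the authors' parameter choices; the level count and displacement are assumptions.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table of gray-level pairs."""
    img = np.asarray(image)
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_contrast(M):
    """Haralick contrast: sum of P(i, j) * (i - j)^2 over the matrix."""
    i, j = np.indices(M.shape)
    return float(np.sum(M * (i - j) ** 2))
```

A flat region yields zero contrast, while a checkerboard (every horizontal neighbor differs by one level) yields contrast 1, matching the intuition that contrast measures local gray-level variation.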

16 pages, 3239 KiB  
Article
Multi-Sensor Face Registration Based on Global and Local Structures
by Wei Li, Mingli Dong, Naiguang Lu, Xiaoping Lou and Wanyong Zhou
Appl. Sci. 2019, 9(21), 4623; https://doi.org/10.3390/app9214623 - 30 Oct 2019
Cited by 6 | Viewed by 2471
Abstract
The work reported in this paper aims at utilizing the global geometrical relationship and local shape feature to register multi-spectral images for fusion-based face recognition. We first propose a multi-spectral face image registration method based on both global and local structures of feature point sets, combining the global geometrical relationship and the local shape feature in a new Student's t mixture probabilistic model framework. On the one hand, we use the inner-distance shape context as the local shape descriptor of the feature point sets. On the other hand, we formulate the feature point set registration of the multi-spectral face images as Student's t mixture probabilistic model estimation, and local shape descriptors are used to replace the mixing proportions of the prior Student's t mixture model. Furthermore, in order to improve the anti-interference performance of face recognition techniques, a guided filtering and gradient preserving image fusion strategy is used to fuse the registered multi-spectral face images. It makes the multi-spectral fusion image hold more apparent details of the visible image and the thermal radiation information of the infrared image. Subjective and objective registration experiments are conducted with manually selected landmarks and real multi-spectral face images. The qualitative and quantitative comparisons with the state-of-the-art methods demonstrate the accuracy and robustness of our proposed method in solving the multi-spectral face image registration problem. Full article

16 pages, 4407 KiB  
Article
Determination of the Optimal State of Dough Fermentation in Bread Production by Using Optical Sensors and Deep Learning
by Lino Antoni Giefer, Michael Lütjen, Ann-Kathrin Rohde and Michael Freitag
Appl. Sci. 2019, 9(20), 4266; https://doi.org/10.3390/app9204266 - 11 Oct 2019
Cited by 7 | Viewed by 3746
Abstract
Dough fermentation plays an essential role in the bread production process, and its success is critical to producing high-quality products. In Germany, the number of stores per bakery chain has increased within the last years, as has the trend of finishing bakery products locally at the stores. There is an unmet demand for skilled workers, which leads to an increasing number of untrained and inexperienced employees at the stores. This paper proposes a method for the automatic monitoring of the fermentation process based on optical techniques. By using a combination of machine learning and superellipsoid model fitting, we have developed an instance segmentation and parameter estimation method for dough objects that are positioned inside a fermentation chamber. In our method, we measure the given topography at discrete points in time using a movable laser sensor system located at the back of the fermentation chamber. By applying the superellipsoid model fitting method, we estimated the volume of each object and achieved results with a deviation of approximately 10% on average. Thereby, the volume gradient is monitored continuously and represents the progress of the fermentation state. Exploratory tests show the reliability and the potential of our method, which is particularly suitable for local stores but also for high-volume production in bakery plants. Full article

17 pages, 14717 KiB  
Article
A Correction Method for Heat Wave Distortion in Digital Image Correlation Measurements Based on Background-Oriented Schlieren
by Chang Ma, Zhoumo Zeng, Hui Zhang and Xiaobo Rui
Appl. Sci. 2019, 9(18), 3851; https://doi.org/10.3390/app9183851 - 13 Sep 2019
Cited by 8 | Viewed by 4488
Abstract
Digital image correlation (DIC) is a displacement and strain measurement technique. It enables non-contact, full-field measurement and is widely used in the testing and research of mechanical properties of materials at high temperatures. However, many factors affect measurement accuracy; as the high-temperature environment is complex, the impact of heat waves on DIC is the most significant factor. In order to correct the disturbance in DIC measurement caused by heat waves, this paper proposes a method based on the background-oriented schlieren (BOS) technique. The spot pattern on the surface of a specimen in digital image correlation can serve as the background in the background-oriented schlieren technique. The BOS technique can measure the distortion of the images caused by the heat flow field, and the specimen images taken through the heat waves can be corrected using this distortion information. In addition, the characteristics of distortions due to heat waves are also studied in this paper. The experimental results verify that the proposed method can effectively eliminate heat wave disturbances in DIC measurements. Full article

12 pages, 3178 KiB  
Article
Automatic Zebrafish Egg Phenotype Recognition from Bright-Field Microscopic Images Using Deep Convolutional Neural Network
by Shang Shang, Ling Long, Sijie Lin and Fengyu Cong
Appl. Sci. 2019, 9(16), 3362; https://doi.org/10.3390/app9163362 - 15 Aug 2019
Cited by 10 | Viewed by 4457
Abstract
Zebrafish eggs are widely used in biological experiments to study the environmental and genetic influence on embryo development. Due to the high throughput of microscopic imaging, automated analysis of zebrafish egg microscopic images is in high demand. However, machine learning algorithms for zebrafish egg image analysis suffer from the problems of a small, imbalanced training dataset and subtle inter-class differences. In this study, we developed an automated zebrafish egg microscopic image analysis algorithm based on a deep convolutional neural network (CNN). To tackle the problem of insufficient training data, the strategies of transfer learning and data augmentation were used. We also adopted the global average pooling technique to overcome the subtle phenotype differences between the fertilized and unfertilized eggs. Experimental results of a five-fold cross-validation test showed that the proposed method yielded a mean classification accuracy of 95.0% and a maximum accuracy of 98.8%. The network also demonstrated higher classification accuracy and better convergence performance than conventional CNN methods. This study extends the deep learning technique to zebrafish egg phenotype classification and paves the way for automatic bright-field microscopic image analysis. Full article
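The global average pooling operation mentioned above reduces each feature map to its mean, so the classifier sees one value per channel regardless of spatial resolution; a minimal sketch (channel-first layout assumed):

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each (H, W) feature map to a single value by averaging,
    turning a (C, H, W) tensor into a length-C descriptor."""
    return np.asarray(feature_maps, dtype=float).mean(axis=(1, 2))
```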

17 pages, 7855 KiB  
Article
Real-Time Automated Segmentation and Classification of Calcaneal Fractures in CT Images
by Wahyu Rahmaniar and Wen-June Wang
Appl. Sci. 2019, 9(15), 3011; https://doi.org/10.3390/app9153011 - 26 Jul 2019
Cited by 16 | Viewed by 7678
Abstract
Calcaneal fractures often occur because of accidents during exercise or other activities. In general, the detection of calcaneal fractures is still carried out manually through CT image observation, and as a result, there is a lack of precision in the analysis. This paper proposes a computer-aided method for calcaneal fracture detection to acquire a faster and more detailed observation. First, the anatomical plane orientation of the tarsal bone in the input image is selected to determine the location of the calcaneus. Then, several fragments of the calcaneus image are detected and marked by color segmentation. The Sanders system is used to classify fractures in transverse and coronal images into four types, based on the number of fragments. In the sagittal image, fractures are classified into three types based on the involvement of the fracture area. The experimental results show that the proposed method achieves a high precision rate of 86%, with a fast computational performance of 133 frames per second (fps), for analyzing the severity of injury to the calcaneus. The results on the test images are validated based on the assessment and evaluation carried out by the physician on the reference datasets. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
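The Sanders grading step described above can be sketched as a simple mapping from the number of detected articular fragments to a fracture type. This is a hypothetical helper, not the authors' code, and the `displaced` flag is an assumption of the sketch:

```python
def sanders_type(num_fragments, displaced=True):
    """Map a segmented fragment count to a Sanders fracture type.

    Sanders type I is nondisplaced regardless of fragment count;
    types II-IV correspond to two, three, and four or more
    displaced articular fragments, respectively.
    """
    if not displaced or num_fragments <= 1:
        return "I"
    if num_fragments == 2:
        return "II"
    if num_fragments == 3:
        return "III"
    return "IV"
```

In the paper's pipeline, `num_fragments` would come from the color-segmentation stage applied to the transverse and coronal views.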

16 pages, 4973 KiB  
Article
A Novel Extraction Method for Wildlife Monitoring Images with Wireless Multimedia Sensor Networks (WMSNs)
by Wending Liu, Hanxing Liu, Yuan Wang, Xiaorui Zheng and Junguo Zhang
Appl. Sci. 2019, 9(11), 2276; https://doi.org/10.3390/app9112276 - 2 Jun 2019
Cited by 6 | Viewed by 2877
Abstract
In remote areas, wireless multimedia sensor networks (WMSNs) have limited energy, and the data processing of wildlife monitoring images always suffers from energy consumption limitations. Generally, only part of each wildlife image is valuable, so the above-mentioned issue can be mitigated by transmitting only the target area. Inspired by this transport strategy, in this paper we propose an image extraction method with low computational complexity, which can be adapted to separate the target area (i.e., the animal) from its background area according to the characteristics of the image pixels. Specifically, we first reconstruct a color space model via a CIELUV (LUV) color space framework to extract the color parameters. Next, exploiting the properties of Hermite polynomials, a Hermite filter is utilized to extract the texture features, which ensures the accuracy of the segmentation of wildlife images. Then, an adaptive mean-shift algorithm is introduced to cluster the texture features and color space information, realizing the extraction of the foreground area in the monitoring image. To verify the performance of the algorithm, a demonstration of the extraction of field-captured wildlife images is presented. Further, we conduct a comparative experiment with the normalized cuts (N-cuts) algorithm, the existing aggregating super-pixels (SAS) algorithm, and the histogram contrast saliency detection (HCS) algorithm. A comparison of the results shows that the proposed algorithm for monitoring-image target area extraction increased the average pixel accuracy by 11.25%, 5.46%, and 10.39%, respectively; improved the relative limit measurement accuracy by 1.83%, 5.28%, and 12.05%, respectively; and increased the mean intersection over union by 7.09%, 14.96%, and 19.14%, respectively. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
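The clustering stage can be illustrated with a minimal flat-kernel mean-shift over per-pixel feature vectors (color plus texture responses). The bandwidth and iteration count below are illustrative assumptions, not values from the paper, and the O(n²) loop is only suitable for a small sketch:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=20):
    """Shift each point toward the mean of the original points that
    fall inside its window (flat kernel); points converge to local
    density peaks, which act as cluster centers."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            neighbors = points[dist <= bandwidth]
            shifted[i] = neighbors.mean(axis=0)
    return shifted
```

Pixels whose shifted features land on the same peak form one region; the foreground (animal) would be the cluster matching the target's color/texture profile.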

13 pages, 1275 KiB  
Article
A Texture Classification Approach Based on the Integrated Optimization for Parameters and Features of Gabor Filter via Hybrid Ant Lion Optimizer
by Mingwei Wang, Lang Gao, Xiaohui Huang, Ying Jiang and Xianjun Gao
Appl. Sci. 2019, 9(11), 2173; https://doi.org/10.3390/app9112173 - 28 May 2019
Cited by 12 | Viewed by 2646
Abstract
Texture classification is an important topic for many applications in machine vision and image analysis, and the Gabor filter is considered one of the most efficient tools for analyzing texture features at multiple orientations and scales. However, the parameter settings of each filter are crucial for obtaining accurate results, and they may not be adaptable to different kinds of texture features. Moreover, the process of texture feature extraction includes redundant information that contributes little to the classification. In this paper, a new texture classification technique is detailed. The approach is based on the integrated optimization of the parameters and features of the Gabor filter: obtaining satisfactory parameters and the best feature subset is viewed as a combinatorial optimization problem that can be solved by maximizing an objective function with a hybrid ant lion optimizer (HALO). Experimental results, particularly the fitness values, demonstrate that HALO is more effective than the other algorithms discussed in this paper, and that the optimal parameters and features of the Gabor filter balance efficiency and accuracy. The method is feasible and reasonable, and can be utilized in practical applications of texture classification. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
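For reference, a single real-valued Gabor kernel can be built as below; the parameters (`sigma`, `theta`, `lambd`, `gamma`, `psi`) are exactly the kind of settings such an optimizer would tune, though the formulation here is the textbook one rather than anything taken from the paper:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=1.0, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope multiplied
    by a cosine carrier at orientation theta and wavelength lambd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier
```

A filter bank is then a set of such kernels over several `theta` and `lambd` values; the optimizer's job is to pick the settings (and the subset of resulting features) that maximize classification fitness.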

20 pages, 9645 KiB  
Article
IMU-Aided High-Frequency Lidar Odometry for Autonomous Driving
by Hanzhang Xue, Hao Fu and Bin Dai
Appl. Sci. 2019, 9(7), 1506; https://doi.org/10.3390/app9071506 - 11 Apr 2019
Cited by 24 | Viewed by 6295
Abstract
For autonomous driving, it is important to obtain precise and high-frequency localization information. This paper proposes a novel method in which the Inertial Measurement Unit (IMU), wheel encoder, and lidar odometry are utilized together to estimate the ego-motion of an unmanned ground vehicle. The IMU is fused with the wheel encoder to obtain the motion prior, and it is involved in three levels of the lidar odometry: firstly, the IMU information is used to rectify the intra-frame distortion of the lidar scan, which is caused by the vehicle's own movement; secondly, the IMU provides a better initial guess for the lidar odometry; and thirdly, the IMU is fused with the lidar odometry in an extended Kalman filter (EKF) framework. In addition, an efficient method for hand–eye calibration between the IMU and the lidar is proposed. To evaluate the performance of our method, extensive experiments were performed; the system outputs stable, accurate, and high-frequency localization results in diverse environments without any prior information. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
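The third fusion level, combining IMU-predicted motion with lidar-odometry corrections, follows the standard Kalman predict/update cycle. A heavily simplified 2D-position sketch (identity observation model, no orientation state; all names and matrices are assumptions of this sketch, not the paper's formulation) looks like:

```python
import numpy as np

def ekf_fuse(x, P, imu_delta, Q, lidar_pose, R):
    """One predict/update cycle: the IMU-integrated displacement
    drives the prediction, and the lidar odometry pose corrects it.

    x: current 2D position estimate; P: its covariance.
    Q: process noise (IMU drift); R: lidar measurement noise.
    """
    # Predict: apply the IMU-integrated motion prior.
    x_pred = x + imu_delta
    P_pred = P + Q
    # Update: lidar odometry observes the pose directly (H = I).
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (lidar_pose - x_pred)
    P_new = (np.eye(len(x)) - K) @ P_pred
    return x_new, P_new
```

Because the IMU runs far faster than the lidar, many predictions happen between updates, which is what yields a high-frequency output from a low-frequency corrector.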

12 pages, 2376 KiB  
Article
Change Detection of Water Resources via Remote Sensing: An L-V-NSCT Approach
by Wang Xin, Tang Can, Wang Wei and Li Ji
Appl. Sci. 2019, 9(6), 1223; https://doi.org/10.3390/app9061223 - 22 Mar 2019
Cited by 8 | Viewed by 2389
Abstract
Aiming at the change detection of water resources via remote sensing, a non-subsampled contourlet transform method combining a log-vari model and the Structural Similarity of Variogram (VSSIM) model, namely the log-vari and VSSIM based non-subsampled contourlet transform (L-V-NSCT) approach, is proposed. Firstly, a differential image construction method based on non-subsampled contourlet transform (NSCT) texture analysis is designed to extract the low-frequency and high-frequency texture features of the objects in the images. Secondly, the texture features of rivers, lakes, and other objects in the images are accurately classified. Finally, the change detection results for the regions of interest are extracted and evaluated. In the experiments, the L-V-NSCT approach is compared with other methods, and the results show the effectiveness of the method. The change in Dongting Lake is also analyzed, which can serve as a reference for the relevant administrative departments. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
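The differential-image step in change detection is, at its core, a pixel-wise comparison of the two acquisition dates before any texture classification. A common log-ratio baseline (a generic technique, not the paper's NSCT-based construction) can be sketched as:

```python
import numpy as np

def log_ratio_difference(img1, img2, eps=1e-6):
    """Log-ratio difference image: robust to multiplicative
    intensity changes, a standard baseline in remote-sensing
    change detection."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

def change_mask(diff, threshold):
    """Binary changed/unchanged map from the difference image."""
    return diff > threshold
```

In the L-V-NSCT approach, this plain pixel difference is replaced by comparisons of low- and high-frequency NSCT texture features, which is what lets water boundaries be classified accurately before the change map is extracted.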

Review


16 pages, 738 KiB  
Review
Review on Computer Aided Weld Defect Detection from Radiography Images
by Wenhui Hou, Dashan Zhang, Ye Wei, Jie Guo and Xiaolong Zhang
Appl. Sci. 2020, 10(5), 1878; https://doi.org/10.3390/app10051878 - 10 Mar 2020
Cited by 71 | Viewed by 7722
Abstract
Weld defect inspection from radiography films is critical for assuring the serviceability and safety of weld joints. The various limitations of human interpretation have made the development of innovative computer-aided techniques for automatic detection from radiography images a focus of recent studies. The studies of automatic defect inspection are summarized from three aspects: pre-processing, defect segmentation, and defect classification. The achievements and limitations of traditional defect classification methods based on feature extraction, feature selection, and classifiers are summarized. Then, applications of novel models based on learning (especially deep learning) are introduced. Finally, the achievements of automated methods are discussed, and the challenges of current technology are presented as directions for future research for both weld quality management and computer science researchers. Full article
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information)
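Of the three stages the review covers, defect segmentation is often bootstrapped with a global threshold; Otsu's classic method, a frequent baseline in this literature, picks the gray level that maximizes between-class variance. A plain-NumPy sketch, assuming an 8-bit grayscale radiograph:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the foreground/background split."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    global_mean = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, 0.0
    cum, cum_mean = 0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                 # background weight
        m0 = cum_mean / cum              # background mean
        m1 = (global_mean * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

In practice, the surveyed pipelines precede this with noise reduction and contrast enhancement, and follow it with feature extraction or a learned classifier to label each segmented region as a defect type.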
