Advances in Image Feature Extraction and Selection

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (25 September 2020) | Viewed by 18604

Special Issue Information

Dear Colleagues,

Today, humanity has more data than ever. It is crucial to focus on uncovering actionable data rather than merely interesting data, i.e., to know the difference between interesting data and useful data. Among the most important aspects of machine learning are “feature selection” and “feature extraction”.

A universal problem of intelligent (learning) approaches is deciding where to focus their attention. It is crucial to understand which aspects of the problem at hand are important or necessary to solve it, i.e., to discriminate between the relevant and irrelevant parts of the imaging data.

The problem is that of selecting a subset of a learning algorithm’s input variables upon which it should focus its attention while ignoring the rest; in other words, dimensionality reduction, something that we, as humans, do constantly.
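
To make the idea concrete, the following minimal sketch (an illustrative example, not taken from any contribution in this issue; the dataset and number of components are arbitrary choices) performs dimensionality reduction by feature extraction with principal component analysis, projecting the 64 pixel features of small digit images onto a handful of components.

```python
# Minimal sketch: dimensionality reduction by feature extraction (PCA).
# Dataset and number of components are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)        # 1797 images, 64 pixel features each
pca = PCA(n_components=8).fit(X)           # learn 8 principal components
X_reduced = pca.transform(X)               # project onto the reduced space

print(X_reduced.shape)                                   # (1797, 8)
print("Variance retained:", pca.explained_variance_ratio_.sum().round(3))
```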

Feature selection becomes necessary especially when dealing with high-dimensional data, and this dimensionality reduction can significantly improve a learning algorithm’s performance.

In real-world applications, finding the optimal feature subset is usually not possible: for most problems, it is computationally intractable to search the whole space of possible feature subsets, so one usually has to settle for an approximation of the optimal subset. Most of the research in this area is therefore devoted to finding efficient search heuristics.
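
The sketch below (again an illustrative example, not a method from this issue; the dataset, model, and subset size are arbitrary assumptions) shows one such search heuristic, greedy forward selection, which at each step adds the feature that most improves cross-validated accuracy.

```python
# Minimal sketch of a greedy (sequential forward) search heuristic for
# feature-subset selection; dataset, model and subset size are placeholders.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)           # 64 pixel features per image
model = LogisticRegression(max_iter=2000)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(8):                            # greedily pick 8 features
    scores = {
        f: cross_val_score(model, X[:, selected + [f]], y, cv=3).mean()
        for f in remaining
    }
    best = max(scores, key=scores.get)        # feature with best CV accuracy
    selected.append(best)
    remaining.remove(best)

print("Selected pixel indices:", selected)
```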

The aim of this Special Issue is to present and highlight novel algorithms, architectures, techniques, and applications for image feature extraction and selection.

Dr. Pier Luigi Mazzeo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Object detection and recognition
  • Feature selection
  • Feature extraction
  • Classifier design
  • Machine learning
  • Computer vision
  • Convolutional neural network
  • Content-based image retrieval
  • Principal component analysis
  • Discriminant analysis

Published Papers (5 papers)

Research

28 pages, 7138 KiB  
Article
Task-Driven Learned Hyperspectral Data Reduction Using End-to-End Supervised Deep Learning
by Mathé T. Zeegers, Daniël M. Pelt, Tristan van Leeuwen, Robert van Liere and Kees Joost Batenburg
J. Imaging 2020, 6(12), 132; https://doi.org/10.3390/jimaging6120132 - 2 Dec 2020
Cited by 7 | Viewed by 4650
Abstract
An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is limited by their computational cost. We propose a novel supervised deep learning approach for combining data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. The integration of knowledge about the task allows for greater compression and higher accuracy compared to standard data reduction methods. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
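
As a rough illustration of the end-to-end idea (this is not the authors’ DRCNN; the layer sizes, channel counts, and task head below are assumptions), a learned 1×1 convolution can compress the spectral bins to a few channels and be trained jointly with a small task CNN.

```python
# Rough sketch of end-to-end spectral data reduction: a 1x1 convolution
# compresses the spectral bins to a few channels, and a small task CNN is
# trained jointly, so the reduction is learned for the task.
import torch
import torch.nn as nn

class ReductionCNN(nn.Module):
    def __init__(self, spectral_bins=200, reduced=2, classes=3):
        super().__init__()
        # Learned spectral reduction: per-pixel linear map over the bins.
        self.reduce = nn.Conv2d(spectral_bins, reduced, kernel_size=1)
        # Task network operating on the reduced image (placeholder head).
        self.task = nn.Sequential(
            nn.Conv2d(reduced, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, classes, 1),
        )

    def forward(self, x):                 # x: (batch, bins, H, W)
        return self.task(self.reduce(x))

model = ReductionCNN()
x = torch.randn(1, 200, 64, 64)           # dummy hyperspectral patch
print(model(x).shape)                     # -> torch.Size([1, 3, 64, 64])
```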

15 pages, 5237 KiB  
Article
A Siamese Neural Network for Non-Invasive Baggage Re-Identification
by Pier Luigi Mazzeo, Christian Libetta, Paolo Spagnolo and Cosimo Distante
J. Imaging 2020, 6(11), 126; https://doi.org/10.3390/jimaging6110126 - 20 Nov 2020
Cited by 4 | Viewed by 2680
Abstract
Baggage travelling on a conveyor belt in the sterile area (the rear collector located after the check-in counters) often gets stuck due to traffic jams, mainly caused by incorrect entries from the check-in counters onto the collector belt. Using suitcase appearance captured on the Baggage Handling System (BHS) and at airport checkpoints for re-identification allows baggage to be handled more safely and quickly. In this paper, we propose a Siamese Neural Network-based model that estimates baggage similarity: given a set of training images of the same suitcase (taken in different conditions), the network predicts whether two input images belong to the same baggage identity. The proposed network learns discriminative features in order to measure the similarity between two different images of the same baggage identity, and it can easily be applied on top of different pre-trained backbones. We evaluate our model on a publicly available suitcase dataset, where it outperforms the latest state-of-the-art architecture in terms of accuracy. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
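
The sketch below shows the general Siamese pattern in hedged form (a shared backbone embeds both images and a small head scores whether they show the same bag); the ResNet-18 backbone, absolute-difference fusion, and sigmoid head are assumptions, not necessarily the authors’ exact design.

```python
# Hedged sketch of a Siamese similarity model with a shared backbone.
import torch
import torch.nn as nn
from torchvision import models

class SiameseBaggage(nn.Module):
    def __init__(self, embedding_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # torchvision >= 0.13; any backbone fits
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(embedding_dim, 1), nn.Sigmoid())

    def forward(self, img_a, img_b):
        za, zb = self.backbone(img_a), self.backbone(img_b)
        return self.head(torch.abs(za - zb))       # probability of same identity

model = SiameseBaggage()
a = torch.randn(4, 3, 224, 224)
b = torch.randn(4, 3, 224, 224)
print(model(a, b).shape)                           # -> torch.Size([4, 1])
```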

15 pages, 12348 KiB  
Article
Ensemble of ERDTs for Spectral–Spatial Classification of Hyperspectral Images Using MRS Object-Guided Morphological Profiles
by Alim Samat, Erzhu Li, Sicong Liu, Zelang Miao and Wei Wang
J. Imaging 2020, 6(11), 114; https://doi.org/10.3390/jimaging6110114 - 26 Oct 2020
Cited by 2 | Viewed by 1873
Abstract
In spectral-spatial classification of hyperspectral images, the performance of conventional morphological profiles (MPs), which use a sequence of structural elements (SEs) with predefined sizes and shapes, can be limited by mismatches between those SEs and the sizes and shapes of real-world objects in an image. To overcome this limitation, this paper proposes object-guided morphological profiles (OMPs), which adopt multiresolution segmentation (MRS)-based objects as SEs for morphological closing and opening by geodesic reconstruction. Additionally, the ExtraTrees, bagging, adaptive boosting (AdaBoost), and MultiBoost ensemble versions of extremely randomized decision trees (ERDTs) are introduced and comparatively investigated for spectral-spatial classification of hyperspectral images. Two hyperspectral benchmark images are used to validate the proposed approaches in terms of classification accuracy. The experimental results confirm the effectiveness of the proposed spatial feature extractors and ensemble classifiers. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
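
The sketch below illustrates the building blocks in simplified form: opening and closing by geodesic reconstruction on a single band, stacked as spatial features for an ExtraTrees ensemble. It uses a fixed disk structuring element rather than the paper’s MRS-derived object SEs, and all data are synthetic placeholders.

```python
# Simplified sketch: reconstruction-based opening/closing as spatial features
# plus an ExtraTrees ensemble. Band, SE and labels are synthetic placeholders.
import numpy as np
from skimage.morphology import erosion, dilation, disk, reconstruction
from sklearn.ensemble import ExtraTreesClassifier

band = np.random.rand(100, 100)                  # one hyperspectral band (dummy)
se = disk(3)                                     # fixed SE; the paper uses MRS objects

opening_rec = reconstruction(erosion(band, se), band, method="dilation")
closing_rec = reconstruction(dilation(band, se), band, method="erosion")

features = np.stack([band, opening_rec, closing_rec], axis=-1).reshape(-1, 3)
labels = np.random.randint(0, 4, size=features.shape[0])   # dummy ground truth

clf = ExtraTreesClassifier(n_estimators=200).fit(features, labels)
print(clf.predict(features[:5]))
```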

12 pages, 6013 KiB  
Article
Automatic Recognition of Dendritic Solidification Structures: DenMap
by Bogdan Nenchev, Joel Strickland, Karl Tassenberg, Samuel Perry, Simon Gill and Hongbiao Dong
J. Imaging 2020, 6(4), 19; https://doi.org/10.3390/jimaging6040019 - 3 Apr 2020
Cited by 15 | Viewed by 4350
Abstract
Dendrites are the predominant solidification structures in directionally solidified alloys and control the maximum length scale for segregation. The conventional industrial method for identifying dendrite cores and primary dendrite spacing relies on time-consuming, laborious manual measurement. In this work, we developed a novel image processing and pattern recognition algorithm, DenMap, to identify dendritic cores. A systematic row scan with a specially selected template image is applied over the image of interest via a normalised cross-correlation algorithm. The DenMap algorithm locates the exact dendritic core positions with 98% accuracy for a batch of SEM images of typical as-cast CMSX-4® microstructures, in under 90 s per image. This accuracy is achieved thanks to a sequence of specially selected image pre-processing methods. Coupled with statistical analysis, the model has the potential to gather large quantities of structural data accurately and rapidly, allowing for optimisation and quality control of industrial processes to improve the mechanical and creep performance of materials. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
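
In simplified form, the core operation is template matching by normalised cross-correlation followed by peak detection; the sketch below uses synthetic data and arbitrary thresholds in place of the paper’s template choice and pre-processing chain.

```python
# Hedged sketch of normalised cross-correlation template matching with peak
# detection; image, template and threshold are synthetic placeholders.
import numpy as np
from skimage.feature import match_template, peak_local_max

image = np.random.rand(512, 512)        # stand-in for an SEM micrograph
template = np.random.rand(32, 32)       # stand-in for a dendrite-core template

ncc = match_template(image, template, pad_input=True)   # NCC response map
cores = peak_local_max(ncc, min_distance=20, threshold_abs=0.5)

print(f"Detected {len(cores)} candidate dendrite cores")
```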

13 pages, 1757 KiB  
Article
Multiple Query Content-Based Image Retrieval Using Relevance Feature Weight Learning
by Abeer Al-Mohamade, Ouiem Bchir and Mohamed Maher Ben Ismail
J. Imaging 2020, 6(1), 2; https://doi.org/10.3390/jimaging6010002 - 17 Jan 2020
Cited by 18 | Viewed by 4339
Abstract
We propose a novel multiple query retrieval approach, named weight-learner, which relies on visual feature discrimination to estimate the distances between the query images and images in the database. For each query image, this discrimination consists of learning, in an unsupervised manner, the optimal relevance weight for each visual feature/descriptor. These feature relevance weights are designed to reduce the semantic gap between the extracted visual features and the user’s high-level semantics. We mathematically formulate the proposed solution through the minimization of some objective functions. This optimization aims to produce optimal feature relevance weights with respect to the user query. The proposed approach is assessed using an image collection from the Corel database. Full article
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
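
As a simplified illustration of per-query feature relevance weighting (the weighting rule below is a heuristic assumption, not the objective functions formulated in the paper), distances from several descriptors can be combined with weights that favour descriptors whose nearest neighbours are compact around the query.

```python
# Illustrative sketch of combining per-descriptor distances with relevance
# weights; descriptors, data and the weighting rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
db = {"color": rng.random((500, 64)), "texture": rng.random((500, 32))}
query = {"color": rng.random(64), "texture": rng.random(32)}

# Per-descriptor distances from the query to every database image.
dists = {k: np.linalg.norm(db[k] - query[k], axis=1) for k in db}

# Heuristic relevance weights: tighter nearest-neighbour spread -> higher weight.
spread = np.array([dists[k][np.argsort(dists[k])[:20]].std() for k in dists])
weights = 1.0 / (spread + 1e-9)
weights /= weights.sum()

combined = sum(w * dists[k] for w, k in zip(weights, dists))
print("Top-5 retrieved image indices:", np.argsort(combined)[:5])
```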
