Deep Learning for Semantic Segmentation and Explainable AI Based on Sensing Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 June 2023) | Viewed by 12114

Special Issue Editors


Dr. Saad Bin Ahmed
Guest Editor
Middlesex College, Department of Computer Science, Western University, London, ON N6A 3K7, Canada
Interests: intelligent systems; document image analysis; pattern recognition; machine learning

Dr. Muhammad Imran Malik
Guest Editor
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
Interests: forensics and document image analysis; machine learning; pattern recognition

Special Issue Information

Dear Colleagues,

Semantic segmentation is a key problem in computer vision that enables comprehensive analysis and understanding of a given sample. A growing number of applications treat image understanding as a core problem, especially in the medical field, where knowledge is inferred from imagery data. In recent years, COVID-19 has had an enormous impact on societies and, in particular, on individuals' health. For this reason, researchers have paid particular attention to studies of medical images such as chest X-rays, demonstrating the potential of semantic analysis and interpretation in this field. The interpretation of a medical image explains part of that image and emphasizes the importance of the data it depicts.

Correct interpretation is a desired quality of deep learning models that must be achieved through careful application development: a model earns confidence when it produces correct interpretations. Determining whether an interpretation is correct requires transparency, which has become a fundamental part of deep learning research; the decisions made by deep learning models therefore require explanation. Such explanations are generated from the knowledge inferred by the deep learning architectures themselves, and, regardless of image type, explanation is becoming an integral part of deep learning models. Combined with sensing technology, these studies have opened many avenues of research and application, driven by the availability of low-cost consumer 3D sensors, rapid advances in deep learning, and medical imaging applications.

A deep learning model does not, by itself, reveal how it learns from the given samples; for a reliable decision-making process, it is important to know how a decision is reached through the model's parameters. It is therefore important to understand how deep learning can learn the semantic segmentation of textual and medical images.
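
As a concrete, if simplified, illustration of what such an explanation can look like, the sketch below computes an input-gradient saliency map for a toy segmentation network: the gradient of the foreground score with respect to the input highlights the pixels that most influence the decision. The model, sizes, and data are illustrative assumptions, not a method drawn from this Special Issue.

```python
# Minimal input-gradient saliency sketch for a segmentation model.
# The two-layer network and 64x64 input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(             # stand-in for a real segmentation network
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 2, 3, padding=1), # 2 classes: background / foreground
)

x = torch.rand(1, 1, 64, 64, requires_grad=True)  # e.g. a chest X-ray patch
logits = model(x)                                 # (1, 2, 64, 64) class scores
logits[:, 1].sum().backward()                     # backpropagate foreground evidence

saliency = x.grad.abs().squeeze()  # (64, 64): high values = influential pixels
print(saliency.shape, float(saliency.max()))
```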

This Special Issue focuses on deep learning models for the semantic analysis and interpretation of images. Submitted papers are expected to discuss problem conceptualization, data representation, feature analysis, deep learning models, comparisons with existing work, and substantive interpretation of results.

Dr. Saad Bin Ahmed
Dr. Muhammad Imran Malik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • deep neural networks
  • semantic analysis
  • image segmentation
  • medical image
  • image classification
  • text analysis
  • pattern recognition
  • natural language processing
  • image interpretation
  • sensors for computer vision
  • smart sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

16 pages, 3908 KiB  
Article
The NWRD Dataset: An Open-Source Annotated Segmentation Dataset of Diseased Wheat Crop
by Hirra Anwar, Saad Ullah Khan, Muhammad Mohsin Ghaffar, Muhammad Fayyaz, Muhammad Jawad Khan, Christian Weis, Norbert Wehn and Faisal Shafait
Sensors 2023, 23(15), 6942; https://doi.org/10.3390/s23156942 - 4 Aug 2023
Cited by 1 | Viewed by 2857
Abstract
Wheat stripe rust disease (WRD) is extremely detrimental to wheat crop health, and it severely affects the crop yield, increasing the risk of food insecurity. Manual inspection by trained personnel is carried out to inspect the disease spread and extent of damage to wheat fields. However, this is quite inefficient, time-consuming, and laborious, owing to the large area of wheat plantations. Artificial intelligence (AI) and deep learning (DL) offer efficient and accurate solutions to such real-world problems. By analyzing large amounts of data, AI algorithms can identify patterns that are difficult for humans to detect, enabling early disease detection and prevention. However, deep learning models are data-driven, and scarcity of data related to specific crop diseases is one major hindrance in developing models. To overcome this limitation, in this work, we introduce an annotated real-world semantic segmentation dataset named the NUST Wheat Rust Disease (NWRD) dataset. Multileaf images from wheat fields under various illumination conditions with complex backgrounds were collected, preprocessed, and manually annotated to construct a segmentation dataset specific to wheat stripe rust disease. Classification of WRD into different types and categories is a task that has been solved in the literature; however, semantic segmentation of wheat crops to identify the specific areas of plants and leaves affected by the disease remains a challenge. For this reason, in this work, we target semantic segmentation of WRD to estimate the extent of disease spread in wheat fields. Sections of fields where the disease is prevalent need to be segmented to ensure that the sick plants are quarantined and remedial actions are taken. This will consequently limit the use of harmful fungicides only on the targeted disease area instead of the majority of wheat fields, promoting environmentally friendly and sustainable farming solutions. Owing to the complexity of the proposed NWRD segmentation dataset, in our experiments, promising results were obtained using the UNet semantic segmentation model and the proposed adaptive patching with feedback (APF) technique, which produced a precision of 0.506, recall of 0.624, and F1 score of 0.557 for the rust class.
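
For readers who want to reproduce the kind of per-class figures reported above, here is a minimal sketch of pixel-wise precision, recall, and F1 for a single class such as rust. It assumes integer label masks and is not the paper's evaluation code.

```python
import numpy as np

def prf1(pred: np.ndarray, gt: np.ndarray, cls: int = 1):
    """Pixel-wise precision/recall/F1 for one class from integer label masks."""
    tp = np.sum((pred == cls) & (gt == cls))   # correctly labelled class pixels
    fp = np.sum((pred == cls) & (gt != cls))   # false alarms
    fn = np.sum((pred != cls) & (gt == cls))   # missed class pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred = np.array([[1, 0], [1, 1]])
gt = np.array([[1, 1], [0, 1]])
print(prf1(pred, gt))  # (0.667, 0.667, 0.667) on this toy pair
```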

11 pages, 658 KiB  
Article
Adapting Static and Contextual Representations for Policy Gradient-Based Summarization
by Ching-Sheng Lin, Jung-Sing Jwo and Cheng-Hsiung Lee
Sensors 2023, 23(9), 4513; https://doi.org/10.3390/s23094513 - 5 May 2023
Cited by 1 | Viewed by 1316
Abstract
Considering the ever-growing volume of electronic documents made available in our daily lives, the need for an efficient tool to capture their gist increases as well. Automatic text summarization, which is a process of shortening long text and extracting valuable information, has been of great interest for decades. Due to the difficulties of semantic understanding and the requirement of large training data, the development of this research field is still challenging and worth investigating. In this paper, we propose an automated text summarization approach with the adaptation of static and contextual representations based on an extractive approach to address the research gaps. To better obtain the semantic expression of the given text, we explore the combination of static embeddings from GloVe (Global Vectors) and the contextual embeddings from BERT (Bidirectional Encoder Representations from Transformer) and GPT (Generative Pre-trained Transformer) based models. In order to reduce human annotation costs, we employ policy gradient reinforcement learning to perform unsupervised training. We conduct empirical studies on the public dataset, Gigaword. The experimental results show that our approach achieves promising performance and is competitive with various state-of-the-art approaches.
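
The general static-plus-contextual idea can be sketched as follows, with toy stand-ins for the GloVe table and the BERT/GPT encoders; the paper's actual fusion and training details are not specified in the abstract.

```python
# Combining a static (GloVe-style) sentence vector with a contextual one
# before scoring sentences for extraction. All components are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
static_table = {w: rng.normal(size=50) for w in ["wheat", "rust", "spreads"]}

def static_vec(sentence):                 # mean of word vectors, GloVe-style
    vecs = [static_table[w] for w in sentence.split() if w in static_table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

def contextual_vec(sentence):             # stand-in for a BERT/GPT encoder
    return rng.normal(size=768)

def sentence_repr(sentence):              # simple fusion: concatenation
    return np.concatenate([static_vec(sentence), contextual_vec(sentence)])

print(sentence_repr("wheat rust spreads").shape)  # (818,)
```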

14 pages, 17554 KiB  
Article
DEHA-Net: A Dual-Encoder-Based Hard Attention Network with an Adaptive ROI Mechanism for Lung Nodule Segmentation
by Muhammad Usman and Yeong-Gil Shin
Sensors 2023, 23(4), 1989; https://doi.org/10.3390/s23041989 - 10 Feb 2023
Cited by 10 | Viewed by 2151
Abstract
Measuring pulmonary nodules accurately can help the early diagnosis of lung cancer, which can increase the survival rate among patients. Numerous techniques for lung nodule segmentation have been developed; however, most of them either rely on the 3D volumetric region of interest (VOI) input by radiologists or use the 2D fixed region of interest (ROI) for all the slices of computed tomography (CT) scan. These methods only consider the presence of nodules within the given VOI, which limits the networks’ ability to detect nodules outside the VOI and can also encompass unnecessary structures in the VOI, leading to potentially inaccurate segmentation. In this work, we propose a novel approach for 3D lung nodule segmentation that utilizes the 2D region of interest (ROI) inputted from a radiologist or computer-aided detection (CADe) system. Concretely, we developed a two-stage lung nodule segmentation technique. Firstly, we designed a dual-encoder-based hard attention network (DEHA-Net) in which the full axial slice of thoracic computed tomography (CT) scan, along with an ROI mask, were considered as input to segment the lung nodule in the given slice. The output of DEHA-Net, the segmentation mask of the lung nodule, was inputted to the adaptive region of interest (A-ROI) algorithm to automatically generate the ROI masks for the surrounding slices, which eliminated the need for any further inputs from radiologists. After extracting the segmentation along the axial axis, at the second stage, we further investigated the lung nodule along sagittal and coronal views by employing DEHA-Net. All the estimated masks were inputted into the consensus module to obtain the final volumetric segmentation of the nodule. The proposed scheme was rigorously evaluated on the lung image database consortium and image database resource initiative (LIDC/IDRI) dataset, and an extensive analysis of the results was performed. The quantitative analysis showed that the proposed method not only improved the existing state-of-the-art methods in terms of dice score but also showed significant robustness against different types, shapes, and dimensions of the lung nodules. The proposed framework achieved the average dice score, sensitivity, and positive predictive value of 87.91%, 90.84%, and 89.56%, respectively.
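
The adaptive-ROI step described above can be sketched as follows: the mask predicted on one slice defines, via a padded bounding box, the ROI for the neighbouring slice, and propagation stops when the network returns an empty mask. Here `segment_slice` is a hypothetical placeholder for DEHA-Net, and the margin and stopping rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def mask_to_roi(mask: np.ndarray, margin: int = 8):
    """Bounding-box ROI around the foreground of a mask, padded by a margin."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                        # empty mask: stop propagating
    roi = np.zeros_like(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, mask.shape[0] - 1)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, mask.shape[1] - 1)
    roi[y0:y1 + 1, x0:x1 + 1] = 1
    return roi

def propagate(volume, start, roi, segment_slice):
    """Segment slice `start`, then walk outward reusing each mask as the next ROI."""
    masks = {}
    for step in (1, -1):                   # toward later, then earlier slices
        z, cur_roi = start, roi
        while 0 <= z < volume.shape[0] and cur_roi is not None:
            masks[z] = segment_slice(volume[z], cur_roi)  # placeholder network call
            cur_roi = mask_to_roi(masks[z])
            z += step
    return masks

vol = np.zeros((5, 64, 64), dtype=np.uint8)          # placeholder CT volume
seed = np.zeros((64, 64), dtype=np.uint8)
seed[20:30, 20:30] = 1                               # initial radiologist ROI
fake_net = lambda img, roi: roi                      # stand-in: echoes the ROI
print(sorted(propagate(vol, 2, seed, fake_net)))     # [0, 1, 2, 3, 4]
```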

20 pages, 3931 KiB  
Article
Point Cloud Deep Learning Network Based on Balanced Sampling and Hybrid Pooling
by Chunyuan Deng, Zhenyun Peng, Zhencheng Chen and Ruixing Chen
Sensors 2023, 23(2), 981; https://doi.org/10.3390/s23020981 - 14 Jan 2023
Cited by 5 | Viewed by 2004
Abstract
The automatic semantic segmentation of point cloud data is important for applications in the fields of machine vision, virtual reality, and smart cities. The processing capability of the point cloud segmentation method with PointNet++ as the baseline needs to be improved for extremely imbalanced point cloud scenes. To address this problem, in this study, we designed a weighted sampling method based on farthest point sampling (FPS), which adjusts the sampling weight value according to the loss value of the model to equalize the sampling process. We also introduced the relational learning of the neighborhood space of the sampling center point in the feature encoding process, where the feature importance is distinguished by using a self-attention model. Finally, the global–local features were aggregated and transmitted using the hybrid pooling method. The experimental results of the six-fold crossover experiment showed that on the S3DIS semantic segmentation dataset, the proposed network achieved 9.5% and 11.6% improvement in overall point-wise accuracy (OA) and mean of class-wise intersection over union (MIoU), respectively, compared with the baseline. On the Vaihingen dataset, the proposed network achieved 4.2% and 3.9% improvement in OA and MIoU, respectively, compared with the baseline. Compared with the segmentation results of other network models on public datasets, our algorithm achieves a good balance between OA and MIoU.
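
The loss-weighted sampling idea can be sketched as a variant of farthest point sampling in which each candidate's distance is scaled by a per-point weight, so that under-represented classes are picked more often. The exact loss-to-weight mapping is not given in the abstract; the scaling rule below is an assumption.

```python
import numpy as np

def weighted_fps(points: np.ndarray, weights: np.ndarray, k: int):
    """Select k indices from (N, 3) points, biased by positive per-point weights."""
    n = points.shape[0]
    chosen = [int(np.argmax(weights))]          # start from the heaviest point
    dist = np.full(n, np.inf)                   # distance to nearest chosen point
    for _ in range(k - 1):
        diff = points - points[chosen[-1]]      # offsets to the newest pick
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        chosen.append(int(np.argmax(dist * weights)))
    return np.array(chosen)

pts = np.random.default_rng(1).random((1000, 3))
w = np.ones(1000)
w[:100] = 4.0                                   # e.g. rare-class points upweighted
print(weighted_fps(pts, w, 16))
```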

Review

29 pages, 1678 KiB  
Review
Analysis of Hyperspectral Data to Develop an Approach for Document Images
by Zainab Zaman, Saad Bin Ahmed and Muhammad Imran Malik
Sensors 2023, 23(15), 6845; https://doi.org/10.3390/s23156845 - 1 Aug 2023
Cited by 10 | Viewed by 2684
Abstract
Hyperspectral data analysis is being utilized as an effective and compelling tool for image processing, providing unprecedented levels of information and insights for various applications. In this manuscript, we have compiled and presented a comprehensive overview of recent advances in hyperspectral data analysis that can provide assistance for the development of customized techniques for hyperspectral document images. We review the fundamental concepts of hyperspectral imaging, discuss various techniques for data acquisition, and examine state-of-the-art approaches to the preprocessing, feature extraction, and classification of hyperspectral data by taking into consideration the complexities of document images. We also explore the possibility of utilizing hyperspectral imaging for addressing critical challenges in document analysis, including document forgery, ink age estimation, and text extraction from degraded or damaged documents. Finally, we discuss the current limitations of hyperspectral imaging and identify future research directions in this rapidly evolving field. Our review provides a valuable resource for researchers and practitioners working on document image processing and highlights the potential of hyperspectral imaging for addressing complex challenges in this domain.
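
As a pointer to the kind of processing the review surveys, the sketch below performs the most basic hyperspectral-document operation: treating each pixel as a spectrum (one value per band) and classifying it by its nearest reference spectrum. The cube, band count, and reference regions are placeholders; real pipelines add the preprocessing, feature extraction, and classifiers the review discusses.

```python
import numpy as np

rng = np.random.default_rng(2)
cube = rng.random((128, 128, 31))           # H x W x bands, placeholder scan

# Reference spectra, e.g. sampled from regions labelled "paper" and "ink".
paper_ref = cube[:10, :10].reshape(-1, 31).mean(axis=0)
ink_ref = cube[-10:, -10:].reshape(-1, 31).mean(axis=0)

pixels = cube.reshape(-1, 31)               # one spectrum per pixel
d_paper = np.linalg.norm(pixels - paper_ref, axis=1)
d_ink = np.linalg.norm(pixels - ink_ref, axis=1)
label_map = (d_ink < d_paper).reshape(128, 128)  # True where "ink" is closer
print(label_map.mean())                     # fraction of pixels labelled ink
```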
