Application of Semantic Technologies in Sensors and Sensing Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 111043

Special Issue Editors

Dr. Chang Choi
Department of Computer Engineering, Gachon University, Seongnam, Republic of Korea
Interests: intelligent information processing; recommendation systems; Semantic Web and ontology

Dr. Kiho Lim
Department of Computer Science, William Paterson University, Wayne, NJ 07470, USA
Interests: vehicular communication; network security; Internet of Things; machine learning

Dr. Gyuho Choi
Department of Artificial Intelligence Engineering, Chosun University, Gwangju, Republic of Korea
Interests: biometrics; signal processing; bio-signals; pattern recognition; deep learning

Special Issue Information

Dear Colleagues,

For sensors and networks in various environments, semantic technologies are attractive tools for enabling interoperability and composability through high-level information abstraction. To enable automated, high-level smart applications, intelligent sensors and sensing systems should be integrated seamlessly, trustworthily, and securely. Semantic metadata can support these goals by providing contextual information, allowing machine learning to process sensor data and achieve interoperability. Since the careful management and administration of digital resources are prerequisites for knowledge discovery and innovation, the processing of raw sensory data is becoming increasingly important. In addition, various applications that use semantic technology are needed to improve the performance of sensing systems and enhance the quality of our lives.

As the availability of information from a wide variety of domains increases, semantic technologies have received significant attention and support from various areas. This Special Issue aims to present high-quality research on recent semantic technologies in sensors and sensing systems that can be applied to various application domains. We welcome submissions from any area that touches on a subset of the aforementioned aspects of sensors, sensing systems, and related domains. We also invite submissions that apply novel approaches and present significant advances over the state of the art in semantic technologies.

Topics relevant to this Special Issue include, but are not limited to, the following:

  • Cognitive modeling and the Semantic Web;
  • Semantic modeling;
  • Semantic reasoning aided by machine learning;
  • Semantic-based big data mining;
  • Semantics of information;
  • IoT ontologies;
  • Optimization of semantic technologies;
  • Service provision optimization through semantic technologies;
  • Standardization activities on semantic technologies;
  • Applications, deployment, testbeds, and experiments in sensing systems;
  • Decision guidance and support systems;
  • Decision-making in sensor and sensing systems;
  • Data privacy and system security using semantics;
  • Reinforcement learning for sensing systems;
  • Communication-inspired machine learning;
  • Sensing and localization;
  • AI-driven semantic applications;
  • Machine learning techniques for sensing systems;
  • Semantics in connected and autonomous vehicles.

Dr. Chang Choi
Dr. Kiho Lim
Dr. Gyuho Choi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • semantics
  • sensor
  • sensing systems
  • machine learning
  • AI
  • smart security

Published Papers (37 papers)


Research


16 pages, 2386 KiB  
Article
PCTC-Net: A Crack Segmentation Network with Parallel Dual Encoder Network Fusing Pre-Conv-Based Transformers and Convolutional Neural Networks
by Ji-Hwan Moon, Gyuho Choi, Yu-Hwan Kim and Won-Yeol Kim
Sensors 2024, 24(5), 1467; https://doi.org/10.3390/s24051467 - 24 Feb 2024
Viewed by 611
Abstract
Cracks are common defects that occur on the surfaces of objects and structures. Crack detection is a critical maintenance task that traditionally requires manual labor. Large-scale manual inspections are expensive. Research has been conducted to replace expensive human labor with cheaper computing resources. Recently, crack segmentation based on convolutional neural networks (CNNs) and transformers has been actively investigated for local and global information. However, the transformer is data-intensive owing to its weak inductive bias. Existing labeled datasets for crack segmentation are relatively small. Additionally, a limited amount of fine-grained crack data is available. To address this data-intensive problem, we propose a parallel dual encoder network fusing Pre-Conv-based Transformers and convolutional neural networks (PCTC-Net). The Pre-Conv module automatically optimizes each color channel with a small spatial kernel before the input of the transformer. The proposed model, PCTC-Net, was tested with the DeepCrack, Crack500, and Crackseg9k datasets. The experimental results showed that our model achieved higher generalization performance, stability, and F1 scores than the SOTA model DTrC-Net. Full article
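
A minimal PyTorch sketch of the Pre-Conv idea described above (each color channel filtered independently with a small spatial kernel before the transformer branch) is shown below; the module name and kernel size are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PreConv(nn.Module):
    """Sketch: per-channel spatial filtering applied before a transformer encoder."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # groups=3 keeps the R, G, and B channels independent of one another
        self.conv = nn.Conv2d(3, 3, kernel_size, padding=kernel_size // 2, groups=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)  # shape preserved: (B, 3, H, W)
```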

15 pages, 1934 KiB  
Article
Anomaly Detection in Time Series Data Using Reversible Instance Normalized Anomaly Transformer
by Ranjai Baidya and Heon Jeong
Sensors 2023, 23(22), 9272; https://doi.org/10.3390/s23229272 - 19 Nov 2023
Viewed by 1383
Abstract
Anomalies are infrequent in nature, but detecting these anomalies could be crucial for the proper functioning of any system. The rarity of anomalies could be a challenge for their detection as detection models are required to depend on the relations of the datapoints with their adjacent datapoints. In this work, we use the rarity of anomalies to detect them. For this, we introduce the reversible instance normalized anomaly transformer (RINAT). Rooted in the foundational principles of the anomaly transformer, RINAT incorporates both prior and series associations for each time point. The prior association uses a learnable Gaussian kernel to ensure a thorough understanding of the adjacent concentration inductive bias. In contrast, the series association method uses self-attention techniques to specifically focus on the original raw data. Furthermore, because anomalies are rare in nature, we utilize normalized data to identify series associations and employ non-normalized data to uncover prior associations. This approach enhances the modelled series associations and, consequently, improves the association discrepancies. Full article
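
The reversible instance normalization that gives RINAT its name can be sketched generically as follows; tensor shapes and function names are assumptions, not the authors' implementation.

```python
import torch

def instance_normalize(x: torch.Tensor, eps: float = 1e-5):
    """Sketch: normalize each series instance over time, keeping the
    statistics so that the transform can be inverted later."""
    mean = x.mean(dim=1, keepdim=True)           # per instance and feature
    std = x.std(dim=1, keepdim=True) + eps
    return (x - mean) / std, (mean, std)

def instance_denormalize(x_norm: torch.Tensor, stats):
    mean, std = stats
    return x_norm * std + mean

# Per the abstract: normalized data feed the series-association branch,
# while raw (non-normalized) data feed the prior-association branch.
batch = torch.randn(8, 100, 25)                  # (instances, time, features)
normed, stats = instance_normalize(batch)
```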

14 pages, 4415 KiB  
Article
Enhancing Industrial Communication with Ethernet/Internet Protocol: A Study and Analysis of Real-Time Cooperative Robot Communication and Automation via Transmission Control Protocol/Internet Protocol
by JuYong Seong, Rahul Ranjan, Joongeup Kye, Seungjae Lee and Sungchul Lee
Sensors 2023, 23(20), 8580; https://doi.org/10.3390/s23208580 - 19 Oct 2023
Cited by 1 | Viewed by 1265
Abstract
This study explores the important task of validating data exchange between a control box, a Programmable Logic Controller (PLC), and a robot in an industrial setting. To achieve this, we adopt a unique approach utilizing both a virtual PLC simulator and an actual PLC device. We introduce an innovative industrial communication module to facilitate the efficient collection and storage of data among these interconnected entities. The main aim of this inquiry is to examine the implementation of EtherNet/IP (EIP), a relatively new addition to the industrial networking landscape, which was designed around ODVA's Common Industrial Protocol (CIP™). The custom real-time data communication module was programmed in C++ for the Debian Linux platform and demonstrates the versatility of EIP as a means of effective data transfer in an industrial environment. The study's findings provide valuable insights into EtherNet/IP's functionalities and capabilities in industrial networks, bringing attention to its possible applications in industrial robotics. By connecting theoretical knowledge and practical implementation, this research makes a significant contribution to the continued development of industrial communication systems, ultimately improving the efficiency and effectiveness of automation processes. Full article
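
For readers unfamiliar with the transport involved, the sketch below shows the bare TCP exchange that such a module builds on (EtherNet/IP explicit messaging conventionally uses registered TCP port 44818); the host address and framing are hypothetical, and the CIP encapsulation itself is elided.

```python
import socket

# Transport-level sketch only, not a CIP implementation.
PLC_HOST, PLC_PORT = "192.168.0.10", 44818       # hypothetical device address

def exchange(request_frame: bytes, timeout: float = 2.0) -> bytes:
    """Send one raw request frame to the PLC and return its response bytes."""
    with socket.create_connection((PLC_HOST, PLC_PORT), timeout=timeout) as sock:
        sock.sendall(request_frame)              # a CIP-encapsulated request goes here
        return sock.recv(4096)
```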

14 pages, 4140 KiB  
Article
Visual-Based Children and Pet Rescue from Suffocation and Incidence of Hyperthermia Death in Enclosed Vehicles
by Mona M. Moussa, Rasha Shoitan, Young-Im Cho and Mohamed S. Abdallah
Sensors 2023, 23(16), 7025; https://doi.org/10.3390/s23167025 - 08 Aug 2023
Viewed by 1456
Abstract
Over the past several years, many children have died from suffocation after being left inside a closed vehicle on a sunny day. Vehicle manufacturers have proposed a variety of technologies to locate an unattended child in a vehicle, including pressure sensors, passive infrared motion sensors, temperature sensors, and microwave sensors. However, these methods have not yet reliably located forgotten children in vehicles. Recently, visual-based methods have attracted the attention of manufacturers following the emergence of deep learning technology. However, the existing methods focus only on a forgotten child and neglect a forgotten pet. Furthermore, their systems only detect the presence of a child in the car with or without their parents. Therefore, this research introduces a visual-based framework to reduce hyperthermia deaths in enclosed vehicles. This visual-based system detects objects inside a vehicle; if a child or pet is present without an adult, a notification is sent to the parents. First, a dataset is constructed for vehicle interiors containing children, pets, and adults. The proposed dataset is collected from different online sources, considering varying illumination, skin color, pet type, clothing, and car brands to guarantee model robustness. Second, blurring, sharpening, brightness, contrast, noise, perspective transform, and fog effect augmentation algorithms are applied to these images to increase the training data. The augmented images are annotated with three classes: child, pet, and adult. This research concentrates on fine-tuning different state-of-the-art real-time detection models to detect objects inside the vehicle: NanoDet, YOLOv6_1, YOLOv6_3, and YOLOv7. The simulation results demonstrate that YOLOv6_1 performs best, with 96% recall, 95% precision, and a 95% F1 score. Full article
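
The augmentation step listed above could plausibly be realized with the albumentations library as sketched below; the transform choices and probabilities are assumptions, not the authors' pipeline.

```python
import albumentations as A

# Sketch of an augmentation pipeline matching the list in the abstract:
# blur, sharpen, brightness/contrast, noise, perspective transform, and fog.
augment = A.Compose(
    [
        A.Blur(p=0.2),
        A.Sharpen(p=0.2),
        A.RandomBrightnessContrast(p=0.3),
        A.GaussNoise(p=0.2),
        A.Perspective(p=0.2),
        A.RandomFog(p=0.1),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
# Usage: augment(image=img, bboxes=boxes, class_labels=labels)
```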

19 pages, 3659 KiB  
Article
Enhancing Speech Emotion Recognition Using Dual Feature Extraction Encoders
by Ilkhomjon Pulatov, Rashid Oteniyazov, Fazliddin Makhmudov and Young-Im Cho
Sensors 2023, 23(14), 6640; https://doi.org/10.3390/s23146640 - 24 Jul 2023
Cited by 1 | Viewed by 1961
Abstract
Understanding and identifying emotional cues in human speech is a crucial aspect of human–computer communication. The application of computer technology in dissecting and deciphering emotions, along with the extraction of relevant emotional characteristics from speech, forms a significant part of this process. The objective of this study was to design an innovative framework for speech emotion recognition based on spectrograms and semantic feature transcribers, aiming to improve precision by acknowledging and rectifying conspicuous shortcomings in existing methodologies. To obtain valuable attributes for speech detection, this investigation leveraged two divergent strategies. First, a fully convolutional neural network model was used to transcribe speech spectrograms. Subsequently, a Mel-frequency cepstral coefficient (MFCC) feature extraction approach was adopted and integrated with Speech2Vec for semantic feature encoding. These dual forms of attributes underwent individual processing before being channeled into a long short-term memory network and a fully connected layer for supplementary representation. In doing so, we aimed to strengthen the sophistication and efficacy of our speech emotion detection model and enhance its potential to accurately recognize and interpret emotion in human speech. The proposed mechanism underwent a rigorous evaluation process employing two distinct databases: RAVDESS and EMO-DB. The outcome displayed superior performance when compared with established models, registering an impressive accuracy of 94.8% on the RAVDESS dataset and a commendable 94.0% on the EMO-DB dataset. This superior performance underscores the efficacy of our system in the realm of speech emotion recognition, as it outperforms current frameworks in accuracy metrics. Full article
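
The two front-end feature types named in the abstract can be extracted with librosa roughly as follows; the file path and parameter values are hypothetical.

```python
import librosa

# Load a speech clip and compute both feature types: a mel spectrogram for the
# convolutional branch and MFCCs for the semantic (Speech2Vec) branch.
signal, sr = librosa.load("speech_sample.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=signal, sr=sr)      # (n_mels, n_frames)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)    # (40, n_frames)
```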

14 pages, 1621 KiB  
Article
Applying Enhanced Real-Time Monitoring and Counting Method for Effective Traffic Management in Tashkent
by Alpamis Kutlimuratov, Jamshid Khamzaev, Temur Kuchkorov, Muhammad Shahid Anwar and Ahyoung Choi
Sensors 2023, 23(11), 5007; https://doi.org/10.3390/s23115007 - 23 May 2023
Cited by 5 | Viewed by 1648
Abstract
This study describes an applied and enhanced real-time vehicle-counting system that is an integral part of intelligent transportation systems. The primary objective of this study was to develop an accurate and reliable real-time system for vehicle counting to mitigate traffic congestion in a designated area. The proposed system can identify and track objects inside the region of interest and count detected vehicles. To enhance the accuracy of the system, we used the You Only Look Once version 5 (YOLOv5) model for vehicle identification owing to its high performance and short computing time. Vehicles were tracked with the DeepSORT algorithm, whose main components are the Kalman filter and the Mahalanobis distance, and counted using the proposed simulated-loop technique. Empirical results were obtained using video images taken from a closed-circuit television (CCTV) camera on Tashkent roads and show that the counting system can achieve 98.1% accuracy in 0.2408 s. Full article
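
The Mahalanobis-distance component of DeepSORT mentioned above amounts to a gating test of the following form; this is a generic sketch, with the chi-square threshold taken from the standard DeepSORT formulation rather than from this paper.

```python
import numpy as np

CHI2_95_4DOF = 9.4877  # 95% chi-square quantile for a 4-dimensional measurement

def mahalanobis_gate(pred_mean, pred_cov, detection):
    """Sketch: accept a detection for a track only if its squared Mahalanobis
    distance to the Kalman-predicted measurement is below the threshold."""
    d = np.asarray(detection) - np.asarray(pred_mean)
    dist2 = float(d @ np.linalg.inv(pred_cov) @ d)
    return dist2, dist2 < CHI2_95_4DOF
```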

18 pages, 5471 KiB  
Article
EFFNet-CA: An Efficient Driver Distraction Detection Based on Multiscale Features Extractions and Channel Attention Mechanism
by Taimoor Khan, Gyuho Choi and Sokjoon Lee
Sensors 2023, 23(8), 3835; https://doi.org/10.3390/s23083835 - 08 Apr 2023
Cited by 5 | Viewed by 2500
Abstract
Driver distraction is considered a main cause of road accidents; every year, thousands of people sustain serious injuries, and many of them lose their lives. In addition, road accidents continue to increase due to driver distractions such as talking, drinking, and using electronic devices, among others. Several researchers have developed different traditional deep learning techniques for the efficient detection of driver activity. However, current studies need further improvement due to the high number of false predictions in real time. To cope with these issues, it is important to develop an effective technique that detects a driver's behavior in real time to protect human lives and property. In this work, we develop a convolutional neural network (CNN)-based technique with an integrated channel attention (CA) mechanism for the efficient and effective detection of driver behavior. Moreover, we compared the proposed model with plain and CA-integrated variants of various backbone models, namely VGG16, VGG16+CA, ResNet50, ResNet50+CA, Xception, Xception+CA, InceptionV3, InceptionV3+CA, and EfficientNetB0. The proposed model obtained optimal performance in terms of evaluation metrics such as accuracy, precision, recall, and F1 score on two well-known datasets, AUC Distracted Driver (AUCD2) and State Farm Distracted Driver Detection (SFD3), achieving 99.58% accuracy on SFD3 and 98.97% accuracy on AUCD2. Full article
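
A channel attention block of the general squeeze-and-excitation kind referred to above can be sketched in PyTorch as follows; the exact CA design used in EFFNet-CA may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Sketch of a squeeze-and-excitation style channel attention block."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # reweight the feature maps
```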

12 pages, 3127 KiB  
Article
Real-Time Context-Aware Recommendation System for Tourism
by JunHo Yoon and Chang Choi
Sensors 2023, 23(7), 3679; https://doi.org/10.3390/s23073679 - 02 Apr 2023
Cited by 3 | Viewed by 3043
Abstract
Recently, the tourism trend has been shifting towards the Tourism 2.0 paradigm due to increased travel experience and the growth of information acquisition and sharing through the Internet. The Tourism 2.0 paradigm requires the development of intelligent tourism service tools for positive effects such as time savings and marketing utilization. Existing tourism service tools recommend tourist destinations based on the relationship between tourists and destinations or on tourism patterns, so it is difficult to make recommendations in situations where information is insufficient or changes in real time. In this paper, we propose a real-time recommendation system for tourism (R2Tour) that responds to changing situations, such as external factors and distance information, in real time and recommends customized tourist destinations according to the type of tourist. R2Tour trains a machine learning model on situational information, such as temperature and precipitation, and tourist profiles, such as gender and age, to recommend the top five nearby tourist destinations. To verify the recommendation performance of R2Tour, six machine learning models, including K-NN and SVM, and information on tourist attractions on Jeju Island were used. In the experiments, R2Tour achieved an accuracy of 77.3%, a micro-F1 of 0.773, and a macro-F1 of 0.415. Since R2Tour learns tourism patterns based on situational information, it can recommend new tourist destinations and respond to changing situations in real time. In the future, R2Tour could be installed in vehicles to recommend nearby tourist destinations or extended to other tasks in the tourism industry, such as smart targeted advertising. Full article
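
The contextual k-NN idea can be sketched with scikit-learn as below; the feature layout and data are synthetic illustrations, not the R2Tour training set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features per visit: temperature (°C), precipitation (mm),
# gender (0/1), age. Labels are destination categories.
X = np.array([[23.0, 0.0, 0, 34], [8.5, 4.2, 1, 61], [27.1, 0.5, 0, 25]])
y = np.array(["beach", "museum", "beach"])

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[22.0, 0.0, 1, 30]]))  # -> ['beach'] for this toy data
```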

20 pages, 3033 KiB  
Article
Evaluating Synthetic Medical Images Using Artificial Intelligence with the GAN Algorithm
by Akmalbek Bobomirzaevich Abdusalomov, Rashid Nasimov, Nigorakhon Nasimova, Bahodir Muminov and Taeg Keun Whangbo
Sensors 2023, 23(7), 3440; https://doi.org/10.3390/s23073440 - 24 Mar 2023
Cited by 9 | Viewed by 2787
Abstract
In recent years, considerable work has been conducted on the development of synthetic medical images, but there are no satisfactory methods for evaluating their medical suitability. Existing methods mainly evaluate the quality of noise in the images and the similarity of the images to the real images used to generate them. For this purpose, they use feature maps of images extracted in different ways, or the distribution of the image set. The proximity of the synthetic images to the real set is then evaluated using different distance metrics. However, it is not possible to determine whether only one synthetic image was generated repeatedly, or whether the synthetic set exactly repeats the training set. In addition, most evaluation metrics take a long time to calculate. Taking these issues into account, we propose a method that can evaluate synthetic images both quantitatively and qualitatively. This method is a combination of two approaches, namely FMD- and CNN-based evaluation methods. The estimation methods were compared with the FID method, and it was found that the FMD method has a great advantage in terms of speed, while the CNN method can estimate more accurately. To evaluate the reliability of the methods, a dataset of different real images was checked. Full article
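
FMD- and FID-style metrics share the Fréchet distance between feature distributions, which can be computed as sketched below; this is the standard formula, not the authors' specific FMD code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Sketch: compare two (n_samples, n_features) feature sets via their
    means and covariances, as FID/FMD-style metrics do."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2).real        # matrix square root
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))
```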

18 pages, 1501 KiB  
Article
Predicting the Output of Solar Photovoltaic Panels in the Absence of Weather Data Using Only the Power Output of the Neighbouring Sites
by Heon Jeong
Sensors 2023, 23(7), 3399; https://doi.org/10.3390/s23073399 - 23 Mar 2023
Viewed by 1529
Abstract
There is an increasing need for capable models to forecast the output of solar photovoltaic (PV) panels. These models are vital for optimizing the performance and maintenance of PV systems. There is also a shortage of studies on forecasting the output power of solar photovoltaic sites in the absence of meteorological data. Unlike common methods, this study explores numerous machine learning algorithms for forecasting the output of solar photovoltaic panels without weather data such as temperature, humidity, and wind speed, which are often used when forecasting the output of solar PV panels. The considered models include Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), and Transformer. These models were used with data collected from 50 different solar photovoltaic sites in South Korea, consisting of readings of each site's output collected at regular intervals. This study focuses on obtaining multistep forecasts in the multi-in multi-out, multi-in uni-out, and uni-in uni-out settings. Detailed experimentation was carried out in each of these settings. Finally, for each of these settings and for different lookback and forecast lengths, the best models were identified. Full article
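
The supervised windowing that such multistep forecasts rely on can be sketched as follows; the lookback and horizon values are illustrative.

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int = 48, horizon: int = 12):
    """Sketch: turn a power-output series into (lookback -> horizon) supervised
    pairs for multistep forecasting with LSTM/GRU/RNN/Transformer baselines.
    `series` has shape (time,) or (time, n_sites)."""
    X, Y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        Y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(Y)
```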

18 pages, 7948 KiB  
Article
A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments
by Saydirasulov Norkobil Saydirasulovich, Akmalbek Abdusalomov, Muhammad Kafeel Jamil, Rashid Nasimov, Dinara Kozhamzharova and Young-Im Cho
Sensors 2023, 23(6), 3161; https://doi.org/10.3390/s23063161 - 16 Mar 2023
Cited by 23 | Viewed by 4404
Abstract
Authorities and policymakers in Korea have recently prioritized improving fire prevention and emergency response. Governments seek to enhance community safety for residents by constructing automated fire detection and identification systems. This study examined the efficacy of YOLOv6, an object identification system running on an NVIDIA GPU platform, in identifying fire-related items. Using metrics such as object identification speed and accuracy, as well as time-sensitive real-world applications, we analyzed the influence of YOLOv6 on fire detection and identification efforts in Korea. We conducted trials using a fire dataset comprising 4000 photos collected through Google, YouTube, and other resources to evaluate the viability of YOLOv6 in fire recognition and detection tasks. According to the findings, YOLOv6's object identification performance was 0.98, with a typical recall of 0.96 and a precision of 0.83. The system achieved an MAE of 0.302%. These findings suggest that YOLOv6 is an effective technique for detecting and identifying fire-related items in photos in Korea. Multi-class object recognition using random forests, k-nearest neighbors, support vector machines, logistic regression, naive Bayes, and XGBoost was performed on the SFSC data to evaluate the system's capacity to identify fire-related objects. The results demonstrate that for fire-related objects, XGBoost achieved the highest object identification accuracy, with values of 0.717 and 0.767, followed by random forest, with values of 0.468 and 0.510. Finally, we tested YOLOv6 in a simulated fire evacuation scenario to gauge its practicality in emergencies. The results show that YOLOv6 can accurately identify fire-related items in real time within a response time of 0.66 s. Therefore, YOLOv6 is a viable option for fire detection and recognition in Korea. Full article

17 pages, 1464 KiB  
Article
Modeling Topics in DFA-Based Lemmatized Gujarati Text
by Uttam Chauhan, Shrusti Shah, Dharati Shiroya, Dipti Solanki, Zeel Patel, Jitendra Bhatia, Sudeep Tanwar, Ravi Sharma, Verdes Marina and Maria Simona Raboaca
Sensors 2023, 23(5), 2708; https://doi.org/10.3390/s23052708 - 01 Mar 2023
Cited by 1 | Viewed by 1597
Abstract
Topic modeling is a statistical machine learning technique that uses unsupervised learning to map a high-dimensional corpus onto a low-dimensional topical subspace, although there is room for improvement. A topic model's topic is expected to be interpretable as a concept, i.e., to correspond to a human understanding of a topic occurring in texts. While discovering corpus themes, inference constantly uses the vocabulary, whose size impacts topic quality. Since words that frequently appear in the same sentence are likely to share a latent topic, practically all topic models rely on co-occurrence signals between various terms in the corpus. In languages with extensive inflectional morphology, the abundance of distinct tokens weakens the topics, and lemmatization is often used to preempt this problem. Gujarati is a morphologically rich language in which a word may have several inflectional forms. This paper proposes a deterministic finite automaton (DFA)-based lemmatization technique for the Gujarati language that reduces inflected word forms to their root words. A set of topics is then inferred from this lemmatized corpus of Gujarati text. We employ statistical divergence measurements to identify semantically less coherent (overly general) topics. The results show that the lemmatized Gujarati corpus yields more interpretable and meaningful topics than unlemmatized text. Finally, the results show that lemmatization reduces the vocabulary size by 16% and improves semantic coherence on all three measures (log conditional probability, pointwise mutual information, and normalized pointwise mutual information) from −9.39 to −7.49, −6.79 to −5.18, and −0.23 to −0.17, respectively. Full article
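
Of the three coherence measures reported, pointwise mutual information over document co-occurrence can be sketched as follows; the smoothing constant is an assumption.

```python
import math

def pmi(word_a: str, word_b: str, doc_sets, eps: float = 1e-12) -> float:
    """Sketch: PMI of two words over document co-occurrence.
    `doc_sets` is a list of sets of tokens, one set per document."""
    n = len(doc_sets)
    pa = sum(word_a in d for d in doc_sets) / n
    pb = sum(word_b in d for d in doc_sets) / n
    pab = sum(word_a in d and word_b in d for d in doc_sets) / n
    return math.log((pab + eps) / (pa * pb + eps))
```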

13 pages, 645 KiB  
Article
A Time Synchronization Protocol for Barrage Relay Networks
by Woong Son, Jungwook Choi, Soobum Park, Howon Lee and Bang Chul Jung
Sensors 2023, 23(5), 2447; https://doi.org/10.3390/s23052447 - 22 Feb 2023
Cited by 1 | Viewed by 1931
Abstract
Time-division multiple access (TDMA)-based medium access control (MAC) protocols have been widely used to avoid access conflicts in wireless multi-hop ad hoc networks, where time synchronization among wireless nodes is essential. In this paper, we propose a novel time synchronization protocol for TDMA-based cooperative multi-hop wireless ad hoc networks, also called barrage relay networks (BRNs). The proposed protocol is based on cooperative relay transmissions of time synchronization messages. We also propose a network time reference (NTR) selection technique to improve the convergence time and average time error. In the proposed NTR selection technique, each node overhears the user identifier (UID) of other nodes, the hop count (HC) from them to itself, and the network degree, which denotes the number of 1-hop neighbor nodes. The node with the minimum HC from all other nodes is then selected as the NTR node; if multiple nodes share the minimum HC, the node with the larger degree is selected. To the best of our knowledge, this paper introduces a time synchronization protocol with NTR selection for cooperative (barrage) relay networks for the first time. Through computer simulations, we validate the proposed protocol in terms of the average time error under various practical network scenarios and compare its performance with conventional time synchronization methods. The proposed protocol significantly outperforms the conventional methods in terms of average time error and convergence time, and is also more robust against packet loss. Full article
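
One reading of the NTR selection rule described above is sketched below; the data layout is hypothetical, and ties on hop count are broken by the larger degree, as the abstract states.

```python
def select_ntr(nodes: dict) -> str:
    """Sketch: pick the node with the minimum total hop count to all other
    nodes; on a tie, prefer the larger network degree (1-hop neighbours).
    `nodes` maps a UID to {'hops': summed hop count, 'degree': int}."""
    return min(nodes, key=lambda uid: (nodes[uid]["hops"], -nodes[uid]["degree"]))

nodes = {"A": {"hops": 7, "degree": 3}, "B": {"hops": 5, "degree": 2},
         "C": {"hops": 5, "degree": 4}}
print(select_ntr(nodes))  # -> 'C' (ties with B on hops; larger degree wins)
```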

19 pages, 2027 KiB  
Article
Improved Cattle Disease Diagnosis Based on Fuzzy Logic Algorithms
by Dilmurod Turimov Mustapoevich, Dilnoz Muhamediyeva Tulkunovna, Lola Safarova Ulmasovna, Holida Primova and Wooseong Kim
Sensors 2023, 23(4), 2107; https://doi.org/10.3390/s23042107 - 13 Feb 2023
Cited by 10 | Viewed by 2528
Abstract
Cattle illnesses can significantly impact the health and productivity of animals, as well as farmers' financial well-being. Accurate and timely diagnosis is therefore essential for effective disease management and control. In this study, we consider the development of models and algorithms for diagnosing diseases in cattle based on Sugeno fuzzy inference. To achieve this goal, we performed an analytical review of mathematical methods for diagnosing animal diseases and of soft computing methods for solving classification problems. Based on the clinical signs of diseases, we propose an algorithm for building a knowledge base to diagnose diseases in cattle; this algorithm serves to increase the reliability of informative features. Based on the proposed algorithm, a program for diagnosing diseases in cattle was developed, and a computational experiment was performed. The results of the computational experiment provide additional decision-making tools for diagnosing disease in cattle. Using the developed program, a Sugeno fuzzy logic model was built for diagnosing diseases in cattle, and the adequacy of its results was analyzed. We also consider several existing (model) classification and evaluation problems and compare the results with those of several existing algorithms. The results obtained make it possible to promptly diagnose disease and perform appropriate therapeutic measures, as well as to reduce data analysis time and increase the efficiency of diagnosing cattle. The scientific novelty of this study is the creation of an algorithm for building a knowledge base and the improvement of the algorithm for constructing the Sugeno fuzzy logic model for diagnosing diseases in cattle. The findings of this study can be widely used in veterinary medicine for diagnosing diseases in cattle and substantiating decision-making in intelligent systems. Full article
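
A zero-order Sugeno inference step of the kind the study builds on can be sketched as follows; the clinical signs, membership parameters, and rule consequents are invented for illustration.

```python
import numpy as np

def gauss(x: float, c: float, s: float) -> float:
    """Gaussian membership function with centre c and spread s."""
    return float(np.exp(-0.5 * ((x - c) / s) ** 2))

def sugeno(temp: float, pulse: float) -> float:
    """Sketch: the crisp output is the firing-strength-weighted average of
    the rule consequents (zero-order Sugeno)."""
    rules = [
        (gauss(temp, 38.5, 0.5) * gauss(pulse, 60, 10), 0.1),  # healthy -> low risk
        (gauss(temp, 40.5, 0.7) * gauss(pulse, 90, 12), 0.9),  # feverish -> high risk
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float((w * z).sum() / w.sum())

print(round(sugeno(40.2, 85), 3))  # ~0.9, close to the high-risk consequent
```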

17 pages, 2955 KiB  
Article
User Preference-Based Video Synopsis Using Person Appearance and Motion Descriptions
by Rasha Shoitan, Mona M. Moussa, Sawsan Morkos Gharghory, Heba A. Elnemr, Young-Im Cho and Mohamed S. Abdallah
Sensors 2023, 23(3), 1521; https://doi.org/10.3390/s23031521 - 30 Jan 2023
Cited by 1 | Viewed by 1468
Abstract
During the last decade, surveillance cameras have spread quickly, and their spread is predicted to accelerate in the coming years. Therefore, effectively browsing and analyzing the vast amounts of surveillance video they create is vital in surveillance applications. Recently, a video synopsis approach was proposed to reduce surveillance video duration by rearranging objects so that they are presented within a shorter period of time. However, generating a synopsis covering all the persons in a video is not effective for crowded scenes. Various clustering and user-defined query methods have been introduced to generate a video synopsis according to general descriptions such as color, size, class, and motion. This work presents a user-defined query synopsis video based on motion descriptions and specific visual appearance features such as gender, age, carrying something, having a baby buggy, and upper and lower clothing color. The proposed method assists the camera operator in retrieving people who meet certain appearance constraints and people who enter a predefined area or move in a specific direction, generating a video that includes a suspected person with specific features. After retrieving the persons, a whale optimization algorithm is applied to arrange them while preserving chronological order, reducing collisions, and ensuring a short synopsis video. In the evaluation of the proposed work, precision, recall, and F1 scores for the retrieval process range from 83% to 100%; for the video synopsis process, the synopsis video length is reduced by 68% to 93.2% relative to the original video, and 78.6% to 100% of interacting tube pairs are preserved in the synopsis video. Full article

19 pages, 14299 KiB  
Article
An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach
by Akmalbek Bobomirzaevich Abdusalomov, Bappy MD Siful Islam, Rashid Nasimov, Mukhriddin Mukhiddinov and Taeg Keun Whangbo
Sensors 2023, 23(3), 1512; https://doi.org/10.3390/s23031512 - 29 Jan 2023
Cited by 36 | Viewed by 5501
Abstract
With the increase in both global warming and the human population, forest fires have become a major global concern. They can lead to climatic shifts and the greenhouse effect, among other adverse outcomes. Surprisingly, human activities have caused a disproportionate number of forest fires. Fast detection with high accuracy is key to controlling such unexpected events. To address this, we propose an improved forest fire detection method that classifies fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset of 5200 images was created and labeled for training the model, which achieved higher precision than the other models considered. This robust result was achieved by improving the Detectron2 model in various experimental scenarios with the custom dataset. The proposed model can detect small fires over long distances during the day and at night; long-distance detection of the object of interest is an advantage of the Detectron2 algorithm. The experimental results showed that the proposed forest fire detection method successfully detected fires with an improved precision of 99.3%. Full article

22 pages, 5689 KiB  
Article
Development of Language Models for Continuous Uzbek Speech Recognition System
by Abdinabi Mukhamadiyev, Mukhriddin Mukhiddinov, Ilyos Khujayarov, Mannon Ochilov and Jinsoo Cho
Sensors 2023, 23(3), 1145; https://doi.org/10.3390/s23031145 - 19 Jan 2023
Cited by 7 | Viewed by 3056
Abstract
Automatic speech recognition systems with a large vocabulary and other natural language processing applications cannot operate without a language model. Most studies on pre-trained language models have focused on more popular languages such as English, Chinese, and various European languages, and there is no publicly available Uzbek speech dataset. Therefore, language models for low-resource languages need to be studied and created. The objective of this study is to address this limitation by developing a low-resource language model for the Uzbek language and understanding its linguistic characteristics. We propose an Uzbek language model named UzLM, examining the performance of statistical and neural-network-based language models that account for the unique features of the Uzbek language. Our Uzbek-specific linguistic representation allows us to construct a more robust UzLM, utilizing 80 million words from various sources while using the same number of or fewer training words than previous studies. Roughly sixty-eight thousand distinct words and 15 million sentences were collected to create this corpus. The experimental results of our tests on the continuous recognition of Uzbek speech show that, compared with manual encoding, the use of neural-network-based language models reduced the character error rate to 5.26%. Full article
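
The character error rate reported above is the Levenshtein edit distance normalized by the reference length, as sketched below.

```python
def cer(ref: str, hyp: str) -> float:
    """Sketch: character error rate via Levenshtein edit distance between a
    reference transcript and a recognizer hypothesis."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)
```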

23 pages, 5670 KiB  
Article
Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
by Mukhriddin Mukhiddinov, Oybek Djuraev, Farkhod Akhmedov, Abdinabi Mukhamadiyev and Jinsoo Cho
Sensors 2023, 23(3), 1080; https://doi.org/10.3390/s23031080 - 17 Jan 2023
Cited by 26 | Viewed by 6733
Abstract
Current artificial intelligence systems for determining a person's emotions rely heavily on lip and mouth movement and other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically classified incorrectly because of the dark regions around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and feature analysis of the upper face with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expression and 420,299 images. Initially, the lower part of the facial input image is covered with a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper features of the face. Secondly, we adopt a feature extraction strategy based on facial landmark detection, using the features of the partially covered masked face. Finally, the extracted features, the coordinates of the identified landmarks, and histograms of oriented gradients are incorporated into the classification procedure using a convolutional neural network. An experimental evaluation shows that the proposed method surpasses others, achieving an accuracy of 69.3% on the AffectNet dataset. Full article
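
The histogram-of-oriented-gradients features mentioned above can be extracted with scikit-image roughly as follows; the file path, crop, and parameter values are hypothetical.

```python
from skimage import color, io
from skimage.feature import hog

# Compute HOG descriptors over the upper half of a face image, the visible
# region (eyes, eyebrows, forehead) when the lower face is masked.
img = color.rgb2gray(io.imread("masked_face.png"))
upper = img[: img.shape[0] // 2]
features = hog(upper, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
```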

16 pages, 6361 KiB  
Article
Improved Face Detection Method via Learning Small Faces on Hard Images Based on a Deep Learning Approach
by Dilnoza Mamieva, Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov and Taeg Keun Whangbo
Sensors 2023, 23(1), 502; https://doi.org/10.3390/s23010502 - 02 Jan 2023
Cited by 23 | Viewed by 5695
Abstract
Most facial recognition and face analysis systems start with facial detection. Early techniques, such as Haar cascades and histograms of oriented gradients, mainly rely on features that were manually designed from particular images; however, these techniques cannot generalize to images taken in unconstrained conditions. Meanwhile, the rapid development of deep learning in computer vision has sped up the development of a number of deep learning-based face detection frameworks, many of which have significantly improved in accuracy in recent years. The difficulty of detecting small, scaled, shifted, occluded, blurred, and partially occluded faces in uncontrolled conditions is a face detection problem that has been explored for many years but has not yet been entirely resolved. In this paper, we propose a RetinaNet-based single-stage face detector to handle this challenging problem. We made network improvements that boosted detection speed and accuracy. In our experiments, we used two popular datasets, WIDER FACE and FDDB. Specifically, on the WIDER FACE benchmark, our proposed method achieves an AP of 41.0 at a speed of 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, which are competitive results among one-stage detectors. We trained our model using the PyTorch framework, which provided an accuracy of 95.6% for the successfully detected faces. The experimental results show that our proposed model outperforms comparable approaches on the detection and recognition performance evaluation metrics used. Full article

14 pages, 2815 KiB  
Article
Feature Selection Method Using Multi-Agent Reinforcement Learning Based on Guide Agents
by Minwoo Kim, Jinhee Bae, Bohyun Wang, Hansol Ko and Joon S. Lim
Sensors 2023, 23(1), 98; https://doi.org/10.3390/s23010098 - 22 Dec 2022
Cited by 5 | Viewed by 1955
Abstract
In this study, we propose a method that automatically finds features in a dataset that are effective for classification or prediction, using multi-agent reinforcement learning with guide agents. Each feature of the dataset is assigned a main agent and a guide agent, and these agents decide whether to select the feature. Main agents select the optimal features, while guide agents provide the criteria for judging the main agents' actions. After the main and guide rewards are obtained for the features selected by the agents, each main agent that behaves differently from its guide agent updates its Q-values using the learning reward delivered to the main agents. This behavior comparison helps a main agent decide whether its own behavior is correct, without using other algorithms. After performing this process for each episode, the features are finally selected. The proposed feature selection method uses multiple agents, reducing the number of actions each agent can perform and finding optimal features effectively and quickly. Finally, comparative experimental results on multiple datasets show that the proposed method can select effective features for classification and increase classification accuracy. Full article
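
A loose sketch of the per-feature Q-value update described above follows; the reward shaping, learning rate, and names are assumptions, since the abstract does not fully specify them.

```python
import numpy as np

n_features, alpha = 8, 0.1
q_main = np.zeros((n_features, 2))   # Q-values for actions {0: skip, 1: select}

def update(feature: int, action: int, main_reward: float,
           guide_action: int, penalty: float = 0.05) -> None:
    """Sketch: nudge the main agent's Q-value toward its reward, penalizing
    behavior that diverges from the guide agent's action."""
    r = main_reward - (penalty if action != guide_action else 0.0)
    q_main[feature, action] += alpha * (r - q_main[feature, action])
```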

20 pages, 8768 KiB  
Article
Light-Weight Deep Learning Techniques with Advanced Processing for Real-Time Hand Gesture Recognition
by Mohamed S. Abdallah, Gerges H. Samaan, Abanoub R. Wadie, Fazliddin Makhmudov and Young-Im Cho
Sensors 2023, 23(1), 2; https://doi.org/10.3390/s23010002 - 20 Dec 2022
Cited by 7 | Viewed by 3477
Abstract
In the discipline of hand gesture and dynamic sign language recognition, deep learning approaches with high computational complexity and a wide range of parameters have achieved remarkable success. However, the implementation of sign language recognition applications on mobile phones with restricted storage and computing capacities is usually greatly constrained by those limited resources. In light of this situation, we suggest lightweight deep neural networks with advanced processing for real-time dynamic sign language recognition (DSLR). This paper presents a DSLR application to minimize the gap between hearing-impaired communities and the rest of society. The DSLR application was developed using two robust deep learning models, a GRU and a 1D CNN, combined with the MediaPipe framework. In this paper, the authors implement advanced processing to solve most DSLR problems, especially in real-time detection, e.g., differences in depth and location. The solution method consists of three main parts. First, the input dataset is preprocessed with our algorithm to standardize the number of frames. Then, the MediaPipe framework extracts hand and pose landmarks (features) to detect and locate them. Finally, after processing to unify the depth and location of the body, the features are passed to the models to recognize the DSL accurately. To accomplish this, the authors built a new American video-based sign dataset named DSL-46, which contains 46 signs in daily use, recorded with all the necessary details and properties. The results of the experiments show that the presented solution method can recognize dynamic signs extremely quickly and accurately, even in real-time detection. The DSLR application reaches an accuracy of 98.8%, 99.84%, and 88.40% on the DSL-46, LSA64, and LIBRAS-BSL datasets, respectively. Full article
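
The MediaPipe landmark stream that feeds such GRU/1D-CNN models can be obtained roughly as follows; the video path is hypothetical, and only the hand branch is shown.

```python
import cv2
import mediapipe as mp

# Extract per-frame hand landmarks (21 points per hand) from a sign clip.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)
cap = cv2.VideoCapture("sign_clip.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        for hand in result.multi_hand_landmarks:
            coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]  # feature vector
cap.release()
hands.close()
```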

15 pages, 3029 KiB  
Article
Improved Agricultural Field Segmentation in Satellite Imagery Using TL-ResUNet Architecture
by Furkat Safarov, Kuchkorov Temurbek, Djumanov Jamoljon, Ochilov Temur, Jean Chamberlain Chedjou, Akmalbek Bobomirzaevich Abdusalomov and Young-Im Cho
Sensors 2022, 22(24), 9784; https://doi.org/10.3390/s22249784 - 13 Dec 2022
Cited by 20 | Viewed by 2930
Abstract
Currently, the population is growing around the world, particularly in developing countries, where food security is becoming a major problem. Therefore, agricultural land monitoring, land use classification and analysis, and achieving high yields through efficient land use are important research topics in precision agriculture. Deep learning-based algorithms for the classification of satellite images provide more reliable and accurate results than traditional classification algorithms. In this study, we propose a transfer learning-based residual UNet architecture (TL-ResUNet), a semantic segmentation deep neural network for land cover classification and segmentation using satellite images. The proposed model combines the strengths of residual networks, transfer learning, and the UNet architecture. We tested the model on public datasets such as DeepGlobe, and the results showed that our proposed model outperforms classic models initialized with random weights and pre-trained ImageNet coefficients. The TL-ResUNet model also outperforms other models on several metrics commonly used to measure accuracy and performance in semantic segmentation tasks. In particular, we obtained an IoU score of 0.81 on the validation subset of the DeepGlobe dataset. Full article
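
The IoU score reported above is computed per class and averaged, as in the following sketch.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int) -> float:
    """Sketch: mean intersection-over-union for semantic segmentation.
    `pred` and `target` are integer class maps of the same shape."""
    scores = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            scores.append(inter / union)
    return float(np.mean(scores))
```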

20 pages, 7577 KiB  
Article
Computer State Evaluation Using Adaptive Neuro-Fuzzy Inference Systems
by Abror Buriboev and Azamjon Muminov
Sensors 2022, 22(23), 9502; https://doi.org/10.3390/s22239502 - 05 Dec 2022
Viewed by 1649
Abstract
Several crucial system design and deployment decisions, including workload management, sizing, capacity planning, and dynamic rule generation in dynamic systems such as computers, depend on the predictive analysis of resource consumption. Analyzing the utilization of a computer's components and their workloads is the best way to assess the performance of the computer's state. In particular, analyzing the influence of individual components, or of all components together, on another component gives more reliable information about the state of computer systems. Many evaluation techniques have been proposed by researchers, but most involve complicated metrics and parameters, such as utilization, time, throughput, latency, delay, speed, frequency, and percentages, which are difficult to understand and use in the assessment process. Accordingly, we propose a simplified evaluation method that uses component utilization on a percentage scale together with its linguistic values. The adaptive neuro-fuzzy inference system (ANFIS) model and fuzzy set theory offer excellent prospects for realizing usage-impact analyses. The purpose of this study is to examine the impact of memory, cache, storage, and bus usage on CPU performance using Sugeno-type and Mamdani-type ANFIS models to determine the state of the computer system. The suggested method is founded on monitoring the behavior of computer components. The developed method can be applied to all kinds of computing systems, such as personal computers, mainframes, and supercomputers, since the inference engine of the proposed ANFIS model requires only the behavior data of the computer's own components; the number of inputs can also be enriched according to the type of computer (for instance, for cloud computers, the number of connected clients and the network quality can be used as additional input parameters). The models present linguistic and quantitative results, which are convenient for understanding performance issues, identifying specific bottlenecks, and determining the relationships between components. Full article
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
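As a toy illustration of the kind of rule evaluation such a model performs, the sketch below runs one zero-order Sugeno inference step over component utilizations on a 0-100% scale, with "low/medium/high" as the linguistic values. The membership centers, widths, and rule consequents are hand-picked placeholders, not the trained ANFIS parameters from the paper.

```python
# Toy zero-order Sugeno fuzzy inference over component utilizations (0-100%).
import numpy as np

def gauss(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def mu(util):
    # Fuzzify one utilization into (low, medium, high) membership degrees.
    return np.array([gauss(util, 10, 15), gauss(util, 50, 15), gauss(util, 90, 15)])

def cpu_state(mem_util, bus_util):
    m, b = mu(mem_util), mu(bus_util)
    # Three illustrative rules; firing strengths use the product t-norm.
    w = np.array([
        m[2] * b[2],   # memory high AND bus high -> overloaded (1.0)
        m[1] * b[1],   # memory med  AND bus med  -> normal     (0.5)
        m[0] * b[0],   # memory low  AND bus low  -> idle       (0.1)
    ])
    z = np.array([1.0, 0.5, 0.1])       # crisp consequent per rule
    return float((w @ z) / w.sum())     # weighted-average defuzzification

print(cpu_state(85, 78))  # closer to 1.0 => state tends toward "overloaded"
```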

14 pages, 2480 KiB  
Article
Enhanced Classification of Dog Activities with Quaternion-Based Fusion Approach on High-Dimensional Raw Data from Wearable Sensors
by Azamjon Muminov, Mukhriddin Mukhiddinov and Jinsoo Cho
Sensors 2022, 22(23), 9471; https://doi.org/10.3390/s22239471 - 04 Dec 2022
Cited by 3 | Viewed by 1701
Abstract
Applying machine learning algorithms to the data provided by wearable movement sensors is one of the most common ways to detect pets' behaviors and monitor their well-being. However, defining features that lead to highly accurate behavior classification is quite challenging. To address this problem, in this study we classify six main dog activities (standing, walking, running, sitting, lying down, and resting) using high-dimensional raw sensor data. Data were received from the accelerometer and gyroscope sensors designed to be attached to the dog's smart costume. Once data are received, the module computes a quaternion value for each data point, which provides useful features for classification. We then performed the classification with several supervised machine learning algorithms: Gaussian naïve Bayes (GNB), decision tree (DT), K-nearest neighbor (KNN), and support vector machine (SVM). To evaluate performance, we compared the F-scores of the proposed approach with those of the classic approach, in which the sensor data are collected without computing quaternion values and used directly by the model. Overall, 18 dogs equipped with harnesses participated in the experiment. The results show significantly enhanced classification with the proposed approach. Among all the classifiers, the GNB model achieved the highest accuracy for dog behavior, classifying the six behaviors with F-scores of 0.94, 0.86, 0.94, 0.89, 0.95, and 1, respectively. Moreover, the GNB classifier achieved 93% accuracy on average with the quaternion-valued dataset, compared with only 88% when the model used the raw sensor data.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
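A minimal sketch of the pipeline idea follows, assuming simple gyroscope-rate integration as the per-sample quaternion computation (the paper's exact fusion module is not reproduced): orientation quaternions are derived for each sample and fed to a Gaussian naive Bayes classifier. The data below are random stand-ins for a real sensor stream.

```python
# Sketch: gyro rates -> orientation quaternions -> GNB classification.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def gyro_to_quats(gyro, dt=0.01):
    """Integrate angular rates (rad/s, shape (N, 3)) into unit quaternions."""
    q, out = np.array([1.0, 0.0, 0.0, 0.0]), []
    for w in gyro:
        dq = 0.5 * quat_mul(q, np.array([0.0, *w])) * dt
        q = q + dq
        q /= np.linalg.norm(q)   # keep the quaternion normalized
        out.append(q.copy())
    return np.array(out)

rng = np.random.default_rng(0)
gyro = rng.normal(size=(600, 3))         # stand-in for the sensor stream
X = gyro_to_quats(gyro)                  # (600, 4) quaternion features
y = rng.integers(0, 6, size=600)         # 6 dummy activity labels
clf = GaussianNB().fit(X[:500], y[:500])
print(clf.score(X[500:], y[500:]))
```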

14 pages, 6426 KiB  
Article
An Entropy Analysis-Based Window Size Optimization Scheme for Merging LiDAR Data Frames
by Taesik Kim, Jinman Jung, Hong Min and Young-Hoon Jung
Sensors 2022, 22(23), 9293; https://doi.org/10.3390/s22239293 - 29 Nov 2022
Cited by 1 | Viewed by 1463
Abstract
LiDAR is a useful technology for gathering point cloud data from the environment and has been adopted in many applications. We use a cost-efficient LiDAR system attached to a moving object to estimate the object's location from referenced linear structures. With a low-cost LiDAR in the stationary state, however, the accuracy of extracting linear structures is low. We propose a scheme that merges LiDAR data frames to improve this accuracy by exploiting the object's movement. The proposed scheme finds the optimal window size by means of an entropy analysis: the optimal size is the one that minimizes the gap between the entropy indicator of the ideal result and that of the actual result at each window size. The proposed indicator describes the accuracy over the entire path of the moving object at each window size with a single value. The experimental results show that the proposed scheme improves linear structure extraction accuracy.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
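The sketch below illustrates the window-size selection idea: merge consecutive frames for each candidate window size, compute an entropy indicator of the merged point distribution, and pick the window whose indicator is closest to that of an ideal reference. Shannon entropy over a 2-D occupancy histogram is an assumed stand-in for the paper's indicator, and the data are synthetic.

```python
# Sketch: entropy-based selection of a frame-merging window size.
import numpy as np

def entropy_indicator(points, bins=32):
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_window(frames, ideal_h, candidates=(2, 4, 8, 16)):
    gaps = {}
    for w in candidates:
        merged = [np.vstack(frames[i:i + w])
                  for i in range(0, len(frames) - w + 1, w)]
        h = np.mean([entropy_indicator(m) for m in merged])
        gaps[w] = abs(h - ideal_h)        # distance to the ideal indicator
    return min(gaps, key=gaps.get)

rng = np.random.default_rng(1)
# Synthetic frames from a slowly moving sensor (2-D slices for simplicity).
frames = [rng.normal(scale=0.1, size=(200, 2)) + i * 0.05 for i in range(32)]
print(best_window(frames, ideal_h=6.0))
```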

19 pages, 1109 KiB  
Article
Automatic Classification Service System for Citrus Pest Recognition Based on Deep Learning
by Saebom Lee, Gyuho Choi, Hyun-Cheol Park and Chang Choi
Sensors 2022, 22(22), 8911; https://doi.org/10.3390/s22228911 - 18 Nov 2022
Cited by 5 | Viewed by 3063
Abstract
Plant diseases are a major cause of reduced agricultural output, leading to severe economic losses and an unstable food supply. The citrus plant is an economically important fruit crop grown and produced worldwide. However, citrus plants are easily affected by various factors, such as climate change, pests, and diseases, resulting in reduced yield and quality. Advances in computer vision in recent years have been widely applied to plant disease detection and classification, creating opportunities for early disease detection and improvements in agriculture. In particular, early and accurate detection of citrus diseases, to which the crop is vulnerable through pests, is very important for preventing the spread of pests and reducing crop damage. Research on citrus pests and diseases is ongoing, but applying its results to cultivation remains difficult owing to a lack of research datasets and the limited range of pest types covered. In this study, we built a dataset by self-collecting a total of 20,000 citrus pest images, including fruits and leaves, from actual cultivation sites. The constructed dataset was used to train, validate, and test five transfer learning models. All models in the experiment achieved an average accuracy of 97% or more and an average F1 score of 96% or more. We then built a web application server using the EfficientNet-b0 model, which exhibited the best performance among the five models. The web application was tested on citrus pest disease image samples collected from websites, in addition to the self-collected samples and prepared data, and it correctly classified the disease in both cases. The citrus pest automatic diagnosis web system built on the proposed model plays a useful auxiliary role in recognizing and classifying citrus diseases, which can in turn help improve the overall quality of citrus fruits.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
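A hedged sketch of the classification backbone follows: an ImageNet-pretrained EfficientNet-B0 with its classifier head replaced for N pest/disease classes, matching the paper's best-performing model family. The class count and the freeze-the-backbone fine-tuning policy are placeholders, not the authors' exact training recipe.

```python
# Sketch: transfer learning with EfficientNet-B0 from torchvision.
import torch.nn as nn
from torchvision import models

def build_citrus_classifier(n_classes: int):
    net = models.efficientnet_b0(
        weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():     # freeze backbone; tune head only
        p.requires_grad = False
    in_feats = net.classifier[1].in_features   # 1280 for EfficientNet-B0
    net.classifier[1] = nn.Linear(in_feats, n_classes)
    return net

model = build_citrus_classifier(n_classes=8)   # 8 is an assumed class count
```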

12 pages, 2992 KiB  
Article
Zoom-In Neural Network Deep-Learning Model for Alzheimer’s Disease Assessments
by Bohyun Wang and Joon S. Lim
Sensors 2022, 22(22), 8887; https://doi.org/10.3390/s22228887 - 17 Nov 2022
Cited by 2 | Viewed by 1499
Abstract
Deep neural networks have been successfully applied to generate predictive patterns from medical and diagnostic data. This paper presents an approach for assessing persons with Alzheimer's disease (AD) and mild cognitive impairment (MCI), compared with normal control (NC) persons, using the zoom-in neural network (ZNN) deep-learning algorithm. The ZNN stacks a set of zoom-in learning units (ZLUs) in a feedforward hierarchy without backpropagation. The resting-state fMRI (rs-fMRI) dataset for the AD assessments was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The Automated Anatomical Labeling (AAL-90) atlas, which provides 90 neuroanatomical functional regions, was used to assess and detect the regions implicated in the course of AD. The ZNN features are extracted from the 140-timepoint rs-fMRI time series of voxel values in each brain region. Using the seven most discriminative regions of interest (ROIs) in the AAL-90, the ZNN yields classification accuracies for AD versus MCI and NC, NC versus AD and MCI, and MCI versus AD and NC of 97.7%, 84.8%, and 72.7%, respectively.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
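For orientation, the snippet below sketches only the feature-extraction step: pulling per-region rs-fMRI time series (timepoints x 90 AAL regions) with nilearn, the kind of input the ZNN consumes. File paths are placeholders, and the ZNN itself (stacked zoom-in learning units without backpropagation) is not reproduced here.

```python
# Sketch: extract per-ROI rs-fMRI time series with an AAL-style atlas.
from nilearn.maskers import NiftiLabelsMasker

masker = NiftiLabelsMasker(labels_img="AAL90_atlas.nii.gz",  # placeholder path
                           standardize=True)
series = masker.fit_transform("subject_rsfmri.nii.gz")       # (timepoints, 90)
roi_features = series.T   # one ~140-step time series per neuroanatomical ROI
print(roi_features.shape)
```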

18 pages, 6580 KiB  
Article
Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces
by Akhmedov Farkhod, Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov and Young-Im Cho
Sensors 2022, 22(22), 8704; https://doi.org/10.3390/s22228704 - 11 Nov 2022
Cited by 25 | Viewed by 3510
Abstract
Owing to the wide range of emotion recognition applications in our lives, such as mental status assessment, the demand for high-performance emotion recognition approaches remains high. At the same time, the wearing of facial masks became indispensable during the COVID-19 pandemic, hiding much of the face. In this study, we propose a graph-based emotion recognition method that uses landmarks on the upper part of the face. Several pre-processing steps were applied, after which facial expression features were extracted from facial key points. The main steps of emotion recognition on masked faces are face detection using Haar cascades, landmark placement through the MediaPipe face mesh model, and model training on seven emotional classes. The FER-2013 dataset was used for model training. An emotion detection model was first developed for non-masked faces; landmarks were then restricted to the upper part of the face. After faces were detected and landmark locations extracted, we captured the coordinates of the emotion-class landmarks and exported them to a comma-separated values (CSV) file, after which the model weights were transferred to the emotional classes. Finally, the landmark-based emotion recognition model for the upper face was tested both on images and in real time using a web-camera application. The results show that the proposed model achieved an overall accuracy of 91.2% across the seven emotional classes on images; image-based emotion detection was somewhat more accurate than real-time detection.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
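The sketch below shows the landmark-extraction step with MediaPipe Face Mesh, keeping upper-face points and appending them to a CSV row, as the abstract describes. The index set chosen for the "upper face" and the file paths are illustrative assumptions, not the paper's exact list.

```python
# Sketch: upper-face landmark extraction with MediaPipe Face Mesh -> CSV.
import cv2
import mediapipe as mp

# Assumed subset of the 468-point mesh around the brows and eyes.
UPPER_FACE = [70, 63, 105, 66, 107, 336, 296, 334, 293, 300, 33, 133, 362, 263]

mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)
img = cv2.imread("face.jpg")                      # placeholder input frame
res = mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
if res.multi_face_landmarks:
    lms = res.multi_face_landmarks[0].landmark
    row = [coord for i in UPPER_FACE for coord in (lms[i].x, lms[i].y)]
    with open("landmarks.csv", "a") as f:         # one row per labeled frame
        f.write(",".join(f"{v:.5f}" for v in row) + "\n")
```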

23 pages, 3358 KiB  
Article
Modeling and Applying Implicit Dormant Features for Recommendation via Clustering and Deep Factorization
by Alpamis Kutlimuratov, Akmalbek Bobomirzaevich Abdusalomov, Rashid Oteniyazov, Sanjar Mirzakhalilov and Taeg Keun Whangbo
Sensors 2022, 22(21), 8224; https://doi.org/10.3390/s22218224 - 27 Oct 2022
Cited by 11 | Viewed by 1349
Abstract
E-commerce systems suffer poor performance as the number of records in the customer database grows with the gradual increase in customers and products. Because the original dataset is sparse, incorporating implicit hidden features into a recommender system (RS) plays an important role in enhancing its performance. In particular, the relationship between products and customers can be understood by analyzing their hierarchically expressed hidden implicit features, and the effectiveness of rating prediction and system customization increases when customer-added tag information is combined with these hierarchically structured features. For these reasons, we first group comparable customers early using a clustering technique, and then further enhance recommendation efficacy by obtaining implicit hidden features and combining them with the customers' tag information, which regularizes the deep-factorization procedure. The idea behind the proposed method is to cluster customers early via a customer rating matrix and deeply factorize a basic weighted nonnegative matrix factorization (WNMF) model to generate hierarchically structured hidden implicit features of customer preferences and product characteristics in each cluster; this reveals a deep relationship between them and regularizes the prediction procedure via an auxiliary parameter (tag information). The empirical findings supported the viability of the proposed approach. In particular, the MAE of rating prediction was 0.8011 with a 60% training set and 0.7965 with an 80% training set, while the MAE was 0.8781 and 0.9046 in cold-start scenarios with 50 and 100 new customers, respectively. The proposed model outperformed baseline models that independently employed only the major properties of customers, products, or tags in the prediction process.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
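As a reference point, here is a minimal weighted NMF (WNMF) baseline of the kind the paper deepens: multiplicative updates applied only to observed entries, with a binary mask zeroing unrated cells. The cluster-wise deep factorization and tag regularization from the paper are deliberately omitted from this sketch.

```python
# Sketch: basic WNMF via multiplicative updates on observed ratings only.
import numpy as np

def wnmf(R, mask, k=8, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    m, n = R.shape
    U, V = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        U *= ((mask * R) @ V.T) / (((mask * (U @ V)) @ V.T) + eps)
        V *= (U.T @ (mask * R)) / ((U.T @ (mask * (U @ V))) + eps)
    return U, V

R = np.array([[5, 0, 3],
              [4, 2, 0],
              [0, 1, 5.0]])          # toy rating matrix, 0 = unrated
mask = (R > 0).astype(float)         # weight observed ratings only
U, V = wnmf(R, mask, k=2)
print(np.round(U @ V, 2))            # predicted ratings fill the zeros
```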

20 pages, 7083 KiB  
Article
Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep Learning
by Mukhriddin Mukhiddinov, Azamjon Muminov and Jinsoo Cho
Sensors 2022, 22(21), 8192; https://doi.org/10.3390/s22218192 - 26 Oct 2022
Cited by 27 | Viewed by 8481
Abstract
Classification of fruit and vegetable freshness plays an essential role in the food industry. Freshness is a fundamental measure of fruit and vegetable quality that directly affects consumers' physical health and purchasing motivation, and it is also a significant determinant of market price; it is therefore imperative to study the freshness of fruits and vegetables. Owing to similarities in color and texture and to external environmental changes, such as shadows, lighting, and complex backgrounds, automatic recognition and classification of fruits and vegetables using machine vision is challenging. This study presents a deep-learning system for multiclass fruit and vegetable categorization based on an improved YOLOv4 model that first recognizes the object type in an image and then classifies it into one of two categories: fresh or rotten. The work involves developing an optimized YOLOv4 model, creating an image dataset of fruits and vegetables, performing data augmentation, and evaluating performance. Furthermore, the backbone of the proposed model was enhanced with the Mish activation function for more precise and rapid detection. A complete experimental evaluation showed that the proposed method obtains a higher average precision (50.4%) than the original YOLOv4 (49.3%) and YOLOv3 (41.7%). The proposed system has outstanding prospects for building an autonomous, real-time fruit and vegetable classification system for the food industry and marketplaces, and it can also help visually impaired people choose fresh food and avoid food poisoning.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
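The Mish activation used to strengthen the backbone is compact enough to show in full: mish(x) = x * tanh(softplus(x)). Below is a drop-in PyTorch module for reference; recent PyTorch versions also ship nn.Mish.

```python
# Mish activation: smooth, non-monotonic alternative to ReLU/Leaky ReLU.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))   # x * tanh(ln(1 + e^x))

x = torch.linspace(-3, 3, 7)
print(Mish()(x))
```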

21 pages, 3178 KiB  
Article
Improved Feature Parameter Extraction from Speech Signals Using Machine Learning Algorithm
by Akmalbek Bobomirzaevich Abdusalomov, Furkat Safarov, Mekhriddin Rakhimov, Boburkhon Turaev and Taeg Keun Whangbo
Sensors 2022, 22(21), 8122; https://doi.org/10.3390/s22218122 - 24 Oct 2022
Cited by 22 | Viewed by 4012
Abstract
Speech recognition refers to the capability of software or hardware to receive a speech signal, identify the speaker's features in it, and recognize the speaker thereafter. In general, the speech recognition process involves three main steps: acoustic processing, feature extraction, and classification/recognition. The purpose of feature extraction is to represent a speech signal with a predetermined number of signal components, because all the information in the acoustic signal is too cumbersome to handle and some of it is irrelevant to the identification task. This study proposes a machine learning-based approach that extracts feature parameters from speech signals to improve the performance of speech recognition applications in real-time smart city environments. Moreover, the mapping of blocks of main memory to the cache is exploited to reduce computing time; the cache block size is a parameter that strongly affects cache performance. Implementing such processes in real-time systems requires high computation speed, since processing speed plays an important role in real-time speech recognition; this calls for modern technologies and fast algorithms that accelerate the extraction of feature parameters from speech signals. The problem of accelerating the digital processing of speech signals has yet to be completely resolved. The experimental results demonstrate that the proposed method successfully extracts the signal features and achieves seamless classification performance compared with other conventional speech recognition algorithms.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
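For context, the snippet below shows a common way to extract feature parameters from a speech signal, MFCCs via librosa, corresponding to the acoustic-processing and feature-extraction steps the abstract describes. The paper's own cache-blocked, accelerated implementation is not reproduced here, and the audio path is a placeholder.

```python
# Sketch: MFCC feature-parameter extraction from a speech signal.
import librosa

signal, sr = librosa.load("utterance.wav", sr=16000)       # placeholder file
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)    # (13, n_frames)
print(mfcc.shape)
```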

18 pages, 6129 KiB  
Article
Fast Registration of Point Cloud Based on Custom Semantic Extraction
by Jianing Wu, Zhang Xiao, Fan Chen, Tianlin Peng, Zhi Xiong and Fengwei Yuan
Sensors 2022, 22(19), 7479; https://doi.org/10.3390/s22197479 - 02 Oct 2022
Cited by 2 | Viewed by 1603
Abstract
With the growing volume of 3D point cloud data and the wide application of point cloud registration in various fields, quickly extracting the key points for registration and performing accurate coarse registration have become urgent questions. In this paper, we propose a novel semantic segmentation algorithm that gives the extracted feature point cloud a clustering effect for fast registration. First, an adaptive technique is proposed to determine the neighborhood radius of a local point. Second, the feature intensity of each point is scored through the regional fluctuation and stationarity coefficients calculated from the normal vectors, and the high-feature regions to be registered are preliminarily determined. Finally, FPFH is used to describe the geometric features of the extracted semantic feature point cloud, realizing coarse registration from the local point cloud to the overall point cloud. The results show that the point cloud can be roughly segmented based on the uniqueness of its semantic features. Using the semantic feature point cloud yields a very fast response while keeping the coarse-registration accuracy almost equal to that obtained with the original point cloud, which aids the rapid determination of the initial attitude.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
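A sketch of the final step with Open3D follows: describe the extracted feature point cloud with FPFH and run RANSAC-based coarse registration against the full cloud. The radii, thresholds, file names, and RANSAC settings are illustrative values, not the paper's parameters.

```python
# Sketch: FPFH description + RANSAC coarse registration with Open3D.
import open3d as o3d

def fpfh(pcd, radius=0.25):
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=radius * 2, max_nn=100))

src = o3d.io.read_point_cloud("semantic_feature_points.pcd")  # extracted subset
dst = o3d.io.read_point_cloud("full_scene.pcd")               # overall cloud
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, dst, fpfh(src), fpfh(dst),
    True,                     # mutual filter
    0.05,                     # max correspondence distance
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3, [],                    # ransac_n, correspondence checkers
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
print(result.transformation)  # coarse initial pose estimate
```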

15 pages, 7428 KiB  
Article
Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People
by Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov, Alpamis Kutlimuratov and Taeg Keun Whangbo
Sensors 2022, 22(19), 7305; https://doi.org/10.3390/s22197305 - 26 Sep 2022
Cited by 31 | Viewed by 3393
Abstract
Early fire detection and notification techniques provide fire prevention and safety information to blind and visually impaired (BVI) people within a short period in emergencies when fires occur in indoor environments. Given its direct impact on human safety and the environment, fire detection is a difficult but crucial problem, and preventing injuries and property damage requires methods that detect fires as quickly as possible. In this study, to reduce the loss of human life and property damage, we develop a vision-based early flame recognition and notification approach that uses artificial intelligence to assist BVI people. The proposed fire alarm control system for indoor buildings can provide accurate information on fire scenes. In our method, all previously manual processes are automated, and the efficiency and quality of fire classification are improved. To perform real-time monitoring and enhance the detection accuracy of indoor fire disasters, the system uses the YOLOv5m model, an updated version of the traditional YOLOv5. The experimental results show that the proposed system detects and reports catastrophic fires with high speed and accuracy at any time of day or night, regardless of the shape or size of the fire. Finally, we compared our method with other conventional fire-detection methods and confirmed its seamless classification results using performance evaluation metrics.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
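A minimal inference sketch follows, loading a YOLOv5m checkpoint through torch.hub. The weight file name and the "fire" class label are assumptions standing in for the paper's trained model, and the full notification pipeline is reduced to a console alert.

```python
# Sketch: YOLOv5 custom-weight inference via torch.hub.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="fire_yolov5m.pt")        # placeholder weights
results = model("room_camera_frame.jpg")              # placeholder frame
detections = results.pandas().xyxy[0]                 # boxes with class names
if (detections["name"] == "fire").any():
    print("ALERT: fire detected")                     # stand-in for BVI notification
```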

29 pages, 1828 KiB  
Article
Day-Ahead Hourly Solar Irradiance Forecasting Based on Multi-Attributed Spatio-Temporal Graph Convolutional Network
by Hyeon-Ju Jeon, Min-Woo Choi and O-Joun Lee
Sensors 2022, 22(19), 7179; https://doi.org/10.3390/s22197179 - 21 Sep 2022
Cited by 7 | Viewed by 1915
Abstract
Solar irradiance forecasting is fundamental and essential for commercializing solar energy generation by overcoming output variability. Accurate forecasting depends on historical solar irradiance data, correlations between various meteorological variables (e.g., wind speed, humidity, and cloudiness), and influences between the weather contexts of spatially adjacent regions. However, existing studies have been limited to spatio-temporal analysis of a few variables that have clear correlations with solar irradiance (e.g., sunshine duration) and do not attempt to establish atmospheric contextual information from a variety of meteorological variables. Therefore, this study proposes a novel solar irradiance forecasting model that represents atmospheric parameters observed from multiple stations as an attributed dynamic network and analyzes temporal changes in the network by extending existing spatio-temporal graph convolutional network (ST-GCN) models. By comparing the proposed model with existing models, we also investigated the contributions of (i) the spatial adjacency of the stations, (ii) temporal changes in the meteorological variables, and (iii) the variety of variables to the forecasting performance. We evaluated the performance of the proposed and existing models by predicting the hourly solar irradiance at observation stations in the Korean Peninsula. The experimental results showed that the three features are synergistic and have correlations that are difficult to establish using single-aspect analysis.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
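To make the input representation concrete, the sketch below assembles an attributed dynamic network of the kind described: nodes are stations, node attributes are hourly meteorological variables, and edge weights encode spatial adjacency via distance. The shapes, random data, and Gaussian kernel are illustrative choices, not the paper's configuration.

```python
# Sketch: attributed dynamic network input for an ST-GCN-style model.
import numpy as np

n_stations, n_hours, n_vars = 20, 24, 6            # e.g., irradiance, wind, humidity...
X = np.random.rand(n_hours, n_stations, n_vars)    # dynamic node attributes

coords = np.random.rand(n_stations, 2) * 100       # station positions (km)
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
A = np.exp(-(d ** 2) / (2 * 25.0 ** 2))            # Gaussian-kernel adjacency
np.fill_diagonal(A, 0.0)

# An ST-GCN then alternates graph convolutions over A (spatial mixing)
# with temporal convolutions along the n_hours axis of X.
print(X.shape, A.shape)
```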

9 pages, 1191 KiB  
Article
Reliability of the In Silico Prediction Approach to In Vitro Evaluation of Bacterial Toxicity
by Sung-Yoon Ahn, Mira Kim, Ji-Eun Bae, Iel-Soo Bang and Sang-Woong Lee
Sensors 2022, 22(17), 6557; https://doi.org/10.3390/s22176557 - 31 Aug 2022
Cited by 4 | Viewed by 1569
Abstract
Several pathogens that spread through the air are highly contagious, and related infectious diseases are more easily transmitted through airborne transmission under indoor conditions, as observed during the COVID-19 pandemic. Indoor air contaminated by microorganisms, including viruses, bacteria, and fungi, or by derived pathogenic substances can endanger human health. Thus, identifying and analyzing the potential pathogens residing in the air is crucial for preventing disease and maintaining indoor air quality. Here, we applied deep learning technology to analyze and predict the toxicity of bacteria in indoor air. We trained the ProtBert model on toxic bacterial and virulence factor proteins and applied it to predict the potential toxicity of several bacterial species from their protein sequences. The predictions agree with in vitro analyses of these species' toxicity in human cells. The in silico simulation and the obtained results demonstrate that it is plausible to find potentially toxic sequences within unknown protein sequences.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
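The sketch below shows how a protein sequence might be scored with a ProtBert classification head via Hugging Face transformers. The "my-protbert-toxicity" checkpoint name is a placeholder for a model fine-tuned on toxic and virulence-factor proteins as in the paper; only the Rostlab/prot_bert tokenizer is a real published artifact. ProtBert expects space-separated amino acids.

```python
# Sketch: scoring a protein sequence with an assumed fine-tuned ProtBert head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(
    "my-protbert-toxicity")                      # placeholder checkpoint

seq = " ".join("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")   # toy amino-acid sequence
inputs = tok(seq, return_tensors="pt")
probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)   # e.g., P(non-toxic), P(toxic) under the assumed head
```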

17 pages, 4027 KiB  
Article
Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images
by Jakhongir Nodirov, Akmalbek Bobomirzaevich Abdusalomov and Taeg Keun Whangbo
Sensors 2022, 22(17), 6501; https://doi.org/10.3390/s22176501 - 29 Aug 2022
Cited by 30 | Viewed by 6342
Abstract
2D medical image segmentation models are popular among researchers applying traditional and newer machine learning and deep learning techniques. In addition, 3D volumetric data have recently become more accessible owing to the many studies conducted in recent years on creating 3D volumes, and researchers have begun building 3D segmentation models for tasks such as brain tumor segmentation and classification. Since more crucial features can be extracted from 3D data than from 2D data, 3D brain tumor detection models have grown in popularity, and significant research has focused on 3D versions of popular models, such as 3D U-Net and V-Net. In this study, we used 3D brain image data and created a new architecture based on the 3D U-Net that uses multiple skip connections with cost-efficient pretrained 3D MobileNetV2 blocks and attention modules. The pretrained MobileNetV2 blocks keep the parameter count small, maintaining an operable model size for our computational capability and helping the model converge faster. We added additional skip connections between the encoder and decoder blocks to ease the exchange of extracted features between them, resulting in the maximum use of the features. We also used attention modules to filter out irrelevant features coming through the skip connections, preserving computational power while achieving improved accuracy.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
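Below is a minimal attention-gate sketch of the kind used to filter skip-connection features in attention U-Nets. The channel sizes are illustrative, the gate assumes skip and gating features at the same spatial resolution, and the full 3D MobileNetV2-based architecture is not reproduced.

```python
# Sketch: additive attention gate for 3D skip connections.
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.ws = nn.Conv3d(skip_ch, inter_ch, 1)
        self.wg = nn.Conv3d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        a = self.psi(torch.relu(self.ws(skip) + self.wg(gate)))  # (N,1,D,H,W)
        return skip * a    # attenuate irrelevant skip features

skip = torch.randn(1, 32, 16, 16, 16)   # encoder features
gate = torch.randn(1, 32, 16, 16, 16)   # decoder gating features
print(AttentionGate3D(32, 32, 16)(skip, gate).shape)
```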

Review

104 pages, 1701 KiB  
Review
Graph Representation Learning and Its Applications: A Survey
by Van Thuy Hoang, Hyeon-Ju Jeon, Eun-Soon You, Yoewon Yoon, Sungyeop Jung and O-Joun Lee
Sensors 2023, 23(8), 4168; https://doi.org/10.3390/s23084168 - 21 Apr 2023
Cited by 7 | Viewed by 6253
Abstract
Graphs are data structures that effectively represent relational data in the real world. Graph representation learning is a significant task since it facilitates various downstream tasks, such as node classification and link prediction; it aims to map graph entities to low-dimensional vectors while preserving graph structure and entity relationships. Over the past decades, many models have been proposed for graph representation learning. This paper aims to provide a comprehensive picture of graph representation learning models, covering traditional and state-of-the-art models on various graphs in different geometric spaces. First, we begin with five types of graph embedding models: graph kernels, matrix factorization models, shallow models, deep-learning models, and non-Euclidean models; we also discuss graph transformer models and Gaussian embedding models. Second, we present practical applications of graph embedding models, from constructing graphs for specific domains to applying the models to solve tasks. Finally, we discuss challenges for existing models and future research directions in detail. As a result, this paper provides a structured overview of the diversity of graph embedding models.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
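As a toy instance of the matrix-factorization family the survey covers, the snippet below embeds the nodes of a small graph using the leading eigenvectors of its normalized adjacency matrix, mapping each node to a low-dimensional vector that preserves structural similarity. The 4-node graph and 2-dimensional target space are arbitrary.

```python
# Toy spectral node embedding (matrix-factorization family).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0.0]])               # small undirected graph
deg = A.sum(1)
A_norm = A / np.sqrt(np.outer(deg, deg))     # symmetric normalization
vals, vecs = np.linalg.eigh(A_norm)
Z = vecs[:, -2:]                             # top-2 eigenvectors = 2-D embeddings
print(Z)                                     # one low-dimensional vector per node
```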
