Advances in Intelligent Data Analysis and Its Applications, 2nd Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 21728

Special Issue Editors

Dr. Chao Zhang
Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
Interests: data mining; granular computing; intelligent decision making

Dr. Wentao Li
College of Artificial Intelligence, Southwest University, Chongqing 400715, China
Interests: data mining; cognitive computation; granular computing

Dr. Huiyan Zhang
National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing 400067, China
Interests: Markov jump systems; stochastic systems; event-triggered schemes; filtering design; controller design; cyber-attacks; time-delay; robust control

Dr. Tao Zhan
School of Mathematics and Statistics, Southwest University, Chongqing 400715, China
Interests: data mining; machine learning

Special Issue Information

Dear Colleagues,

The rapid expansion of cloud computing, the Internet of Things (IoT), and the industrial Internet has given rise to a wide range of complex data analysis tasks in societal and economic development. In tackling these challenges, computational intelligence plays a central role, encompassing the use of large-scale models and cognitive analysis techniques.

A fundamental challenge in data analysis is the effective management, modeling, and processing of the extensive, heterogeneous datasets generated by these emerging technologies. There is therefore a pressing need to explore effective models and methodologies that leverage computational intelligence to support intelligent data analysis and its applications. Scholars and practitioners have approached intelligent data analysis and its applications from many perspectives, spanning data mining, machine learning, natural language processing, granular computing, social networks, machine vision, cognitive computation, and other hybrid paradigms.

Given the flood of complex data in the real world, intelligent data analysis and its applications are of paramount importance across a wide range of big data scenarios. Such work not only addresses immediate challenges but also enriches the computer science and engineering community, advancing data literacy and technological progress.

The inaugural volume of the Special Issue "Advances in Intelligent Data Analysis and its Applications" was successful, featuring a collection of high-quality papers. Building on this achievement, the objective of this Special Issue is to continue gathering recent advances in intelligent data analysis and to explore their practical applications across a spectrum of real-world domains, including finance, medical diagnosis, business intelligence, engineering, environmental science, and more. We invite submissions of original research contributions, substantially extended versions of conference papers, and comprehensive review articles. The topics of interest include, but are not limited to, the following areas:

  • Intelligent data mining algorithms and their practical applications;
  • Utilizing machine learning techniques for intelligent data analysis;
  • Advancements in natural language processing for data analysis;
  • Intelligent granular computing models and their real-world use cases;
  • Applying intelligent data analysis to glean insights from social networks;
  • Harnessing machine vision for data analysis and interpretation;
  • Innovations in hybrid models that combine cognitive computation and intelligent data analysis.

Dr. Chao Zhang
Dr. Wentao Li
Dr. Huiyan Zhang
Dr. Tao Zhan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data mining
  • data analysis
  • cloud computing
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (20 papers)


Research

13 pages, 555 KiB  
Article
Model Reduction Method for Spacecraft Electrical System Based on Singular Perturbation Theory
by Lifeng Wang, Yelun Peng and Juan Luo
Electronics 2024, 13(21), 4291; https://doi.org/10.3390/electronics13214291 - 31 Oct 2024
Viewed by 355
Abstract
Accurate and efficient modeling and simulation of spacecraft electrical systems are crucial because of their complexity. However, existing models often struggle to balance simulation efficiency and accuracy. This paper introduces a model reduction method based on singular perturbation theory to simplify the full-order model of spacecraft electrical systems. The experimental results show that the reduced-order simplified model saves 50% of the simulation time with almost no degradation in the simulation accuracy and can be applied to real-world scenarios, such as digital twins. This method offers a new approach for rapid simulation of spacecraft electrical systems and has broad application prospects.
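The abstract does not reproduce the authors' equations; as a generic illustration of singular-perturbation model reduction for a linear system split into slow and fast states, the following NumPy sketch applies the standard quasi-steady-state reduction (all matrix values are made-up placeholders, not a spacecraft model):

```python
import numpy as np

# Linear system partitioned into slow states x_s and fast states x_f:
#   dx_s/dt       = A11 x_s + A12 x_f + B1 u
#   eps * dx_f/dt = A21 x_s + A22 x_f + B2 u
# Letting eps -> 0 and solving the fast equation for x_f gives the reduced slow model:
#   dx_s/dt = (A11 - A12 A22^{-1} A21) x_s + (B1 - A12 A22^{-1} B2) u

A11 = np.array([[-1.0, 0.5], [0.0, -2.0]])   # placeholder values
A12 = np.array([[0.2], [0.1]])
A21 = np.array([[0.3, 0.0]])
A22 = np.array([[-10.0]])                    # fast dynamics (large magnitude)
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.5]])

A22_inv = np.linalg.inv(A22)
A_red = A11 - A12 @ A22_inv @ A21
B_red = B1 - A12 @ A22_inv @ B2
print("Reduced A:\n", A_red)
print("Reduced B:\n", B_red)
```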

13 pages, 850 KiB  
Article
Patent Keyword Analysis Using Regression Modeling Based on Quantile Cumulative Distribution Function
by Sangsung Park and Sunghae Jun
Electronics 2024, 13(21), 4247; https://doi.org/10.3390/electronics13214247 - 30 Oct 2024
Viewed by 481
Abstract
Patents contain detailed information on researched and developed technologies. We analyzed patent documents to understand the technology in a given domain. For the patent data analysis, we extracted keywords from the patent documents using text mining techniques. Next, we built a patent document–keyword matrix using the patent keywords and analyzed the matrix data using statistical methods. Each element of the matrix represents the frequency of a keyword in a patent document. In general, most of the elements are zero because a keyword becomes a column of the matrix even if it occurs in only one document. Due to this zero-inflated problem, we experienced difficulty in analyzing patent keywords using existing statistical methods such as linear regression analysis. The purpose of this paper is to build a statistical model that solves the zero-inflated problem. We propose a regression model based on the quantile cumulative distribution function to solve this problem in patent keyword analysis. We perform experiments showing the performance of the proposed method using patent documents related to blockchain technology and compare regression modeling based on the quantile cumulative distribution function with conventional models such as linear regression. We expect that this paper will contribute to overcoming the zero-inflated problem in patent keyword analysis across various technology fields.
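The zero-inflation described above is easy to reproduce on toy data; the sketch below builds a document–keyword count matrix with scikit-learn and measures its sparsity. The toy documents are placeholders, and the paper's quantile-CDF regression itself is not implemented here:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for patent abstracts (placeholders, not real patent data).
docs = [
    "blockchain ledger for secure transaction verification",
    "consensus protocol for distributed ledger networks",
    "smart contract execution on a blockchain platform",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)      # patent document-keyword matrix
dense = X.toarray()

# Most entries are zero because every keyword becomes a column even if it
# appears in only one document -- the zero-inflated problem.
sparsity = np.mean(dense == 0)
print(f"matrix shape: {dense.shape}, fraction of zeros: {sparsity:.2f}")
```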

16 pages, 2344 KiB  
Article
ADYOLOv5-Face: An Enhanced YOLO-Based Face Detector for Small Target Faces
by Linrunjia Liu, Gaoshuai Wang and Qiguang Miao
Electronics 2024, 13(21), 4184; https://doi.org/10.3390/electronics13214184 - 25 Oct 2024
Viewed by 494
Abstract
Benefiting from advancements in generic object detectors, significant progress has been achieved in the field of face detection. Among these algorithms, the You Only Look Once (YOLO) series plays an important role due to its low training computation cost. However, we have observed that face detectors based on lightweight YOLO models struggle to accurately detect small faces, because they preserve more semantic information for large faces while compromising the detailed information for small faces. To address this issue, this study makes two contributions to enhance detection performance, particularly for small faces: (1) modifying the neck part of the architecture by integrating a Gather-and-Distribute mechanism instead of the traditional Feature Pyramid Network to tackle the information fusion challenges inherent in YOLO-based models; and (2) incorporating an additional detection head specifically designed for detecting small faces. To evaluate the performance of the proposed face detector, we introduce a new dataset named XD-Face for the face detection task. In the experiments, the proposed model is trained on the Wider Face dataset and evaluated on both the Wider Face and XD-Face datasets. Experimental results demonstrate that the proposed face detector outperforms other strong face detectors across all datasets involving small faces and achieves improvements of 1.1%, 1.09%, and 1.35% in the AP50 metric on the Wider Face validation dataset compared to the baseline YOLOv5s-based face detector.

19 pages, 3206 KiB  
Article
A Hybrid Approach to Modeling Heart Rate Response for Personalized Fitness Recommendations Using Wearable Data
by Hyston Kayange, Jonghyeok Mun, Yohan Park, Jongsun Choi and Jaeyoung Choi
Electronics 2024, 13(19), 3888; https://doi.org/10.3390/electronics13193888 - 30 Sep 2024
Viewed by 753
Abstract
Heart rate (HR) is a key indicator of fitness and cardiovascular health, and accurate HR monitoring and prediction are essential for enhancing personalized fitness experiences. The rise of wearable technology has significantly improved the ability to track personal health, including HR metrics. Accurate modeling of the HR response during workouts is crucial for providing effective fitness recommendations, which help users achieve their goals while maintaining safe workout intensities. Although several HR monitoring and prediction models have been developed for personalized fitness recommendations, many remain impractical for real-world applications, and the domain of personalization in fitness applications still lacks sufficient research and innovation. This paper presents a hybrid approach to modeling the HR response to workout intensity for personalized fitness recommendations. The proposed approach integrates a physiological model using Dynamic Bayesian Networks (DBNs) to capture heart rate dynamics during workout sessions. DBNs, combined with Long Short-Term Memory (LSTM) networks, model the evolution of HR over time based on workout intensity and individual fitness characteristics. The DBN parameters are dynamically derived from flexible neural networks that account for each user's personalized health state, enabling the prediction of a full HR profile for each workout while incorporating factors such as workout history and environmental conditions. An adaptive feature selection module further enhances the model's performance by focusing on relevant data and ensuring responsiveness to new data. We validated the proposed approach on the FitRec dataset, and the experimental results show that our model can accurately predict HR responses to workout intensity in future sessions, achieving an average mean absolute error of 5.2 BPM per workout and significantly improving upon existing models. In addition to HR prediction, the model provides real-time personalized fitness recommendations based on an individual's observed workout intensity during exercise. These findings demonstrate the model's effectiveness in delivering precise, user-personalized heart rate responses to exercise, with potential applications in fitness apps for personalized training and health monitoring.
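As a rough, simplified illustration of the sequence-modeling component only (an LSTM mapping a workout-intensity sequence to heart rate, trained on synthetic data), not the authors' full DBN–LSTM hybrid:

```python
import torch
import torch.nn as nn

class HRRegressor(nn.Module):
    """Minimal LSTM mapping a workout-intensity sequence to a heart-rate sequence."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, intensity):                 # intensity: (batch, time, 1)
        h, _ = self.lstm(intensity)
        return self.head(h)                       # predicted HR: (batch, time, 1)

# Synthetic placeholder data: HR loosely follows intensity plus noise.
x = torch.rand(8, 120, 1)
y = 60 + 80 * x + 5 * torch.randn_like(x)

model = HRRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.L1Loss()                             # mean absolute error, matching the reported metric
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final MAE (BPM):", loss.item())
```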

22 pages, 1611 KiB  
Article
Bayesian Modeling of Travel Times on the Example of Food Delivery: Part 2—Model Creation and Handling Uncertainty
by Jan Pomykacz, Justyna Gibas and Jerzy Baranowski
Electronics 2024, 13(17), 3418; https://doi.org/10.3390/electronics13173418 - 28 Aug 2024
Viewed by 689
Abstract
The e-commerce sector is in a constant state of growth and evolution, particularly within its subdomain of online food delivery. As such, ensuring customer satisfaction is critical for companies working in this field. One way to achieve this is by providing accurate delivery time estimation. While companies can track couriers via GPS, they often lack real-time data on traffic and road conditions, complicating delivery time predictions. To address this, a range of statistical and machine learning techniques are employed, including neural networks and specialized expert systems, with different degrees of success. One issue with neural networks and machine learning models is their heavy dependence on vast amounts of high-quality data. To mitigate this issue, we propose two Bayesian generalized linear models to predict delivery times. Utilizing a linear combination of predictor variables, we generate a practical range of outputs with the Hamiltonian Monte Carlo sampling method. These models offer a balance of generality and adaptability, allowing for tuning with expert knowledge. The models were compared using the PSIS-LOO criterion and WAIC. The results show that both models accurately estimated delivery times from the dataset while maintaining numerical stability, and the model with more predictor variables proved to be more accurate.
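A minimal PyMC sketch of a Bayesian generalized linear model for delivery time sampled with Hamiltonian Monte Carlo (NUTS); the two predictors, the priors, and the synthetic data are illustrative assumptions, not the paper's feature set or model:

```python
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 200
distance = rng.uniform(0.5, 8.0, n)       # km, placeholder predictor
prep_time = rng.uniform(5.0, 25.0, n)     # minutes, placeholder predictor
delivery = 10 + 3.5 * distance + 0.8 * prep_time + rng.normal(0, 3, n)

with pm.Model() as glm:
    alpha = pm.Normal("alpha", mu=0.0, sigma=20.0)
    beta = pm.Normal("beta", mu=0.0, sigma=10.0, shape=2)
    sigma = pm.HalfNormal("sigma", sigma=10.0)
    mu = alpha + beta[0] * distance + beta[1] * prep_time
    pm.Normal("delivery_time", mu=mu, sigma=sigma, observed=delivery)
    # NUTS is PyMC's default Hamiltonian Monte Carlo sampler.
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(az.summary(idata, var_names=["alpha", "beta", "sigma"]))
```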

19 pages, 15843 KiB  
Article
Bayesian Modeling of Travel Times on the Example of Food Delivery: Part 1—Spatial Data Analysis and Processing
by Justyna Gibas, Jan Pomykacz and Jerzy Baranowski
Electronics 2024, 13(17), 3387; https://doi.org/10.3390/electronics13173387 - 26 Aug 2024
Cited by 1 | Viewed by 653
Abstract
Online food delivery services are rapidly growing in popularity, making customer satisfaction critical for company success in a competitive market. Accurate delivery time predictions are key to ensuring high customer satisfaction. While various methods for travel time estimation exist, effective data analysis and processing are often overlooked. This paper addresses this gap by leveraging spatial data analysis and preprocessing techniques to enhance the quality of the data used in Bayesian models for predicting food delivery times. We utilized the OSRM API to generate routes that accurately reflect real-world conditions. Next, we visualized these routes using various techniques to identify and examine suspicious results. Our analysis of the route distribution identified two groups of outliers, leading us to establish an appropriate boundary for the maximum route distance to be used in future Bayesian modeling. A total of 3% of the data were classified as outliers, and 15% of the samples contained invalid data. The spatial analysis revealed that these outliers were primarily deliveries to the outskirts or beyond the city limits. The spatial analysis also shows that the Indian online food delivery (OFD) market exhibits trends similar to the Chinese and English markets and is concentrated in densely populated areas. By refining the data quality through these methods, we aim to improve the accuracy of delivery time predictions, ultimately enhancing customer satisfaction.
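A minimal sketch of the kind of distance-based outlier screening described above, using a common IQR rule on route distances; the sample distances and the cutoff rule are illustrative assumptions rather than the paper's exact boundary:

```python
import numpy as np

# Route distances (km) as returned by a routing engine such as OSRM.
# The values below are placeholders, not the paper's data.
route_km = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 25.7, 2.2, 31.4, 3.8])

q1, q3 = np.percentile(route_km, [25, 75])
iqr = q3 - q1
upper = q3 + 1.5 * iqr            # one common choice of maximum-route boundary
outliers = route_km > upper       # e.g., deliveries far beyond the city limits

print(f"max-route boundary: {upper:.1f} km")
print("flagged deliveries:", np.where(outliers)[0].tolist())
```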

28 pages, 19321 KiB  
Article
Neuromarketing and Big Data Analysis of Banking Firms’ Website Interfaces and Performance
by Nikolaos T. Giannakopoulos, Damianos P. Sakas and Stavros P. Migkos
Electronics 2024, 13(16), 3256; https://doi.org/10.3390/electronics13163256 - 16 Aug 2024
Viewed by 1207
Abstract
In today's competitive digital landscape, banking firms must leverage qualitative and quantitative analysis to enhance their website interfaces, ensuring they meet user needs and expectations. By combining detailed user feedback with data-driven insights, banks can create more intuitive and engaging online experiences, ultimately driving customer satisfaction and loyalty. Analyzing website customer behavior to evaluate the interface is therefore critical. This study focused on the five biggest banking firms and collected big data from their websites. Statistical analysis was applied to validate the findings and ensure the reliability of the results. At the same time, agent-based modeling (ABM) and System Dynamics (SD) were utilized to simulate user behavior, allowing responses to interface changes to be predicted, the websites to be optimized, and a comprehensive understanding of user behavior to be obtained, thereby enabling banking firms to create more intuitive and user-friendly website interfaces. This interdisciplinary approach found that various website analytical metrics, such as organic and paid traffic costs, referral domains, and email sources, tend to impact banking firms' purchase conversion, display ads, organic traffic, and bounce rate. Moreover, these insights into banking firms' website visibility, combined with the behavioral data of the neuromarketing study, indicate specific areas for improving their website interfaces and performance.

14 pages, 3705 KiB  
Article
Navigation Based on Hybrid Decentralized and Centralized Training and Execution Strategy for Multiple Mobile Robots Reinforcement Learning
by Yanyan Dai, Deokgyu Kim and Kidong Lee
Electronics 2024, 13(15), 2927; https://doi.org/10.3390/electronics13152927 - 24 Jul 2024
Cited by 1 | Viewed by 586
Abstract
In addressing the complex challenges of path planning in multi-robot systems, this paper proposes a novel Hybrid Decentralized and Centralized Training and Execution (DCTE) Strategy aimed at optimizing computational efficiency and system performance. The strategy solves the prevalent issues of collision and coordination through a tiered optimization process. The DCTE strategy commences with an initial decentralized path planning step based on a Deep Q-Network (DQN), in which each robot independently formulates its path. This is followed by a centralized collision detection analysis, which serves to identify potential intersections or collision risks. Paths confirmed as non-intersecting are used for execution, while those in collision areas prompt a dynamic re-planning step using DQN. Robots treat each other as dynamic obstacles to circumnavigate, ensuring continuous operation without disruptions. The final step involves linking the newly optimized paths with the original safe paths to form a complete and secure execution route. This paper demonstrates how this structured strategy not only mitigates collision risks but also significantly improves the computational efficiency of multi-robot systems. The reinforcement learning time was significantly shorter, with the DCTE strategy requiring only 3 min and 36 s compared to 5 min and 33 s in the comparative simulation results. This improvement underscores the advantages of the proposed method in enhancing the effectiveness and efficiency of multi-robot systems.
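A small sketch of what the centralized collision-check step could look like for grid paths produced by the decentralized planners: it flags same-cell and cell-swap conflicts that would trigger DQN re-planning. The path format and example plans are invented for illustration, not the authors' implementation:

```python
from itertools import combinations

def collisions(paths):
    """Return (robot_i, robot_j, time_step) tuples where two planned grid paths
    collide: both robots occupy the same cell at the same step, or they swap
    cells between consecutive steps."""
    conflicts = []
    horizon = max(len(p) for p in paths)
    padded = [p + [p[-1]] * (horizon - len(p)) for p in paths]  # wait at goal
    for i, j in combinations(range(len(padded)), 2):
        for t in range(horizon):
            if padded[i][t] == padded[j][t]:
                conflicts.append((i, j, t))
            elif t > 0 and padded[i][t] == padded[j][t - 1] and padded[i][t - 1] == padded[j][t]:
                conflicts.append((i, j, t))
    return conflicts

# Placeholder decentralized plans for three robots on a grid.
plans = [
    [(0, 0), (1, 0), (2, 0), (3, 0)],
    [(3, 0), (2, 0), (1, 0), (0, 0)],   # head-on with robot 0
    [(0, 2), (1, 2), (2, 2)],
]
print(collisions(plans))   # conflicting pairs would trigger DQN re-planning
```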

29 pages, 5818 KiB  
Article
An Online Review Data-Driven Fuzzy Large-Scale Group Decision-Making Method Based on Dual Fine-Tuning
by Xuechan Yuan, Tingyu Xu, Shiqi He and Chao Zhang
Electronics 2024, 13(14), 2702; https://doi.org/10.3390/electronics13142702 - 10 Jul 2024
Viewed by 720
Abstract
Large-scale group decision-making (LSGDM) involves aggregating the opinions of participating decision-makers into collective opinions and selecting optimal solutions, addressing challenges such as a large number of participants, significant scale, and low consensus. In real-world LSGDM scenarios, various challenges are often encountered due to factors such as fuzzy uncertainties in decision information, the large size of decision groups, and the diverse backgrounds of participants. This paper introduces a dual fine-tuning-based LSGDM method using online reviews. Initially, sentiment analysis is conducted on the online review data, and the identified sentiment words are graded and quantified into a fuzzy data set to understand the emotional tendencies of the text. Then, the Louvain algorithm is used to cluster the decision-makers, and a method combining Euclidean distances with Wasserstein distances is introduced to accurately measure data similarities and improve clustering performance. During the consensus-reaching process (CRP), a two-stage approach is employed to adjust the scores: first, the scores of the decision representatives are refined via minor-scale group adjustments to generate a score matrix; then, the scores corresponding to the minimum consensus level in the matrix are identified and adjusted. Subsequently, the final adjusted score matrix is integrated with the prospect–regret theory to derive comprehensive brand scores and rankings. Ultimately, the practicality and efficiency of the proposed model are demonstrated using a case study on the purchase of solar lamps. In summary, the model not only effectively exploits online review data and enhances decision efficiency via clustering, but its dual fine-tuning mechanism also improves consensus attainment, reduces the number of adjustment rounds, and avoids repeated cycles that fail to reach consensus.
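A minimal sketch of the clustering step as described (combined Euclidean/Wasserstein distances turned into similarity edge weights, then Louvain community detection via NetworkX 2.8+); the score vectors and the blending weight are placeholders, not the paper's data or parameters:

```python
import networkx as nx
import numpy as np
from scipy.spatial.distance import euclidean
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
# Placeholder fuzzy score vectors for 12 decision-makers over 5 criteria.
scores = rng.random((12, 5))

def combined_distance(a, b, lam=0.5):
    """Blend of Euclidean and Wasserstein distances (lam is a placeholder weight)."""
    return lam * euclidean(a, b) + (1 - lam) * wasserstein_distance(a, b)

G = nx.Graph()
n = len(scores)
for i in range(n):
    for j in range(i + 1, n):
        d = combined_distance(scores[i], scores[j])
        G.add_edge(i, j, weight=1.0 / (1.0 + d))   # similarity as edge weight

clusters = nx.community.louvain_communities(G, weight="weight", seed=1)
print([sorted(c) for c in clusters])
```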

15 pages, 490 KiB  
Article
An Architecture as an Alternative to Gradient Boosted Decision Trees for Multiple Machine Learning Tasks
by Lei Du, Haifeng Song, Yingying Xu and Songsong Dai
Electronics 2024, 13(12), 2291; https://doi.org/10.3390/electronics13122291 - 12 Jun 2024
Viewed by 948
Abstract
Deep-network-based models have achieved excellent performance in various applications by extracting discriminative feature representations with convolutional neural networks (CNNs) or recurrent neural networks (RNNs). However, CNNs and RNNs may not work when handling data without temporal/spatial structure. Therefore, finding a new technique to extract features instead of a CNN or RNN is a necessity. Gradient Boosted Decision Trees (GBDT) can select the features with the largest information gain when building trees. In this paper, we propose an architecture based on an ensemble of decision trees and a neural network (NN) for multiple machine learning tasks, e.g., classification, regression, and ranking. It can be regarded as an extension of the widely used deep-network-based model, in which we use GBDT instead of a CNN or RNN. This architecture consists of two main parts: (1) the decision forest layers, which focus on learning features from the input data, and (2) the fully connected layers, which focus on distilling knowledge from the decision forest layers. Powered by these two parts, the proposed model can handle data without temporal/spatial structure. The model can be efficiently trained by stochastic gradient descent via back-propagation. The empirical evaluation results on different machine learning tasks demonstrate the effectiveness of the proposed method.
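A simplified two-stage sketch of the idea of feeding GBDT leaf assignments into a neural network, using scikit-learn; the paper trains its architecture end-to-end with SGD, which this stand-in does not attempt:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Decision-forest "layer": GBDT learns features (leaf assignments) from the raw input.
gbdt = GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=0)
gbdt.fit(X_tr, y_tr)
leaves_tr = gbdt.apply(X_tr).reshape(len(X_tr), -1)   # leaf index per tree
leaves_te = gbdt.apply(X_te).reshape(len(X_te), -1)

# 2) Fully connected "layer": a small NN distills knowledge from the leaf features.
nn = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                   MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0))
nn.fit(leaves_tr, y_tr)
print("test accuracy:", nn.score(leaves_te, y_te))
```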

17 pages, 4465 KiB  
Article
An Advanced Approach to Object Detection and Tracking in Robotics and Autonomous Vehicles Using YOLOv8 and LiDAR Data Fusion
by Yanyan Dai, Deokgyu Kim and Kidong Lee
Electronics 2024, 13(12), 2250; https://doi.org/10.3390/electronics13122250 - 7 Jun 2024
Cited by 1 | Viewed by 1766
Abstract
Accurately and reliably perceiving the environment is a major challenge in autonomous driving and robotics research. Traditional vision-based methods often suffer from varying lighting conditions, occlusions, and complex environments. This paper addresses these challenges by combining a deep learning-based object detection algorithm, YOLOv8, with LiDAR data fusion technology. The principle of this combination is to merge the advantages of these technologies: YOLOv8 excels in real-time object detection and classification through RGB images, while LiDAR provides accurate distance measurement and 3D spatial information regardless of lighting conditions. The integration aims to combine the high accuracy and robustness of YOLOv8 in identifying and classifying objects with the depth data provided by LiDAR. This combination enhances overall environmental perception, which is critical for the reliability and safety of autonomous systems. However, this fusion brings research challenges, including data calibration between different sensors, filtering ground points from LiDAR point clouds, and managing the computational complexity of processing large datasets. This paper presents a comprehensive approach to address these challenges. Firstly, a simple algorithm is introduced to filter out ground points from LiDAR point clouds by setting different threshold heights based on the terrain, a step essential for accurate object detection. Secondly, YOLOv8, trained on a customized dataset, is utilized for object detection in images, generating 2D bounding boxes around detected objects. Thirdly, a calibration algorithm is developed to transform 3D LiDAR coordinates to image pixel coordinates, which is vital for correlating LiDAR data with image-based object detection results. Fourthly, a method for clustering different objects based on the fused data is proposed, followed by an object tracking algorithm to compute the 3D poses of objects and their relative distances from a robot. The Agilex Scout Mini robot, equipped with a Velodyne 16-channel LiDAR and an Intel D435 camera, is employed for data collection and experimentation. Finally, the experimental results validate the effectiveness of the proposed algorithms and methods.
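A minimal NumPy sketch of two of the listed steps, height-threshold ground filtering and the 3D-to-pixel calibration transform; the threshold and camera calibration values are placeholders, not the Velodyne/D435 calibration used in the paper:

```python
import numpy as np

def filter_ground(points, z_threshold=-1.4):
    """Drop LiDAR points below a height threshold (placeholder value; the paper
    sets different thresholds depending on the terrain)."""
    return points[points[:, 2] > z_threshold]

def project_to_image(points, K, R, t):
    """Project 3D LiDAR points (N, 3) into pixel coordinates using camera
    intrinsics K and LiDAR-to-camera extrinsics (R, t)."""
    cam = points @ R.T + t                 # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of the camera
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]        # perspective division -> (u, v)

# Placeholder calibration (illustrative only).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.1])

cloud = np.random.uniform([-5, -5, -2], [5, 5, 2], size=(1000, 3))
pixels = project_to_image(filter_ground(cloud), K, R, t)
print(pixels.shape)
```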

15 pages, 1064 KiB  
Article
Local-Global Representation Enhancement for Multi-View Graph Clustering
by Xingwang Zhao, Zhedong Hou and Jie Wang
Electronics 2024, 13(9), 1788; https://doi.org/10.3390/electronics13091788 - 6 May 2024
Viewed by 1039
Abstract
In recent years, multi-view graph clustering algorithms based on representation learning have received extensive attention. However, existing algorithms are still limited in two main aspects. First, most algorithms employ graph convolution networks to learn local representations, but the presence of high-frequency noise in these representations limits the clustering performance. Second, in the process of constructing a global representation from the local representations, most algorithms focus on the consistency of each view while ignoring complementarity, resulting in lower representation quality. To address these issues, a local-global representation enhancement algorithm for multi-view graph clustering is proposed in this paper. First, the low-frequency signals in the local representations are enhanced by a low-pass graph encoder, which yields smoother local representations that are more suitable for clustering. Second, by introducing an attention mechanism, the local embedded representations of each view are weighted and fused to obtain a global representation. Finally, to enhance the quality of the global representation, it is jointly optimized using a neighborhood contrastive loss and a reconstruction loss. The final clustering results are obtained by applying the k-means algorithm to the global representation. Extensive experiments validate the effectiveness and robustness of the proposed algorithm.
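A minimal sketch of low-pass smoothing of node features followed by k-means on a fused representation; the filter H = I - 0.5 L_sym, the simple averaging across views (instead of the paper's attention weighting), and the random graphs are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def low_pass_filter(A, X, k=2):
    """Smooth node features with the low-pass filter H = I - 0.5 * L_sym,
    applied k times (a common choice; the paper's encoder may differ)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(n) - D_inv_sqrt @ A_hat @ D_inv_sqrt
    H = np.eye(n) - 0.5 * L_sym
    for _ in range(k):
        X = H @ X
    return X

# Two placeholder views: each contributes an adjacency matrix over the same nodes.
rng = np.random.default_rng(0)
n = 30
A1 = (rng.random((n, n)) < 0.1).astype(float)
A2 = (rng.random((n, n)) < 0.1).astype(float)
A1, A2 = np.maximum(A1, A1.T), np.maximum(A2, A2.T)
X = rng.random((n, 8))

# Fuse the smoothed per-view representations (plain average here) and cluster.
Z = 0.5 * low_pass_filter(A1, X) + 0.5 * low_pass_filter(A2, X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(labels)
```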

16 pages, 1944 KiB  
Article
A Novel Framework for Risk Warning That Utilizes an Improved Generative Adversarial Network and Categorical Boosting
by Yan Peng, Yue Liu, Jie Wang and Xiao Li
Electronics 2024, 13(8), 1538; https://doi.org/10.3390/electronics13081538 - 18 Apr 2024
Viewed by 737
Abstract
To address the problems of inadequate training and low precision in prediction models with small sample sizes and incomplete data, a novel SALGAN-CatBoost-SSAGA framework is proposed in this paper. We utilize the standard K-nearest neighbor algorithm to interpolate missing values in incomplete data and employ EllipticEnvelope to identify outliers. SALGAN, a generative adversarial network with a label-aware self-attention mechanism, is utilized to generate virtual samples and increase the diversity of the training data. To avoid local optima and improve the accuracy and stability of the standard CatBoost prediction model, an improved Sparrow Search Algorithm (SSA)–Genetic Algorithm (GA) combination is adopted to construct an effective CatBoost-SSAGA model for risk warning, in which SSAGA is used for the global parameter optimization of CatBoost. A UCI heart disease dataset is used for heart disease risk prediction. The experimental results show the superiority of the proposed model in terms of accuracy, precision, recall, and F1-score, as well as the AUC.
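A minimal scikit-learn sketch of the preprocessing pipeline named in the abstract (KNN imputation and EllipticEnvelope outlier removal) feeding a boosted classifier; GradientBoostingClassifier stands in for CatBoost, the data are synthetic, and the GAN augmentation and SSA-GA search are omitted:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import KNNImputer
from sklearn.model_selection import cross_val_score

# Placeholder tabular data with missing entries (not the UCI heart-disease set).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan

# 1) Impute missing values with K-nearest neighbours.
X_imp = KNNImputer(n_neighbors=5).fit_transform(X)

# 2) Flag and drop outliers with EllipticEnvelope.
inliers = EllipticEnvelope(contamination=0.05, random_state=0).fit_predict(X_imp) == 1
X_clean, y_clean = X_imp[inliers], y[inliers]

# 3) Fit a gradient-boosting classifier (stand-in for the tuned CatBoost model).
clf = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, X_clean, y_clean, cv=5).mean())
```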

17 pages, 4069 KiB  
Article
A Lightweight 6D Pose Estimation Network Based on Improved Atrous Spatial Pyramid Pooling
by Fupan Wang, Xiaohang Tang, Yadong Wu, Yinfan Wang, Huarong Chen, Guijuan Wang and Jing Liao
Electronics 2024, 13(7), 1321; https://doi.org/10.3390/electronics13071321 - 1 Apr 2024
Viewed by 1144
Abstract
It is difficult for lightweight neural networks to produce accurate 6DoF pose estimates because their accuracy is affected by scale changes. To solve this problem, we propose a method with good performance and robustness based on previous research. The enhanced PVNet-based method uses depth-wise convolution to build a lightweight network. In addition, coordinate attention and atrous spatial pyramid pooling are used to ensure accuracy and robustness. The method effectively reduces the network size and computational complexity and is a lightweight 6DoF pose estimation method based on monocular RGB images. Experiments on public and self-built datasets show that the improved method increases the average ADD(-S) estimation accuracy and the 2D projection metric. For datasets with large changes in object scale, the average ADD(-S) estimation accuracy is greatly improved.
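A minimal PyTorch sketch of the depth-wise (separable) convolution block commonly used to build lightweight backbones of this kind; the layer sizes are arbitrary, and the coordinate attention and ASPP modules described in the paper are not included:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution -- the
    standard lightweight replacement for a full convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128, stride=2)
print(block(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 128, 28, 28])
```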

13 pages, 2044 KiB  
Article
Deep Neural Network Confidence Calibration from Stochastic Weight Averaging
by Zongjing Cao, Yan Li, Dong-Ho Kim and Byeong-Seok Shin
Electronics 2024, 13(3), 503; https://doi.org/10.3390/electronics13030503 - 25 Jan 2024
Cited by 1 | Viewed by 1789
Abstract
Overconfidence in deep neural networks (DNNs) reduces a model's generalization performance and increases its risk. The deep ensemble method improves the robustness and generalization of a model by combining the prediction results of multiple DNNs. However, training multiple DNNs for model averaging is a time-consuming and resource-intensive process. Moreover, combining multiple base learners (also called inducers) is hard to master, and any wrong choice may result in lower prediction accuracy than that of a single inducer. We propose an approximation method for deep ensembles that can obtain ensembles of multiple DNNs without any additional cost. Specifically, multiple local optimal parameters generated during the training phase are sampled and saved using an intelligent strategy. We use cyclic learning rates starting at 75% of the training process and save the weights associated with the minimum learning rate in every iteration. The saved sets of model parameters are used as the weights of a new model to perform forward propagation during the testing phase. Experiments on benchmarks of two different modalities, static images and dynamic videos, show that our method not only reduces the calibration error of the model but also improves its accuracy.
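A rough analogue of the described weight-sampling scheme using PyTorch's stock SWA utilities: averaging starts at 75% of training and the averaged weights form the final model. This is a close standard tool rather than the authors' exact strategy; the model and data below are placeholders:

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
data = TensorDataset(torch.randn(512, 20), torch.randint(0, 2, (512,)))
loader = DataLoader(data, batch_size=64, shuffle=True)

opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
swa_model = AveragedModel(model)                 # running average of sampled weights
swa_sched = SWALR(opt, swa_lr=0.01)              # SWA learning-rate schedule
epochs, swa_start = 20, int(0.75 * 20)           # start averaging at 75% of training

for epoch in range(epochs):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)       # sample the current weights
        swa_sched.step()

update_bn(loader, swa_model)                     # recompute BatchNorm statistics (no-op here)
```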

17 pages, 3838 KiB  
Article
Visual Analysis Method for Traffic Trajectory with Dynamic Topic Movement Patterns Based on the Improved Markov Decision Process
by Huarong Chen, Yadong Wu, Huaquan Tang, Jing Lei, Guijuan Wang, Weixin Zhao, Jing Liao, Fupan Wang and Zhong Wang
Electronics 2024, 13(3), 467; https://doi.org/10.3390/electronics13030467 - 23 Jan 2024
Viewed by 1086
Abstract
The visual analysis of trajectory topics is helpful for mining potential trajectory patterns, but traditional visual analysis methods ignore the evolution of the temporal coherence of a topic. In this paper, a novel visual analysis method for the dynamic topic analysis of traffic trajectories is proposed, which is used to explore and analyze traffic trajectory topics and their evolution. Firstly, spatial information is integrated into trajectory words, and a dynamic trajectory topic model is computed based on dynamic analysis modeling, thereby correlating the evolution of trajectory topics between adjacent time slices. Secondly, a representative trajectory sequence is generated within each trajectory topic, based on an improved Markov Decision Process, to overcome the topic model's disregard for word order. Subsequently, a set of meaningful visual encodings is designed to analyze the trajectory topics and their evolution through a parallel-window visual model from a spatial-temporal perspective. Finally, a case evaluation shows that the proposed method is effective in analyzing potential trajectory movement patterns.

13 pages, 670 KiB  
Article
RoCS: Knowledge Graph Embedding Based on Joint Cosine Similarity
by Lifeng Wang, Juan Luo, Shiqiao Deng and Xiuyuan Guo
Electronics 2024, 13(1), 147; https://doi.org/10.3390/electronics13010147 - 28 Dec 2023
Cited by 4 | Viewed by 1730
Abstract
Knowledge graphs usually have many missing links, and predicting the relationships between entities has become a hot research topic in recent years. Knowledge graph embedding research maps entities and relations to a low-dimensional continuous space representation to predict links between entities. Existing research shows that the key to the knowledge graph embedding approach is the design of scoring functions. According to the scoring function, knowledge graph embedding methods can be classified into dot product models and distance models. We find that the triple scores obtained using the dot product model or the distance model are unbounded, which leads to large variance. In this paper, we propose RotatE Cosine Similarity (RoCS), a method that computes the joint cosine similarity of complex vectors as a scoring function to make the triple scores bounded. Our approach combines the rotational properties of the complex vector embedding model RotatE to model complex relational patterns. The experimental results demonstrate that RoCS yields substantial improvements over RotatE across various knowledge graph benchmarks, improving hits at 1 (Hits@1) by up to 4.0% on WN18RR and by up to 3.3% on FB15K-237. Meanwhile, our method achieves some new state-of-the-art (SOTA) results, including Hits@3 of 95.6% and Hits@10 of 96.4% on WN18, and a mean reciprocal rank (MRR) of 48.9% and Hits@1 of 44.5% on WN18RR.
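One plausible reading of a joint cosine-similarity score over RotatE-style complex rotations, shown as a NumPy sketch; this is an illustrative simplification with random embeddings, not the authors' exact scoring function:

```python
import numpy as np

def rocs_like_score(h, r_phase, t):
    """Score a triple by the cosine similarity between the rotated head h*r and
    the tail t, with complex vectors flattened to real ones (a "joint" cosine
    over real and imaginary parts). The result is bounded in [-1, 1]."""
    r = np.exp(1j * r_phase)          # unit-modulus rotation, as in RotatE
    hr = h * r
    a = np.concatenate([hr.real, hr.imag])
    b = np.concatenate([t.real, t.imag])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 8
h = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # placeholder head embedding
t = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # placeholder tail embedding
phase = rng.uniform(-np.pi, np.pi, size=dim)           # placeholder relation phases
print(rocs_like_score(h, phase, t))
```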

27 pages, 8733 KiB  
Article
Improved A-Star Path Planning Algorithm in Obstacle Avoidance for the Fixed-Wing Aircraft
by Jing Li, Chaopeng Yu, Ze Zhang, Zimao Sheng, Zhongping Yan, Xiaodong Wu, Wei Zhou, Yang Xie and Jun Huang
Electronics 2023, 12(24), 5047; https://doi.org/10.3390/electronics12245047 - 18 Dec 2023
Cited by 2 | Viewed by 1271
Abstract
The flight management system is a basic component of avionics for modern airliners. However, airborne flight management systems still need improvement and rely on imports, and path planning is the key to the flight management system. Based on the classical A* algorithm, this paper proposes an improved A* path planning algorithm that addresses the problems of low planning efficiency and non-smooth planned paths. To reduce the large amount of data computation and the long planning time of the classical A* algorithm, a new data structure called a "value table" is designed to replace the open and closed tables of the classical A* algorithm and improve retrieval efficiency, and the heap sort algorithm is used to speed up node sorting. To address the difficulty of following the flight trajectory, a trajectory smoothing optimization algorithm combined with a turning angle limit is proposed. The gray value in the digital map is added to the A* algorithm, and the calculation methods of the gray cost, cumulative cost, and estimated cost are improved, which better satisfies the obstacle avoidance constraints. Comparative simulations show that the improved A* algorithm can reduce the path planning time to 1% of that of the classical A* algorithm; the proposed algorithm thus improves path planning efficiency and yields a smoother planned path, giving it clear advantages over the classical A* algorithm.
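For reference, a plain grid A* with a binary-heap open list (the baseline being improved); the paper's value table, gray-value cost, and turning-angle smoothing are not reproduced in this generic sketch, and the grid is invented:

```python
import heapq
from itertools import count

def a_star(grid, start, goal):
    """Plain grid A* using a binary heap as the open list."""
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = count()                               # tie-breaker so heap entries never compare nodes
    open_heap = [(h(start), next(tie), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:                   # already expanded with an equal or better cost
            continue
        came_from[node] = parent
        if node == goal:
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```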

14 pages, 2551 KiB  
Article
Joint Overlapping Event Extraction Model via Role Pre-Judgment with Trigger and Context Embeddings
by Qian Chen, Kehan Yang, Xin Guo, Suge Wang, Jian Liao and Jianxing Zheng
Electronics 2023, 12(22), 4688; https://doi.org/10.3390/electronics12224688 - 18 Nov 2023
Viewed by 1267
Abstract
The objective of event extraction is to recognize event triggers and event categories within unstructured text and produce structured event arguments. However, the triggers and arguments of different event types in a sentence may share the same word elements, which poses new challenges for this task. In this article, a joint learning framework for overlapping event extraction (ROPEE) is proposed. In this framework, a role pre-judgment module is devised prior to argument extraction. It conducts role pre-judgment by leveraging the correlation between event types and roles, as well as trigger embeddings. Experiments on the FewFC dataset show that the proposed model outperforms other baseline models in terms of Trigger Classification, Argument Identification, and Argument Classification by 0.4%, 0.9%, and 0.6%, respectively. In scenarios of trigger overlap and argument overlap, the proposed model outperforms the baseline models in terms of Argument Identification and Argument Classification by 0.9%, 1.2%, 0.7%, and 0.6%, respectively, indicating the effectiveness of ROPEE in handling overlapping events.

23 pages, 1650 KiB  
Article
Resolving Agent Conflicts Using Enhanced Uncertainty Modeling Tools for Intelligent Decision Making
by Yanhui Zhai, Zihan Jia and Deyu Li
Electronics 2023, 12(21), 4547; https://doi.org/10.3390/electronics12214547 - 5 Nov 2023
Cited by 1 | Viewed by 1137
Abstract
Conflict analysis in intelligent decision making has received increasing attention in recent years. However, few researchers have analyzed conflicts by considering trustworthiness from the perspective of common agreement and common opposition. Since the L-fuzzy three-way concept lattice is able to describe both the attributes that objects commonly possess and the attributes that objects commonly do not possess, this paper introduces an L-fuzzy three-way concept lattice to capture the issues on which agents commonly agree and the issues they commonly oppose, and proposes a hybrid conflict analysis model. To resolve the conflicts identified by the proposed model, we formulate the problem as a knapsack problem and propose a method for selecting the optimal attitude change strategy. This strategy takes the associated costs into account and aims to provide the decision maker with the most favorable decision for resolving conflicts and reaching consensus. To validate the effectiveness and feasibility of the proposed model, a case study is conducted, providing evidence of the model's efficacy and viability in resolving conflicts.
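The abstract formulates strategy selection as a knapsack problem; a generic 0/1 knapsack dynamic program is sketched below, with invented persuasion costs and consensus gains standing in for the paper's cost model:

```python
def knapsack(costs, gains, budget):
    """0/1 knapsack: choose attitude-change actions maximizing total consensus
    gain without exceeding the persuasion budget. Returns (best gain, chosen indices)."""
    n = len(costs)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if costs[i - 1] <= b:
                dp[i][b] = max(dp[i][b], dp[i - 1][b - costs[i - 1]] + gains[i - 1])
    chosen, b = [], budget
    for i in range(n, 0, -1):                    # trace back the selected actions
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return dp[n][budget], sorted(chosen)

# Placeholder costs/gains for persuading five agents to change attitude on an issue.
costs = [4, 2, 6, 3, 5]
gains = [7, 3, 9, 4, 6]
print(knapsack(costs, gains, budget=10))   # -> (16, [0, 2])
```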
