Pattern Recognition and Machine Learning Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 August 2023) | Viewed by 45263

Special Issue Editors


Prof. Dr. Junhui Zhao
Guest Editor
1. School of Information Engineering, East China Jiaotong University, Nanchang 330013, China
2. School of Electronic Information Engineering, Beijing Jiaotong University, Beijing 100044, China
Interests: wireless and mobile communications and related applications

Prof. Dr. Lisheng Fan
Guest Editor
School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 511370, China
Interests: wireless communications; security; edge computing; deep learning; federated learning; IoT networks

Dr. Shengrong Bu
Guest Editor
Department of Engineering, Brock University, St. Catharines, ON L2S 3A1, Canada
Interests: smart grids; multi-vector energy microgrids; energy systems; deep reinforcement learning; big data analytics

Special Issue Information

Dear Colleagues,

Applications of pattern recognition and artificial intelligence are growing rapidly in daily life. Among the fields of artificial intelligence, machine learning has been one of the most intensively studied in recent years; the field has undergone a profound shift over the last few decades, opening unprecedented theoretical and practical opportunities. Applications of pattern recognition and machine learning include communications, self-driving vehicles, gaming, and image recognition, among others. Despite their significant success over the past decade, machine learning and pattern recognition methods still fall short in addressing many real-world problems, and much remains to be studied in these areas.

This Special Issue aims to provide an interdisciplinary forum for sharing recent advances in different areas of pattern recognition and artificial intelligence, with an emphasis on new approaches and techniques for machine learning applications. We encourage the submission of papers presenting innovative ideas or research results in all aspects of pattern recognition and machine learning applications.

Topics of interest include (but are not limited to) the following:

  • Statistical and structural pattern recognition methods and applications;
  • Signal and image processing;
  • Computer vision and pattern recognition;
  • Data analytics, data mining, and computing in big data;
  • Machine-learning algorithms, model selection, clustering, and classification;
  • Methodologies, frameworks, and models for pattern recognition;
  • Machine-learning applications:
    • Image analysis;
    • Communications (such as V2X, IoT, and MEC);
    • Information processing;
    • Biometrics analysis;
    • Healthcare and medical image analysis;
    • Natural language processing;
    • Scenario fusion and classification.

Prof. Dr. Junhui Zhao
Prof. Dr. Lisheng Fan
Dr. Shengrong Bu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can access the submission form on the same site. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pattern recognition
  • machine learning
  • communication

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (15 papers)


Research


15 pages, 7871 KiB  
Article
Predicting Power Generation from a Combined Cycle Power Plant Using Transformer Encoders with DNN
by Qiu Yi, Hanqing Xiong and Denghui Wang
Electronics 2023, 12(11), 2431; https://doi.org/10.3390/electronics12112431 - 27 May 2023
Cited by 3 | Viewed by 2564
Abstract
With the development of the Smart Grid, accurate prediction of power generation is becoming an increasingly crucial task. The primary goal of this research is to create an efficient and reliable forecasting model to estimate the full-load power generation of a combined-cycle power plant (CCPP). The dataset used in this research is a subset of the publicly available UCI Machine Learning Repository; it contains 9568 data records collected from a CCPP during full-load operation over a span of six years. To enhance the accuracy of power generation forecasting, a novel forecasting method based on Transformer encoders with deep neural networks (DNNs) is proposed. The model exploits the ability of the Transformer encoder to extract valuable information, and bottleneck DNN blocks with residual connections are used in the DNN component. A series of experiments was conducted, and the performance of the proposed model was evaluated against other state-of-the-art machine learning models on the CCPP dataset. The results illustrate that using Transformer encoders along with a DNN can considerably improve the accuracy of predicting CCPP power generation (RMSE = 3.5370, MAE = 2.4033, MAPE = 0.5307%, and R² = 0.9555).
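
A minimal PyTorch sketch of the architecture family described above: per-feature tokens fed to a Transformer encoder, followed by bottleneck DNN blocks with residual connections and a regression head. The layer sizes, head counts, and depth are illustrative assumptions, not the authors' exact settings:

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return torch.relu(x + self.net(x))  # residual connection

class CCPPRegressor(nn.Module):
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)           # each scalar feature becomes a token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.dnn = nn.Sequential(
            BottleneckBlock(n_features * d_model, 32),
            BottleneckBlock(n_features * d_model, 32),
        )
        self.head = nn.Linear(n_features * d_model, 1)

    def forward(self, x):                             # x: (batch, 4) ambient measurements
        tokens = self.embed(x.unsqueeze(-1))          # (batch, 4, d_model)
        enc = self.encoder(tokens).flatten(1)         # (batch, 4 * d_model)
        return self.head(self.dnn(enc)).squeeze(-1)  # predicted power output

model = CCPPRegressor()
pred = model(torch.randn(8, 4))  # dummy batch of 8 samples
```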

18 pages, 1292 KiB  
Article
Channel Estimation for High-Speed Railway Wireless Communications: A Generative Adversarial Network Approach
by Qingmiao Zhang, Hanzhi Dong and Junhui Zhao
Electronics 2023, 12(7), 1752; https://doi.org/10.3390/electronics12071752 - 6 Apr 2023
Cited by 6 | Viewed by 2856
Abstract
In high-speed railways, the wireless channel and network topology change rapidly due to the high-speed movement of trains and the constantly changing locations of communication equipment. The channel is also affected by noise, making accurate channel estimation more difficult; obtaining accurate channel state information (CSI) is therefore the greatest challenge. In this paper, a two-stage channel-estimation method based on conditional generative adversarial networks (cGANs) is proposed for MIMO-OFDM systems in high-mobility scenarios. The complex channel matrix is treated as an image, and the cGAN is trained adversarially against it to generate a more realistic channel image. In addition, the noise2noise (N2N) algorithm is used to denoise the pilot signal received by the base station to improve the estimation quality. Simulation experiments show that the proposed N2N-cGAN algorithm has better robustness; in particular, it can be adapted to cases with fewer pilot sequences.
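
As a rough illustration (not the paper's exact design), the snippet below treats a complex channel matrix as a two-channel real/imaginary image and pairs a residual-refinement generator with a pix2pix-style conditional discriminator; all shapes and layer choices are assumptions:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):          # noisy LS estimate -> refined channel image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),   # real/imaginary output channels
        )

    def forward(self, h_ls):
        return h_ls + self.net(h_ls)          # residual refinement

class Discriminator(nn.Module):      # judges (condition, candidate) image pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, stride=2, padding=1),
        )

    def forward(self, cond, cand):
        return self.net(torch.cat([cond, cand], dim=1))

G, D = Generator(), Discriminator()
h_ls = torch.randn(1, 2, 32, 64)     # (batch, re/im, subcarriers, antennas), toy shape
h_hat = G(h_ls)                      # refined channel estimate
score = D(h_ls, h_hat)               # adversarial realism score
```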

14 pages, 3359 KiB  
Article
Federated Deep Reinforcement Learning-Based Caching and Bitrate Adaptation for VR Panoramic Video in Clustered MEC Networks
by Yan Li
Electronics 2022, 11(23), 3968; https://doi.org/10.3390/electronics11233968 - 30 Nov 2022
Cited by 3 | Viewed by 1932
Abstract
Virtual reality (VR) panoramic video is more expressive and experiential than traditional video, and with the accelerated deployment of 5G networks it has experienced explosive development. The large data volume and multi-viewport characteristics of VR panoramic videos make them difficult to cache and transcode in advance, so VR panoramic video services urgently need powerful caching and computing capabilities at the network edge. To address this problem, this paper establishes a hierarchical clustered mobile edge computing (MEC) network and develops a data-perception-driven clustered-edge transmission model to provide the edge computing and caching capabilities required by VR panoramic video services. The joint optimization of caching and bitrate adaptation is formulated as a Markov decision process (MDP), and a federated deep reinforcement learning algorithm for caching and bitrate adaptation (FDRL-CBA) is proposed to solve it. Simulation results show that FDRL-CBA outperforms existing DRL-based methods in the same scenarios in terms of cache hit rate and quality of experience (QoE), improving the performance of VR panoramic video services.
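
The federated aspect can be sketched generically: each MEC cluster trains a local policy network, and a server averages the weights. This FedAvg step is a standard sketch under assumed cluster weights, not the paper's exact FDRL-CBA procedure:

```python
import copy
import torch
import torch.nn as nn

def fed_avg(global_net: nn.Module, local_nets, weights):
    """Weighted average of local state_dicts into the global network."""
    avg = copy.deepcopy(global_net.state_dict())
    for key in avg:
        avg[key] = sum(w * net.state_dict()[key]
                       for w, net in zip(weights, local_nets))
    global_net.load_state_dict(avg)
    return global_net

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # toy Q-network
clusters = [copy.deepcopy(policy) for _ in range(3)]    # local DRL training would go here
policy = fed_avg(policy, clusters, weights=[0.5, 0.3, 0.2])  # assumed cluster proportions
```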

23 pages, 10596 KiB  
Article
A Non-Dominated Sorting Genetic Algorithm Based on Voronoi Diagram for Deployment of Wireless Sensor Networks on 3-D Terrains
by Yifeng Tang, Dechang Huang, Rong Li and Zhaodi Huang
Electronics 2022, 11(19), 3024; https://doi.org/10.3390/electronics11193024 - 23 Sep 2022
Cited by 4 | Viewed by 1529
Abstract
The deployment strategy of a wireless sensor network (WSN) affects its quality of service (QoS), and adopting a reasonable deployment strategy can improve it. In this paper, the sensor-node deployment problem for WSNs on three-dimensional (3D) terrain is modeled as a multi-objective optimization problem, with the coverage rate, the unbalanced energy consumption, and the number of sensor nodes as fitness functions. We propose a non-dominated sorting genetic algorithm based on a Voronoi diagram (VNSGA) for solving the deployment problem and improving the QoS of WSNs on 3D terrain. The proposed algorithm applies the Voronoi diagram to obtain node sensing and communication radii suitable for 3D terrain when calculating the fitness functions, and the Pareto optimal solution is obtained by retaining solutions close to the reference points. The experiments compare the proposed algorithm with the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm and the Non-Dominated Sorting Genetic Algorithm III (NSGA-III) on two terrains with different range sizes. The results show that the proposed algorithm outperforms the comparison algorithms on both terrains; on the large-range terrain, it improves coverage to 97.95% and reduces the energy-consumption imbalance to 9.13%.
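
For intuition only, the toy snippet below evaluates two of the objectives (coverage rate and node count) for a candidate deployment on a gridded 3D terrain; the random terrain, the fixed sensing radius, and the omission of the paper's Voronoi-based radius selection are all simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
terrain = rng.uniform(0, 5, size=(50, 50))            # assumed 50x50 height map

def coverage_rate(nodes, r_sense):
    """Fraction of terrain grid points within sensing range of at least one node."""
    ys, xs = np.meshgrid(np.arange(50), np.arange(50), indexing="ij")
    pts = np.stack([xs, ys, terrain], axis=-1).reshape(-1, 3)   # (2500, 3)
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=-1)
    return float((d.min(axis=1) <= r_sense).mean())

# Candidate deployment: 20 nodes placed on the terrain surface.
xy = rng.uniform(0, 49, size=(20, 2))
z = terrain[xy[:, 1].astype(int), xy[:, 0].astype(int)]
nodes = np.column_stack([xy, z])
objectives = (coverage_rate(nodes, r_sense=8.0), len(nodes))  # maximize, minimize
```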

17 pages, 19974 KiB  
Article
Constructing a Gene Regulatory Network Based on a Nonhomogeneous Dynamic Bayesian Network
by Jiayao Zhang, Chunling Hu and Qianqian Zhang
Electronics 2022, 11(18), 2936; https://doi.org/10.3390/electronics11182936 - 16 Sep 2022
Cited by 1 | Viewed by 1841
Abstract
Since the regulatory relationships between genes are usually non-stationary, the homogeneity assumption cannot be satisfied when modeling with dynamic Bayesian networks (DBNs), and it should therefore be relaxed. Various methods combining multiple changepoint processes with DBNs have been proposed for this purpose; when using a non-homogeneous DBN to model a gene regulatory network, the changepoints of the gene data must be inferred. Based on this analysis, this paper first proposes a Euclidean distance-based birth move (ED-birth move) that makes full use of the potential information in the data to infer changepoints: the greater the Euclidean distance between the means of the data in the two components on either side of a point, the more likely that point is to be selected as a new changepoint. Furthermore, an improved Markov chain Monte Carlo (MCMC) method is proposed that introduces the Pearson correlation coefficients (PCCs) to sample the parent node set: the larger the absolute value of the correlation between two nodes, the more likely the candidate is to be sampled. Compared with other classical models on Saccharomyces cerevisiae data, synthetic data, RAF pathway data, and Arabidopsis data, the proposed PCCs-ED-DBN improves the accuracy of gene network reconstruction and further improves the convergence and stability of the modeling process.
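
The ED-birth move's proposal rule can be sketched directly from the description above: the probability of proposing a new changepoint at position t is made proportional to the Euclidean distance between the segment means on either side of t. Details such as the normalization are assumptions in this sketch:

```python
import numpy as np

def ed_birth_probs(x):
    """x: (T, genes) time series; proposal distribution over positions 1..T-1."""
    T = x.shape[0]
    scores = np.array([
        np.linalg.norm(x[:t].mean(axis=0) - x[t:].mean(axis=0))
        for t in range(1, T)
    ])
    return scores / scores.sum()          # normalize into proposal probabilities

rng = np.random.default_rng(1)
# Toy series with a mean shift at t = 30, 5 genes.
x = np.concatenate([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
p = ed_birth_probs(x)
t_new = 1 + rng.choice(len(p), p=p)       # proposed changepoint, likely near t = 30
```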

12 pages, 4066 KiB  
Article
Used Car Price Prediction Based on the Iterative Framework of XGBoost+LightGBM
by Baoyang Cui, Zhonglin Ye, Haixing Zhao, Zhuome Renqing, Lei Meng and Yanlin Yang
Electronics 2022, 11(18), 2932; https://doi.org/10.3390/electronics11182932 - 16 Sep 2022
Cited by 12 | Viewed by 4708
Abstract
To better address the low prediction accuracy of used-car prices under large feature sets and big data, and to improve on existing deep learning models, an iterative framework combining XGBoost and LightGBM is proposed in this paper. First, the relevant data processing is carried out on the initial features. Then, a deep residual network is trained, and its predictions are fused with the original features as new features. Finally, the new feature group is input into the iterative framework for training, and the iteration stops and outputs the results when performance peaks. The experimental results show that the combination of the deep residual network and the iterative framework achieves better prediction accuracy than the random forest and the deep residual network alone. Moreover, combining existing mainstream methods with the iterative framework verifies that the proposed framework can be applied to other models and greatly improves their prediction performance.
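
A minimal sketch of the iterative loop, with the deep-residual-network stage simplified away: each round fuses the XGBoost and LightGBM predictions into the feature matrix and retrains, stopping when validation performance peaks. The data, hyperparameters, and blending rule are assumptions:

```python
import numpy as np
import xgboost as xgb
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                 # toy feature matrix
y = X[:, 0] * 3 + rng.normal(size=2000)         # toy price target
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

best = np.inf
for _ in range(5):                              # assumed maximum iteration count
    m1 = xgb.XGBRegressor(n_estimators=200).fit(X_tr, y_tr)
    m2 = lgb.LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)
    blend_va = (m1.predict(X_va) + m2.predict(X_va)) / 2
    err = mean_absolute_error(y_va, blend_va)
    if err >= best:                             # stop once performance peaks
        break
    best = err
    # Fuse predictions with the existing features as a new feature column.
    X_tr = np.column_stack([X_tr, (m1.predict(X_tr) + m2.predict(X_tr)) / 2])
    X_va = np.column_stack([X_va, blend_va])
```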

11 pages, 1608 KiB  
Article
A Typed Iteration Approach for Spoken Language Understanding
by Yali Pang, Peilin Yu and Zhichang Zhang
Electronics 2022, 11(17), 2793; https://doi.org/10.3390/electronics11172793 - 5 Sep 2022
Cited by 2 | Viewed by 1752
Abstract
A spoken language understanding (SLU) system usually involves two subtasks: intent detection (ID) and slot filling (SF). Joint modeling of ID and SF has recently been demonstrated empirically to improve performance. However, existing joint models cannot explicitly use the encoded information of the two subtasks to realize mutual interaction, nor can they achieve a bidirectional connection between them. In this paper, we propose a typed abstraction mechanism that enhances intent detection by utilizing the encoded information of the SF task. In addition, we design a typed iteration approach that achieves a bidirectional connection of the encoded information and mitigates the negative effects of error propagation. Experimental results on two public datasets, ATIS and SNIPS, show the superiority of our approach over baseline methods, indicating the effectiveness of the typed iteration approach.

15 pages, 2880 KiB  
Article
Visible–Infrared Person Re-Identification via Global Feature Constraints Led by Local Features
by Jin Wang, Kaiwei Jiang, Tianqi Zhang, Xiang Gu, Guoqing Liu and Xin Lu
Electronics 2022, 11(17), 2645; https://doi.org/10.3390/electronics11172645 - 24 Aug 2022
Cited by 2 | Viewed by 1515
Abstract
Smart security is needed for complex scenarios such as all-weather and multi-scene environments, and visible–infrared person re-identification (VI Re-ID) has become a key technique in this field. VI Re-ID is usually modeled as a pattern recognition problem facing both inter-modality and intra-modality discrepancies. To alleviate these problems, we designed the Local Features Leading Global Features Network (LoLeG-Net), a representation learning network. For inter-modality discrepancies, we employed a combination of ResNet50 and non-local attention blocks to obtain modality-shareable features, converting the task into a single-modality person re-identification (Re-ID) problem. For intra-modality variations, we designed global feature constraints led by local features: identity loss and hetero-center loss alleviate intra-modality variations of the local features, while hard-sample-mining triplet loss combined with identity loss ensures the effectiveness of the global features. With this method, the final extracted global features are much more robust against background variation, pose differences, occlusion, and other noise. The experiments demonstrate that LoLeG-Net is superior to existing works, achieving Rank-1/mAP of 51.40%/51.41% on SYSU-MM01 and 76.58%/73.36% on RegDB.
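
Two of the loss terms named above can be sketched compactly: the snippet below combines the identity (cross-entropy) loss with a hetero-center term that pulls together the per-identity feature centers of the two modalities. The weighting is an assumption, and the hard-mining triplet term is omitted:

```python
import torch
import torch.nn.functional as F

def hetero_center_loss(feat_vis, feat_ir, labels):
    """Mean squared distance between per-identity centers of the two modalities."""
    ids = labels.unique()
    loss = sum(F.mse_loss(feat_vis[labels == i].mean(0),
                          feat_ir[labels == i].mean(0)) for i in ids)
    return loss / len(ids)

def training_loss(logits, feat_vis, feat_ir, labels, lam=0.5):  # lam is assumed
    return F.cross_entropy(logits, labels) + lam * hetero_center_loss(
        feat_vis, feat_ir, labels)

# Toy batch: 16 visible/infrared feature pairs, 4 identities, 256-d features.
feat_v, feat_i = torch.randn(16, 256), torch.randn(16, 256)
labels = torch.randint(0, 4, (16,))
logits = torch.randn(16, 4)
loss = training_loss(logits, feat_v, feat_i, labels)
```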

19 pages, 8191 KiB  
Article
Single-Objective Particle Swarm Optimization-Based Chaotic Image Encryption Scheme
by Jingya Wang, Xianhua Song and Ahmed A. Abd El-Latif
Electronics 2022, 11(16), 2628; https://doi.org/10.3390/electronics11162628 - 22 Aug 2022
Cited by 14 | Viewed by 2825
Abstract
High security has always been the ultimate goal of image encryption: the closer the ciphertext image is to a truly random one, the higher the security. For popular chaotic image encryption methods, particle swarm optimization (PSO) is studied to select the parameters and initial values of chaotic systems so that the chaotic sequence has higher entropy. Unlike other PSO-based image encryption methods, the proposed method takes the parameters and initial values of the chaotic system as particles instead of the encrypted images, which lowers complexity and makes the method easier to apply in real-time scenarios. To validate the optimization framework, this paper designs a new image encryption scheme comprising key selection, chaotic sequence preprocessing, block scrambling, expansion, confusion, and diffusion. The key is selected by PSO and fed into the chaotic map, and the generated chaotic sequence is preprocessed. Based on block theory, a new intrablock and interblock scrambling method is designed and combined with image expansion to encrypt the image. Subsequently, a confusion and diffusion framework, including row and column confusion diffusion, is used as the last step of the encryption process, taking security a step further. Experimental tests show that the scheme has good encryption performance and higher security compared with popular image encryption methods.
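
The key-search idea can be illustrated with a toy PSO over logistic-map parameters, scoring each particle by the Shannon entropy of the quantized chaotic sequence. The map, parameter ranges, and PSO settings are assumptions, not the scheme's actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_fitness(mu, x0, n=4096):
    """Shannon entropy (bits) of a byte-quantized logistic-map sequence."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)              # logistic map iteration
        seq[i] = x
    counts = np.bincount((seq * 256).astype(int) % 256, minlength=256)
    p = counts[counts > 0] / n
    return -(p * np.log2(p)).sum()        # higher entropy -> better key

# Plain PSO: particles are (mu, x0) pairs with mu in [3.9, 4.0), x0 in (0, 1).
pos = np.column_stack([rng.uniform(3.9, 4.0, 20), rng.uniform(0.01, 0.99, 20)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([entropy_fitness(m, x) for m, x in pos])
for _ in range(30):
    g = pbest[pbest_f.argmax()]           # global best particle
    vel = 0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos) \
          + 1.5 * rng.random(pos.shape) * (g - pos)
    pos = np.clip(pos + vel, [3.9, 0.01], [3.999, 0.99])
    f = np.array([entropy_fitness(m, x) for m, x in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
mu_best, x0_best = pbest[pbest_f.argmax()]  # selected chaotic-system key
```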

13 pages, 4265 KiB  
Article
Optimizing the Quantum Circuit for Solving Boolean Equations Based on Grover Search Algorithm
by Hui Liu, Fukun Li and Yilin Fan
Electronics 2022, 11(15), 2467; https://doi.org/10.3390/electronics11152467 - 8 Aug 2022
Cited by 2 | Viewed by 2510
Abstract
The solution of nonlinear Boolean equations over a binary field plays a crucial part in cryptanalysis and computational mathematics, and speeding up their solution is an urgent task. In this paper, we propose a method for solving Boolean equations based on the Grover algorithm combined with classical preprocessing, optimizing the quantum circuit for solving the equations and automatically generating the quantum circuits. The method first converts the Boolean equations into Boolean expressions to construct the oracle in the Grover algorithm. The quantum circuit was emulated with the IBM Qiskit framework, on which the Grover algorithm was simulated and the solution of the Boolean equations implemented. The experimental results prove the feasibility of using the Grover algorithm to solve nonlinear Boolean equations over a binary field: the correct answer was successfully found with a search space of 2^21 using three Grover iterations. The method increases the solvable scale and solving speed of Boolean equations and enlarges the application area of the Grover algorithm.
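
For intuition, the following NumPy statevector toy (not the paper's Qiskit circuits) runs Grover iterations against an assumed three-variable Boolean system: the oracle phase-flips satisfying assignments, and the diffusion step inverts all amplitudes about their mean:

```python
import numpy as np

n = 3                                          # 3 Boolean variables -> 8 basis states
def satisfies(bits):                           # toy system: x0 XOR x1 = 1, x1 AND x2 = 1
    x0, x1, x2 = bits
    return (x0 ^ x1) == 1 and (x1 & x2) == 1

marked = [i for i in range(2 ** n)
          if satisfies(tuple((i >> k) & 1 for k in range(n)))]
state = np.full(2 ** n, 1 / np.sqrt(2 ** n))   # uniform superposition
for _ in range(2):                             # ~(pi/4) * sqrt(N/M) iterations
    state[marked] *= -1                        # oracle: phase-flip the solutions
    state = 2 * state.mean() - state           # diffusion: inversion about the mean
print(np.argmax(state ** 2))                   # most probable measured assignment (6)
```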

26 pages, 3446 KiB  
Article
Compiler Optimization Parameter Selection Method Based on Ensemble Learning
by Hui Liu, Jinlong Xu, Sen Chen and Te Guo
Electronics 2022, 11(15), 2452; https://doi.org/10.3390/electronics11152452 - 6 Aug 2022
Cited by 5 | Viewed by 3057
Abstract
Iterative compilation based on machine learning can effectively predict a program's compiler optimization parameters. Despite some limitations, such as the low efficiency of optimization parameter search and limited prediction accuracy, machine learning-based solutions have been a frontier research area in iterative compilation and have gained increasing attention. The research challenges center on learning algorithm selection, optimal parameter search, and program feature representation. To address these problems, we propose an ensemble learning-based optimization parameter selection (ELOPS) method for the compiler. First, to further improve search efficiency and accuracy, we propose a multi-objective particle swarm optimization (PSO) algorithm to determine the optimal compiler parameters of a program. Second, we extract mixed features of the program through a feature-class relevance method, rather than using static or dynamic features alone. Finally, since existing research usually uses a single machine learning algorithm to build prediction models, an ensemble learning model using program features and optimization parameters is constructed to effectively predict the compiler optimization parameters of new programs. Using the Standard Performance Evaluation Corporation 2006 (SPEC2006) and NAS Parallel Benchmark (NPB) suites as well as some typical scientific computing programs, we compared ELOPS with existing methods. The results show speedups of 1.29× and 1.26× on two platforms, respectively, outperforming existing methods.
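
The prediction stage can be sketched as learning a mapping from (program features, flag vector) to speedup with an ensemble, then ranking candidate flag settings for a new program. The features, flags, and the random candidate search standing in for the paper's PSO are synthetic assumptions:

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              VotingRegressor)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 10))           # mixed static/dynamic program features
flags = rng.integers(0, 2, size=(500, 6))    # six binary optimization flags
speedup = 1 + 0.3 * flags[:, 0] * feats[:, 0] + rng.normal(0, 0.05, 500)  # toy target

ens = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
]).fit(np.hstack([feats, flags]), speedup)

new_prog = rng.normal(size=(1, 10))
cands = rng.integers(0, 2, size=(64, 6))     # candidate flag settings (PSO in the paper)
scores = ens.predict(np.hstack([np.repeat(new_prog, 64, axis=0), cands]))
best_flags = cands[scores.argmax()]          # predicted-best parameter setting
```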

16 pages, 822 KiB  
Article
Leveraging Deep Features Enhance and Semantic-Preserving Hashing for Image Retrieval
by Xusheng Zhao and Jinglei Liu
Electronics 2022, 11(15), 2391; https://doi.org/10.3390/electronics11152391 - 30 Jul 2022
Cited by 1 | Viewed by 2001
Abstract
Hashing can convert high-dimensional data into simple binary codes, whose speed and small storage footprint are advantageous in large-scale image retrieval, so it is increasingly favored. However, traditional hash methods have two common shortcomings that affect retrieval accuracy. First, most of them extract many irrelevant image features, introducing partial information bias into the resulting binary code. Furthermore, the binary code cannot maintain the semantic similarity of the images. To address these two problems, we propose a new network architecture that adds a feature enhancement layer to better extract image features and remove redundant ones, models the relationship between labels and image features to better preserve semantic relationships, and expresses the similarity between images through a contrastive loss, thereby constructing compact and exact binary codes. A balance loss makes the numbers of 0s and 1s in each code balanced, resulting in a more compact binary code. Extensive experiments on three commonly used datasets—CIFAR-10, NUS-WIDE, and SVHN—show that our approach (DFEH) performs well compared with the most advanced approaches.
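
The two objectives described above can be sketched as follows: a contrastive term on relaxed (tanh) hash codes and a balance term pushing each code's mean toward zero so that the +1/−1 bits are balanced. The margins and weights are assumed:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h1, h2, same, margin=2.0):
    """Pull similar pairs together, push dissimilar pairs beyond the margin."""
    d = F.pairwise_distance(h1, h2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def balance_loss(h):
    return h.mean(dim=1).pow(2).mean()        # zero-mean codes -> balanced bits

# Toy batch: 32 pairs of 48-bit relaxed codes from some hashing network.
h1, h2 = torch.tanh(torch.randn(32, 48)), torch.tanh(torch.randn(32, 48))
same = torch.randint(0, 2, (32,)).float()     # 1 if the pair shares a label
loss = contrastive_loss(h1, h2, same) + 0.1 * balance_loss(h1)
codes = torch.sign(h1)                        # final binary code
```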

17 pages, 3043 KiB  
Article
Block Diagonal Least Squares Regression for Subspace Clustering
by Lili Fan, Guifu Lu, Tao Liu and Yong Wang
Electronics 2022, 11(15), 2375; https://doi.org/10.3390/electronics11152375 - 29 Jul 2022
Cited by 4 | Viewed by 1514
Abstract
Least squares regression (LSR) is an effective method that has been widely used for subspace clustering. Under the conditions of independent subspaces and noise-free data, the coefficient matrix satisfies an enforced block diagonal (EBD) structure and achieves good clustering results; more importantly, LSR has a closed-form solution that is easy to compute. However, the block diagonal structure of the LSR solution is fragile and easily destroyed by noise or corruption, and on real datasets it cannot always guarantee satisfactory clustering results. Considering the excellent clustering performance of block diagonal representation, the idea of block diagonal constraints is introduced into LSR, and a new subspace clustering method named block diagonal least squares regression (BDLSR) is proposed. By using a block diagonal regularizer, BDLSR effectively reinforces the fragile block diagonal structure of the obtained matrix and improves clustering performance. Our experiments on several real datasets illustrate that BDLSR achieves higher clustering performance than other algorithms.
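
The LSR baseline that BDLSR builds on has the closed form Z = (X^T X + lambda I)^(-1) X^T X, after which the affinity |Z| + |Z|^T is clustered spectrally. The sketch below shows this baseline on toy data and does not reproduce the block diagonal regularizer itself:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy data: 2 independent 3-d subspaces, 40 points each, ambient dimension 30.
B1, B2 = rng.normal(size=(30, 3)), rng.normal(size=(30, 3))
X = np.hstack([B1 @ rng.normal(size=(3, 40)), B2 @ rng.normal(size=(3, 40))])

lam = 0.1                                      # assumed regularization weight
Z = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ X)  # closed-form LSR
A = np.abs(Z) + np.abs(Z).T                    # symmetric affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
```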

16 pages, 1831 KiB  
Article
Anatomical Landmark Detection Using a Feature-Sharing Knowledge Distillation-Based Neural Network
by Di Huang, Yuzhao Wang, Yu Wang, Guishan Gu and Tian Bai
Electronics 2022, 11(15), 2337; https://doi.org/10.3390/electronics11152337 - 27 Jul 2022
Cited by 1 | Viewed by 2008
Abstract
Existing anatomical landmark detection methods pursue performance gains with heavyweight network architectures, which leads to models with poor scalability and cost-effectiveness. State-of-the-art knowledge distillation (KD) methods have been proposed to solve this problem, but they only require the teacher model to guide the output of the student model's final layer, so the semantic information learned by the student model is very limited. Unlike previous works, we propose a novel KD-based model-training strategy, named feature-sharing fast landmark detection (FSF-LD), which focuses on intermediate features and effectively transfers richer spatial information from the teacher model to the student model. Moreover, to generate richer and more reliable knowledge, we propose a multi-task learning structure to pretrain the teacher model before FSF-LD. Finally, a tiny and effective anatomical landmark detection model is obtained. We evaluate FSF-LD on a public 2D hand radiograph dataset, a public 2D cephalometric radiograph dataset, and a private 2D hip radiograph dataset. On the 2D hand dataset, FSF-LD improves SDR (r = 2 mm, 2.5 mm, 3 mm, 4 mm) by 11.7%, 12.1%, 12.0%, and 11.4% compared with other KD methods. The results suggest the superiority of FSF-LD in terms of model performance and cost-effectiveness; further improving detection accuracy and realizing clinical application remain challenges and are our next plan.
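
The intermediate-feature distillation idea can be sketched as a student trained on the heatmap target plus an MSE term matching a teacher's intermediate feature map through a 1×1 adapter when widths differ; the networks, landmark count, and weighting below are stand-in assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher_body = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU())  # stand-in teacher
student_body = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # lightweight student
student_head = nn.Conv2d(16, 19, 1)           # e.g., 19 landmark heatmaps (assumed)
adapter = nn.Conv2d(16, 64, 1)                # match student width to the teacher's

def fsf_step(img, target_heatmaps):
    with torch.no_grad():
        t_feat = teacher_body(img)            # frozen, (pre)trained teacher features
    s_feat = student_body(img)
    task_loss = F.mse_loss(student_head(s_feat), target_heatmaps)
    kd_loss = F.mse_loss(adapter(s_feat), t_feat)  # intermediate-feature matching
    return task_loss + 0.5 * kd_loss          # assumed weighting

img = torch.randn(2, 1, 64, 64)
loss = fsf_step(img, torch.randn(2, 19, 64, 64))
```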

Review


28 pages, 573 KiB  
Review
Recent Progress of Using Knowledge Graph for Cybersecurity
by Kai Liu, Fei Wang, Zhaoyun Ding, Sheng Liang, Zhengfei Yu and Yun Zhou
Electronics 2022, 11(15), 2287; https://doi.org/10.3390/electronics11152287 - 22 Jul 2022
Cited by 25 | Viewed by 10331
Abstract
In today’s dynamic, complex cyber environments, Cyber Threat Intelligence (CTI) and the risk of cyberattacks are both increasing, which means organizations need a strong understanding of both their internal and external CTI. The potential of cybersecurity knowledge graphs lies in their ability to aggregate and represent knowledge about cyber threats, as well as to manage and reason with that knowledge. While most existing research has focused on how to create a complete knowledge graph, how to utilize the knowledge graph to tackle real-world industrial difficulties in cyberattack and defense situations remains unclear. In this article, we give a quick overview of the cybersecurity knowledge graph’s core concepts, schema, and construction methodologies. We also review relevant datasets and open-source frameworks for information extraction and knowledge creation to aid future studies on cybersecurity knowledge graphs. The majority of this paper is a comparative assessment of works on recent advances in application scenarios of the cybersecurity knowledge graph, for which a new comprehensive classification system with 9 core categories and 18 subcategories is developed. Finally, based on analysis of existing research issues, we give a detailed overview of possible future research directions.
