Topic Editors

Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, China
Center for AI Research, University of Agder, Grimstad, Norway

Advances in Integrative AI, Machine Learning, and Big Data for Transformative Applications

Abstract submission deadline
31 July 2026
Manuscript submission deadline
31 October 2026

Topic Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI), Machine Learning (ML), and Big Data have emerged as pivotal technologies driving innovation across a wide range of disciplines. They have transformed fields such as engineering, healthcare, and environmental science, fundamentally reshaping how we approach complex problems and generate insights from vast datasets. This Topic aims to showcase that impact by exploring transformative applications of AI, ML, and Big Data. It invites researchers and practitioners to contribute original research articles and reviews that highlight innovative methodologies and practical implementations. Submissions are encouraged to address the challenges and opportunities of integrating AI, ML, and Big Data across diverse domains, fostering interdisciplinary collaboration and pushing the boundaries of technological advancement. Topics of interest include, but are not limited to:

  • Integrative approaches using AI and ML in engineering, materials science, healthcare, environmental monitoring, and remote sensing.
  • AI-driven analytics and predictive modeling for optimized decision-making and resource management.
  • Innovations in ML algorithms for real-time data processing and pattern recognition.
  • Applications of Big Data in addressing societal challenges and advancing global sustainability efforts.

By exploring these themes, this Topic aims to provide insight into the transformative potential of AI, ML, and Big Data, catalyzing further advances and applications across sectors.

Dr. Peiying Zhang
Prof. Dr. Athanasios V. Vasilakos
Topic Editors

Keywords

  • artificial intelligence (AI)
  • machine learning (ML)
  • big data
  • data-driven analytics
  • predictive modeling
  • algorithm

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.5 | 5.3 | 2011 | 17.8 days | CHF 2400
Electronics (electronics) | 2.6 | 5.3 | 2012 | 16.8 days | CHF 2400
Information (information) | 2.4 | 6.9 | 2010 | 14.9 days | CHF 1600
Drones (drones) | 4.4 | 5.6 | 2017 | 21.7 days | CHF 2600
Sensors (sensors) | 3.4 | 7.3 | 2001 | 16.8 days | CHF 2600
Big Data and Cognitive Computing (BDCC) | 3.7 | 7.1 | 2017 | 18 days | CHF 1800

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (6 papers)

16 pages, 1049 KiB  
Article
Border Gateway Protocol Route Leak Detection Technique Based on Graph Features and Machine Learning
by Chen Shen, Ruixin Wang, Xiang Li, Peiying Zhang, Kai Liu and Lizhuang Tan
Electronics 2024, 13(20), 4072; https://doi.org/10.3390/electronics13204072 - 16 Oct 2024
Abstract
On the Internet, autonomous systems (ASs) are interconnected using the Border Gateway Protocol (BGP). However, because security was not a design consideration for BGP, a series of security issues arise during the propagation of routing information, such as prefix hijacking, route leakage, and AS path tampering. This paper therefore investigates the detection of route leaks. By analyzing BGP routing information, we abstract the routing propagation relationships between ASs into a network topology graph and extract graph features from graphs built from routing data at fixed time intervals. Based on the structural robustness and centrality features of the graph, we determine whether a route leak occurred during the current time period. To this end, we apply machine learning methods and propose a weighted voting model: multiple single models are trained and assigned weights, and the weighted combination of their outputs decides whether a route leak has occurred. A genetic algorithm is used to determine the corresponding weights. The experimental results show that the proposed method achieves high accuracy and outperforms any single model across multiple datasets.
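
The detection pipeline described above (graph features, several base classifiers, genetic-algorithm-tuned vote weights) can be caricatured in a few lines. The sketch below is not the authors' code: the feature matrix and labels are random placeholders standing in for features extracted from BGP topology snapshots, and the GA is a deliberately minimal truncation-selection variant.

    # Illustrative sketch of a weighted-voting ensemble with GA-tuned weights.
    # X (graph features per time window) and y (leak / no-leak labels) are
    # placeholders for features extracted from BGP routing snapshots.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))             # placeholder graph features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder labels

    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    models = [
        RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr),
        LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
        SVC(probability=True, random_state=0).fit(X_tr, y_tr),
    ]
    probs = np.stack([m.predict_proba(X_val)[:, 1] for m in models])

    def fitness(w):
        """Validation accuracy of the weighted vote with weights w."""
        pred = (w @ probs / w.sum() > 0.5).astype(int)
        return accuracy_score(y_val, pred)

    # Minimal genetic algorithm: truncation selection plus Gaussian mutation.
    pop = rng.random((30, len(models)))
    for _ in range(40):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
        children = parents[rng.integers(0, 10, size=20)]
        children = np.clip(children + rng.normal(0, 0.1, children.shape), 1e-6, None)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("best weights:", best / best.sum(), "val accuracy:", fitness(best))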

19 pages, 593 KiB  
Article
A Resource Allocation Algorithm for Cloud-Network Collaborative Satellite Networks with Differentiated QoS Requirements
by Zhimin Shao, Qingyang Ding, Lingzhen Meng, Tao Yang, Shengpeng Chen and Yapeng Li
Electronics 2024, 13(19), 3843; https://doi.org/10.3390/electronics13193843 - 28 Sep 2024
Abstract
With the continuous advancement of cloud computing and satellite communication technology, the cloud-network-integrated satellite network has emerged as a novel network architecture. This architecture harnesses the benefits of cloud computing and satellite communication to achieve global coverage, high reliability, and flexible information services. However, as business types and user demands grow, meeting differentiated Quality of Service (QoS) requirements has become a crucial challenge for such networks, and effective resource allocation algorithms are essential to this end. Research on resource allocation for differentiated QoS in cloud-network-integrated satellite networks is still at an early stage: existing results suffer from high algorithmic complexity, limited practicality, and a lack of effective evaluation and adjustment mechanisms. The first part of this study reviews existing virtual network mapping methods; we then propose a QoS-aware virtual network mapping approach based on reinforcement learning. By introducing QoS satisfaction parameters, the algorithm improves user QoS and the request acceptance ratio at unchanged computational complexity, with noticeable gains in resource utilization efficiency as well. Experiments confirm the practicality of the proposed virtual network embedding algorithm for satellite networks (SN-VNE), based on Reinforcement Learning (RL), in meeting QoS requirements and improving the utilization of limited heterogeneous resources. Compared with DDRL-VNE, CDRL, and DSCD-VNE, our algorithm improves the acceptance ratio of VNE requests, long-term average revenue, and delay by an average of 7.9%, 15.87%, and 63.21%, respectively.
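
The abstract gives no implementation details, so the sketch below is only a guess at what a QoS-aware, RL-driven node-mapping loop might look like: a toy tabular Q-learning agent whose reward mixes a revenue term with a QoS-satisfaction term. The capacities, QoS scores, and reward weighting are all hypothetical, not the SN-VNE algorithm.

    # Toy Q-learning sketch for QoS-aware virtual network embedding (VNE).
    # Assumptions: substrate nodes have CPU capacities and a QoS score;
    # virtual nodes carry a CPU demand and a QoS weight.
    import random

    SUBSTRATE_CPU = [40, 60, 80, 100]            # capacities of 4 substrate nodes
    QOS_SCORE = [0.5, 0.7, 0.8, 1.0]             # e.g. normalized latency/reliability
    requests = [(10, 0.2), (30, 0.9), (20, 0.5)]  # (cpu demand, qos weight)

    Q = {}                                        # Q[(vnode, snode)] -> value
    alpha, gamma, eps = 0.3, 0.9, 0.2

    def reward(demand, qos_w, snode, free):
        if demand > free[snode]:
            return -1.0                           # rejected: capacity violated
        # revenue term + QoS-satisfaction term (illustrative weighting)
        return demand / 100.0 + qos_w * QOS_SCORE[snode]

    for episode in range(2000):
        free = SUBSTRATE_CPU[:]
        for v, (demand, qos_w) in enumerate(requests):
            if random.random() < eps:             # epsilon-greedy exploration
                s = random.randrange(len(free))
            else:
                s = max(range(len(free)), key=lambda a: Q.get((v, a), 0.0))
            r = reward(demand, qos_w, s, free)
            if r > 0:
                free[s] -= demand                 # commit the mapping
            nxt = max((Q.get((v + 1, a), 0.0) for a in range(len(free))), default=0.0)
            Q[(v, s)] = (1 - alpha) * Q.get((v, s), 0.0) + alpha * (r + gamma * nxt)

    policy = {v: max(range(len(SUBSTRATE_CPU)), key=lambda a: Q.get((v, a), 0.0))
              for v in range(len(requests))}
    print("learned node mapping:", policy)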

31 pages, 2928 KiB  
Review
Literature Review of Explainable Tabular Data Analysis
by Helen O’Brien Quinn, Mohamed Sedky, Janet Francis and Michael Streeton
Electronics 2024, 13(19), 3806; https://doi.org/10.3390/electronics13193806 - 26 Sep 2024
Cited by 1
Abstract
Explainable artificial intelligence (XAI) is crucial for enhancing transparency and trust in machine learning models, especially for tabular data used in finance, healthcare, and marketing. This paper surveys XAI techniques for tabular data, building on previous work, specifically an earlier survey of explainable artificial intelligence for tabular data, and analyzes recent advancements. It categorizes and describes XAI methods relevant to tabular data, identifies domain-specific challenges and gaps, and examines potential applications and trends. Future research directions emphasize clarifying terminology, ensuring data security, creating user-centered explanations, improving interaction, developing robust evaluation metrics, and advancing adversarial example analysis. This contribution aims to bolster effective, trustworthy, and transparent decision making in the field of XAI.
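
As a concrete taste of the model-agnostic, post-hoc techniques such a survey categorizes, the snippet below computes permutation feature importance for a tabular classifier with scikit-learn. The method and dataset are chosen here purely for illustration; they are not taken from the review itself.

    # One post-hoc, model-agnostic explanation for tabular data:
    # permutation feature importance (illustrative example).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffle each feature and measure the drop in test accuracy; large drops
    # indicate features the model actually relies on.
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    ranking = result.importances_mean.argsort()[::-1]
    for i in ranking[:5]:
        print(f"{X.columns[i]:30s} {result.importances_mean[i]:.4f}")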

28 pages, 1094 KiB  
Article
Efficient Convolutional Neural Networks Utilizing Fine-Grained Fast Fourier Transforms
by Yulin Zhang, Feipeng Li, Haoke Xu, Xiaoming Li and Shan Jiang
Electronics 2024, 13(18), 3765; https://doi.org/10.3390/electronics13183765 - 22 Sep 2024
Abstract
Convolutional Neural Networks (CNNs) are among the most prevalent deep learning techniques employed across various domains. The computational complexity of CNNs is largely attributable to the convolution operations, which are computationally demanding and significantly impact overall model performance. Traditional CNN implementations convert convolutions into matrix operations via the im2col (image-to-column) technique, facilitating parallelization through highly optimized BLAS libraries. This study identifies and investigates a significant yet intricate pattern of data redundancy in this matrix-based representation of convolution, a pattern that, while complex, presents opportunities for optimization. Through careful analysis of the redundancy inherent in the im2col approach, the paper introduces a mathematically succinct matrix representation of convolution, leading to an optimized FFT-based convolution with finer FFT granularity. Benchmarks show that our approach achieves an average speedup of 14 times and a maximum speedup of 17 times over regular FFT convolution, and outperforms the im2col+GEMM approach from NVIDIA's cuDNN library with an average speedup of 3 times and a maximum speedup of 5 times. Integrated into Caffe, a widely used deep learning framework, the FineGrained FFT convolution yields significant performance gains: evaluations on synthetic CNNs designed for real-world applications show an average speedup of 1.67 times, and a modified VGG network variant achieves a speedup of 1.25 times.
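
The baseline this paper accelerates, convolution via the convolution theorem, is easy to reproduce: zero-pad both operands to the full output size, multiply their spectra pointwise, and invert. The NumPy sketch below checks this against direct 2D convolution; the paper's fine-grained FFT decomposition and GPU kernels are well beyond this illustration.

    # Baseline FFT convolution via the convolution theorem (illustration only).
    import numpy as np
    from scipy.signal import convolve2d

    def fft_conv2d(image, kernel):
        """Full 2D linear convolution computed in the frequency domain."""
        out_shape = (image.shape[0] + kernel.shape[0] - 1,
                     image.shape[1] + kernel.shape[1] - 1)
        # Zero-pad both operands to the full output size to avoid circular wrap.
        F = np.fft.rfft2(image, out_shape)
        G = np.fft.rfft2(kernel, out_shape)
        return np.fft.irfft2(F * G, out_shape)

    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 32))
    ker = rng.normal(size=(5, 5))

    direct = convolve2d(img, ker, mode="full")
    viafft = fft_conv2d(img, ker)
    print("max abs error:", np.max(np.abs(direct - viafft)))  # ~1e-13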

16 pages, 11167 KiB  
Article
AbFTNet: An Efficient Transformer Network with Alignment before Fusion for Multimodal Automatic Modulation Recognition
by Meng Ning, Fan Zhou, Wei Wang, Shaoqiang Wang, Peiying Zhang and Jian Wang
Electronics 2024, 13(18), 3725; https://doi.org/10.3390/electronics13183725 - 20 Sep 2024
Abstract
Multimodal automatic modulation recognition (MAMR) has emerged as a prominent research area, and the effective fusion of features from different modalities is crucial for MAMR tasks. An effective multimodal fusion mechanism should maximize the extraction and integration of complementary information. Recently, fusion methods based on cross-modal attention have shown high performance; however, they overlook the differences in information intensity between modalities and suffer from quadratic complexity. To this end, we propose an efficient Alignment before Fusion Transformer Network (AbFTNet) based on in-phase/quadrature (I/Q) and Fractional Fourier Transform (FRFT) modalities. Specifically, we first align and correlate the feature representations of the individual modalities, obtained via the Transformer's self-attention mechanism, to achieve mutual information maximization. We then design an efficient cross-modal aggregation promoting (CAP) module: by introducing an aggregation center, the two modalities are integrated to achieve adaptive complementary learning of modal features. This operation bridges the gap in information intensity between modalities, enabling fair interaction. To verify the effectiveness of the proposed methods, we conduct experiments on the RML2016.10a dataset. The results show that multimodal fusion features significantly outperform single-modal features in classification accuracy across different signal-to-noise ratios (SNRs). AbFTNet achieves an average accuracy of 64.59%, a 1.36% improvement over the TLDNN method, reaching the state of the art.
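
Reading between the lines of the abstract, "align, then fuse" can be sketched as two per-modality self-attention encoders, a contrastive (InfoNCE-style) alignment loss standing in for mutual information maximization, and a learnable aggregation query attending over both token streams. The PyTorch sketch below is a hedged caricature of that pattern, not the AbFTNet architecture; the dimensions and the 11-class output are placeholders.

    # Hedged sketch of "align, then fuse" for two signal modalities
    # (e.g. I/Q and FRFT features). Illustrative guesses, not AbFTNet.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d = 64
    enc_iq = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
    enc_fr = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
    agg_q  = nn.Parameter(torch.randn(1, 1, d))    # learnable "aggregation center"
    cross  = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
    head   = nn.Linear(d, 11)                      # e.g. 11 modulation classes

    def info_nce(a, b, tau=0.07):
        """Contrastive alignment: matching (a_i, b_i) pairs are positives."""
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.T / tau
        return F.cross_entropy(logits, torch.arange(a.size(0)))

    x_iq = torch.randn(8, 128, d)                  # (batch, tokens, dim)
    x_fr = torch.randn(8, 128, d)

    h_iq, h_fr = enc_iq(x_iq), enc_fr(x_fr)        # per-modality self-attention
    align_loss = info_nce(h_iq.mean(1), h_fr.mean(1))

    # Fuse: one learnable query attends over the concatenated modality tokens,
    # so each modality contributes by relevance rather than fixed proportion.
    tokens = torch.cat([h_iq, h_fr], dim=1)
    fused, _ = cross(agg_q.expand(8, -1, -1), tokens, tokens)
    logits = head(fused.squeeze(1))
    loss = align_loss + F.cross_entropy(logits, torch.randint(0, 11, (8,)))
    loss.backward()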

24 pages, 6639 KiB  
Article
Road Passenger Load Probability Prediction and Path Optimization Based on Taxi Trajectory Big Data
by Guobin Gu, Benxiao Lou, Dan Zhou, Xiang Wang, Jianqiu Chen, Tao Wang, Huan Xiong and Yinong Liu
Appl. Sci. 2024, 14(17), 7756; https://doi.org/10.3390/app14177756 - 2 Sep 2024
Abstract
This paper focuses on predicting road passenger probability and optimizing taxi driving routes based on trajectory big data. By utilizing clustering algorithms to identify key passenger points, a method for calculating and predicting road passenger probability is proposed. This method calculates the passenger probability of each road segment during different time periods and uses a BiLSTM neural network for prediction. A passenger-seeking recommendation model is then constructed with the goal of maximizing passenger probability and solved using the NSGA-II algorithm. Experiments are conducted on the Chengdu taxi trajectory dataset, using mean squared error (MSE) as the measure of prediction accuracy. The results show that the BiLSTM prediction model improves prediction accuracy by 9.67% compared with a BP neural network and by 6.45% compared with an LSTM network. The proposed passenger-seeking route selection method increases the average passenger probability by 18.95% compared with common methods. The overall framework, combining passenger probability prediction with route optimization, maximizes road passenger efficiency and holds significant academic and practical value.
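
A minimal version of the prediction stage, assuming per-segment feature sequences with pickup frequencies in [0, 1] as targets, could look like the PyTorch sketch below; the layer sizes, input features, and training data are placeholders, and the NSGA-II route-optimization stage is omitted.

    # Minimal BiLSTM regressor for per-segment passenger probability
    # (illustrative; sizes and features are placeholders, not the paper's).
    import torch
    import torch.nn as nn

    class BiLSTMProb(nn.Module):
        def __init__(self, n_features=6, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, 1)   # 2*hidden: forward + backward states

        def forward(self, x):                     # x: (batch, time steps, features)
            h, _ = self.lstm(x)
            return torch.sigmoid(self.out(h[:, -1]))  # probability for next period

    model = BiLSTMProb()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                         # the paper reports MSE as its metric

    x = torch.randn(64, 24, 6)                     # e.g. 24 hourly windows per segment
    y = torch.rand(64, 1)                          # observed pickup frequencies in [0, 1]
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("train MSE:", loss.item())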
