Editorial

AI Algorithms for Positive Change in Digital Futures

by Manolya Kavakli-Thorne 1,* and Zhuangzhuang Dai 2
1 Aston Digital Futures Institute (ADFI), Aston University, Birmingham B4 7ET, UK
2 School of Engineering and Physical Science, Aston University, Birmingham B4 7ET, UK
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(1), 43; https://doi.org/10.3390/a18010043
Submission received: 23 December 2024 / Accepted: 9 January 2025 / Published: 13 January 2025
(This article belongs to the Special Issue AI Algorithms for Positive Change in Digital Futures)
Artificial Intelligence (AI) is transforming industries and revolutionizing how we interact with technology at an unprecedented pace, playing a crucial role in shaping our digital future. The global issues we face today are complex, and AI provides us with a valuable tool for augmenting human efforts in formulating hardware and software solutions to complex problems.
In the current age of the Fourth Industrial Revolution (Industry 4.0), there is a greater need for machine learning (ML) and Artificial Intelligence (AI) algorithms to analyze the wealth of data provided by the Internet of Things (IoT), cybersecurity, mobile, business, and social media applications, and medical records [1]. Driven by the pursuit of increased productivity, digitalization requires novel AI algorithms to enhance safety, reduce human error, and enable more sophisticated data analysis. While AI refers to the simulation of human intelligence in machines, which allows them to perform tasks that typically require human cognitive functions such as learning, reasoning, problem solving, perception, and decision making, ML refers to technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience and data. The ultimate goal of AI is to develop machines that can think, reason, act autonomously, and, in some cases, surpass human capabilities across various domains, including healthcare, finance, transportation, and entertainment. Since the birth of AI with the “Logic Theorist” program created by Allen Newell and Herbert A. Simon in 1955, AI algorithms have led to innovations such as autonomous vehicles, smart homes, automated manufacturing systems, and medical robotics, creating a digital future.
This Special Issue entitled “AI Algorithms for Positive Change in Digital Futures” covers the design, development, application, and integration of intelligent systems driven by AI and ML approaches for solving real-world problems using novel algorithms. These algorithms implement positive changes in society through computer and automation engineering, and the papers in this issue address both the theoretical and practical issues in the use of these AI and ML algorithms.
As AI continues to reshape industries, gaining a strong foundation in these core algorithms will position us to contribute to the future of technology. With applications spanning natural language processing, computer vision, computer games, and robotics, understanding and implementing AI algorithms is now seen as a gateway to transforming the world. AI algorithms can be divided into several subcategories, such as the following:
  • Search algorithms are used for solving complex problems. These algorithms are designed to explore vast search spaces, find optimal solutions, and make well-informed decisions by navigating through large datasets. Search algorithms can be divided into several subgroups, such as uninformed search algorithms, which navigate large sets of possibilities with no additional information about the goal; informed search algorithms, which use heuristics and additional data; local search algorithms, which are used in optimization problems; adversarial search algorithms, which are used in games and competitive environments where AI agents must act against opponents; and dynamic programming algorithms, which break down problems into smaller, simpler subproblems and solve them recursively. Among these, we can list Depth-First Search [2] and Breadth-First Search [3], for traversing or searching tree or graph data structures; Alpha-Beta Pruning [4]; and Monte Carlo Tree Search [5], to name a few.
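As a concrete companion to the adversarial search family above, the following is a minimal sketch of minimax with alpha-beta pruning [4] over a small hand-built game tree; the tree, its leaf values, and the function names are illustrative assumptions rather than code from any cited work.

```python
# Minimal alpha-beta pruning sketch over a small hand-built game tree.
# Leaves hold static evaluations; internal nodes are lists of children.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot influence the final decision."""
    if isinstance(node, (int, float)):        # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                 # beta cut-off
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:                 # alpha cut-off
                break
        return value

# A toy 2-ply game tree: the maximizing player chooses among three moves,
# each answered by the minimizing opponent.
tree = [[3, 5, 10], [2, 12], [0, 1, 8]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```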
  • Optimization algorithms, including gradient descent and genetic programming, are used for finding optimal solutions, refining candidate solutions so that outcomes align with specific goals. These algorithms have extensive applications in AI-driven processes, ML model training, robotics, and data analysis. Their subcategories include linear programming algorithms, used in optimization problems to maximize or minimize objective functions; numerical optimization algorithms, used in ML model training, parameter tuning, and AI model development; and constraint satisfaction algorithms, used in scheduling, resource allocation, and automated planning to satisfy a set of constraints. Among these, we can list Genetic Algorithms [6], Ant Colony Optimization [7], Particle Swarm Optimization [8], and Bayesian Optimization [9], to name a few.
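To make the optimization family above concrete, here is a minimal sketch of gradient descent minimizing a simple quadratic objective; the objective function, learning rate, and iteration count are illustrative assumptions.

```python
# Minimal gradient descent sketch: minimize f(x, y) = (x - 3)^2 + (y + 1)^2.
# The objective, learning rate, and iteration count are illustrative choices.

def grad(x, y):
    """Analytical gradient of f at (x, y)."""
    return 2 * (x - 3), 2 * (y + 1)

def gradient_descent(lr=0.1, steps=200):
    x, y = 0.0, 0.0                      # arbitrary starting point
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * gx                     # move against the gradient
        y -= lr * gy
    return x, y

print(gradient_descent())                # approaches (3.0, -1.0)
```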
  • Supervised learning algorithms [10] enable machines to learn patterns and relationships from labeled data. These algorithms teach models how to map inputs to corresponding outputs and make accurate predictions and decisions based on past observations by training on input–output pairs. Among supervised learning algorithms, there are several subgroups, such as the following:
    Linear models are used for tasks requiring simple predictions such as regression and classification. They assume a linear relationship between input features and outputs; examples include Simple Linear Regression, Multiple Linear Regression, and Bayesian Regression.
    Classification algorithms, such as K-Nearest Neighbors (KNN), Support Vector Machines (SVMs), and Decision Tree Algorithms, are used for tasks where the output is categorical. They assign data points to predefined classes or categories (a minimal KNN sketch follows this group).
    Regularization techniques, such as Lasso and Ridge regression, are used to prevent overfitting in machine learning models by penalizing model complexity.
    Ensemble learning algorithms, such as Bootstrap Aggregation (bagging), random forest, and AdaBoost, combine multiple ML models to improve performance. These methods are highly effective in reducing variance and bias, resulting in better generalization on unseen data.
    Generative models, such as Gaussian Discriminant Analysis (GDA), Linear Discriminant Analysis (LDA), and Hidden Markov Models (HMMs), estimate the distribution of the data, making them powerful for tasks such as classification and anomaly detection.
    Time series forecasting algorithms, such as ARIMA and SARIMA, are used for predicting future values based on historical data trends. These techniques are widely used in financial forecasting, stock market predictions, and supply chain management.
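As flagged in the classification item above, the following is a minimal from-scratch sketch of a K-Nearest Neighbors (KNN) classifier on a toy two-dimensional dataset; the data points, labels, and choice of k are illustrative assumptions.

```python
# Minimal K-Nearest Neighbors (KNN) classifier sketch on a toy 2-D dataset.
# Training points, labels, and k are illustrative assumptions.
from collections import Counter
import math

train_X = [(1.0, 1.2), (0.8, 0.9), (1.1, 0.7),   # class "A" cluster
           (4.0, 4.2), (4.3, 3.9), (3.8, 4.1)]   # class "B" cluster
train_y = ["A", "A", "A", "B", "B", "B"]

def knn_predict(x, k=3):
    """Label `x` by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, p), label) for p, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((1.0, 1.0)))   # -> "A"
print(knn_predict((4.1, 4.0)))   # -> "B"
```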
  • Unsupervised learning algorithms [11] play a crucial role in tasks such as clustering, anomaly detection, and dimensionality reduction, uncovering hidden patterns in data without relying on labeled examples. They help machines to explore and understand the structure of large datasets and are widely used in finance, healthcare, and e-commerce.
    Clustering algorithms, such as K-Means Clustering, Fuzzy C-Means (FCM) Clustering, and Gaussian Mixture Models (GMMs), group data points into distinct clusters based on similarity. They are used for market segmentation, customer profiling, and image recognition (a minimal K-Means sketch follows this group).
    Anomaly detection algorithms, such as Z-Score, Local Outlier Factor (LOF), and Isolation Forest, identify data points that deviate from expected patterns, while association rule mining algorithms, such as Apriori, identify interesting relationships between variables in large datasets and are used in market analysis to understand purchase patterns.
    Dimensionality reduction techniques simplify complex datasets by reducing the number of variables. They preserve essential information for feature extraction, data compression, and noise reduction to improve computational efficiency and visualize high-dimensional data; examples include Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Latent Dirichlet Allocation (LDA).
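As flagged in the clustering item above, here is a minimal from-scratch K-Means sketch on toy two-dimensional data; the points, the number of clusters, and the deterministic initialization are illustrative assumptions.

```python
# Minimal K-Means clustering sketch on toy 2-D points. The data, number of
# clusters, and iteration budget are illustrative assumptions.
import math

points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),
          (5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]

def kmeans(data, k=2, iters=20):
    # Initialise centroids from the first k points (kept deterministic here;
    # random initialisation is the more common choice).
    centroids = list(data[:k])
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in data:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids, clusters

centroids, clusters = kmeans(points)
print(centroids)   # one centroid per cluster: near (1.03, 0.97) and (5.03, 4.97)
```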
  • Neural networks, inspired by the processing of the human brain, allow machines to process complex data and learn autonomously. They are used across multiple domains, including image processing, natural language processing (NLP), and unsupervised learning. Neural networks can be categorized into various groups:
    Feed-forward neural networks [12], such as the Perceptron, are the simplest type of artificial neural network; information moves in one direction without looping back (a minimal feed-forward sketch follows this group).
    Convolutional Neural Networks (CNNs) [13] automatically and adaptively learn spatial hierarchies in data. They are used for image processing, computer vision, and object detection.
    Recurrent Neural Networks (RNNs) [14] retain information from previous inputs, allowing them to model dependencies in sequences. They are specialized for processing sequential data and used for tasks such as time series forecasting and natural language processing (NLP).
    Autoencoders [15] aim to encode input data into a lower-dimensional space and then reconstruct the output to be as close to the original input as possible. They are unsupervised learning algorithms used for dimensionality reduction, data compression, and anomaly detection.
    Attention-based algorithms [16] are widely used in natural language processing (NLP) and sequence modeling and enable AI models to focus on specific parts of the input data. Transformers, as a subclass of these, are especially efficient in handling long sequences for machine translation and text summarization.
    Generative Adversarial Networks (GANs) [17] consist of two networks, a generator and a discriminator, that compete with each other to improve the quality of the generated data. They are used for image and video generation and data augmentation.
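As flagged in the feed-forward item above, the following is a minimal numpy sketch of a feed-forward network with one hidden layer trained by backpropagation on the XOR problem; the architecture, activation functions, learning rate, and iteration count are illustrative assumptions.

```python
# Minimal feed-forward neural network sketch (numpy): one tanh hidden layer
# and a sigmoid output, trained with plain backpropagation to fit XOR.
# Architecture and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # Gradient descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))   # should be close to [[0], [1], [1], [0]]
```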
  • Reinforcement learning algorithms [18] aim to optimize decision-making processes by maximizing cumulative rewards over time. They enable machines to learn through interaction with their environment and are used in robotics, gaming, and autonomous systems. Among these, we can list Markov Decision Processes (MDPs), which provide a mathematical framework for modeling sequential decision making in complex environments; Q-learning, which allows AI agents to learn the value of an action in a particular state from rewards and penalties, without needing a model of the environment; Deep Q-Networks (DQNs), which combine Q-learning with deep learning to enable AI systems to make decisions in high-dimensional state spaces, such as video games and robotics; and Monte Carlo Tree Search (MCTS), a heuristic search algorithm used in applications such as computer games to determine optimal decisions by exploring possible actions and simulating their outcomes.
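A minimal sketch of tabular Q-learning on a hypothetical five-state corridor environment follows; the environment, reward scheme, and hyperparameters are illustrative assumptions rather than a benchmark from the literature.

```python
# Minimal tabular Q-learning sketch on a 5-state corridor: the agent starts
# in state 0 and receives a reward of +1 only on reaching state 4.
# The environment, rewards, and hyperparameters are illustrative assumptions.
import random

N_STATES = 5
ACTIONS = [0, 1]                    # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic transition; the episode ends at the rightmost state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                             # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection (ties broken randomly).
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.choice(ACTIONS)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 3) for q in Q])  # values grow toward the goal; terminal stays 0
```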
  • Algorithms for computer vision [19] utilize a diverse range of techniques aimed at tasks such as feature extraction, edge detection, object detection, image segmentation, and artificial image or video generation to perceive and interpret visual information. Among these, we can list feature extraction algorithms used for simplifying and representing data to make them manageable for analysis; edge detection algorithms, which identify the boundaries within images for object recognition; object detection algorithms, which identify and locate objects within images or videos, such as region-based CNNs and You Only Look Once (YOLO) models; and image segmentation, which involves partitioning an image into segments for easier analysis, such as U-Net, SegNet, and DeepLab.
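As an illustration of the edge detection step mentioned above, the following is a minimal numpy sketch of Sobel edge detection on a synthetic image; the test image and the "valid" convolution choice are illustrative assumptions.

```python
# Minimal Sobel edge detection sketch with numpy: convolve a synthetic image
# with the horizontal and vertical Sobel kernels and combine the responses
# into a gradient magnitude map. The image is synthetic.
import numpy as np

# Synthetic 8x8 image: dark left half, bright right half (a vertical edge).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Ky = Kx.T

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution (no padding, kernel flipped)."""
    kh, kw = kernel.shape
    k = np.flipud(np.fliplr(kernel))
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

gx = convolve2d(img, Kx)              # horizontal gradient
gy = convolve2d(img, Ky)              # vertical gradient
edges = np.sqrt(gx ** 2 + gy ** 2)    # gradient magnitude

print(edges.round(1))                 # strong responses around the brightness step
```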
  • Algorithms for natural language processing (NLP) [20] analyze, understand, and generate human language. These include word embedding models, which represent words in vector space, capturing semantic meanings and relationships; advanced models that enhance language understanding through contextual information, such as cross-lingual language models (XLMs) and Transformer-XL; Part-of-Speech (POS) tagging and Named Entity Recognition (NER), which identify grammatical structures and recognize entities in text; sentiment analysis algorithms, which determine the sentiment expressed in text, providing insights into opinions and emotions; topic modeling algorithms that extract hidden topics from text data, such as LDA and LSA; machine translation algorithms that facilitate the automatic translation of text between languages, such as Google Neural Machine Translation (GNMT), OpenNMT, and MarianMT; text summarization algorithms that condense lengthy documents into concise summaries; text generation models that produce human-like text based on input data; and question-answering algorithms that retrieve precise answers from a given context, such as RoBERTa and Generative Pre-trained Transformers (GPTs), which generate contextually relevant answers.
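As a small, concrete example of representing text numerically for the NLP tasks above, here is a minimal from-scratch TF-IDF weighting sketch; the toy corpus and the exact weighting variant are illustrative assumptions.

```python
# Minimal TF-IDF sketch: weight each term by how frequent it is in a document
# and how rare it is across the corpus. The toy corpus is illustrative.
import math
from collections import Counter

corpus = [
    "ai algorithms shape the digital future",
    "machine learning algorithms learn from data",
    "data drive the digital future",
]
docs = [doc.split() for doc in corpus]
vocab = sorted(set(w for doc in docs for w in doc))

def tf_idf(doc):
    """Return the TF-IDF weight of every vocabulary term for one document."""
    counts = Counter(doc)
    weights = {}
    for term in vocab:
        tf = counts[term] / len(doc)                      # term frequency
        df = sum(term in d for d in docs)                 # document frequency
        idf = math.log(len(docs) / df) if df else 0.0     # inverse doc. freq.
        weights[term] = tf * idf
    return weights

for doc in docs:
    top = sorted(tf_idf(doc).items(), key=lambda kv: -kv[1])[:3]
    print(top)   # highest-weighted (most distinctive) terms per document
```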
This Special Issue presents six papers covering a wide range of applications, including indoor localization using IoT devices and fingerprinting, graph convolutional networks for demand prediction, transformer training for text-to-speech synthesis, ensemble learning with a deep NLP approach, privacy preservation using wireless sensors, and sensor-based techniques for engagement estimation. The list of contributions to this Special Issue is as follows:
Contribution 1 discusses indoor localization techniques using Bluetooth Low Energy (BLE) and a Radio Signal Strength Indicator (RSSI) to address the limitations of GPS in indoor environments. The study evaluates the effectiveness of iBeacon transmitters for indoor positioning, comparing the Weighted Centroid Localization (WCL) and Positive Weighted Centroid Localization (PWCL) algorithms, along with fingerprinting methods enhanced with outlier detection and mapping filters. The methodology includes mapping a real environment onto a coordinate axis, collecting training data from a range of sampling points, and implementing four localization algorithms.
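Contribution 1 describes its algorithms in full; purely as a generic illustration of the weighted centroid idea (and not the authors' implementation), the following sketch estimates a receiver position as an RSSI-weighted average of known beacon coordinates, where the beacon positions, RSSI values, and weighting exponent are illustrative assumptions.

```python
# Generic Weighted Centroid Localization (WCL) sketch: estimate a receiver's
# position as a weighted average of known beacon positions, with weights
# derived from received signal strength (stronger signal -> larger weight).
# Beacon coordinates, RSSI values, and the exponent g are illustrative
# assumptions, not Contribution 1's implementation.

beacons = {                      # beacon id -> (x, y) position in metres
    "b1": (0.0, 0.0),
    "b2": (10.0, 0.0),
    "b3": (0.0, 10.0),
}
rssi = {"b1": -55.0, "b2": -70.0, "b3": -72.0}   # dBm, strongest near b1

def wcl(beacons, rssi, g=2.0):
    """Weight each beacon by (1 / |RSSI|)^g and return the weighted centroid."""
    weights = {b: (1.0 / abs(rssi[b])) ** g for b in beacons}
    total = sum(weights.values())
    x = sum(weights[b] * beacons[b][0] for b in beacons) / total
    y = sum(weights[b] * beacons[b][1] for b in beacons) / total
    return x, y

print(wcl(beacons, rssi))        # estimate pulled toward the strongest beacon b1
```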
Contribution 2 examines various spatiotemporal influencing factors associated with travel behavior and proposes a Local–Global Dynamic Multi-Graph Convolutional Network (LGDMGCN) model, driven by multi-source data, for the multi-step prediction of station-level bike-sharing demand. In dynamically modeling temporal dependencies by incorporating multiple sources of time semantic features, a time attention mechanism is integrated to better capture variations over time. The paper considers factors related to stations and utilizes spatial semantic features to construct dynamic multi-graphs, as well as a local–global structure to capture spatial dependencies among individual bike-sharing stations and all stations collectively for the optimization of intelligent transportation systems.
Contribution 3 discusses text-to-speech (TTS) models and introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The paper proposes a method that leverages a Variational Autoencoder (VAE) pre-trained for audio waveform reconstruction and augmented through adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitches and rhythms.
Contribution 4 explores state-of-the-art pre-trained language models (PLMs) and transfer learning in natural language processing (NLP). Unlike traditional word-embedding methods, PLMs are context-dependent and outperform conventional techniques when fine-tuned for specific tasks. The paper proposes an innovative hard voting classifier to enhance crash severity classification by combining machine learning and deep learning models with various word-embedding techniques, including BERT, RoBERTa, Word2Vec, and TF-IDF. It involves two comprehensive experiments using motorists’ crash data from the Missouri State Highway Patrol. The first experiment evaluates the performance of three machine learning models—XGBoost (XGB), random forest (RF), and naive Bayes (NB)—paired with the TF-IDF, Word2Vec, and BERT feature extraction techniques. Additionally, BERT and RoBERTa are fine-tuned with a Bidirectional Long Short-Term Memory (Bi-LSTM) classification model. The second experiment repeats the evaluation using an augmented dataset to address the severe data imbalance. The paper proposes an ensemble model that outperforms individual models in both datasets when combined with data augmentation.
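Contribution 4 details its own ensemble design; purely as a generic illustration of hard (majority) voting, and not the paper's implementation, the following sketch combines class predictions from three hypothetical classifiers by majority vote.

```python
# Generic hard-voting sketch: combine class predictions from several
# classifiers by majority vote. The classifier outputs below are
# hypothetical placeholders, not results from Contribution 4.
from collections import Counter

# Predicted severity class per sample from three hypothetical models.
predictions = {
    "model_xgb": ["minor", "severe", "minor", "moderate"],
    "model_rf":  ["minor", "severe", "moderate", "moderate"],
    "model_nb":  ["moderate", "severe", "minor", "minor"],
}

def hard_vote(preds):
    """Return the majority label for each sample across all models."""
    per_sample = zip(*preds.values())
    return [Counter(labels).most_common(1)[0][0] for labels in per_sample]

print(hard_vote(predictions))
# -> ['minor', 'severe', 'minor', 'moderate']
```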
Contribution 5 investigates the issue of context privacy preservation for user validation via AccesSensor in the Industrial Metaverse and presents a technological method to address the need for context privacy. Drawing on existing privacy preservation solutions, the paper proposes a user validation method customized to the Industrial Metaverse’s access system, evaluated in terms of time-based efficiency, privacy, and bandwidth utilization. The paper provides insights and recommendations for developing strong privacy protection methods in wireless sensor networks that operate within the Industrial Metaverse ecosystem.
Contribution 6 reviews, with over one hundred references, sensor-based techniques, estimation methods, and existing application domains for deploying engagement estimators in use cases ranging from driver drowsiness detection to human–robot interaction (HRI). The paper focuses on the accuracy and practicality of each sensor modality in different scenarios and shows the advantages of multimodal sensor fusion and data-driven methods in enhancing the accuracy and reliability of engagement estimation, indicating the further need for more efficient algorithms for real-time processing, better-generalizing data-driven approaches, more adaptive and responsive systems, and greater user acceptance.
Finally, as the Guest Editor, I was delighted to work with the editorial staff of Algorithms to prepare this Special Issue.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflicts of interest, since her co-authored paper in this Special Issue has been externally reviewed and accepted.

List of Contributions

  • Khodamoradi, F.; Rezazadeh, J.; Ayoade, J. Accurate Indoor Localization with IoT Devices and Advanced Fingerprinting Methods. Algorithms 2024, 17, 544. https://doi.org/10.3390/a17120544.
  • Chen, J.; Huang, R. Multi-Source Data-Driven Local-Global Dynamic Multi-Graph Convolutional Network for Bike-Sharing Demands Prediction. Algorithms 2024, 17, 384. https://doi.org/10.3390/a17090384.
  • Ahmad, H.A.; Rashid, T.A. Central Kurdish Text-to-Speech Synthesis with Novel End-to-End Transformer Training. Algorithms 2024, 17, 292. https://doi.org/10.3390/a17070292.
  • Jaradat, S.; Nayak, R.; Paz, A.; Elhenawy, M. Ensemble Learning with Pre-Trained Transformers for Crash Severity Classification: A Deep NLP Approach. Algorithms 2024, 17, 284. https://doi.org/10.3390/a17070284.
  • Odeh, J.O.; Yang, X.; Nwakanma, C.I.; Dhelim, S. Context Privacy Preservation for User Validation by Wireless Sensors in the Industrial Metaverse Access System. Algorithms 2024, 17, 225. https://doi.org/10.3390/a17060225.
  • Dai, Z.; Zakka, V.G.; Manso, L.J.; Rudorfer, M.; Bernardet, U.; Zumer, J.; Kavakli-Thorne, M. Sensors, Techniques, and Future Trends of Human-Engagement-Enabled Applications: A Review. Algorithms 2024, 17, 560. https://doi.org/10.3390/a17120560.

References

  1. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef] [PubMed]
  2. Tarjan, R. Depth-First Search and Linear Graph Algorithms. SIAM J. Comput. 1972, 1, 146–160. [Google Scholar] [CrossRef]
  3. Beamer, S.; Asanovic, K.; Patterson, D. Direction-Optimizing Breadth-First Search. Sci. Program. 2013, 21, 137–148. [Google Scholar] [CrossRef]
  4. Knuth, D.E.; Moore, R.W. An analysis of alpha-beta pruning. Artif. Intell. 1975, 6, 293–326. [Google Scholar] [CrossRef]
  5. Browne, C.B.; Powley, E.; Whitehouse, D.; Lucas, S.M.; Cowling, P.I.; Rohlfshagen, P. A Survey of Monte Carlo Tree Search Methods. IEEE Trans. Comput. Intell. AI Games 2012, 4, 1–43. [Google Scholar] [CrossRef]
  6. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  7. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  8. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  9. Snoek, J.; Larochelle, H.; Adams, R.P. Practical Bayesian optimization of machine learning algorithms. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar]
  10. Sen, P.C.; Hajra, M.; Ghosh, M. Supervised Classification Algorithms in Machine Learning: A Survey and Review. In Emerging Technology in Modelling and Graphics. Advances in Intelligent Systems and Computing; Mandal, J., Bhattacharya, D., Eds.; Springer: Singapore, 2020; Volume 937. [Google Scholar] [CrossRef]
  11. Ghahramani, Z. Unsupervised Learning. In Advanced Lectures on Machine Learning; ML 2003. Lecture Notes in Computer Science; Bousquet, O., von Luxburg, U., Rätsch, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3176. [Google Scholar] [CrossRef]
  12. Bebis, G.; Georgiopoulos, M. Feed-forward neural networks. IEEE Potentials 1994, 13, 27–31. [Google Scholar] [CrossRef]
  13. Alzubaidi, L.; Zhang, J.; Humaidi, A.J. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  14. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef] [PubMed]
  15. Bank, D.; Koenigstein, N.; Giryes, R. Autoencoders. In Machine Learning for Data Science Handbook; Rokach, L., Maimon, O., Shmueli, E., Eds.; Springer: Cham, Switzerland, 2023. [Google Scholar] [CrossRef]
  16. Wang, Y.; Chen, M.; Luo, T.; Saad, W.; Niyato, D.; Poor, H.V. Performance Optimization for Semantic Communications: An Attention-Based Reinforcement Learning Approach. IEEE J. Sel. Areas Commun. 2022, 40, 2598–2613. [Google Scholar] [CrossRef]
  17. Alqahtani, H.; Kavakli-Thorne, M.; Kumar, G. Applications of Generative Adversarial Networks (GANs): An Updated Review. Arch. Comput. Methods Eng. 2021, 28, 525–552. [Google Scholar] [CrossRef]
  18. Szepesvari, C. Algorithms for Reinforcement Learning; Springer Nature: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  19. Szeliski, R. Computer Vision: Algorithms and Applications, 2nd ed.; Springer Nature: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  20. Chowdhary, K.R. Natural Language Processing. In Fundamentals of Artificial Intelligence; Springer: New Delhi, India, 2020. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
