Applied Machine Learning in Intelligent Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electronic Multimedia".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 9498

Special Issue Editors


Dr. Muhammad Shahid Anwar
Guest Editor
Department of AI and Software, Gachon University, Seongnam-si 13120, Republic of Korea
Interests: machine learning; immersive media; AI healthcare; AR/VR/MR/metaverse; 360-degree videos; intelligent systems

Dr. Muhammad Salman Pathan
Guest Editor
Department of Computer Science, Maynooth University, W23 F2K8 Maynooth, Ireland
Interests: machine learning; intelligent system; remote sensing; sustainability

Dr. Maria Torres Vega
Guest Editor
Department of Electrical Engineering (ESAT), KU Leuven, 3000 Leuven, Belgium
Interests: quality of experience; artificial intelligence; virtual reality; computer vision; immersive media delivery

Special Issue Information

Dear Colleagues,

Applications of machine learning technologies in intelligent systems are rapidly expanding into our daily lives. Applied machine learning in this context refers to the use of machine learning techniques and algorithms to develop intelligent systems that perform specific tasks, learn, adapt, and make intelligent decisions based on data. These systems have the potential to revolutionize various industries by automating tasks, optimizing processes, and enabling new levels of personalization and efficiency. This Special Issue demonstrates the application of machine learning in human–computer interactions, virtual reality and the metaverse, images and videos, AI healthcare, and human–robotic interactions. The aim is to showcase the transformative potential of machine learning techniques in augmenting the intelligence, adaptability, and interaction capabilities of a wide array of systems across these critical domains.

This Special Issue aims to provide an interdisciplinary forum for sharing recent advances in the different areas of machine learning for intelligent systems and to publish high-quality research papers, with an emphasis on new approaches and techniques for machine learning applications.

The Special Issue invites researchers, practitioners, and experts to contribute their original research articles and reviews on the following topics of interest, among others.

Human–Computer Interaction (HCI):  Emotion recognition and sentiment analysis for personalized HCI experiences, user behaviour modelling and prediction in interactive systems and adaptive interfaces, as well as intelligent user experience design.

Virtual Reality/Metaverse:  Machine-learning-driven content creation and adaptation in virtual and metaverse environments, gesture and motion recognition for immersive interactions, user-centric adaptation and real-time content delivery in virtual realms, as well as AI-enhanced simulations and training within virtual contexts.

Images and Videos:  Deep learning for image and video classification, segmentation, and recognition; object detection and tracking in complex visual scenes; content-based retrieval using machine learning techniques; and real-time image and video processing for intelligent systems.

AI Healthcare:  Diagnosis and prognosis using AI-powered medical imaging analysis, personalized treatment recommendations based on patient data, health monitoring, and wearable device integration for AI-driven healthcare.

Human–Robotic Interaction:  Collaborative human–robot teamwork and coordination, multimodal interaction for seamless human–robot engagement, gesture and speech recognition for intuitive robot control, and socially aware robots that adapt to human preferences and behaviour.

Dr. Muhammad Shahid Anwar
Dr. Muhammad Salman Pathan
Dr. Maria Torres Vega
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI healthcare
  • AI integration in VR/AR/MR/metaverse
  • natural language processing
  • image/video processing
  • HCI
  • immersive interaction
  • IoT and wearable devices
  • machine learning
  • applied AI
  • intelligent systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


28 pages, 4190 KiB  
Article
Know Your Grip: Real-Time Holding Posture Recognition for Smartphones
by Rene Hörschinger, Marc Kurz and Erik Sonnleitner
Electronics 2024, 13(23), 4596; https://doi.org/10.3390/electronics13234596 - 21 Nov 2024
Viewed by 478
Abstract
This paper introduces a model that predicts four common smartphone-holding postures, aiming to enhance user interface adaptability. It is unique in being completely independent of platform and hardware, utilizing the inertial measurement unit (IMU) for real-time posture detection based on sensor data collected around tap gestures. The model identifies whether the user is holding and operating the smartphone with one hand or using both hands in different configurations. For model training and validation, sensor time series data undergo extensive feature extraction, including statistical, frequency, magnitude, and wavelet analyses. These features are incorporated into 74 distinct sets, tested across various machine learning frameworks—k-nearest neighbors (KNN), support vector machine (SVM), and random forest (RF)—and evaluated for their effectiveness using metrics such as cross-validation scores, test accuracy, Kappa statistics, confusion matrices, and ROC curves. The optimized model demonstrates a high degree of accuracy, successfully predicting the holding hand with a 95.7% success rate. This approach highlights the potential of leveraging sensor data to improve mobile user experiences by adapting interfaces to natural user interactions.
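A minimal sketch of this kind of pipeline is shown below: statistical and magnitude features extracted from fixed-length IMU windows feed a random-forest classifier evaluated by cross-validation. The window size, feature set, posture labels, and synthetic data are illustrative assumptions, not the paper's 74 feature sets or its tuned model.

```python
# Minimal sketch: window-level IMU features feeding a random-forest posture classifier.
# Window length, features, and labels are illustrative, not the paper's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window: np.ndarray) -> np.ndarray:
    """Simple statistical and magnitude features from one (samples x 6) accel+gyro window."""
    mag = np.linalg.norm(window[:, :3], axis=1)           # acceleration magnitude
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0),           # per-axis statistics
        [mag.mean(), mag.std(), mag.max() - mag.min()],    # magnitude summary
    ])

# Placeholder data: 400 windows of 100 IMU samples each, 4 hypothetical posture classes
rng = np.random.default_rng(0)
windows = rng.normal(size=(400, 100, 6))
labels = rng.integers(0, 4, size=400)   # e.g., left hand, right hand, both thumbs, cradled

X = np.stack([extract_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```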

33 pages, 5826 KiB  
Article
Improving Churn Detection in the Banking Sector: A Machine Learning Approach with Probability Calibration Techniques
by Alin-Gabriel Văduva, Simona-Vasilica Oprea, Andreea-Mihaela Niculae, Adela Bâra and Anca-Ioana Andreescu
Electronics 2024, 13(22), 4527; https://doi.org/10.3390/electronics13224527 - 18 Nov 2024
Viewed by 999
Abstract
Identifying and reducing customer churn have become a priority for financial institutions seeking to retain clients. Our research focuses on customer churn rate analysis using advanced machine learning (ML) techniques, leveraging a synthetic dataset sourced from the Kaggle platform. The dataset undergoes a preprocessing phase to select variables directly impacting customer churn behavior. SMOTETomek, a hybrid technique that combines oversampling of the minority class (churn) with SMOTE and the removal of noisy or borderline instances through Tomek links, is applied to balance the dataset and improve class separability. Two cutting-edge ML models are applied—random forest (RF) and the Light Gradient-Boosting Machine (LGBM) Classifier. To evaluate the effectiveness of these models, several key performance metrics are utilized, including precision, sensitivity, F1 score, accuracy, and Brier score, which helps assess the calibration of the predicted probabilities. A particular contribution of our research is on calibrating classification probabilities, as many ML models tend to produce uncalibrated probabilities due to the complexity of their internal mechanisms. Probability calibration techniques are employed to adjust the predicted probabilities, enhancing their reliability and interpretability. Furthermore, the Shapley Additive Explanations (SHAP) method, an explainable artificial intelligence (XAI) technique, is further implemented to increase the transparency and credibility of the model’s decision-making process. SHAP provides insights into the importance of individual features in predicting churn, providing knowledge to banking institutions for the development of personalized customer retention strategies.
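For readers unfamiliar with the individual building blocks, the sketch below strings together SMOTETomek resampling, an LGBM classifier, isotonic probability calibration, a Brier score check, and SHAP attributions on synthetic data generated with make_classification; the preprocessing, hyperparameters, and dataset are placeholders rather than the study's actual setup.

```python
# Illustrative churn pipeline on synthetic data (the paper uses a Kaggle banking dataset):
# SMOTETomek resampling, LGBM, isotonic probability calibration, Brier score, and SHAP.
import shap
from imblearn.combine import SMOTETomek
from lightgbm import LGBMClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)   # balance the churn class

model = CalibratedClassifierCV(LGBMClassifier(), method="isotonic", cv=3)
model.fit(X_bal, y_bal)

proba = model.predict_proba(X_te)[:, 1]
print("Brier score:", brier_score_loss(y_te, proba))                 # lower = better calibrated

# Optional: SHAP attributions on an uncalibrated copy of the booster for interpretability
explainer = shap.TreeExplainer(LGBMClassifier().fit(X_bal, y_bal))
shap_values = explainer.shap_values(X_te)
```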

17 pages, 2202 KiB  
Article
Maritime Object Detection by Exploiting Electro-Optical and Near-Infrared Sensors Using Ensemble Learning
by Muhammad Furqan Javed, Muhammad Osama Imam, Muhammad Adnan, Iqbal Murtza and Jin-Young Kim
Electronics 2024, 13(18), 3615; https://doi.org/10.3390/electronics13183615 - 11 Sep 2024
Viewed by 1054
Abstract
Object detection in maritime environments is a challenging problem because of the continuously changing background and moving objects, which result in shearing, occlusion, noise, etc. The problem is also of critical importance, since detection failures may lead to significant loss of human life and economic damage. Available object detection methods rely mainly on radar and sonar sensors; even with the advances in electro-optical sensors, their use in maritime object detection is rarely considered. The proposed research employs both electro-optical and near-infrared (NIR) sensors for effective maritime object detection. Dedicated deep learning detection models (ResNet-50, ResNet-101, and SSD MobileNet) are trained on electro-optical and near-infrared sensor datasets. Dedicated ensemble classifiers are then constructed on each collection of base learners from the electro-optical and near-infrared spaces, and the object detection decisions from the two spaces are combined using a logical-disjunction-based final ensemble classification, a strategy chosen to reduce false negatives effectively. To evaluate the performance of the proposed methodology, the publicly available standard Singapore Maritime Dataset is used, and the results show that the proposed methodology outperforms contemporary maritime object detection techniques with a significantly improved mean average precision.
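The decision-level fusion idea can be illustrated with a short sketch: keep every detection from one modality and add detections from the other that no existing box already covers, so an object missed by one sensor can still be reported. The box format, IoU threshold, and helper names (iou, fuse_or) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of logical-disjunction (OR) fusion of detections from EO and NIR detectors.
# Box format, IoU threshold, and score handling are illustrative assumptions.
from typing import List, Tuple

Box = Tuple[float, float, float, float, float]  # x1, y1, x2, y2, score

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_or(eo_dets: List[Box], nir_dets: List[Box], iou_thr: float = 0.5) -> List[Box]:
    """Keep every EO detection and add NIR detections not already covered by an EO box.
    This OR-style fusion trades a few extra false positives for fewer false negatives."""
    fused = list(eo_dets)
    for nd in nir_dets:
        if all(iou(nd, ed) < iou_thr for ed in eo_dets):
            fused.append(nd)
    return fused
```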

16 pages, 3072 KiB  
Article
A Learner-Centric Explainable Educational Metaverse for Cyber–Physical Systems Engineering
by Seong-Jin Yun, Jin-Woo Kwon, Young-Hoon Lee, Jae-Heon Kim and Won-Tae Kim
Electronics 2024, 13(17), 3359; https://doi.org/10.3390/electronics13173359 - 23 Aug 2024
Viewed by 715
Abstract
Cyber–physical systems have become critical across industries. They have driven investments in education services to develop well-trained engineers. Education services for cyber–physical systems require the hiring of expert tutors with multidisciplinary knowledge, as well as acquiring expensive facilities/equipment. In response to the challenges posed by the need for the equipment and facilities, a metaverse-based education service that incorporates digital twins has been explored as a solution. However, the issue of recruiting expert tutors who can enhance students’ achievements remains unresolved, making it difficult to effectively cultivate talent. This paper proposes a reference architecture for a learner-centric educational metaverse with an intelligent tutoring framework as its core feature to address these issues. We develop a novel explainable artificial intelligence scheme for multi-class object detection models to assess learners’ achievements within the intelligent tutoring framework. Additionally, a genetic algorithm-based improvement search method is applied to the framework to derive personalized feedback. The proposed metaverse architecture and framework are evaluated through a case study on drone education. The experimental results show that the explainable AI scheme demonstrates an approximately 30% improvement in the explanation accuracy compared to existing methods. The survey results indicate that over 70% of learners significantly improved their skills based on the provided feedback.
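The genetic-algorithm improvement search is described only at a high level in the abstract; the generic sketch below shows the usual ingredients (truncation selection, one-point crossover, Gaussian mutation) over a placeholder fitness function, and should not be read as the authors' actual encoding or objective.

```python
# Generic genetic-algorithm improvement search; the fitness function and gene encoding
# are placeholders, not the paper's personalized-feedback formulation.
import random

def fitness(plan):
    # Placeholder: score a candidate feedback/learning plan (higher is better).
    return -sum((p - 0.7) ** 2 for p in plan)

def evolve(pop_size=30, genes=5, generations=50, mut_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                         # one-point crossover
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     if random.random() < mut_rate else g
                     for g in child]                          # Gaussian mutation, clipped to [0, 1]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_plan = evolve()
```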

19 pages, 2212 KiB  
Article
Design and Development of Multi-Agent Reinforcement Learning Intelligence on the Robotarium Platform for Embedded System Applications
by Lorenzo Canese, Gian Carlo Cardarilli, Mohammad Mahdi Dehghan Pir, Luca Di Nunzio and Sergio Spanò
Electronics 2024, 13(10), 1819; https://doi.org/10.3390/electronics13101819 - 8 May 2024
Cited by 5 | Viewed by 1217
Abstract
This research explores the use of the Q-learning for real-time swarm (Q-RTS) multi-agent reinforcement learning (MARL) algorithm for robotic applications. This study investigates the efficacy of Q-RTS in reducing the convergence time to a satisfactory movement policy through the successful implementation of four and eight trained agents. Q-RTS has been shown to significantly reduce search time in terms of training iterations, from almost a million iterations with one agent to 650,000 iterations with four agents and 500,000 iterations with eight agents. The scalability of the algorithm was addressed by testing it on several agent configurations. A central focus was placed on the design of a sophisticated reward function, considering various postures of the agents and their critical role in optimizing the Q-learning algorithm. Additionally, this study delved into the robustness of trained agents, revealing their ability to adapt to dynamic environmental changes. The findings have broad implications for improving the efficiency and adaptability of robotic systems in various applications such as IoT and embedded systems. The algorithm was tested and implemented using the Georgia Tech Robotarium platform, showing its feasibility for the above-mentioned applications.
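As a rough illustration of the cooperative idea behind swarm Q-learning, the sketch below has each agent update its own tabular Q-function and periodically merges the tables into a shared one by an elementwise maximum; the environment, merge rule, and hyperparameters are placeholders, and the exact Q-RTS update rule differs in detail.

```python
# Minimal cooperative tabular Q-learning sketch: each agent learns its own Q-table and the
# swarm periodically fuses them (here by elementwise max). Illustrative only; not Q-RTS itself.
import numpy as np

N_STATES, N_ACTIONS, N_AGENTS = 25, 4, 4
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

local_q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def step_env(state, action):
    """Placeholder environment: random transition, reward 1 for reaching state 0."""
    next_state = int(rng.integers(N_STATES))
    return next_state, float(next_state == 0)

for episode in range(200):
    states = rng.integers(N_STATES, size=N_AGENTS)
    for t in range(50):
        for i in range(N_AGENTS):
            q = local_q[i]
            a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(q[states[i]]))
            s2, r = step_env(states[i], a)
            q[states[i], a] += ALPHA * (r + GAMMA * q[s2].max() - q[states[i], a])
            states[i] = s2
    # Swarm fusion step: share the best value found by any agent
    swarm_q = np.maximum.reduce(local_q)
    local_q = [swarm_q.copy() for _ in range(N_AGENTS)]
```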

Review


19 pages, 1547 KiB  
Review
Advancements in TinyML: Applications, Limitations, and Impact on IoT Devices
by Abdussalam Elhanashi, Pierpaolo Dini, Sergio Saponara and Qinghe Zheng
Electronics 2024, 13(17), 3562; https://doi.org/10.3390/electronics13173562 - 8 Sep 2024
Viewed by 3604
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) have experienced rapid growth in both industry and academia. However, the current ML and AI models demand significant computing and processing power to achieve desired accuracy and results, often restricting their use to high-capability devices. With advancements in embedded system technology and the substantial development in the Internet of Things (IoT) industry, there is a growing desire to integrate ML techniques into resource-constrained embedded systems for ubiquitous intelligence. This aspiration has led to the emergence of TinyML, a specialized approach that enables the deployment of ML models on resource-constrained, power-efficient, and low-cost devices. Despite its potential, the implementation of ML on such devices presents challenges, including optimization, processing capacity, reliability, and maintenance. This article delves into the TinyML model, exploring its background, the tools that support it, and its applications in advanced technologies. By understanding these aspects, we can better appreciate how TinyML is transforming the landscape of AI and ML in embedded and IoT systems.
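One widely used TinyML workflow of the kind the review surveys is sketched below: train a small Keras model and convert it with the TensorFlow Lite converter, applying post-training quantization so the resulting flatbuffer fits flash- and RAM-constrained devices. The model architecture and placeholder data are illustrative only.

```python
# Common TinyML workflow: train a small Keras model, then convert it to a size- and
# latency-optimized TensorFlow Lite flatbuffer for an embedded target.
import numpy as np
import tensorflow as tf

# Tiny model on placeholder sensor data (e.g., a 64-sample window, 3 classes)
x = np.random.rand(1000, 64).astype("float32")
y = np.random.randint(0, 3, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=2, verbose=0)

# Post-training quantization shrinks the model for flash/RAM-constrained devices
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
print("TFLite model size:", len(tflite_model), "bytes")
```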
