Machine Learning and Artificial Intelligence in Engineering Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: closed (15 March 2024) | Viewed by 12901

Special Issue Editors


Systems Reliability and Industrial Safety Laboratory, Institute for Nuclear and Radiological Sciences, Energy, Technology and Safety, National Center for Scientific Research “DEMOKRITOS”, 15310 Athens, Greece
Interests: human reliability; quantitative risk assessment; hazard identification; risk management; accident analysis; process safety; oil and gas industry; offshore installations

Special Issue Information

Dear Colleagues,

Artificial intelligence is at our doorstep, with machine learning services and applications incorporated into industrial, agricultural, energy, financial, healthcare, manufacturing, transportation, and logistics systems. These technological capabilities are bringing about tremendous changes worldwide, boosting economies, increasing productivity, and creating new opportunities. Moreover, almost all applications now rely heavily on data, and information has consequently become an essential commodity. Furthermore, the development of artificial intelligence and deep learning models, together with the ever-increasing human–machine interaction in everyday applications, is a crucial aspect of the next Industrial Revolution.

The rapid deployment of the Internet of Things (IoT) and the integration of big data in the cloud lead to ever-increasing volumes of information, which require new intelligent algorithms, protocols, and processes. The growth of AI, with the incorporation of machine learning and deep learning into engineering applications, has enabled developers to create machines that can carry out complex manufacturing tasks. The ultimate goal is to develop systems that can learn and improve without human intervention.

Many engineering systems and applications will benefit from such unsupervised intelligent processes. In addition, natural language processing capabilities and the extensive exploitation of neural networks will provide new human–machine interactions for robotics, agriculture, process and manufacturing, and the transportation industry, while further promoting the extensive use of augmented, virtual, and mixed reality applications.

This Special Issue aims to bring forward new distributed or cloud-based engineering applications that involve smart algorithms and services targeting holistic, innovative, and sustainable systems. We encourage contributors to publish work related to intelligent information systems, decision support systems, incident response systems, distributed data collection processes, and deep learning/machine learning architectures and algorithms provided as a service, associated with (but not limited to):

  • Machine learning and deep learning algorithm services and processes for logistics, manufacturing, industrial, and safety applications;
  • Smart cities and smart home automation services and applications;
  • Smart transportation systems and services;
  • Smart medical systems and services;
  • Smart agricultural decision support systems and services;
  • Human–machine interactive and cognitive services;
  • Augmented reality, virtual reality, and mixed reality systems, services, and applications;
  • Internet of Things, smart algorithms, provided as services over distributed and cloud-based decision support systems;
  • Design, evaluation, and implementation of novel Internet of Things solutions incorporating machine learning, deep learning, and data mining logic.

We look forward to receiving your contributions.

Dr. Sotirios Kontogiannis
Dr. Myrto Konstantinidou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent information systems
  • intelligent engineering applications
  • machine learning and deep learning algorithms and applications
  • distributed information systems
  • Industry 5.0
  • cloud-based decision support systems and services
  • IoT
  • machine learning and deep learning services and applications

Published Papers (7 papers)


Research


27 pages, 11496 KiB  
Article
Automatic Optimization of Deep Learning Training through Feature-Aware-Based Dataset Splitting
by Somayeh Shahrabadi, Telmo Adão, Emanuel Peres, Raul Morais, Luís G. Magalhães and Victor Alves
Algorithms 2024, 17(3), 106; https://doi.org/10.3390/a17030106 - 29 Feb 2024
Viewed by 1239
Abstract
The proliferation of classification-capable artificial intelligence (AI) across a wide range of domains (e.g., agriculture and construction) has made it possible to optimize and complement several tasks typically operationalized by humans. The computational training that allows providing such support is frequently hindered by various challenges related to datasets, including the scarcity of examples and imbalanced class distributions, which have detrimental effects on the production of accurate models. For a proper approach to these challenges, strategies smarter than the traditional brute force-based K-fold cross-validation or the naivety of hold-out are required, with the following main goals in mind: (1) carrying out one-shot, close-to-optimal data arrangements, accelerating conventional training optimization; and (2) aiming at maximizing the capacity of inference models to its fullest extent while relieving computational burden. To that end, in this paper, two image-based feature-aware dataset splitting approaches are proposed, hypothesizing a contribution towards attaining classification models that are closer to their full inference potential. Both rely on strategic image harvesting: while one of them hinges on weighted random selection out of a feature-based clusters set, the other involves a balanced picking process from a sorted list that stores data features’ distances to the centroid of a whole feature space. Comparative tests on datasets related to grapevine leaf phenotyping and bridge defects showcase promising results, highlighting a viable alternative to K-fold cross-validation and hold-out methods. Full article
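The second of the two strategies described above — balanced picking from a list sorted by distance to the feature-space centroid — can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' code; the feature vectors are assumed to be precomputed (e.g., by a CNN backbone), and the sampling step is a simplification.

```python
import numpy as np

def centroid_balanced_split(features, val_fraction=0.2):
    """Balanced picking from a sorted list: sort samples by their
    distance to the centroid of the whole feature space, then take
    every k-th sample so the validation set spans near and far regions."""
    features = np.asarray(features, dtype=float)
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    order = np.argsort(dists)                 # nearest -> farthest
    step = max(1, round(1.0 / val_fraction))  # e.g. 0.2 -> every 5th
    val_idx = order[::step]
    train_idx = np.setdiff1d(order, val_idx)
    return train_idx, val_idx

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # stand-in feature vectors
train_idx, val_idx = centroid_balanced_split(X, val_fraction=0.2)
print(len(train_idx), len(val_idx))           # 80 20
```

Unlike a uniform random hold-out, every stride through the sorted list is guaranteed to include both typical (near-centroid) and atypical (far-from-centroid) samples.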

13 pages, 5038 KiB  
Article
Comparison of Different Radial Basis Function Networks for the Electrical Impedance Tomography (EIT) Inverse Problem
by Chowdhury Abrar Faiyaz, Pabel Shahrear, Rakibul Alam Shamim, Thilo Strauss and Taufiquar Khan
Algorithms 2023, 16(10), 461; https://doi.org/10.3390/a16100461 - 28 Sep 2023
Cited by 1 | Viewed by 1155
Abstract
This paper aims to determine whether regularization improves image reconstruction in electrical impedance tomography (EIT) using a radial basis network. The primary purpose is to investigate the effect of regularization to estimate the network parameters of the radial basis function network to solve the inverse problem in EIT. Our approach to studying the efficacy of the radial basis network with regularization is to compare the performance among several different regularizations, mainly Tikhonov, Lasso, and Elastic Net regularization. We vary the network parameters, including the fixed and variable widths for the Gaussian used for the network. We also perform a robustness study for comparison of the different regularizations used. Our results include (1) determining the optimal number of radial basis functions in the network to avoid overfitting; (2) comparison of fixed versus variable Gaussian width with or without regularization; (3) comparison of image reconstruction with or without regularization, in particular, no regularization, Tikhonov, Lasso, and Elastic Net; (4) comparison of both mean square and mean absolute error and the corresponding variance; and (5) comparison of robustness, in particular, the performance of the different methods concerning noise level. We conclude that by looking at the R2 score, one can determine the optimal number of radial basis functions. The fixed-width radial basis function network with regularization results in improved performance. The fixed-width Gaussian with Tikhonov regularization performs very well. The regularization helps reconstruct the images outside of the training data set. The regularization may cause the quality of the reconstruction to deteriorate; however, the stability is much improved. In terms of robustness, the RBF with Lasso and Elastic Net seem very robust compared to Tikhonov. Full article
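The core idea the abstract describes — fixed-width Gaussian RBFs with Tikhonov (ridge) regularization on the output weights — can be illustrated on a 1-D toy function. This is a minimal sketch, not the EIT inverse-problem setup; the centers, width, and regularization strength are arbitrary choices for demonstration.

```python
import numpy as np

def rbf_design(x, centers, width):
    # Gaussian RBF features: phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def fit_rbf_tikhonov(x, y, centers, width, lam):
    # Ridge-regularized least squares for the output weights:
    # w = (Phi^T Phi + lam * I)^-1 Phi^T y
    Phi = rbf_design(x, centers, width)
    A = Phi.T @ Phi + lam * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)                 # toy target, not EIT data
centers = np.linspace(0.0, 1.0, 10)         # fixed, evenly spaced centers
w = fit_rbf_tikhonov(x, y, centers, width=0.1, lam=1e-4)
mse = float(np.mean((rbf_design(x, centers, 0.1) @ w - y) ** 2))
print(mse)                                  # small reconstruction error
```

Increasing `lam` trades reconstruction accuracy on the training data for stability, which mirrors the paper's observation that regularization may slightly degrade quality while greatly improving robustness.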

19 pages, 5893 KiB  
Article
Using an Opportunity Matrix to Select Centers for RBF Neural Networks
by Daniel S. Soper
Algorithms 2023, 16(10), 455; https://doi.org/10.3390/a16100455 - 23 Sep 2023
Viewed by 1042
Abstract
When designed correctly, radial basis function (RBF) neural networks can approximate mathematical functions to any arbitrary degree of precision. Multilayer perceptron (MLP) neural networks are also universal function approximators, but RBF neural networks can often be trained several orders of magnitude more quickly than an MLP network with an equivalent level of function approximation capability. The primary challenge with designing a high-quality RBF neural network is selecting the best values for the network’s “centers”, which can be thought of as geometric locations within the input space. Traditionally, the locations for the RBF nodes’ centers are chosen either through random sampling of the training data or by using k-means clustering. The current paper proposes a new algorithm for selecting the locations of the centers by relying on a structure known as an “opportunity matrix”. The performance of the proposed algorithm is compared against that of the random sampling and k-means clustering methods using a large set of experiments involving both a real-world dataset from the steel industry and a variety of mathematical and statistical functions. The results indicate that the proposed opportunity matrix algorithm is almost always much better at selecting locations for an RBF network’s centers than either of the two traditional techniques, yielding RBF neural networks with superior function approximation capabilities. Full article
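For context, the k-means baseline that the proposed opportunity-matrix algorithm is compared against can be sketched in a few lines of NumPy. This is an illustrative Lloyd's-iteration sketch; the opportunity-matrix algorithm itself is specific to the paper and is not reproduced here.

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Lloyd's k-means on the training inputs; the final cluster means
    serve as the RBF network's centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every point to its nearest current center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):                  # keep old center if a cluster empties
                centers[j] = members.mean(axis=0)
    return centers

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                  # stand-in training inputs
centers = kmeans_centers(X, k=5)
print(centers.shape)                          # (5, 3)
```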

14 pages, 2706 KiB  
Article
An Aspect-Oriented Approach to Time-Constrained Strategies in Smart City IoT Applications
by Vyas O’Neill and Ben Soh
Algorithms 2023, 16(10), 454; https://doi.org/10.3390/a16100454 - 23 Sep 2023
Viewed by 1000
Abstract
The Internet of Things (IoT) is growing rapidly in various domains, including smart city applications. In many cases, IoT data in smart city applications have time constraints in which they are relevant and acceptable to the task at hand—a window of validity (WoV). Existing algorithms, such as ex post facto adjustment, data offloading, fog computing, and blockchain applications, generally focus on managing the time-validity of data. In this paper, we consider that the functional components of the IoT devices’ decision-making strategies themselves may also be defined in terms of a WoV. We propose an aspect-oriented mechanism to supervise the execution of the IoT device’s strategy, manage the WoV constraints, and resolve invalidated functional components through communication in the multi-agent system. The applicability of our proposed approach is considered with respect to the improved cost, service life, and environmental outcomes for IoT devices in a smart cities context. Full article
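The window-of-validity notion can be illustrated with a small Python sketch in which an aspect-like decorator guards a strategy function and rejects sensor readings that have aged past their WoV. This is a toy illustration of the concept only, not the paper's multi-agent mechanism; the function names and threshold are invented.

```python
import time
from functools import wraps

def window_of_validity(max_age_s):
    """Aspect-like guard: the wrapped strategy step only runs while the
    sensor reading is inside its window of validity (WoV)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(reading, *args, **kwargs):
            timestamp, value = reading        # reading = (unix time, value)
            if time.time() - timestamp > max_age_s:
                raise ValueError("reading is outside its window of validity")
            return fn(value, *args, **kwargs)
        return wrapper
    return decorator

@window_of_validity(max_age_s=5.0)
def decide(speed_kmh):
    # hypothetical strategy component of a smart-city IoT device
    return "slow down" if speed_kmh > 50 else "cruise"

print(decide((time.time(), 80)))              # fresh reading -> "slow down"
```

Keeping the WoV check in a decorator, outside the strategy body, is what makes the approach aspect-oriented: the time constraint can be changed or supervised without touching the decision logic itself.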

22 pages, 610 KiB  
Article
Deep Learning Stranded Neural Network Model for the Detection of Sensory Triggered Events
by Sotirios Kontogiannis, Theodosios Gkamas and Christos Pikridas
Algorithms 2023, 16(4), 202; https://doi.org/10.3390/a16040202 - 10 Apr 2023
Cited by 1 | Viewed by 1759
Abstract
Maintenance processes are of high importance for industrial plants. They have to be performed regularly and uninterruptedly. To assist maintenance personnel, industrial sensors monitored by distributed control systems observe and collect several machinery parameters in the cloud. Then, machine learning algorithms try to match patterns and classify abnormal behaviors. This paper presents a new deep learning model called stranded-NN. This model uses a set of NN models of variable layer depths depending on the input. This way, the proposed model can classify different types of emergencies occurring in different time intervals: real-time, close-to-real-time, or periodic. The proposed stranded-NN model has been compared against existing fixed-depth MLPs and LSTM networks used by the industry. Experimentation has shown that the stranded-NN model outperforms fixed-depth MLPs by 15–21% in terms of accuracy for real-time events and by at least 10–14% for close-to-real-time events. Regarding LSTMs of the same memory depth as the NN strand input, the stranded NN presents similar results in terms of accuracy for a specific number of strands. Nevertheless, the stranded-NN model’s ability to maintain multiple trained strands makes it a superior and more flexible classification and prediction solution than its LSTM counterpart, as well as being faster at training and classification. Full article

18 pages, 1699 KiB  
Article
Hyperparameter Optimization Using Successive Halving with Greedy Cross Validation
by Daniel S. Soper
Algorithms 2023, 16(1), 17; https://doi.org/10.3390/a16010017 - 27 Dec 2022
Cited by 8 | Viewed by 2658
Abstract
Training and evaluating the performance of many competing Artificial Intelligence (AI)/Machine Learning (ML) models can be very time-consuming and expensive. Furthermore, the costs associated with this hyperparameter optimization task grow exponentially when cross validation is used during the model selection process. Finding ways of quickly identifying high-performing models when conducting hyperparameter optimization with cross validation is hence an important problem in AI/ML research. Among the proposed methods of accelerating hyperparameter optimization, successive halving has emerged as a popular, state-of-the-art early stopping algorithm. Concurrently, recent work on cross validation has yielded a greedy cross validation algorithm that prioritizes the most promising candidate AI/ML models during the early stages of the model selection process. The current paper proposes a greedy successive halving algorithm in which greedy cross validation is integrated into successive halving. An extensive series of experiments is then conducted to evaluate the comparative performance of the proposed greedy successive halving algorithm. The results show that the quality of the AI/ML models selected by the greedy successive halving algorithm is statistically identical to those selected by standard successive halving, but that greedy successive halving is typically more than 3.5 times faster than standard successive halving. Full article
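Standard successive halving — the baseline into which the paper folds greedy cross validation — can be sketched as follows. This is an illustrative sketch only; `evaluate` stands in for training and scoring a candidate model configuration under a given budget (e.g., epochs or training-set size).

```python
import numpy as np

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Standard successive halving: score every surviving configuration
    on the current budget, keep the best 1/eta of them, multiply the
    budget by eta, and repeat until one configuration remains."""
    survivors = list(configs)
    budget = min_budget
    while len(survivors) > 1:
        scores = [evaluate(c, budget) for c in survivors]
        keep = max(1, len(survivors) // eta)
        best_first = np.argsort(scores)[::-1]     # higher score is better
        survivors = [survivors[i] for i in best_first[:keep]]
        budget *= eta
    return survivors[0]

# toy objective: configuration 3 is best at any budget
best = successive_halving(range(8), lambda c, b: -(c - 3) ** 2)
print(best)                                       # 3
```

The greedy variant proposed in the paper changes the inner loop so that the most promising candidates are cross-validated first, rather than scoring every survivor exhaustively at each rung.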

Review


36 pages, 804 KiB  
Review
Object Detection in Autonomous Vehicles under Adverse Weather: A Review of Traditional and Deep Learning Approaches
by Noor Ul Ain Tahir, Zuping Zhang, Muhammad Asim, Junhong Chen and Mohammed ELAffendi
Algorithms 2024, 17(3), 103; https://doi.org/10.3390/a17030103 - 26 Feb 2024
Viewed by 2322
Abstract
Enhancing the environmental perception of autonomous vehicles (AVs) in intelligent transportation systems requires computer vision technology to be effective in detecting objects and obstacles, particularly in adverse weather conditions. Adverse weather circumstances present serious difficulties for object-detecting systems, which are essential to contemporary safety procedures, infrastructure for monitoring, and intelligent transportation. AVs primarily depend on image processing algorithms that utilize a wide range of onboard visual sensors for guidance and decision-making. Ensuring the consistent identification of critical elements such as vehicles, pedestrians, and road lanes, even in adverse weather, is a paramount objective. This paper not only provides a comprehensive review of the literature on object detection (OD) under adverse weather conditions but also delves into the ever-evolving realm of the architecture of AVs, challenges for automated vehicles in adverse weather, and the basic structure of OD, and explores the landscape of traditional and deep learning (DL) approaches for OD within the realm of AVs. These approaches are essential for advancing the capabilities of AVs in recognizing and responding to objects in their surroundings. This paper further investigates previous research that has employed both traditional and DL methodologies for the detection of vehicles, pedestrians, and road lanes, effectively linking these approaches with the evolving field of AVs. Moreover, this paper offers an in-depth analysis of the datasets commonly employed in AV research, with a specific focus on the detection of key elements in various environmental conditions, and then summarizes the evaluation matrix. We expect that this review paper will help scholars to gain a better understanding of this area of research. Full article
