
Machine Learning in Computer Engineering Applications

A topical collection in Applied Sciences (ISSN 2076-3417). This collection belongs to the section "Computing and Artificial Intelligence".

Viewed by 83982

Editor


Prof. Dr. Lidia Jackowska-Strumillo
Collection Editor
Institute of Applied Computer Science, Lodz University of Technology (TUL), 90-924 Lodz, Poland
Interests: computer engineering; electrical engineering; machine learning; modelling and monitoring of industrial objects and processes; time series prediction; signal, image and video processing; text processing and analysis; applied computational intelligence

Topical Collection Information

Dear Colleagues,

Machine learning is a dynamically developing branch of artificial intelligence with a wide range of applications. In particular, it has become an indispensable part of computer systems dealing with complex problems that are difficult or infeasible to solve by means of conventional algorithms.

In computational intelligence, machine learning algorithms are used to build models of systems or processes that are based on experimental data and offer good generalization properties. This means that such data-driven models can be applied to make reliable predictions or decisions for new data that were not available during the learning process. At present, machine learning is an important element of intelligent computer systems, with numerous applications in engineering, medicine, economics, education, etc.

The aim of this Topical Collection is to provide a comprehensive appraisal of innovative applications of machine learning algorithms in computer engineering, employing novel approaches and methods including deep learning, hybrid models, multimodal data fusion, etc.

This Topical Collection will focus on the applications of machine learning in different fields of computer engineering. Topics of interest include but are not limited to the following:

  • Machine learning and decision making in engineering and economics;
  • Intelligent sensors and systems in machine vision and control;
  • Machine learning methods for process monitoring and prediction;
  • Pattern recognition in medical diagnosis;
  • Big data analysis;
  • Natural language processing;
  • AI-based efficient energy management;
  • Deep learning architectures;
  • Data-driven models;
  • Machine learning in different applications.

Prof. Dr. Lidia Jackowska-Strumillo
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning
  • Deep learning
  • Data-driven models
  • Artificial neural networks
  • Computer engineering
  • Intelligent systems
  • Computational intelligence

Published Papers (21 papers)

2023


16 pages, 1524 KiB  
Article
Kernel Learning by Spectral Representation and Gaussian Mixtures
by Luis R. Pena-Llamas, Ramon O. Guardado-Medina, Arturo Garcia and Andres Mendez-Vazquez
Appl. Sci. 2023, 13(4), 2473; https://doi.org/10.3390/app13042473 - 14 Feb 2023
Viewed by 1477
Abstract
One of the main tasks in kernel methods is the selection of adequate mappings into higher dimensions in order to improve classification. However, this tends to be time-consuming, and it may not yield the best separation between classes. Therefore, there is a need for better methods that are able to extract distance and class separation from data. This work presents a novel approach for learning such mappings by using locally stationary kernels, spectral representations and Gaussian mixtures.
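The spectral route to kernel learning described above can be sketched with random Fourier features whose frequencies are drawn from a Gaussian mixture (via Bochner's theorem). This is an illustrative sketch, not the authors' implementation; the mixture parameters and feature count are assumptions.

```python
import numpy as np

def gmm_spectral_features(X, weights, means, covs, D, seed=0):
    """Random Fourier features whose frequencies are sampled from a Gaussian
    mixture over the kernel's spectral density (Bochner's theorem);
    k(x, y) is then approximated by the inner product z(x) . z(y)."""
    rng = np.random.default_rng(seed)
    # pick a mixture component for each of the D random frequencies
    comps = rng.choice(len(weights), size=D, p=weights)
    omegas = np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comps])
    phases = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ omegas.T + phases)
```

With a single zero-mean, unit-covariance component, the implied kernel reduces to the Gaussian RBF exp(-||x - y||^2 / 2), which gives a quick sanity check of the approximation.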

2022


15 pages, 4278 KiB  
Article
Research on Bone Stick Text Recognition Method with Multi-Scale Feature Fusion
by Mengxiu Du, Huiqin Wang, Rui Liu, Ke Wang and Zhan Wang
Appl. Sci. 2022, 12(24), 12507; https://doi.org/10.3390/app122412507 - 7 Dec 2022
Viewed by 1503
Abstract
Bone sticks are composed of thin slices of animal bone created by ancient people and mainly served the functions of fixing books, writing scripts, and divination. The bone stick script is an essential material for studying the history of Chinese Western Han script. Using a neural network for text recognition can quickly interpret ancient text; however, while extracting deeper semantic information, neural networks also lose superficial image details. After multi-layer convolution and pooling of bone stick images, the continuous loss of superficial details affects classification accuracy. At the same time, the unbalanced distribution of bone stick quantities leads to a low recognition rate for classes with few samples. To solve these problems, a bone stick recognition method based on multi-scale features and the focal loss function is proposed. Firstly, based on the residual network ResNet, the output features of the first layer and the four Conv_x layers are pooled globally to reduce the feature dimension of each channel, and channel splicing is used to add base information of different depths to the original high-level features, which improves the model's ability to extract detail features. Secondly, in view of the unbalanced distribution of the bone stick data, the original cross-entropy loss function is replaced by the focal loss function, which increases the penalty for classification errors and improves the recognition rate of classes with few training samples. Experimental results show that the recognition accuracy of the proposed method on the bone stick dataset reaches 90.5%.
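The focal loss substitution described in the abstract can be sketched in a few lines; this is a generic binary form (with standard defaults gamma = 2, alpha = 0.25), not the paper's exact multi-class formulation.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so that
    well-classified (easy) examples contribute little, letting the rare,
    hard classes dominate the gradient."""
    p_t = np.where(y == 1, p, 1.0 - p)   # probability assigned to the true class
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
```

Compared with plain cross-entropy, a confidently correct prediction (p_t = 0.9) is down-weighted by a factor of (0.1)^2 = 0.01, which is what rebalances training toward classes with few samples.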

11 pages, 1413 KiB  
Communication
Machine Learning and Sustainable Mobility: The Case of the University of Foggia (Italy)
by Giulio Mario Cappelletti, Luca Grilli, Carlo Russo and Domenico Santoro
Appl. Sci. 2022, 12(17), 8774; https://doi.org/10.3390/app12178774 - 31 Aug 2022
Cited by 2 | Viewed by 2001
Abstract
Thanks to the development of increasingly sophisticated machine-learning techniques, it is possible to improve predictions of a particular phenomenon. In this paper, after analyzing data on the mobility habits of University of Foggia (UniFG) community members, we apply logistic regression and cross-validation to recover the information that is missing from the dataset (the so-called imputation process). Our goal is to obtain the missing information needed to calculate sustainability indicators, allowing the UniFG Rectorate to improve its sustainable mobility policies by encouraging methods that are as appropriate as possible to users' needs.
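The imputation step described above, predicting a missing binary attribute from the attributes that were answered, can be sketched with a plain logistic regression; this is a minimal illustration under assumed synthetic data, not the study's actual pipeline (which also uses cross-validation to select the model).

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Logistic regression by plain gradient descent (sketch, no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def impute_binary(X_known, y_known, X_missing):
    """Fill in a missing binary attribute by predicting it from records
    where the attribute was observed -- the imputation idea in miniature."""
    w = fit_logistic(X_known, y_known)
    return (1.0 / (1.0 + np.exp(-X_missing @ w)) >= 0.5).astype(int)
```

In practice one would validate the imputer with cross-validation, as the paper does, before trusting the filled-in values in downstream sustainability indicators.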

9 pages, 11897 KiB  
Article
Design of an In-Process Quality Monitoring Strategy for FDM-Type 3D Printer Using Deep Learning
by Gabriel Avelino R. Sampedro, Danielle Jaye S. Agron, Gabriel Chukwunonso Amaizu, Dong-Seong Kim and Jae-Min Lee
Appl. Sci. 2022, 12(17), 8753; https://doi.org/10.3390/app12178753 - 31 Aug 2022
Cited by 25 | Viewed by 3471
Abstract
Additive manufacturing is one of the rising manufacturing technologies of the future; however, due to its operational mechanism, printing failures are still prominent, leading to waste of both time and resources. The development of a real-time process monitoring system able to properly forecast anomalous behaviors within fused deposition modeling (FDM) additive manufacturing is proposed as a solution to the particular problem of nozzle clogging. A set of collaborative sensors is used to accumulate time-series data, which are fed into the proposed machine learning algorithm. The multi-head encoder–decoder temporal convolutional network (MH-ED-TCN) extracts features from the data, interprets their effect on the different processes which occur during an operational printing cycle, and distinguishes normal manufacturing operation from malfunctioning operation. The tests performed yielded a 97.2% accuracy in anticipating the future behavior of a 3D printer.
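The building block a TCN stacks to read long sensor sequences is the dilated causal convolution; a minimal sketch (single channel, single filter) is below. This illustrates the mechanism only and is not the MH-ED-TCN architecture itself.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """One dilated causal convolution: the output at time t sees only
    x[t], x[t - d], x[t - 2d], ... so no future sensor readings leak in.
    Stacking such layers with growing dilation gives a TCN its long memory."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])
```

With kernel [1, -1] and dilation 1 this computes a first difference, a crude but causal "change detector" over the sensor stream.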

19 pages, 39069 KiB  
Article
Autonomous Temporal Pseudo-Labeling for Fish Detection
by Ricardo J. M. Veiga, Iñigo E. Ochoa, Adela Belackova, Luís Bentes, João P. Silva, Jorge Semião and João M. F. Rodrigues
Appl. Sci. 2022, 12(12), 5910; https://doi.org/10.3390/app12125910 - 10 Jun 2022
Cited by 6 | Viewed by 2124
Abstract
The first major step in training an object detection model on classes different from those in the available datasets is the gathering of meaningful and properly annotated data. This recurring task determines the length of any project and, more importantly, the quality of the resulting models. The obstacle is amplified when the data available for the new classes are scarce or incompatible, as in the case of fish detection in the open sea. This issue was tackled using a mixed and reversed approach: a network is initialised with a noisy dataset of the same species as our classes (fish), although from different scenarios and conditions (fish from Australian marine fauna), and we gathered the target footage (fish from Portuguese marine fauna; Atlantic Ocean) for the application without annotations. Using the temporal information of the detected objects and augmentation techniques during later training, it was possible to generate highly accurate labels from our target footage. Furthermore, the data selection method retained the samples of each unique situation, filtering repetitive data that would bias the training process. The obtained results validate the proposed method of automating the labeling process, resorting directly to the final application as the source of training data. The presented method achieved a mean average precision of 93.11% on our own data and 73.61% on unseen data, increases of 24.65% and 25.53%, respectively, over the baseline of the noisy dataset.
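The temporal-filtering idea, keeping only detections that persist across frames as pseudo-labels, can be sketched as below. The tuple layout, thresholds, and track representation are illustrative assumptions, not the paper's exact criteria.

```python
from collections import defaultdict

def temporal_pseudo_labels(detections, conf_thresh=0.8, min_track_len=3):
    """Keep a detection as a pseudo-label only if it is confident AND belongs
    to a track observed in at least `min_track_len` frames -- temporal
    persistence filters out spurious single-frame detections.
    `detections` holds (frame, track_id, confidence) tuples."""
    tracks = defaultdict(list)
    for frame, track_id, conf in detections:
        tracks[track_id].append((frame, conf))
    kept = []
    for track_id, dets in tracks.items():
        if len(dets) >= min_track_len:
            kept.extend((frame, track_id) for frame, conf in dets
                        if conf >= conf_thresh)
    return kept
```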

15 pages, 4616 KiB  
Article
Machine Learning-Based Highway Truck Commodity Classification Using Logo Data
by Pan He, Aotian Wu, Xiaohui Huang, Anand Rangarajan and Sanjay Ranka
Appl. Sci. 2022, 12(4), 2075; https://doi.org/10.3390/app12042075 - 16 Feb 2022
Cited by 5 | Viewed by 2373
Abstract
In this paper, we propose a novel approach to commodity classification from surveillance videos by utilizing logo data on trucks. Broadly, most logos can be classified as predominantly text or predominantly images. For the former, we leverage state-of-the-art deep-learning-based text recognition algorithms on images. For the latter, we develop a two-stage image retrieval algorithm consisting of a universal logo detection stage that outputs all potential logo positions, followed by a logo recognition stage designed to incorporate advanced image representations. We develop an integrated approach to combine predictions from both the text-based and image-based solutions, which can help determine the commodity type that is potentially being hauled by trucks. We evaluated these models on videos collected in collaboration with the state transportation entity and achieved promising performance. This, along with prior work on trailer classification, can be effectively used for automatically deriving commodity types for trucks moving on highways.
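The integration of the text-based and image-based branches can be sketched as a simple late fusion of per-class scores. The weighted-sum rule and the class names here are assumptions for illustration; the paper's combination scheme may differ.

```python
def fuse_commodity_scores(text_scores, logo_scores, w_text=0.5):
    """Late fusion: per-class scores from the text-recognition branch and the
    logo-retrieval branch are combined by a weighted sum; the top class wins.
    Missing classes in either branch default to a score of zero."""
    classes = set(text_scores) | set(logo_scores)
    fused = {c: w_text * text_scores.get(c, 0.0)
                + (1.0 - w_text) * logo_scores.get(c, 0.0)
             for c in classes}
    return max(fused, key=fused.get)
```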

16 pages, 1408 KiB  
Article
Enhanced Generative Adversarial Networks with Restart Learning Rate in Discriminator
by Kun Li and Dae-Ki Kang
Appl. Sci. 2022, 12(3), 1191; https://doi.org/10.3390/app12031191 - 24 Jan 2022
Cited by 4 | Viewed by 3523
Abstract
A series of Generative Adversarial Networks (GANs) can effectively capture the salient features of a dataset in an adversarial way, thereby generating target data. The discriminator of a GAN provides significant information for updating parameters in both the generator and itself. However, the discriminator usually converges before the generator has been well trained. Due to this problem, GANs frequently fail to converge and fall into mode collapse, which can cause inadequate learning. In this paper, we apply restart learning in the discriminator of the GAN model, which brings more meaningful updates to the training process. Based on CIFAR-10 and Align Celeba, the experimental results show that the proposed method can improve the performance of a DCGAN, achieving a lower FID score than a stable learning rate scheme. Compared with two other stable GANs, SNGAN and WGAN-GP, the DCGAN with a restart schedule performed satisfyingly. Compared with the Two Time-Scale Update Rule, the restart learning rate is more conducive to the training of a DCGAN. The empirical analysis indicates that four main parameters have varying degrees of influence on the proposed method, and an appropriate parameter setting is presented.
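A restart learning rate of the general kind described above can be sketched as a cosine schedule with warm restarts (SGDR-style): the rate decays within a cycle, then jumps back up at each restart. The cycle length and multiplier below are illustrative assumptions, not the paper's tuned values.

```python
import math

def restart_lr(step, base_lr=2e-4, cycle_len=10, mult=2):
    """Cosine-annealed learning rate with warm restarts: within each cycle the
    rate follows half a cosine from base_lr down to 0, then restarts at
    base_lr with the next cycle `mult` times longer."""
    t, length = step, cycle_len
    while t >= length:          # walk forward to the cycle containing `step`
        t -= length
        length *= mult
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t / length))
```

Applied to the discriminator only, such a schedule periodically "un-converges" it, which is the intuition behind the more meaningful updates the paper reports.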

2021


14 pages, 2173 KiB  
Article
InterTwin: Deep Learning Approaches for Computing Measures of Effectiveness for Traffic Intersections
by Yashaswi Karnati, Rahul Sengupta and Sanjay Ranka
Appl. Sci. 2021, 11(24), 11637; https://doi.org/10.3390/app112411637 - 8 Dec 2021
Viewed by 2407
Abstract
Microscopic simulation-based approaches are extensively used for determining good signal timing plans for traffic intersections. Measures of Effectiveness (MOEs) such as wait time, throughput, fuel consumption, emissions, and delays can be derived for variable signal timing parameters, traffic flow patterns, etc. However, these techniques are computationally intensive, especially when the number of signal timing scenarios to be simulated is large. In this paper, we propose InterTwin, a Deep Neural Network architecture based on Spatial Graph Convolution and Encoder-Decoder Recurrent networks that can predict the MOEs efficiently and accurately for a wide variety of signal timings and traffic patterns. Our methods can generate probability distributions of MOEs and are not limited to the mean and standard deviation. Additionally, GPU implementations using InterTwin can derive MOEs at least four to five orders of magnitude faster than microscopic simulations on a conventional 32-core CPU machine.
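The spatial graph-convolution component mentioned above can be sketched as a single generic layer that aggregates features across connected intersections. This is a standard Kipf-Welling-style layer for illustration, not the exact InterTwin architecture.

```python
import numpy as np

def graph_conv(A, X, W):
    """One spatial graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbour features (e.g., per-approach
    traffic states of adjacent intersections), then apply a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)
```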

21 pages, 752 KiB  
Article
Deep Reinforcement Learning Algorithms for Path Planning Domain in Grid-like Environment
by Maciej Grzelczak and Piotr Duch
Appl. Sci. 2021, 11(23), 11335; https://doi.org/10.3390/app112311335 - 30 Nov 2021
Cited by 4 | Viewed by 3392
Abstract
Recently, more and more solutions have utilised artificial intelligence approaches in order to enhance or optimise processes and achieve greater sustainability. One of the most pressing issues is the emissions caused by cars; in this paper, the problem of optimising the routes of delivery cars is tackled. The applicability of deep reinforcement learning algorithms to this problem is tested on a simulation game designed and implemented to pose various challenges, such as the constant change of delivery locations. The algorithms chosen for this task are Advantage Actor-Critic (A2C) with and without Proximal Policy Optimisation (PPO). These novel and advanced reinforcement learning algorithms have not yet been utilised in similar scenarios. The differences in the performance and learning processes of the two are visualised and discussed. It is demonstrated that both algorithms present a slow but steady learning curve, which is an expected effect of reinforcement learning algorithms, leading to the conclusion that the algorithms would discover an optimal policy given an adequately long learning process. Additionally, the benefits of the Proximal Policy Optimisation algorithm are demonstrated by its enhanced learning curve in comparison to the Advantage Actor-Critic approach, as its learning process is characterised by faster growth with significantly smaller variation. Finally, the applicability of such algorithms in the described scenarios is discussed, alongside possible improvements and future work.
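The modification PPO adds on top of A2C's policy-gradient term is the clipped surrogate objective, which is compact enough to sketch directly (standard formulation; eps = 0.2 is the usual default, an assumption here).

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized): the probability
    ratio pi_new(a|s) / pi_old(a|s) is clipped to [1 - eps, 1 + eps], so a
    single update cannot move the policy too far from the old one -- the
    source of the smoother, lower-variance learning curves reported above."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(ratio * advantage, clipped)
```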

26 pages, 596 KiB  
Article
Assignments as Influential Factor to Improve the Prediction of Student Performance in Online Courses
by Aurora Esteban, Cristóbal Romero and Amelia Zafra
Appl. Sci. 2021, 11(21), 10145; https://doi.org/10.3390/app112110145 - 29 Oct 2021
Cited by 7 | Viewed by 2574
Abstract
Studies on the prediction of student success in distance learning have explored mainly demographic factors and student interactions with virtual learning environments. However, remarkably few studies use information about the assignments submitted by students as an influential factor to predict their academic achievement. This paper aims to explore the real importance of assignment information for predicting students' performance in distance learning and to evaluate the beneficial effect of including this information. We investigate and compare this factor and its potential from two information representation approaches: the traditional representation based on single instances, and a more flexible representation based on Multiple Instance Learning (MIL), which focuses on handling weakly labeled data. A comparative study is carried out using the Open University Learning Analytics dataset, one of the most important public datasets in education, provided by one of the largest online universities in the United Kingdom. The study includes a wide set of machine learning algorithms addressed from the two data representations discussed, showing that algorithms using only assignment information with a MIL-based representation can improve accuracy by more than 20% with respect to a representation based on single-instance learning. Thus, it is concluded that applying an appropriate representation that eliminates the sparseness of the data makes it possible to show the relevance of a factor, such as the assignments submitted, not widely used to date to predict students' academic performance. Moreover, a comparison with previous works on the same dataset and problem shows that predictive models based on MIL using only assignment information obtain competitive results compared to previous studies that include other factors to predict students' performance.
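The MIL representation described above treats each student as a "bag" of per-assignment instances with a single label for the whole bag. A minimal sketch of the standard MIL assumption (bag positive if any instance is positive, via max-pooling) is below; the paper evaluates a range of MIL algorithms, so this rule is illustrative only.

```python
def bag_predict(bag, instance_scorer, threshold=0.5):
    """Standard MIL assumption in miniature: a student is one bag whose
    instances are per-assignment feature vectors; the bag is labelled
    positive if at least one instance scores above the threshold
    (max-pooling over per-instance scores)."""
    return float(max(instance_scorer(x) for x in bag) >= threshold)
```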

16 pages, 24528 KiB  
Article
Human Behavior Analysis: A Survey on Action Recognition
by Bruno Degardin and Hugo Proença
Appl. Sci. 2021, 11(18), 8324; https://doi.org/10.3390/app11188324 - 8 Sep 2021
Cited by 12 | Viewed by 3968
Abstract
The visual recognition and understanding of human actions remains an active research domain of computer vision, having been the scope of various research works over the last two decades. The problem is challenging due to the many interpersonal variations in appearance and motion dynamics between humans, as well as the environmental heterogeneity between different video images. This complexity splits the problem into two major categories: action classification, i.e., recognising the action being performed in the scene, and spatiotemporal action localisation, which concerns recognising multiple localised human actions present in the scene. Previous surveys mainly focus on the evolution of this field, from handcrafted features to deep learning architectures. This survey, in contrast, presents an overview of both categories and their respective evolution, the guidelines that should be followed, and the current benchmarks employed for performance comparison between state-of-the-art methods.

12 pages, 421 KiB  
Article
Outlier Recognition via Linguistic Aggregation of Graph Databases
by Adam Niewiadomski, Agnieszka Duraj and Monika Bartczak
Appl. Sci. 2021, 11(16), 7434; https://doi.org/10.3390/app11167434 - 12 Aug 2021
Viewed by 1598
Abstract
Datasets frequently contain uncertain data that, if not interpreted with care, may affect information analysis negatively. Such rare, strange, or imperfect data, here called “outliers” or “exceptions”, can be ignored in further processing or, on the other hand, handled by dedicated algorithms to decide whether they contain valuable, though very rare, information. There are different definitions of and methods for handling outliers, and here we are interested, in particular, in those based on linguistic quantification and fuzzy logic. In this paper, for the first time, we apply definitions of outliers and methods for recognizing them based on fuzzy sets and linguistically quantified statements to find outliers in non-relational, here graph-oriented, databases. These methods are proposed and exemplified to identify objects that are outliers (e.g., to exclude them from processing). The novelty of this paper lies in the definitions and recognition algorithms for outliers using fuzzy logic and linguistic quantification when traditional quantitative and/or measurable information is inaccessible, as is frequently the case given the graph nature of the considered datasets.
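The linguistically quantified outlier test can be sketched by evaluating the truth of "MOST objects are similar to x" with a Zadeh-style relative quantifier. The quantifier breakpoints and the similarity values are illustrative assumptions; in the paper the similarities would come from the graph database itself.

```python
def most(r):
    """Zadeh-style relative quantifier 'most': degree of truth rises
    linearly from 0 (proportion below 0.3) to 1 (proportion above 0.8)."""
    return min(1.0, max(0.0, (r - 0.3) / 0.5))

def is_outlier(similarities, truth_thresh=0.5):
    """Flag an object as an outlier when the linguistically quantified
    statement 'MOST objects are similar to it' has a low degree of truth."""
    r = sum(similarities) / len(similarities)
    return most(r) < truth_thresh
```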

23 pages, 1392 KiB  
Review
Sentiment Analysis of Students’ Feedback with NLP and Deep Learning: A Systematic Mapping Study
by Zenun Kastrati, Fisnik Dalipi, Ali Shariq Imran, Krenare Pireva Nuci and Mudasir Ahmad Wani
Appl. Sci. 2021, 11(9), 3986; https://doi.org/10.3390/app11093986 - 28 Apr 2021
Cited by 119 | Viewed by 21695
Abstract
In the last decade, sentiment analysis has been widely applied in many domains, including business, social networks and education. Particularly in the education domain, where dealing with and processing students’ opinions is a complicated task due to the nature of the language used by students and the large volume of information, the application of sentiment analysis is growing yet remains challenging. Several literature reviews reveal the state of the application of sentiment analysis in this domain from different perspectives and contexts. However, the body of literature lacks a review that systematically classifies the research on and results of applying natural language processing (NLP), deep learning (DL), and machine learning (ML) solutions for sentiment analysis in the education domain. In this article, we present the results of a systematic mapping study to structure the published information available. We used a stepwise PRISMA framework to guide the search process and searched for studies conducted between 2015 and 2020 in electronic research databases of the scientific literature. We identified 92 relevant studies out of the 612 initially found on the sentiment analysis of students’ feedback in learning platform environments. The mapping results showed that, despite the identified challenges, the field is rapidly growing, especially regarding the application of DL, which is the most recent trend. We identified various aspects that need to be considered in order to contribute to the maturity of research and development in the field. Among these aspects, we highlighted the need for structured datasets, standardized solutions, and an increased focus on emotional expression and detection.

22 pages, 3289 KiB  
Article
Intelligent Scheduling with Reinforcement Learning
by Bruno Cunha, Ana Madureira, Benjamim Fonseca and João Matos
Appl. Sci. 2021, 11(8), 3710; https://doi.org/10.3390/app11083710 - 20 Apr 2021
Cited by 15 | Viewed by 6071
Abstract
In this paper, we present and discuss an innovative approach to solving Job Shop scheduling problems based on machine learning techniques. Traditionally, when choosing how to solve Job Shop scheduling problems, there are two main options: either use an efficient heuristic that provides a solution quickly, or use classic optimization approaches (e.g., metaheuristics) that take more time but output better solutions, closer to their optimal value. In this work, we aim to create a novel architecture that incorporates reinforcement learning into scheduling systems in order to improve their overall performance and overcome the limitations of current approaches. We also investigate the development of a learning environment in which reinforcement learning agents can solve the Job Shop scheduling problem. The reported experimental results and the conducted statistical analysis confirm the benefits of using an intelligent agent created with reinforcement learning techniques. The main contribution of this work is showing that reinforcement learning has the potential to become the standard method whenever a solution is needed quickly, since it solves any problem in a very few seconds with quality close to that of the optimal methods.
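The kind of value-update rule a reinforcement learning scheduling agent relies on can be sketched as one tabular Q-learning step (state: shop status; action: which job to dispatch next). This is a generic illustration, not the paper's actual agent or architecture.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update: nudge Q(s, a) toward the observed
    reward plus the discounted value of the best action in the next state.
    `Q` maps state -> {action: value}."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]
```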

28 pages, 524 KiB  
Article
A Hybrid Metaheuristics Parameter Tuning Approach for Scheduling through Racing and Case-Based Reasoning
by Ivo Pereira, Ana Madureira, Eliana Costa e Silva and Ajith Abraham
Appl. Sci. 2021, 11(8), 3325; https://doi.org/10.3390/app11083325 - 7 Apr 2021
Cited by 7 | Viewed by 2714
Abstract
In real manufacturing environments, scheduling can be defined as the problem of effectively and efficiently assigning tasks to specific resources. Metaheuristics are often used to obtain near-optimal solutions in an efficient way. The parameter tuning of metaheuristics allows flexibility and leads to robust results, but requires careful specification. The a priori definition of parameter values is complex, depending on the problem instances and resources. This paper implements a novel approach to the automatic specification of metaheuristic parameters for solving the scheduling problem. The approach incorporates two learning techniques, namely racing and case-based reasoning (CBR), to give the system the ability to learn from previous cases. In order to evaluate the contributions of the proposed approach, a computational study was performed, focusing on comparing our results with previously published ones. All results were validated by analyzing their statistical significance, allowing us to conclude that the novel proposed approach offers a statistically significant advantage.

15 pages, 4127 KiB  
Article
Learning Local Descriptor for Comparing Renders with Real Images
by Pamir Ghimire, Igor Jovančević and Jean-José Orteu
Appl. Sci. 2021, 11(8), 3301; https://doi.org/10.3390/app11083301 - 7 Apr 2021
Cited by 2 | Viewed by 1937
Abstract
We present a method to train a deep-network-based feature descriptor to compute discriminative local descriptions from renders and corresponding real images with similar geometry. We are interested in using such descriptors for automatic industrial visual inspection, whereby the inspection camera has been coarsely localized with respect to a relatively large mechanical assembly and the presence of certain components needs to be checked against the reference computer-aided design (CAD) model. We aim to perform this task by comparing the real inspection image with a render of the textureless 3D CAD model using the learned descriptors. The descriptor was trained to capture geometric features while remaining invariant to the image domain. Patch pairs for training the descriptor were extracted in a semisupervised manner from a small dataset of 100 pairs of real images and corresponding renders that were manually and finely registered, starting from a relatively coarse localization of the inspection camera. Due to the small size of the training dataset, the descriptor network was initialized with weights from classification training on ImageNet. A two-step training procedure is proposed to address the problem of domain adaptation. The first step, “bootstrapping”, is a classification training that provides good initial weights for the second step, triplet-loss training, which yields weights for extracting discriminative features comparable using the l2 distance. The descriptor was tested for comparing renders and real images through two approaches: finding local correspondences between the images through nearest-neighbor matching, and transforming the images into Bag of Visual Words (BoVW) histograms. We observed that learning a robust cross-domain descriptor is feasible even with a small dataset, and such features might be of interest for CAD-based inspection of mechanical assemblies and related applications such as tracking or finely registered augmented reality. To the best of our knowledge, this is the first work that reports learning local descriptors for comparing renders with real inspection images.

16 pages, 26815 KiB  
Article
Data Augmentation Using Generative Adversarial Network for Automatic Machine Fault Detection Based on Vibration Signals
by Van Bui, Tung Lam Pham, Huy Nguyen and Yeong Min Jang
Appl. Sci. 2021, 11(5), 2166; https://doi.org/10.3390/app11052166 - 1 Mar 2021
Cited by 11 | Viewed by 4120
Abstract
In the last decade, predictive maintenance has attracted a lot of attention in industrial factories because of the wide use of the Internet of Things and artificial intelligence algorithms for data management. However, in the early phases, when abnormal and faulty machines rarely appear in factories, only limited sets of machine fault samples are available. With limited fault samples, it is difficult to train a fault classifier due to the imbalance of the input data. Therefore, data augmentation is required to increase the accuracy of the learning model; however, there are few methods to generate and evaluate such data. In this paper, we introduce a method that uses a generative adversarial network to augment fault signals and enrich the dataset. The enhanced dataset increases the accuracy of the machine fault detection model during training. We also performed fault detection using a variety of preprocessing approaches and classification models to evaluate the similarity between the generated data and the authentic data. The generated fault data have high similarity with the original data and significantly improve the accuracy of the model. The accuracy of faulty machine detection reaches 99.41% with 20% of the original fault data and 93.1% with 0% of the original fault data (using generated data only). Based on this, we conclude that the generated data can be mixed with the original data to improve model performance.
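The rebalancing step described above can be sketched independently of the GAN itself. In this minimal sketch the trained generator is passed in as a callable; for illustration only, the test below stands in a noisy-copy function for a real GAN generator, which is purely an assumption:

```python
import numpy as np

# Hypothetical sketch of the augmentation step: fault samples are rare, so
# generated fault signals are appended to the originals until the fault class
# matches the normal class in size. `generator(n, rng)` is assumed to return
# n synthetic fault signals (in the paper, samples from the trained GAN).
def balance_with_generated(normal, fault, generator, rng):
    n_missing = len(normal) - len(fault)
    generated = generator(n_missing, rng)
    X = np.vstack([normal, fault, generated])
    y = np.array([0] * len(normal) + [1] * (len(fault) + n_missing))
    return X, y
```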

22 pages, 640 KiB  
Article
Outlier Detection for Multivariate Time Series Using Dynamic Bayesian Networks
by Jorge L. Serras, Susana Vinga and Alexandra M. Carvalho
Appl. Sci. 2021, 11(4), 1955; https://doi.org/10.3390/app11041955 - 23 Feb 2021
Cited by 6 | Viewed by 3909
Abstract
Outliers are observations suspected of not having been generated by the underlying process of the remaining data. Many applications require a way of identifying interesting or unusual patterns in multivariate time series (MTS), which are now ubiquitous; however, most outlier detection methods focus solely on univariate series. We propose a complete and automatic outlier detection system, covering the pre-processing of MTS data, that adopts a dynamic Bayesian network (DBN) modeling algorithm. The latter encodes optimal inter- and intra-time-slice connectivity of transition networks capable of capturing conditional dependencies in MTS datasets. A sliding-window mechanism is employed to gradually score each MTS transition given the DBN model. Two score-analysis strategies are studied to ensure an automatic classification of anomalous data. The proposed approach is first validated on simulated data, demonstrating the performance of the system. Further experiments are carried out on real data, uncovering anomalies in distinct scenarios such as electrocardiogram series, mortality rate data, and handwritten pen digits. The developed system proved beneficial in capturing unusual data arising from temporal contexts and is suitable for any MTS scenario. A widely accessible web application employing the complete system is publicly available, together with a tutorial.
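The sliding-window scoring idea can be sketched with an independent Gaussian model standing in for the learned DBN (an assumption made purely to keep the sketch self-contained; the window size and threshold are also illustrative):

```python
import numpy as np

# Each sliding window of the MTS receives the mean log-likelihood of its
# points under the model; windows scoring far below the typical level
# (median minus k robust deviations) are flagged as outliers.
def window_scores(series, window=5):
    mu = series.mean(axis=0)
    sigma = series.std(axis=0) + 1e-9
    # Per-point log-density under an independent Gaussian per dimension.
    logpdf = -0.5 * (((series - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))
    point = logpdf.sum(axis=1)
    return np.array([point[i:i + window].mean()
                     for i in range(len(series) - window + 1)])

def flag_outliers(scores, k=3.0):
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-9
    return scores < med - k * mad
```

A single extreme transition drags down the score of every window that contains it, which is how the mechanism localizes anomalies in time.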

19 pages, 7383 KiB  
Article
Online Intelligent Perception of Pantograph and Catenary System Status Based on Parameter Adaptation
by Yuan Shen, Xiao Pan and Luonan Chang
Appl. Sci. 2021, 11(4), 1948; https://doi.org/10.3390/app11041948 - 23 Feb 2021
Cited by 7 | Viewed by 2978
Abstract
Online autonomous perception of the pantograph catenary system status is of great significance for railway autonomous operation and maintenance (RIOM). Image sensors combined with an image processing algorithm can automatically acquire the pantograph catenary condition; however, it is difficult to meet the demand for long-term, stable condition acquisition, which restricts the implementation of online contact-state feedback and the realization of railway automation. This paper proposes an online intelligent perception method for the pantograph and catenary system (PCS) status based on parameter adaptation, to realize fast and stable state analysis when the train is in long-term outdoor operation. First, according to the features of the contact point, we used histogram of oriented gradients (HOG) features and a one-dimensional signal combined with a KCF tracker as the baseline method. Then, a result discriminator based on L1 and hash similarity constraints was used to construct a closed-loop parameter-adaptive localization framework, which retrieves and updates parameters when tracking failure occurs. After that, a pruned RefineDet method was used to detect pantograph horns and sparks, which, together with the contact point localization method, ensures the long-term stability of feature localization in PCS images. Finally, based on the stereo camera model, the three-dimensional trajectory of the whole pantograph body can be reconstructed from the image features, and we obtain pantograph catenary contact parameters including the pantograph slide posture, contact line offset, arc detection, separation detection, etc. Our method has been tested on more than 16,000 collected image pairs, and the results show that the proposed method has a better positioning effect than the state-of-the-art method and realizes the online acquisition of the pantograph catenary contact state, representing a significant contribution to RIOM.
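The hash-similarity check inside the result discriminator can be sketched with an average hash over a downsampled patch. This is a simplified stand-in (the hash variant, grid size, and threshold are illustrative assumptions, not the paper's exact discriminator):

```python
import numpy as np

# Average hash of a grayscale patch: downsample to a size x size grid of
# block means, then threshold each block against the grid mean, yielding
# a size*size bit vector.
def average_hash(patch, size=8):
    h, w = patch.shape
    small = patch[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

# If the Hamming distance between the hashes of the tracked patch and the
# template exceeds the threshold, the tracker is deemed to have failed and
# the framework retrieves/updates its parameters.
def tracking_failed(patch_a, patch_b, max_hamming=10):
    return int(np.sum(average_hash(patch_a) != average_hash(patch_b))) > max_hamming
```

A uniformly brightened copy of a patch keeps the same hash, while an unrelated patch differs in roughly half of its bits.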

22 pages, 1386 KiB  
Article
Predicting Student Academic Performance by Means of Associative Classification
by Luca Cagliero, Lorenzo Canale, Laura Farinetti, Elena Baralis and Enrico Venuto
Appl. Sci. 2021, 11(4), 1420; https://doi.org/10.3390/app11041420 - 4 Feb 2021
Cited by 20 | Viewed by 3311
Abstract
The Learning Analytics community has recently paid particular attention to the early prediction of learners’ performance. An established approach entails training classification models on past learner-related data in order to predict the exam success rate of a student well before the end of the course. Early predictions allow teachers to put targeted actions in place, e.g., supporting at-risk students to avoid exam failures or course dropouts. Although several machine learning and data mining solutions have been proposed to learn accurate predictors from past data, the interpretability and explainability of the best-performing models is often limited; therefore, in most cases, the reasons behind classifiers’ decisions remain unclear. This paper proposes an Explainable Learning Analytics solution to analyze learner-generated data acquired by our technical university, which relies on a blended learning model. It adopts classification techniques to early predict the success rate of about 5000 students enrolled in the first-year courses of our university. We propose to apply associative classifiers at different time points and to explore the characteristics of the models that led to the assignment of pass or fail success rates. Thanks to their inherent interpretability, associative models can be manually explored by domain experts with the twofold aim of validating classifier outcomes through local rule-based explanations and identifying at-risk/successful student profiles by interpreting the global rule-based model. The results of an in-depth empirical evaluation demonstrate that associative models (i) perform as well as the best-performing classification models, and (ii) give relevant insights into the per-student success rate assignments.
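The appeal of associative classification is that each prediction comes with a human-readable rule. A toy sketch with single-attribute rules makes the idea concrete (the attribute names, thresholds, and rule-selection strategy are illustrative assumptions; real associative classifiers mine multi-item rules with support/confidence pruning):

```python
from collections import Counter, defaultdict

# Mine rules "attribute=value => outcome" with their confidence from
# labelled records (each record is a dict of categorical attributes).
def mine_rules(records, labels, min_conf=0.6):
    stats = defaultdict(Counter)
    for rec, lab in zip(records, labels):
        for item in rec.items():
            stats[item][lab] += 1
    rules = []
    for item, counts in stats.items():
        lab, hits = counts.most_common(1)[0]
        conf = hits / sum(counts.values())
        if conf >= min_conf:
            rules.append((conf, item, lab))
    return sorted(rules, reverse=True)  # highest-confidence rules first

# Classify by the first (highest-confidence) matching rule, returning the
# rule itself as a local explanation of the decision.
def classify(rules, rec, default="pass"):
    for conf, (attr, val), lab in rules:
        if rec.get(attr) == val:
            return lab, f"{attr}={val} => {lab} (conf={conf:.2f})"
    return default, "no rule matched"
```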

19 pages, 4945 KiB  
Article
Pose Measurement for Unmanned Aerial Vehicle Based on Rigid Skeleton
by Jingyu Zhang, Zhen Liu and Guangjun Zhang
Appl. Sci. 2021, 11(4), 1373; https://doi.org/10.3390/app11041373 - 3 Feb 2021
Viewed by 2743
Abstract
Pose measurement is a necessary technology for UAV navigation, and accurate pose measurement is the most important guarantee of stable UAV flight. UAV pose measurement methods mostly rely on matching images against aircraft models or on 2D points corresponding with 3D points; these methods suffer from pose measurement errors due to inaccurate contour and key feature point extraction. In order to solve these problems, a pose measurement method based on the structural characteristics of the aircraft's rigid skeleton is proposed in this paper. Depth information is introduced to guide and label the 2D feature points, eliminating feature mismatches and segmenting the region. The space points obtained from the labeled feature points are used to fit the space line equations of the rigid skeleton, and the UAV attitude is calculated in combination with the geometric model. This method does not require cooperative identification of the aircraft model and can stably measure the position and attitude of a short-range UAV in various environments. The effectiveness and reliability of the proposed method are verified by experiments on a visual simulation platform. The proposed method can prevent aircraft collisions and help ensure the safety of UAV navigation in autonomous refueling or formation flight.
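The line-fitting step can be sketched as a least-squares fit of a 3D line to the labeled skeleton points via SVD, from which attitude angles follow. The single-axis example and function names are illustrative assumptions, not the paper's full geometric model:

```python
import numpy as np

# Fit a space line to 3D points belonging to one rigid member (e.g. the
# fuselage axis): the line passes through the centroid along the first
# right singular vector of the centered point cloud.
def fit_line(points):
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    if direction[0] < 0:
        direction = -direction  # resolve the sign ambiguity of the SVD axis
    return centroid, direction / np.linalg.norm(direction)

# Convert the fitted axis direction into yaw/pitch angles in degrees.
def yaw_pitch(direction):
    dx, dy, dz = direction
    yaw = np.degrees(np.arctan2(dy, dx))
    pitch = np.degrees(np.arctan2(dz, np.hypot(dx, dy)))
    return yaw, pitch
```

Fitting the axis to many skeleton points, rather than relying on two key points, is what makes the estimate robust to individual feature-extraction errors.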
