Human Understandable Artificial Intelligence

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (17 April 2023) | Viewed by 60946

Special Issue Editor


Dr. Jeremy Straub
Guest Editor
Department of Computer Science, North Dakota State University, Fargo, ND 58102, USA
Interests: artificial/computational intelligence; autonomy applications in aerospace; cybersecurity; 3D printing command/control and assessment; educational assessment in computing disciplines

Special Issue Information

Dear Colleagues,

Artificial intelligence has been shown to be effective across numerous domains. From robotics to scientific data analysis to decision support systems, it is in regular, everyday use. The prevalence of AI has led to concerns that the decisions systems make must be equitable and accurate. To ensure that systems operate effectively, determine how they can be improved, and confirm that they are fair, humans must be able to understand how AIs make decisions. To this end, some have proposed AI techniques whose decisions can be explained in terms of the system's internal decision-making processes. Others, though, have argued that the decision rationale should be understandable not only by technical experts but also by those using and impacted by the systems. Both approaches are areas of ongoing advancement.

This Special Issue focuses on human understandable artificial intelligence systems. It welcomes papers on new and adapted understandable AI systems as well as papers relating to questions of system ethics, AI regulation and policy, and application domain efficacy. Papers on supporting technologies and educational efforts related to understandable AI are also within scope.

Dr. Jeremy Straub
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence (AI)
  • explainable artificial intelligence (EAI)
  • human understandable
  • decision justification

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Research

28 pages, 5942 KiB  
Article
Improving Bug Assignment and Developer Allocation in Software Engineering through Interpretable Machine Learning Models
by Mina Samir, Nada Sherief and Walid Abdelmoez
Computers 2023, 12(7), 128; https://doi.org/10.3390/computers12070128 - 23 Jun 2023
Cited by 6 | Viewed by 3068
Abstract
Software engineering is a comprehensive process that requires developers and team members to collaborate across multiple tasks. In software testing, bug triaging is a tedious and time-consuming process. Assigning bugs to the appropriate developers can save time and maintain their motivation. However, without knowledge about a bug’s class, triaging is difficult. Motivated by this challenge, this paper focuses on the problem of assigning a suitable developer to a new bug by analyzing the history of developers’ profiles and the history of bugs for all developers using machine learning-based recommender systems. Explainable AI (XAI) is AI that humans can understand; it contrasts with “black box” AI, which even its designers cannot explain. By providing appropriate explanations for results, users can better comprehend the underlying insight behind the outcomes, boosting the recommender system’s effectiveness, transparency, and trustworthiness. The trained model is utilized in the recommendation stage to calculate relevance scores for developers based on expertise and past bug handling performance, ultimately presenting the developers with the highest scores as recommendations for new bugs. This approach aims to strike a balance between computational efficiency and accurate predictions, enabling efficient bug assignment while considering developer expertise and historical performance. In this paper, we propose two explainable recommender models. The first is a developer-centered model, generated from bug history, that identifies each developer’s preferred type of bug. The second is a bug-centered model that identifies the most suitable developer for each bug from bug history. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
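As a rough illustration of the kind of explainable, history-based developer recommendation the paper describes, the sketch below scores developers for a new bug by text similarity between the bug report and each developer's resolution history. The data, developer names, and scoring scheme are invented for illustration; the shared high-weight terms serve as a simple, human-readable explanation of each score.

```python
# A minimal, hypothetical sketch of profile-based bug-to-developer
# recommendation; not the authors' implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each developer's profile is the concatenated text of bugs they resolved.
dev_profiles = {
    "alice": "null pointer crash in parser module stack trace",
    "bob": "ui layout overlap rendering css regression",
}
new_bug = "crash with null pointer when parsing malformed input"

names = list(dev_profiles)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([dev_profiles[n] for n in names] + [new_bug])

# Relevance score per developer; the overlapping terms explain the ranking.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(names, scores), key=lambda p: -p[1]):
    print(f"{name}: relevance {score:.2f}")
```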

15 pages, 2050 KiB  
Article
Investigating the Cultural Impact on Predicting Crowd Behavior
by Fatima Jafar Muhdher, Osama Ahmed Abulnaja and Fatmah Abdulrahman Baothman
Computers 2023, 12(5), 108; https://doi.org/10.3390/computers12050108 - 21 May 2023
Cited by 1 | Viewed by 1730
Abstract
The Cultural Crowd–Artificial Neural Network (CC-ANN) takes the cultural dimensions of a crowd into account, based on Hofstede Cultural Dimensions (HCDs), to predict social and physical behavior concerning cohesion, collectivity, speed, and distance. This study examines the impact of applying the CC-ANN learning model to more cultures to test the effect on predicting crowd behavior and the relationships among crowd characteristics. Our previous work applying the CC-ANN included only eight nations using the six HCDs. In this paper, we include the United Arab Emirates (UAE) in the CC-ANN as a new culture, enabling a comparative study with four HCDs, with and without the UAE, using the Mean Squared Error (MSE) for evaluation. The results indicated that most of the best-case experiments involved the UAE, with the lowest MSEs of 0.127, 0.014, and 0.010, which enhanced the CC-ANN model’s ability to predict crowd behavior. Moreover, the links between the cultural, sociological, and physical properties of crowds can be seen from a broader perspective, with stronger correlations, using the CC-ANN in more countries with diverse cultures. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
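To make the CC-ANN setup concrete, here is a toy sketch of the underlying idea: a small neural network maps cultural dimension scores to crowd-behaviour measures and is evaluated with MSE on a held-out culture. All numbers are random placeholders, not HCD values or the study's data.

```python
# Toy stand-in for the CC-ANN idea; data and sizes are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(9, 4))   # four (normalized) HCD scores for nine nations
y = rng.uniform(size=(9, 2))   # e.g. observed cohesion and speed

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:-1], y[:-1])      # hold the "new culture" out
pred = model.predict(X[-1:])
print("MSE for held-out culture:", mean_squared_error(y[-1:], pred))
```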

16 pages, 6436 KiB  
Article
Pedestrian Detection with LiDAR Technology in Smart-City Deployments–Challenges and Recommendations
by Pedro Torres, Hugo Marques and Paulo Marques
Computers 2023, 12(3), 65; https://doi.org/10.3390/computers12030065 - 17 Mar 2023
Cited by 4 | Viewed by 3492
Abstract
This paper describes a real-case implementation of an automatic pedestrian-detection solution, deployed in the city of Aveiro, Portugal, using affordable LiDAR technology and open, publicly available, pedestrian-detection frameworks based on machine-learning algorithms. The presented solution makes it possible to anonymously identify pedestrians and extract associated information such as position, walking velocity, and direction in certain areas of interest, such as pedestrian crossings or other points of interest in a smart-city context. All data computation (3D point-cloud processing) is performed at edge nodes, consisting of NVIDIA Jetson Nano and Xavier platforms, which ingest 3D point clouds from Velodyne VLP-16 LiDARs. High-performance real-time computation is possible at these edge nodes through CUDA-enabled GPU-accelerated computations. The MQTT protocol is used to interconnect publishers (edge nodes) with consumers (the smart-city platform). The results show that currently affordable LiDAR sensors, despite advertised ranges of up to 100 m, present great challenges for the automatic detection of objects at these distances in a smart-city context. The authors were able to efficiently detect pedestrians up to 15 m away, depending on the sensor height and tilt. Based on the implementation challenges, the authors present usage recommendations to get the most out of these technologies. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
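The publisher/consumer link mentioned above can be pictured with a few lines of paho-mqtt; the broker address, topic, and message fields below are hypothetical stand-ins for the Aveiro deployment's actual configuration.

```python
# Sketch of an edge node publishing pedestrian detections over MQTT.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; v2 adds a callback-API argument
client.connect("broker.example.org", 1883)

detection = {
    "sensor": "lidar-01",
    "position_m": [3.2, -1.4],   # x, y relative to the sensor
    "speed_mps": 1.3,
    "heading_deg": 85.0,
}
# The smart-city platform subscribes to this topic as the consumer.
client.publish("city/crossings/A12/pedestrians", json.dumps(detection))
client.disconnect()
```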

15 pages, 794 KiB  
Article
A Long Short-Term Memory Network Using Resting-State Electroencephalogram to Predict Outcomes Following Moderate Traumatic Brain Injury
by Nor Safira Elaina Mohd Noor, Haidi Ibrahim, Chi Qin Lai and Jafri Malin Abdullah
Computers 2023, 12(2), 45; https://doi.org/10.3390/computers12020045 - 20 Feb 2023
Cited by 2 | Viewed by 2116
Abstract
Although traumatic brain injury (TBI) is a global public health issue, not all injuries necessitate additional hospitalisation. Thinking, memory, attention, personality, and movement can all be negatively impacted by TBI. However, only a small proportion of nonsevere TBIs necessitate prolonged observation. Clinicians would benefit from an electroencephalography (EEG)-based computational intelligence model for outcome prediction by having access to an evidence-based analysis that would allow them to securely discharge patients who are at minimal risk of TBI-related mortality. Despite the increasing popularity of EEG-based deep learning research to create predictive models with breakthrough performance, particularly in epilepsy prediction, its use in clinical decision making for the diagnosis and prognosis of TBI has not been as widely exploited. Therefore, utilising 60 s segments of unprocessed resting-state EEG data as input, we propose a long short-term memory (LSTM) network that can distinguish between improved and unimproved outcomes in moderate TBI patients. Complex feature extraction and selection are avoided in this architecture. The experimental results show that, with a classification accuracy of 87.50 ± 0.05%, the proposed prognostic model outperforms three related works. The results suggest that the proposed methodology is an efficient and reliable strategy to assist clinicians in creating an automated tool for predicting treatment outcomes from EEG signals. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
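A minimal sketch of the kind of model the paper proposes, an LSTM over raw EEG segments with no hand-crafted features, might look as follows; the sampling rate, channel count, and hyperparameters are assumptions, and the data is random noise.

```python
# Illustrative LSTM over raw resting-state EEG segments (random data).
import numpy as np
import tensorflow as tf

fs, seconds, channels = 128, 60, 19       # assumed sampling rate and montage
X = np.random.randn(8, fs * seconds, channels).astype("float32")
y = np.random.randint(0, 2, size=(8,))    # improved vs. unimproved outcome

model = tf.keras.Sequential([
    tf.keras.Input(shape=(fs * seconds, channels)),
    tf.keras.layers.LSTM(64),             # consumes raw segments directly
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=4, verbose=0)
```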

16 pages, 637 KiB  
Article
Explainable AI-Based DDOS Attack Identification Method for IoT Networks
by Chathuranga Sampath Kalutharage, Xiaodong Liu, Christos Chrysoulas, Nikolaos Pitropakis and Pavlos Papadopoulos
Computers 2023, 12(2), 32; https://doi.org/10.3390/computers12020032 - 3 Feb 2023
Cited by 24 | Viewed by 5404
Abstract
The modern digitized world is largely dependent on online services. The availability of online systems continues to be seriously challenged by distributed denial of service (DDoS) attacks. The challenge in mitigating attacks is not limited to identifying DDoS attacks when they happen, but also extends to identifying the streams of attacks. However, existing attack detection methods cannot accurately and efficiently detect DDoS attacks. To this end, we propose a novel explainable artificial intelligence (XAI)-based method to identify DDoS attacks. This method detects abnormal behaviours of network traffic flows by analysing the traffic at the network layer. Moreover, it chooses the most influential features for each anomalous instance with influence weight and then sets a threshold value for each feature. Hence, this DDoS attack detection method defines security policies based on each feature threshold value for application-layer-based, volumetric-based, and transport control protocol (TCP) state-exhaustion-based features. Since the proposed method is based on layer-three traffic, it can identify DDoS attacks on both Internet of Things (IoT) and traditional networks. Extensive experiments were performed on the University of Sannio, Benevento Intrusion Detection System (USB-IDS) dataset, which consists of different types of DDoS attacks, to test the performance of the proposed solution. The results of the comparison show that the proposed method provides greater detection accuracy and attack certainty than the state-of-the-art methods. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
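The feature-influence-plus-threshold idea can be sketched as follows: rank the most influential flow features, then derive a per-feature policy threshold from benign traffic. Feature names and data are synthetic, and random-forest importances stand in for the paper's influence weights.

```python
# Hypothetical sketch: influential features and benign-traffic thresholds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["syn_rate", "pkt_size_mean", "flow_duration"]
X_benign = rng.normal(10, 2, size=(500, 3))
X_attack = rng.normal(25, 4, size=(500, 3))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(random_state=0).fit(X, y)
for name, weight in zip(features, clf.feature_importances_):
    print(f"influence of {name}: {weight:.2f}")

# Policy: flag flows exceeding the 99th percentile of benign traffic.
thresholds = np.percentile(X_benign, 99, axis=0)
print(dict(zip(features, np.round(thresholds, 1))))
```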

25 pages, 17445 KiB  
Article
Estimation of Excitation Current of a Synchronous Machine Using Machine Learning Methods
by Matko Glučina, Nikola Anđelić, Ivan Lorencin and Zlatan Car
Computers 2023, 12(1), 1; https://doi.org/10.3390/computers12010001 - 20 Dec 2022
Cited by 4 | Viewed by 4899
Abstract
A synchronous machine is an electro-mechanical converter consisting of a stator and a rotor. The stator is the stationary part of a synchronous machine, made of phase-shifted armature windings in which voltage is generated, and the rotor is the rotating part, made using permanent magnets or electromagnets. The excitation current is a significant parameter of the synchronous machine, and it is of immense importance to continuously monitor possible value changes to ensure the smooth and high-quality operation of the synchronous machine itself. The purpose of this paper is to estimate the excitation current on a publicly available dataset using artificial intelligence algorithms, with the following input parameters: Iy: load current; PF: power factor; e: power factor error; and df: change of excitation current of the synchronous machine. The algorithms used in this research were k-nearest neighbors, linear, ridge, elastic net, random forest, stochastic gradient descent, support vector, multi-layer perceptron, and extreme gradient boosting regressors. The worst result was obtained by elastic net, with R2 = −0.0001, MSE = 0.0297, and MAPE = 0.1442; the best results were provided by the extreme gradient boosting regressor, with mean R2 = 0.9963, mean MSE = 0.0001, and mean MAPE = 0.0057. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
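The regressor comparison can be pictured with a few scikit-learn models; the data below is synthetic rather than the public synchronous-machine dataset, and GradientBoostingRegressor stands in for the extreme gradient boosting model used in the paper.

```python
# Illustrative regressor comparison on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, ElasticNet
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 4))   # stands in for Iy, PF, e, and df
y = 0.3 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.01, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "knn": KNeighborsRegressor(),
    "linear": LinearRegression(),
    "ridge": Ridge(),
    "elastic net": ElasticNet(),
    "random forest": RandomForestRegressor(random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: R2={r2_score(y_te, pred):.4f}, MSE={mean_squared_error(y_te, pred):.4f}")
```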

29 pages, 7324 KiB  
Article
Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes
by Katrien Laenen and Marie-Francine Moens
Computers 2022, 11(12), 182; https://doi.org/10.3390/computers11120182 - 10 Dec 2022
Cited by 3 | Viewed by 2020
Abstract
Understanding multimedia content remains a challenging problem in e-commerce search and recommendation applications. It is difficult to obtain item representations that capture the relevant product attributes, since these product attributes are fine-grained and scattered across product images with huge visual variations and product descriptions that are noisy and incomplete. In addition, the interpretability and explainability of item representations have become more important in order to make e-commerce applications more intelligible to humans. Multimodal disentangled representation learning, where the independent generative factors of multimodal data are identified and encoded in separate subsets of features in the feature space, is an interesting research area to explore in an e-commerce context, given the benefits of the resulting disentangled representations, such as generalizability, robustness, and interpretability. However, the characteristics of real-world e-commerce data, such as the extensive visual variation, noisy and incomplete product descriptions, and complex cross-modal relations of vision and language, together with the lack of an automatic interpretation method to explain the contents of disentangled representations, mean that current approaches for multimodal disentangled representation learning do not suffice for e-commerce data. Therefore, in this work, we design an explainable variational autoencoder framework (E-VAE) which leverages visual and textual item data to obtain disentangled item representations by jointly learning to disentangle the visual item data and to infer a two-level alignment of the visual and textual item data in a multimodal disentangled space. As such, E-VAE tackles the main challenges in disentangling multimodal e-commerce data. Firstly, with the weak supervision of the two-level alignment, our E-VAE learns to steer the disentanglement process towards discovering the relevant factors of variation in the multimodal data and to ignore irrelevant visual variations, which are abundant in e-commerce data. Secondly, to the best of our knowledge, our E-VAE is the first VAE-based framework with an automatic interpretation mechanism that allows the components of the disentangled item representations to be explained with text. With our textual explanations we provide insight into the quality of the disentanglement. Furthermore, we demonstrate that with our explainable disentangled item representations we achieve state-of-the-art outfit recommendation results on the Polyvore Outfits dataset and report new state-of-the-art cross-modal search results on the Amazon Dresses dataset. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
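To make the disentanglement machinery concrete, the bare-bones variational autoencoder below shows the reconstruction-plus-KL objective that frameworks like E-VAE build on; the two-level visual-textual alignment loss that distinguishes E-VAE is only indicated by a comment, and all dimensions are arbitrary.

```python
# Minimal VAE sketch (PyTorch); E-VAE adds a visual-textual alignment loss.
import torch
from torch import nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=128, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae = TinyVAE()
x = torch.randn(16, 128)                      # stand-in image-feature batch
recon, mu, logvar = vae(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, x) + kl  # + alignment loss in E-VAE
loss.backward()
```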

15 pages, 6387 KiB  
Article
Features Engineering for Malware Family Classification Based API Call
by Ammar Yahya Daeef, Ali Al-Naji and Javaan Chahl
Computers 2022, 11(11), 160; https://doi.org/10.3390/computers11110160 - 11 Nov 2022
Cited by 8 | Viewed by 2904
Abstract
Malware is used to carry out malicious operations on networks and computer systems. Consequently, malware classification is crucial for preventing malicious attacks. Application programming interfaces (APIs) are ideal candidates for characterizing malware behavior. However, the primary challenge is to produce API call features that allow classification algorithms to achieve high classification accuracy. To achieve this aim, this work employed Jaccard similarity and visualization analysis to find the hidden patterns created by various malware API calls. Traditional machine learning classifiers, i.e., random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN), were used in this research as alternatives to existing neural networks, which use API call sequences millions of calls in length. The benchmark dataset used in this study contains 7107 samples of API call sequences (labeled across eight different malware families). The results showed that RF with the proposed API call features outperformed the long short-term memory (LSTM)- and gated recurrent unit (GRU)-based methods on all evaluation metrics. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
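The core of the feature construction is easy to picture: treat each sample's API calls as a set, compare sets with Jaccard similarity, and encode presence or absence of each API as classifier-ready features. The API names below are made up.

```python
# Jaccard similarity over (hypothetical) API call sets, plus a binary encoding.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

sample_a = {"CreateFileW", "WriteFile", "RegSetValueExW"}
sample_b = {"CreateFileW", "ReadFile", "RegSetValueExW"}
print(f"Jaccard similarity: {jaccard(sample_a, sample_b):.2f}")  # 0.50

# Presence/absence features for RF, SVM, or KNN.
vocabulary = sorted(sample_a | sample_b)
row = [int(api in sample_a) for api in vocabulary]
print(dict(zip(vocabulary, row)))
```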

14 pages, 1973 KiB  
Article
Predicting Breast Cancer from Risk Factors Using SVM and Extra-Trees-Based Feature Selection Method
by Ganjar Alfian, Muhammad Syafrudin, Imam Fahrurrozi, Norma Latif Fitriyani, Fransiskus Tatas Dwi Atmaji, Tri Widodo, Nurul Bahiyah, Filip Benes and Jongtae Rhee
Computers 2022, 11(9), 136; https://doi.org/10.3390/computers11090136 - 12 Sep 2022
Cited by 66 | Viewed by 7590
Abstract
Developing a prediction model from risk factors can provide an efficient method to recognize breast cancer. Machine learning (ML) algorithms have been applied to increase the efficiency of diagnosis at the early stage. This paper studies a support vector machine (SVM) combined with an extremely randomized trees classifier (extra-trees) to provide a diagnosis of breast cancer at the early stage based on risk factors. The extra-trees classifier was used to remove irrelevant features, while the SVM was utilized to diagnose breast cancer status. A breast cancer dataset consisting of 116 subjects was utilized by the machine learning models to predict breast cancer, while stratified 10-fold cross-validation was employed for model evaluation. Our proposed combined SVM and extra-trees model reached the highest accuracy, up to 80.23%, which was significantly better than the other ML models. The experimental results demonstrated that applying extra-trees-based feature selection improved the average ML prediction accuracy by up to 7.29% compared to ML without the feature selection method. Our proposed model is expected to increase the efficiency of breast cancer diagnosis based on risk factors. In addition, we present how the proposed prediction model could be employed for web-based breast cancer prediction. The proposed model is expected to improve diagnostic decision-support systems by predicting breast cancer disease accurately. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
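The described pipeline maps naturally onto scikit-learn: extra-trees-based feature selection feeding an SVM, scored with stratified 10-fold cross-validation. The sketch below uses random stand-in data of the same size (116 subjects), not the actual risk-factor dataset.

```python
# Extra-trees feature selection + SVM with stratified 10-fold CV (toy data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 9))          # 116 subjects, 9 assumed risk factors
y = rng.integers(0, 2, size=116)

pipe = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=0)),
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=10))
print(f"mean accuracy: {scores.mean():.2%}")
```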

23 pages, 5124 KiB  
Article
On Predicting Soccer Outcomes in the Greek League Using Machine Learning
by Marios-Christos Malamatinos, Eleni Vrochidou and George A. Papakostas
Computers 2022, 11(9), 133; https://doi.org/10.3390/computers11090133 - 31 Aug 2022
Cited by 8 | Viewed by 5989
Abstract
The global expansion of the sports betting industry has brought the prediction of sport event outcomes into the foreground of scientific research. In this work, soccer outcome prediction methods are evaluated, focusing on the Greek Super League. Data analysis, including data cleaning, Sequential Forward Selection (SFS), feature engineering methods, and data augmentation, is conducted. The most important features are used to train five machine learning models: k-Nearest Neighbor (k-NN), LogitBoost (LB), Support Vector Machine (SVM), Random Forest (RF), and CatBoost (CB). For comparative reasons, the best model is also tested on the English Premier League and the Dutch Eredivisie, exploiting data statistics from six seasons from 2014 to 2020. Convolutional neural networks (CNN) and transfer learning are also tested by encoding tabular data to images, using 10-fold cross-validation after grid and randomized hyperparameter tuning, with four architectures: DenseNet201, InceptionV3, MobileNetV2, and ResNet101V2. This is the first time the Greek Super League has been investigated in depth, providing important features and comparative performance between several machine and deep learning models, as well as between leagues. Experimental results in all cases demonstrate that the most accurate prediction model is CB, reporting 67.73% accuracy, while the Greek Super League is the most predictable league. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
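A compressed view of that workflow, sequential forward selection followed by a CatBoost classifier, is sketched below; the match statistics are random placeholders rather than Greek Super League data, and logistic regression is used as a lightweight selector model.

```python
# SFS + CatBoost on synthetic stand-in match data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))      # per-match statistics (invented)
y = rng.integers(0, 3, size=300)    # home win / draw / away win

sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=5, direction="forward")
X_sel = sfs.fit_transform(X, y)

model = CatBoostClassifier(iterations=200, verbose=False, random_seed=0)
model.fit(X_sel, y)
print("train accuracy:", model.score(X_sel, y))
```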

15 pages, 3216 KiB  
Article
Walsh–Hadamard Kernel Feature-Based Image Compression Using DCT with Bi-Level Quantization
by Dibyalekha Nayak, Kananbala Ray, Tejaswini Kar and Chiman Kwan
Computers 2022, 11(7), 110; https://doi.org/10.3390/computers11070110 - 4 Jul 2022
Cited by 2 | Viewed by 2682
Abstract
To meet the high bit rate requirements of many multimedia applications, a lossy image compression algorithm based on Walsh–Hadamard kernel-based feature extraction, the discrete cosine transform (DCT), and bi-level quantization is proposed in this paper. The quantization matrix for each block is selected based on a weighted combination of the block feature strength (BFS), extracted by projecting selected Walsh–Hadamard basis kernels onto an image block. The BFS is compared with an automatically generated threshold to select the specific quantization matrix for compression. Blocks with higher BFS are processed via the DCT and a high Q matrix, while blocks with lower feature strength are processed via the DCT and a low Q matrix; blocks with higher feature strength are thus less compressed, and vice versa. The proposed algorithm is compared to different DCT and block truncation coding (BTC)-based approaches using quality parameters such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) at constant bits per pixel (bpp). The proposed method shows significant improvements in performance over standard JPEG and recent approaches at lower bpp. It achieved an average PSNR of 35.61 dB and an average SSIM of 0.90 at a bpp of 0.5, with better perceptual quality and lower visual artifacts. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
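The bi-level decision can be pictured on a single 8x8 block: project it onto a Walsh-Hadamard basis to get a feature strength, then pick a fine or coarse quantization step before the DCT. The threshold and the scalar quantization steps below are simplistic stand-ins for the paper's automatically generated threshold and Q matrices.

```python
# Toy Walsh-Hadamard feature strength driving bi-level DCT quantization.
import numpy as np
from scipy.fft import dctn
from scipy.linalg import hadamard

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
H = hadamard(8) / np.sqrt(8)                 # orthonormal Hadamard basis
bfs = np.abs(H @ block @ H.T)[1:, 1:].sum()  # feature strength (AC energy)

threshold = 2000.0                           # would be derived automatically
q = 10 if bfs > threshold else 40            # fine vs. coarse quantization
coeffs = dctn(block - 128, norm="ortho")
quantized = np.round(coeffs / q)
print(f"BFS={bfs:.0f}, Q step={q}, nonzero coeffs={np.count_nonzero(quantized)}")
```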

24 pages, 2100 KiB  
Article
Learning-Oriented QoS- and Drop-Aware Task Scheduling for Mixed-Criticality Systems
by Behnaz Ranjbar, Hamidreza Alikhani, Bardia Safaei, Alireza Ejlali and Akash Kumar
Computers 2022, 11(7), 101; https://doi.org/10.3390/computers11070101 - 22 Jun 2022
Cited by 5 | Viewed by 2329
Abstract
In Mixed-Criticality (MC) systems, multiple functions with different levels of criticality are integrated into a common platform in order to meet the intended space, cost, and timing requirements at all criticality levels. To guarantee the correct and on-time execution of higher-criticality tasks in emergency modes, various design-time scheduling policies have recently been presented. These techniques are mostly pessimistic, as the occurrence of the worst-case scenario at run-time is a rare event. Nevertheless, they lead to an under-utilized system due to frequent drops of Low-Criticality (LC) tasks and the creation of unused slack times caused by the quick execution of high-criticality tasks. Accordingly, this paper proposes a novel optimistic scheme that introduces a learning-based, drop-aware task scheduling mechanism, which carefully monitors alterations in the behaviour of the MC system at run-time to exploit the generated dynamic slack, reducing the LC-task penalty and preventing frequent drops of LC tasks in the future. Based on an extensive set of experiments, our observations show that the proposed approach exploits the accumulated dynamic slack generated at run-time by 9.84% more on average than existing works, and is able to reduce the deadline miss rate by up to 51.78%, and by 33.27% on average, compared to state-of-the-art works. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
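A toy simulation conveys the drop-aware intuition: dynamic slack accrues whenever a high-criticality job finishes below its worst-case execution time, and low-criticality jobs are admitted instead of dropped once enough slack has accumulated. A fixed admission rule stands in for the paper's learned policy, and all task parameters are invented.

```python
# Toy dynamic-slack accounting for mixed-criticality scheduling.
import random

random.seed(0)
slack = 0.0
admitted = dropped = 0
for _ in range(1000):
    wcet, actual = 10.0, random.uniform(4.0, 10.0)  # HC job: WCET vs. actual
    slack += wcet - actual                          # dynamic slack accrues
    lc_exec_time = 5.0                              # pending LC job
    if slack >= lc_exec_time:
        slack -= lc_exec_time
        admitted += 1                               # run the LC task
    else:
        dropped += 1                                # drop to protect HC work
print(f"LC admitted: {admitted}, dropped: {dropped}")
```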

21 pages, 2268 KiB  
Article
Botanical Leaf Disease Detection and Classification Using Convolutional Neural Network: A Hybrid Metaheuristic Enabled Approach
by Madhumini Mohapatra, Ami Kumar Parida, Pradeep Kumar Mallick, Mikhail Zymbler and Sachin Kumar
Computers 2022, 11(5), 82; https://doi.org/10.3390/computers11050082 - 20 May 2022
Cited by 16 | Viewed by 3748
Abstract
Botanical plants suffer from several types of diseases that must be identified early to improve the production of fruits and vegetables. Mango fruit is one of the most popular and desirable fruits worldwide due to its taste and richness in vitamins. However, plant diseases also affect these plants’ production and quality. This study proposes a convolutional neural network (CNN)-based metaheuristic approach for disease diagnosis and detection. The proposed approach involves preprocessing, image segmentation, feature extraction, and disease classification. First, the image of mango leaves is enhanced using histogram equalization and contrast enhancement. Then, a geometric mean-based neutrosophic with a fuzzy c-means method is used for segmentation. Next, the essential features are retrieved from the segmented images, including the Upgraded Local Binary Pattern (ULBP), color, and pixel features. Finally, these features are fed into the disease detection phase, which is modeled using a Convolutional Neural Network (CNN) (deep learning model). Furthermore, to enhance the classification accuracy of the CNN, the weights are fine-tuned using a new hybrid optimization model referred to as the Cat Swarm Updated Black Widow Model (CSUBW), developed by hybridizing the standard Cat Swarm Optimization Algorithm (CSO) and the Black Widow Optimization Algorithm (BWO). Finally, a performance evaluation is conducted to validate the efficiency of the projected model. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
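For the classification stage alone, a minimal Keras CNN of the kind described might look as follows; the input size and class count are assumptions, and the CSUBW metaheuristic weight tuning is outside the scope of this snippet.

```python
# Minimal CNN classifier sketch for segmented leaf patches (assumed sizes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),             # segmented leaf patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # assumed disease classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```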

20 pages, 5245 KiB  
Article
A Real Time Arabic Sign Language Alphabets (ArSLA) Recognition Model Using Deep Learning Architecture
by Zaran Alsaadi, Easa Alshamani, Mohammed Alrehaili, Abdulmajeed Ayesh D. Alrashdi, Saleh Albelwi and Abdelrahman Osman Elfaki
Computers 2022, 11(5), 78; https://doi.org/10.3390/computers11050078 - 10 May 2022
Cited by 19 | Viewed by 6592
Abstract
Currently, treating sign language issues and producing high-quality solutions has attracted researchers’ and practitioners’ attention due to the considerable prevalence of hearing disabilities around the world. The literature shows that Arabic Sign Language (ArSL) is one of the most popular sign languages due to its rate of use. ArSL is categorized into two groups: the first group is ArSL, where words are represented by signs, i.e., pictures; the second group is ArSL alphabetic (ArSLA), where each Arabic letter is represented by a sign. This paper introduces a real-time ArSLA recognition model using a deep learning architecture. Methodologically, the following steps were taken. First, a trusted scientific ArSLA dataset was located. Second, candidate deep learning architectures were chosen by investigating related works. Third, an experiment was conducted to test the selected architectures. Fourth, the deep learning architecture was chosen based on the extracted results. Finally, a real-time recognition system was developed. The results of the experiment show that the AlexNet architecture is the best due to its high accuracy rate. The model was developed based on the AlexNet architecture and successfully tested in real time with a 94.81% accuracy rate. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
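A real-time recognition loop in this spirit is sketched below: capture webcam frames, resize them to an AlexNet-style 227x227 input, and classify each one. The tiny network here is an untrained stand-in, so the class count and architecture are assumptions; the paper's trained AlexNet would take its place.

```python
# Skeleton of a real-time sign recognition loop (untrained stand-in model).
import cv2
import numpy as np
import tensorflow as tf

num_letters = 28                      # assumed number of ArSLA sign classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(227, 227, 3)),
    tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_letters, activation="softmax"),
])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (227, 227)).astype("float32")[None] / 255.0
    letter = int(np.argmax(model.predict(x, verbose=0)))
    cv2.putText(frame, f"class {letter}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ArSLA", frame)
    if cv2.waitKey(1) == 27:          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```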

28 pages, 18492 KiB  
Article
The Influence of Genetic Algorithms on Learning Possibilities of Artificial Neural Networks
by Martin Kotyrba, Eva Volna, Hashim Habiballa and Josef Czyz
Computers 2022, 11(5), 70; https://doi.org/10.3390/computers11050070 - 29 Apr 2022
Cited by 8 | Viewed by 4018
Abstract
The presented research study focuses on demonstrating the learning ability of a neural network using a genetic algorithm and finding the most suitable neural network topology for solving a demonstration problem. The network topology significantly depends on the level of generalization: a more complex neural network topology tends to fit the particular details of the training set and loses the ability to abstract general information. Therefore, we often design the network topology by taking into account the required generalization, rather than theoretical calculations. The next part of the article investigates whether modifying the parameters of the genetic algorithm can optimize and accelerate the neural network learning process. The function of the neural network and its learning using the genetic algorithm is demonstrated in a program for solving a computer game. The research focuses mainly on assessing the influence of changes in network topology and in genetic algorithm parameters on the achieved results and the speed of neural network training. The achieved results are statistically presented and compared depending on the network topology and changes in the learning algorithm. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
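The core mechanism, a genetic algorithm searching the weight space of a fixed-topology network, can be shown on the XOR problem; the population size, mutation scale, and truncation-selection scheme below are illustrative choices, not the study's settings.

```python
# GA evolving the weights of a tiny 2-4-1 network on XOR (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def forward(w, x):
    W1, b1, W2, b2 = w[:8].reshape(2, 4), w[8:12], w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):                        # higher is better: negative MSE
    return -np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(size=(50, 17))        # 17 weights per individual
for gen in range(200):
    pop = pop[np.argsort([-fitness(w) for w in pop])]
    parents = pop[:10]                 # truncation selection (elitism)
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.3, (40, 17))
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)
print("best MSE:", -fitness(best))
```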
