AI, Volume 5, Issue 2 (June 2024) – 12 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the version of record. To read a paper in PDF, click its "PDF Full-text" link and open it with the free Adobe Reader.
32 pages, 4863 KiB  
Article
From Eye Movements to Personality Traits: A Machine Learning Approach in Blood Donation Advertising
by Stefanos Balaskas, Maria Koutroumani, Maria Rigou and Spiros Sirmakessis
AI 2024, 5(2), 635-666; https://doi.org/10.3390/ai5020034 - 10 May 2024
Abstract
Blood donation heavily depends on voluntary involvement, but the problem of motivating and retaining potential blood donors remains. Understanding the personality traits of donors can assist in this case, bridging communication gaps and increasing participation and retention. To this end, an eye-tracking experiment was designed to examine the viewing behavior of 75 participants as they viewed various blood donation-related advertisements. The purpose of these stimuli was to elicit various types of emotions (positive/negative) and message framings (altruistic/egoistic) to investigate cognitive reactions that arise from donating blood using eye-tracking parameters such as the fixation duration, fixation count, saccade duration, and saccade amplitude. The results indicated significant differences among the eye-tracking metrics, suggesting that visual engagement varies considerably in response to different types of advertisements. The fixation duration also revealed substantial differences in emotions, logo types, and emotional arousal, suggesting that the nature of stimuli can affect how viewers disperse their attention. The saccade amplitude and saccade duration were also affected by the message framings, thus indicating their relevance to eye movement behavior. Generalised linear models (GLMs) showed significant influences of personality trait effects on eye-tracking metrics, including a negative association between honesty–humility and fixation duration and a positive link between openness and both the saccade duration and fixation count. These results indicate that personality traits can significantly impact visual attention processes. The present study broadens the current research frontier by employing machine learning techniques on the collected eye-tracking data to identify personality traits that can influence donation decisions and experiences. 
Participants’ eye movements were analysed to categorize their dominant personality traits using hierarchical clustering, while machine learning algorithms, including Support Vector Machine (SVM), Random Forest, and k-Nearest Neighbours (KNN), were employed to predict personality traits. Among the models, SVM and KNN exhibited high accuracy (86.67%), while Random Forest scored considerably lower (66.67%). This investigation reveals that computational models can infer personality traits from eye movements, which shows great potential for psychological profiling and human–computer interaction. This study integrates psychology research and machine learning, paving the way for further studies on personality assessment by eye tracking. Full article
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
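The pipeline this abstract describes, hierarchical clustering to derive dominant-trait labels followed by SVM, Random Forest, and k-NN classifiers, can be outlined as a short sketch. The synthetic data, feature columns, and cluster count below are illustrative assumptions, not the paper's dataset or hyperparameters.

```python
# Sketch: inferring personality-trait clusters from eye-tracking
# features, then comparing three classifiers, loosely following the
# pipeline the abstract describes. All data here is synthetic.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Rows: 75 participants; columns: fixation duration, fixation count,
# saccade duration, saccade amplitude (synthetic stand-ins).
X = rng.normal(size=(75, 4))

# Step 1: derive "dominant trait" labels by hierarchical clustering.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# Step 2: train and compare the three classifiers on those labels.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 2))
```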
17 pages, 3166 KiB  
Article
Remote Sensing Crop Water Stress Determination Using CNN-ViT Architecture
by Kawtar Lehouel, Chaima Saber, Mourad Bouziani and Reda Yaagoubi
AI 2024, 5(2), 618-634; https://doi.org/10.3390/ai5020033 - 9 May 2024
Abstract
Efficiently determining crop water stress is vital for optimising irrigation practices and enhancing agricultural productivity. In this realm, the synergy of deep learning with remote sensing technologies offers a significant opportunity. This study introduces an innovative end-to-end deep learning pipeline for within-field crop water determination. This involves the following: (1) creating an annotated dataset for crop water stress using Landsat 8 imagery, (2) deploying a standalone vision transformer model ViT, and (3) the implementation of a proposed CNN-ViT model. This approach allows for a comparative analysis between the two architectures, ViT and CNN-ViT, in accurately determining crop water stress. The results of our study demonstrate the effectiveness of the CNN-ViT framework compared to the standalone vision transformer model. The CNN-ViT approach exhibits superior performance, highlighting its enhanced accuracy and generalisation capabilities. The findings underscore the significance of an integrated deep learning pipeline combined with remote sensing data in the determination of crop water stress, providing a reliable and scalable tool for real-time monitoring and resource management contributing to sustainable agricultural practices. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
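The hybrid CNN-ViT idea, convolutional features feeding a transformer, can be sketched minimally in NumPy: one convolution builds a feature map, which is split into patch tokens for a single self-attention layer. All shapes and weights below are illustrative assumptions, not the paper's architecture.

```python
# Minimal CNN-ViT sketch: convolution -> patch tokens -> attention.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))          # single-band image tile
kernel = rng.normal(size=(3, 3))         # one CNN filter

# Valid 3x3 convolution -> 14x14 feature map.
fmap = np.array([[np.sum(img[i:i + 3, j:j + 3] * kernel)
                  for j in range(14)] for i in range(14)])

# Split the feature map into 2x2 patches -> 49 tokens of dimension 4.
tokens = fmap.reshape(7, 2, 7, 2).swapaxes(1, 2).reshape(49, 4)

# Single-head self-attention over the patch tokens.
d = tokens.shape[1]
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
out = attn @ V
print(out.shape)  # (49, 4)
```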
16 pages, 3532 KiB  
Article
Robotics Perception: Intention Recognition to Determine the Handball Occurrence during a Football or Soccer Match
by Mohammad Mehedi Hassan, Stephen Karungaru and Kenji Terada
AI 2024, 5(2), 602-617; https://doi.org/10.3390/ai5020032 - 8 May 2024
Abstract
In football or soccer, a referee controls the game based on the set rules. The decisions made by the referee are final and cannot be appealed. Some of these decisions, especially after a handball event, such as whether to award a penalty kick or a yellow/red card, can greatly affect the final result of a game. It is therefore essential that the referee does not make an error, and our objective is to create a system that can accurately recognize such events and make the correct decision. This study chose handball, an event that occurs in a football game (not to be confused with the sport of handball). We define a handball event using object detection and robotic perception and decide whether it is intentional or not, treating intention recognition as a robotic-perception task akin to emotion recognition. To define handball, we trained a model to detect the hand and the ball, which are the primary objects. We then determined the intention using gaze recognition and finally combined the results to recognize a handball event. On our dataset, the accuracies of hand and ball object detection were 96% and 100%, respectively. With gaze recognition at 100%, when all objects were recognized, intention and handball event recognition were also at 100%. Full article
(This article belongs to the Section AI in Autonomous Systems)
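The decision logic the abstract describes, combining hand and ball detections with gaze-based intention, might look like the following sketch. The box format and the overlap-as-contact rule are illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch: combine detector outputs into a handball call.
def handball_event(hand_box, ball_box, gazing_at_ball):
    """Boxes are (x1, y1, x2, y2); returns (is_handball, intentional)."""
    # Axis-aligned overlap test between the hand and ball boxes.
    overlap = (hand_box[0] < ball_box[2] and ball_box[0] < hand_box[2]
               and hand_box[1] < ball_box[3] and ball_box[1] < hand_box[3])
    if not overlap:
        return False, False
    # Contact occurred; gaze towards the ball suggests intention.
    return True, gazing_at_ball

print(handball_event((0, 0, 10, 10), (5, 5, 15, 15), True))    # (True, True)
print(handball_event((0, 0, 10, 10), (20, 20, 30, 30), True))  # (False, False)
```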
8 pages, 215 KiB  
Communication
Ethical Considerations for Artificial Intelligence Applications for HIV
by Renee Garett, Seungjun Kim and Sean D. Young
AI 2024, 5(2), 594-601; https://doi.org/10.3390/ai5020031 - 7 May 2024
Abstract
Human Immunodeficiency Virus (HIV) is a stigmatizing disease that disproportionately affects African Americans and Latinos among people living with HIV (PLWH). Researchers are increasingly utilizing artificial intelligence (AI) to analyze large amounts of data such as social media data and electronic health records (EHR) for various HIV-related tasks, from prevention and surveillance to treatment and counseling. This paper explores the ethical considerations surrounding the use of AI for HIV with a focus on acceptability, trust, fairness, and transparency. To improve acceptability and trust towards AI systems for HIV, informed consent and a Federated Learning (FL) approach are suggested. In regard to unfairness, stakeholders should be wary of AI systems for HIV further stigmatizing or even being used as grounds to criminalize PLWH. To prevent criminalization, in particular, the application of differential privacy on HIV data generated by data linkage should be studied. Participatory design is crucial in designing the AI systems for HIV to be more transparent and inclusive. To this end, the formation of a data ethics committee and the construction of relevant frameworks and principles may need to be concurrently implemented. Lastly, the question of whether the amount of transparency beyond a certain threshold may overwhelm patients, thereby unexpectedly triggering negative consequences, is posed. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
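One concrete direction the paper raises is applying differential privacy to HIV data produced by data linkage. A minimal sketch of the Laplace mechanism for a count query follows; the query, epsilon, and sensitivity values are illustrative assumptions.

```python
# Laplace mechanism sketch: release a noisy count under epsilon-DP.
import math
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise via inverse-CDF sampling."""
    u = random.random() - 0.5                 # uniform on [-0.5, 0.5)
    scale = sensitivity / epsilon
    return true_count - scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

random.seed(0)
print(round(dp_count(120, epsilon=0.5), 2))   # noisy version of the count 120
```

Smaller epsilon means a larger noise scale and stronger privacy; the noisy count is unbiased, so repeated releases average back towards the true value, which is why a privacy budget matters.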
18 pages, 6698 KiB  
Article
Investigating Training Datasets of Real and Synthetic Images for Outdoor Swimmer Localisation with YOLO
by Mohsen Khan Mohammadi, Toni Schneidereit, Ashkan Mansouri Yarahmadi and Michael Breuß
AI 2024, 5(2), 576-593; https://doi.org/10.3390/ai5020030 - 1 May 2024
Abstract
In this study, we developed and explored a methodical image augmentation technique for swimmer localisation in northern German outdoor lake environments. When it comes to enhancing swimmer safety, a main issue we have to deal with is the lack of real-world training data of such outdoor environments. Natural lighting changes, dynamic water textures, and barely visible swimming persons are key issues to address. We account for these difficulties by adopting an effective background removal technique with available training data. This allows us to edit swimmers into natural environment backgrounds for use in subsequent image augmentation. We created 17 training datasets with real images, synthetic images, and a mixture of both to investigate different aspects and characteristics of the proposed approach. The datasets were used to train YOLO architectures for possible future applications in real-time detection. The trained frameworks were then tested and evaluated on outdoor environment imagery acquired using a safety drone to investigate and confirm their usefulness for outdoor swimmer localisation. Full article
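The compositing step the abstract describes, editing a background-removed swimmer into a lake scene and generating a matching detection label, might be sketched as follows. The array sizes and the normalised YOLO-style label layout are assumptions for illustration.

```python
# Sketch: paste a masked swimmer cut-out into a background and emit a
# YOLO-style (class, x_center, y_center, width, height) label.
import numpy as np

def composite(background, cutout, mask, top, left):
    """Paste cutout where mask > 0; return the image and its label."""
    img = background.copy()
    h, w = cutout.shape[:2]
    region = img[top:top + h, left:left + w]
    img[top:top + h, left:left + w] = np.where(mask[..., None] > 0, cutout, region)
    H, W = background.shape[:2]
    # Normalised label, as YOLO training data expects.
    label = (0, (left + w / 2) / W, (top + h / 2) / H, w / W, h / H)
    return img, label

bg = np.zeros((100, 100, 3), dtype=np.uint8)         # stand-in lake scene
swimmer = np.full((10, 20, 3), 255, dtype=np.uint8)  # stand-in cut-out
mask = np.ones((10, 20), dtype=np.uint8)
img, label = composite(bg, swimmer, mask, top=40, left=30)
print(label)  # (0, 0.4, 0.45, 0.2, 0.1)
```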
21 pages, 14728 KiB  
Article
Development of an Attention Mechanism for Task-Adaptive Heterogeneous Robot Teaming
by Yibei Guo, Chao Huang and Rui Liu
AI 2024, 5(2), 555-575; https://doi.org/10.3390/ai5020029 - 23 Apr 2024
Abstract
The allure of team scale and functional diversity has led to the promising adoption of heterogeneous multi-robot systems (HMRS) in complex, large-scale operations such as disaster search and rescue, site surveillance, and social security. These systems, which coordinate multiple robots of varying functions and quantities, face the significant challenge of accurately assembling robot teams that meet the dynamic needs of tasks with respect to size and functionality, all while maintaining minimal resource expenditure. This paper introduces a pioneering adaptive cooperation method named inner attention (innerATT), crafted to dynamically configure teams of heterogeneous robots in response to evolving task types and environmental conditions. The innerATT method is articulated through the integration of an innovative attention mechanism within a multi-agent actor–critic reinforcement learning framework, enabling the strategic analysis of robot capabilities to efficiently form teams that fulfill specific task demands. To demonstrate the efficacy of innerATT in facilitating cooperation, experimental scenarios encompassing variations in task type (“Single Task”, “Double Task”, and “Mixed Task”) and robot availability are constructed under the themes of “task variety” and “robot availability variety.” The findings affirm that innerATT significantly enhances flexible cooperation, diminishes resource usage, and bolsters robustness in task fulfillment. Full article
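A toy sketch of the attention idea behind innerATT: a task query scores robot capability vectors, and the resulting weights suggest which robots to recruit. The vectors, scoring, and selection threshold are illustrative assumptions, not the paper's trained actor-critic policy.

```python
# Softmax attention of a task query over robot capability keys.
import numpy as np

def team_attention(task, capabilities):
    """Return attention weights of the task over each robot."""
    scores = capabilities @ task / np.sqrt(len(task))
    w = np.exp(scores - scores.max())
    return w / w.sum()

task = np.array([1.0, 0.0, 1.0])          # needs capabilities 0 and 2
robots = np.array([[1.0, 0.0, 1.0],       # generalist: good match
                   [0.0, 1.0, 0.0],       # wrong speciality
                   [1.0, 0.0, 0.9]])      # near match
w = team_attention(task, robots)
# Recruit robots whose weight clearly exceeds a uniform share.
team = [i for i, wi in enumerate(w) if wi > 1 / (2 * len(w))]
print(team)  # robots 0 and 2 selected
```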
5 pages, 174 KiB  
Editorial
Artificial Intelligence in Healthcare: ChatGPT and Beyond
by Tim Hulsen
AI 2024, 5(2), 550-554; https://doi.org/10.3390/ai5020028 - 19 Apr 2024
Abstract
Artificial intelligence (AI), the simulation of human intelligence processes by machines, is having a growing impact on healthcare [...] Full article
17 pages, 8939 KiB  
Article
ANNs Predicting Noisy Signals in Electronic Circuits: A Model Predicting the Signal Trend in Amplification Systems
by Alessandro Massaro
AI 2024, 5(2), 533-549; https://doi.org/10.3390/ai5020027 - 17 Apr 2024
Abstract
In the proposed paper, an artificial neural network (ANN) algorithm is applied to predict the electronic circuit outputs of voltage signals in Industry 4.0/5.0 scenarios. This approach is suitable to predict possible uncorrected behavior of control circuits affected by unknown noises, and to reproduce a testbed method simulating the noise effect influencing the amplification of an input sinusoidal voltage signal, which is a basic and fundamental signal for controlled manufacturing systems. The performed simulations take into account different noise signals changing their time-domain trend and frequency behavior to prove the possibility of predicting voltage outputs when complex signals are considered at the control circuit input, including additive disturbs and noises. The results highlight that it is possible to construct a good ANN training model by processing only the registered voltage output signals without considering the noise profile (which is typically unknown). The proposed model behaves as an electronic black box for Industry 5.0 manufacturing processes automating circuit and machine tuning procedures. By analyzing state-of-the-art ANNs, the study offers an innovative ANN-based versatile solution that is able to process various noise profiles without requiring prior knowledge of the noise characteristics. Full article
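The setup the abstract describes, training a network only on registered output samples of a noisy amplified sine and predicting the signal trend without any noise model, can be sketched with a small regressor. The gain, noise level, and network size below are illustrative assumptions.

```python
# Sketch: fit an MLP to noisy registered outputs of an amplified sine
# and measure how well it recovers the clean trend on held-out samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = 5.0 * np.sin(2 * np.pi * 3 * t)               # amplified input tone
noisy = clean + rng.normal(scale=0.5, size=t.size)    # unknown additive noise

# Train on 300 randomly chosen registered samples; predict the rest.
idx = rng.permutation(t.size)
tr, te = idx[:300], idx[300:]
model = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(t[tr, None], noisy[tr])
rmse = float(np.sqrt(np.mean((model.predict(t[te, None]) - clean[te]) ** 2)))
print(round(rmse, 3))  # should sit near the noise floor, well below the signal amplitude
```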
17 pages, 255 KiB  
Review
Fetal Hypoxia Detection Using Machine Learning: A Narrative Review
by Nawaf Alharbi, Mustafa Youldash, Duha Alotaibi, Haya Aldossary, Reema Albrahim, Reham Alzahrani, Wahbia Ahmed Saleh, Sunday O. Olatunji and May Issa Aldossary
AI 2024, 5(2), 516-532; https://doi.org/10.3390/ai5020026 - 13 Apr 2024
Abstract
Fetal hypoxia is a condition characterized by a lack of oxygen supply to a developing fetus in the womb. It poses serious risks, potentially leading to abnormalities, birth defects, and even mortality. Cardiotocograph (CTG) monitoring is among the techniques that can detect signs of fetal distress, including hypoxia. Given the critical importance of interpreting the results of this test, it is essential to pair it with evolving technology that classifies results into three categories: normal, suspicious, or pathological. Furthermore, Machine Learning (ML) is a blossoming technique constantly developing and aiding in medical studies, particularly fetal health prediction. Notwithstanding the past endeavors of health providers to detect hypoxia in fetuses, implementing ML and Deep Learning (DL) techniques enables more timely and precise detection of fetal hypoxia by efficiently and accurately processing complex patterns in large datasets. Correspondingly, this review paper aims to explore the application of artificial intelligence models using cardiotocographic test data. The anticipated outcome of this review is to offer guidance for future studies on improving accuracy in detecting cases categorized within the suspicious class, an aspect that has proved challenging in previous studies and that holds significant implications for obstetricians in effectively monitoring fetal health and making informed decisions. Full article
(This article belongs to the Section Medical & Healthcare AI)
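The three-class CTG task the review surveys (normal / suspicious / pathological) is typically imbalanced, with the suspicious class under-represented; class weighting is one common tactic for that. The sketch below uses synthetic stand-in features, not real CTG measurements.

```python
# Sketch: imbalanced three-class classification with class weighting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Imbalanced labels: mostly normal (0), fewer suspicious (1),
# few pathological (2).
y = rng.choice([0, 1, 2], size=600, p=[0.7, 0.2, 0.1])
X = rng.normal(size=(600, 8)) + y[:, None]     # class-shifted features

clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X[:500], y[:500])
acc = clf.score(X[500:], y[500:])
print(round(acc, 2))
```

In practice, per-class recall (especially for the suspicious class) is a more informative metric here than overall accuracy.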
12 pages, 814 KiB  
Article
Towards an ELSA Curriculum for Data Scientists
by Maria Christoforaki and Oya Deniz Beyan
AI 2024, 5(2), 504-515; https://doi.org/10.3390/ai5020025 - 11 Apr 2024
Abstract
The use of artificial intelligence (AI) applications in a growing number of domains in recent years has put into focus the ethical, legal, and societal aspects (ELSA) of these technologies and the relevant challenges they pose. In this paper, we propose an ELSA curriculum for data scientists aiming to raise awareness about ELSA challenges in their work, provide them with a common language with the relevant domain experts in order to cooperate to find appropriate solutions, and finally, incorporate ELSA in the data science workflow. ELSA should not be seen as an impediment or a superfluous artefact but rather as an integral part of the Data Science Project Lifecycle. The proposed curriculum uses the CRISP-DM (CRoss-Industry Standard Process for Data Mining) model as a backbone to define a vertical partition expressed in modules corresponding to the CRISP-DM phases. The horizontal partition includes knowledge units belonging to three strands that run through the phases, namely ethical and societal, legal and technical rendering knowledge units (KUs). In addition to the detailed description of the aforementioned KUs, we also discuss their implementation, issues such as duration, form, and evaluation of participants, as well as the variance of the knowledge level and needs of the target audience. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
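The curriculum structure the abstract describes, CRISP-DM phases as the vertical partition (modules) and three strands as the horizontal partition, with knowledge units (KUs) in each cell, can be sketched as a simple data structure. The strand names follow one reading of the abstract ("ethical and societal", "legal", "technical") and the example KU titles are illustrative assumptions.

```python
# Sketch: CRISP-DM phases x ELSA strands grid, each cell holding KUs.
PHASES = ["Business Understanding", "Data Understanding", "Data Preparation",
          "Modeling", "Evaluation", "Deployment"]
STRANDS = ["ethical and societal", "legal", "technical"]

# curriculum[phase][strand] -> list of KU titles for that cell.
curriculum = {phase: {strand: [] for strand in STRANDS} for phase in PHASES}
curriculum["Data Preparation"]["legal"].append("GDPR lawful bases for processing")
curriculum["Modeling"]["ethical and societal"].append("Bias and fairness metrics")

total_kus = sum(len(kus) for cells in curriculum.values()
                for kus in cells.values())
print(total_kus)  # 2
```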
22 pages, 5272 KiB  
Article
ECARRNet: An Efficient LSTM-Based Ensembled Deep Neural Network Architecture for Railway Fault Detection
by Salman Ibne Eunus, Shahriar Hossain, A. E. M. Ridwan, Ashik Adnan, Md. Saiful Islam, Dewan Ziaul Karim, Golam Rabiul Alam and Jia Uddin
AI 2024, 5(2), 482-503; https://doi.org/10.3390/ai5020024 - 8 Apr 2024
Abstract
Accidents due to defective railway lines and derailments are common disasters that are observed frequently in Southeast Asian countries. It is imperative to detect and diagnose such faults properly to prevent these accidents. However, periodic manual inspection can be both time-consuming and costly. In this paper, we propose a Deep Learning (DL)-based algorithm for automatic fault detection in railway tracks, which we term an Ensembled Convolutional Autoencoder ResNet-based Recurrent Neural Network (ECARRNet). We compared its output with existing DL techniques in the form of several pre-trained DL models to investigate railway tracks and determine whether they are defective or not, considering commonly prevalent faults such as defects in rails and fasteners. Moreover, we manually collected images from different railway tracks situated in Bangladesh and compiled our own dataset. After comparing our proposed model with the existing models, we found that our architecture produced the highest accuracy among all the existing state-of-the-art (SOTA) architectures, with an accuracy of 93.28% on the full dataset. Additionally, we split our dataset into two parts covering the two fault types, rails and fasteners, and ran the models on those two separate datasets, obtaining accuracies of 98.59% and 92.06% on rails and fasteners, respectively. Model explainability techniques such as Grad-CAM and LIME were used to validate the results, where our proposed ECARRNet classified faults correctly and detected faulty railway regions more effectively than the existing transfer learning models. Full article
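One simple way to read the "ensembled" part of the approach is fusing the sub-networks' per-image defect probabilities, for example by averaging; the sketch below illustrates that generic idea with made-up numbers, and the paper's actual fusion scheme may differ.

```python
# Sketch: average per-model defect probabilities and apply a threshold.
def ensemble_defect(probabilities, threshold=0.5):
    """Fuse model outputs by mean probability; return (avg, is_defective)."""
    avg = sum(probabilities) / len(probabilities)
    return avg, avg >= threshold

# Hypothetical outputs from three sub-networks for one track image.
avg, defective = ensemble_defect([0.9, 0.7, 0.4])
print(round(avg, 2), defective)  # 0.67 True
```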
17 pages, 4056 KiB  
Article
Visual Analytics in Explaining Neural Networks with Neuron Clustering
by Gulsum Alicioglu and Bo Sun
AI 2024, 5(2), 465-481; https://doi.org/10.3390/ai5020023 - 5 Apr 2024
Abstract
Deep learning (DL) models have achieved state-of-the-art performance in many domains. The interpretation of their working mechanisms and decision-making process is essential because of their complex structure and black-box nature, especially for sensitive domains such as healthcare. Visual analytics (VA) combined with DL methods have been widely used to discover data insights, but they often encounter visual clutter (VC) issues. This study presents a compact neural network (NN) view design to reduce the visual clutter in explaining the DL model components for domain experts and end users. We utilized clustering algorithms to group hidden neurons based on their activation similarities. This design supports the overall and detailed view of the neuron clusters. We used a tabular healthcare dataset as a case study. The design for clustered results reduced visual clutter among neuron representations by 54% and connections by 88.7% and helped to observe similar neuron activations learned during the training process. Full article
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
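The neuron-grouping step the abstract describes, clustering hidden neurons by activation similarity so the network view draws one glyph per cluster instead of per neuron, can be sketched as follows. The network size, input count, and cluster count are illustrative assumptions.

```python
# Sketch: cluster hidden-neuron activation vectors to reduce clutter.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# 64 hidden neurons; each row is one neuron's activations over 200 inputs.
# Four synthetic activation patterns stand in for learned behaviour.
activations = np.vstack([rng.normal(loc=c, size=(16, 200)) for c in range(4)])

labels = AgglomerativeClustering(n_clusters=4).fit_predict(activations)
n_drawn = len(set(labels))
print(f"neuron glyphs drawn: {activations.shape[0]} -> {n_drawn}")
```

Drawing one glyph per cluster (and one bundled edge per cluster pair) is what yields the clutter reductions the study reports.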