Search Results (3,542)

Search Parameters:
Keywords = generative artificial network

20 pages, 6262 KiB  
Article
Data-Based Modeling and Control of a Single Link Soft Robotic Arm
by David Abraham Morales-Enríquez, Jaime Guzmán-López, Raúl Alejandro Aguilar-Ramírez, Jorge Luis Lorenzo-Martínez, Daniel Sapién-Garza, Ricardo Cortez, Norma Lozada-Castillo and Alberto Luviano-Juárez
Biomimetics 2025, 10(5), 294; https://doi.org/10.3390/biomimetics10050294 - 6 May 2025
Abstract
In this work, the position control of a cable-driven soft robot is proposed through the approximation of its kinematic model. This approximation is derived from artificial learning rules via neural networks and experimentally observed data. To improve the learning process, active sampling is combined with Model-Agnostic Meta-Learning to refine the data-based model used in the control stage; control is then achieved through the inverse velocity kinematics derived from the data-based model, together with a self-differentiation procedure that yields the pseudo-inverse of the robot Jacobian. The proposal is verified on a designed and constructed cable-driven soft robot with three actuators and position measurement through a vision system with three-dimensional motion. Some preliminary assessments (tension and repeatability) were performed to validate the robot movement generation, and, finally, a 3D reference trajectory was tracked using the proposed approach, achieving competitive tracking errors. Full article
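
A minimal sketch of the core control idea described above, assuming a small PyTorch network and synthetic data in place of the vision-system measurements: fit a network to observed (cable displacement, tip position) pairs, differentiate the learned forward map to obtain the Jacobian, and use its pseudo-inverse for inverse velocity kinematics. This is an illustration, not the authors' implementation.

```python
# Sketch: learn a soft robot's forward kinematics from data, then use the
# pseudo-inverse of the learned Jacobian for inverse velocity kinematics.
import torch
import torch.nn as nn

class ForwardKinematics(nn.Module):
    """Maps cable displacements q (3,) to tip position x (3,)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 3))
    def forward(self, q):
        return self.net(q)

model = ForwardKinematics()

# --- training on observed (q, x) pairs (synthetic placeholders here) ---
q_data = torch.rand(1000, 3)
x_data = torch.rand(1000, 3)          # would come from the vision system
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(q_data), x_data)
    loss.backward()
    opt.step()

# --- resolved-rate control step: q_dot = J^+ (x_ref_dot + K * error) ---
def control_step(q, x_ref, x_ref_dot, K=2.0):
    J = torch.autograd.functional.jacobian(model, q)   # 3x3 learned Jacobian
    J_pinv = torch.linalg.pinv(J)
    error = x_ref - model(q)
    return J_pinv @ (x_ref_dot + K * error)            # commanded cable rates

q = torch.zeros(3)
q_dot = control_step(q, torch.tensor([0.1, 0.0, 0.05]), torch.zeros(3))
```
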
27 pages, 663 KiB  
Systematic Review
Advances in the Automated Identification of Individual Tree Species: A Systematic Review of Drone- and AI-Based Methods in Forest Environments
by Ricardo Abreu-Dias, Juan M. Santos-Gago, Fernando Martín-Rodríguez and Luis M. Álvarez-Sabucedo
Technologies 2025, 13(5), 187; https://doi.org/10.3390/technologies13050187 - 6 May 2025
Abstract
The classification and identification of individual tree species in forest environments are critical for biodiversity conservation, sustainable forestry management, and ecological monitoring. Recent advances in drone technology and artificial intelligence have enabled new methodologies for detecting and classifying trees at an individual level. However, significant challenges persist, particularly in heterogeneous forest environments with high species diversity and complex canopy structures. This systematic review explores the latest research on drone-based data collection and AI-driven classification techniques, focusing on studies that classify specific tree species rather than generic tree detection. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, peer-reviewed studies from the last decade were analyzed to identify trends in data acquisition instruments (e.g., RGB, multispectral, hyperspectral, LiDAR), preprocessing techniques, segmentation approaches, and machine learning (ML) algorithms used for classification. The findings of this study reveal that deep learning (DL) models, particularly convolutional neural networks (CNNs), are increasingly replacing traditional ML methods such as random forests (RF) or support vector machines (SVMs) because they do not require an explicit feature extraction phase, this being implicit in the DL models. The integration of LiDAR with hyperspectral imaging further enhances classification accuracy but remains limited due to cost constraints. Additionally, we discuss the challenges of model generalization across different forest ecosystems and propose future research directions, including the development of standardized datasets and improved model architectures for robust tree species classification. This review provides a comprehensive synthesis of existing methodologies, highlighting both advancements and persistent gaps in AI-driven forest monitoring. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)
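
As a generic illustration of the deep learning workflow the review surveys (not any particular study's pipeline), a pretrained CNN can be fine-tuned on individual tree-crown image patches. The folder layout, class set, and torchvision version (0.13+ for the weights argument) are assumptions.

```python
# Sketch: fine-tune a pretrained CNN to classify tree-crown patches by species.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: crowns/train/<species_name>/<patch>.png
train_set = datasets.ImageFolder("crowns/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:              # one epoch shown
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```
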
26 pages, 4881 KiB  
Article
Generative Neural Networks for Addressing the Bioequivalence of Highly Variable Drugs
by Anastasios Nikolopoulos and Vangelis D. Karalis
Algorithms 2025, 18(5), 266; https://doi.org/10.3390/a18050266 - 4 May 2025
Viewed by 59
Abstract
Bioequivalence assessment of highly variable drugs (HVDs) remains a significant challenge, as the application of scaled approaches requires replicate designs and complex statistical analyses, and varies between regulatory authorities (e.g., FDA and EMA). This study introduces the use of artificial intelligence, specifically Wasserstein Generative Adversarial Networks (WGANs), as a novel approach for bioequivalence studies of HVDs. Monte Carlo simulations were conducted to evaluate the performance of WGANs across various variability levels, population sizes, and data augmentation scales (2× and 3×). The generated data were tested for bioequivalence acceptance using both EMA and FDA scaled approaches. The WGAN approach, even when applied without scaling, consistently outperformed the scaled EMA/FDA methods by effectively reducing the required sample size. Furthermore, the WGAN approach not only minimizes the sample size needed for bioequivalence studies of HVDs but also eliminates the need for complex, costly, and time-consuming replicate designs that are prone to high dropout rates. This study demonstrates that using WGANs with 3× data augmentation can achieve bioequivalence acceptance rates exceeding 89% across all FDA and EMA criteria, with 10 out of 18 scenarios reaching 100%, highlighting the WGAN method's potential to transform the design and efficiency of bioequivalence studies. This is a foundational step in utilizing WGANs for the bioequivalence assessment of HVDs, highlighting that, with clear regulatory criteria, a new era for bioequivalence evaluation can begin. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
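
A minimal WGAN sketch in the spirit of the study, though not the authors' exact architecture or data: a generator and critic trained with the Wasserstein loss and weight clipping augment a small set of tabular pharmacokinetic endpoints. All dimensions and hyperparameters are illustrative.

```python
# Sketch: Wasserstein GAN with weight clipping for augmenting small tabular
# datasets (e.g., log-transformed Cmax/AUC pairs). All sizes are illustrative.
import torch
import torch.nn as nn

LATENT, FEATURES = 8, 2   # e.g., two PK endpoints per subject

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))
opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.randn(24, FEATURES)          # stand-in for observed subjects

for step in range(2000):
    # --- critic: maximize E[D(real)] - E[D(fake)] ---
    for _ in range(5):
        z = torch.randn(real.size(0), LATENT)
        fake = G(z).detach()
        loss_D = -(D(real).mean() - D(fake).mean())
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
        for p in D.parameters():          # weight clipping (original WGAN)
            p.data.clamp_(-0.01, 0.01)
    # --- generator: maximize E[D(fake)] ---
    z = torch.randn(real.size(0), LATENT)
    loss_G = -D(G(z)).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

# Roughly 3x augmentation: observed subjects plus twice as many generated ones.
augmented = torch.cat([real, G(torch.randn(2 * real.size(0), LATENT))])
```
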

8 pages, 2864 KiB  
Perspective
Wireless Optogenetic Microsystems Accelerate Artificial Intelligence–Neuroscience Coevolution Through Embedded Closed-Loop System
by Sungcheol Hong
Micromachines 2025, 16(5), 557; https://doi.org/10.3390/mi16050557 - 3 May 2025
Viewed by 179
Abstract
Brain-inspired models in artificial intelligence (AI) originated from foundational insights in neuroscience. In recent years, this relationship has been moving toward a mutually reinforcing feedback loop. Currently, AI is significantly contributing to advancing our understanding of neuroscience. In particular, when combined with wireless optogenetics, AI enables experiments without physical constraints. Furthermore, AI-driven real-time analysis facilitates closed-loop control, allowing experimental setups across a diverse range of scenarios. A deeper understanding of these neural networks may, in turn, contribute to future advances in AI. This work demonstrates the synergy between AI and miniaturized neural technology, particularly through wireless optogenetic systems designed for closed-loop neural control. We highlight how AI is now revolutionizing neuroscience experiments, from decoding complex neural signals and quantifying behavior to enabling closed-loop interventions and high-throughput phenotyping in freely moving subjects. Notably, AI-integrated wireless implants can monitor and modulate biological processes with unprecedented precision. We then recount how insights derived from AI-integrated neuroscience experiments can potentially inspire the next generation of machine intelligence. Insights gained from these technologies loop back to inspire more efficient and robust AI systems. We discuss future directions in this positive feedback loop between AI and neuroscience, arguing that the coevolution of the two fields, grounded in technologies like wireless optogenetics and guided by reciprocal insight, will accelerate progress in both, while raising new challenges and opportunities for interdisciplinary collaboration. Full article
(This article belongs to the Section B:Biology and Biomedicine)
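
The closed-loop principle discussed here can be sketched as a simple sense-decide-actuate loop: decode a feature from the recorded signal, apply a decision rule (a fixed threshold standing in for a trained model), and command the implant's LED. The device hooks below are hypothetical placeholders, not a real driver API.

```python
# Sketch: AI-in-the-loop optogenetic control. read_neural_frame() and
# set_led() are hypothetical device hooks, not a real driver API.
import numpy as np

FS = 1000                 # sampling rate (Hz), illustrative
THRESHOLD = 2.0           # decision rule standing in for a trained model

def read_neural_frame(n=FS // 10):
    """Placeholder: return 100 ms of recorded signal."""
    return np.random.randn(n)

def set_led(on: bool):
    """Placeholder: command the wireless implant's LED."""
    print("LED", "ON" if on else "OFF")

def band_power(x, fs, lo=4, hi=8):
    """Theta-band power via FFT, the decoded feature in this sketch."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

for _ in range(100):                     # 100 control cycles (~10 s)
    frame = read_neural_frame()
    set_led(band_power(frame, FS) > THRESHOLD)
```
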

23 pages, 1539 KiB  
Review
Role and Potential of Artificial Intelligence in Biomarker Discovery and Development of Treatment Strategies for Amyotrophic Lateral Sclerosis
by Yoshihiro Kitaoka, Toshihiro Uchihashi, So Kawata, Akira Nishiura, Toru Yamamoto, Shin-ichiro Hiraoka, Yusuke Yokota, Emiko Tanaka Isomura, Mikihiko Kogo, Susumu Tanaka, Igor Spigelman and Soju Seki
Int. J. Mol. Sci. 2025, 26(9), 4346; https://doi.org/10.3390/ijms26094346 - 2 May 2025
Viewed by 279
Abstract
Neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), present significant challenges owing to their complex pathologies and a lack of curative treatments. Early detection and reliable biomarkers are critical but remain elusive. Artificial intelligence (AI) has emerged as a transformative tool, enabling advancements in biomarker discovery, diagnostic accuracy, and therapeutic development. From optimizing clinical-trial designs to leveraging omics and neuroimaging data, AI facilitates understanding of disease and treatment innovation. Notably, technologies such as AlphaFold and deep learning models have revolutionized proteomics and neuroimaging, offering unprecedented insights into ALS pathophysiology. This review highlights the intersection of AI and ALS, exploring the current state of progress and future therapeutic prospects. Full article

25 pages, 3197 KiB  
Article
A Bio-Inspired Learning Dendritic Motion Detection Framework with Direction-Selective Horizontal Cells
by Tianqi Chen, Yuki Todo, Zhiyu Qiu, Yuxiao Hua, Hiroki Sugiura and Zheng Tang
Biomimetics 2025, 10(5), 286; https://doi.org/10.3390/biomimetics10050286 - 2 May 2025
Viewed by 96
Abstract
Motion direction detection is an essential task for both computer vision and neuroscience. Inspired by the biological theory of the human visual system, we propose a learnable horizontal-cell-based dendritic neuron model (HCdM) that captures motion direction with high efficiency while remaining highly robust. Unlike current deep learning models, which rely on extensive computation and global feature extraction, the HCdM mimics the localized processing of dendritic neurons, enabling efficient motion feature integration. Through synaptic learning that prunes unnecessary connections, our model maintains high accuracy on noisy images, particularly against salt-and-pepper noise. Experimental results show that the HCdM reached over 99.5% test accuracy, maintained robust performance under 10% salt-and-pepper noise, and achieved cross-dataset generalization exceeding 80% in certain conditions. Comparisons with state-of-the-art (SOTA) models such as vision transformers (ViTs) and convolutional neural networks (CNNs) demonstrate the HCdM's robustness and efficiency. Additionally, in contrast to previous artificial visual systems (AVSs), our findings suggest that lateral geniculate nucleus (LGN) structures, though present in biological vision, may not be essential for motion direction detection. This insight provides a new direction for bio-inspired computational models. Future research will focus on hybridizing the HCdM with SOTA models that perform well on complex visual scenes to enhance its adaptability. Full article
(This article belongs to the Special Issue Dendritic Neuron Model: Theory, Design, Optimization and Applications)
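
For intuition about localized, correlation-based motion computation (a classic elementary motion detector, not the HCdM itself), direction can be read out by correlating one frame with spatially shifted copies of the previous frame; the toy frames below are assumptions.

```python
# Sketch: correlation-based elementary motion direction detector on two frames.
import numpy as np

def direction_votes(frame_t, frame_t1):
    """Score left/right/up/down by correlating frame_t1 with shifted frame_t."""
    votes = {}
    shifts = {"right": (0, 1), "left": (0, -1), "down": (1, 0), "up": (-1, 0)}
    for name, (dy, dx) in shifts.items():
        shifted = np.roll(frame_t, shift=(dy, dx), axis=(0, 1))
        votes[name] = float((shifted * frame_t1).sum())
    return max(votes, key=votes.get), votes

# Toy example: a bright bar moving one pixel to the right.
f0 = np.zeros((16, 16)); f0[:, 5] = 1.0
f1 = np.zeros((16, 16)); f1[:, 6] = 1.0
print(direction_votes(f0, f1)[0])        # -> "right"
```
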
27 pages, 1758 KiB  
Article
Cybersecure XAI Algorithm for Generating Recommendations Based on Financial Fundamentals Using DeepSeek
by Iván García-Magariño, Javier Bravo-Agapito and Raquel Lacuesta
AI 2025, 6(5), 95; https://doi.org/10.3390/ai6050095 - 2 May 2025
Viewed by 208
Abstract
Background: Investment decisions in stocks are among the most complex tasks due to the uncertainty over which stocks will increase or decrease in value. A diversified portfolio statistically reduces the risk; however, stock choice still substantially influences the profitability. Methods: This work proposes a methodology to automate investment decision recommendations with clear explanations. It utilizes generative AI, guided by prompt engineering, to interpret price predictions derived from neural networks. The methodology also includes the Artificial Intelligence Trust, Risk, and Security Management (AI TRiSM) model to provide robust security recommendations for the system. The proposed system provides long-term investment recommendations based on the financial fundamentals of companies, such as the price-to-earnings ratio (PER) and the net margin of profits over the total revenue. The proposed explainable artificial intelligence (XAI) system uses DeepSeek for describing recommendations and suggested companies, as well as several charts based on Shapley additive explanation (SHAP) values and local interpretable model-agnostic explanations (LIMEs) for showing feature importance. Results: In the experiments, we compared the profitability of the proposed portfolios, ranging from 8 to 28 stocks, with the maximum expected price increases over 4 years in the NASDAQ-100 and S&P 500, considering both bull and bear markets, respectively before and after the increases in customs duties on international trade by the USA in April 2025. The proposed system achieved an average profitability of 56.62% across 120 different portfolio recommendations. Conclusions: A Student's t-test confirmed that the difference in profitability compared to the index was statistically significant. A user study revealed that the participants agreed that the portfolio explanations were useful for trusting the system, with an average score of 6.14 on a 7-point Likert scale. Full article
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)
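
A rough sketch of the explainability layer described above: a tree-based model predicts long-term return from financial fundamentals, and SHAP values attribute each recommendation to individual features. The feature names, synthetic data, and model choice are illustrative assumptions, and the DeepSeek narration step is omitted.

```python
# Sketch: SHAP feature attributions for a fundamentals-based return predictor.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "per": rng.uniform(5, 60, 500),          # price-to-earnings ratio
    "net_margin": rng.uniform(-0.1, 0.4, 500),
    "revenue_growth": rng.uniform(-0.2, 0.5, 500),
})
# Synthetic target: long-term return loosely driven by margin and growth.
y = 0.8 * X["net_margin"] + 0.5 * X["revenue_growth"] - 0.002 * X["per"] \
    + rng.normal(0, 0.05, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])   # per-stock attributions
print(pd.DataFrame(shap_values, columns=X.columns).round(3))
```
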

17 pages, 2743 KiB  
Article
DeepRT: A Hybrid Framework Combining Large Model Architectures and Ray Tracing Principles for 6G Digital Twin Channels
by Mingyue Li, Tao Wu, Zhirui Dong, Xiao Liu, Yiwen Lu, Shuo Zhang, Zerui Wu, Yuxiang Zhang, Li Yu and Jianhua Zhang
Electronics 2025, 14(9), 1849; https://doi.org/10.3390/electronics14091849 - 1 May 2025
Viewed by 86
Abstract
With the growing demand for wireless communication, the sixth-generation (6G) wireless network will be more complex. The digital twin channel (DTC) is envisioned as a promising enabler for 6G, as it can create an online replica of the physical channel characteristics in the digital world, thereby supporting precise and adaptive communication decisions for 6G. In this article, we systematically review and summarize the existing efforts in realizing the DTC, providing a comprehensive analysis of ray tracing (RT), artificial intelligence (AI), and large model approaches. Based on this analysis, we further explore the potential of integrating large models with RT methods. By leveraging the strong generalization, multi-task processing, and multi-modal fusion capabilities of large models, while incorporating physical priors from RT as expert knowledge to guide their training, there is a strong possibility of fulfilling the fast online inference and precise mapping requirements of the DTC. Therefore, we propose a novel DeepRT-enabled DTC (DRT-DTC) framework, which combines physical laws with large models like DeepSeek, offering a new vision for realizing the DTC. Two case studies are presented to demonstrate the feasibility of this approach, validating the effectiveness of physical-law-based AI methods and large models in generating the DTC. Full article
(This article belongs to the Special Issue Integrated Sensing and Communications for 6G)
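
One way to read "physical priors from RT guiding a learned model" is a residual scheme: a deterministic physics baseline (free-space path loss here, standing in for a full ray tracer) plus a small network that learns only the correction from measured data. The following is a hedged toy sketch, not the DeepRT framework.

```python
# Sketch: physics-prior baseline (free-space path loss) + learned residual.
import torch
import torch.nn as nn

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (the physics prior in this toy example)."""
    c = 3e8
    return 20 * torch.log10(distance_m) + 20 * torch.log10(freq_hz) \
        + 20 * torch.log10(torch.tensor(4 * torch.pi / c))

residual_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(residual_net.parameters(), lr=1e-3)

# Synthetic "measurements": FSPL plus an unmodeled distance-dependent effect.
d = torch.rand(512, 1) * 190 + 10                 # 10..200 m
freq = torch.tensor(3.5e9)                        # 3.5 GHz carrier, assumed
measured = fspl_db(d, freq) + 10 * torch.log1p(d / 50) + torch.randn_like(d)

for _ in range(500):
    opt.zero_grad()
    pred = fspl_db(d, freq) + residual_net(d / 200)   # prior + correction
    loss = nn.functional.mse_loss(pred, measured)
    loss.backward()
    opt.step()
```
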

21 pages, 827 KiB  
Review
AI-Powered Object Detection in Radiology: Current Models, Challenges, and Future Direction
by Abdussalam Elhanashi, Sergio Saponara, Qinghe Zheng, Nawal Almutairi, Yashbir Singh, Shiba Kuanar, Farzana Ali, Orhan Unal and Shahriar Faghani
J. Imaging 2025, 11(5), 141; https://doi.org/10.3390/jimaging11050141 - 30 Apr 2025
Viewed by 182
Abstract
Artificial intelligence (AI)-based object detection in radiology can assist in clinical diagnosis and treatment planning. This article examines the AI-based object detection models currently used across many imaging modalities, including X-ray, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Ultrasound (US). The key models, from convolutional neural networks (CNNs) to contemporary transformer and hybrid models, are analyzed based on their ability to detect pathological features, such as tumors, lesions, and tissue abnormalities. In addition, this review offers a closer look at the strengths and weaknesses of these models in terms of accuracy, robustness, and speed in real clinical settings. The common issues related to these models, including limited data, annotation quality, and interpretability of AI decisions, are discussed in detail. Moreover, the need for robust models applicable across different populations and imaging modalities is addressed. The importance of privacy and ethics in general data use, as well as of safety and regulations for healthcare data, is emphasized. The future potential of these models lies in their accessibility in low-resource settings, usability in shared learning spaces while maintaining privacy, and improvement in diagnostic accuracy through multimodal learning. This review also highlights the importance of interdisciplinary collaboration among artificial intelligence researchers, radiologists, and policymakers. Such cooperation is essential to address current challenges and to fully realize the potential of AI-based object detection in radiology. Full article
(This article belongs to the Special Issue Learning and Optimization for Medical Imaging)
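
As a generic example of CNN-based detection in medical images (not a specific model from this review), torchvision's Faster R-CNN can be adapted by replacing its box predictor with a task-specific head; the two-class background-plus-lesion setup, image sizes, and torchvision 0.13+ weights API are assumptions.

```python
# Sketch: adapt torchvision's Faster R-CNN for single-class lesion detection.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2   # background + "lesion" (assumed setup)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training step expects images plus per-image target dicts (boxes, labels).
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)           # dict of classification/box losses
total_loss = sum(losses.values())

# Inference returns boxes, labels, and confidence scores per image.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])[0]
print(detections["boxes"].shape, detections["scores"].shape)
```
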

38 pages, 1484 KiB  
Review
Enhancing Radiologist Productivity with Artificial Intelligence in Magnetic Resonance Imaging (MRI): A Narrative Review
by Arun Nair, Wilson Ong, Aric Lee, Naomi Wenxin Leow, Andrew Makmur, Yong Han Ting, You Jun Lee, Shao Jin Ong, Jonathan Jiong Hao Tan, Naresh Kumar and James Thomas Patrick Decourcy Hallinan
Diagnostics 2025, 15(9), 1146; https://doi.org/10.3390/diagnostics15091146 - 30 Apr 2025
Viewed by 322
Abstract
Artificial intelligence (AI) shows promise in streamlining MRI workflows by reducing radiologists’ workload and improving diagnostic accuracy. Despite MRI’s extensive clinical use, systematic evaluation of AI-driven productivity gains in MRI remains limited. This review addresses that gap by synthesizing evidence on how AI can shorten scanning and reading times, optimize worklist triage, and automate segmentation. On 15 November 2024, we searched PubMed, EMBASE, MEDLINE, Web of Science, Google Scholar, and Cochrane Library for English-language studies published between 2000 and 15 November 2024, focusing on AI applications in MRI. Additional searches of grey literature were conducted. After screening for relevance and full-text review, 67 studies met the inclusion criteria. Extracted data included study design, AI techniques, and productivity-related outcomes such as time savings and diagnostic accuracy. The included studies were categorized into five themes: reducing scan times, automating segmentation, optimizing workflow, decreasing reading times, and general time-saving or workload reduction. Convolutional neural networks (CNNs), especially architectures like ResNet and U-Net, were commonly used for tasks ranging from segmentation to automated reporting. A few studies also explored machine learning-based automation software and, more recently, large language models. Although most studies demonstrated gains in efficiency and accuracy, limited external validation and dataset heterogeneity could hinder broader adoption. AI applications in MRI offer the potential to enhance radiologist productivity, mainly through accelerated scans, automated segmentation, and streamlined workflows. Further research, including prospective validation and standardized metrics, is needed to enable safe, efficient, and equitable deployment of AI tools in clinical MRI practice. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)

19 pages, 4692 KiB  
Article
Scalable Semantic Adaptive Communication for Task Requirements in WSNs
by Hong Yang, Xiaoqing Zhu, Jia Yang, Ji Li, Linbo Qing, Xiaohai He and Pingyu Wang
Sensors 2025, 25(9), 2823; https://doi.org/10.3390/s25092823 - 30 Apr 2025
Viewed by 87
Abstract
Wireless Sensor Networks (WSNs) have emerged as an efficient solution for numerous real-time applications, attributable to their compactness, cost effectiveness, and ease of deployment. The rapid advancement of the Internet of Things (IoT), Artificial Intelligence (AI), sixth-generation mobile communication technology (6G), and Mobile Edge Computing (MEC) in recent years has catalyzed the transition towards large-scale deployment of WSN devices and shifted image sensing and understanding towards novel modes (such as machine-to-machine or human-to-machine interactions). However, the resulting data proliferation and the dynamics of communication environments introduce new challenges for WSN communication: (1) ensuring robust communication in adverse environments and (2) effectively alleviating bandwidth pressure from massive data transmission. To address these issues, this paper proposes a Scalable Semantic Adaptive Communication (SSAC) framework for task requirements. Firstly, we design an Attention Mechanism-based Joint Source Channel Coding (AMJSCC) scheme in order to fully exploit the correlation among semantic features, channel conditions, and tasks. Then, a Prediction Scalable Semantic Generator (PSSG) is constructed to implement scalable semantics, allowing for flexible adjustments to achieve channel adaptation. The experimental results show that the proposed SSAC is more robust than traditional and other semantic communication algorithms in image classification tasks, and achieves scalable compression rates without sacrificing classification performance, while improving the bandwidth utilization of the communication system. Full article
(This article belongs to the Special Issue 6G Communication and Edge Intelligence in Wireless Sensor Networks)
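
A stripped-down sketch of the joint source-channel coding idea behind AMJSCC, omitting the attention mechanism and the scalable semantic generator that are specific to the paper: an encoder maps an image to a power-normalized latent, an AWGN channel perturbs it at a chosen SNR, and a decoder performs the downstream classification task. Shapes and SNR values are illustrative.

```python
# Sketch: task-oriented joint source-channel coding over an AWGN channel.
import torch
import torch.nn as nn

class JSCCClassifier(nn.Module):
    def __init__(self, latent_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(), nn.Linear(32 * 8 * 8, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def channel(self, z, snr_db):
        """Normalize transmit power to 1, then add Gaussian noise at snr_db."""
        z = z / (z.pow(2).mean(dim=1, keepdim=True).sqrt() + 1e-8)
        noise_std = 10 ** (-snr_db / 20)
        return z + noise_std * torch.randn_like(z)

    def forward(self, x, snr_db=10.0):
        return self.decoder(self.channel(self.encoder(x), snr_db))

model = JSCCClassifier()
logits = model(torch.rand(8, 3, 32, 32), snr_db=5.0)   # 8 images, low SNR
print(logits.shape)                                     # (8, 10)
```
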

26 pages, 4592 KiB  
Review
Recent Progress in Organic Optoelectronic Synaptic Devices
by Min He and Xin Tang
Photonics 2025, 12(5), 435; https://doi.org/10.3390/photonics12050435 - 30 Apr 2025
Viewed by 112
Abstract
Organic semiconductors hold immense promise in the field of optoelectronic synapses due to their tunable optoelectronic properties, mechanical flexibility, and biocompatibility. This review article provides a comprehensive overview of recent advancements in organic optoelectronic synaptic devices. We delve into the fundamental concepts and classifications of these devices, examine their roles and operational mechanisms, and explore their diverse application scenarios. Additionally, we highlight the current challenges and emerging opportunities in this field, outlining a forward-looking path for the future development and application of these materials and devices in next-generation artificial intelligence (AI). We emphasize the potential of further optimizing organic materials and devices, which could significantly enhance the integration of organic synapses into biointegrated electronics and human–computer interfaces. By addressing key challenges such as material stability, device performance, and scalability, we aim to accelerate the transition from laboratory research to practical applications, paving the way for innovative AI systems that mimic biological neural networks. Full article
(This article belongs to the Special Issue Organic Photodetectors, Displays, and Upconverters)

17 pages, 1712 KiB  
Article
Levenberg–Marquardt Analysis of MHD Hybrid Convection in Non-Newtonian Fluids over an Inclined Container
by Julien Moussa H. Barakat, Zaher Al Barakeh and Raymond Ghandour
Eng 2025, 6(5), 92; https://doi.org/10.3390/eng6050092 - 30 Apr 2025
Viewed by 110
Abstract
This work aims to explore the magnetohydrodynamic mixed convection boundary layer flow (MHD-MCBLF) on a slanted extending cylinder using Eyring–Powell fluid in combination with Levenberg–Marquardt algorithm–artificial neural networks (LMA-ANNs). The thermal properties include thermal stratification, with the cylinder surface at a higher temperature than the surrounding fluid. The mathematical model incorporates essential factors, including mixed convection, thermal layers, heat absorption/generation, geometry curvature, fluid properties, magnetic field intensity, and Prandtl number. Partial differential equations govern the process and are transformed into coupled nonlinear ordinary differential equations with an appropriate change of variables. Datasets are generated for two cases: a flat plate (zero curvature) and a cylinder (non-zero curvature). The applicability of the LMA-ANN solver is demonstrated by solving the MHD-MCBLF problem using regression analysis, mean squared error evaluation, histograms, and gradient analysis. It presents an affordable computational tool for predicting multicomponent reactive and non-reactive thermofluid phase interactions. This study introduces an application of Levenberg–Marquardt algorithm-based artificial neural networks (LMA-ANNs) to solve complex magnetohydrodynamic mixed convection boundary layer flows of Eyring–Powell fluids over inclined stretching cylinders. This approach efficiently approximates solutions to the transformed nonlinear differential equations, demonstrating high accuracy and reduced computational effort. Such advancements are particularly beneficial in industries like polymer processing, biomedical engineering, and thermal management systems, where modeling non-Newtonian fluid behaviors is crucial. Full article
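
The LMA-ANN idea, fitting a small feed-forward network with the Levenberg–Marquardt algorithm, can be sketched with SciPy's least-squares solver; the one-hidden-layer tanh network and the synthetic profile standing in for the boundary-layer solution are assumptions, not the paper's dataset.

```python
# Sketch: train a tiny tanh network with Levenberg-Marquardt (SciPy, method="lm").
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
eta = np.linspace(0, 5, 200)                      # similarity variable
target = np.exp(-eta) * np.cos(eta)               # stand-in velocity profile

H = 10                                            # hidden units

def unpack(p):
    w1 = p[:H].reshape(H, 1)
    b1 = p[H:2 * H]
    w2 = p[2 * H:3 * H]
    b2 = p[3 * H]
    return w1, b1, w2, b2

def net(p, x):
    w1, b1, w2, b2 = unpack(p)
    return np.tanh(x[:, None] @ w1.T + b1) @ w2 + b2

def residuals(p):
    return net(p, eta) - target                   # one residual per sample

p0 = rng.normal(scale=0.5, size=3 * H + 1)
sol = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
print("RMSE:", np.sqrt(np.mean(sol.fun ** 2)))
```
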

27 pages, 10552 KiB  
Article
Enhancing Dongba Pictograph Recognition Using Convolutional Neural Networks and Data Augmentation Techniques
by Shihui Li, Lan Thi Nguyen, Wirapong Chansanam, Natthakan Iam-On and Tossapon Boongoen
Information 2025, 16(5), 362; https://doi.org/10.3390/info16050362 - 29 Apr 2025
Viewed by 212
Abstract
The recognition of Dongba pictographs presents significant challenges due to the limitations of traditional feature extraction methods, the high complexity of classification algorithms, and limited generalization ability. This study proposes a convolutional neural network (CNN)-based image classification method to enhance the accuracy and efficiency of Dongba pictograph recognition. The research begins with collecting and manually categorizing Dongba pictograph images, followed by these preprocessing steps to improve image quality: normalization, grayscale conversion, filtering, denoising, and binarization. The dataset, comprising 70,000 image samples, is categorized into 18 classes based on shape characteristics and manual annotations. A CNN model is then trained using a dataset split into training (70% of the samples), validation (20%), and test (10%) sets. In particular, data augmentation techniques, including rotation, affine transformation, scaling, and translation, are applied to enhance classification accuracy. Experimental results demonstrate that the proposed model achieves a classification accuracy of 99.43% and consistently outperforms other conventional methods, with its performance peaking at 99.84% under optimized training conditions, specifically with 75 training epochs and a batch size of 512. This study provides a robust and efficient solution for automatically classifying Dongba pictographs, contributing to their digital preservation and scholarly research. By leveraging deep learning techniques, the proposed approach facilitates the rapid and precise identification of Dongba hieroglyphs, supporting the ongoing efforts in cultural heritage preservation and the broader application of artificial intelligence in linguistic studies. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
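
A condensed sketch of the training setup described above (augmentation by rotation, affine transformation, scaling, and translation feeding a CNN with 18 output classes); the image size, folder layout, and network depth are illustrative assumptions.

```python
# Sketch: CNN training with rotation/affine/scale/translation augmentation.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((64, 64)),
    transforms.RandomRotation(15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.ToTensor(),
])
# Hypothetical layout: dongba/train/<class_00 .. class_17>/<sample>.png
train_set = datasets.ImageFolder("dongba/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=512, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(), nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
    nn.Linear(256, 18))                                           # 18 classes

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(75):                  # 75 epochs, as reported in the paper
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```
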

29 pages, 2665 KiB  
Review
Data-Driven Learning Models for Internet of Things Security: Emerging Trends, Applications, Challenges and Future Directions
by Oyeniyi Akeem Alimi
Technologies 2025, 13(5), 176; https://doi.org/10.3390/technologies13050176 - 29 Apr 2025
Viewed by 414
Abstract
The prospect of integrating every object under a unified infrastructure, which provides humans with the possibility to monitor, access, and control objects and systems, has played a significant role in the geometric growth of the Internet of Things (IoT) paradigm across various applications. However, despite the numerous possibilities that the IoT paradigm offers, security and privacy within and between the different interconnected devices and systems are integral to the long-term growth of IoT networks. Various sophisticated intrusions and attack variants have continued to plague the sustainability of IoT technologies and networks. Thus, effective methodologies for the prompt identification, detection, and mitigation of these menaces are priorities for stakeholders. Recently, data-driven artificial intelligence (AI) models have been considered effective in numerous applications. Hence, in the recent literature, various single and ensemble models from AI subfields, such as deep learning and reinforcement learning, have been proposed, resulting in effective decision-making for the secure operation of IoT networks. Considering these growth trends, this study presents a critical review of recently published articles in which learning models were proposed for IoT security analysis. The aim is to highlight emerging IoT security issues, current conventional strategies, methodology procedures, achievements, and, importantly, the limitations and research gaps identified in those specific IoT security analysis studies. By doing so, this study provides a research-based resource for scholars researching IoT and general industrial control systems security. Finally, some research gaps, as well as directions for future studies, are discussed. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications)
