Search Results (122)

Search Parameters:
Keywords = edge intelligence accelerators

17 pages, 2525 KB  
Article
Intelligent Compaction System for Soil-Rock Mixture Subgrades: Real-Time Moisture-CMV Fusion Control and Embedded Edge Computing
by Meisheng Shi, Shen Zuo, Jin Li, Junwei Bi, Qingluan Li and Menghan Zhang
Sensors 2025, 25(17), 5491; https://doi.org/10.3390/s25175491 - 3 Sep 2025
Abstract
The compaction quality of soil–rock mixture (SRM) subgrades critically influences infrastructure stability, but conventional settlement difference methods exhibit high spatial sampling bias (error > 15% in heterogeneous zones) and fail to characterize the overall compaction quality. These limitations lead to under-compaction (porosity > 25%) or over-compaction (aggregate fragmentation rate > 40%), highlighting the need for real-time monitoring. This study develops an intelligent compaction system integrating (1) vibration acceleration sensors (PCB 356A16, ±50 g range) for compaction meter value (CMV) acquisition; (2) near-infrared (NIR) moisture meters (NDC CM710E, 1300–2500 nm wavelength) for real-time moisture monitoring (sampling rate 10 Hz); and (3) an embedded edge-computing module (NVIDIA Jetson Nano) for Python-based data fusion (FFT harmonic analysis + moisture correction) with 50 ms processing latency. Field validation on Linlin Expressway shows that the system meets JTG 3430-2020 standards, with the compaction qualification rate reaching 98% (vs. 82% for conventional methods) and 97.6% anomaly detection accuracy. This is the first system integrating NIR moisture correction (R² = 0.96 vs. oven-drying) with CMV harmonic analysis, reducing measurement error by 40% compared to conventional ICT (Bomag ECO Plus). It provides a digital solution for SRM subgrade quality control, enhancing construction efficiency and durability.
(This article belongs to the Special Issue AI and Smart Sensors for Intelligent Transportation Systems)
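The CMV computation the abstract describes is compact enough to sketch. CMV is conventionally the scaled ratio of the second-harmonic to fundamental amplitude of the drum's vertical acceleration spectrum; the scaling constant, correction coefficients, and function names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def compaction_meter_value(accel, fs, drive_freq, c=300.0):
    # Conventional CMV: scaled ratio of the second harmonic to the
    # fundamental in the windowed drum-acceleration spectrum.
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    a1 = spectrum[np.argmin(np.abs(freqs - drive_freq))]
    a2 = spectrum[np.argmin(np.abs(freqs - 2.0 * drive_freq))]
    return c * a2 / a1

def moisture_corrected_cmv(cmv, moisture, w_opt=0.08, k=2.5):
    # Hypothetical linear correction around an assumed optimum moisture
    # content; the paper's fitted NIR regression is not published here.
    return cmv * (1.0 - k * (moisture - w_opt))
```

In a deployment like the one described, `accel` would be a buffered window of the PCB 356A16 signal and `moisture` the latest 10 Hz NIR reading.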

22 pages, 1672 KB  
Article
Optimizing Robotic Disassembly-Assembly Line Balancing with Directional Switching Time via an Improved Q(λ) Algorithm in IoT-Enabled Smart Manufacturing
by Qi Zhang, Yang Xing, Man Yao, Xiwang Guo, Shujin Qin, Haibin Zhu, Liang Qi and Bin Hu
Electronics 2025, 14(17), 3499; https://doi.org/10.3390/electronics14173499 - 1 Sep 2025
Viewed by 210
Abstract
With the growing adoption of circular economy principles in manufacturing, efficient disassembly and reassembly of end-of-life (EOL) products has become a key challenge in smart factories. This paper addresses the Disassembly and Assembly Line Balancing Problem (DALBP), which involves scheduling robotic tasks across workstations while minimizing total operation time and accounting for directional switching time between disassembly and assembly phases. To solve this problem, we propose an improved reinforcement learning algorithm, IQ(λ), which extends the classical Q(λ) method by incorporating eligibility trace decay, a dynamic Action Table mechanism to handle non-conflicting parallel tasks, and switching-aware reward shaping to penalize inefficient task transitions. Compared with standard Q(λ), these modifications enhance the algorithm’s global search capability, accelerate convergence, and improve solution quality in complex DALBP scenarios. While the current implementation does not deploy live IoT infrastructure, the architecture is modular and designed to support future extensions involving edge-cloud coordination, trust-aware optimization, and privacy-preserving learning in Industrial Internet of Things (IIoT) environments. Four real-world disassembly-assembly cases (flashlight, copier, battery, and hammer drill) are used to evaluate the algorithm’s effectiveness. Experimental results show that IQ(λ) consistently outperforms traditional Q-learning, Q(λ), and Sarsa in terms of solution quality, convergence speed, and robustness. Furthermore, ablation studies and sensitivity analysis confirm the importance of the algorithm’s core design components. This work provides a scalable and extensible framework for intelligent scheduling in cyber-physical manufacturing systems and lays a foundation for future integration with secure, IoT-connected environments.
(This article belongs to the Section Networks)
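The classical Q(λ) backup that IQ(λ) builds on is worth sketching for context. A minimal tabular version with accumulating eligibility traces follows; the paper's additions (the dynamic Action Table for non-conflicting parallel tasks and switching-aware reward shaping) are omitted, and all hyperparameter values are placeholders.

```python
import numpy as np

def q_lambda_step(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.95, lam=0.8):
    # One Watkins-style Q(lambda) backup over tabular Q and traces E,
    # both shaped [n_states, n_actions].
    a_star = int(np.argmax(Q[s_next]))            # greedy successor action
    delta = r + gamma * Q[s_next, a_star] - Q[s, a]
    E[s, a] += 1.0                                # accumulate trace at (s, a)
    Q += alpha * delta * E                        # spread TD error along traces
    E *= gamma * lam                              # decay all traces
    return Q, E
```

In Watkins's variant the traces are additionally cut to zero after an exploratory action; IQ(λ)'s switching penalties would enter through the reward r.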

40 pages, 1946 KB  
Review
Climate-Resilient Crops: Integrating AI, Multi-Omics, and Advanced Phenotyping to Address Global Agricultural and Societal Challenges
by Doni Thingujam, Sandeep Gouli, Sachin Promodh Cooray, Katie Busch Chandran, Seth Bradley Givens, Renganathan Vellaichamy Gandhimeyyan, Zhengzhi Tan, Yiqing Wang, Keerthi Patam, Sydney A. Greer, Ranju Acharya, David Octor Moseley, Nesma Osman, Xin Zhang, Megan E. Brooker, Mary Love Tagert, Mark J. Schafer, Changyoon Jeong, Kevin Flynn Hoffseth, Raju Bheemanahalli, J. Michael Wyss, Nuwan Kumara Wijewardane, Jong Hyun Ham and M. Shahid Mukhtar
Plants 2025, 14(17), 2699; https://doi.org/10.3390/plants14172699 - 29 Aug 2025
Viewed by 657
Abstract
Drought and excess ambient temperature intensify abiotic and biotic stresses on agriculture, threatening food security and economic stability. The development of climate-resilient crops is crucial for sustainable, efficient farming. This review highlights the role of multi-omics, encompassing genomics, transcriptomics, proteomics, metabolomics, and epigenomics, in identifying genetic pathways for stress resilience. Advanced phenomics, using drones and hyperspectral imaging, can accelerate breeding programs by enabling high-throughput trait monitoring. Artificial intelligence (AI) and machine learning (ML) enhance these efforts by analyzing large-scale omics and phenotypic data, predicting stress tolerance traits, and optimizing breeding strategies. Additionally, plant-associated microbiomes contribute to stress tolerance and soil health through bioinoculants and synthetic microbial communities. Beyond agriculture, these advancements have broad societal, economic, and educational impacts. Climate-resilient crops can enhance food security, reduce hunger, and support vulnerable regions. AI-driven tools and precision agriculture empower farmers, improving livelihoods and equitable technology access. Educating teachers, students, and future generations fosters awareness and equips them to address climate challenges. Economically, these innovations reduce financial risks, stabilize markets, and promote long-term agricultural sustainability. These cutting-edge approaches can transform agriculture by integrating AI, multi-omics, and advanced phenotyping, ensuring a resilient and sustainable global food system amid climate change.
(This article belongs to the Section Crop Physiology and Crop Production)

26 pages, 2030 KB  
Review
Edge Computing-Enabled Smart Agriculture: Technical Architectures, Practical Evolution, and Bottleneck Breakthroughs
by Ran Gong, Hongyang Zhang, Gang Li and Jiamin He
Sensors 2025, 25(17), 5302; https://doi.org/10.3390/s25175302 - 26 Aug 2025
Viewed by 841
Abstract
As the global digital transformation of agriculture accelerates, the widespread deployment of farming equipment has triggered an exponential surge in agricultural production data. Consequently, traditional cloud computing frameworks face critical challenges: communication latency in the field, the demand for low-power devices, and stringent real-time decision constraints. These bottlenecks collectively exacerbate bandwidth constraints, diminish response efficiency, and introduce data security vulnerabilities. In this context, edge computing offers a promising solution for smart agriculture. By provisioning computing resources at the network periphery and enabling localized processing at data sources adjacent to agricultural machinery, sensors, and crops, edge computing delivers low-latency responses, bandwidth optimization, and distributed computation capabilities. This paper provides a comprehensive survey of the research landscape in agricultural edge computing. We begin by defining its core concepts and highlighting its advantages over cloud computing. Subsequently, anchored in the “terminal sensing-edge intelligence-cloud coordination” architecture, we analyze technological evolution in edge sensing devices, lightweight intelligent algorithms, and cooperative communication mechanisms. Additionally, through precision farming, intelligent agricultural machinery control, and full-chain crop traceability, we demonstrate its efficacy in enhancing real-time agricultural decision-making. Finally, we identify adaptation challenges in complex environments and outline future directions for research and development in this field.

19 pages, 441 KB  
Review
Recent Advances and Applications of Nondestructive Testing in Agricultural Products: A Review
by Mian Li, Honglian Yin, Fei Gu, Yanjun Duan, Wenxu Zhuang, Kang Han and Xiaojun Jin
Processes 2025, 13(9), 2674; https://doi.org/10.3390/pr13092674 - 22 Aug 2025
Viewed by 491
Abstract
With the rapid development of agricultural intelligence, nondestructive testing (NDT) has shown considerable promise for agricultural product inspection. Compared with traditional methods—which often suffer from subjectivity, low efficiency, and sample damage—NDT offers rapid, accurate, and non-invasive solutions that enable precise inspection without harming the products. These inherent advantages have promoted the increasing adoption of NDT technologies in agriculture. Meanwhile, rising quality standards for agricultural products have intensified the demand for more efficient and reliable detection methods, accelerating the replacement of conventional techniques by advanced NDT approaches. Nevertheless, selecting the most appropriate NDT method for a given agricultural inspection task remains challenging, due to the wide diversity in product structures, compositions, and inspection requirements. To address this challenge, this paper presents a review of recent advancements and applications of several widely adopted NDT techniques, including computer vision, near-infrared spectroscopy, hyperspectral imaging, computed tomography, and electronic noses, focusing specifically on their application in agricultural product evaluation. Furthermore, the strengths and limitations of each technology are discussed comprehensively, quantitative performance indicators and adoption trends are summarized, and practical recommendations are provided for selecting suitable NDT techniques according to various agricultural inspection tasks. By highlighting both technical progress and persisting challenges, this review provides actionable theoretical and technical guidance, aiming to support researchers and practitioners in advancing the effective and sustainable application of cutting-edge NDT methods in agriculture.

23 pages, 2723 KB  
Article
Dairy DigiD: An Edge-Cloud Framework for Real-Time Cattle Biometrics and Health Classification
by Shubhangi Mahato and Suresh Neethirajan
AI 2025, 6(9), 196; https://doi.org/10.3390/ai6090196 - 22 Aug 2025
Viewed by 574
Abstract
Digital livestock farming faces a critical deployment challenge: bridging the gap between cutting-edge AI algorithms and practical implementation in resource-constrained agricultural environments. While deep learning models demonstrate exceptional accuracy in laboratory settings, their translation to operational farm systems remains limited by computational constraints, connectivity issues, and user accessibility barriers. Dairy DigiD addresses these challenges through a novel edge-cloud AI framework integrating YOLOv11 object detection with DenseNet121 physiological classification for cattle monitoring. The system employs YOLOv11-nano architecture optimized through INT8 quantization (achieving 73% model compression with <1% accuracy degradation) and TensorRT acceleration, enabling 24 FPS real-time inference on NVIDIA Jetson edge devices while maintaining 94.2% classification accuracy. Our key innovation lies in intelligent confidence-based offloading: routine detections execute locally at the edge, while ambiguous cases trigger cloud processing for enhanced accuracy. An entropy-based active learning pipeline using Roboflow reduces the annotation overhead by 65% while preserving 97% of the model performance. The Gradio interface democratizes system access, reducing technician training requirements by 84%. Comprehensive validation across ten commercial dairy farms in Atlantic Canada demonstrates robust performance under diverse environmental conditions (seasonal, lighting, weather variations). The framework achieves mAP@50 of 0.947 with balanced precision-recall across four physiological classes, while consuming 18% less energy than baseline implementations through attention-based optimization. Rather than proposing novel algorithms, this work contributes a systems-level integration methodology that transforms research-grade AI into deployable agricultural solutions. Our open-source framework provides a replicable blueprint for precision livestock farming adoption, addressing practical barriers that have historically limited AI deployment in agricultural settings.
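The confidence-based offloading policy is the paper's central systems idea and reduces to a short routing rule. A minimal sketch follows; the helper names and the 0.75 threshold are assumptions, since the published system's service interfaces are not given in the abstract.

```python
CONF_THRESHOLD = 0.75  # assumed cutoff; tuned per deployment in practice

def route_detections(frame, detect_on_edge, classify_in_cloud):
    # detect_on_edge / classify_in_cloud are stand-ins for the system's
    # Jetson-side quantized detector and its cloud classifier.
    detections = detect_on_edge(frame)
    confident = [d for d in detections if d["score"] >= CONF_THRESHOLD]
    ambiguous = [d for d in detections if d["score"] < CONF_THRESHOLD]
    if ambiguous:
        # Only low-confidence cases leave the device, saving bandwidth
        # and keeping routine inference at edge latency.
        confident.extend(classify_in_cloud(frame, ambiguous))
    return confident
```

The design trade-off is the usual one: a lower threshold offloads more work to the cloud for accuracy, a higher one keeps latency and bandwidth at edge levels.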

45 pages, 2283 KB  
Review
Agricultural Image Processing: Challenges, Advances, and Future Trends
by Xuehua Song, Letian Yan, Sihan Liu, Tong Gao, Li Han, Xiaoming Jiang, Hua Jin and Yi Zhu
Appl. Sci. 2025, 15(16), 9206; https://doi.org/10.3390/app15169206 - 21 Aug 2025
Viewed by 446
Abstract
Agricultural image processing technology plays a critical role in enabling precise disease detection, accurate yield prediction, and various smart agriculture applications. However, its practical implementation faces key challenges, including environmental interference, data scarcity and imbalanced datasets, and the difficulty of deploying models on resource-constrained edge devices. This paper presents a systematic review of recent advances in addressing these challenges, with a focus on three core aspects: environmental robustness, data efficiency, and model deployment. The study identifies that attention mechanisms, Transformers, multi-scale feature fusion, and domain adaptation can enhance model robustness under complex conditions. Self-supervised learning, transfer learning, GAN-based data augmentation, SMOTE improvements, and Focal loss optimization effectively alleviate data limitations. Furthermore, model compression techniques such as pruning, quantization, and knowledge distillation facilitate efficient deployment. Future research should emphasize multi-modal fusion, causal reasoning, edge–cloud collaboration, and dedicated hardware acceleration. Integrating agricultural expertise with AI is essential for promoting large-scale adoption and for achieving intelligent, sustainable agricultural systems.
(This article belongs to the Special Issue Pattern Recognition Applications of Neural Networks and Deep Learning)
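Of the compression routes the review surveys, post-training quantization is the quickest to demonstrate. A minimal PyTorch sketch on a toy classifier follows; the model is a placeholder, not any network from the surveyed papers.

```python
import torch
import torch.nn as nn

# Toy model standing in for a field-deployable classifier.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Dynamic post-training quantization: Linear weights stored as INT8,
# activations quantized on the fly; typically ~4x smaller weights.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 64, 64)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Pruning and knowledge distillation trade accuracy and size along different axes; quantization is usually the first step because it needs no retraining.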

17 pages, 1118 KB  
Article
SMA-YOLO: A Novel Approach to Real-Time Vehicle Detection on Edge Devices
by Haixia Liu, Yingkun Song, Yongxing Lin and Zhixin Tie
Sensors 2025, 25(16), 5072; https://doi.org/10.3390/s25165072 - 15 Aug 2025
Viewed by 530
Abstract
Vehicle detection is a key technology for intelligent traffic management and driverless driving. However, current deep learning-based vehicle detection models face several challenges in practical applications. These include slow detection speeds, high computational cost and parameter counts, high missed-detection and false-detection rates in target-dense environments, and difficulties in deployment on edge devices with limited computing power and memory. To address these issues, this paper proposes an improved vehicle detection method called SMA-YOLO, based on the YOLOv7 model. Firstly, MobileNetV3 is adopted as the new backbone network to lighten the model. Secondly, the SimAM attention mechanism is incorporated to suppress background interference and enhance small-target detection capability. Additionally, the ACON activation function is substituted for the original SiLU activation function in the YOLOv7 model to improve detection accuracy. Lastly, SIoU replaces CIoU to optimize the loss function and accelerate model convergence. Experiments on the UA-DETRAC dataset demonstrate that the proposed SMA-YOLO model achieves a lightweight effect, significantly reducing model size, computational requirements, and the number of parameters. It not only greatly improves detection speed but also maintains high detection accuracy. This provides a feasible solution for deploying a vehicle detection model on embedded devices for real-time detection.
(This article belongs to the Section Vehicular Sensing)
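SimAM, the attention mechanism SMA-YOLO adopts, is parameter-free and fits in a few lines. A PyTorch sketch following the published SimAM formulation (the λ regularizer below is the commonly used default, assumed here):

```python
import torch

def simam(x, e_lambda=1e-4):
    # Parameter-free SimAM attention: weight each activation by the
    # sigmoid of an inverse-energy term computed per channel.
    b, c, h, w = x.shape
    n = h * w - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation
    v = d.sum(dim=(2, 3), keepdim=True) / n             # channel variance
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5            # inverse energy
    return x * torch.sigmoid(e_inv)

x = torch.randn(2, 256, 20, 20)
print(simam(x).shape)  # torch.Size([2, 256, 20, 20])
```

Because it adds no parameters, SimAM suits exactly the lightweight, edge-deployment constraints the abstract targets.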

24 pages, 2325 KB  
Review
Personalization of AI-Based Digital Twins to Optimize Adaptation in Industrial Design and Manufacturing—Review
by Izabela Rojek, Dariusz Mikołajewski, Ewa Dostatni, Jan Cybulski and Mirosław Kozielski
Appl. Sci. 2025, 15(15), 8525; https://doi.org/10.3390/app15158525 - 31 Jul 2025
Viewed by 758
Abstract
The growing scale of big data and artificial intelligence (AI)-based models has heightened the urgency of developing real-time digital twins (DTs), particularly those capable of simulating personalized behavior in dynamic environments. In this study, we examine the personalization of AI-based DTs, with a focus on overcoming the computational latencies that hinder real-time responses, especially in complex, large-scale systems and networks. We use bibliometric analysis to map current trends, prevailing themes, and technical challenges in this field. The key findings highlight the growing emphasis on scalable model architectures, multimodal data integration, and the use of high-performance computing platforms. While existing research has focused on model decomposition, structural optimization, and algorithmic integration, there remains a need for fast DT platforms that support diverse user requirements. This review synthesizes these insights to outline new directions for accelerating adaptation and enhancing personalization. By providing a structured overview of the current research landscape, this study contributes to a better understanding of how AI and edge computing can drive the development of the next generation of real-time personalized DTs.
(This article belongs to the Section Computing and Artificial Intelligence)

31 pages, 2007 KB  
Review
Artificial Intelligence-Driven Strategies for Targeted Delivery and Enhanced Stability of RNA-Based Lipid Nanoparticle Cancer Vaccines
by Ripesh Bhujel, Viktoria Enkmann, Hannes Burgstaller and Ravi Maharjan
Pharmaceutics 2025, 17(8), 992; https://doi.org/10.3390/pharmaceutics17080992 - 30 Jul 2025
Cited by 1 | Viewed by 2161
Abstract
The convergence of artificial intelligence (AI) and nanomedicine has transformed cancer vaccine development, particularly in optimizing RNA-loaded lipid nanoparticles (LNPs). Stability and targeted delivery are major obstacles to the clinical translation of promising RNA-LNP vaccines for cancer immunotherapy. This systematic review analyzes AI’s impact on LNP engineering through machine learning-driven predictive models, generative adversarial networks (GANs) for novel lipid design, and neural network-enhanced biodistribution prediction. AI reduces the therapeutic development timeline through accelerated virtual screening of millions of lipid combinations, compared to conventional high-throughput screening. Furthermore, AI-optimized LNPs demonstrate improved tumor targeting. GAN-generated lipids show structural novelty while maintaining high encapsulation efficiency; graph neural networks predict RNA-LNP binding affinity with high accuracy vs. experimental data; digital twins reduce lyophilization optimization from years to months; and federated learning models enable multi-institutional data sharing. We propose a framework to address key technical challenges: training data quality (min. 15,000 lipid structures), model interpretability (SHAP > 0.65), and regulatory compliance (21 CFR Part 11). AI integration reduces manufacturing costs and makes personalized cancer vaccines affordable. Future work should prioritize quantum machine learning for stability prediction and edge computing for real-time formulation modifications.
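The interpretability criterion the authors cite (SHAP > 0.65) is not defined precisely in the abstract; one plausible reading is a threshold on aggregate feature attribution. A toy sketch of computing mean |SHAP| attributions for a lipid-property regressor, with synthetic data and an assumed descriptor count:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 8))                     # 8 hypothetical lipid descriptors
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=200).fit(X, y)
sv = shap.Explainer(model)(X)                # Tree SHAP chosen automatically
importance = np.abs(sv.values).mean(axis=0)  # mean |SHAP| per descriptor
print(importance / importance.sum())         # attribution share per feature
```

Whatever the paper's exact metric, the pattern is the same: attribute predictions to descriptors and gate model acceptance on how concentrated and chemically sensible the attributions are.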

25 pages, 5142 KB  
Article
Wheat Powdery Mildew Severity Classification Based on an Improved ResNet34 Model
by Meilin Li, Yufeng Guo, Wei Guo, Hongbo Qiao, Lei Shi, Yang Liu, Guang Zheng, Hui Zhang and Qiang Wang
Agriculture 2025, 15(15), 1580; https://doi.org/10.3390/agriculture15151580 - 23 Jul 2025
Viewed by 407
Abstract
Crop disease identification is a pivotal research area in smart agriculture, forming the foundation for disease mapping and targeted prevention strategies. Among the most prevalent global wheat diseases, powdery mildew—caused by fungal infection—poses a significant threat to crop yield and quality, making early and accurate detection crucial for effective management. In this study, we present QY-SE-MResNet34, a deep learning-based classification model that builds upon ResNet34 to perform multi-class classification of wheat leaf images and assess powdery mildew severity at the single-leaf level. The proposed methodology begins with dataset construction following the GB/T 17980.22-2000 national standard for powdery mildew severity grading, resulting in a curated collection of 4248 wheat leaf images at the grain-filling stage across six severity levels. To enhance model performance, we integrated transfer learning with ResNet34, leveraging pretrained weights to improve feature extraction and accelerate convergence. Further refinements included embedding a Squeeze-and-Excitation (SE) block to strengthen feature representation while maintaining computational efficiency. The model architecture was also optimized by modifying the first convolutional layer (conv1)—replacing the original 7 × 7 kernel with a 3 × 3 kernel, adjusting the stride to 1, and setting padding to 1—to better capture fine-grained leaf textures and edge features. Subsequently, the optimal training strategy was determined through hyperparameter tuning experiments, and GrabCut-based background processing along with data augmentation were introduced to enhance model robustness. In addition, interpretability techniques such as channel masking and Grad-CAM were employed to visualize the model’s decision-making process. Experimental validation demonstrated that QY-SE-MResNet34 achieved 89% classification accuracy, outperforming established models such as ResNet50, VGG16, and MobileNetV2 and surpassing the original ResNet34 by 11%. This study delivers a high-performance solution for single-leaf wheat powdery mildew severity assessment, offering practical value for intelligent disease monitoring and early warning systems in precision agriculture.
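The architectural changes are concrete enough to sketch in PyTorch: a standard SE block plus the described conv1 swap (7 × 7/stride 2 to 3 × 3/stride 1/padding 1) on a pretrained ResNet34. The reduction ratio of 16 is the usual default, assumed here, and the paper's exact placement of the SE block inside the network is not reproduced.

```python
import torch.nn as nn
from torchvision.models import resnet34

class SEBlock(nn.Module):
    # Squeeze-and-Excitation: global pool, bottleneck MLP, channel gates.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x).view(x.size(0), -1, 1, 1)

model = resnet34(weights="IMAGENET1K_V1")      # transfer-learning start
# conv1 swap described in the paper: finer stride preserves leaf texture.
model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
model.fc = nn.Linear(model.fc.in_features, 6)  # six severity levels
```

Dropping the stride quadruples the spatial resolution flowing into layer1, which is the mechanism behind the "fine-grained leaf textures" claim, at the cost of extra compute.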

42 pages, 6065 KB  
Review
Digital Alchemy: The Rise of Machine and Deep Learning in Small-Molecule Drug Discovery
by Abdul Manan, Eunhye Baek, Sidra Ilyas and Donghun Lee
Int. J. Mol. Sci. 2025, 26(14), 6807; https://doi.org/10.3390/ijms26146807 - 16 Jul 2025
Viewed by 1808
Abstract
This review provides a comprehensive analysis of the transformative impact of artificial intelligence (AI) and machine learning (ML) on modern drug design, specifically focusing on how these advanced computational techniques address the inherent limitations of traditional small-molecule drug design methodologies. It begins by outlining the historical challenges of the drug discovery pipeline, including protracted timelines, exorbitant costs, and high clinical failure rates. Subsequently, it examines the core principles of structure-based virtual screening (SBVS) and ligand-based virtual screening (LBVS), establishing the critical bottlenecks that have historically impeded efficient drug development. The central sections elucidate how cutting-edge ML and deep learning (DL) paradigms, such as generative models and reinforcement learning, are revolutionizing chemical space exploration, enhancing binding affinity prediction, improving protein flexibility modeling, and automating critical design tasks. Illustrative real-world case studies demonstrating quantifiable accelerations in discovery timelines and improved success probabilities are presented. Finally, the review critically examines prevailing challenges, including data quality, model interpretability, ethical considerations, and evolving regulatory landscapes, while offering forward-looking critical perspectives on the future trajectory of AI-driven pharmaceutical innovation.
(This article belongs to the Special Issue Advances in Computer-Aided Drug Design Strategies)

26 pages, 5672 KB  
Review
Development Status and Trend of Mine Intelligent Mining Technology
by Zhuo Wang, Lin Bi, Jinbo Li, Zhaohao Wu and Ziyu Zhao
Mathematics 2025, 13(13), 2217; https://doi.org/10.3390/math13132217 - 7 Jul 2025
Cited by 1 | Viewed by 1476
Abstract
Intelligent mining technology, as the core driving force for the digital transformation of the mining industry, integrates cyber-physical systems, artificial intelligence, and industrial internet technologies to establish a “cloud–edge–end” collaborative system. This paper systematically reviews the development trajectory of intelligent mining technology, which has progressed through four stages: stand-alone automation; integrated automation and informatization; initial digitalization and intelligentization; and comprehensive intelligence. The current development status of “cloud–edge–end” technologies is then reviewed: (i) The end layer achieves environmental state monitoring and precise control through a multi-source sensing network and intelligent equipment. (ii) The edge layer leverages 5G and edge computing to accomplish real-time data processing, 3D dynamic modeling, and safety early warning. (iii) The cloud layer realizes digital planning and intelligent decision-making based on the industrial Internet platform. The three-layer collaboration forms a “perception–analysis–decision–execution” closed loop. Many challenges remain in the development of the technology, including the lack of a standardization system, bottlenecks in multi-source heterogeneous data fusion, the lack of cross-process equipment coordination, and a shortage of interdisciplinary talent. Accordingly, this paper focuses on future development trends from four aspects, providing systematic solutions for safe, efficient, and sustainable mining operations. Technological evolution will accelerate the formation of an intelligent ecosystem characterized by “standard-driven, data-empowered, equipment-autonomous, and human–machine collaboration”.
(This article belongs to the Special Issue Mathematical Modeling and Analysis in Mining Engineering)

26 pages, 1171 KB  
Review
Key Considerations for Real-Time Object Recognition on Edge Computing Devices
by Nico Surantha and Nana Sutisna
Appl. Sci. 2025, 15(13), 7533; https://doi.org/10.3390/app15137533 - 4 Jul 2025
Viewed by 2450
Abstract
The rapid growth of the Internet of Things (IoT) and smart devices has led to an increasing demand for real-time data processing at the edge of networks, closer to the source of data generation. This review paper introduces how artificial intelligence (AI) can be integrated with edge computing to enable efficient and scalable object recognition applications. It covers the key considerations of employing deep learning on edge computing devices, such as selecting edge devices, deep learning frameworks, lightweight deep learning models, hardware optimization, and performance metrics. An example application, real-time power transmission line detection on edge computing devices, is also presented. The evaluation results show the significance of lightweight models and model compression techniques such as quantized Tiny YOLOv7, and report hardware performance on edge devices such as the Raspberry Pi and Jetson platforms. Through practical examples, readers will gain insights into designing and implementing AI-powered edge solutions for various object recognition use cases, including smart surveillance, autonomous vehicles, and industrial automation. The review concludes by addressing emerging trends, such as federated learning and hardware accelerators, which are set to shape the future of AI on edge computing for object recognition.
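When comparing edge devices as this review does, a warmed-up timing loop is the fairest first measurement of FPS and latency. A framework-agnostic sketch follows; the input shape and iteration counts are arbitrary choices, and `infer` is an assumed callable wrapping whatever deployed model is under test.

```python
import time
import numpy as np

def benchmark(infer, input_shape=(1, 3, 416, 416), warmup=10, iters=100):
    # `infer` wraps the deployed model, e.g. a quantized Tiny YOLOv7
    # session on a Jetson or Raspberry Pi; interface assumed here.
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(warmup):                  # settle clocks, caches, JITs
        infer(x)
    start = time.perf_counter()
    for _ in range(iters):
        infer(x)
    mean_s = (time.perf_counter() - start) / iters
    return 1.0 / mean_s, mean_s * 1e3        # FPS, mean latency in ms
```

On thermally constrained boards, running the loop long enough to hit steady-state clocks matters more than the exact iteration count.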

21 pages, 1583 KB  
Review
3.0 Strategies for Yeast Genetic Improvement in Brewing and Winemaking
by Chiara Nasuti, Lisa Solieri and Kristoffer Krogerus
Beverages 2025, 11(4), 100; https://doi.org/10.3390/beverages11040100 - 1 Jul 2025
Viewed by 1460
Abstract
Yeast genetic improvement is entering a transformative phase, driven by the integration of artificial intelligence (AI), big data analytics, and synthetic microbial communities with conventional methods such as sexual breeding and random mutagenesis. These advancements have substantially expanded the potential for innovative re-engineering of yeast, ranging from single-strain cultures to complex polymicrobial consortia. This review compares traditional genetic manipulation techniques with cutting-edge approaches, highlighting recent breakthroughs in their application to beer and wine fermentation. Among the innovative strategies, adaptive laboratory evolution (ALE) stands out as a non-GMO method capable of rewiring complex fitness-related phenotypes through iterative selection. In contrast, GMO-based synthetic biology approaches, including the most recent developments in CRISPR/Cas9 technologies, enable efficient and scalable genome editing, including multiplexed modifications. These innovations are expected to accelerate product development, reduce costs, and enhance the environmental sustainability of brewing and winemaking. However, despite their technological potential, GMO-based strategies continue to face significant regulatory and market challenges, which limit their widespread adoption in the fermentation industry.
(This article belongs to the Section Malting, Brewing and Beer)
