Search Results (1,734)

Search Parameters:
Keywords = light guiding

21 pages, 3454 KiB  
Review
Synthetic Gene Circuits Enable Sensing in Engineered Living Materials
by Yaxuan Cai, Yujie Wang and Shengbiao Hu
Biosensors 2025, 15(9), 556; https://doi.org/10.3390/bios15090556 - 22 Aug 2025
Abstract
Engineered living materials (ELMs) integrate living cells—such as bacteria, yeast, or mammalian cells—with synthetic matrices to create responsive, adaptive systems for sensing and actuation. Among ELMs, those endowed with sensing capabilities are gaining increasing attention for applications in environmental monitoring, biomedicine, and smart infrastructure. Central to these sensing functions are synthetic gene circuits, which enable cells to detect and respond to specific signals. This mini-review focuses on recent advances in sensing ELMs empowered by synthetic gene circuits. Here, we highlight how rationally designed genetic circuits enable living materials to sense and respond to diverse inputs—including environmental chemicals, light, heat, and mechanical loadings—via programmable signal transduction and tailored output behaviors. Input signals are classified by their source and physicochemical properties, including synthetic inducers, environmental chemicals, light, thermal, mechanical, and electrical signals. Particular emphasis is placed on the integration of genetically engineered microbial cells with hydrogels and other functional scaffolds to construct robust and tunable sensing platforms. Finally, we discuss the current challenges and future opportunities in this rapidly evolving field, providing insights to guide the rational design of next-generation sensing ELMs. Full article
(This article belongs to the Special Issue Biomaterials for Biosensing Applications—2nd Edition)
40 pages, 3396 KiB  
Article
Using KeyGraph and ChatGPT to Detect and Track Topics Related to AI Ethics in Media Outlets
by Wei-Hsuan Li and Hsin-Chun Yu
Mathematics 2025, 13(17), 2698; https://doi.org/10.3390/math13172698 - 22 Aug 2025
Viewed by 47
Abstract
This study examines the semantic dynamics and thematic shifts in artificial intelligence (AI) ethics over time, addressing a notable gap in longitudinal research within the field. In light of the rapid evolution of AI technologies and their associated ethical risks and societal impacts, the research integrates the theory of chance discovery with the KeyGraph algorithm to conduct topic detection through a keyword network built through iterative semantic exploration. ChatGPT is employed for semantic interpretation, enhancing both the accuracy and comprehensiveness of the detected topics. Guided by the double helix model of human–AI interaction, the framework incorporates a dual-layer validation process that combines cross-model semantic similarity analysis with expert-informed quality checks. An analysis of 24 authoritative AI ethics reports published between 2022 and 2024 reveals a consistent trend toward semantic stability, with high cross-model similarity across years (2022: 0.808 ± 0.023; 2023: 0.812 ± 0.013; 2024: 0.828 ± 0.015). Statistical tests confirm significant differences between single-cluster and multi-cluster topic structures (p < 0.05). The thematic findings indicate a shift in AI ethics discourse from a primary emphasis on technical risks to broader concerns involving institutional governance, societal trust, and the regulation of generative AI. Core keywords, such as bias, privacy, and ethics, recur across all years, reflecting the consolidation of an integrated governance framework that encompasses technological robustness, institutional adaptability, and social consensus. This dynamic semantic analysis framework contributes empirically to AI ethics governance and offers actionable insights for researchers and interdisciplinary stakeholders. Full article
(This article belongs to the Special Issue Artificial Intelligence and Algorithms)

18 pages, 993 KiB  
Article
Students with Visual Impairments’ Comprehension of Visual and Algebraic Representations, Relations and Correspondence
by Fatma Nur Aktas and Ziya Argun
Educ. Sci. 2025, 15(8), 1083; https://doi.org/10.3390/educsci15081083 - 21 Aug 2025
Viewed by 119
Abstract
Exploring learning trajectories based on student thinking is needed to develop mathematics curricula, teaching practices and educational support materials for students with visual impairments. Hence, this study aims to reveal student thinking through various instructional tasks and tactile materials in order to explore the sequence of goals in the learning trajectory. A teaching experiment introducing algebraic and visual representations of advanced mathematical concepts was designed for correspondence and relations. The research was carried out in Türkiye with a braille-literate 10th-grade high school student with a congenital visual impairment in which neither colour nor light is perceived. As a result of the teaching experiment, the participant was able to determine the correspondence and relations between two sets using different representations; he even designed graphic representations using the needle page. The learning trajectory goals and instructional tasks can serve as guides for research on curriculum development, practice design and material development. Full article

23 pages, 1419 KiB  
Systematic Review
Mapping Entrepreneurial Collaborative Economy Landscape: A Systematic Literature Review with Textometric Analysis
by Salvador Bueno, Eva M. Gallego, Ramiro Montealegre and M. Dolores Gallego
Economies 2025, 13(8), 246; https://doi.org/10.3390/economies13080246 - 21 Aug 2025
Viewed by 316
Abstract
The collaborative economy is experiencing a remarkable surge, offering vast potential for growth. Consequently, this burgeoning movement has become a focal point of interest in the realm of entrepreneurship. However, numerous unexplored or inadequately addressed research gaps persist, leaving us without a well-defined paradigm for what we can term the entrepreneurial collaborative economy. In light of these challenges, this study embarks on a quest to bridge these gaps through a comprehensive systematic literature review. Two research objectives guided our endeavor: (1) mapping the literature related to the collaborative economy in the field of entrepreneurship to propose a research taxonomy, and (2) analyzing areas in this field that warrant further research. Our literature review, conducted using the PRISMA methodology, yielded 407 studies. Employing advanced textometric techniques, we uncovered a research taxonomy consisting of three distinct clusters within entrepreneurial collaborative economy studies. In particular, our investigation has unveiled that the entrepreneurial collaborative economy paradigm remains in a state of emergence within the academic literature. The paper concludes with thought-provoking discussions and key insights. Full article

27 pages, 5654 KiB  
Article
Intelligent Detection and Description of Foreign Object Debris on Airport Pavements via Enhanced YOLOv7 and GPT-Based Prompt Engineering
by Hanglin Cheng, Ruoxi Zhang, Ruiheng Zhang, Yihao Li, Yang Lei and Weiguang Zhang
Sensors 2025, 25(16), 5116; https://doi.org/10.3390/s25165116 - 18 Aug 2025
Viewed by 294
Abstract
Foreign Object Debris (FOD) on airport pavements poses a serious threat to aviation safety, making accurate detection and interpretable scene understanding crucial for operational risk management. This paper presents an integrated multi-modal framework that combines an enhanced YOLOv7-X detector, a cascaded YOLO-SAM segmentation module, and a structured prompt engineering mechanism to generate detailed semantic descriptions of detected FOD. Detection performance is improved through the integration of Coordinate Attention, Spatial–Depth Conversion (SPD-Conv), and a Gaussian Similarity IoU (GSIoU) loss, leading to a 3.9% gain in mAP@0.5 for small objects with only a 1.7% increase in inference latency. The YOLO-SAM cascade leverages high-quality masks to guide structured prompt generation, which incorporates spatial encoding, material attributes, and operational risk cues, resulting in a substantial improvement in description accuracy from 76.0% to 91.3%. Extensive experiments on a dataset of 12,000 real airport images demonstrate competitive detection and segmentation performance compared to recent CNN- and transformer-based baselines while achieving robust semantic generalization in challenging scenarios, such as complete darkness, low-light, high-glare nighttime conditions, and rainy weather. A runtime breakdown shows that the enhanced YOLOv7-X requires 40.2 ms per image, SAM segmentation takes 142.5 ms, structured prompt construction adds 23.5 ms, and BLIP-2 description generation requires 178.6 ms, resulting in an end-to-end latency of 384.8 ms per image. Although this does not meet strict real-time video requirements, it is suitable for semi-real-time or edge-assisted asynchronous deployment, where detection robustness and semantic interpretability are prioritized over ultra-low latency. 
The proposed framework offers a practical, deployable solution for airport FOD monitoring, combining high-precision detection with context-aware description generation to support intelligent runway inspection and maintenance decision-making. Full article
(This article belongs to the Special Issue AI and Smart Sensors for Intelligent Transportation Systems)

20 pages, 3854 KiB  
Article
Accurate Classification of Multi-Cultivar Watermelons via GAF-Enhanced Feature Fusion Convolutional Neural Networks
by Changqing An, Maozhen Qu, Yiran Zhao, Zihao Wu, Xiaopeng Lv, Yida Yu, Zichao Wei, Xiuqin Rao and Huirong Xu
Foods 2025, 14(16), 2860; https://doi.org/10.3390/foods14162860 - 18 Aug 2025
Viewed by 239
Abstract
The online rapid classification of multi-cultivar watermelon, including seedless and seeded types, has far-reaching significance for enhancing quality control in the watermelon industry. However, interference in one-dimensional spectra affects the high-accuracy classification of multi-cultivar watermelons with similar appearances. This study proposed an innovative method integrating Gramian Angular Field (GAF), feature fusion, and Squeeze-and-Excitation (SE)-guided convolutional neural networks (CNN) based on VIS-NIR transmittance spectroscopy. First, one-dimensional spectra of 163 seedless and 160 seeded watermelons were converted into two-dimensional Gramian Angular Summation Field (GASF) and Gramian Angular Difference Field (GADF) images. Subsequently, a dual-input CNN architecture was designed to fuse discriminative features from both GASF and GADF images. Feature visualization of high-weight channels of the input images in convolutional layer revealed distinct spectral features between seedless and seeded watermelons. With the fusion of distinguishing feature information, the developed CNN model achieved a classification accuracy of 95.1% on the prediction set, outperforming traditional models based on one-dimensional spectra. Remarkably, wavelength optimization through competitive adaptive reweighted sampling (CARS) reduced GAF image generation time to 55.19% of full-wavelength processing, while improving classification accuracy to 96.3%. A better generalization of the model was demonstrated using 17 seedless and 20 seeded watermelons from other origins, with a classification accuracy of 91.9%. These findings substantiated that GAF-enhanced feature fusion CNN can significantly improve the classification accuracy of multi-cultivar watermelons, casting innovative light on fruit quality based on VIS-NIR transmittance spectroscopy. Full article
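The Gramian Angular Field encoding mentioned in this abstract has a compact standard definition; the sketch below illustrates it under that standard definition (it is not the authors' code, and the toy 64-sample "spectrum" is invented for demonstration):

```python
import numpy as np

def gramian_angular_fields(x):
    """Encode a 1-D series as GASF/GADF images via polar-angle products."""
    # Rescale the series to [-1, 1] so arccos is defined.
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)  # polar angle of each sample
    # GASF[i, j] = cos(phi_i + phi_j); GADF[i, j] = sin(phi_i - phi_j)
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return gasf, gadf

# Toy "spectrum": 64 samples of a smooth curve standing in for a transmittance scan.
spectrum = np.sin(np.linspace(0, 3, 64)) + 0.1 * np.linspace(0, 1, 64)
gasf, gadf = gramian_angular_fields(spectrum)
```

A dual-input CNN of the kind described would then consume the GASF and GADF arrays as its two image inputs; note that GASF is symmetric while GADF is antisymmetric with a zero diagonal.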

22 pages, 5884 KiB  
Article
Clinical Integration of NIR-II Fluorescence Imaging for Cancer Surgery: A Translational Evaluation of Preclinical and Intraoperative Systems
by Ritesh K. Isuri, Justin Williams, David Rioux, Paul Dorval, Wendy Chung, Pierre-Alix Dancer and Edward J. Delikatny
Cancers 2025, 17(16), 2676; https://doi.org/10.3390/cancers17162676 - 17 Aug 2025
Viewed by 311
Abstract
Background/Objectives: Back table fluorescence imaging performed on freshly excised tissue specimens represents a critical step in fluorescence-guided surgery, enabling rapid assessment of tumor margins before final pathology. While most preclinical NIR-II imaging platforms, such as the IR VIVO (Photon, etc.), offer high-resolution and depth-sensitive imaging under controlled, enclosed conditions, they are not designed for intraoperative or point-of-care use. This study compares the IR VIVO with the LightIR system, a more compact and clinically adaptable imaging platform using the same Alizé 1.7 InGaAs detector, to evaluate whether the LightIR can offer comparable performance for back table NIR-II imaging under ambient light. Methods: Standardized QUEL phantoms containing indocyanine green (ICG) and custom agar-based tissue-mimicking phantoms loaded with IR-1048 were imaged on both systems. Imaging sensitivity, spatial resolution, and depth penetration were quantitatively assessed. LightIR was operated in pulse-mode under ambient lighting, mimicking back table or intraoperative use, while IR VIVO was operated in a fully enclosed configuration. Results: The IR VIVO system achieved high spatial resolution (~125 µm) and detected ICG concentrations as low as 30 nM in NIR-I and 300 nM in NIR-II. The LightIR system, though requiring longer exposure times, successfully resolved features down to ~250 µm and detected ICG to depths ≥4 mm. Importantly, the LightIR maintained robust NIR-II contrast under ambient lighting, aided by real-time background subtraction, and enabled clear visualization of subsurface IR-1048 targets in unshielded phantom setups, conditions relevant to back table workflows. Conclusions: LightIR offers performance comparable to the IR VIVO in terms of depth penetration and spatial resolution, while also enabling open-field NIR-II imaging without the need for a blackout enclosure. 
These features position the LightIR as a practical alternative for rapid, high-contrast fluorescence assessment during back table imaging. The availability of such clinical-grade systems may catalyze the development of new NIR-II fluorophores tailored for real-time surgical applications. Full article
(This article belongs to the Special Issue Application of Fluorescence Imaging in Cancer)

23 pages, 1657 KiB  
Article
High-Precision Pest Management Based on Multimodal Fusion and Attention-Guided Lightweight Networks
by Ziye Liu, Siqi Li, Yingqiu Yang, Xinlu Jiang, Mingtian Wang, Dongjiao Chen, Tianming Jiang and Min Dong
Insects 2025, 16(8), 850; https://doi.org/10.3390/insects16080850 - 16 Aug 2025
Viewed by 587
Abstract
In the context of global food security and sustainable agricultural development, the efficient recognition and precise management of agricultural insect pests and their predators have become critical challenges in the domain of smart agriculture. To address the limitations of traditional models that overly rely on single-modal inputs and suffer from poor recognition stability under complex field conditions, a multimodal recognition framework has been proposed. This framework integrates RGB imagery, thermal infrared imaging, and environmental sensor data. A cross-modal attention mechanism, environment-guided modality weighting strategy, and decoupled recognition heads are incorporated to enhance the model’s robustness against small targets, intermodal variations, and environmental disturbances. Evaluated on a high-complexity multimodal field dataset, the proposed model significantly outperforms mainstream methods across four key metrics, precision, recall, F1-score, and mAP@50, achieving 91.5% precision, 89.2% recall, 90.3% F1-score, and 88.0% mAP@50. These results represent an improvement of over 6% compared to representative models such as YOLOv8 and DETR. Additional ablation studies confirm the critical contributions of key modules, particularly under challenging scenarios such as low light, strong reflections, and sensor data noise. Moreover, deployment tests conducted on the Jetson Xavier edge device demonstrate the feasibility of real-world application, with the model achieving a 25.7 FPS inference speed and a compact size of 48.3 MB, thus balancing accuracy and lightweight design. This study provides an efficient, intelligent, and scalable AI solution for pest surveillance and biological control, contributing to precision pest management in agricultural ecosystems. Full article

33 pages, 22477 KiB  
Article
Spatial Synergy Between Carbon Storage and Emissions in Coastal China: Insights from PLUS-InVEST and OPGD Models
by Chunlin Li, Jinhong Huang, Yibo Luo and Junjie Wang
Remote Sens. 2025, 17(16), 2859; https://doi.org/10.3390/rs17162859 - 16 Aug 2025
Viewed by 364
Abstract
Coastal zones face mounting pressures from rapid urban expansion and ecological degradation, posing significant challenges to achieving synergistic carbon storage and emissions reduction under China’s “dual carbon” goals. Yet, the identification of spatially explicit zones of carbon synergy (high storage–low emissions) and conflict (high emissions–low storage) in these regions remains limited. This study integrates the PLUS (Patch-generating Land Use Simulation), InVEST (Integrated Valuation of Ecosystem Services and Trade-offs), and OPGD (optimal parameter-based GeoDetector) models to evaluate the impacts of land-use/cover change (LUCC) on coastal carbon dynamics in China from 2000 to 2030. Four contrasting land-use scenarios (natural development, economic development, ecological protection, and farmland protection) were simulated to project carbon trajectories by 2030. From 2000 to 2020, rapid urbanization resulted in a 29,929 km2 loss of farmland and a 43,711 km2 increase in construction land, leading to a net carbon storage loss of 278.39 Tg. Scenario analysis showed that by 2030, ecological and farmland protection strategies could increase carbon storage by 110.77 Tg and 110.02 Tg, respectively, while economic development may further exacerbate carbon loss. Spatial analysis reveals that carbon conflict zones were concentrated in major urban agglomerations, whereas spatial synergy zones were primarily located in forest-rich regions such as the Zhejiang–Fujian and Guangdong–Guangxi corridors. The OPGD results demonstrate that carbon synergy was driven largely by interactions between socioeconomic factors (e.g., population density and nighttime light index) and natural variables (e.g., mean annual temperature, precipitation, and elevation). These findings emphasize the need to harmonize urban development with ecological conservation through farmland protection, reforestation, and low-emission planning. 
Building on the PLUS-InVEST-OPGD framework, this study is the first to propose the concepts of “carbon synergy” and “carbon conflict” regions, together with operational procedures for delineating them. Unlike analyses that treat the spatial distribution and driving mechanisms of carbon storage or carbon emissions in isolation, this method integrates both aspects, providing a transferable approach for assessing carbon dynamics in coastal areas and guiding global sustainable planning. Full article
(This article belongs to the Special Issue Carbon Sink Pattern and Land Spatial Optimization in Coastal Areas)

21 pages, 6600 KiB  
Article
Daylighting Performance Simulation and Optimization Design of a “Campus Living Room” Based on BIM Technology—A Case Study in a Region with Hot Summers and Cold Winters
by Qing Zeng and Guangyu Ou
Buildings 2025, 15(16), 2904; https://doi.org/10.3390/buildings15162904 - 16 Aug 2025
Viewed by 258
Abstract
In the context of green building development, the daylighting design of campus living rooms in regions with hot summers and cold winters faces the dual challenges of glare control in summer and insufficient daylight in winter. Based on BIM technology, this study uses Revit 2016 modeling and the HYBPA 2024 performance analysis platform to simulate and optimize the daylighting performance of the campus activity center of Hunan City College through multiple rounds of iteration. The traditional design with a single large-area external window produces uneven lighting over 70% of the floor area, with an average daylight factor of only 2.1%, below the national standard requirement of 3.3%. By introducing a hybrid “side lighting + top light guide” system combined with adjustable internal louver shading, the optimized average daylight factor rises to 4.8%, indoor illuminance uniformity improves from 0.35 to 0.68, the proportion of annual hours meeting the daylight standard (≥300 lx) reaches 68.7%, and artificial lighting energy consumption falls by 27.3%. Dynamic simulation shows that the discomfort glare index at noon on the summer solstice drops from 30.2 to 22.7, meeting visual comfort requirements. The study confirms that the BIM-driven “static–dynamic” coupled simulation method can effectively address climate adaptability issues, though it has limitations: insufficient integration with international healthy-building standards, limited accuracy of meteorological data, and simplified treatment of indoor dynamic shading. Future research can focus on improving meteorological data accuracy, incorporating indoor dynamic factors, and exploring intelligent daylighting systems to deepen and expand the method, promote the integration of cross-standard evaluation systems, and provide a technical pathway for healthy lighting environment design in regions with hot summers and cold winters. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

18 pages, 2055 KiB  
Article
Language-Driven Cross-Attention for Visible–Infrared Image Fusion Using CLIP
by Xue Wang, Jiatong Wu, Pengfei Zhang and Zhongjun Yu
Sensors 2025, 25(16), 5083; https://doi.org/10.3390/s25165083 - 15 Aug 2025
Viewed by 584
Abstract
Language-guided multimodal fusion, which integrates information from both visible and infrared images, has shown strong performance in image fusion tasks. In low-light or complex environments, a single modality often fails to fully capture scene features, whereas fused images enable robots to obtain multidimensional scene understanding for navigation, localization, and environmental perception. This capability is particularly important in applications such as autonomous driving, intelligent surveillance, and search-and-rescue operations, where accurate recognition and efficient decision-making are critical. To enhance the effectiveness of multimodal fusion, we propose a text-guided infrared and visible image fusion network. The framework consists of two key components: an image fusion branch, which employs a cross-domain attention mechanism to merge multimodal features, and a text-guided module, which leverages the CLIP model to extract semantic cues from image descriptions containing visible content. These semantic parameters are then used to guide the feature modulation process during fusion. By integrating visual and linguistic information, our framework is capable of generating high-quality color-fused images that not only enhance visual detail but also enrich semantic understanding. On benchmark datasets, our method achieves strong quantitative performance: SF = 2.1381, Qab/f = 0.6329, MI = 14.2305, SD = 0.8527, VIF = 45.1842 on LLVIP, and SF = 1.3149, Qab/f = 0.5863, MI = 13.9676, SD = 94.7203, VIF = 0.7746 on TNO. These results highlight the robustness and scalability of our model, making it a promising solution for real-world multimodal perception applications. Full article
(This article belongs to the Section Sensors and Robotics)

20 pages, 28680 KiB  
Article
SN-YOLO: A Rotation Detection Method for Tomato Harvest in Greenhouses
by Jinlong Chen, Ruixue Yu, Minghao Yang, Wujun Che, Yi Ning and Yongsong Zhan
Electronics 2025, 14(16), 3243; https://doi.org/10.3390/electronics14163243 - 15 Aug 2025
Viewed by 267
Abstract
Accurate detection of tomato fruits is a critical component in vision-guided robotic harvesting systems, which play an increasingly important role in automated agriculture. However, this task is challenged by variable lighting conditions and background clutter in natural environments. In addition, the arbitrary orientations of fruits reduce the effectiveness of traditional horizontal bounding boxes. To address these challenges, we propose a novel object detection framework named SN-YOLO. First, we introduce the StarNet backbone to enhance the extraction of fine-grained features, thereby improving the detection performance in cluttered backgrounds. Second, we design a Color-Prior Spatial-Channel Attention (CPSCA) module that incorporates red-channel priors to strengthen the model’s focus on salient fruit regions. Third, we implement a multi-level attention fusion strategy to promote effective feature integration across different layers, enhancing background suppression and object discrimination. Furthermore, oriented bounding boxes improve localization precision by better aligning with the actual fruit shapes and poses. Experiments conducted on a custom tomato dataset demonstrate that SN-YOLO outperforms the baseline YOLOv8 OBB, achieving a 1.0% improvement in precision and a 0.8% increase in mAP@0.5. These results confirm the robustness and accuracy of the proposed method under complex field conditions. Overall, SN-YOLO provides a practical and efficient solution for fruit detection in automated harvesting systems, contributing to the deployment of computer vision techniques in smart agriculture. Full article

37 pages, 5086 KiB  
Article
Global Embeddings, Local Signals: Zero-Shot Sentiment Analysis of Transport Complaints
by Aliya Nugumanova, Daniyar Rakhimzhanov and Aiganym Mansurova
Informatics 2025, 12(3), 82; https://doi.org/10.3390/informatics12030082 - 14 Aug 2025
Viewed by 468
Abstract
Public transport agencies must triage thousands of multilingual complaints every day, yet the cost of training and serving fine-grained sentiment analysis models limits real-time deployment. The proposed “one encoder, any facet” framework therefore offers a reproducible, resource-efficient alternative to heavy fine-tuning for domain-specific sentiment analysis or opinion mining tasks on digital service data. To the best of our knowledge, we are the first to test this paradigm on operational multilingual complaints, where public transport agencies must prioritize thousands of Russian- and Kazakh-language messages each day. A human-labelled corpus of 2400 complaints is embedded with five open-source universal models. Obtained embeddings are matched to semantic “anchor” queries that describe three distinct facets: service aspect (eight classes), implicit frustration, and explicit customer request. In the strict zero-shot setting, the best encoder reaches 77% accuracy for aspect detection, 74% for frustration, and 80% for request; taken together, these signals reproduce human four-level priority in 60% of cases. Attaching a single-layer logistic probe on top of the frozen embeddings boosts performance to 89% for aspect, 83–87% for the binary facets, and 72% for end-to-end triage. Compared with recent fine-tuned sentiment analysis systems, our pipeline cuts memory demands by two orders of magnitude and eliminates task-specific training yet narrows the accuracy gap to under five percentage points. These findings indicate that a single frozen encoder, guided by handcrafted anchors and an ultra-light head, can deliver near-human triage quality across multiple pragmatic dimensions, opening the door to low-cost, language-agnostic monitoring of digital-service feedback. Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
24 pages, 4987 KiB  
Article
Enhanced Disease Segmentation in Pear Leaves via Edge-Aware Multi-Scale Attention Network
by Xin Shu, Jie Ding, Wenyu Wang, Yuxuan Jiao and Yunzhi Wu
Sensors 2025, 25(16), 5058; https://doi.org/10.3390/s25165058 - 14 Aug 2025
Abstract
Accurate segmentation of pear leaf diseases is paramount for enhancing diagnostic precision and optimizing agricultural disease management. However, variations in disease color, texture, and morphology, coupled with changes in lighting conditions and gradual disease progression, pose significant challenges. To address these issues, we propose EBMA-Net, an edge-aware multi-scale network. EBMA-Net introduces a Multi-Dimensional Joint Attention Module (MDJA) that leverages atrous convolutions to capture lesion information at different scales, enhancing the model’s receptive field and multi-scale processing capabilities. An Edge Feature Extraction Branch (EFFB) is also designed to extract and integrate edge features, guiding the network’s focus toward edge information and reducing information redundancy. Experiments on a self-constructed pear leaf disease dataset demonstrate that EBMA-Net achieves a Mean Intersection over Union (MIoU) of 86.25%, Mean Pixel Accuracy (MPA) of 91.68%, and Dice coefficient of 92.43%, significantly outperforming comparison models. These results highlight EBMA-Net’s effectiveness in precise pear leaf disease segmentation under complex conditions. Full article
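The atrous convolutions that the MDJA module uses to capture lesions at multiple scales can be illustrated with a minimal 1-D sketch: spacing kernel taps `dilation` samples apart grows the effective receptive field to (k − 1)·d + 1 without adding parameters. This is a generic illustration of the mechanism, not EBMA-Net code.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D atrous (dilated) convolution: the k kernel taps
    are spaced `dilation` samples apart, so the effective receptive
    field is (k - 1) * dilation + 1 with no extra parameters."""
    k = len(kernel)
    rf = (k - 1) * dilation + 1
    out = np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - rf + 1)
    ])
    return out, rf

x = np.arange(10, dtype=float)
_, rf1 = dilated_conv1d(x, np.ones(3), dilation=1)  # rf = 3 (ordinary conv)
_, rf3 = dilated_conv1d(x, np.ones(3), dilation=3)  # rf = 7, same 3 weights
print(rf1, rf3)
```

Running the same kernel at several dilation rates in parallel and fusing the outputs is the standard way such modules gather small-lesion detail and large-lesion context at once.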
(This article belongs to the Section Smart Agriculture)
11 pages, 1072 KiB  
Article
Design and Characteristic Simulation of Polarization-Maintaining Anti-Resonant Hollow-Core Fiber for 2.79 μm Er, Cr: YSGG Laser Transmission
by Lei Huang and Yinze Wang
Optics 2025, 6(3), 37; https://doi.org/10.3390/opt6030037 - 14 Aug 2025
Abstract
Anti-resonant hollow-core fibers have exhibited excellent performance in applications such as high-power pulse transmission, network communication, space exploration, and precise sensing. Employing anti-resonant hollow-core fibers instead of light guiding arms for transmitting laser energy at the 2.79 μm band can significantly enhance the flexibility of medical laser handles, reduce system complexity, and increase laser transmission efficiency. Nevertheless, common anti-resonant hollow-core fibers do not have the ability to maintain the polarization state of light during laser transmission, which greatly limits their practical applications. In this paper, we propose a polarization-maintaining anti-resonant hollow-core fiber applicable for transmission at the mid-infrared 2.79 μm band. This fiber features a symmetrical geometric structure and an asymmetric refractive index cladding composed of quartz and a type of mid-infrared glass with a higher refractive index. By optimizing the fiber structure at the wavelength scale, single-polarization transmission can be achieved at the 2.79 μm wavelength, with a polarization extinction ratio exceeding 1.01 × 10⁵, indicating its stable polarization-maintaining performance. Simultaneously, it possesses low-loss transmission characteristics, with the loss in the x-polarized fundamental mode being less than 9.8 × 10⁻³ dB/m at the 2.79 μm wavelength. This polarization-maintaining anti-resonant hollow-core fiber provides a more reliable option for the light guiding system of the 2.79 μm Er, Cr: YSGG laser therapy device. Full article
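A rough sense of the wall-thickness scale such a fiber involves comes from the standard ARROW (anti-resonant reflecting optical waveguide) model: a thin dielectric wall of index n has high-loss resonances at λ_m = 2t√(n² − 1)/m, and low-loss anti-resonant operation sits midway between them. The sketch below applies this textbook formula with an assumed wall index of 1.42 (a rough value for silica near 2.79 μm); neither the index nor the resulting thickness is taken from the paper's actual design.

```python
import math

def antiresonant_thickness(wavelength_um, n_wall, m=1):
    """Wall thickness (same units as wavelength) placing the operating
    wavelength at the m-th anti-resonance of a thin dielectric wall,
    per the standard ARROW model:
        t = (m - 1/2) * lam / (2 * sqrt(n^2 - 1))
    """
    return (m - 0.5) * wavelength_um / (2.0 * math.sqrt(n_wall**2 - 1))

def resonant_wavelength(t_um, n_wall, m=1):
    """m-th high-loss resonance of a wall of thickness t:
        lam_m = (2 * t / m) * sqrt(n^2 - 1)
    """
    return (2.0 * t_um / m) * math.sqrt(n_wall**2 - 1)

# Illustrative numbers only: n = 1.42 is an assumed wall index,
# not a parameter reported in the article.
t = antiresonant_thickness(2.79, 1.42, m=1)
print(f"first-order anti-resonant wall thickness ~ {t:.2f} um")  # ~ 0.69 um
```

For m = 1 the nearest resonance of this wall falls at exactly half the operating wavelength, so the 2.79 μm line sits deep inside the fundamental low-loss window; the paper's polarization-maintaining behavior then comes from making the two polarizations see different cladding indices, not from this thickness choice alone.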