Search Results (896)

Search Parameters:
Keywords = similarity-based retrieval

12 pages, 2948 KB  
Article
Molecular Mimicry Between Toxoplasma gondii B-Cell Epitopes and Human Antigens Related to Schizophrenia: An In Silico Approach
by Juan F. Cano, Maria Andrea Bernal-Valencia, Pablo Vargas-Acevedo, Germán Mejía-Salgado, Andrés Sánchez, Oscar Correa-Jiménez, Marlon Múnera and Alejandra de-la-Torre
Int. J. Mol. Sci. 2025, 26(21), 10321; https://doi.org/10.3390/ijms262110321 - 23 Oct 2025
Abstract
Schizophrenia is a complex disorder influenced by genetic, neurobiological, and environmental factors, with increasing evidence implicating immune dysregulation. This study examined potential molecular mimicry between autoantigens associated with schizophrenia and proteins from Toxoplasma gondii, a parasite previously linked to the disorder. Amino acid sequences of schizophrenia-related autoantigens were retrieved from databases (AAgAtlas, PubMed), and homologous sequences were searched within the T. gondii proteome. Sequence identity was evaluated, and conserved B-cell epitopes were predicted using three-dimensional structures from the Protein Data Bank or models generated in Swiss-Model, followed by epitope mapping with ElliPro. Five autoantigens—gamma-enolase (ENO2), thyroid peroxidase (TPO), glutamic acid decarboxylase 65 kDa isoform (GAD65), serine/threonine-protein kinase 2 (VRK2), and dihydropyrimidine dehydrogenase [NADP(+)] (DPYD)—showed similarities with T. gondii proteins. Among them, enolase exhibited the highest homology, with identities up to 65%. These findings provide preliminary evidence of shared antigenic features between the parasite and schizophrenia-related autoantigens. Such mimicry could contribute to disease mechanisms by triggering autoimmune responses in genetically susceptible individuals, supporting the hypothesis that T. gondii infection may influence schizophrenia pathogenesis. Nonetheless, the results are based exclusively on in silico analyses, and experimental validation will be required to confirm potential cross-reactivity. Full article
(This article belongs to the Special Issue Emerging Biological and Molecular Targets in Schizophrenia)
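The homology screen described above reduces, at each aligned position, to a percent-identity computation. A minimal sketch of that quantity (not the authors' pipeline; the peptide fragments below are invented for illustration):

```python
def percent_identity(a: str, b: str) -> float:
    """Percentage of aligned positions at which two sequences share the same residue."""
    if not a or len(a) != len(b):
        raise ValueError("sequences must be non-empty and of equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Toy aligned fragments (hypothetical, not from the paper): 7 of 8 residues match.
print(percent_identity("GAPPAGHR", "GAPSAGHR"))  # prints 87.5
```

In practice such comparisons are run over alignments produced by tools like BLAST rather than over pre-aligned strings.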

27 pages, 1378 KB  
Article
Automated Taxonomy Construction Using Large Language Models: A Comparative Study of Fine-Tuning and Prompt Engineering
by Binh Vu, Rashmi Govindraju Naik, Bao Khanh Nguyen, Sina Mehraeen and Matthias Hemmje
Eng 2025, 6(11), 283; https://doi.org/10.3390/eng6110283 - 22 Oct 2025
Abstract
Taxonomies provide essential hierarchical structures for classifying information, enabling effective retrieval and knowledge organization in diverse domains such as e-commerce, academic research, and web search. Traditional taxonomy construction, heavily reliant on manual curation by domain experts, faces significant challenges in scalability, cost, and consistency when dealing with the exponential growth of digital data. Recent advancements in Large Language Models (LLMs) and Natural Language Processing (NLP) present powerful opportunities for automating this complex process. This paper explores the potential of LLMs for automated taxonomy generation, focusing on methodologies incorporating semantic embedding generation, keyword extraction, and machine learning clustering algorithms. We specifically investigate and conduct a comparative analysis of two primary LLM-based approaches using a dataset of eBay product descriptions. The first approach involves fine-tuning a pre-trained LLM using structured hierarchical data derived from chain-of-layer clustering outputs. The second employs prompt-engineering techniques to guide LLMs in generating context-aware hierarchical taxonomies based on clustered keywords without explicit model retraining. Both methodologies are evaluated for their efficacy in constructing organized multi-level hierarchical taxonomies. Evaluation using semantic similarity metrics (BERTScore and Cosine Similarity) against a ground truth reveals that the fine-tuning approach yields higher overall accuracy and consistency (BERTScore F1: 70.91%; Cosine Similarity: 66.40%) compared to the prompt-engineering approach (BERTScore F1: 61.66%; Cosine Similarity: 60.34%). We delve into the inherent trade-offs between these methods concerning semantic fidelity, computational resource requirements, result stability, and scalability. 
Finally, we outline potential directions for future research aimed at refining LLM-based taxonomy construction systems to handle large dynamic datasets with enhanced accuracy, robustness, and granularity. Full article
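The cosine-similarity metric used in the evaluation above is straightforward to state. A hedged sketch, with toy vectors standing in for real sentence embeddings:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length real vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
```

BERTScore layers token-level contextual-embedding matching on top of this basic similarity, which is why the two metrics in the abstract can diverge.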

22 pages, 662 KB  
Article
Multi-Chain Fusion Reasoning for Knowledge Graph Link Prediction
by Shaonian Huang, Peilin Li, Huanran Wang and Zhixin Chen
Electronics 2025, 14(20), 4127; https://doi.org/10.3390/electronics14204127 - 21 Oct 2025
Abstract
Knowledge graph link prediction currently faces challenges such as insufficient semantic fusion of structured knowledge and unstructured text, limited representation learning for long-tailed entities, and poor interpretability of the reasoning process. To address these problems, this paper proposes a multi-chain fusion reasoning framework for accurate link prediction. First, a dual retrieval mechanism based on semantic similarity metrics and embedding feature matching is designed to construct a high-confidence candidate entity set. Second, entity-attribute chains, entity-relationship chains, and historical context chains are built by integrating contextual information from external knowledge bases. Finally, a self-consistency scoring method that fuses type constraints and semantic space alignment jointly validates the structural rationality and semantic relevance of candidate entities. Experiments on two public datasets show that the method fully exploits multi-chain reasoning and significantly improves the accuracy of knowledge graph link prediction. Full article
(This article belongs to the Special Issue Digital Intelligence Technology and Applications, 2nd Edition)

20 pages, 1406 KB  
Study Protocol
A Study on the Intelligent Estimation Systems for Costing Traffic Engineering and Landscaping Projects
by Dan Zhang, Jinxuan Ning, Xing Li and Xiaochen Duan
Buildings 2025, 15(20), 3793; https://doi.org/10.3390/buildings15203793 - 21 Oct 2025
Abstract
Research objective: This study analyzes the budget quotas and sample cases of traffic engineering and landscaping projects to address two shortcomings of existing cost estimation techniques: low accuracy and an inability to reflect enterprise cost levels. It constructs a historical database and uses Python and BIM to develop a BP neural network intelligent estimation system, aiming to provide data and decision support for intelligent, visual cost estimation in traffic landscaping projects. Research conclusions: This study focuses on construction drawing budget estimation for transportation engineering and landscape ecological engineering projects. Data were collected through questionnaires administered to scholars and practitioners, with key factors influencing pricing units identified using SPSS factor analysis. Extensive historical data on road transportation and greening engineering were then gathered and standardized through temporal and regional adjustments. Quantitative feature analysis was conducted to establish a historical database of construction drawing budgets for completed transportation landscape ecological projects, organized by construction enterprise. The cosine similarity method was employed to retrieve highly similar sample cases from the database for target projects. A BP neural network-based intelligent estimation system was developed using Python and BIM technology, providing reliable data support and technical assurance for cost estimation, decision-making, and ongoing maintenance of transportation landscape and ecological engineering projects. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
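The case-retrieval step described above (cosine similarity over feature vectors of historical projects) can be sketched as follows. The feature vectors and ranking here are illustrative, not the study's actual data:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def top_k_similar_cases(query, historical_cases, k=3):
    """Indices of the k historical cases whose feature vectors best match the query project."""
    order = sorted(range(len(historical_cases)),
                   key=lambda i: cosine(query, historical_cases[i]),
                   reverse=True)
    return order[:k]
```

The retrieved cases would then feed the BP neural network as training or reference samples for the target project's estimate.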

25 pages, 2968 KB  
Article
ECSA: Mitigating Catastrophic Forgetting and Few-Shot Generalization in Medical Visual Question Answering
by Qinhao Jia, Shuxian Liu, Mingliang Chen, Tianyi Li and Jing Yang
Tomography 2025, 11(10), 115; https://doi.org/10.3390/tomography11100115 - 20 Oct 2025
Abstract
Objective: Medical Visual Question Answering (Med-VQA), a key technology that integrates computer vision and natural language processing to assist in clinical diagnosis, possesses significant potential for enhancing diagnostic efficiency and accuracy. However, its development is constrained by two major bottlenecks: weak few-shot generalization capability stemming from the scarcity of high-quality annotated data and the problem of catastrophic forgetting when continually learning new knowledge. Existing research has largely addressed these two challenges in isolation, lacking a unified framework. Methods: To bridge this gap, this paper proposes a novel Evolvable Clinical-Semantic Alignment (ECSA) framework, designed to synergistically solve these two challenges within a single architecture. ECSA is built upon powerful pre-trained vision (BiomedCLIP) and language (Flan-T5) models, with two innovative modules at its core. First, we design a Clinical-Semantic Disambiguation Module (CSDM), which employs a novel debiased hard negative mining strategy for contrastive learning. This enables the precise discrimination of “hard negatives” that are visually similar but clinically distinct, thereby significantly enhancing the model’s representation ability in few-shot and long-tail scenarios. Second, we introduce a Prompt-based Knowledge Consolidation Module (PKC), which acts as a rehearsal-free non-parametric knowledge store. It consolidates historical knowledge by dynamically accumulating and retrieving task-specific “soft prompts,” thus effectively circumventing catastrophic forgetting without relying on past data. Results: Extensive experimental results on four public benchmark datasets, VQA-RAD, SLAKE, PathVQA, and VQA-Med-2019, demonstrate ECSA’s state-of-the-art or highly competitive performance. 
Specifically, ECSA achieves excellent overall accuracies of 80.15% on VQA-RAD and 85.10% on SLAKE, while also showing strong generalization with 64.57% on PathVQA and 82.23% on VQA-Med-2019. More critically, in continual learning scenarios, the framework achieves a low forgetting rate of just 13.50%, showcasing its significant advantages in knowledge retention. Conclusions: These findings validate the framework’s substantial potential for building robust and evolvable clinical decision support systems. Full article

42 pages, 104137 KB  
Article
A Hierarchical Absolute Visual Localization System for Low-Altitude Drones in GNSS-Denied Environments
by Qing Zhou, Haochen Tang, Zhaoxiang Zhang, Yuelei Xu, Feng Xiao and Yulong Jia
Remote Sens. 2025, 17(20), 3470; https://doi.org/10.3390/rs17203470 - 17 Oct 2025
Abstract
Current drone navigation systems primarily rely on Global Navigation Satellite Systems (GNSSs), but their signals are susceptible to interference, spoofing, or suppression in complex environments, leading to degraded positioning performance or even failure. To enhance the positioning accuracy and robustness of low-altitude drones in satellite-denied environments, this paper investigates an absolute visual localization solution. This method achieves precise localization by matching real-time images with reference images that have absolute position information. To address the issue of insufficient feature generalization capability due to the complex and variable nature of ground scenes, a visual-based image retrieval algorithm is proposed, which utilizes a fusion of shallow spatial features and deep semantic features, combined with generalized average pooling to enhance feature representation capabilities. To tackle the registration errors caused by differences in perspective and scale between images, an image registration algorithm based on cyclic consistency matching is designed, incorporating a reprojection error loss function, a multi-scale feature fusion mechanism, and a structural reparameterization strategy to improve matching accuracy and inference efficiency. Based on the above methods, a hierarchical absolute visual localization system is constructed, achieving coarse localization through image retrieval and fine localization through image registration, while also integrating IMU prior correction and a sliding window update strategy to mitigate the effects of scale and rotation differences. The system is implemented on the ROS platform and experimentally validated in a real-world environment. The results show that the localization success rates for the h, s, v, and w trajectories are 95.02%, 64.50%, 64.84%, and 91.09%, respectively. Compared to similar algorithms, it demonstrates higher accuracy and better adaptability to complex scenarios. 
These results indicate that the proposed technology can achieve high-precision and robust absolute visual localization without the need for initial conditions, highlighting its potential for application in GNSS-denied environments. Full article

23 pages, 10902 KB  
Article
Deep Relevance Hashing for Remote Sensing Image Retrieval
by Xiaojie Liu, Xiliang Chen and Guobin Zhu
Sensors 2025, 25(20), 6379; https://doi.org/10.3390/s25206379 - 16 Oct 2025
Abstract
With the development of remote sensing technologies, the volume of remote sensing data is growing dramatically, making efficient management and retrieval of large-scale remote sensing images increasingly important. Recently, deep hashing for content-based remote sensing image retrieval (CBRSIR) has attracted significant attention due to its computational efficiency and high retrieval accuracy. Although great advancements have been achieved, the imbalance between easy and difficult image pairs during training often limits the model’s ability to capture complex similarities and degrades retrieval performance. Additionally, distinguishing images with the same Hamming distance but different categories remains a challenge during the retrieval phase. In this paper, we propose a novel deep relevance hashing (DRH) for remote sensing image retrieval, which consists of a global hash learning model (GHLM) and a local hash re-ranking model (LHRM). The goal of GHLM is to extract global features from RS images and generate compact hash codes for initial ranking. To achieve this, GHLM employs a deep convolutional neural network to extract discriminative representations. A weighted pairwise similarity loss is introduced to emphasize difficult image pairs and reduce the impact of easy ones during training. The LHRM predicts relevance scores for images that share the same Hamming distance with the query to reduce confusion in the retrieval stage. Specifically, we represent the retrieval list as a relevance matrix and employ a lightweight CNN model to learn the relevance scores of image pairs and refine the list. Experimental results on three benchmark datasets demonstrate that the proposed DRH method outperforms other deep hashing approaches, confirming its effectiveness in CBRSIR. Full article
(This article belongs to the Section Remote Sensors)
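The initial ranking stage described above orders database images by the Hamming distance between binary hash codes. A minimal sketch with integer-encoded toy codes (not the paper's learned hashes):

```python
def hamming(a: int, b: int) -> int:
    """Number of bit positions at which two integer-encoded hash codes differ."""
    return bin(a ^ b).count("1")

def rank_by_hamming(query_code: int, db_codes):
    """Initial retrieval ranking: database indices sorted by Hamming distance to the query."""
    return sorted(range(len(db_codes)), key=lambda i: hamming(query_code, db_codes[i]))
```

Ties at equal Hamming distance are precisely the ambiguity the LHRM re-ranking stage is designed to resolve.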

30 pages, 4855 KB  
Article
Towards Reliable High-Resolution Satellite Products for the Monitoring of Chlorophyll-a and Suspended Particulate Matter in Optically Shallow Coastal Lagoons
by Samuel Martin, Philippe Bryère, Pierre Gernez, Pannimpullath Remanan Renosh and David Doxaran
Remote Sens. 2025, 17(20), 3430; https://doi.org/10.3390/rs17203430 - 14 Oct 2025
Abstract
Coastal lagoons are fragile and dynamic ecosystems that are particularly vulnerable to climate change and anthropogenic pressures such as urbanization and eutrophication. These vulnerabilities highlight the need for frequent and spatially extensive monitoring of water quality (WQ). While satellite remote sensing offers a valuable tool to support this effort, the optical complexity and shallow depths of lagoons pose major challenges for retrieving water column biogeochemical parameters such as chlorophyll-a ([chl-a]) and suspended particulate matter ([SPM]) concentrations. In this study, we develop and evaluate a robust satellite-based processing chain using Sentinel-2 MSI imagery over two French Mediterranean lagoon systems (Berre and Thau), supported by extensive in situ radiometric and biogeochemical datasets. Our approach includes the following: (i) a comparative assessment of six atmospheric correction (AC) processors, (ii) the development of an Optically Shallow Water Probability Algorithm (OSWPA), a new semi-empirical algorithm to estimate the probability of bottom contamination (BC), and (iii) the evaluation of several [chl-a] and [SPM] inversion algorithms. Results show that the Sen2Cor AC processor combined with a near-infrared similarity correction (NIR-SC) yields relative errors below 30% across all bands for retrieving remote-sensing reflectance Rrs(λ). OSWPA provides a spatially continuous and physically consistent alternative to binary BC masks. A new [chl-a] algorithm based on a near-infrared/blue Rrs ratio improves the retrieval accuracy while the 705 nm band appears to be the most suitable for retrieving [SPM] in optically shallow lagoons. This processing chain enables high-resolution WQ monitoring of two coastal lagoon systems and supports future large-scale assessments of ecological trends under increasing climate and anthropogenic stress. Full article
(This article belongs to the Section Ocean Remote Sensing)

22 pages, 7434 KB  
Article
A Lightweight Image-Based Decision Support Model for Marine Cylinder Lubrication Based on CNN-ViT Fusion
by Qiuyu Li, Guichen Zhang and Enrui Zhao
J. Mar. Sci. Eng. 2025, 13(10), 1956; https://doi.org/10.3390/jmse13101956 - 13 Oct 2025
Abstract
In the context of "Energy Conservation and Emission Reduction," low-sulfur fuel has become widely adopted in maritime operations, posing significant challenges to cylinder lubrication systems. Traditional oil injection strategies, which rely heavily on manual experience, suffer from instability and high costs. To address this, a lightweight image retrieval model for cylinder lubrication is proposed that leverages deep learning and computer vision to support oiling decisions based on visual features. The model comprises three components: a backbone network, a feature enhancement module, and a similarity retrieval module. EfficientNetB0 serves as the backbone for efficient feature extraction at low computational overhead. MobileViT Blocks are integrated to combine the local feature perception of Convolutional Neural Networks (CNNs) with the global modeling capacity of Transformers. To further enlarge the receptive field and improve multi-scale representation, Receptive Field Blocks (RFB) are introduced between the components. Additionally, the Convolutional Block Attention Module (CBAM) attention mechanism sharpens the focus on salient regions, improving feature discrimination. A high-quality image dataset was constructed using WINNING's large bulk carriers under various sea conditions. The experimental results demonstrate that the EfficientNetB0 + RFB + MobileViT + CBAM model achieves excellent performance at minimal computational cost: 99.71% Precision, 99.69% Recall, and 99.70% F1-score, improvements of 11.81%, 15.36%, and 13.62%, respectively, over the baseline EfficientNetB0. With only 0.3 additional GFLOPs and an 8.3 MB increase in model size, the approach balances accuracy and inference efficiency. The model also demonstrates good robustness and application stability in real-world ship testing, with potential for further adoption in intelligent ship maintenance. Full article
(This article belongs to the Section Ocean Engineering)
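The Precision, Recall, and F1 figures above follow from the standard definitions. A minimal sketch computing them from raw counts (the counts here are invented, not the paper's):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and their harmonic mean (F1) from raw classification counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 8 true positives, 2 false positives, 2 false negatives.
p, r, f = precision_recall_f1(8, 2, 2)
```

F1 sits between precision and recall and equals both when they coincide, as in the near-symmetric scores the abstract reports.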

23 pages, 2102 KB  
Article
Hawkish or Dovish? That Is the Question: Agentic Retrieval of FED Monetary Policy Report
by Ana Lorena Jiménez-Preciado, Mario Alejandro Durán-Saldivar, Salvador Cruz-Aké and Francisco Venegas-Martínez
Mathematics 2025, 13(20), 3255; https://doi.org/10.3390/math13203255 - 11 Oct 2025
Abstract
This paper develops a Natural Language Processing (NLP) pipeline to quantify the hawkish–dovish stance in the Federal Reserve’s semiannual Monetary Policy Reports (MPRs). The goal is to transform long-form central-bank text into reproducible stance scores and interpretable policy signals for research and monitoring. The corpus comprises 26 MPRs (26 February 2013 to 20 June 2025). PDFs are parsed and segmented and chunks are embedded, indexed with FAISS, retrieved via LangChain, and scored by GPT-4o on a continuous scale from −2 (dovish) to +2 (hawkish). Reliability is assessed with a four-dimension validation suite: (i) semantic consistency using cosine-similarity separation, (ii) numerical consistency against theory-implied correlation ranges (e.g., Taylor-rule logic), (iii) bootstrap stability of reported metrics, and (iv) content-quality diagnostics. Results show a predominant Neutral distribution (50.0%), with Dovish (26.9%) and Hawkish (23.1%). The average stance is near zero (≈0.019) with volatility σ ≈ 0.866, and the latest window exhibits a hawkish drift of ~+0.8 points. The Numerical Consistency Score is 0.800, and the integrated validation score is 0.796, indicating publication-grade robustness. We conclude that an embedding-based, agentic RAG approach with GPT-4o yields a scalable, auditable measure of FED communication; limitations include biannual frequency and prompt/model sensitivity, but the framework is suitable for policy tracking and empirical applications. Full article
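The retrieval step in the pipeline above is, at its core, a maximum-inner-product search over chunk embeddings, which is what a flat FAISS index (IndexFlatIP) computes exactly. A NumPy stand-in for that step, with toy embeddings in place of the real index:

```python
import numpy as np

def retrieve_top_k(query_vec, chunk_vecs, k=3):
    """Indices of the k chunk embeddings with the highest inner product to the query.

    With unit-normalized embeddings, inner product equals cosine similarity.
    """
    scores = chunk_vecs @ query_vec   # one score per stored chunk
    return np.argsort(-scores)[:k]    # highest-scoring chunks first
```

The retrieved chunks would then be passed to the LLM for stance scoring, as in the agentic RAG loop the abstract describes.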

20 pages, 7466 KB  
Article
Feasibility Study of CLIP-Based Key Slice Selection in CT Images and Performance Enhancement via Lesion- and Organ-Aware Fine-Tuning
by Kohei Yamamoto and Tomohiro Kikuchi
Bioengineering 2025, 12(10), 1093; https://doi.org/10.3390/bioengineering12101093 - 10 Oct 2025
Abstract
Large-scale medical visual question answering (MedVQA) datasets are critical for training and deploying vision–language models (VLMs) in radiology. Ideally, such datasets should be automatically constructed from routine radiology reports and their corresponding images. However, no existing method directly links free-text findings to the most relevant 2D slices in volumetric computed tomography (CT) scans. To address this gap, a contrastive language–image pre-training (CLIP)-based key slice selection framework is proposed, which matches each sentence to its most informative CT slice via text–image similarity. This experiment demonstrates that models pre-trained in the medical domain already achieve competitive slice retrieval accuracy and that fine-tuning them on a small dual-supervised dataset that imparts both lesion- and organ-level awareness yields further gains. In particular, the best-performing model (fine-tuned BiomedCLIP) achieved a Top-1 accuracy of 51.7% for lesion-aware slice retrieval, representing a 20-point improvement over baseline CLIP, and was accepted by radiologists in 56.3% of cases. By automating the report-to-slice alignment, the proposed method facilitates scalable, clinically realistic construction of MedVQA resources. Full article
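The report-to-slice matching above amounts to an argmax over text-image similarities. A hedged NumPy sketch, with toy vectors standing in for real CLIP embeddings:

```python
import numpy as np

def select_key_slice(sentence_emb, slice_embs):
    """Index of the CT slice whose embedding is most cosine-similar to the sentence embedding."""
    t = sentence_emb / np.linalg.norm(sentence_emb)
    s = slice_embs / np.linalg.norm(slice_embs, axis=1, keepdims=True)
    return int(np.argmax(s @ t))
```

In the paper's setting each row of `slice_embs` would be the image embedding of one axial CT slice, so the argmax yields the key slice for that finding sentence.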

21 pages, 2038 KB  
Review
Densifying the Future: A Critical Review of Osseodensification and Implant Dentistry
by Rafael Ortiz, Paulo Maurício and Paulo Sobral Mascarenhas
Dent. J. 2025, 13(10), 461; https://doi.org/10.3390/dj13100461 - 9 Oct 2025
Abstract
Osseodensification (OD) compacts trabecular bone during implant site preparation rather than removing it, potentially enhancing primary stability versus conventional drilling. This review critically appraised clinical and preclinical evidence for OD’s biological and biomechanical efficacy in implant dentistry. We conducted electronic searches in seven databases (PubMed, Scopus, Web of Science, ScienceDirect, SciELO, LILACS, DOAJ) for the period January 2014 to March 2024. Studies comparing osseodensification with conventional drilling in clinical and large-animal models were included. Primary outcomes were insertion torque, implant stability quotient (ISQ), bone-to-implant contact (BIC), bone area fraction occupancy (BAFO), and complications. Of 75 retrieved records, 38 studies (27 clinical, 11 preclinical) provided analysable data. Based on descriptive averages from the narrative synthesis, osseodensification increased mean insertion torque by around 45% (range 32–59%) and initial ISQ by 3–10 units compared with conventional drilling. These gains permitted immediate loading in 78% of cases and shortened operating time (mean reduction 15–20 min). Animal studies demonstrated 12–28% higher BIC and increased peri-implant bone density at 4–12 weeks. No serious adverse events were recorded. Postoperative morbidity was similar between techniques. The collated evidence indicates that osseodensification significantly improves primary stability and may accelerate healing protocols, particularly in low-density (Misch D3–D4) bone. However, the predominance of short-term data and heterogeneity in surgical parameters limit definitive conclusions. Long-term randomised controlled trials with standardised protocols are needed before universal clinical recommendations can be established. Full article
(This article belongs to the Section Dental Implantology)

20 pages, 7351 KB  
Article
A Sketch-Based Cross-Modal Retrieval Model for Building Localization Without Satellite Signals
by Haihua Du, Jiawei Fan, Yitao Huang, Longyang Lin and Jiuchao Qian
Electronics 2025, 14(19), 3936; https://doi.org/10.3390/electronics14193936 - 4 Oct 2025
Abstract
In existing non-satellite navigation systems, visual localization is widely adopted for its high precision. However, in scenarios with highly similar building structures, traditional visual localization methods that rely on direct coordinate prediction often suffer from decreased accuracy or even failure. Moreover, as scene complexity increases, their robustness tends to decline. To address these challenges, this paper proposes a Sketch Line Information Consistency Generation (SLIC) model for indirect building localization. Instead of regressing geographic coordinates, the model retrieves candidate building images that correspond to hand-drawn sketches, and these retrieved results serve as proxies for localization in satellite-denied environments. Within the model, the Line-Attention Block and Relation Block are designed to extract fine-grained line features and structural correlations, thereby improving retrieval accuracy. Experiments on multiple architectural datasets demonstrate that the proposed approach achieves high precision and robustness, with mAP@2 values ranging from 0.87 to 1.00, providing a practical alternative to conventional coordinate-based localization methods. Full article
(This article belongs to the Special Issue Recent Advances in Autonomous Localization and Navigation System)
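The mAP@2 figures reported for the SLIC model above are a standard retrieval metric: average precision truncated at rank 2, averaged over all sketch queries. As a minimal sketch of how such a score is computed (the query IDs and relevance sets below are hypothetical, not from the paper's datasets):

```python
def average_precision_at_k(ranked_ids, relevant_ids, k=2):
    """AP@k: average of precision values at each rank (within the top k)
    where a relevant item appears, normalized by min(|relevant|, k)."""
    hits = 0
    score = 0.0
    for rank, item in enumerate(ranked_ids[:k], start=1):
        if item in relevant_ids:
            hits += 1
            score += hits / rank
    denom = min(len(relevant_ids), k)
    return score / denom if denom else 0.0

def mean_average_precision_at_k(queries, k=2):
    """mAP@k over (ranked_ids, relevant_ids) pairs, one pair per query."""
    return sum(average_precision_at_k(r, rel, k) for r, rel in queries) / len(queries)

# Toy example: two sketch queries against a building gallery.
queries = [
    (["b3", "b1"], {"b3"}),  # correct building ranked first  -> AP@2 = 1.0
    (["b7", "b2"], {"b2"}),  # correct building ranked second -> AP@2 = 0.5
]
print(mean_average_precision_at_k(queries, k=2))  # 0.75
```

An mAP@2 of 1.00, as the paper reports on some datasets, means the correct building was ranked first for every query.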

24 pages, 11759 KB  
Review
Data Sources for Traffic Analysis in Urban Canyons—The Comprehensive Literature Review
by Michał Zawodny and Maciej Kruszyna
Appl. Sci. 2025, 15(19), 10686; https://doi.org/10.3390/app151910686 - 3 Oct 2025
Viewed by 643
Abstract
We present a comprehensive literature review of big data and V2X research to identify promising tools for detecting vehicles in traffic studies and for supporting safe autonomous vehicle (AV) traffic. The data sources presented can supply real-time data for V2X systems and offline databases from VATnets for micro- and macro-modeling in traffic research. We focus on sources that do not rely on GNSS or other systems that can be disrupted by high-rise buildings and dense smart-city infrastructure, and we also review big data sources in traffic modeling that may be useful in future research. The findings of both reviews are summarized in tables at the end of the review sections of the paper. We then propose two hypotheses on how traffic models can obtain data in the urban-canyon connected-environment scenario. The first hypothesis proposes using Roadside Units (RSUs) to retrieve data in ways similar to cellular data in traffic research and argues that this source is data-rich. The second acknowledges the research potential of Bluetooth/Wi-Fi scanners in V2X environments. Full article
(This article belongs to the Special Issue Mapping and Localization for Intelligent Vehicles in Urban Canyons)

33 pages, 4190 KB  
Article
Preserving Songket Heritage Through Intelligent Image Retrieval: A PCA and QGD-Rotational-Based Model
by Nadiah Yusof, Nazatul Aini Abd. Majid, Amirah Ismail and Nor Hidayah Hussain
Computers 2025, 14(10), 416; https://doi.org/10.3390/computers14100416 - 1 Oct 2025
Viewed by 379
Abstract
Malay songket motifs are a vital component of Malaysia’s intangible cultural heritage, characterized by intricate visual designs and deep cultural symbolism. However, the practical digital preservation and retrieval of these motifs present challenges, particularly due to the rotational variations typical in textile imagery. This study introduces a novel Content-Based Image Retrieval (CBIR) model that integrates Principal Component Analysis (PCA) for feature extraction and Quadratic Geometric Distance (QGD) for measuring similarity. To evaluate the model’s performance, a curated dataset comprising 413 original images and 4956 synthetically rotated songket motif images was utilized. The retrieval system featured metadata-driven preprocessing, dimensionality reduction, and multi-angle similarity assessment to address the issue of rotational invariance comprehensively. Quantitative evaluations using precision, recall, and F-measure metrics demonstrated that the proposed PCAQGD + Rotation technique achieved a mean F-measure of 59.72%, surpassing four benchmark retrieval methods. These findings confirm the model’s capability to accurately retrieve relevant motifs across varying orientations, thus supporting cultural heritage preservation efforts. The integration of PCA and QGD techniques effectively narrows the semantic gap between machine perception and human interpretation of motif designs. Future research should focus on expanding motif datasets and incorporating deep learning approaches to enhance retrieval precision, scalability, and applicability within larger national heritage repositories. Full article
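The pipeline described above (PCA feature extraction followed by multi-angle distance-based matching) can be sketched compactly. The code below is an illustrative simplification, not the paper's implementation: it uses random stand-in images, plain Euclidean distance in place of the authors' QGD measure, and only 90-degree rotations of the query to mimic the multi-angle similarity assessment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 40 grayscale 8x8 "motif" images.
gallery = rng.random((40, 8, 8))

def fit_pca(images, n_components=10):
    """Fit PCA via SVD on flattened images; return the mean vector
    and the top principal components (rows of vt)."""
    X = images.reshape(len(images), -1)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Project one image into the PCA feature space."""
    return components @ (image.reshape(-1) - mean)

def retrieve(query, gallery, mean, components, k=3):
    """Rank gallery images by the minimum Euclidean distance over four
    90-degree rotations of the query (rotation-tolerant matching)."""
    feats = np.stack([project(g, mean, components) for g in gallery])
    best = np.full(len(gallery), np.inf)
    for r in range(4):
        q = project(np.rot90(query, r), mean, components)
        best = np.minimum(best, np.linalg.norm(feats - q, axis=1))
    return np.argsort(best)[:k]

mean, comps = fit_pca(gallery)
# A rotated copy of gallery image 5 should retrieve image 5 first,
# since one of the four query rotations matches it exactly.
print(retrieve(np.rot90(gallery[5]), gallery, mean, comps))
```

Handling arbitrary rotation angles, as the paper's 4956 synthetically rotated images require, would mean rotating with interpolation and comparing at finer angular steps; the minimum-over-rotations idea stays the same.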
