Search Results (143)

Search Parameters:
Keywords = latent-context information

14 pages, 16245 KB  
Article
Aging State Classification of Lithium-Ion Batteries in a Low-Dimensional Latent Space
by Limei Jin, Franz Philipp Bereck, Rüdiger-A. Eichel, Josef Granwehr and Christoph Scheurer
Batteries 2026, 12(4), 127; https://doi.org/10.3390/batteries12040127 - 7 Apr 2026
Abstract
Battery datasets, whether gathered experimentally or through simulation, are typically high-dimensional and complex, which complicates the direct interpretation of degradation behavior or anomaly detection. To overcome these limitations, this study introduces a framework that compresses battery signals into a low-dimensional representation using an autoencoder, enabling the extraction of informative features for state analysis. A central component of this work is the systematic comparison of latent representations obtained from two fundamentally different data sources: frequency-domain impedance data and time-domain voltage-current data. The close agreement of aging trajectories in both representations suggests that information traditionally derived from impedance analysis can also be captured directly from raw time-series signals. To better approximate real operating conditions, synthetic datasets are augmented with stochastic perturbations. In this context, latent spaces learned from idealized periodic inputs are contrasted with those derived from permuted and noise-contaminated signals. The resulting low-dimensional features are subsequently evaluated through a support vector machine with both linear and nonlinear kernel functions, allowing the categorization of battery states into fresh, aged and damaged conditions. The results demonstrate that the progression of battery degradation is consistently reflected in the latent space, independent of the input domain or signal quality. This robustness indicates that the proposed approach can effectively capture essential aging characteristics even under non-ideal conditions. Consequently, this framework provides a basis for developing advanced diagnostic strategies, including the design of pseudo-random excitation profiles for improved battery state assessment and optimized operational control. Full article
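The pipeline this abstract describes (compress signals into a low-dimensional latent space, then separate fresh, aged and damaged states with linear and nonlinear SVMs) can be sketched as follows. This is an illustrative Python sketch on synthetic waveforms, with PCA standing in for the paper's autoencoder; the three-way labels and all data are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for high-dimensional battery signals: three states
# (fresh / aged / damaged) as amplitude-shifted noisy waveforms.
t = np.linspace(0.0, 1.0, 200)
base = np.sin(2 * np.pi * 5 * t)

def make_signals(shift, n=100):
    return base * (1.0 + shift) + 0.05 * rng.standard_normal((n, t.size))

X = np.vstack([make_signals(s) for s in (0.0, 0.3, 0.8)])
y = np.repeat([0, 1, 2], 100)  # 0 = fresh, 1 = aged, 2 = damaged

# PCA as a linear stand-in for the paper's autoencoder: compress each
# 200-sample signal into a 2-D latent vector.
Z = PCA(n_components=2).fit_transform(X)

Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3,
                                      random_state=0, stratify=y)

# Classify battery states in the latent space with linear and RBF SVMs.
accs = {}
for kernel in ("linear", "rbf"):
    accs[kernel] = SVC(kernel=kernel).fit(Ztr, ytr).score(Zte, yte)
    print(f"{kernel} SVM accuracy in the 2-D latent space: {accs[kernel]:.2f}")
```

On this toy data both kernels separate the states almost perfectly, mirroring the paper's observation that degradation is consistently reflected in a low-dimensional latent space.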

27 pages, 1279 KB  
Article
Query-Adaptive Hybrid Search
by Pavel Posokhov, Stepan Skrylnikov, Sergei Masliukhin, Alina Zavgorodniaia, Olesia Koroteeva and Yuri Matveev
Mach. Learn. Knowl. Extr. 2026, 8(4), 91; https://doi.org/10.3390/make8040091 - 5 Apr 2026
Abstract
The modern information retrieval field increasingly relies on hybrid search systems combining sparse retrieval with dense neural models. However, most existing hybrid frameworks employ static mixing coefficients and independent component training, failing to account for the specific needs of individual queries and corpus heterogeneity. In this paper, we introduce an adaptive hybrid retrieval framework featuring query-driven alpha prediction that dynamically calibrates the mixing weights based on query latent representations instantiated in a lightweight low-latency configuration and a full-capacity encoder-scale predictor, enabling flexible trade-offs between computational efficiency and retrieval accuracy without relying on resource-inefficient LLM-based online evaluation. Furthermore, we propose antagonist negative sampling, a novel training paradigm that optimizes the dense encoder to resolve the systematic failures of the lexical retriever, prioritizing hard negatives where BM25 exhibits high uncertainty. Empirical evaluations on large-scale multilingual benchmarks (MLDR and MIRACL) indicate that our approach demonstrates superior average performance compared to state-of-the-art models such as BGE-M3 and mGTE, achieving an nDCG@10 of 74.3 on long-document retrieval. Notably, our framework recovers up to 92.5% of the theoretical oracle performance and yields significant improvements in nDCG@10 across 16 languages, particularly in challenging long-context scenarios. Full article
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
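The core idea of query-adaptive mixing (a per-query coefficient blending normalized sparse and dense scores) can be sketched as below. The alpha predictor, its input feature (query length), and all scores here are hypothetical stand-ins, not the paper's trained predictor, which works from query latent representations.

```python
import numpy as np

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def hybrid_scores(bm25, dense, alpha):
    """Convex combination of normalized sparse (BM25) and dense scores."""
    return alpha * minmax(dense) + (1.0 - alpha) * minmax(bm25)

def predict_alpha(query_len, w=0.3, b=-1.0):
    # Hypothetical query-driven alpha predictor: verbose queries lean on
    # the dense model, short keyword queries lean on BM25.
    return 1.0 / (1.0 + np.exp(-(w * query_len + b)))

bm25 = [12.1, 9.4, 3.2, 0.5]      # lexical scores for four documents
dense = [0.41, 0.77, 0.69, 0.12]  # dense-retriever similarities

for q_len in (2, 12):
    a = predict_alpha(q_len)
    ranking = np.argsort(-hybrid_scores(bm25, dense, a))
    print(f"query length {q_len}: alpha = {a:.2f}, ranking = {ranking.tolist()}")
```

The point of the sketch is that the final ranking changes with the query: as alpha grows, documents favored by the dense model overtake those favored by BM25.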

17 pages, 1113 KB  
Communication
Bridging Spectral Statistics and Machine Learning for Semantic Road Network Analysis
by Abigail Kelly, Ramchandra Rimal and Arpan Man Sainju
Geomatics 2026, 6(2), 35; https://doi.org/10.3390/geomatics6020035 - 1 Apr 2026
Abstract
Accurate identification of road network intersections is essential for urban planning, autonomous navigation, and traffic safety analysis. However, standard approaches relying on local geometric attributes often overlook essential topological information. This limitation is particularly problematic for intersection types that are locally similar but topologically distinct. To address this, we propose a hybrid framework that augments intrinsic node attributes with Generalized Random Dot Product Graph embeddings and neighbor-aggregated features. We utilize tree-based ensemble classifiers, specifically Random Forest and Extreme Gradient Boosting, to process this enriched feature set. Unlike standard spectral methods that assume homophily, this approach explicitly models heterophilous connectivity to capture structural patterns where dissimilar nodes connect. Experiments on a real-world urban road network demonstrate that this topological augmentation yields consistent and robust improvements. The proposed integration with the Extreme Gradient Boosting model achieves a Macro ROC AUC of 0.8966 and a Micro F1 score of 0.7005, outperforming the baseline model (ROC AUC 0.8100, Micro F1 0.5919). Performance gains are most pronounced for topologically ambiguous intersection classes, confirming that local attributes alone fail to capture structural distinctions. These results demonstrate that latent structural context is a critical discriminator for granular road intersection classification. Full article
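A minimal sketch of the augmentation strategy described above: spectral node embeddings that keep negative eigenvalues (so heterophilous structure stays representable, in the spirit of the Generalized Random Dot Product Graph), concatenated with raw and neighbor-aggregated attributes and fed to a tree ensemble. The graph, attributes, and dimensions below are toy stand-ins, not the road-network data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Toy heterophilous graph: two node classes that mostly connect to the
# *other* class, as road segments of different types meet at junctions.
n = 60
labels = np.repeat([0, 1], n // 2)
prob = np.where(labels[:, None] != labels[None, :], 0.5, 0.05)
A = (rng.random((n, n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T

# GRDPG-style spectral embedding: keep the largest-|eigenvalue| pairs,
# negative eigenvalues included, so heterophily remains representable.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(-np.abs(vals))[:4]
Z = vecs[:, top] * np.sqrt(np.abs(vals[top]))

# Intrinsic node attributes plus neighbor-aggregated means.
attrs = rng.standard_normal((n, 3)) + labels[:, None]
deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
neigh = (A @ attrs) / deg

X = np.hstack([attrs, Z, neigh])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy with topological augmentation:", clf.score(X, labels))
```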

23 pages, 18538 KB  
Article
MSRNet: Mamba-Based Self-Refinement Framework for Remote Sensing Change Detection
by Haoxuan Sun, Xiaogang Yang, Ruitao Lu, Jing Zhang, Bo Li and Tao Zhang
Remote Sens. 2026, 18(7), 1042; https://doi.org/10.3390/rs18071042 - 30 Mar 2026
Abstract
Accurate change detection (CD) in very high-resolution (VHR, <1 m) optical remote sensing images remains challenging, as it requires effective modeling of long-range bi-temporal dependencies and robustness against label noise in complex urban environments. Existing deep learning-based CD methods either rely on convolutional operations with limited receptive fields or employ global attention mechanisms with high computational cost, making it difficult to simultaneously achieve efficient global context modeling and fine-grained structural sensitivity. To address these challenges, we propose a Mamba-based self-refinement framework for remote sensing change detection (MSRNet). Specifically, we introduce an attention-enhanced oblique state space module (AOSS) to model spatio-temporal dependencies with linear complexity while preserving fine-grained structural information. The four-branch attention fusion module (FBAM) further enhances cross-dimensional feature interaction to improve the discriminative capability of differential representations. In addition, a self-refinement module (SRM) incorporates a momentum encoder to generate high-quality pseudo-labels, mitigating annotation noise and enabling learning from latent changes. Extensive experiments on two benchmark VHR datasets, LEVIR-CD and WHU-CD, demonstrate that MSRNet achieves state-of-the-art performance in both accuracy and computational efficiency. Full article
(This article belongs to the Section AI Remote Sensing)
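Of the components above, the self-refinement module's momentum encoder is the easiest to isolate: the teacher that produces pseudo-labels is typically an exponential moving average (EMA) of the student. A minimal sketch with invented two-parameter "networks" standing in for MSRNet's encoders:

```python
import numpy as np

def ema_update(teacher, student, m=0.95):
    """Momentum (EMA) teacher update: teacher <- m*teacher + (1-m)*student."""
    return {k: m * teacher[k] + (1.0 - m) * student[k] for k in teacher}

# The teacher drifts slowly toward the student, so its pseudo-labels are
# smoother (less noise-sensitive) than the raw student's would be.
student = {"w": np.array([1.0, -2.0])}
teacher = {"w": np.zeros(2)}
for _ in range(100):
    teacher = ema_update(teacher, student)
print("teacher weights after 100 steps:", np.round(teacher["w"], 3))
```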

39 pages, 1508 KB  
Article
Acceptability Scale for the Use of Large Language Models (LLMs) by Project Teams: Development and Preliminary Validation
by Murilo Zanini de Carvalho, Renato Penha, Leonardo Vils, Flávio Santino Bizarrias and Fernando Antonio Ribeiro Serra
Systems 2026, 14(4), 366; https://doi.org/10.3390/systems14040366 - 30 Mar 2026
Abstract
The use of Large Language Models (LLMs) in organizational contexts has grown rapidly, particularly in project management activities. Despite this expansion, a relevant methodological gap can be observed in the literature: the absence of psychometrically validated instruments capable of measuring the acceptability of these technologies prior to their effective adoption, especially in project-oriented governance contexts. Traditional technology adoption models predominantly focus on a posteriori assessment of individual use, providing limited support for prospective analyses that inform strategic decision-making and organizational coordination mechanisms. In response to this gap, this study aims to develop and validate a psychometric scale to indirectly measure the acceptability, through outcome beliefs and with behavioral predispositions serving as structural proxies of the latent construct of LLM use by project management teams, with a focus on a priori judgments that precede the effective adoption of the technology. The initial scale, composed of 17 items, underwent content validation and was administered to a sample of 154 project management professionals. The latent structure was examined through Exploratory and Confirmatory Factor Analyses, resulting in the refinement of the instrument to 13 items distributed across two correlated factors. The results indicate that LLM acceptability is adequately represented by a bidimensional structure comprising the dimensions Intention/Predisposition and Trust/Perceived Benefit, both demonstrating high internal consistency and good statistical fit, and nomological validity evidenced by significant associations with respondents’ self-reported LLM usage frequency. These findings reinforce the conceptualization of acceptability as a prospective and multidimensional construct, relevant for supporting governance decisions and the adoption of artificial intelligence-based technologies in project-oriented organizational systems. 
The indirect measurement approach adopted here is theoretically grounded in the premise that a priori acceptability is not directly observable but is constituted by cognitive and dispositional beliefs formed prior to use. Full article
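The internal-consistency checks mentioned above can be illustrated with Cronbach's alpha on simulated congeneric items (a simpler, closely related statistic to the reliability indices typically reported alongside factor analyses). The 154 x 13 responses below are simulated, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(2)
trait = rng.standard_normal((154, 1))                 # one latent disposition
items = trait + 0.5 * rng.standard_normal((154, 13))  # 13 congeneric items
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```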

20 pages, 1074 KB  
Article
A Contrastive Representation Learning Framework for Event Causality Identification
by Guixiang Liao, Yanli Chen, Wei Ke, Hanzhou Wu and Zhicheng Dong
Information 2026, 17(4), 321; https://doi.org/10.3390/info17040321 - 26 Mar 2026
Abstract
Event causality identification (ECI), the task of identifying causal relationships among event mentions, has emerged as a pivotal area of research for comprehending event structures. Recent studies have leveraged Transformer-based models, augmented by auxiliary components, to develop effective contextual representations for causality prediction. A critical step in ECI models involves transforming intricate event context representations into causal label representations, thereby facilitating the logical score calculations necessary for both training and inference. However, existing models frequently depend on simplistic feedforward networks for this transformation, which often struggle to bridge the semantic gap between complex event contexts and target causal labels, particularly in linguistically nuanced scenarios. To address these limitations, we propose Contrastive Learning for Event Causality Identification (CLECI), an ECI framework that enhances representation learning by integrating contrastive learning techniques with a generator-discriminator mechanism built on causal label embeddings. In contrast to traditional direct transformation methods, CLECI generates latent causal label embeddings that filter out irrelevant information while aligning with potential label representations. By constructing positive and negative pairs of events, CLECI further augments the discriminative capability of event representations. Experimental evaluations conducted on the EventStoryLine (ESL), Causal-TimeBank (CTB), and MECI datasets demonstrate that CLECI achieves competitive performance, with F1-score improvements of 4.3%, 7.9%, and 2.5%, respectively, over the strongest baseline methods, while maintaining strong robustness in complex and noisy multilingual event contexts. Full article
(This article belongs to the Section Information Processes)
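The contrastive objective sketched in the abstract (pull an event representation toward its causally related positive, push it away from negatives) is commonly instantiated as an InfoNCE-style loss. A minimal NumPy sketch with random vectors standing in for CLECI's event representations:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor representation."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, neg) for neg in negatives]) / tau
    logits -= logits.max()                   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                 # positive sits at index 0

rng = np.random.default_rng(3)
anchor = rng.standard_normal(16)
positive = anchor + 0.1 * rng.standard_normal(16)  # "causally related" event
negatives = [rng.standard_normal(16) for _ in range(8)]

print(f"loss, aligned positive:   {info_nce(anchor, positive, negatives):.3f}")
print(f"loss, unrelated positive: {info_nce(anchor, rng.standard_normal(16), negatives):.3f}")
```

Minimizing this loss drives aligned pairs together and unrelated pairs apart, which is the mechanism behind the improved discriminative capability the abstract reports.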

28 pages, 7008 KB  
Article
Multimodal Deep Learning Framework for Profiling Socio-Economic Indicators and Public Health Determinants in Urban Environments
by Esaie Dufitimana, Jean Pierre Bizimana, Ernest Uwayezu, Paterne Gahungu and Emmy Mugisha
Urban Sci. 2026, 10(4), 177; https://doi.org/10.3390/urbansci10040177 - 25 Mar 2026
Abstract
Urbanization significantly enhances socio-economic conditions, health, and well-being for many by improving access to services, education, and economic opportunities. However, socio-economic and public health disparities are also being exacerbated by urbanization. The reliable data required to monitor these conditions are often unavailable, outdated, or inconsistent. This study introduces a multimodal deep learning framework that integrates satellite imagery with street network datasets to predict urban socio-economic indicators and public health determinants at the sector level as a political administrative unit of public health planning in Rwanda. We extracted latent visual and topological embeddings of the urban built environment, using a Convolutional Neural Network (CNN) and Graph Neural Network (GNN). These embeddings were fused through an attentional mechanism to train a multi-task regression model that simultaneously predicts multiple socio-economic indicators and public health determinants. This framework was applied to the City of Kigali in Rwanda. Overall, the multimodal fusion model achieved the best average performance across targets, with an average correlation of 0.68 and MAE of 1.26 for socio-economic indicators, and 0.68 and 1.46 for public health determinants, demonstrating the benefit of integrating visual and topological information. The learned fused embedding space arranges socio-economic indicators and public health determinant deciles along a continuous morphological gradient from sparsely built rural settings to dense urban settings, demonstrating that the urban form encodes latent signals that capture socio-economic indicators and health determinants. Moreover, the study reveals a strong relationship between socio-economic indicators and the public health index, with education, cooking materials, and floor materials exhibiting a correlation above 0.96. 
This work demonstrates the utility of an integrated framework for socio-economic indicator profiling and public health planning in data-scarce urban contexts, offering a scalable approach for monitoring the indicators of Sustainable Development Goals in rapidly changing urban environments. Full article
(This article belongs to the Topic Geospatial AI: Systems, Model, Methods, and Applications)
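The attentional fusion step (weighting the CNN's visual embedding against the GNN's topological embedding before the multi-task regression head) can be sketched with scalar softmax attention. The query vector and embeddings below are random stand-ins; the paper's actual fusion mechanism may differ in detail.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(embeddings, query):
    """Scalar attention over modality embeddings: weight each modality by
    its dot product with a (here invented) learned query vector."""
    E = np.stack(embeddings)             # (n_modalities, d)
    w = softmax(E @ query)               # one weight per modality
    return w, (w[:, None] * E).sum(axis=0)

rng = np.random.default_rng(4)
cnn_emb = rng.standard_normal(8)  # stand-in for the CNN's visual embedding
gnn_emb = rng.standard_normal(8)  # stand-in for the GNN's topological embedding
query = rng.standard_normal(8)

w, fused = attention_fuse([cnn_emb, gnn_emb], query)
print("modality weights:", np.round(w, 3), "| fused shape:", fused.shape)
```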

28 pages, 2882 KB  
Article
Semantic Divergence in AI-Generated and Human Influencer Product Recommendations: A Computational Analysis of Dual-Agent Communication in Social Commerce
by Woo-Chul Lee, Jang-Suk Lee and Jungho Suh
Appl. Sci. 2026, 16(6), 2816; https://doi.org/10.3390/app16062816 - 15 Mar 2026
Abstract
The proliferation of generative artificial intelligence (AI) as an autonomous recommendation agent fundamentally challenges traditional paradigms of marketing communication. As AI systems increasingly mediate consumer–brand relationships, understanding how artificial agents construct persuasive discourse—distinct from human communicators—becomes critical for developing effective dual-channel marketing strategies. Grounded in Source Credibility Theory and the Computers Are Social Actors (CASA) paradigm, this study investigates the semantic and structural divergence between AI-generated product recommendations and human influencer marketing messages in social commerce contexts. Employing a mixed-methods computational approach integrating term frequency analysis, TF-IDF weighting, Latent Dirichlet Allocation (LDA) topic modeling, and BERT-based contextualized semantic embedding analysis (KR-SBERT), we examined 330 Instagram influencer posts and 541 AI-generated responses concerning inner beauty enzyme products—a hybrid category combining functional health claims with hedonic beauty appeals—in the Korean social commerce market. AI-generated responses were collected through a systematically designed query protocol with empirically grounded prompts derived from actual consumer search behaviors, and analytical robustness was verified through sensitivity analyses across multiple parameter thresholds. Our findings reveal a fundamental divergence in persuasive architecture: human influencers construct experiential narratives exhibiting message characteristics typically associated with peripheral-route cues (sensory descriptions, emotional testimonials, social context), while AI recommendations employ systematic, evidence-based discourse exhibiting message characteristics typically associated with central-route argumentation (functional mechanisms, ingredient specifications, objective criteria). 
Topic modeling identified four distinct thematic clusters for each source type: human discourse centers on embodied experience and relational consumption, whereas AI discourse organizes around informational utility and rational decision support. Jensen–Shannon Divergence analysis (JSD = 0.213 bits) confirmed moderate distributional divergence, while chi-square testing (χ2 = 847.23, p < 0.001) and Cramér’s V (0.312, indicating a medium-to-large effect) demonstrated statistically significant and substantively meaningful differences. These findings extend CASA theory by demonstrating that AI recommendation agents develop a characteristic “AI communication signature” distinguishable from human persuasion patterns. We propose an integrated Dual-Agent Persuasion Proposition—synthesizing CASA, ELM, and Source Credibility perspectives—suggesting that AI and human recommenders serve complementary functions across different stages of the consumer decision journey—a proposition whose predictions regarding sequential persuasive effectiveness and consumer processing routes await experimental validation. These findings carry implications for AI content strategy optimization, platform design, and emerging regulatory frameworks for AI-generated content labeling. Full article
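The two divergence statistics quoted above are straightforward to compute with SciPy; the topic distributions and contingency table below are invented for illustration. Note that scipy's `jensenshannon` returns the JS *distance*, which must be squared to obtain the divergence, in bits when base=2.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import chi2_contingency

# Invented topic distributions for human vs. AI messages.
human = np.array([0.40, 0.30, 0.20, 0.10])
ai = np.array([0.10, 0.20, 0.30, 0.40])

# Square the JS distance to get the divergence; base=2 gives bits.
jsd_bits = jensenshannon(human, ai, base=2) ** 2
print(f"JSD = {jsd_bits:.3f} bits")

# Cramér's V from an invented term-frequency contingency table
# (rows: human / AI source, columns: four term categories).
table = np.array([[120, 80, 40, 30],
                  [40, 60, 90, 110]])
chi2, p, dof, _ = chi2_contingency(table)
v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
print(f"chi2 = {chi2:.1f} (dof = {dof}), p = {p:.2e}, Cramér's V = {v:.3f}")
```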

18 pages, 1029 KB  
Article
Research with Epistemology: Are We Really Following the Scientific Method?
by Diego Lara-Haro, Alexander Haro-Sarango, Patricia López-Fraga and Angel Esquivel-Valverde
Publications 2026, 14(1), 18; https://doi.org/10.3390/publications14010018 - 7 Mar 2026
Abstract
Epistemology underpins the scientific method by clarifying what counts as knowledge, which forms of evidence are admissible, and how procedures can legitimately support conclusions. Under accelerated publishing conditions, these assumptions are often left implicit, which can weaken the inferential coherence of peer-reviewed manuscripts. This study aimed to model reviewers’ perceived epistemological deficiencies as a multidimensional construct with an overarching global component. A 14-item instrument covering four latent domains was administered to 183 peer reviewers from a Latin American academic network. A second-order structural equation model was estimated using SEM with DWLS (lavaan). The model showed excellent fit (CFI ≈ 1.00; RMSEA = 0.000; SRMR = 0.033) and strong factor loadings, indicating a coherent global factor alongside distinct domain-specific components. Reviewers’ accumulated experience was positively associated with the global factor (β = 0.047; p = 0.013), whereas the recent volume of reviews was not statistically significant (p = 0.254). These results suggest that epistemological scrutiny may reflect more stable evaluative competencies than short-term reviewing activity. The instrument can inform editorial rubrics and reviewer training aimed at strengthening problem–theory–method coherence and reflexive methodological justification. Because the measure captures perceptions within a single regional network, further validation across disciplines and cultural contexts is recommended. Full article

26 pages, 882 KB  
Article
Factor Structure and Measurement Invariance Across Gender of the Eating Disorder Examination Questionnaire—Short Form in Italian Workers
by Nicola Magnavita and Carlo Chiorri
Eur. J. Investig. Health Psychol. Educ. 2026, 16(3), 37; https://doi.org/10.3390/ejihpe16030037 - 5 Mar 2026
Abstract
Eating disorders (EDs) are complex conditions that can significantly affect health and productivity, yet their assessment in occupational settings remains underexplored. This study aimed to evaluate the psychometric properties of the Italian version of the Eating Disorder Examination Questionnaire—Short Form (EDE-QS) among 1912 workers undergoing health surveillance. Using an Item Response Theory framework, we tested dimensionality, reliability, and measurement invariance across gender, applying a graded response model to assess item discrimination and threshold parameters. Results supported an approximate unidimensional structure with excellent internal consistency (ω ≈ 0.95) and strong indices of factor score determinacy and construct replicability. Measurement invariance analyses indicated configural and metric invariance but not full scalar invariance, due to differential item functioning in a subset of items. Latent mean differences were small, with women scoring slightly higher than men, and associations with psychological, occupational, and health-related variables did not differ by gender. These findings indicate that the Italian EDE-QS shows promising structural validity as a brief measure of ED symptomatology in occupational samples in workplace contexts. However, gender-related item bias warrants cautious interpretation of specific behaviors, suggesting the need for tailored assessments to enhance diagnostic accuracy and inform preventive interventions. Full article

24 pages, 478 KB  
Article
Sustainable Consumer Behavior in the Phygital Environment: Determinants of Sustainable Decision-Making at the Interface of Physical and Digital Worlds
by Łukasz Wróblewski and Grzegorz Maciejewski
Sustainability 2026, 18(5), 2521; https://doi.org/10.3390/su18052521 - 4 Mar 2026
Abstract
The growing integration of digital technologies with physical consumption spaces has led to the emergence of phygital environments, fundamentally transforming consumer decision-making processes. At the same time, sustainability has become an increasingly important normative and strategic context shaping contemporary consumption. While phygital solutions are often associated with sustainability-oriented claims, empirical evidence explaining how consumer behavior in phygital environments relates to sustainability remains limited. This study examines consumer behavior in phygital purchasing contexts through the prism of sustainability, focusing on the decision-making mechanisms that may support sustainability-oriented choices rather than treating phygital behavior as sustainable consumption per se. Using a two-stage analytical approach, the study first identifies key purchasing dimensions characterizing consumer behavior in phygital environments and then empirically tests the direction and strength of their relationships within a theoretically grounded structural model. Based on survey data collected from 2160 consumers, Exploratory Factor Analysis (EFA) was employed to identify latent purchasing dimensions, followed by Confirmatory Factor Analysis (CFA) and covariance-based structural equation modeling (CB-SEM) to validate the measurement model and examine hypothesized relationships. The results reveal four interrelated purchasing dimensions—purchase pragmatism, emotional commitment to the purchase, purchase comfort, and purchase pleasure—that shape consumers’ engagement in phygital purchasing processes. The findings suggest that phygital environments may foster sustainability-oriented decision-making by enhancing information access, decision efficiency, emotional engagement, and experiential value. 
However, the study does not directly measure environmental or sustainability outcomes; instead, it clarifies how established dimensions of consumer decision-making operate within phygital environments when analyzed from a sustainability-oriented perspective. The study offers theoretical implications for research on phygital consumer behavior and sustainability-oriented marketing, as well as managerial insights for designing phygital customer experiences that may support more informed and responsible consumption choices. Full article

22 pages, 2690 KB  
Article
Assessing the Impacts of Green Logistics on Sustainable Business Performance: An Application of a Hybrid SEM-GM(1,1) Approach
by Khanh Han Nguyen and Tin Van Vo
Logistics 2026, 10(3), 52; https://doi.org/10.3390/logistics10030052 - 24 Feb 2026
Abstract
Background: Amid global sustainability imperatives, the logistics sector serves as a key economic enabler while remaining a major contributor to greenhouse gas emissions. This study investigates the causal relationships between green logistics practices and sustainable business performance in Vietnamese small- and medium-sized enterprises, mediated by competitiveness, and forecasts future trends to inform transitions aligned with net-zero goals. Methods: A mixed-methods design integrates structural equation modeling with the gray model. Primary data were collected via Likert-scale questionnaires administered to 350 managers to measure latent variables. Secondary financial metrics (revenue, costs, assets, profits) from 15 firms spanning 2021–2024 enabled forecasting. Results: SEM, employing bootstrapping for path estimation, revealed positive direct effects, with the strongest effects for green transportation and weaker effects for technology, packaging, and warehousing. Mediation via competitiveness yielded mixed indirect effects: positive for warehousing and transportation, but negative for technology. GM(1,1) projected moderate performance growth under conditions of data uncertainty. Conclusions: The hybrid framework advances the resource-based view in emerging market contexts, recommending prioritization of transportation and technology initiatives alongside policy incentives to align with sustainable development goals and enhance resilience in Vietnam’s logistics sector. Full article
(This article belongs to the Section Sustainable Supply Chains and Logistics)
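The gray model GM(1,1) used above for forecasting under data uncertainty fits a first-order differential equation to the accumulated (AGO) sequence of a short series. A minimal NumPy sketch, illustrative only — the series values are hypothetical and this is not the authors' implementation:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) gray forecast: fit on a short series x0, predict `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # developing coefficient a, gray input b
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # response of the whitened ODE
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[-steps:]
```

On an exactly geometric series the least-squares fit is exact, so the one-step forecast closely tracks the underlying growth rate; real financial series deviate, which is why GM(1,1) is typically reserved for small, uncertain samples like the 2021–2024 panel described above.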

17 pages, 321 KB  
Article
Algorithmic Profiling of Operational Risk: A Data-Driven Predictive Model for Micro-Enterprise Solvency Assessment
by Jazmín Pérez-Salazar, Nicolás Márquez and Cristian Vidal-Silva
Computers 2026, 15(2), 135; https://doi.org/10.3390/computers15020135 - 22 Feb 2026
Abstract
The persistent financial exclusion of micro-enterprises is fundamentally driven by information asymmetry, as traditional credit scoring models rely heavily on audited financial statements that small entities rarely possess. To address this “thin-file” challenge, this study proposes a shift from asset-based valuation to behavioral algorithmic profiling, hypothesizing that high-frequency operational risk patterns can serve as more informative proxies for solvency than static liquidity ratios. Using an Extreme Gradient Boosting (XGBoost) architecture on a synthetic dataset of 5000 micro-enterprise transaction logs, we develop a predictive framework that extracts latent features such as supply chain latency, inventory turnover consistency, and digital footprint intensity. The proposed model achieves an Area Under the Curve (AUC) of 0.94, outperforming traditional linear baselines and achieving performance levels above those commonly reported in micro-enterprise solvency prediction studies. The results indicate that operational stability emerges as a strong indicator of repayment capacity within the evaluated context, outperforming static liquidity-based measures. These findings suggest that computational intelligence approaches grounded in high-frequency operational data may contribute to mitigating information asymmetries in micro-enterprise credit assessment, particularly in environments characterized by limited financial disclosure, although further empirical validation is required prior to large-scale deployment. Full article
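The behavioral-profiling pipeline above — score synthetic operational features with a gradient-boosted classifier, evaluate by AUC — can be sketched as follows. This uses scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost, and every feature name, coefficient, and distribution is an illustrative assumption, not the study's data or model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical behavioral features standing in for the paper's latent signals.
supply_latency = rng.gamma(2.0, 2.0, n)    # supply-chain latency (days)
turnover_var   = rng.exponential(1.0, n)   # inventory-turnover inconsistency
digital_steps  = rng.poisson(5.0, n)       # digital-footprint intensity
X = np.column_stack([supply_latency, turnover_var, digital_steps])

# Illustrative rule: solvency degrades with latency and turnover variance.
logits = 1.5 - 0.4 * supply_latency - 0.6 * turnover_var + 0.1 * digital_steps
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # discrimination on held-out data
```

On this toy data the boosted model ranks defaulters well above chance; the 0.94 AUC reported above comes from the study's own synthetic transaction logs, not from this sketch.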

21 pages, 495 KB  
Article
A Mixture-of-Experts Model for Improved Generalization in Session-Aware Recommendation
by Sungshin Kwak, Jaedong Lee and Sohyun Park
Electronics 2026, 15(4), 825; https://doi.org/10.3390/electronics15040825 - 14 Feb 2026
Abstract
Recently, recommendation systems have actively integrated Transformers to capture real-time context. However, these systems often suffer from generalization imbalance, where predictions are biased toward popular (head) items due to the sparsity and volatility inherent in session-based data. To address this challenge, this paper proposes MoE-SLMRec, a Mixture-of-Experts (MoE)-based recommendation model that selects expert networks based on session-level contextual information. The proposed model extracts a session latent representation, h, through a session-aware controller and forms balanced predictive characteristics across the entire data distribution via dynamic routing. Experimental results demonstrate that MoE-SLMRec significantly outperforms the baseline SLMRec, improving accuracy by 1.51 percentage points (from 18.76% to 20.27%). Furthermore, the model achieved state-of-the-art performance in Recall@20 (0.8358) and MRR@20 (0.3455), validating simultaneous improvements in both retrieval capability and ranking quality. Notably, the model effectively stabilized the performance for head items while coordinating the generalization trade-off between head and tail segments. By ensuring a favorable capacity–cost trade-off while maintaining robust performance, this study presents a promising alternative under session-based recommendation settings, facilitating scalable deployment in real-time recommendation services. Full article
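The dynamic routing described above — a session latent representation h producing softmax gate weights over expert networks whose outputs are mixed into item scores — can be sketched in NumPy. The architecture of MoE-SLMRec is not given here, so all dimensions and parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, n_items = 16, 4, 100

# Hypothetical parameters: a gating network and per-expert item-scoring heads.
W_gate = rng.normal(0, 0.1, (d, n_experts))
W_experts = rng.normal(0, 0.1, (n_experts, d, n_items))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_scores(h):
    """Route a session representation h through experts via softmax gating."""
    gate = softmax(h @ W_gate)                         # (n_experts,) mixture weights
    expert_out = np.einsum('edi,d->ei', W_experts, h)  # each expert scores all items
    return gate @ expert_out                           # gate-weighted item scores

h = rng.normal(0, 1, d)            # session latent from the session-aware controller
scores = moe_scores(h)
top20 = np.argsort(-scores)[:20]   # Recall@20-style candidate list
```

Because the gate weights sum to 1 per session, each session draws on a different blend of experts, which is the mechanism the abstract credits for balancing head and tail item predictions.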

23 pages, 1951 KB  
Article
LFTD: Transformer-Enhanced Diffusion Model for Realistic Financial Time-Series Data Generation
by Gyumun Choi, Donghyeon Jo, Wonho Song, Hyungjong Na and Hyungjoon Kim
AI 2026, 7(2), 60; https://doi.org/10.3390/ai7020060 - 5 Feb 2026
Abstract
Firm-level financial statement data form multivariate annual time series with strong cross-variable dependencies and temporal dynamics, yet publicly available panels are often short and incomplete, limiting the generalization of predictive models. We present Latent Financial Time-Series Diffusion (LFTD), a structure-aware augmentation framework that synthesizes realistic firm-level financial time series in a compact latent space. LFTD first learns information-preserving representations with a dual encoder: an FT-Transformer that captures within-year interactions across financial variables and a Time Series Transformer (TST) that models long-horizon evolution across years. On this latent sequence, we train a Transformer-based denoising diffusion model whose reverse process is FiLM-conditioned on the diffusion step as well as year, firm identity, and firm age, enabling controllable generation aligned with firm- and time-specific context. A TST-based Cross-Decoder then reconstructs continuous and binary financial variables for each year. Empirical evaluation on Korean listed-firm data from 2011 to 2023 shows that augmenting training sets with LFTD-generated samples consistently improves firm-value prediction for market-to-book and Tobin’s Q under both static (same-year) and dynamic (τ→τ+1) forecasting settings and outperforms conventional generative augmentation baselines and ablated variants. These results suggest that domain-conditioned latent diffusion is a practical route to reliable augmentation for firm-level financial time series. Full article
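The FiLM conditioning in LFTD's reverse process modulates each latent feature with a scale and shift computed from the conditioning signal. A minimal sketch, assuming a simple linear map from an embedded condition (diffusion step plus year, firm identity, and firm age) to per-channel gamma and beta; all shapes and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_cond = 32, 8

# Hypothetical FiLM parameters: the conditioning embedding produces a
# per-channel scale (gamma) and shift (beta) for the latent features.
W_gamma = rng.normal(0, 0.1, (d_cond, d_latent))
W_beta  = rng.normal(0, 0.1, (d_cond, d_latent))

def film(z, cond):
    """Feature-wise linear modulation: z -> gamma(cond) * z + beta(cond)."""
    gamma = 1.0 + cond @ W_gamma   # scale, centered at identity
    beta = cond @ W_beta           # shift
    return gamma * z + beta

z = rng.normal(0, 1, d_latent)   # latent feature at one denoising step
cond = rng.normal(0, 1, d_cond)  # embedded diffusion step + firm/year context
z_mod = film(z, cond)
```

Centering gamma at 1 makes a zero condition act as the identity, so the modulation only perturbs the denoiser where the firm- and time-specific context calls for it.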
