Search Results (3,768)

Search Parameters:
Keywords = coded computing

30 pages, 1323 KB  
Article
Circular Polarization-Based Quantum Encoding for Image Transmission over Error-Prone Channels
by Udara Jayasinghe and Anil Fernando
Signals 2026, 7(2), 37; https://doi.org/10.3390/signals7020037 - 8 Apr 2026
Abstract
Quantum image transmission over noisy communication channels remains a challenge due to the fragility of quantum states and their susceptibility to channel impairments. Existing quantum encoding schemes often exhibit limited noise resilience, while advanced approaches introduce computational and implementation complexity. To address these limitations, this paper proposes a circular polarization-based quantum encoding framework for image transmission over error-prone channels. In the proposed approach, source images are compressed and source-encoded using standard image coding formats, including the joint photographic experts group (JPEG) standard and the high-efficiency image file format (HEIF), and converted into classical bitstreams. The resulting bitstreams are protected using channel coding and mapped onto quantum states via circular polarization representations, where left- and right-hand circularly polarized states encode binary information. The encoded quantum states are transmitted over noisy quantum channels to model channel impairments. At the receiver, appropriate quantum decoding and channel decoding operations are applied to recover the classical bitstream, followed by source decoding to reconstruct the image. The performance of the proposed framework is evaluated using image quality metrics, including peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Simulation results demonstrate that the proposed circular polarization-based encoding scheme outperforms existing quantum image encoding techniques, achieving channel SNR gains of 4 dB over state-of-the-art Hadamard-based encoding and 3 dB over frequency-domain quantum encoding methods under severe noise conditions. These results indicate that circular polarization-based quantum encoding provides improved noise robustness and reconstruction fidelity for practical quantum image transmission systems. Full article
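The entry above evaluates reconstruction fidelity with PSNR, SSIM, and UQI. PSNR is a standard metric with a fixed definition; a minimal sketch for 8-bit images (this is the textbook formula, not the paper's own implementation):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The reported "channel SNR gain of 4 dB" is a different quantity (channel noise level at equal image quality), not this image-domain PSNR.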

25 pages, 2651 KB  
Article
High-Performance Computing Optimization of the Maxwell–Stefan Diffusion Model in OpenFOAM
by Zixin Chi, Xin Hui and Bosen Wang
Appl. Sci. 2026, 16(7), 3611; https://doi.org/10.3390/app16073611 - 7 Apr 2026
Abstract
Multicomponent diffusion modeling based on the Maxwell–Stefan formulation is widely used in high-fidelity combustion simulations due to its superior physical accuracy compared with simplified diffusion models. However, the computational complexity of the Maxwell–Stefan model, which arises from the solution of coupled multicomponent transport equations, becomes a major performance bottleneck in large-scale CFD simulations. In this work, a high-performance computing optimization strategy for the Maxwell–Stefan diffusion model is developed within the OpenFOAM framework. The proposed method improves computational efficiency through block-based computation and vectorization-oriented data organization to better exploit modern CPU architectures and SIMD instruction capabilities. The optimized implementation enhances memory locality, increases data reuse efficiency, and reduces cache miss penalties. Numerical validation is performed using two-dimensional laminar counterflow flame cases and ammonia–hydrogen turbulent combustion cases, including both premixed and non-premixed jet flames. Results demonstrate that the optimized Maxwell–Stefan implementation preserves numerical accuracy while significantly improving computational performance. Speedups of 2.5×–4.5× are achieved depending on the number of chemical species. The developed approach provides an efficient solution for detailed combustion simulations involving large chemical mechanisms. The test cases and source code are openly shared. Full article
22 pages, 4256 KB  
Systematic Review
Modeling the Resilience of Multimodal Freight Networks Under Disruptions: A Systematic Review
by Tariq Lamei, Ahmed Elsayed, Ahmed Ibrahim and Ahmed Abdel-Rahim
Infrastructures 2026, 11(4), 130; https://doi.org/10.3390/infrastructures11040130 - 6 Apr 2026
Abstract
Multimodal freight transportation networks are increasingly exposed to natural and human-made disruptions, yet prior research remains fragmented in how disruptions are represented, which modeling techniques are applied, and how results are validated, limiting comparability and actionable guidance for resilient planning. This study presents a PRISMA-guided systematic review of disruption modeling in multimodal freight networks. A total of 21 studies were identified and coded to address three research questions concerning (RQ1) which analytical and computational modeling techniques are applied; (RQ2) to what extent models represent cross-modal interdependencies, cascading failures, and recovery processes; and (RQ3) what validation, calibration, and empirical testing strategies are employed. The review shows that optimization-based approaches and hybrid frameworks dominate the literature, complemented by fewer network science and data-driven methods. Most studies model disruptions as node/link failures and/or capacity degradation using static single-event scenarios, and explicit representations of cascading effects, operational delay propagation, and time-evolving recovery trajectories remain relatively rare. While many studies rely on real network data, formal calibration and historical backtesting against observed disruption events are uncommon, and validation is primarily case study-based. These findings highlight the need for more dynamic resilience modeling, stronger uncertainty quantification, standardized reporting of performance and resilience metrics, and greater use of empirically grounded validation to improve the generalizability and decision relevance of multimodal freight resilience models. Full article
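The review above notes that most studies model disruptions as node/link failures with static single-event scenarios. A toy sketch of that modeling pattern: score a single link failure against a connectivity baseline (the network, node names, and the loss metric here are illustrative, not taken from any reviewed study):

```python
from collections import deque

def reachable(adj: dict, source: str, failed_links: set) -> set:
    """Nodes still reachable from `source` after removing failed links (undirected)."""
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if (u, v) in failed_links or (v, u) in failed_links:
                continue  # this link is disrupted
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Hypothetical multimodal chain: rail terminal -> port -> highway -> destination.
adj = {"R1": ["P1"], "P1": ["R1", "H1"], "H1": ["P1", "D1"], "D1": ["H1"]}
baseline = len(reachable(adj, "R1", set()))
disrupted = len(reachable(adj, "R1", {("P1", "H1")}))
connectivity_loss = 1 - disrupted / baseline  # simple static resilience metric
```

Cascading failures and time-evolving recovery, which the review finds under-represented, would require iterating this over successive failure/repair events rather than a single snapshot.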

16 pages, 2876 KB  
Article
Design and Implementation of a High-Resolution Real-Time Ultrasonic Endoscopy Imaging System Based on FPGA and Coded Excitation
by Haihang Gu, Fujia Sun, Shuhao Hou and Shuangyuan Wang
Electronics 2026, 15(7), 1526; https://doi.org/10.3390/electronics15071526 - 6 Apr 2026
Abstract
High-frequency endoscopic ultrasound is crucial for the early diagnosis of gastrointestinal tumors. However, achieving high axial resolution, deep tissue signal-to-noise ratio, and real-time data processing simultaneously remains a significant challenge in hardware implementation. This paper proposes a miniaturized real-time high-frequency imaging system based on the Xilinx Artix-7 FPGA. To overcome attenuation limitations of high-frequency signals, we employ a 4-bit Barker code-encoded excitation scheme coupled with a programmable ±100 V high-voltage transmission circuit. This effectively enhances echo energy without exceeding peak voltage safety thresholds. At the receiver end, the system utilizes a multi-channel analog front end integrated with mixed-signal time-gain compensation technology. Furthermore, to address transmission bottlenecks for massive echo data, we designed a Low-Voltage Differential Signaling (LVDS) interface logic based on dynamic phase calibration, ensuring stable, high-speed data transfer to the host computer via USB 3.0. Experimental results with a 20 MHz transducer demonstrate that the system achieves real-time B-mode imaging at 30 frames per second. Phantom testing revealed an axial resolution of 0.13 mm, enabling clear differentiation of 0.1 mm microstructures. Compared to conventional single-pulse excitation, coded excitation technology improved signal-to-noise ratio (SNR) by approximately 4.5 dB at a depth of 40 mm. These results validate the system’s capability for high-precision deep imaging suitable for clinical endoscopy applications, delivered in a compact, low-power form factor. Full article
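The coded-excitation scheme above relies on the defining property of Barker codes: the aperiodic autocorrelation has a tall main lobe and sidelobes of magnitude at most 1, which is what lets pulse compression recover SNR after decoding. A sketch verifying this for one standard length-4 Barker sequence (the paper's exact sequence is not given in the abstract, so `[1, 1, -1, 1]` is an assumption):

```python
def autocorrelation(code, lag):
    """Aperiodic autocorrelation of a +1/-1 sequence at a given non-negative lag."""
    n = len(code)
    return sum(code[i] * code[i + lag] for i in range(n - lag))

barker4 = [1, 1, -1, 1]  # one of the two length-4 Barker sequences
peak = autocorrelation(barker4, 0)                                 # main lobe
sidelobes = [abs(autocorrelation(barker4, k)) for k in range(1, 4)]  # all <= 1
```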

26 pages, 6403 KB  
Article
RDD-DETR Algorithm for Full-Scale Detection of Rice Diseases
by Ziyan Yang, Wensi Zhang, Chengfeng Hu, Zehao Feng and Jie Li
Agriculture 2026, 16(7), 799; https://doi.org/10.3390/agriculture16070799 - 3 Apr 2026
Abstract
To tackle the challenges of high computational expense, limited detection accuracy, and imbalanced detection performance across multi-scale targets in rice disease identification within complex natural environments, we propose the Rice Disease Deformable Detection Transformer (RDD-DETR). This model serves as a full-scale detection framework based on the Deformable Detection Transformer (Deformable DETR). The model introduces a Rectified Linear Unit (ReLU)-enhanced lightweight linear attention module, which uses differentiated position coding and ReLU kernel mapping to reduce computational complexity. A cross-layer dynamic fusion and inter-layer supervision module is designed to break the serial dependence in decoders and strengthen interlayer supervision, enabling the decoder to generate more accurate and robust target representations. Furthermore, we design an optimization mechanism for sub-scale positioning loss to substantially boost detection accuracy across all target scales. Experiments on our custom RiceLeafDisease-RSOD dataset demonstrate that RDD-DETR achieves an average precision (AP) at Intersection over Union (IoU) threshold 0.5:0.95 of 0.7363 across all categories, surpassing the baseline model by 6.09%. Notably, detection accuracy improves by 6.10% for small targets, 6.61% for medium targets, and 5.42% for large targets. Evaluated on the validation set (671 images with 2482 labeled bounding boxes), the model achieves an AP at IoU threshold 0.5 of 0.9684 while reducing computational cost by 37.41% (from 136.02 to 85.1 Giga Floating Point Operations, GFLOPs) compared to the original Deformable DETR. These results validate RDD-DETR as an effective solution for accurate and efficient real-time rice disease monitoring in complex field environments. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
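The AP figures above are averaged over Intersection-over-Union (IoU) thresholds from 0.5 to 0.95. IoU itself has a standard definition for axis-aligned boxes; a minimal sketch (not RDD-DETR-specific):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```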

18 pages, 1704 KB  
Review
Targeting Non-Coding RNAs as a Potential Therapeutic and Delivery Strategy Against Neurodegenerative Diseases
by Anastasia Bougea
Int. J. Mol. Sci. 2026, 27(7), 3260; https://doi.org/10.3390/ijms27073260 - 3 Apr 2026
Abstract
Neurodegenerative diseases (NDs), including Alzheimer’s disease, Parkinson’s disease, Huntington’s disease, and amyotrophic lateral sclerosis (ALS), represent a growing global health challenge characterized by progressive neuronal loss and a lack of definitive disease-modifying treatments. This review explores the emerging potential of targeting non-coding RNAs (ncRNAs), such as microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and exosomal RNAs, to modulate pathogenic molecular pathways and address the underlying molecular origins of neurodegeneration. We evaluate the integration of advanced computational techniques for RNA structure prediction and gene regulatory network analysis, alongside chemical engineering strategies—such as Locked Nucleic Acids (LNAs) and phosphorothioate modifications—aimed at enhancing the stability and specificity of RNA-based molecules. Furthermore, we analyze cutting-edge delivery and editing technologies, including nanotechnology-driven solutions for precise neuronal targeting and the CRISPR/Cas13 system for direct ncRNA manipulation. The findings indicate that while challenges in delivery efficiency and long-term efficacy persist, the synergy of chemical engineering and computational modeling significantly improves the therapeutic profile of ncRNAs, with exosomal pathways offering a novel route for intercellular signaling modulation and biomarker discovery. Therapeutic interventions directed at specific clinical targets, such as miR-34a and BACE1-AS, demonstrate the capacity to influence protein aggregation and neuroinflammatory cascades. Although ncRNA-based therapies are currently in nascent stages, ongoing technological advancements in RNA editing and nanotechnology offer a transformative framework that could redefine the future of ND treatment and successfully halt disease progression rather than merely managing symptoms. Full article
(This article belongs to the Section Molecular Biology)

22 pages, 1767 KB  
Article
Trends in Unintentional Drowning Mortality Among U.S. Adults Aged ≥25 Years, 1999–2024: A U.S. Surveillance Analysis
by Akef Obeidat, Mohammad Dawar Zahid, Eshal Atif, Sadia Qazi, Anushah Faheem Ilyas, Fnu Urooba, Mazhar Ali, Vishan Das, Muhammad Rai Hassan Ashraf and Muhammad Atif Mazhar
Healthcare 2026, 14(7), 920; https://doi.org/10.3390/healthcare14070920 - 1 Apr 2026
Abstract
Background/Objectives: Drowning is a leading preventable cause of unintentional injury death, yet U.S. prevention efforts have largely focused on children. Despite international declines in pediatric drowning mortality, adult trends remain poorly characterized. We examined long-term trends and disparities in unintentional drowning mortality among U.S. adults aged ≥25 years from 1999 to 2024. Methods: Using CDC WONDER Multiple Cause of Death data, drowning deaths were identified using ICD-10 codes W65–W74, V90, and V92. Age-adjusted mortality rates (AAMRs) per 100,000 were computed by direct standardization to the 2000 U.S. standard population. Joinpoint regression estimated the annual percent change (APC) and average annual percent change (AAPC). Three sensitivity analyses assessed transport-related code exclusion, pandemic-era restriction, and multiple cause-of-death coding. Results: During 1999–2024, 101,743 unintentional drowning deaths occurred among U.S. adults aged ≥25 years (76,554 males; 25,201 females), with 58.09% in natural water or outdoor settings. The overall AAMR showed a non-significant increase (AAPC: 0.55%, p = 0.054); however, joinpoint analysis identified stable rates through 2013 followed by a significant sustained increase (APC: 1.32%, 95% CI: 0.32–2.32, p = 0.012). The male-to-female rate ratio narrowed significantly from 4.00 (1999) to 3.32 (2024) (ratio of rate ratios: 0.83, p = 0.0006), driven by a sustained female increase (AAPC: 1.27%, p < 0.001). Adults aged 65–85+ showed the steepest rise (AAPC: 1.15%, p < 0.001). Non-Hispanic AI/AN adults had the highest rates (3.47–5.44 per 100,000), and non-metropolitan areas consistently exceeded metropolitan rates. Conclusions: A significant upward trajectory has persisted since 2013, with marked disparities by age, sex, race/ethnicity, and geography. Adult-focused, equity-driven prevention strategies aligned with USNWSAP implementation are needed to address this underrecognized burden. 
Full article
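The AAMRs above come from direct standardization: age-specific rates are weighted by a fixed standard population (here, the 2000 U.S. standard) so that rates are comparable across years despite shifting age structure. A minimal sketch with made-up numbers, not the study's data:

```python
def age_adjusted_rate(deaths, population, std_weights):
    """Direct standardization: weight age-specific rates by standard-population shares.

    deaths, population: per-age-group counts; std_weights: standard-population
    proportions summing to 1. Returns a rate per 100,000.
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9
    rate = sum(w * (d / p) for d, p, w in zip(deaths, population, std_weights))
    return rate * 100_000

# Illustrative two-age-group example (hypothetical counts and weights):
aamr = age_adjusted_rate(deaths=[50, 200],
                         population=[1_000_000, 500_000],
                         std_weights=[0.7, 0.3])
```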

17 pages, 10206 KB  
Article
Structural, Electronic, and Thermoelectric Insights into the Novel K2OsCl3Ag3 and Rb2OsCl3Ag3 Perovskites
by Nicholas O. Ongwen and Adel Bandar Alruqi
Inorganics 2026, 14(4), 102; https://doi.org/10.3390/inorganics14040102 - 1 Apr 2026
Abstract
The field of perovskites continues to advance each day, with new materials being discovered in order to eliminate the toxic and less efficient ones. Some of the challenges currently facing the perovskite industry include coming up with materials with higher electrical conductivity and lower thermal conductivity, as well as p-type semiconductors. In an attempt to address these challenges, this study modeled two novel perovskites from potassium hexachloroosmate (VI) (K2OsCl6) by replacing some of the chlorine atoms with those of silver, then characterized their structural, electronic (using both conventional and hybrid functionals), and thermoelectric properties using Quantum Espresso and BoltzTrap2 codes. The calculations were performed within the framework of density functional theory. The results showed that the novel materials exhibited higher density, lower thermal conductivity, lower band gaps, and positive Hall coefficient, unlike the K2OsCl6 sample. These materials can thus be used in areas such as in p–n junctions, thermoelectric devices, and optoelectronic devices. However, since this study was purely computational, the properties need to be verified through an experimental study. Full article
(This article belongs to the Special Issue Advanced Inorganic Semiconductor Materials, 4th Edition)

24 pages, 531 KB  
Article
VMkCwPIR: A Single-Round Scalable Multi-Keyword PIR Protocol Supporting Non-Primary Key Queries
by Junyu Lu, Shengnan Zhao, Yuchen Huang, Zhongtian Jia, Lili Zhang and Chuan Zhao
Information 2026, 17(4), 337; https://doi.org/10.3390/info17040337 - 1 Apr 2026
Abstract
Keyword Private Information Retrieval (Keyword PIR) enables private querying over keyword-based databases, which are typically sparse, as opposed to the dense arrays used in standard Index PIR. However, existing Keyword PIR schemes are limited to single-keyword queries and generally assume that keywords serve as unique identifiers, making them inadequate for practical scenarios where keywords are non-unique attributes and clients need to retrieve records matching multiple keywords simultaneously. To bridge this gap, we propose MkCwPIR, the first single-round, exact-match multi-keyword PIR protocol that supports conjunctive keyword queries while preserving strict keyword privacy against the server. Our construction employs Constant-weight codes and Newton–Girard identities to encode multi-keyword selection into a compact algebraic representation, representing a functional extension of CwPIR (Usenix Security ’22). While this functional expansion introduces additional computational overhead due to the processing of multiple keywords, we further introduce VMkCwPIR—an optimized variant leveraging BFV vectorized homomorphic encryption. Experimental results demonstrate that although the base MkCwPIR incurs higher latency due to its enhanced logical capabilities, the vectorized optimizations in VMkCwPIR effectively close this performance gap. Consequently, VMkCwPIR achieves a performance level comparable to the single-keyword CwPIR. Experimental results demonstrate that when processing a query with eight keywords, VMkCwPIR achieves a server-side execution time comparable to executing only four independent single-keyword queries in CwPIR, while maintaining constant communication overhead for up to 16 keywords. Full article
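The construction above encodes multi-keyword selection using constant-weight codes and the Newton–Girard identities, which recover elementary symmetric polynomials from power sums. A sketch of the identities themselves over plain numbers (how the protocol applies them inside BFV ciphertexts is beyond this snippet):

```python
def elementary_from_power_sums(p):
    """Newton-Girard: compute e_1..e_n from power sums p_1..p_n.

    p[0] is an unused placeholder so that p[i] is the i-th power sum.
    Uses k*e_k = sum_{i=1..k} (-1)^(i-1) * e_{k-i} * p_i.
    """
    n = len(p) - 1
    e = [1.0] + [0.0] * n  # e[0] = 1 by convention
    for k in range(1, n + 1):
        acc = 0.0
        for i in range(1, k + 1):
            acc += (-1) ** (i - 1) * e[k - i] * p[i]
        e[k] = acc / k
    return e[1:]

# For the value set {1, 2, 3}: p1 = 6, p2 = 14, p3 = 36 -> e = [6, 11, 6],
# i.e. the coefficients (up to sign) of (x-1)(x-2)(x-3).
```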

13 pages, 899 KB  
Review
A Conceptual Framework for Understanding Patient Expectations in Individualised Anaesthesia and Analgesia: A Narrative Review and Future Directions
by Krister Mogianos and Anna K. M. Persson
J. Pers. Med. 2026, 16(4), 191; https://doi.org/10.3390/jpm16040191 - 1 Apr 2026
Abstract
Acute postoperative pain remains a major clinical challenge, affecting both recovery and resource utilisation. Beyond nociceptive input, pain is shaped by cognitive and emotional factors, including patient expectations. This narrative review examines the role of expectations in perioperative pain modulation, framed within predictive coding and Bayesian inference models. These models conceptualise pain as a probabilistic process that integrates sensory input with prior expectations, weighted by precision. In theory, positive expectations may enhance analgesic efficacy, whereas negative expectations may amplify pain via nocebo mechanisms. Control modifies expectations and may reduce perceived pain, while uncertainty diminishes these benefits. Evidence from observational studies links preoperative pain self-efficacy and anticipated pain scores to postoperative outcomes, yet interventional trials remain scarce. In this narrative review, we propose that expectation-sensitive strategies, including structured communication and computational modelling, may inform individualised anaesthesia and analgesia. Future research should validate these frameworks in clinical trials, optimise preoperative expectation management, and explore synergistic approaches that combine pharmacology with cognitive modulation. Understanding and leveraging expectations may offer a promising conceptual direction for more individualised perioperative care, although this approach remains hypothesis-generating at present. Full article
(This article belongs to the Special Issue New Insights into Personalized Medicine for Anesthesia and Pain)
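In the simplest Gaussian case, the predictive-coding/Bayesian-inference framing in the review above reduces to precision-weighted averaging: perceived intensity sits between prior expectation and sensory input, pulled toward whichever is more precise. A toy illustration with invented numbers (not a clinical model):

```python
def precision_weighted_posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a Gaussian prior (expectation) with Gaussian sensory evidence.

    Precision = 1/variance; the posterior mean is the precision-weighted
    average, so tighter (more precise) expectations pull perception harder.
    """
    pi_prior, pi_obs = 1.0 / prior_var, 1.0 / obs_var
    post_mean = (pi_prior * prior_mean + pi_obs * obs_mean) / (pi_prior + pi_obs)
    post_var = 1.0 / (pi_prior + pi_obs)
    return post_mean, post_var

# Expecting mild pain (3/10, fairly confident) against noisy strong input (7/10):
mean, _ = precision_weighted_posterior(3.0, 1.0, 7.0, 4.0)
```

Sharpening the prior (smaller `prior_var`) moves the posterior further toward the expectation, which is the mechanism the review invokes for both placebo and nocebo effects.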

22 pages, 1704 KB  
Article
Using Coding to Improve Executive Functioning in Children with Sickle Cell Disease: A Multiple-Baseline Single-Case Study
by Barbara Arfé, Maria Elisa delle Fave, Chiara Montuori, Lucia Ronconi, Sofia Carbone and Raffaella Colombatti
J. Intell. 2026, 14(4), 55; https://doi.org/10.3390/jintelligence14040055 - 1 Apr 2026
Abstract
Executive function (EF) impairments are common in children with intellectual and developmental disabilities and have a significant impact on learning and daily life. Cognitive training programs aimed at strengthening EFs may show limited feasibility and generalization. However, recent studies suggest that ecological, curriculum-embedded problem-solving activities may be more promising. This multiple-baseline single-case study tested the feasibility and efficacy of a short computational thinking and coding intervention based on problem-solving for children with sickle cell disease, a hemoglobinopathy associated with cognitive decline and EF deficits. The trial followed the What Works Clearinghouse (WWC) Version 5 guidelines for single-case research. Three 7–8-year-old children with lower-range IQ (71–82) and EF impairments completed 11 coding sessions over 5–6 weeks using code.org, with pre/post assessments of non-verbal EF (planning, inhibition, and switching), and verbal EF skills (verbal working memory, phonological fluency and semantic fluency). Results showed 100% adherence to the intervention, significant improvement in coding (IRD range = 0.69–0.79), with positive transfer effects on nonverbal planning skills (gains > 2 z-scores) and also verbal fluency (z-score gains ranging from 0.47 to 1.04). Inter-individual variability in effects was related to the child’s individual cognitive profile. Findings suggest that problem-solving, coding-based activities can be feasible and potentially beneficial for children with significant EF impairments. Full article

16 pages, 5847 KB  
Article
Reshaping Optical Speckles and Random Light Beam
by Yi Cui and Jun Xiong
Photonics 2026, 13(4), 342; https://doi.org/10.3390/photonics13040342 - 31 Mar 2026
Abstract
Speckle patterns generated by coherent illumination of random media are ubiquitous in optical imaging and information processing. However, most existing studies have primarily focused on isotropic or homogeneous speckle fields, while controlled manipulation of speckle patterns with customized geometric morphologies has received comparatively little attention. Here, we propose a random phase-coded array (RPA) as a general framework for generating geometrically reshaped speckle, enabling the formation of nonconventional random light fields whose ensemble-averaged intensity distributions follow prescribed geometric shapes. In this framework, the speckle geometry is determined by the unit-cell structure of the RPA, the unit-cell size governs the overall spatial extent of the speckle pattern, and the illuminating beam size sets the characteristic speckle grain size. These relationships are rigorously validated through theoretical derivations and numerical simulations. As a result, the global statistical envelope of the random light field can be intuitively and flexibly controlled without compromising the fully developed speckle characteristics. Our experimental framework offers a straightforward, scalable, and versatile approach for generating customized random light fields, with potential applications in optical information processing, secure optical communication, computational imaging, and speckle-based metrology. Full article
(This article belongs to the Special Issue Ghost Imaging and Quantum-Inspired Classical Optics)

27 pages, 1134 KB  
Article
TC-HUR: A Tri-Phase Cauchy-Assisted Hunger Games Search and Unified Runge–Kutta Optimizer for Robust DNA Data Storage
by Beyza Öztürk, Ayşenur İgit, Aylin Kaya, Zeynep Tuğsem Çamlıca, Selen Arıcı and Muhammed Faruk Şahin
Int. J. Mol. Sci. 2026, 27(7), 3134; https://doi.org/10.3390/ijms27073134 - 30 Mar 2026
Abstract
Although DNA-based data storage theoretically provides an information density of 2 bits per nucleotide, biochemical constraints transform sequence design into a high-dimensional constrained combinatorial optimization problem. The high computational cost and low encoding efficiency of conventional rule-based approaches make metaheuristic methods an effective alternative. This study proposes the TC-HUR hybrid algorithm to simultaneously optimize information density and conflicting biophysical constraints, including homopolymer (HP) length, GC content, melting temperature (Tm), and reverse-complement (RC) similarity. The method escapes local optima using Cauchy jump-enhanced Hunger Games Search (HGS), performs high-precision exploitation via Runge–Kutta (RUN) operators, and refines constraint violations at the nucleotide level through an adaptive intensive mutation mechanism. The algorithm is evaluated on a complex dataset of 1853 nucleotides under different noise regimes. TC-HUR outperforms RUN by 2.5% and HGS by 16.7% in average fitness. While maintaining homopolymer length near the ideal threshold, it reduces reverse-complement similarity to 19.10%, ensuring high sequence diversity. Under high-noise conditions, TC-HUR achieves a normalized edit distance of 0.1290, reducing insertion–deletion (indel) errors by approximately 14%. The results demonstrate that the proposed model effectively generates biophysically synthesizable and noise-resilient DNA codes. Full article

20 pages, 3507 KB  
Article
Optimizing Data Preprocessing and Hyperparameter Tuning for Soil Organic Carbon Content Prediction Using Large Language Models: A Case Study of the Black Soil and Windblown Sandy Soil Regions in Northeast China
by Hao Cui, Xianmin Chang and Shuang Gang
Appl. Sci. 2026, 16(7), 3349; https://doi.org/10.3390/app16073349 - 30 Mar 2026
Abstract
Soil organic carbon (SOC) content prediction currently faces two issues: data preprocessing relies on fixed rules formulated from expert experience, which lack uniform standards and take insufficient account of regional soil heterogeneity, while hyperparameter tuning incurs high computational costs and excessively long runtimes. To address both, this study proposes an intelligent modeling workflow driven by Large Language Models (LLMs), focusing on two key aspects of SOC Random Forest modeling: data preprocessing and hyperparameter tuning. Results: the LLM-defined rules achieved sample retention rates of 55.33% and 61.90% in the two regions, differentiating the regions more clearly than traditional hard-coded rules (56.2% and 59.3%), and the mean SOC content deviations (30.27% and 20.05%) were both lower than those of the hard-coded rules. The mean SOC content values in both regions also closely matched those of the other methods, indicating that the LLM effectively captured regional soil differences. With only a single evaluation during hyperparameter optimization, the adaptive model achieved test-set R2 values of 0.394 and 0.694 in the black soil and aeolian sandy soil regions, respectively, with root mean square error (RMSE) values of 8.76 g/kg and 6.07 g/kg; its performance is comparable to Grid Search and Random Search while improving computational efficiency by over 95%. Performance comparisons with eXtreme Gradient Boosting (XGBoost) and Partial Least Squares Regression (PLSR) confirm these results (R2 = 0.394 and RMSE = 8.76 g/kg in the black soil region; R2 = 0.694 and RMSE = 6.07 g/kg in the windblown sandy soil region), demonstrating practical application value. Full article
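The R2 and RMSE figures quoted in this abstract follow the standard definitions; a minimal self-contained sketch (function names `rmse` and `r2` are ours, not from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

With SOC measured in g/kg, RMSE is also in g/kg (hence the 8.76 g/kg and 6.07 g/kg values), while R2 is dimensionless.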
(This article belongs to the Section Environmental Sciences)

25 pages, 908 KB  
Article
Perception Norm for Mispronunciation Detection
by Mewlude Nijat, Yang Wei and Askar Hamdulla
Appl. Sci. 2026, 16(7), 3311; https://doi.org/10.3390/app16073311 - 29 Mar 2026
Abstract
Mispronunciation detection (MD) is a key component in computer-assisted pronunciation training (CAPT) and speaking tests. Most MD systems adopt a production view, measuring phone-level deviation from a canonical pronunciation (Native Norm) or the expected pronunciation of a target population (Target Norm). Yet, pronunciation assessment is fundamentally perceptual: listeners map speech to linguistic categories under uncertainty and with individual psychological priors, so judgments are inherently subjective and lack a single gold standard. Labels are therefore often aggregated (e.g., voting), but aggregation rules are themselves subjective, require many annotators, and entangle individual perception with social consensus, complicating model training. In this paper, we propose a “Perception Norm”, which models MD as the decision process of individual annotators and trains models to simulate single listeners rather than an annotator pool. To support this study, we introduce UY/CH-CHILD-MA, a corpus of Uyghur-accented child Mandarin words and phrases with four independent phone-level annotations. Our experiments reveal substantial inter-annotator variation and show that a Transformer with pre-training and fine-tuning can learn annotator-specific patterns with high accuracy. Finally, we present a committee ensemble that combines annotator models using application-matched aggregation rules to produce task-specific assessments. The data and source code will be made publicly available upon publication. Full article
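The abstract does not specify which "application-matched aggregation rules" the committee ensemble uses. As an illustrative assumption only (both rules and the function name `aggregate` are ours), two common choices for combining per-annotator phone-level flags might be:

```python
def aggregate(flags, rule="majority"):
    """Combine per-annotator mispronunciation flags (booleans) for one phone.

    'majority': flag only if more than half of annotators flag it
                (lenient, e.g. for scoring in speaking tests).
    'any':      flag if any annotator flags it
                (strict, e.g. for corrective feedback in CAPT).
    """
    if rule == "majority":
        return sum(flags) * 2 > len(flags)
    if rule == "any":
        return any(flags)
    raise ValueError(f"unknown rule: {rule}")
```

In the paper's framing, each flag would come from one annotator-specific model rather than a human, so the aggregation rule can be chosen per application instead of being baked into the training labels.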
(This article belongs to the Section Computing and Artificial Intelligence)
