Search Results (1,342)

Search Parameters:
Keywords = concatenation

18 pages, 4337 KB  
Article
A Transformer-Based Multimodal Fusion Network for Emotion Recognition Using EEG and Facial Expressions in Hearing-Impaired Subjects
by Shuni Feng, Qingzhou Wu, Kailin Zhang and Yu Song
Sensors 2025, 25(20), 6278; https://doi.org/10.3390/s25206278 - 10 Oct 2025
Viewed by 129
Abstract
Hearing-impaired people face challenges in expressing and perceiving emotions, and traditional single-modal emotion recognition methods demonstrate limited effectiveness in complex environments. To enhance recognition performance, this paper proposes a multimodal multi-head attention fusion neural network (MMHA-FNN). This method uses differential entropy (DE) and bilinear interpolation features as inputs, learning the spatial–temporal characteristics of brain regions through an MBConv-based module. By incorporating a Transformer-based multi-head self-attention mechanism, we dynamically model the dependencies between EEG and facial expression features, enabling adaptive weighting and deep interaction of cross-modal characteristics. Experiments were conducted as a four-class task on the MED-HI dataset (15 subjects, 300 trials); the classes were happiness, sadness, fear, and calmness, where ‘calmness’ corresponds to a low-arousal neutral state as defined in the MED-HI protocol. Results indicate that the proposed method achieved an average accuracy of 81.14%, significantly outperforming feature concatenation (71.02%) and decision-layer fusion (69.45%). This study demonstrates the complementary nature of EEG and facial expressions in emotion recognition among hearing-impaired individuals and validates the effectiveness of attention-based feature-layer interaction fusion in enhancing emotion recognition performance. Full article
(This article belongs to the Section Biomedical Sensors)
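The contrast this abstract draws between attention-based fusion and plain concatenation can be sketched in a few lines of plain Python. This is an editor's toy illustration, not the authors' MMHA-FNN: the 2-D "EEG" and "facial" tokens and the single attention head are invented for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(queries, keys, values):
    """Scaled dot-product attention: each query token adaptively weights all
    value tokens, so the cross-modal pairing depends on the inputs themselves."""
    d = len(keys[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        fused.append([sum(wj * v[i] for wj, v in zip(w, values))
                      for i in range(len(values[0]))])
    return fused

# Toy 2-D feature tokens standing in for EEG and facial-expression features.
eeg = [[1.0, 0.0], [0.0, 1.0]]
face = [[1.0, 0.0], [0.0, 1.0]]

fused = attention_fuse(eeg, face, face)      # input-dependent weighting
concat = [e + f for e, f in zip(eeg, face)]  # plain concatenation: fixed pairing

print(fused[0], concat[0])
```

Each fused EEG token is pulled toward the facial token it matches, whereas concatenation simply doubles the dimensionality with no interaction between modalities.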

23 pages, 4999 KB  
Article
Targeted Inhibition of Colorectal Carcinoma Using a Designed CEA-Binding Protein to Deliver p53 Protein and TCF/LEF Transcription Factor Decoy DNA
by Wen Wang, Xuan Sun and Geng Wu
Int. J. Mol. Sci. 2025, 26(20), 9846; https://doi.org/10.3390/ijms26209846 - 10 Oct 2025
Viewed by 107
Abstract
Colorectal carcinoma (CRC) is characterized by mutations in p53 and the Wnt signaling pathway, and immunotherapy has shown limited efficacy in microsatellite-stable CRC. Here, CEABP1, a binding protein for the CRC biomarker carcinoembryonic antigen (CEA), was designed de novo through the AI-based computational generation methods RFDiffusion/ProteinMPNN and stringent in silico selection, for targeted delivery of purified p53 protein and transcription factor T-cell factor (TCF)/lymphoid enhancer-binding factor (LEF) transcription factor decoy (TFD) DNA into CRC cells. The cell-penetrating peptide (CPP) p28 was employed to deliver the p28-p53-CEABP1 protein, which significantly enhanced p53’s inhibition of CRC cell proliferation and xenograft tumor growth. Codelivery of the p14ARF protein together with p53 prolonged the effective antitumor duration of p53. In addition, the DNA binding domain of Max was fused with CPP and CEABP1 to deliver TCF/LEF TFD DNA, comprising concatenated consensus binding motifs for TCF/LEF and Max, into CRC cells to inhibit Wnt target gene transcription, leading to marked suppression of CRC cell proliferation and xenograft tumor growth. These findings paved the way for the development of precision anticancer therapeutics using designed binding proteins of tumor biomarkers for targeted delivery of tumor suppressor proteins and TFD DNA. Full article
(This article belongs to the Special Issue Protein–Protein Interactions in Human Cancer)

20 pages, 3126 KB  
Article
Few-Shot Image Classification Algorithm Based on Global–Local Feature Fusion
by Lei Zhang, Xinyu Yang, Xiyuan Cheng, Wenbin Cheng and Yiting Lin
AI 2025, 6(10), 265; https://doi.org/10.3390/ai6100265 - 9 Oct 2025
Viewed by 277
Abstract
Few-shot image classification seeks to recognize novel categories from only a handful of labeled examples, but conventional metric-based methods that rely mainly on global image features often produce unstable prototypes under extreme data scarcity, while local-descriptor approaches can lose context and suffer from inter-class local-pattern overlap. To address these limitations, we propose a Global–Local Feature Fusion network that combines a frozen, pretrained global feature branch with a self-attention based multi-local feature fusion branch. Multiple random crops are encoded by a shared backbone (ResNet-12), projected to Query/Key/Value embeddings, and fused via scaled dot-product self-attention to suppress background noise and highlight discriminative local cues. The fused local representation is concatenated with the global feature to form robust class prototypes used in a prototypical-network style classifier. On four benchmarks, our method achieves strong improvements: Mini-ImageNet 70.31% ± 0.20 (1-shot)/85.91% ± 0.13 (5-shot), Tiered-ImageNet 73.37% ± 0.22/87.62% ± 0.14, FC-100 47.01% ± 0.20/64.13% ± 0.19, and CUB-200-2011 82.80% ± 0.18/93.19% ± 0.09, demonstrating consistent gains over competitive baselines. Ablation studies show that (1) naive local averaging improves over global-only baselines, (2) self-attention fusion yields a large additional gain (e.g., +4.50% in 1-shot on Mini-ImageNet), and (3) concatenating global and fused local features gives the best overall performance. These results indicate that explicitly modeling inter-patch relations and fusing multi-granularity cues produces markedly more discriminative prototypes in few-shot regimes. Full article
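The prototype construction described above (a fused local feature concatenated with a global feature, then nearest-prototype classification) can be sketched as follows. The 4-D embeddings and class names are invented toy values, not the paper's ResNet-12 features.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype(features):
    """Class prototype = mean of support embeddings (prototypical-network style)."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

# Toy embeddings: global feature concatenated with a fused local feature per image.
support = {
    "cat": [[1.0, 0.0, 0.9, 0.1], [0.9, 0.1, 1.0, 0.0]],
    "dog": [[0.0, 1.0, 0.1, 0.9], [0.1, 0.9, 0.0, 1.0]],
}
protos = {c: prototype(fs) for c, fs in support.items()}

# Classify a query embedding by its nearest class prototype.
query = [0.95, 0.05, 0.9, 0.05]
pred = min(protos, key=lambda c: euclid(query, protos[c]))
print(pred)
```

The paper's contribution is in how the local half of each embedding is produced (self-attention over random crops); the classifier on top is this standard nearest-prototype rule.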

15 pages, 1557 KB  
Article
A Dual-Structured Convolutional Neural Network with an Attention Mechanism for Image Classification
by Yongzhuo Liu, Jiangmei Zhang, Haolin Liu and Yangxin Zhang
Electronics 2025, 14(19), 3943; https://doi.org/10.3390/electronics14193943 - 5 Oct 2025
Viewed by 329
Abstract
This paper presents a dual-structured convolutional neural network (CNN) for image classification, which integrates two parallel branches: CNN-A with spatial attention and CNN-B with channel attention. The spatial attention module in CNN-A dynamically emphasizes discriminative regions by aggregating channel-wise information, while the channel attention mechanism in CNN-B adaptively recalibrates feature channel importance. The extracted features from both branches are fused through concatenation, enhancing the model’s representational capacity by capturing complementary spatial and channel-wise dependencies. Extensive experiments on a 12-class image dataset demonstrate the superiority of the proposed model over state-of-the-art methods, achieving 98.06% accuracy, 96.00% precision, and 98.01% F1-score. Despite a marginally longer training time, the model exhibits robust convergence and generalization, as evidenced by stable loss curves and high per-class recognition rates (>90%). The results validate the efficacy of dual attention mechanisms in improving feature discrimination for complex image classification tasks. Full article
(This article belongs to the Special Issue Advances in Object Tracking and Computer Vision)
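The channel-attention recalibration in CNN-B resembles squeeze-and-excitation gating: pool each channel to a scalar, squash it through a sigmoid, and rescale the channel. A minimal sketch, assuming an identity "excitation" in place of the learned layers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """SE-style gate: squeeze each channel to its spatial mean, gate it with a
    sigmoid, and rescale the whole channel by that gate."""
    gates = []
    for ch in feature_maps:
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gates.append(sigmoid(mean))  # identity "excitation" for illustration
    recal = [[[v * g for v in row] for row in ch]
             for ch, g in zip(feature_maps, gates)]
    return recal, gates

# Two 2x2 channels: one strongly activated, one weak.
fmaps = [
    [[2.0, 2.0], [2.0, 2.0]],
    [[-2.0, -2.0], [-2.0, -2.0]],
]
recal, gates = channel_attention(fmaps)
print(gates)  # strong channel gated near 1, weak channel near 0
```

In the real module the gate is produced by small fully connected layers rather than the raw mean, but the recalibration step is the same elementwise rescaling.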

22 pages, 5361 KB  
Article
LMVMamba: A Hybrid U-Shape Mamba for Remote Sensing Segmentation with Adaptation Fine-Tuning
by Fan Li, Xiao Wang, Haochen Wang, Hamed Karimian, Juan Shi and Guozhen Zha
Remote Sens. 2025, 17(19), 3367; https://doi.org/10.3390/rs17193367 - 5 Oct 2025
Viewed by 474
Abstract
High-precision semantic segmentation of remote sensing imagery is crucial in geospatial analysis, playing an important role in fields such as urban governance, environmental monitoring, and natural resource management. However, when confronted with complex objects (such as winding roads and dispersed buildings), existing semantic segmentation methods still suffer from inadequate target recognition capabilities and multi-scale representation issues. This paper proposes a neural network model, LMVMamba (LoRA Multi-scale Vision Mamba), for semantic segmentation of remote sensing images. The model integrates the advantages of convolutional neural networks (CNNs), Transformers, and state-space models (Mamba) with a multi-scale feature fusion strategy, simultaneously capturing global contextual information and fine-grained local features. Specifically, in the encoder stage, the ResT Transformer serves as the backbone network, employing a LoRA fine-tuning strategy that effectively enhances model accuracy by training only the introduced low-rank matrix pairs. The extracted features are then passed to the decoder, where a U-shaped Mamba decoder is designed. In this stage, a Multi-Scale Post-processing Block (MPB), consisting of depthwise separable convolutions and residual concatenation, is introduced; this block effectively extracts multi-scale features and enhances local detail extraction after the VSS block. Additionally, a Local Enhancement and Fusion Attention Module (LAS) is added at the end of each decoder block. LAS integrates the SimAM attention mechanism, further enhancing the model’s multi-scale feature fusion and local detail segmentation capabilities. Extensive comparative experiments show that LMVMamba achieves superior performance on the OpenEarthMap (mIoU 52.3%, OA 69.8%, mF1 68.0%) and LoveDA (mIoU 67.9%, OA 80.3%, mF1 80.5%) datasets, and ablation experiments validate the effectiveness of each module. These results indicate that the model is highly suitable for high-precision land-cover classification tasks in remote sensing imagery and provides an effective solution for precise semantic segmentation of high-resolution remote sensing imagery. Full article
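The parameter savings that motivate depthwise separable convolutions (as used in the MPB) can be checked with a quick count: a standard k×k convolution versus a depthwise k×k filter followed by a 1×1 pointwise projection. The 64-channel sizes below are arbitrary example values.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus a 1x1 pointwise
    projection to c_out channels (bias ignored)."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 64, 3)                 # 64 * 64 * 9  = 36864
sep = depthwise_separable_params(64, 64, 3)  # 576 + 4096   = 4672
print(std, sep, round(std / sep, 1))
```

For a 3×3 kernel at 64 channels the separable form needs roughly an eighth of the parameters, which is where the "lightweight" claims of such decoders come from.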

22 pages, 24236 KB  
Article
BMDNet-YOLO: A Lightweight and Robust Model for High-Precision Real-Time Recognition of Blueberry Maturity
by Huihui Sun and Rui-Feng Wang
Horticulturae 2025, 11(10), 1202; https://doi.org/10.3390/horticulturae11101202 - 5 Oct 2025
Viewed by 298
Abstract
Accurate real-time detection of blueberry maturity is vital for automated harvesting. However, existing methods often fail under occlusion, variable lighting, and dense fruit distribution, leading to reduced accuracy and efficiency. To address these challenges, we propose BMDNet-YOLO, a lightweight model based on an enhanced YOLOv8n that integrates improved feature extraction, attention-based fusion, and progressive transfer learning to enhance robustness and adaptability. The backbone incorporates a FasterPW module with parallel convolution and point-wise weighting to improve feature extraction efficiency and robustness. A coordinate attention (CA) mechanism in the neck enhances spatial-channel feature selection, while adaptive weighted concatenation ensures efficient multi-scale fusion. The detection head employs a heterogeneous lightweight structure combining group and depthwise separable convolutions to minimize parameter redundancy and boost inference speed. Additionally, a three-stage transfer learning framework (source-domain pretraining, cross-domain adaptation, and target-domain fine-tuning) improves generalization. Experiments on 8250 field-collected and augmented images show BMDNet-YOLO achieves 95.6% mAP@0.5, 98.27% precision, and 94.36% recall, surpassing existing baselines. This work offers a robust solution for deploying automated blueberry harvesting systems. Full article
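The adaptive weighted concatenation mentioned for the neck can be sketched as softmax-normalised branch weights applied before concatenation. The logits and features below are toy values, not learned parameters from BMDNet-YOLO.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def weighted_concat(branches, logits):
    """Scale each branch by a softmax-normalised learned weight, then
    concatenate the scaled branches into one feature vector."""
    w = softmax(logits)
    out = []
    for wi, feat in zip(w, branches):
        out.extend(v * wi for v in feat)
    return out, w

# Two multi-scale branches; the logits favour the first branch.
fused, w = weighted_concat([[1.0, 1.0], [1.0, 1.0]], [2.0, 0.0])
print(w, fused)
```

Unlike plain concatenation, the contribution of each scale is learned, so an uninformative branch can be suppressed without changing the output dimensionality.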

16 pages, 13271 KB  
Article
Smartphone-Based Estimation of Cotton Leaf Nitrogen: A Learning Approach with Multi-Color Space Fusion
by Shun Chen, Shizhe Qin, Yu Wang, Lulu Ma and Xin Lv
Agronomy 2025, 15(10), 2330; https://doi.org/10.3390/agronomy15102330 - 2 Oct 2025
Viewed by 283
Abstract
To address the limitations of traditional cotton leaf nitrogen content estimation methods, which include low efficiency, high cost, poor portability, and challenges in vegetation index acquisition owing to environmental interference, this study focused on emerging non-destructive nutrient estimation technologies. This study proposed an innovative method that integrates multi-color space fusion with deep and machine learning to estimate cotton leaf nitrogen content using smartphone-captured digital images. A dataset comprising smartphone-acquired cotton leaf images was processed through threshold segmentation and preprocessing, then converted into RGB, HSV, and Lab color spaces. The models were developed using deep-learning architectures including AlexNet, VGGNet-11, and ResNet-50. The conclusions of this study are as follows: (1) The optimal single-color-space nitrogen estimation model achieved a validation set R2 of 0.776. (2) Feature-level fusion by concatenation of multidimensional feature vectors extracted from three color spaces using the optimal model, combined with an attention learning mechanism, improved the validation R2 to 0.827. (3) Decision-level fusion by concatenating nitrogen estimation values from optimal models of different color spaces into a multi-source decision dataset, followed by machine learning regression modeling, increased the final validation R2 to 0.830. The dual fusion method effectively enabled rapid and accurate nitrogen estimation in cotton crops using smartphone images, achieving an accuracy 5–7% higher than that of single-color-space models. The proposed method provides scientific support for efficient cotton production and promotes sustainable development in the cotton industry. Full article
(This article belongs to the Special Issue Crop Nutrition Diagnosis and Efficient Production)
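Feature-level fusion across color spaces, as in conclusion (2) above, amounts to concatenating per-space descriptors. A minimal per-pixel sketch using only the standard library (`colorsys` provides RGB→HSV; Lab, which the study also uses, has no stdlib conversion and is omitted here):

```python
import colorsys

def multi_space_features(rgb):
    """Represent one pixel in RGB and HSV and concatenate the two vectors
    (feature-level fusion by concatenation)."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return [r, g, b, h, s, v]

# A greenish "leaf" pixel, channels normalized to [0, 1].
feat = multi_space_features((0.2, 0.6, 0.1))
print(feat)
```

In the paper the fused inputs are whole images fed to CNNs rather than single pixels, but the concatenation step works the same way on the extracted feature vectors.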

12 pages, 1627 KB  
Article
RC-LDPC-Polar Codes for Information Reconciliation in Continuous-Variable Quantum Key Distribution
by Fei Hua, Kun Chen, Wei Deng, Jing Cheng, Banghong Guo and Huanwen Xie
Entropy 2025, 27(10), 1025; https://doi.org/10.3390/e27101025 - 29 Sep 2025
Viewed by 319
Abstract
Continuous-variable quantum key distribution faces significant challenges, including quantum channel instability, particularly fluctuations in the signal-to-noise ratio (SNR) and extremely low SNR scenarios. Furthermore, non-ideal polar codes, characterized by insufficient polarization in finite-length regimes, can lead to some sub-channels being neither completely noise-free nor fully noise-dominated. This phenomenon limits the error correction capability when such codes are applied to information reconciliation. To address these challenges, we propose a novel RC-LDPC-Polar code for the CV-QKD reconciliation algorithm. We combine the error resilience of LDPC codes with the efficiency advantages of polar coding. This scheme supports adaptive rate adjustment across varying SNR conditions. Our simulation experiments demonstrate that the RC-LDPC-Polar concatenated coding scheme achieves a lower error rate under varying SNR conditions. Meanwhile, the proposed scheme achieves a higher final key rate and a longer transmission distance. Full article
(This article belongs to the Section Quantum Information)
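Two properties of such a scheme can be illustrated briefly: the overall rate of a serial concatenation is the product of the stage rates, and rate adaptation picks the highest code rate the current SNR still supports. The threshold table below is hypothetical, not taken from the paper.

```python
def concatenated_code_rate(outer_rate, inner_rate):
    """Overall rate of a serially concatenated code: product of stage rates."""
    return outer_rate * inner_rate

def select_rate(snr_db, thresholds):
    """Rate-adaptive reconciliation: choose the highest code rate whose SNR
    threshold the channel satisfies; fall back to the lowest rate otherwise."""
    feasible = [r for r, t in thresholds.items() if snr_db >= t]
    return max(feasible) if feasible else min(thresholds)

# e.g. a rate-1/2 outer LDPC code with a rate-1/2 inner polar code.
rate = concatenated_code_rate(0.5, 0.5)

# Hypothetical rate -> minimum-SNR(dB) table for the concatenated scheme.
table = {0.25: -2.0, 0.5: 1.0, 0.75: 4.0}
print(rate, select_rate(2.5, table), select_rate(-3.0, table))
```

This is the sense in which the scheme "supports adaptive rate adjustment across varying SNR conditions": as the channel fluctuates, the reconciliation layer switches among pre-designed rates rather than re-deriving a code.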

27 pages, 5701 KB  
Article
An Enhanced Method to Estimate State of Health of Li-Ion Batteries Using Feature Accretion Method (FAM)
by Leila Amani, Amir Sheikhahmadi and Yavar Vafaee
Energies 2025, 18(19), 5171; https://doi.org/10.3390/en18195171 - 29 Sep 2025
Viewed by 344
Abstract
Accurate estimation of State of Health (SOH) is pivotal for managing the lifecycle of lithium-ion batteries (LIBs) and ensuring safe and reliable operation in electric vehicles (EVs) and energy storage systems. While feature fusion methods show promise for battery health assessment, they often suffer from suboptimal integration strategies and limited utilization of complementary health indicators (HIs). In this study, we propose a Feature Accretion Method (FAM) that systematically integrates four carefully selected health indicators (voltage profiles, incremental capacity (IC), and polynomial coefficients derived from IC–voltage and capacity–voltage curves) via a progressive three-phase pipeline. Unlike single-indicator baselines or naïve feature concatenation methods, FAM couples progressive accretion with tuned ensemble learners to maximize predictive fidelity. Comprehensive validation using Gaussian Process Regression (GPR) and Random Forest (RF) on the CALCE and Oxford datasets yields state-of-the-art accuracy: on CALCE, RMSE = 0.09%, MAE = 0.07%, and R2 = 0.9999; on Oxford, RMSE = 0.33%, MAE = 0.24%, and R2 = 0.9962. These results represent significant improvements over existing feature fusion approaches, with up to 87% reduction in RMSE compared to state-of-the-art methods, and indicate a practical pathway to deployable SOH estimation in battery management systems (BMS) for EV and energy storage applications. Full article
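The accuracy figures quoted here (RMSE, MAE, R2) follow the standard definitions, sketched below on toy SOH values rather than the CALCE or Oxford data:

```python
import math

def rmse(y, yhat):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Toy SOH (%) targets vs. predictions, each off by 0.1%.
y    = [100.0, 98.0, 96.0, 94.0]
yhat = [99.9, 98.1, 95.9, 94.1]
print(round(rmse(y, yhat), 3), round(mae(y, yhat), 3), round(r2(y, yhat), 4))
```

Note that R2 depends on the spread of the targets as well as the errors, which is why a fixed RMSE can correspond to very different R2 values across datasets.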

20 pages, 1488 KB  
Article
Attention-Fusion-Based Two-Stream Vision Transformer for Heart Sound Classification
by Kalpeshkumar Ranipa, Wei-Ping Zhu and M. N. S. Swamy
Bioengineering 2025, 12(10), 1033; https://doi.org/10.3390/bioengineering12101033 - 26 Sep 2025
Viewed by 353
Abstract
Vision Transformers (ViTs), inspired by their success in natural language processing, have recently gained attention for heart sound classification (HSC). However, most of the existing studies on HSC rely on single-stream architectures, overlooking the advantages of multi-resolution features. While multi-stream architectures employing early or late fusion strategies have been proposed, they often fall short of effectively capturing cross-modal feature interactions. Additionally, conventional fusion methods, such as concatenation, averaging, or max pooling, frequently result in information loss. To address these limitations, this paper presents a novel attention fusion-based two-stream Vision Transformer (AFTViT) architecture for HSC that leverages two-dimensional mel-cepstral domain features. The proposed method employs a ViT-based encoder to capture long-range dependencies and diverse contextual information at multiple scales. A novel attention block is then used to integrate cross-context features at the feature level, enhancing the overall feature representation. Experiments conducted on the PhysioNet2016 and PhysioNet2022 datasets demonstrate that the AFTViT outperforms state-of-the-art CNN-based methods in terms of accuracy. These results highlight the potential of the AFTViT framework for early diagnosis of cardiovascular diseases, offering a valuable tool for cardiologists and researchers in developing advanced HSC techniques. Full article
(This article belongs to the Section Biosignal Processing)

28 pages, 2869 KB  
Article
Enhancing Medical Image Segmentation and Classification Using a Fuzzy-Driven Method
by Akmal Abduvaitov, Abror Shavkatovich Buriboev, Djamshid Sultanov, Shavkat Buriboev, Ozod Yusupov, Kilichov Jasur and Andrew Jaeyong Choi
Sensors 2025, 25(18), 5931; https://doi.org/10.3390/s25185931 - 22 Sep 2025
Viewed by 668
Abstract
Automated analysis for tumor segmentation and disease classification is hampered by the noise, low contrast, and ambiguity that are common in medical images. This work introduces a new 12-step fuzzy-based enhancement pipeline that uses fuzzy entropy, fuzzy standard deviation, and histogram spread functions to enhance image quality in CT, MRI, and X-ray modalities. The pipeline produces three improved versions per dataset, lowering BRISQUE scores from 28.8 to 21.7 (KiTS19), 30.3 to 23.4 (BraTS2020), and 26.8 to 22.1 (Chest X-ray). It is tested on KiTS19 (CT) for kidney tumor segmentation, BraTS2020 (MRI) for brain tumor segmentation, and Chest X-ray Pneumonia for classification. A classic CNN trained on the original and CLAHE-filtered datasets serves as the baseline, while a Concatenated CNN (CCNN) uses the improved datasets to achieve a Dice coefficient of 99.60% (KiTS19, +2.40% over baseline), segmentation accuracy of 0.983 (KiTS19) and 0.981 (BraTS2020) versus 0.959 and 0.943 (CLAHE), and classification accuracy of 0.974 (Chest X-ray) versus 0.917 (CLAHE). These outcomes demonstrate how well the pipeline improves image quality and increases segmentation/classification accuracy, offering a foundation for clinical diagnostics that is both scalable and interpretable. Full article
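The Dice coefficient used to score the segmentation results is simple to state; a sketch on tiny binary masks (not the KiTS19 data):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks, given as flattened 0/1
    lists: twice the overlap divided by the total foreground in both masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 1]
print(dice(pred, truth))  # 2*2 / (3+3)
```

Unlike plain pixel accuracy, Dice ignores the (usually huge) true-negative background, which is why it is the standard metric for tumor masks.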

20 pages, 2197 KB  
Article
Perceptual Image Hashing Fusing Zernike Moments and Saliency-Based Local Binary Patterns
by Wei Li, Tingting Wang, Yajun Liu and Kai Liu
Computers 2025, 14(9), 401; https://doi.org/10.3390/computers14090401 - 21 Sep 2025
Viewed by 390
Abstract
This paper proposes a novel perceptual image hashing scheme that robustly combines global structural features with local texture information for image authentication. The method starts with image normalization and Gaussian filtering to ensure scale invariance and suppress noise. A saliency map is then generated from a color vector angle matrix using a frequency-tuned model to identify perceptually significant regions. Local Binary Pattern (LBP) features are extracted from this map to represent fine-grained textures, while rotation-invariant Zernike moments are computed to capture global geometric structures. These local and global features are quantized and concatenated into a compact binary hash. Extensive experiments on standard databases show that the proposed method outperforms state-of-the-art algorithms in both robustness against content-preserving manipulations and discriminability across different images. Quantitative evaluations based on ROC curves and AUC values confirm its superior robustness–uniqueness trade-off, demonstrating the effectiveness of the saliency-guided fusion of Zernike moments and LBP for reliable image hashing. Full article
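The Local Binary Pattern descriptor used here assigns each pixel an 8-bit code by thresholding its neighbours against the centre value; a sketch on a hand-made 3×3 patch (the bit ordering is an arbitrary choice for the example):

```python
def lbp_code(img, r, c):
    """Classic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and read the resulting bits clockwise from the top-left."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in offsets:
        code = (code << 1) | (1 if img[r + dr][c + dc] >= centre else 0)
    return code

# Top row brighter than the centre, everything else darker.
img = [
    [9, 9, 9],
    [1, 5, 1],
    [1, 1, 1],
]
print(lbp_code(img, 1, 1))  # bits 11100000
```

A histogram of these codes over the saliency map gives the local-texture half of the hash, which the paper then concatenates with the Zernike-moment global features.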

18 pages, 2389 KB  
Article
Multigene Identification of a Giant Wild Strain of Ganoderma mutabile (ZHM1939) and Screening of Its Culture Substrates
by Huiming Zhou, Longqian Bao, Zeqin Peng, Yuying Bai, Qiqian Su, Longfeng Yu, Chunlian Ma, Jun He and Wanzhong Tan
Life 2025, 15(9), 1475; https://doi.org/10.3390/life15091475 - 19 Sep 2025
Viewed by 422
Abstract
In the present study, a new Ganoderma sp. (ZHM1939) was collected from Lincang, Yunnan, China, and described on the basis of morphological characters and multigene phylogenetic analysis of rDNA-ITS, TEF1α and RPB2 sequences. This fungus is characterized by its exceptionally large, oval basidiomata, with a pileus measuring 63.86 cm long, 52.35 cm wide, and 21.63 cm thick, and a fresh weight of 80.51 kg. The skeleton hyphae from the basidiocarp are grayish to grayish-red in color, septate, 1.41–2.75 μm in diameter, and frequently dichotomously branched. The basidiospores are monocellular, broadly ellipsoid, with round ends or one slightly pointed end, brown–gray in color, and measure 6.52–10.26 μm × 4.68–7.17 μm (n = 30). When cultured for 9 days at 25 ± 2 °C on PDA, the colony was white, ellipsoid or oval, with slightly ragged edges, measured Φ58.26 ± 3.05 mm (n = 5), and grew at 6.47 mm/day; abundant blast-spores formed after culturing for 21 days, making the colony surface powdery-white. The mycelia were septate, hyaline, branching at near-right angles, measured Φ1.28–3.32 μm (n = 30), and had some connections. The rDNA-ITS, TEF1α and RPB2 sequences amplified through PCR were 602 bp, 550 bp and 729 bp, respectively. BLASTn comparison with these sequences showed that ZHM1939 was 99.67–100% identical to related strains of Ganoderma mutabile. A maximum-likelihood phylogenetic tree constructed from the concatenated rDNA-ITS, TEF1α and RPB2 sequences showed that ZHM1939 clustered on the same terminal branch as strains Cui1718 and YUAN 2289 of G. mutabile (bootstrap support = 100%).
ZHM1939 grew on all 15 original inoculum substrates tested; the best growth occurred on substrate 2 (cornmeal 40 g, sucrose 10 g, agar 20 g), with the fastest colony growth rate (6.79 mm/day). Of the five propagation substrates tested, substrate 1 (wheat grains 500 g, gypsum powder 6.5 g and calcium carbonate 2 g) resulted in the highest mycelium growth rate (7.78 mm/day). Among the six cultivation substrates tested, ZHM1939 grew best in substrate 2 (cottonseed hulls 75 g, rice bran 12 g, tree leaves 5 g, cornmeal 5 g, lime powder 1 g, sucrose 1 g and red soil 1 g), with a mycelium growth rate of 7.64 mm/day. In conclusion, ZHM1939 was identified as Ganoderma mutabile, a huge mushroom and rare medicinal macrofungus resource. Original inoculum substrate 9, propagation substrate 1 and cultivation substrate 2 were the optimal substrates for producing the original, propagation and cultivation inocula of this macrofungus. This is the first report of successful growing conditions for mycelial production of this species, although basidiocarp production could not be achieved. The results of the present work establish a scientific foundation for further studies, resource protection and application development of G. mutabile. Full article
(This article belongs to the Special Issue New Developments in Mycology)
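The concatenated-sequence step behind multigene phylogenies like this one can be sketched as joining per-locus alignments taxon by taxon. The four-base "alignments" below are invented placeholders, not the real 602/550/729 bp sequences.

```python
def concatenate_loci(aligned):
    """Join per-locus alignments (dict: locus -> {taxon: sequence}) into one
    concatenated supermatrix row per taxon, in a fixed (sorted) locus order."""
    taxa = sorted(next(iter(aligned.values())))
    return {t: "".join(aligned[locus][t] for locus in sorted(aligned))
            for t in taxa}

def identity(a, b):
    """Fraction of identical sites between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy alignments for three loci (stand-ins for rDNA-ITS, TEF1a, RPB2).
loci = {
    "ITS":  {"ZHM1939": "ACGT", "Cui1718": "ACGT"},
    "TEF1": {"ZHM1939": "TTAA", "Cui1718": "TTAA"},
    "RPB2": {"ZHM1939": "GGCC", "Cui1718": "GGCA"},
}
cat = concatenate_loci(loci)
print(cat["ZHM1939"], identity(cat["ZHM1939"], cat["Cui1718"]))
```

A fixed locus order matters: every taxon must contribute its loci in the same order, or homologous columns of the supermatrix would no longer line up for tree building.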

15 pages, 704 KB  
Article
Suspected Adverse Drug Reactions Associated with Leukotriene Receptor Antagonists Versus First-Line Asthma Medications: A National Registry–Pharmacology Approach
by Mohammed Khan, Christine Hirsch and Alan M. Jones
Pharmacoepidemiology 2025, 4(3), 18; https://doi.org/10.3390/pharma4030018 - 19 Sep 2025
Viewed by 470
Abstract
Background/Objectives: The aim of this study was to determine the suspected adverse drug reaction (ADR) profile of leukotriene receptor antagonists (LTRAs; montelukast and zafirlukast) relative to first-line asthma medications such as short-acting beta agonists (SABAs; salbutamol) and inhaled corticosteroids (ICS; beclomethasone) in the United Kingdom, and to determine the chemical and pharmacological rationale for the suspected ADR signals. Methods: Properties of the asthma medications (pharmacokinetics and pharmacology) were datamined from ChEMBL, the European Molecular Biology Laboratory’s chemical database of bioactive molecules with drug-like properties. Suspected ADR profiles of the asthma medications were curated from the Medicines and Healthcare products Regulatory Agency (MHRA) Yellow Card interactive Drug Analysis Profiles (iDAP) and concatenated to the standardised prescribing levels (using Open Prescribing data) between 2018 and 2023. Results: Total ADRs per 100,000 Rx (p < 0.001) and psychiatric system organ class (SOC) ADRs (p < 0.001) reached statistical significance. Montelukast exhibited the greatest ADR rate, at 15.64 per 100,000 Rx. Conclusions: Relative to the controls, montelukast displays a range of suspected system organ class level ADRs. For the credible and previously reported psychiatric ADRs, montelukast is statistically significant (p < 0.001). A mechanistic hypothesis is proposed based on polypharmacological interactions in combination with the cerebrospinal fluid (CSF) levels attained. Montelukast had the highest nervous-disorder ADR rate at 1.71 per 100,000 Rx, whereas beclomethasone and salbutamol had lower rates (0.43 and 0.14, respectively). These ADRs share a similar background to psychiatric ADRs, with CSF penetrability involved and effects on the dopamine axis. This work further supports the monitoring of montelukast for rare but important neuropsychiatric side effects. Full article
(This article belongs to the Special Issue Pharmacoepidemiology and Pharmacovigilance in the UK)

19 pages, 6878 KB  
Article
Research on the Shear Performance of Undulating Jointed Rammed Earth Walls with Comparative Tests
by Jing Xiao, Ruijie Xu, Shan Dai and Wenfeng Bai
Buildings 2025, 15(18), 3356; https://doi.org/10.3390/buildings15183356 - 16 Sep 2025
Viewed by 280
Abstract
Rammed earth (RE) dwellings are characterized by accessible materials, low cost, and environmental sustainability. However, their poor seismic resistance limits their application. To address this issue, three conventional technical approaches have been developed: (1) adding cement to improve strength; (2) improving structural integrity [...] Read more.
Rammed earth (RE) dwellings are characterized by accessible materials, low cost, and environmental sustainability. However, their poor seismic resistance limits their application. To address this issue, three conventional technical approaches have been developed: (1) adding cement to improve strength; (2) improving structural integrity using reinforced concrete ring beams and columns; and (3) embedding vertical steel bars to provide resistance against horizontal seismic actions. While effective, these methods rely on energy-intensive materials with high carbon emissions. In this study, we analyze the seismic damage characteristics and construction mechanisms of RE walls. The results reveal that the horizontal joints in RE walls significantly weaken their resistance to horizontal seismic actions. To mitigate this, three types of undulating joints are proposed and six specimens are tested. The specimens with local subsidence-type joints reach maximum horizontal loads of 132.44 kN and 135.41 kN, approximately 60% higher than the specimens with horizontal joints, whose maximum horizontal loads are 80.7 kN and 85.83 kN. The specimens with horizontally concatenated gentle arc-type joints reach maximum horizontal loads of 151.17 kN and 173.58 kN, nearly double the shear capacity of the specimens with horizontal joints. Building on these findings, we also include recommendations for integrating elegant RE wall texture design with seismic-resistant undulating joint technology. Full article
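The relative gains quoted in the abstract follow directly from the reported peak loads. A minimal arithmetic check (not from the paper) using the mean of each specimen pair:

```python
# Reported maximum horizontal loads (kN) for each pair of specimens.
horizontal_joints = [80.7, 85.83]     # conventional horizontal joints
subsidence_joints = [132.44, 135.41]  # local subsidence-type joints
arc_joints = [151.17, 173.58]         # horizontally concatenated gentle arc-type joints

def mean(values):
    return sum(values) / len(values)

# Ratio of mean peak load relative to the horizontal-joint specimens.
gain_subsidence = mean(subsidence_joints) / mean(horizontal_joints)
gain_arc = mean(arc_joints) / mean(horizontal_joints)

print(round(gain_subsidence, 2))  # 1.61 -> roughly a 60% increase
print(round(gain_arc, 2))         # 1.95 -> "nearly double"
```

Averaging the two specimens per joint type is an assumption of this sketch; the paper may compare specimens pairwise, but any pairing of the values above yields gains in the same range.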
(This article belongs to the Topic Green Construction Materials and Construction Innovation)
