Search Results (1,163)

Search Parameters:
Keywords = color classification

22 pages, 6436 KB  
Article
Face Morphing Attack Detection Using Similarity Score Patterns Between De-Morphed and Live Images
by Thi Thuy Hoang, Bappy Md Siful Islam and Heejune Ahn
Electronics 2025, 14(19), 3851; https://doi.org/10.3390/electronics14193851 - 28 Sep 2025
Abstract
Face morphing attacks have become a serious threat to Face Recognition Systems (FRSs). De-morphing-based morphing attack detection, which compares a suspect image with a live capture, has been proposed and studied, but the unknown parameters of the morphing algorithm used make applying de-morphing methods challenging. This paper proposes a robust face morphing attack detection (FMAD) pipeline leveraging deep learning de-morphing networks. Inspired by differences in how similarity scores (i.e., cosine similarity between feature vectors) vary between morphed and non-morphed images, the pipeline learns the variation patterns of similarity scores between the live capture and de-morphed face/bona fide images under different de-morphing factors. An effective deep de-morphing network based on StyleGAN and the pSp (pixel2style2pixel) encoder was developed. The network generates de-morphed images from the suspect and live images with multiple de-morphing factors and computes similarity scores between feature vectors from the ArcFace network, which are then classified by the detection network. Experiments on morphing datasets from the Color FERET, FRGCv2, and SYS-MAD databases, including landmark-based and deep learning attacks, demonstrate that the proposed method achieves high accuracy in detecting unseen morphing attacks across different databases. It attains an Equal Error Rate (EER) in the 1–4% range and a Bona Fide Presentation Classification Error Rate (BPCER) of approximately 11% at an Attack Presentation Classification Error Rate (APCER) of 0.1%, outperforming previous methods. Full article
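The core signal the pipeline classifies can be sketched in a few lines: cosine similarity between a live-capture embedding and the embeddings of images de-morphed at several factors. This is a minimal illustration with random vectors standing in for ArcFace features; the function names, vector size, and factors are hypothetical, not the paper's implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_pattern(live_feat, demorphed_feats):
    """One similarity score per de-morphing factor; the *pattern*
    across factors (not any single score) is what the detector sees."""
    return [cosine_similarity(live_feat, f) for f in demorphed_feats]

# toy example: as the de-morphing factor grows, a morphed suspect
# drifts away from the live capture, while a bona fide one stays close
rng = np.random.default_rng(0)
live = rng.normal(size=512)
feats = [live + k * rng.normal(size=512) for k in (0.1, 0.3, 0.5)]
scores = similarity_pattern(live, feats)
```

A real detector would feed such score vectors, computed over many de-morphing factors, into the classification network described in the abstract.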
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)

25 pages, 5161 KB  
Article
Non-Destructive Classification of Sweetness and Firmness in Oranges Using ANFIS and a Novel CCI–GLCM Image Descriptor
by David Granados-Lieberman, Alejandro Israel Barranco-Gutiérrez, Adolfo R. Lopez, Horacio Rostro-Gonzalez, Miroslava Cano-Lara, Carlos Gustavo Manriquez-Padilla and Marcos J. Villaseñor-Aguilar
Appl. Sci. 2025, 15(19), 10464; https://doi.org/10.3390/app151910464 - 26 Sep 2025
Abstract
This study introduces a non-destructive computer vision method for estimating postharvest quality parameters of oranges, including maturity index, soluble solid content (expressed in degrees Brix), and firmness. A novel image-based descriptor, termed Citrus Color Index—Gray Level Co-occurrence Matrix Texture Features (CCI–GLCM-TF), was developed by integrating the Citrus Color Index (CCI) with texture features derived from the Gray Level Co-occurrence Matrix (GLCM). By combining contrast, correlation, energy, and homogeneity across multiscale regions of interest and applying geometric calibration to correct image acquisition distortions, the descriptor effectively captures both chromatic and structural information from RGB images. These features served as input to an Adaptive Neuro-Fuzzy Inference System (ANFIS), selected for its ability to model nonlinear relationships and gradual transitions in citrus ripening. The proposed ANFIS models achieved R-squared values greater than or equal to 0.81 and root mean square error values less than or equal to 1.1 across all quality parameters, confirming their predictive robustness. Notably, representative models (ANFIS 2, 4, 6, and 8) demonstrated superior performance, supporting the extension of this approach to full-surface exploration of citrus fruits. The results outperform methods relying solely on color features, underscoring the importance of combining spectral and textural descriptors. This work highlights the potential of the CCI–GLCM-TF descriptor, in conjunction with ANFIS, for accurate, real-time, and non-invasive assessment of citrus quality, with practical implications for automated classification, postharvest process optimization, and cost reduction in the citrus industry. Full article
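The four GLCM texture terms the descriptor combines (contrast, correlation, energy, homogeneity) can be computed directly from a normalized co-occurrence table. A minimal numpy sketch for a single horizontal offset; the quantization level and toy patch are illustrative, not the paper's settings.

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for horizontally adjacent
    pixels, normalized into a joint probability table."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, correlation, energy, homogeneity of a normalized GLCM."""
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
    }

# toy patch already quantized to levels 0..7
patch = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [6, 6, 7, 7], [6, 6, 7, 7]])
feats = glcm_features(glcm(patch))
```

Production code would more likely use scikit-image's `graycomatrix`/`graycoprops`, which also handle multiple offsets and angles as the multiscale descriptor requires.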
(This article belongs to the Special Issue Sensory Evaluation and Flavor Analysis in Food Science)

27 pages, 3413 KB  
Article
DermaMamba: A Dual-Branch Vision Mamba Architecture with Linear Complexity for Efficient Skin Lesion Classification
by Zhongyu Yao, Yuxuan Yan, Zhe Liu, Tianhang Chen, Ling Cho, Yat-Wah Leung, Tianchi Lu, Wenjin Niu, Zhenyu Qiu, Yuchen Wang, Xingcheng Zhu and Ka-Chun Wong
Bioengineering 2025, 12(10), 1030; https://doi.org/10.3390/bioengineering12101030 - 26 Sep 2025
Abstract
Accurate skin lesion classification is crucial for the early detection of malignant lesions, including melanoma, as well as improved patient outcomes. While convolutional neural networks (CNNs) excel at capturing local morphological features, they struggle with the global context modeling essential for comprehensive lesion assessment. Vision transformers address this limitation but suffer from quadratic computational complexity O(n²), hindering deployment in resource-constrained clinical environments. We propose DermaMamba, a novel dual-branch fusion architecture that integrates CNN-based local feature extraction with Vision Mamba (VMamba) for efficient global context modeling with linear complexity O(n). Our approach introduces a state space fusion mechanism with adaptive weighting that dynamically balances local and global features based on lesion characteristics. We incorporate medical domain knowledge through multi-directional scanning strategies and ABCDE (Asymmetry, Border irregularity, Color variation, Diameter, Evolution) rule feature integration. Extensive experiments on the ISIC dataset show that DermaMamba achieves 92.1% accuracy, 91.7% precision, 91.3% recall, and a 91.5% macro-F1 score, outperforming the best baseline by 2.0% accuracy with a 2.3× inference speedup and a 40% memory reduction. The improvements are statistically significant (p < 0.001, Cohen’s d > 0.8), with greater than 79% confidence also preserved on challenging boundary cases. These results establish DermaMamba as an effective solution bridging diagnostic accuracy and computational efficiency for clinical deployment. Full article
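The adaptive weighting idea can be illustrated with a scalar gate that blends the two branches. The paper's exact gating network is not specified in the abstract, so this numpy sketch (function names and feature sizes are assumptions) only shows the shape of the mechanism.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(local_feat, global_feat, w, b=0.0):
    """Gated fusion: a scalar weight predicted from both branches
    balances CNN (local) and state-space (global) features."""
    g = sigmoid(np.dot(w, np.concatenate([local_feat, global_feat])) + b)
    return g * local_feat + (1.0 - g) * global_feat, g

rng = np.random.default_rng(1)
loc, glb = rng.normal(size=16), rng.normal(size=16)
fused, gate = adaptive_fusion(loc, glb, rng.normal(size=32) * 0.1)
```

In a trained model `w` and `b` would be learned, letting the gate favor local texture for small, sharply bordered lesions and global context for diffuse ones.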

25 pages, 4937 KB  
Article
Machine Learning-Driven XR Interface Using ERP Decoding
by Abdul Rehman, Mira Lee, Yeni Kim, Min Seong Chae and Sungchul Mun
Electronics 2025, 14(19), 3773; https://doi.org/10.3390/electronics14193773 - 24 Sep 2025
Viewed by 118
Abstract
This study introduces a machine learning–driven extended reality (XR) interaction framework that leverages electroencephalography (EEG) to decode consumer intentions in immersive decision-making tasks, demonstrated through functional food purchasing within a simulated autonomous vehicle setting. Recognizing inherent limitations of traditional “Preference vs. Non-Preference” EEG paradigms for immersive product evaluation, we propose a novel and robust “Rest vs. Intention” classification approach that significantly enhances cognitive signal contrast and improves interpretability. Eight healthy adults participated in immersive XR product evaluations within a simulated autonomous driving environment using the Microsoft HoloLens 2 headset (Microsoft Corp., Redmond, WA, USA). Participants assessed 3D-rendered multivitamin supplements systematically varied in intrinsic (ingredient, origin) and extrinsic (color, formulation) attributes. Event-related potentials (ERPs) were extracted from 64-channel EEG recordings, specifically targeting five neurocognitive components: N1 (perceptual attention), P2 (stimulus salience), N2 (conflict monitoring), P3 (decision evaluation), and LPP (motivational relevance). Four ensemble classifiers (Extra Trees, LightGBM, Random Forest, XGBoost) were trained to discriminate cognitive states under both paradigms. The “Rest vs. Intention” approach achieved high cross-validated classification accuracy (up to 97.3% in this sample) and area under the curve (AUC > 0.97). SHAP-based interpretability identified dominant contributions from the N1, P2, and N2 components, aligning with neurophysiological processes of attentional allocation and cognitive control. These findings provide preliminary evidence of the viability of ERP-based intention decoding within a simulated autonomous-vehicle setting.
Our framework serves as an exploratory proof-of-concept foundation for future development of real-time, BCI-enabled in-transit commerce systems, while underscoring the need for larger-scale validation in authentic AV environments and raising important considerations for ethics and privacy in neuromarketing applications. Full article
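As a rough sketch of the decoding setup, ERP window averages can serve as features for a two-class "Rest vs. Intention" classifier. The window bounds, sampling rate, synthetic epochs, and the nearest-centroid stand-in (the paper uses tree ensembles such as Extra Trees and XGBoost) are all illustrative assumptions.

```python
import numpy as np

def erp_features(epoch, fs=250):
    """Mean amplitude in canonical ERP windows (ms) for one channel.
    Window bounds are illustrative, not the paper's exact settings."""
    windows = {"N1": (80, 120), "P2": (150, 250), "N2": (200, 350),
               "P3": (300, 500), "LPP": (400, 800)}
    return np.array([epoch[int(a * fs / 1000):int(b * fs / 1000)].mean()
                     for a, b in windows.values()])

def nearest_centroid(train_X, train_y, x):
    """Minimal stand-in for the paper's tree-ensemble classifiers."""
    c0 = train_X[train_y == 0].mean(axis=0)
    c1 = train_X[train_y == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# synthetic 1 s epochs: "intention" epochs get a crude amplitude shift
rng = np.random.default_rng(2)
rest = [erp_features(rng.normal(0.0, 1.0, 250)) for _ in range(20)]
intent = [erp_features(rng.normal(0.0, 1.0, 250) + 2.0) for _ in range(20)]
X = np.array(rest + intent)
y = np.array([0] * 20 + [1] * 20)
pred = nearest_centroid(X, y, erp_features(rng.normal(0.0, 1.0, 250) + 2.0))
```

A realistic pipeline would extract these features per channel across all 64 electrodes and cross-validate the ensemble models, as the study describes.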
(This article belongs to the Special Issue Connected and Autonomous Vehicles in Mixed Traffic Systems)

20 pages, 18992 KB  
Article
Application of LMM-Derived Prompt-Based AIGC in Low-Altitude Drone-Based Concrete Crack Monitoring
by Shijun Pan, Zhun Fan, Keisuke Yoshida, Shujia Qin, Takashi Kojima and Satoshi Nishiyama
Drones 2025, 9(9), 660; https://doi.org/10.3390/drones9090660 - 21 Sep 2025
Viewed by 236
Abstract
In recent years, large multimodal models (LMMs), such as ChatGPT 4o and DeepSeek R1—artificial intelligence systems capable of multimodal (e.g., image and text) human–computer interaction—have gained traction in industrial and civil engineering applications. Concurrently, insufficient real-world drone-view data (specifically close-distance, high-resolution imagery) for civil engineering scenarios has heightened the importance of artificially generated content (AIGC) or synthetic data as supplementary inputs. AIGC is typically produced via text-to-image generative models (e.g., Stable Diffusion, DALL-E) guided by user-defined prompts. This study leverages LMMs to interpret key parameters for drone-based image generation (e.g., color, texture, scene composition, photographic style) and applies prompt engineering to systematize these parameters. The resulting LMM-generated prompts were used to synthesize training data for a You Only Look Once version 8 segmentation model (YOLOv8-seg). To address the need for detailed crack-distribution mapping in low-altitude drone-based monitoring, the trained YOLOv8-seg model was evaluated on close-distance crack benchmark datasets. The experimental results confirm that LMM-prompted AIGC is a viable supplement for low-altitude drone crack monitoring, achieving >80% classification accuracy (images with/without cracks) at a confidence threshold of 0.5. Full article

16 pages, 1247 KB  
Article
Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis
by Ouafa Sijilmassi
J. Imaging 2025, 11(9), 321; https://doi.org/10.3390/jimaging11090321 - 19 Sep 2025
Viewed by 221
Abstract
This study analyzes retinal fundus images to distinguish healthy retinas from those affected by diabetic retinopathy (DR) and glaucoma using a dual-framework approach: vascular texture analysis and global color distribution analysis. The texture-based approach involved segmenting the retinal vasculature and extracting eight Haralick texture features from the Gray-Level Co-occurrence Matrix. Significant differences in features such as energy, contrast, correlation, and entropy were found between healthy and pathological retinas. Pathological retinas exhibited lower textural complexity and higher uniformity, which correlates with vascular thinning and structural changes observed in DR and glaucoma. In parallel, the global color distribution of the full fundus area was analyzed without segmentation. RGB intensity histograms were calculated for each channel and averaged across groups. Statistical tests revealed significant differences, particularly in the green and blue channels. The Mahalanobis distance quantified the separability of the groups per channel. These results indicate that pathological changes in retinal tissue can also lead to detectable chromatic shifts in the fundus. The findings underscore the potential of both vascular texture and color features as non-invasive biomarkers for early retinal disease detection and classification. Full article
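The per-channel Mahalanobis separability reduces to a short computation once each image is summarized by channel statistics. A hedged numpy sketch with synthetic per-image green/blue channel means; the numbers are invented, not from the paper.

```python
import numpy as np

def mahalanobis_separability(group_a, group_b):
    """Mahalanobis distance between two groups of per-image channel
    features, using the pooled covariance of both groups."""
    a, b = np.asarray(group_a), np.asarray(group_b)
    pooled = np.cov(np.vstack([a - a.mean(0), b - b.mean(0)]).T)
    diff = a.mean(0) - b.mean(0)
    return float(np.sqrt(diff @ np.linalg.inv(pooled) @ diff))

# invented mean green/blue intensities per fundus image
rng = np.random.default_rng(3)
healthy = rng.normal([120.0, 80.0], 5.0, size=(30, 2))
pathological = rng.normal([105.0, 70.0], 5.0, size=(30, 2))
d = mahalanobis_separability(healthy, pathological)
```

Larger distances indicate groups that are easier to separate on that channel, which is how the study quantifies the chromatic shifts it reports in the green and blue channels.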
(This article belongs to the Special Issue Emerging Technologies for Less Invasive Diagnostic Imaging)

13 pages, 1369 KB  
Article
Integrating Egg Case Morphology and DNA Barcoding to Discriminate South American Catsharks, Schroederichthys bivius and S. chilensis (Carcharhiniformes: Atelomycteridae)
by Carlos Bustamante, Carolina Vargas-Caro, María J. Indurain and Gabriela Silva
Diversity 2025, 17(9), 651; https://doi.org/10.3390/d17090651 - 16 Sep 2025
Viewed by 598
Abstract
Catsharks are benthic elasmobranchs that share spatial niches with littoral and demersal bony fishes. The genus Schroederichthys includes five species, two of which, S. chilensis and S. bivius, occur in the waters of Chile. These species are morphologically similar and are often misidentified because of their overlapping external features and color patterns. To improve species discrimination, we analyzed the egg case morphology of both species based on 36 egg cases (12 S. chilensis, 24 S. bivius) collected from gravid females captured as bycatch in artisanal fisheries between Iquique and Puerto Montt (July–December 2021). Nine morphometric variables were measured and standardized using the total egg case length. Although the egg cases were similar in general appearance, multivariate analyses revealed significant interspecific differences, with egg case height and anterior border width emerging as the most diagnostic variables. Linear discriminant analysis achieved a 100% classification accuracy within this dataset. To confirm species identity, 24 tissue samples (12 per species) were sequenced for the mitochondrial cytochrome c oxidase subunit I (COI) gene. The haplotypes corresponded to previously published sequences from Chile (S. chilensis) and Argentina (S. bivius), with reciprocal monophyly and 100% bootstrap support. While COI barcoding provided robust confirmation, the core contribution of this study lies in the identification of species-specific egg case morphometrics. Together, these findings establish a dual-track toolkit, egg case morphology for primary discrimination and COI barcodes for confirmatory validation, that can be incorporated into bycatch monitoring and biodiversity assessments, supporting the conservation of poorly known catsharks in the Southeast Pacific. Full article
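A two-class linear discriminant on length-standardized morphometrics can be written out directly from the Fisher criterion. The sketch below uses invented measurements with the paper's sample sizes (12 and 24 egg cases); the two features stand in for standardized case height and anterior border width.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0),
    with a midpoint threshold between the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov((X0 - m0).T) * (len(X0) - 1)
          + np.cov((X1 - m1).T) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

def classify(X, w, thresh):
    return (X @ w > thresh).astype(int)

# invented morphometrics already divided by total egg case length
rng = np.random.default_rng(4)
chilensis = rng.normal([0.30, 0.22], 0.01, size=(12, 2))   # class 0
bivius = rng.normal([0.36, 0.28], 0.01, size=(24, 2))      # class 1
w, t = fisher_lda(chilensis, bivius)
acc = ((classify(chilensis, w, t) == 0).mean()
       + (classify(bivius, w, t) == 1).mean()) / 2.0
```

With well-separated groups, as the paper reports for its nine standardized variables, such a discriminant can reach 100% within-dataset accuracy.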
(This article belongs to the Special Issue Shark Ecology)

26 pages, 11731 KB  
Article
Sow Estrus Detection Based on the Fusion of Vulvar Visual Features
by Jianyu Fang, Lu Yang, Xiangfang Tang, Shuqing Han, Guodong Cheng, Yali Wang, Liwen Chen, Baokai Zhao and Jianzhai Wu
Animals 2025, 15(18), 2709; https://doi.org/10.3390/ani15182709 - 16 Sep 2025
Viewed by 344
Abstract
Under large-scale farming conditions, automated sow estrus detection is crucial for improving reproductive efficiency, optimizing breeding management, and reducing labor costs. Conventional estrus detection relies heavily on human expertise, a practice that introduces subjective variability and consequently diminishes both accuracy and efficiency. Failure to identify estrus promptly and pair animals effectively lowers breeding success rates and drives up overall husbandry costs. In response to the need for the automated detection of sows’ estrus states in large-scale pig farms, this study proposes a method for detecting sows’ vulvar status and estrus based on multi-dimensional feature crossing. The method adopts a dual optimization strategy: First, the Bi-directional Feature Pyramid Network—Selective Decoding Integration (BiFPN-SDI) module performs the bidirectional, weighted fusion of the backbone’s low-level texture and high-level semantic, retaining the multi-dimensional cues most relevant to vulvar morphology and producing a scale-aligned, minimally redundant feature map. Second, by embedding a Spatially Enhanced Attention Module head (SEAM-Head) channel attention mechanism into the detection head, the model further amplifies key hyperemia-related signals, while suppressing background noise, thereby enabling cooperative and more precise bounding box localization. To adapt the model for edge computing environments, Masked Generative Distillation (MGD) knowledge distillation is introduced to compress the model while maintaining the detection speed and accuracy. Based on the bounding box of the vulvar region, the aspect ratio of the target area and the red saturation features derived from a dual-threshold method in the HSV color space are used to construct a lightweight Multilayer Perceptron (MLP) classification model for estrus state determination. 
The network was trained on 1400 annotated samples, which were divided into training, testing, and validation sets in an 8:1:1 ratio. On-farm evaluations in commercial pig facilities show that the proposed system attains an 85% estrus detection success rate. Following lightweight optimization, inference latency fell from 24.29 ms to 18.87 ms, and the model footprint was compressed from 32.38 MB to 3.96 MB on the same machine, while maintaining a mean Average Precision (mAP) of 0.941; the accuracy penalty from model compression was kept below 1%. Moreover, the model demonstrates robust performance under complex lighting and occlusion conditions, enabling real-time processing from vulvar localization to estrus detection, and providing an efficient and reliable technical solution for automated estrus monitoring in large-scale pig farms. Full article
(This article belongs to the Special Issue Application of Precision Farming in Pig Systems)

25 pages, 5456 KB  
Article
A Lightweight Hybrid Detection System Based on the OpenMV Vision Module for an Embedded Transportation Vehicle
by Xinxin Wang, Hongfei Gao, Xiaokai Ma and Lijun Wang
Sensors 2025, 25(18), 5724; https://doi.org/10.3390/s25185724 - 13 Sep 2025
Viewed by 451
Abstract
Aiming at the real-time object detection requirements of the intelligent control system for laboratory item transportation in mobile embedded unmanned vehicles, this paper proposes a lightweight hybrid detection system based on the OpenMV vision module. The system adopts a two-stage detection mechanism: in long-distance scenarios (>32 cm), fast target positioning is achieved through red threshold segmentation based on the HSV (Hue, Saturation, Value) color space; at close range (≤32 cm), it switches to a lightweight deep learning model for fine-grained recognition to reduce invalid computations. By integrating the MobileNetV2 backbone network with the FOMO (Faster Objects, More Objects) object detection algorithm, the FOMO MobileNetV2 model is constructed, achieving an average classification accuracy of 94.1% on a self-built multi-dimensional dataset (covering two variables, light intensity and object distance, with 820 samples), a 26.5% improvement over the baseline MobileNetV2. In terms of hardware, multiple functional components are integrated: an OLED display, a Bluetooth communication unit, an ultrasonic sensor, an OpenMV H7 Plus camera, and a servo pan-tilt. Target tracking is realized through the PID control algorithm, and the embedded terminal achieves real-time processing performance of 55 fps. Experimental results show that the system can effectively identify and track the designated laboratory targets in real time. The designed unmanned vehicle system provides a practical solution for the automated, low-power transportation of small items in the laboratory environment. Full article
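The two-stage dispatch is easy to sketch: a cheap HSV red test gates the expensive model by distance. The thresholds and the stubbed close-range classifier are illustrative assumptions, using Python's standard `colorsys` rather than OpenMV's firmware API.

```python
import colorsys

def is_red(rgb, s_min=0.5, v_min=0.3):
    """Coarse red test in HSV space; thresholds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    return (h < 30 / 360.0 or h > 330 / 360.0) and s >= s_min and v >= v_min

def detect(rgb, distance_cm, run_model):
    """Two-stage dispatch: cheap color gating when far away, the
    fine-grained model only at close range (<= 32 cm)."""
    if distance_cm > 32:
        return "red-candidate" if is_red(rgb) else "background"
    return run_model(rgb)

# stub standing in for the close-range classifier (FOMO MobileNetV2)
fake_model = lambda rgb: "target"
```

On the real vehicle, `distance_cm` would come from the ultrasonic sensor and `run_model` from the deployed quantized network; the point of the gate is that far-away frames never pay the model's inference cost.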
(This article belongs to the Section Vehicular Sensing)

82 pages, 17076 KB  
Review
Advancements in Embedded Vision Systems for Automotive: A Comprehensive Study on Detection and Recognition Techniques
by Anass Barodi, Mohammed Benbrahim and Abdelkarim Zemmouri
Vehicles 2025, 7(3), 99; https://doi.org/10.3390/vehicles7030099 - 12 Sep 2025
Viewed by 611
Abstract
Embedded vision systems play a crucial role in the advancement of intelligent transportation by supporting real-time perception tasks such as traffic sign recognition and lane detection. Despite significant progress, their performance remains sensitive to environmental variability, computational constraints, and scene complexity. This review examines the current state of the art in embedded vision approaches used for the detection and classification of traffic signs and lane markings. The literature is structured around three main stages, localization, detection, and recognition, highlighting how visual features like color, geometry, and road edges are processed through both traditional and learning-based methods. A major contribution of this work is the introduction of a practical taxonomy that organizes recognition techniques according to their computational load and real-time applicability in embedded contexts. In addition, the paper presents a critical synthesis of existing limitations, with attention to sensor fusion challenges, dataset diversity, and deployment in real-world conditions. By adopting the SALSA methodology, the review follows a transparent and systematic selection process, ensuring reproducibility and clarity. The study concludes by identifying specific research directions aimed at improving the robustness, scalability, and interpretability of embedded vision systems. These contributions position the review as a structured reference for researchers working on intelligent driving technologies and next-generation driver assistance systems. The findings are expected to inform future implementations of embedded vision systems in real-world driving environments. Full article

19 pages, 38450 KB  
Article
Color Normalization in Breast Cancer Immunohistochemistry Images Based on Sparse Stain Separation and Self-Sparse Fuzzy Clustering
by Attasuntorn Traisuwan, Somchai Limsiroratana, Pornchai Phukpattaranont, Phiraphat Sutthimat and Pichaya Tandayya
Diagnostics 2025, 15(18), 2316; https://doi.org/10.3390/diagnostics15182316 - 12 Sep 2025
Viewed by 386
Abstract
Background and Objective: The color normalization of breast cancer immunohistochemistry (IHC)-stained images changes the color distribution of undesirable IHC-stained images so that they are more interpretable for pathologists. This affects the Allred score that pathologists use to estimate the drug quantity for treating breast cancer patients. Methods: A new color normalization technique based on sparse stain separation and self-sparse fuzzy clustering is proposed. Results: The quaternion structural similarity was used to measure the quality of the normalization algorithm. Our technique has a lower structural similarity score than other techniques, and its color distribution similarity is closer to the target. We applied automated and unsupervised nuclei classification with Automatic Color Deconvolution (ACD) to test the color features extracted from the normalized images. Conclusions: The classification result from our unsupervised nuclei classification with ACD is similar to that of other normalization methods, but the normalized images are easier for pathologists to interpret. Full article
(This article belongs to the Special Issue Medical Images Segmentation and Diagnosis)

15 pages, 2807 KB  
Article
Investigation of the Coloration Mechanisms of Yellow-Green Nephrite from Ruoqiang (Xinjiang), China
by Boling Huang, Mingxing Yang, Xihan Yang, Xuan Wang, Ting Fang, Hongwei Han and Shoucheng Wang
Minerals 2025, 15(9), 961; https://doi.org/10.3390/min15090961 - 10 Sep 2025
Viewed by 398
Abstract
This study systematically investigates the color origin and coloration mechanisms of yellow-green nephrite from Ruoqiang, Xinjiang, using multiple analytical techniques including hyperspectral colorimetry, X-ray fluorescence (XRF) spectroscopy, titrimetry, laser ablation inductively coupled plasma–mass spectrometry (LA-ICP-MS), Raman spectroscopy and ultraviolet–visible (UV-Vis) spectroscopy. A pioneering quantitative model (R2 = 0.942) was established between hue (H) and the Fe2O3 ratio (Fe2O3/TFe), revealing that the coloration mechanism is jointly governed by Fe3+ charge transfer (300–400 nm absorption band) and Fe2+→Fe3+ transitions (600–630 nm absorption band). Furthermore, the intensity variation in the 3651 cm−1 Raman peak serves to further confirm the critical role of Fe3+ occupancy in the tremolite lattice for color modulation. In combination with the partition patterns of Rare Earth elements (REEs) (right-leaning LREE distribution with negative Eu anomaly) and trace element characteristics, this study supports the classification of Ruoqiang yellow-green nephrite as a high oxygen fugacity magnesian marble-type deposit. In this type of deposit, the ore-forming environment facilitates Fe3+ enrichment and yellow-green hue formation. The findings provide new theoretical insights into the chromatic genesis of yellow-green nephrite and hold significant implications for its identification, quality grading, and research on metallogenic mechanisms. Full article
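A hue-vs-Fe2O3-ratio model of this kind is an ordinary least-squares fit. The (ratio, hue) pairs below are invented stand-ins for the paper's measurements (the published model reports R² = 0.942); the sketch only shows how such a fit and its R² are computed.

```python
import numpy as np

# hypothetical Fe2O3/TFe ratios and hue angles (degrees); the real
# values come from titrimetry and hyperspectral colorimetry
ratio = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.93])
hue = np.array([95.0, 102.0, 110.0, 121.0, 128.0, 138.0])

# linear fit hue = slope * ratio + intercept
slope, intercept = np.polyfit(ratio, hue, 1)
pred = slope * ratio + intercept

# coefficient of determination R^2
ss_res = ((hue - pred) ** 2).sum()
ss_tot = ((hue - hue.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

A positive slope matches the paper's finding that a larger ferric fraction shifts the stone toward a yellower-green hue.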

24 pages, 7601 KB  
Article
Network Intrusion Detection Integrating Feature Dimensionality Reduction and Transfer Learning
by Hui Wang, Wei Jiang, Junjie Yang, Zitao Xu and Boxin Zhi
Technologies 2025, 13(9), 409; https://doi.org/10.3390/technologies13090409 - 10 Sep 2025
Viewed by 337
Abstract
In the Internet era, malicious network intrusions occur frequently, and network intrusion detection is increasingly in demand. Addressing the challenges of high-dimensional, nonlinear, and noisy network traffic data in network intrusion detection, a network intrusion detection model is proposed in this paper. Firstly, a hybrid multi-model feature selection and kernel-based dimensionality reduction algorithm is proposed to map high-dimensional features to a low-dimensional space, achieving feature dimensionality reduction and enhancing nonlinear separability. Then, semantic feature mapping is introduced to convert the low-dimensional features into color images that represent distinct data characteristics. For classifying these images, an integrated convolutional neural network is constructed. Moreover, sub-model fine-tuning is performed through transfer learning, and weights are assigned to improve the performance of multi-classification detection. Experiments on the UNSW-NB15 and CICIDS 2017 datasets show that the proposed model achieves accuracies of 99.99% and 99.96%, with F1-scores of 99.98% and 99.91%, respectively. Full article
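One simple way to realize a feature-to-image step, hedged as a generic stand-in for the paper's semantic feature mapping, is to scale the reduced feature vector and tile it into an RGB grid for the CNN:

```python
import numpy as np

def features_to_image(feat, side=8):
    """Map a low-dimensional feature vector to an RGB image: values
    are min-max scaled to [0, 255], zero-padded to side*side pixels,
    and replicated across three channels. A generic illustration,
    not the paper's exact mapping."""
    f = np.asarray(feat, dtype=float)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12) * 255.0
    flat = np.zeros(side * side)
    flat[: len(f)] = f
    gray = flat.reshape(side, side)
    return np.stack([gray] * 3, axis=-1).astype(np.uint8)

rng = np.random.default_rng(5)
img = features_to_image(rng.normal(size=20))
```

The resulting fixed-size images can then be batched straight into a standard convolutional classifier.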
(This article belongs to the Section Information and Communication Technologies)
18 pages, 4791 KB  
Article
A Machine-Learning-Based Cloud Detection and Cloud-Top Thermodynamic Phase Algorithm over the Arctic Using FY3D/MERSI-II
by Caixia Yu, Xiuqing Hu, Yanyu Lu, Wenyu Wu and Dong Liu
Remote Sens. 2025, 17(18), 3128; https://doi.org/10.3390/rs17183128 - 9 Sep 2025
Abstract
The Arctic, characterized by extensive ice and snow cover, persistently low solar elevation angles, and prolonged polar nights, poses significant challenges for conventional spectral threshold methods in cloud detection and cloud-top thermodynamic phase classification. This study addresses these limitations by combining active and passive remote sensing to develop a machine learning framework for both tasks. Using the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) cloud product from 2021 as the truth reference, the model was trained on spatiotemporally collocated datasets from FY3D/MERSI-II (Medium Resolution Spectral Imager-II) and CALIOP. The AdaBoost (Adaptive Boosting) algorithm was employed to construct the model, with six distinct Arctic surface types treated separately to enhance performance. Accuracy tests showed that the cloud detection model achieved 0.92 and the cloud-top phase classification model achieved 0.93. The final model's retrieval performance was then evaluated on a completely independent dataset from July 2022. The results align well with CALIOP, and the detection and identification outcomes across the various surface scenarios are highly consistent with conditions shown in false-color imagery. Full article
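AdaBoost itself is straightforward to sketch: weak learners are trained on iteratively reweighted samples, and their weighted votes form the final classifier. The toy NumPy version below uses decision stumps as the weak learner (a common choice, though the paper does not specify theirs) and assumes binary labels in {−1, +1}; it is an illustration of the algorithm, not the study's implementation:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Train AdaBoost with decision stumps; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)  # uniform initial sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None  # (weighted error, feature, threshold, sign)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)  # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # stump vote weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        # Upweight misclassified samples, downweight correct ones
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict_adaboost(stumps, X):
    """Sign of the alpha-weighted sum of stump votes."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in stumps)
    return np.sign(score)
```

In practice one would use a library implementation (e.g. scikit-learn's `AdaBoostClassifier`) with per-surface-type models, as the study's six-surface-type design suggests.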
(This article belongs to the Section Atmospheric Remote Sensing)
20 pages, 2553 KB  
Article
CCIBA: A Chromatic Channel-Based Implicit Backdoor Attack on Deep Neural Networks
by Chaoliang Li, Jiyan Liu, Yang Liu and Shengjie Yang
Electronics 2025, 14(18), 3569; https://doi.org/10.3390/electronics14183569 - 9 Sep 2025
Abstract
Deep neural networks (DNNs) excel in image classification but, because they rely on external training data, are vulnerable to backdoor attacks in which specific markers trigger preset misclassifications. Existing attack techniques face an obvious trade-off between trigger effectiveness and stealthiness, which limits their practical application. To address this, we develop the chromatic channel-based implicit backdoor attack (CCIBA), which combines the discrete wavelet transform (DWT) and singular value decomposition (SVD) to embed triggers in the frequency domain through the chromaticity channels of the YUV color space. Experimental validation on different image datasets shows that, compared to existing methods, CCIBA achieves a higher attack success rate without significantly degrading the model's normal classification ability; its stealthiness is verified by manual inspection as well as multiple experimental metrics, and it circumvents existing defense methods, demonstrating sustainability. Overall, CCIBA strikes a balance between covertness, effectiveness, robustness, and sustainability. Full article
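The abstract does not give the exact embedding scheme; a plausible sketch of the DWT + SVD idea — decompose a chroma (U) channel with a one-level Haar DWT, perturb the LL subband's singular values with those of a trigger pattern, and reconstruct — is below. The choice of subband, the `strength` factor, and the requirement that the trigger match the LL subband's size are all assumptions of this sketch:

```python
import numpy as np

def haar_dwt2(ch):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (ch[0::2, :] + ch[1::2, :]) / 2.0  # row averages
    d = (ch[0::2, :] - ch[1::2, :]) / 2.0  # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    ch = np.empty((a.shape[0] * 2, a.shape[1]))
    ch[0::2, :], ch[1::2, :] = a + d, a - d
    return ch

def embed_trigger(u_channel, trigger, strength=0.02):
    """Blend a trigger's singular values into the LL subband of a chroma channel.

    `trigger` must have the same shape as the LL subband (half the
    channel's height and width).
    """
    LL, LH, HL, HH = haar_dwt2(u_channel)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    _, S_trig, _ = np.linalg.svd(trigger, full_matrices=False)
    S_mod = S + strength * S_trig  # small, low-frequency perturbation
    LL_mod = (U * S_mod) @ Vt      # rebuild LL with modified spectrum
    return haar_idwt2(LL_mod, LH, HL, HH)
```

Because only singular values of the low-frequency subband of a chroma channel are nudged, the change is spread smoothly across the image, which is the intuition behind the method's stealthiness.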