MF-FusionNet: A Lightweight Multimodal Network for Monitoring Drought Stress in Winter Wheat Based on Remote Sensing Imagery
Abstract
1. Introduction
- (1) We propose a lightweight multimodal feature fusion network, MF-FusionNet, which effectively integrates visual information from images with non-visual numerical data. The network balances recognition accuracy with computational efficiency and model compactness, enabling accurate identification and graded classification of drought stress in winter wheat.
- (2) We design a Lightweight Multimodal Fusion Block (LMFB) to achieve deep fusion between RGB images and numerical vegetation indices. This module adaptively enhances the channel features most relevant to winter wheat drought stress while suppressing environmental noise and other interference, thereby improving the discriminative capacity of the fused features (a minimal sketch of this fusion follows this list).
- (3) We introduce a Cross-Stage Feature Fusion Strategy (CFFS) that combines channel alignment with layer-wise integration to incorporate multi-scale spatial information, enabling the collaborative representation of localized drought symptoms and overall canopy-level characteristics. We also embed a Dual-Coordinate Attention Feature Extraction module (DCAFE), which leverages multi-path pooling and coordinate attention to strengthen the encoding of directional and positional information, improving the model's sensitivity to leaf texture and drought-critical regions in winter wheat.
- (4) The model's recognition results are mapped back to the spatial layout of the field to generate a drought severity map that intuitively displays the distribution of drought stress across winter wheat regions and supports precision agriculture management and decision-making.
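To make the LMFB in contribution (2) concrete, the sketch below shows one way channel-gated fusion of an image feature map with a vegetation-index vector could be implemented in PyTorch. It is an illustrative reconstruction, not the paper's code: the class name `LMFBSketch`, the SE-style gate, and the `vi_dim`/`reduction` parameters are our own assumptions.

```python
import torch
import torch.nn as nn

class LMFBSketch(nn.Module):
    """Hypothetical channel-gated fusion of an image feature map (B, C, H, W)
    with a vegetation-index vector (B, vi_dim); illustrative only."""
    def __init__(self, img_channels: int, vi_dim: int, reduction: int = 4):
        super().__init__()
        self.vi_proj = nn.Linear(vi_dim, img_channels)  # project VI vector into channel space
        self.gate = nn.Sequential(                      # SE-style gate conditioned on both modalities
            nn.Linear(2 * img_channels, img_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(img_channels // reduction, img_channels),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, vi_feat: torch.Tensor) -> torch.Tensor:
        pooled = img_feat.mean(dim=(2, 3))                         # global average pooling -> (B, C)
        vi_emb = self.vi_proj(vi_feat)                             # (B, C)
        weights = self.gate(torch.cat([pooled, vi_emb], dim=1))   # per-channel fusion weights (B, C)
        fused = img_feat * weights[:, :, None, None]               # reweight image channels
        return fused + vi_emb[:, :, None, None]                    # inject VI information additively

# Usage: LMFBSketch(256, vi_dim=10)(torch.randn(2, 256, 7, 7), torch.randn(2, 10))
```

The gating design suppresses channels that respond to background or illumination noise while amplifying those that correlate with the numerical drought indicators, which is the behavior contribution (2) describes.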
2. Materials and Methods
2.1. Study Area
2.2. Experimental Design
2.3. Data Acquisition
2.4. Data Processing and Construction
2.4.1. UAV Image Processing and Construction
2.4.2. Vegetation Index Feature Processing and Construction
2.4.3. Multimodal Data Processing and Construction
2.5. MF-FusionNet Network Architecture
2.5.1. Fusion-StarNet Image Feature Extraction Network
- (1) Superstar Block Feature Extraction Module
- (2) Cross-Stage Feature Fusion Strategy (CFFS)
Algorithm 1: Cross-Stage Feature Fusion Strategy
Input: input feature map x; fusion stage index set F
Output: fused feature representation x
prev_feats ← None
for i = 1 to N do
    x ← stage_i(x)
    if prev_feats ≠ None and i ∈ F then
        if size(prev_feats) ≠ size(x) then
            prev_feats ← Interpolate(prev_feats, target_size = size(x))   // interpolation adjustment
        end if
        prev_feats_aligned ← ChannelAlign(prev_feats)   // 1 × 1 Conv channel alignment
        x ← x + prev_feats_aligned                      // feature fusion
        x ← fuse_convs(x)                               // 3 × 3 Conv
    end if
    prev_feats ← x
end for
return x
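Algorithm 1 maps directly onto PyTorch. The following is a minimal sketch under assumed per-stage channel counts; the names (`CFFSSketch`, `align`, `fuse_convs`) and the choice of bilinear interpolation are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFFSSketch(nn.Module):
    """Cross-stage feature fusion following Algorithm 1.
    Stages are indexed from 0 here; Algorithm 1 counts from 1."""
    def __init__(self, stages: nn.ModuleList, channels: list[int], fuse_stages: set[int]):
        super().__init__()
        self.stages = stages
        self.fuse_stages = fuse_stages
        # 1x1 convs align previous-stage channels to the current stage's channels
        self.align = nn.ModuleDict({
            str(i): nn.Conv2d(channels[i - 1], channels[i], kernel_size=1)
            for i in fuse_stages if i > 0
        })
        # 3x3 convs refine the summed features
        self.fuse_convs = nn.ModuleDict({
            str(i): nn.Conv2d(channels[i], channels[i], kernel_size=3, padding=1)
            for i in fuse_stages if i > 0
        })

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        prev_feats = None
        for i, stage in enumerate(self.stages):
            x = stage(x)
            if prev_feats is not None and i in self.fuse_stages:
                if prev_feats.shape[-2:] != x.shape[-2:]:   # interpolation adjustment
                    prev_feats = F.interpolate(prev_feats, size=x.shape[-2:],
                                               mode="bilinear", align_corners=False)
                aligned = self.align[str(i)](prev_feats)    # 1x1 conv channel alignment
                x = x + aligned                             # feature fusion
                x = self.fuse_convs[str(i)](x)              # 3x3 conv refinement
            prev_feats = x
        return x
```

For example, with four stages producing channels [32, 64, 128, 256] and fuse_stages = {2, 3}, fusion occurs at the last two stages, combining shallow spatial detail with deep semantic features.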
2.5.2. Vegetation Index Feature Extraction
2.5.3. Lightweight Multimodal Fusion Block (LMFB)
2.5.4. Experimental Environment and Evaluation Metrics
3. Experiments and Results
3.1. Comparative Experiments on Visual Feature Extraction Networks
3.2. Comparative Experiments Between DCAFE and Other Attention Mechanisms
3.3. Comparative Experiments of CFFS at Different Network Stages
3.4. Ablation Study
3.5. Comparative Experiments on Different Fusion Strategies
3.6. Comparative Experiments on Intermediate Fusion Methods
3.7. Visualization of Winter Wheat Drought Stress Using Remote Sensing
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Full Form |
---|---|
LMFB | Lightweight Multimodal Fusion Block |
CFFS | Cross-Stage Feature Fusion Strategy |
DCAFE | Dual-Coordinate Attention Feature Extraction Module |
NM | Network Modification |
SE | Squeeze-and-Excitation |
CBAM | Convolutional Block Attention Module |
ECA | Efficient Channel Attention |
SimAM | A Simple, Parameter-Free Attention Module |
References
- Zhao, H.; Cai, D.H.; Wang, H.L.; Yang, Y.; Wang, R.; Zhang, K.; Qi, Y.; Zhao, F.; Chen, F.; Yue, P.; et al. Progress and prospect on impact of drought disaster on food security and its countermeasures. J. Arid Meteorol. 2023, 41, 187–206.
- Zhu, G.; Liu, Y.; Shi, P.; Jia, W.; Zhou, J.; Liu, Y.; Ma, X.; Pan, H.; Zhang, Y.; Zhang, Z.; et al. Stable water isotope monitoring network of different water bodies in Shiyang River basin, a typical arid river in China. Earth Syst. Sci. Data 2022, 14, 3773–3789.
- Wu, H.; Yang, Z. Effects of Drought Stress and Postdrought Rewatering on Winter Wheat: A Meta-Analysis. Agronomy 2024, 14, 298.
- Shah, S.; Depeng, W.; Shah, F.; Alharby, H.F.; Bamagoos, A.A.; Mjrashi, A.; Alabdallah, N.M.; AlZahrani, S.S.; AbdElgawad, H.; Adnan, M.; et al. Comprehensive Impacts of Climate Change on Rice Production and Adaptive Strategies in China. Front. Microbiol. 2022, 13, 926059.
- Wu, Y.M.; Zhu, J.T.; Zhu, D.L.; Li, D. Meta-analysis on influencing factors of irrigated winter wheat yield and water use efficiency in China. J. Irrig. Drain. 2020, 39, 84–92.
- Xiao, X.; Ming, W.; Luo, X.; Yang, L.; Li, M.; Yang, P.; Ji, X.; Li, Y. Leveraging multisource data for accurate agricultural drought monitoring: A hybrid deep learning model. Agric. Water Manag. 2024, 293, 108692.
- Zait, Y.; Shemer, O.E.; Cochavi, A. Dynamic responses of chlorophyll fluorescence parameters to drought across diverse plant families. Physiol. Plant. 2024, 176, e14527.
- Das, S.; Christopher, J.; Apan, A.; Choudhury, M.R.; Chapman, S.; Menzies, N.W.; Dang, Y.P. UAV-thermal imaging and agglomerative hierarchical clustering techniques to evaluate and rank physiological performance of wheat genotypes on sodic soil. ISPRS J. Photogramm. Remote Sens. 2021, 173, 221–237.
- Das, S.; Christopher, J.; Choudhury, M.R.; Apan, A.; Chapman, S.; Menzies, N.W.; Dang, Y.P. Evaluation of drought tolerance of wheat genotypes in rain-fed sodic soil environments using high-resolution UAV remote sensing techniques. Biosyst. Eng. 2022, 217, 68–82.
- Maji, A.K.; Das, S.; Marwaha, S.; Kumar, S.; Dutta, S.; Choudhury, M.R.; Arora, A.; Ray, M.; Perumal, A.; Chinusamy, V. Intelligent decision support for drought stress (IDSDS): An integrated remote sensing and artificial intelligence-based pipeline for quantifying drought stress in plants. Comput. Electron. Agric. 2025, 236, 110477.
- Mucchiani, C.; Zaccaria, D.; Karydis, K. Assessing the potential of integrating automation and artificial intelligence across sample-destructive methods to determine plant water status: A review and score-based evaluation. Comput. Electron. Agric. 2024, 224, 108992.
- Devi, S.; Singh, V.; Yashveer, S.; Poonia, A.K.; Paras; Chawla, R.; Kumar, D.; Akbarzai, D.K. Phenotypic, physiological and biochemical delineation of wheat genotypes under different stress conditions. Biochem. Genet. 2024, 62, 3305–3335.
- Gupta, A.; Kaur, L.; Kaur, G. Drought stress detection technique for wheat crop using machine learning. PeerJ Comput. Sci. 2023, 9, e1268.
- Gao, S.; Liang, H.; Hu, D.; Hu, X.; Lin, E.; Huang, H. SAM-ResNet50: A Deep Learning Model for the Identification and Classification of Drought Stress in the Seedling Stage of Betula luminifera. Remote Sens. 2024, 16, 4141.
- An, J.; Li, W.; Li, M.; Cui, S.; Yue, H. Identification and Classification of Maize Drought Stress Using Deep Convolutional Neural Network. Symmetry 2019, 11, 256.
- Goyal, P.; Sharda, R.; Saini, M.; Siag, M. A deep learning approach for early detection of drought stress in maize using proximal scale digital images. Neural Comput. Appl. 2024, 36, 1899–1913.
- Yang, D.; Wang, F.; Hu, Y.; Lan, Y.; Deng, X. Citrus Huanglongbing Detection Based on Multi-Modal Feature Fusion Learning. Front. Plant Sci. 2021, 12, 809506.
- Li, S.; Song, Z.; Liang, Q.; Meng, L.; Yu, Y.; Chen, Y. Nondestructive Detection of Citrus Infested by Bactrocera dorsalis Based on X-ray and RGB Image Data Fusion. Trans. Chin. Soc. Agric. Mach. 2023, 54, 385–392. (In Chinese with English abstract)
- Yao, J.; Wu, Y.; Liu, J.; Wang, H. Multimodal deep learning-based drought monitoring research for winter wheat during critical growth stages. PLoS ONE 2024, 19, e0300746.
- Meier, U. Growth Stages of Mono- and Dicotyledonous Plants: BBCH Monograph; Open Agrar Repositorium: Quedlinburg, Germany, 2018.
- Rouse, J.W., Jr.; Haas, R.H.; Deering, D.W.; Schell, J.A.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA/GSFC Type III, Final Report; Greenbelt, MD, USA, 1974.
- Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
- Boiarskii, B.; Hasegawa, H. Comparison of NDVI and NDRE indices to detect differences in vegetation and chlorophyll content. J. Mech. Contin. Math. Sci. 2019, 4, 20–29.
- Jiang, L.; Kogan, F.N.; Guo, W.; Tarpley, J.D.; Mitchell, K.E.; Ek, M.B.; Tian, Y.; Zheng, W.; Zou, C.; Ramsay, B.H. Real-time weekly global green vegetation fraction derived from advanced very high resolution radiometer-based NOAA operational global vegetation index (GVI) system. J. Geophys. Res. Atmos. 2010, 115, D11.
- Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1353691.
- Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213.
- Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30, 1248.
- Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2001, 76, 156–172.
- Mishra, S.; Mishra, D.R. Normalized difference chlorophyll index: A novel model for remote estimation of chlorophyll-a concentration in turbid productive waters. Remote Sens. Environ. 2012, 117, 394–406.
- Ma, X.; Dai, X.; Bai, Y.; Wang, Y.; Fu, Y. Rewrite the Stars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5694–5703.
- Gupta, S.; Tripathi, A.K. Flora-NET: Integrating dual coordinate attention with adaptive kernel based convolution network for medicinal flower identification. Comput. Electron. Agric. 2025, 230, 109834.
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722.
- Yan, Q.; Feng, Y.; Zhang, C.; Pang, G.; Shi, K.; Wu, P.; Dong, W.; Sun, J.; Zhang, Y. HVI: A New Color Space for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 11–15 June 2025; pp. 5678–5687.
- Koonce, B. MobileNetV3. In Convolutional Neural Networks with Swift for TensorFlow: Image Recognition and Dataset Categorization; Apress: Berkeley, CA, USA, 2021; pp. 125–144.
- Qin, D.; Leichner, C.; Delakis, M.; Fornoni, M.; Luo, S.; Yang, F.; Wang, W.; Banbury, C.; Ye, C.; Akin, B.; et al. MobileNetV4: Universal models for the mobile ecosystem. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2024; pp. 78–96.
- Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131.
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More Features from Cheap Operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589.
- Mehta, S.; Rastegari, M. Separable self-attention for mobile vision transformers. arXiv 2022, arXiv:2206.02680.
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. SimAM: A simple, parameter-free attention module for convolutional neural networks. In International Conference on Machine Learning; PMLR: New York, NY, USA, 2021; pp. 11863–11874.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 2020, 128, 336–359.
- Yuan, Y.; Li, Z.; Zhao, B. A Survey of Multimodal Learning: Methods, Applications, and Future. ACM Comput. Surv. 2025, 57, 1–34.
- Baltrušaitis, T.; Ahuja, C.; Morency, L.P. Multimodal machine learning: A survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 423–443.
- Felix, M.J.B.; Main, R.; Watt, M.S.; Arpanaei, M.M.; Patuawa, T. Early Detection of Water Stress in Kauri Seedlings Using Multitemporal Hyperspectral Indices and Inverted Plant Traits. Remote Sens. 2025, 17, 463.
- Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images. Remote Sens. 2015, 7, 4026–4047.
- Ning, D.; Zhang, Y.; Li, X.; Qin, A.; Huang, C.; Fu, Y.; Gao, Y.; Duan, A. The effects of foliar supplementation of silicon on physiological and biochemical responses of winter wheat to drought stress during different growth stages. Plants 2023, 12, 2386.
Equipment | Parameters | Parameter Content |
---|---|---|
UAV platform | Model | DJI Mavic 3 Multispectral |
 | Bare weight | 951 g |
 | Operating temperature | −10 °C to 40 °C |
 | Maximum wind resistance speed | 12 m/s |
Sensor | Field of view (RGB) | 84° |
 | RGB camera | 4/3 CMOS, 20 megapixels |
 | Equivalent focal length (RGB) | 24 mm |
 | Multispectral sensor | 1/2.8″ CMOS, 5 megapixels |
 | Field of view (multispectral) | 73.91° |
 | Equivalent focal length (multispectral) | 25 mm |
 | Spectral bands | G: 560 ± 16 nm; R: 650 ± 16 nm; RE: 730 ± 16 nm; NIR: 860 ± 26 nm |
Vegetation Index | Definition | References |
---|---|---|
Normalized Difference Vegetation Index (NDVI) | NDVI = (NIR − R)/(NIR + R) | [21] |
Normalized Difference Water Index (NDWI) | NDWI = (G − NIR)/(G + NIR) | [22] |
Normalized Difference Red Edge (NDRE) | NDRE = (NIR − RE)/(NIR + RE) | [23] |
Green Vegetation Index (GVI) | GVI = (2 × NIR − R)/(2 × NIR + R) | [24] |
Soil-Adjusted Vegetation Index (SAVI) | SAVI = 1.5 × (NIR − R)/(NIR + R + 0.5) | [25]
Enhanced Vegetation Index (EVI) | EVI = 2.5 × (NIR − R)/(NIR + 2.4 × R + 1) | [26]
Green Normalized Difference Vegetation Index (GNDVI) | GNDVI = (NIR − G)/(NIR + G) | [27]
Optimized Soil-Adjusted Vegetation Index (OSAVI) | OSAVI = (NIR − R)/(NIR + R + 0.16) | [25]
Triangular Vegetation Index (TVI) | TVI = 0.5 × [120 × (NIR − G) − 200 × (R − G)] | [28]
Normalized Difference Chlorophyll Index (NDCI) | NDCI = (RE − NIR)/(RE + NIR) | [29] |
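For reference, the tabulated indices can be computed per pixel from the co-registered G, R, RE, and NIR reflectance bands. Below is a minimal NumPy sketch; the `eps` guard against zero denominators and the function name are our additions.

```python
import numpy as np

def vegetation_indices(g: np.ndarray, r: np.ndarray, re: np.ndarray, nir: np.ndarray) -> dict:
    """Per-pixel vegetation indices from reflectance bands, following the table above."""
    eps = 1e-8  # guards against division by zero on bare-soil or shadow pixels
    return {
        "NDVI":  (nir - r) / (nir + r + eps),
        "NDWI":  (g - nir) / (g + nir + eps),
        "NDRE":  (nir - re) / (nir + re + eps),
        "GVI":   (2 * nir - r) / (2 * nir + r + eps),
        "SAVI":  1.5 * (nir - r) / (nir + r + 0.5),
        "EVI":   2.5 * (nir - r) / (nir + 2.4 * r + 1),
        "GNDVI": (nir - g) / (nir + g + eps),
        "OSAVI": (nir - r) / (nir + r + 0.16),
        "TVI":   0.5 * (120 * (nir - g) - 200 * (r - g)),
        "NDCI":  (re - nir) / (re + nir + eps),
    }
```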
Type of Drought | Training Set | Validation Set |
---|---|---|
Suitable moisture (WW1) | 230 | 75 |
Mild drought (WW2) | 200 | 95 |
Moderate drought (WW3) | 190 | 80 |
Severe drought (WW4) | 215 | 105 |
Extreme drought (WW5) | 215 | 95 |
Total | 1050 | 450 |
Name | Related Configurations |
---|---|
Operating system | Windows 11 |
Processor | Intel Core i7-14700HX |
Graphics | NVIDIA GeForce RTX 4060 8 GB |
Deep learning framework | PyTorch 2.3 |
Programming language | Python 3.12 |
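The accuracy (Acc), recall (R), and F1 metrics reported in the following tables are presumably the standard classification definitions (our assumption; the macro/micro averaging choice for the five drought classes is not stated here):

$$\mathrm{Acc}=\frac{TP+TN}{TP+TN+FP+FN},\qquad R=\frac{TP}{TP+FN},\qquad F_{1}=\frac{2PR}{P+R},\quad\text{with } P=\frac{TP}{TP+FP}.$$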
Classification Model | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|
MobileNetv3_small | 88.76 | 88.55 | 88.90 | 1.51 | 4.72 | 0.06 |
MobileNetv4_conv_small | 87.31 | 87.21 | 88.63 | 2.47 | 4.98 | 0.18 |
ShuffleNet_v2 | 91.46 | 91.43 | 91.66 | 1.26 | 4.2 | 0.15 |
GhostNet_050 | 91.86 | 91.56 | 91.68 | 1.31 | 5.22 | 0.05 |
MobileVitv2_050 | 90.82 | 90.89 | 90.15 | 1.1 | 8.32 | 0.36 |
StarNet_s1 | 92.61 | 92.48 | 92.61 | 2.68 | 4.79 | 0.43 |
StarNet_tiny | 92.08 | 92.24 | 92.42 | 0.98 | 4.62 | 0.14 |
Fusion-StarNet | 95.36 | 95.28 | 95.35 | 1.21 | 5.26 | 0.16 |
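The Params/M and Inference/ms columns can in principle be reproduced with a simple profiling routine. The sketch below shows one common protocol (parameter counting plus warm-up and averaged timed forward passes); the input shape and run counts are our assumptions, not the paper's exact measurement setup.

```python
import time
import torch

@torch.no_grad()
def profile(model: torch.nn.Module, input_shape=(1, 3, 224, 224), warmup=10, runs=100):
    """Return (parameter count in millions, mean forward latency in ms); illustrative only."""
    model.eval()
    params_m = sum(p.numel() for p in model.parameters()) / 1e6  # Params/M
    x = torch.randn(*input_shape)
    for _ in range(warmup):          # warm-up passes exclude one-time setup cost
        model(x)
    start = time.perf_counter()
    for _ in range(runs):            # averaged timed passes
        model(x)
    latency_ms = (time.perf_counter() - start) / runs * 1000
    return params_m, latency_ms
```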
Attention Mechanism Name | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|
ECA | 93.57 | 93.52 | 93.59 | 0.98 | 4.91 | 0.14 |
SE | 93.38 | 93.2 | 93.57 | 0.99 | 4.55 | 0.14 |
CBAM | 92.98 | 93.00 | 92.98 | 0.99 | 5.07 | 0.14 |
SimAM | 93.37 | 93.52 | 93.63 | 0.98 | 4.61 | 0.14 |
DCAFE | 93.81 | 93.07 | 93.26 | 1.01 | 5.11 | 0.15 |
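For context on this comparison: DCAFE builds on coordinate attention (Hou et al., CVPR 2021), which factorizes spatial pooling into height-wise and width-wise paths so that the resulting channel weights carry positional information. The sketch below implements only the published coordinate-attention block as a point of reference; DCAFE's additional multi-path pooling is not reproduced, and the layer sizes follow the reference defaults rather than the paper's module.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Coordinate attention (Hou et al., CVPR 2021); reference implementation sketch."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool along width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool along height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                      # height-wise descriptor (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # width-wise descriptor  (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)  # split back into the two directions
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w   # direction-aware, position-sensitive channel reweighting
```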
Stage 4 | Star Block | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|---|
 | | 92.08 | 92.24 | 92.42 | 0.98 | 4.62 | 0.14 |
√ | | 93.22 | 93.03 | 93.27 | 0.99 | 4.63 | 0.14 |
 | √ | 92.68 | 92.64 | 92.78 | 1.01 | 4.63 | 0.14 |
√ | √ | 93.81 | 93.07 | 93.26 | 1.01 | 5.11 | 0.15 |
Experiment Name | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|
A | 92.08 | 92.24 | 92.42 | 0.98 | 4.62 | 0.14 |
B | 94.01 | 94.08 | 94.16 | 0.99 | 4.68 | 0.15 |
C | 94.15 | 94.13 | 94.11 | 1.03 | 4.7 | 0.16 |
D | 94.38 | 94.26 | 94.49 | 1.18 | 4.72 | 0.16 |
E | 93.94 | 94.01 | 94.07 | 1.19 | 4.84 | 0.16 |
Network Modification | DCAFE | CFFS | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|---|---|
 | | | 92.61 | 92.48 | 92.61 | 2.68 | 4.79 | 0.43 |
√ | | | 92.08 | 92.24 | 92.42 | 0.98 | 4.62 | 0.14 |
√ | √ | | 93.81 | 93.07 | 93.26 | 1.01 | 5.11 | 0.15 |
√ | | √ | 94.38 | 94.26 | 94.49 | 1.18 | 4.72 | 0.16 |
√ | √ | √ | 95.36 | 95.28 | 95.35 | 1.21 | 5.26 | 0.16 |
Multimodal Fusion Strategy | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|
Only Image | 95.36 | 95.28 | 95.35 | 1.21 | 5.26 | 0.16 |
Early Fusion | 94.01 | 93.78 | 94.02 | 1.30 | 5.15 | 0.17 |
Intermediate Fusion (Ours) | 96.71 | 96.71 | 96.64 | 1.45 | 5.29 | 0.16 |
Late Fusion | 96.32 | 96.43 | 96.30 | 1.52 | 5.68 | 0.17 |
Fusion Method | Acc/% | R/% | F1/% | Params/M | Inference/ms | GFLOPs |
---|---|---|---|---|---|---|
I | 93.43 | 93.47 | 93.72 | 1.22 | 4.85 | 0.15 |
II | 94.76 | 94.38 | 94.24 | 1.24 | 4.93 | 0.15 |
III | 95.12 | 95.03 | 95.24 | 1.30 | 5.08 | 0.16 |
IV | 96.71 | 96.71 | 96.64 | 1.45 | 5.29 | 0.16 |