Article

MV-SSRP: Machine Vision Approach for Stress–Strain Measurement in Rice Plants

by Wenlong Yi 1, Xunsheng Zhang 1, Shiming Dai 1, Sergey Kuzmin 2, Igor Gerasimov 2 and Xiangping Cheng 3,*

1 School of Software, Jiangxi Agricultural University, Nanchang 330045, China
2 Faculty of Computer Science and Technology, Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg 197022, Russia
3 Institute of Applied Physics, Jiangxi Academy of Sciences, Nanchang 330096, China
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(7), 1443; https://doi.org/10.3390/agronomy14071443
Submission received: 15 May 2024 / Revised: 27 June 2024 / Accepted: 1 July 2024 / Published: 2 July 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Rice plants’ ability to develop lodging resistance is essential for their proper growth and development, and understanding the stress–strain relationship is crucial for a comprehensive analysis of this resilience. Nevertheless, significant data variability, inefficiency, and substantial observational inaccuracies hinder current measurement and analysis techniques. Therefore, this study proposes a machine vision-based stress–strain measurement method for rice plants to address these limitations. The technique primarily involves the implementation of the proposed MV-SSRP rotating target detection network, which enhances the model’s ability to predict the strain of rice stalks accurately when subjected to bending forces through the integration of the spatial channel reorganization convolution (ScConv) and Squeeze-and-Excitation (SE) attention mechanism. A stress–strain dynamic relationship model was also developed by incorporating real-time stress data obtained from a mechanical testing device. The experimental findings demonstrated that MV-SSRP attained precision, recall, and mean average precision (mAP) rates of 93.4%, 92.6%, and 97.6%, respectively, in the context of target detection. These metrics represented improvements of 4.8%, 3.8%, and 5.1%, respectively, over the performance of the YOLOv8sOBB model. This investigation contributes a theoretical framework and technical underpinning for examining rice lodging resistance.

1. Introduction

Rice (Oryza sativa L.), a globally fundamental food staple, holds significant importance in the world’s food supply [1]. Its yield and quality are intricately linked to national economies and societal well-being. It is a crucial cash crop in numerous countries and a primary economic resource for rural farmers [2]. Rice cultivation in China is predominantly concentrated in the Yangtze, Gan, Pearl, and Min River basins and the southern coastal regions. Despite consistent annual growth in rice production, external influences, such as climatic conditions, can still result in diminished yields in certain regions [3]. Favorable traits play a crucial role in the ability of rice plants to withstand external disruptions, with particular emphasis on lodging resistance, which directly impacts yield. Resilient varieties can endure adverse weather conditions such as wind and rain, thereby minimizing the occurrence of rice spike drop [4]. Investigating the correlation between stress and strain in rice plants throughout their growth and maturation stages is fundamental to understanding their lodging resistance mechanisms [5,6]. An enhanced comprehension of the interplay between cultivation management and environmental stresses facilitates the optimization of yield and quality and elucidates the underlying mechanisms governing rice’s response to such stresses [7]. This knowledge enables accurate evaluation of the growth needs of rice and precise water and fertilizer management, consequently reducing production costs and resource wastage [8,9]. Hence, investigating the stress–strain relationship of rice plants is crucial for regulating their growth, particularly for understanding the correlation between their resistance to buckling and the applied stresses, and it is significant for comprehending and analyzing the buckling resistance of rice plants.
The conventional approach for assessing the resistance of rice plants to buckling involves manually bending the plant to a predetermined angle using a mechanical testing device and then quantifying its physical characteristics [10]. However, this reliance on a mechanical testing apparatus is associated with data inconsistency, limited efficiency, substantial labor costs, and significant observational inaccuracies. The integration of image vision technology offers a potential solution for automating the measurement of plant deformation forces. The current image vision method employed in three-point bending experiments is destructive to plant specimens and entails extended experimental durations; furthermore, using pressure sensors to observe pressure values and plant deformation introduces a degree of error [11]. Additionally, the accuracy of employing a rotating target to measure plant bending requires enhancement [12].
Given these technological deficiencies, this study proposes a measurement approach that combines machine vision and physical mechanics modeling. This method captures images of the pull-bending deformation of rice plants in a non-invasive manner and accurately identifies crucial parameters, such as plant bending angle and displacement, through image processing and modeling, thereby surpassing the constraints of manual measurement. The objectives of this study were to:
  • Develop an integrated suite of machine vision measurement techniques, encompassing hardware, software, and physical modeling approaches, to enable non-destructive quantification of force-induced deformations in rice plants.
  • Design an optimized rotating object detection network based on YOLOv8s-OBB, incorporating spatial channel reorganization convolution and a channel attention mechanism, to enhance the accuracy of extracting detailed parameters associated with rice stem bending.
  • Construct a physical model elucidating the stress–strain relationship in rice plants, enabling quantitative characterization of stress-induced deformations.
  • Validate the feasibility and robustness of the proposed method.

2. Related Work

2.1. Rotating Target Detection

Detecting and localizing rotating targets poses a significant challenge in machine vision, particularly in identifying and pinpointing target objects tilted at various angles within an image [13,14,15]. Traditional target detection algorithms typically have limitations in detecting targets that are not strictly horizontal or vertical, leading to low detection accuracy, especially when dealing with curved rice plants. One commonly used method involves dividing the image of a curved rice plant into multiple horizontal or vertical sub-targets, which are identified individually and subsequently integrated [16]. An alternative approach involves the use of rotation-invariant features, such as the rotation-invariant local binary pattern (RILBP) or the scale-invariant feature transform (SIFT) [17], to describe rice plants. These features are then combined with Random Forest (RF) [18], Support Vector Machine (SVM) [19], and other classifiers to achieve target detection, thereby mitigating the impact of bending and enhancing detection accuracy.
Recently, significant advancements in rotating target detection tasks have been achieved by applying deep learning techniques, specifically by utilizing Convolutional Neural Networks (CNNs) for feature extraction of rice plants and leveraging their rotational invariance to enhance detection performance [20]. For instance, the R3Det algorithm incorporates a rotating bounding box proposal network and a rotational regression loss to enhance the model’s capability to accommodate the shape and orientation attributes of rotating targets [21]. In contrast, the Rotated Faster R-CNN integrates a rotation parameter into the traditional Faster R-CNN architecture, allowing the model to handle arbitrary rotating targets effectively [22]. Additionally, the Rotated RepPoints model builds on RepPoints [23], employing representative points to forecast the location and rotational orientation of the target and to encapsulate its geometric structural details, making it particularly well suited for situations requiring precise localization and orientation. The ReDet algorithm integrates the strengths of the Faster R-CNN and RetinaNet models, enhancing their capabilities for detecting rotating targets by leveraging the favorable attributes of both models and demonstrating proficient detection performance [24].
The algorithms described above utilize deep learning methods to accurately identify and analyze rice stalks in various states of tilting and bending. By capturing rotationally invariant features of rice plants, these algorithms can accurately detect and express the geometric features of plant-bending details using center coordinates, width, height, and degree of rotation relative to the horizontal position. This information can then be used to provide strain measurements for further applications in force–strain automation analysis.

2.2. Stress–Strain Analysis

When investigating the stress–strain relationship of rice’s resistance to buckling, current methodologies can be categorized into three main groups: mechanical testing, stress observation, and simulation analysis. Mechanical testing involves using specialized equipment to conduct experiments, such as tensile, compression, or bending tests, on rice plant samples to quantify mechanical properties and analyze the stress–strain relationship through numerical simulation. Huang et al. [25] conducted a three-point bending experiment to assess rice plants’ bending strength and flexural modulus, followed by an analysis of the stress–strain relationship using finite element analysis. Nevertheless, this approach is considered destructive as it necessitates the preparation of fresh samples for each iteration.
The pressure observation method involves applying pressure to rice plants using a press to measure the degree of bending and record the corresponding pressure values. Yadav et al. [26] introduced this method to evaluate the resistance of rice plants to buckling. However, solely relying on visual observation and manual recording of bending angles may lead to significant subjective errors, thereby compromising the accuracy of the assessment.
Simulation analysis methods, such as finite element analysis and machine learning techniques, were utilized by Challapalli et al. [27] to construct a bionic stalk training database. The researchers simulated the flexural resistance of natural structures using finite element analysis and optimized the performance of the bionic stalk through machine learning algorithms, ultimately achieving favorable outcomes. Zhou et al. [28] sought to investigate the relationship between the structural attributes of rice plants and their elastic modulus by employing multi-scale simulation techniques to calculate the elastic properties of rice stalks. This approach offers a promising avenue for enhancing the tensile elastic modulus of rice stalks. Shi et al. [29] developed a mechanical model utilizing the Hertz–Mindlin contact model and the bonded particle model (BPM) to simulate the failure mechanisms of stalks under diverse loading scenarios. Ottesen et al. [30] constructed a cross-sectional model of corn stalks with adjustable parameters. They utilized principal component analysis to elucidate the connection between cross-sectional morphology and model prediction accuracy.
The studies above have yielded positive outcomes in examining plant stresses and identifying physical parameters [31,32,33]. Nonetheless, further investigation is required to explore the dynamic interplay between strain and stress in plants.

3. Materials and Methods

3.1. Machine Vision Acquisition Device

A specialized stress–strain measurement device was developed and implemented in this study to capture the strain generated by rice plants under stress accurately. The device comprised a height-adjustable actuator module, a force sensor module, a visual imaging module, and a data transmission module.
During measurement, the rice plant sample (1) was affixed to the test platform, and the platform’s height was adjusted to align with the optimal force position of the plant. The controlled thrust exerted through the pushing device (4) was monitored by the force sensor (3), which synchronously recorded the thrust value, enabling precise control over the extent of bending in the rice plant. The visual imaging module (2) comprehensively captured the rice plant’s deformation process under various conditions. The acquired images were transmitted to the computer (5) for real-time storage via the data transmission module, enabling subsequent analysis and processing. The operational concept of the measuring device, with the numbered components, is illustrated in Figure 1.

3.2. Strain Measurement Model

This study proposed a Spatial and Channel Squeeze-and-Excitation YOLOv8s for Oriented Bounding Boxes (ScSE-YOLOv8s-OBB) algorithm based on the YOLOv8s network: a deformation measurement model capable of accurately measuring the geometric deformation of rice plants under stress. As illustrated in Figure 2, the MV-SSRP model consisted of three components: a backbone network, a neck network, and a decoupled head. As the core of the model, the backbone network performed feature extraction and transformation through five convolutional modules. Each module included a downsampling convolution, a BatchNorm layer, and a SiLU activation function to enhance the model’s nonlinear mapping capability. Furthermore, this study incorporated a Spatial Channel-wise Reassembly Convolution (ScConv) [36] after the convolutional modules in the original backbone network. The ScConv integrated a Spatial Reassembly Unit (SRU) and a Channel Reassembly Unit (CRU). The SRU effectively suppressed spatial redundancy through a separable reassembly technique, while the CRU significantly reduced channel redundancy through a split–transform–merge strategy.
Additionally, as a plug-and-play architectural unit, ScConv enhanced the module’s flexibility and practicality. By reorganizing and weighting the channels of the feature maps, ScConv improved the model’s efficiency in utilizing features. This design enabled the network to extract image features effectively and enhanced training speed and accuracy. Concurrently, this study incorporated the Squeeze-and-Excitation (SE) attention mechanism [34] after the four C2f modules in the backbone network, assisting the network in dynamically learning the relevance between channels. By weighting the features of different channels, the network can focus more on task-relevant features, thereby improving feature representation capability. The SE attention mechanism adjusted the importance of each channel in the feature maps, enabling the network model to learn feature representations more accurately and enhancing model accuracy. Moreover, the SE attention mechanism had a relatively small number of parameters, minimizing the computational overhead while keeping the overall network lightweight and efficient.
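For readers who wish to reproduce the channel attention component, the following is a minimal PyTorch sketch of a standard Squeeze-and-Excitation block as described in [34]; the reduction ratio and the exact insertion points after the C2f modules are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention: global average pooling (squeeze)
    followed by a two-layer bottleneck that produces per-channel weights (excitation)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze spatial dimensions to 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each feature channel

# Example: attach SE attention to a C2f output with 256 channels
feat = torch.randn(1, 256, 80, 80)
out = SEBlock(256)(feat)  # same shape, channel-reweighted
```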
The neck network of the model constructed a feature pyramid at different levels, enabling multi-scale feature extraction and fusion to better capture information from objects of varying scales. The head of the network employed a decoupled design, separating the original single detection head into two independent parts. Each decoupled head corresponded to one of the three feature maps output from the neck section. These feature maps were individually processed by the decoupled heads to produce the final output of the model. Each decoupled head consisted of four 3 × 3 convolutional layers and two 1 × 1 convolutional layers.
Additionally, the regression head incorporated a Weighted Intersection over Union (WIoU) loss function to calculate the position offset between the predicted and ground truth bounding boxes, outputting a four-dimensional vector representing the top-left and bottom-right coordinates of the target box. The input feature maps of the bent rice stem underwent a series of operations in the backbone and neck networks, followed by an additional convolution to generate the predicted output. The predicted output included three types of feature maps, each containing rotated bounding boxes annotated by the model for subsequent processing.
Throughout the measurement process, the model partitioned the rice plant into three segments for labeling. As depicted in Figure 3, the rotated boxes x1 and x3 were delimited by the force point of the push rod, with x1 positioned near the ground and x3 at a higher elevation. The tilt angles of these rotated boxes represented the plant’s bending angle. The height of x1 corresponded to the distance of the force point from the ground, while the sum of the height of x3 and the height of the force point equated to the total height of the plant. The stress point was identified separately as the rotated box x2, with its width denoting the diameter of the stress point.
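As an illustration of how the geometric parameters can be recovered from the three rotated boxes described above, the sketch below converts (cx, cy, w, h, angle) detections into the bending angle, heights, and diameter. The function name, the pixel-to-centimeter calibration factor, and the convention that the box angle is given in degrees are assumptions for illustration only.

```python
import math

def stem_parameters(x1, x2, x3, px_per_cm: float = 10.0):
    """Derive stem geometry from three rotated boxes, each given as
    (cx, cy, w, h, angle_deg): x1 = lower stem segment, x2 = stress point,
    x3 = upper stem segment, following the labeling scheme in Figure 3."""
    theta = math.radians(x3[4])                              # bending angle from the box tilt
    force_point_height = x1[3] / px_per_cm                   # height of x1 = force point above ground (cm)
    plant_height = x3[3] / px_per_cm + force_point_height    # height of x3 plus force-point height
    diameter = x2[2] / px_per_cm                             # width of x2 = diameter at the stress point (cm)
    return theta, plant_height, force_point_height, diameter

# Hypothetical detections (pixels / degrees) for one image
theta, H, a, d = stem_parameters((320, 520, 30, 180, 4.0),
                                 (322, 400, 28, 40, 6.0),
                                 (340, 220, 26, 320, 12.5))
```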

3.3. Stress–Strain Relationship Modeling

As illustrated in Figure 4, this study utilized the MV-SSRP model to obtain parameters of the rice plant such as the bending angle θ, the plant height H, and the diameter d at the force application point. A stress–strain relationship model for rice plants was then constructed with the push force F measured by a force sensor.
Through Equations (1) and (2), the stress and strain experienced by the stem were calculated. This approach enables the analysis of stress and strain solely from the input of a bent rice stem image, thereby simulating and analyzing the physical and mechanical characteristics of the bending point in rice stems through image analysis.
σ = Eε
ε = ΔH/H
where σ is the bending stress, ε is the strain, E is Young’s modulus of elasticity, H is the height of the plant, and ΔH is the change in height. When the plant is bent by a force, the curvature equals the reciprocal of the bending radius rb. The curvature C of the rice stalk is obtained as shown in Equation (3).
C = 1/rb = dαb/ds ≈ dαb/dx
A relationship exists for miniature cross-sections, as shown in Equation (4).
l′ − dx = (y/rb)·dx
where l′ is the plant bending arc length and y is the distance from this section to the neutral plane. The stress component σx along the vertical coordinate axis is calculated as shown in Equation (5) [35].
σx = Eεx = ECy
The procedure for calculating the bending moment for each small section is shown in Equation (6).
Mb = ∫σx·y dA = CE∫y² dA = CEI
where Mb is the bending moment and I is the moment of inertia of the cross-section, obtained as the integral of y² over the cross-sectional area; when the rice plant is bent, the bending deflection f satisfies Equation (7).
f = FHa²/(2EI) − Fa³/(6EI) + C1·x + C2
where F is the thrust force at the force point, a is the distance from the ground to the force application point, and f is the bending deflection, i.e., the magnitude of the displacement that occurs in the direction perpendicular to the vertical coordinate axis at the force point after bending occurs in the rice plant. C1 is the coefficient and C2 the constant arising from integration of the bending deflection; the boundary conditions f′(0) = 0 and f(0) = 0 give C1 = C2 = 0, which leads to Equation (8).
f = Fa²(3H − a)/(6EI)
where I = πd⁴/64. The bending deflection can be approximated as f ≈ H × tanθ; therefore, Young’s modulus of elasticity E of the rice plant is calculated as shown in Equation (9) [28].
E = Fa²(3H − a)/(6IH·tanθ)
The bending stress σ can be expressed as σ = M/W, where M is the bending moment, given by M = Fa; W is the bending section coefficient, given by W = I/(d/2) = 2I/d; and d is the diameter at the stress point. Substituting these terms yields Equation (10) for the bending stress.
σ = Fad/(2I)
The above model can quantitatively describe the strain behavior of rice plants under stress from the measured parameters, providing a theoretical basis for assessing their resistance to lodging.
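To make the computation above concrete, the following is a small Python sketch that evaluates Equations (9) and (10) from the quantities returned by the vision model and the force sensor; the function name, the SI units, and the use of ε = σ/E for the strain are illustrative assumptions rather than the authors' implementation.

```python
import math

def rice_stem_mechanics(F: float, a: float, H: float, d: float, theta: float):
    """Compute Young's modulus E (Eq. 9), bending stress sigma (Eq. 10), strain,
    and bending deflection (Eq. 8) for a rice stalk.
    F: push force (N); a: force-point height (m); H: plant height (m);
    d: stem diameter at the stress point (m); theta: bending angle (rad)."""
    I = math.pi * d**4 / 64.0                                        # second moment of area, I = pi*d^4/64
    E = F * a**2 * (3.0 * H - a) / (6.0 * I * H * math.tan(theta))   # Eq. (9)
    sigma = F * a * d / (2.0 * I)                                    # Eq. (10): sigma = M/W with M = Fa, W = 2I/d
    epsilon = sigma / E                                              # Eq. (1) rearranged: epsilon = sigma/E
    f = F * a**2 * (3.0 * H - a) / (6.0 * E * I)                     # Eq. (8), approximately H*tan(theta)
    return E, sigma, epsilon, f

# Illustrative values: 0.2 N push at 0.25 m on a 0.9 m plant, 4 mm stem, 5-degree bend
E, sigma, eps, f = rice_stem_mechanics(F=0.2, a=0.25, H=0.9, d=0.004, theta=math.radians(5))
```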

4. Results

4.1. Experimental Environment

The hardware configuration employed in this experiment consisted of a 24-core Intel Xeon Gold 6226 CPU @ 2.7 GHz, 128 GB of memory, and an NVIDIA Quadro RTX 4000 GPU. The software environment included the Ubuntu 20.04.5 LTS operating system, the PyTorch 1.10 (CUDA 11.3) deep learning framework, and the Python 3.9 programming language. The training parameters were 500 epochs, a batch size of 16, an initial learning rate of 0.001, the Adam optimizer, and a weight decay coefficient of 0.0005. Regarding the loss function, the weight for the rotated bounding box position loss (box-loss) was set to 0.05, the weight for the classification loss (cls-loss) was 0.5, the weight for the rotated bounding box angle loss (theta-loss) was 0.5, and the weight for the object loss (obj-loss) was 1.0. The model output results every 20 epochs.
Moreover, this study implemented an early stopping strategy to optimize training efficiency and prevent overfitting during the model training process. Specifically, we set the patience parameter to 100 iterations. This strategy automatically terminates the training process when the model’s performance no longer significantly improves.
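A minimal sketch of the early-stopping strategy described above, assuming the monitored quantity is a validation metric such as mAP and that patience is counted per validation check; this is an illustration, not the authors' training code.

```python
class EarlyStopping:
    """Stop training once the monitored validation metric has not improved
    for `patience` consecutive checks (patience = 100 in this study)."""
    def __init__(self, patience: int = 100, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, metric: float) -> bool:
        """Return True when training should stop."""
        if metric > self.best + self.min_delta:
            self.best = metric      # new best score: reset the counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1    # no improvement this check
        return self.bad_epochs >= self.patience

# Usage inside a training loop (val_map computed each epoch):
# stopper = EarlyStopping(patience=100)
# if stopper.step(val_map):
#     break
```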

4.2. Dataset

The experiment utilized healthy rice plants at the mature stage grown in the same environment. A stress–strain measurement system was employed to acquire data. During the measurement process, the rice plants were fixed onto a platform, and the height of the push rod was adjusted to the middle position of the third internode of the stem. The shooting angle was set perpendicular to the force application point. As illustrated in Figure 5, a pushing force was applied to bend the stem. Starting from 0 N, the force sensor reading was increased in increments of 0.1 N, and an image of the bent rice stem was captured at each increment. The bent stem images were simultaneously stored on a computer. A total of 218 original images with a resolution of 640 × 640 were collected in the experiment. Subsequently, the dataset was augmented to 3052 images through brightness adjustment, noise addition, contrast adjustment, rotation, mirroring, translation, and cropping.
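The sketch below illustrates one way to build such an augmentation pipeline with torchvision; the parameter ranges are assumptions, and in practice the rotated-box annotations must be transformed together with the images, which is omitted here for brevity.

```python
import torch
from torchvision import transforms

def add_gaussian_noise(img: torch.Tensor, std: float = 0.02) -> torch.Tensor:
    """Additive Gaussian noise on a tensor image in [0, 1]."""
    return (img + torch.randn_like(img) * std).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3),        # brightness / contrast adjustment
    transforms.RandomHorizontalFlip(p=0.5),                      # mirroring
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),   # rotation / translation
    transforms.RandomResizedCrop(640, scale=(0.8, 1.0)),         # cropping back to 640 x 640
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),                       # noise addition
])
# augmented = augment(pil_image)  # applied repeatedly to expand 218 images toward 3052
```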
The RoLabelImg tool was utilized to annotate the original images. The specific annotation process involved identifying the stem to be annotated in the bent rice stem image, drawing an initial rectangular bounding box, adjusting the size of the bounding box, and using the rotation button to adjust the angle of the rotated bounding box to encompass the bent stem accurately. Three rotated bounding boxes were required for each image, enclosing the three parts of the rice stem. The annotation information was saved in an XML file format and stored in the same directory as the corresponding image. The dataset was divided into training, testing, and validation sets in an 8:1:1 ratio.
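As an aid to reproducing the data preparation, the snippet below parses one annotation file into (name, cx, cy, w, h, angle) tuples; the tag names follow the rotated-box XML format that RoLabelImg typically writes, which is an assumption to verify against the actual files rather than a specification from the paper.

```python
import xml.etree.ElementTree as ET

def load_rotated_boxes(xml_path: str):
    """Read rotated boxes (name, cx, cy, w, h, angle) from a RoLabelImg XML file."""
    boxes = []
    root = ET.parse(xml_path).getroot()
    for obj in root.findall("object"):
        name = obj.findtext("name")       # category, e.g. x1 / x2 / x3
        rb = obj.find("robndbox")         # rotated bounding box element
        if rb is None:                    # skip axis-aligned annotations, if any
            continue
        boxes.append((name,
                      float(rb.findtext("cx")), float(rb.findtext("cy")),
                      float(rb.findtext("w")),  float(rb.findtext("h")),
                      float(rb.findtext("angle"))))
    return boxes
```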

4.3. Evaluation Indicators

The assessment criteria comprised Precision, Recall, mean Average Precision (mAP), and Loss. Precision assesses the model’s prediction accuracy, Recall evaluates its capability to identify all positive samples, mAP reflects overall model performance as the area under the Precision–Recall curve averaged across categories, and the loss function value signifies the disparity between the predicted results and the actual labels, aiding in parameter optimization. In this investigation, an Intersection over Union (IoU) threshold of 0.5 was employed to compute mAP, and the evaluation metrics were determined as shown in Equations (11)–(13).
Precision = TP/(TP + FP)
Recall = TP/(TP + FN)
mAP = (1/N) Σ_{i=1}^{N} APi
where TP denotes rotated boxes that the model detected correctly, i.e., detections matching the ground-truth annotations; FP denotes rotated boxes the model detected that do not correspond to actually present rice stalks; and FN denotes ground-truth rotated-box targets that the model failed to detect. N is the number of categories, and APi is the average precision of the i-th category, calculated as shown in Equation (14).
APi = (1/Ri) Σ_{k=1}^{Ri} P(k) × rel(k)
where Ri is the number of TPs in category i, P(k) is the precision over the first k detections, and rel(k) indicates whether the k-th detection is a true positive (TP).
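A short sketch of how Equations (13) and (14) can be evaluated from a list of scored detections; the variable names and the IoU-based TP flags are assumptions for illustration.

```python
import numpy as np

def average_precision(scores, is_tp):
    """AP per Equation (14): rank detections by confidence, take the precision
    P(k) over the top-k detections at every rank where a TP occurs (rel(k) = 1),
    and normalise by the number of TPs R_i."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    precision_at_k = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    return float((precision_at_k * tp).sum() / max(tp.sum(), 1.0))

def mean_average_precision(ap_per_class):
    """mAP per Equation (13): mean of the per-category APs."""
    return float(np.mean(ap_per_class))

# Example: three detections for one category, the 1st and 3rd are TPs (IoU >= 0.5)
ap = average_precision(scores=[0.9, 0.8, 0.6], is_tp=[1, 0, 1])  # = (1/2)(1/1 + 2/3)
```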

4.4. Experimental Results

The experiments employed the MV-SSRP model, which demonstrated strong performance on the rice plant dataset following 500 training iterations, as illustrated in Figure 6. Attaining a final precision of 93.4% and a recall of 97.3%, the model effectively identified and categorized the extent of curvature in rice plants.
Figure 7 presents a detailed comparison of key evaluation metrics between the MV-SSRP and the original YOLOv8sOBB models on the training and validation sets. These metrics include box-loss, cls-loss, obj-loss, and theta-loss. Specifically, box-loss assesses the accuracy of the model’s predicted rotated bounding box positions; cls-loss measures the model’s performance in classifying rotated bounding boxes of rice stems; obj-loss reflects the degree of alignment between the model’s predictions and the ground truth; and theta-loss evaluates the accuracy of the model’s predicted rotation angles for the bounding boxes.
Comparative analysis of the trends in these loss functions shows that, as the number of epochs increases, MV-SSRP is progressively optimized across all evaluation metrics, enhancing the accuracy of rotated object detection for rice plant stems. Moreover, compared with the YOLOv8sOBB model on the validation set, the MV-SSRP model demonstrated superior overall performance, exhibiting lower values for the four loss metrics above and higher precision.
This study further evaluates the model using the precision–recall (PR) curve and the normalized confusion matrix. The PR curve illustrates the interplay between precision and recall across various thresholds, with a larger area under the curve (mAP) indicating superior model performance. The normalized confusion matrix visually represents classification accuracy across different categories. As depicted in Figure 8, the model’s mean Average Precision (mAP) achieves values of 98.5%, 96.1%, and 98.2% for the respective categories of rotated boxes x1, x2, and x3. The overall mAP of 97.6% demonstrates a high level of accuracy in prediction across all categories, indicating excellent performance.
Nine rice samples are randomly selected to ensure the accuracy and representativeness of the MV-SSRP model validation and to consider the heterogeneity in physical characteristics among different rice plants. Simultaneously, to simulate the continuity of force application on plants in actual scenarios, this study applies an external force to each rice plant, starting from 0 N and incrementally increasing by 0.01 N up to 0.3 N. This study captures images of the bending rice stem at each force increment point, obtaining 30 samples for each rice plant. The MV-SSRP model is employed to predict the bending deformation of the rice stems. Concurrently, manual measurements of the actual plant deformation are taken under the same stress conditions.
Figure 9a–i presents the comparative relationship between measured deformation values and model-predicted deformation values for nine distinct rice plants. The x-axis represents the measured deformation values, while the y-axis denotes the model-predicted deformation values, collectively forming the coordinates of each data point in the scatter plot. These data points (small circles) exhibit a pronounced linear trend, which can be fitted to yield a red sloping line. This line reveals a significant linear correlation between the measured and model-predicted deformation values. Notably, the degree of coincidence between this sloping line and the diagonal directly reflects the precision of the model’s predictions: the closer the sloping line approximates the diagonal, the higher the consistency between the model’s predicted results and the actual measured values, confirming the high accuracy of the model’s predictions. Statistical analysis reveals a linear relationship between the model’s predictions and the actual values, with a coefficient of determination of R² ≥ 0.9. This result demonstrates the accuracy of the MV-SSRP model’s predictions.
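For reference, the linear fit and R² reported in Figure 9 can be computed as in the sketch below; the array contents are placeholders.

```python
import numpy as np

def fit_line_and_r2(measured, predicted):
    """Fit predicted-vs-measured deformation with a line (the red sloping line
    in Figure 9) and return slope, intercept, and the coefficient of determination R^2."""
    x = np.asarray(measured, dtype=float)
    y = np.asarray(predicted, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    fitted = slope * x + intercept
    ss_res = np.sum((y - fitted) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot

# slope, intercept, r2 = fit_line_and_r2(measured_mm, predicted_mm)
```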

5. Discussion

5.1. Comparative Analysis of Models

A comparative analysis was conducted with several established models on the same dataset: R3Det [21], Rotated Faster R-CNN [22], Rotated RepPoints [23], and ReDet [24]. The experimental results of each model after 500 training iterations are presented in Table 1.
The findings indicate that the MV-SSRP model demonstrated superior precision, mAP@.5, and mAP@.5:.95 compared with the other models, achieving 93.4%, 97.6%, and 52.6%, respectively. These results underscore the practical utility of the MV-SSRP model for detecting rotating targets in rice plants.

5.2. Ablation Discussion

This study conducted two ablation experiments to thoroughly examine the impact of model structure and components on performance. The first experiment compared different sizes of YOLOv8 network structures (YOLOv8n/l/m/s/x), with the results presented in Table 2. The findings indicate that as the network depth and width increase, model performance initially improves and then declines. Among the considered models, YOLOv8s demonstrated superior performance in accuracy, recall, and mAP, although it exhibited a slight deficiency compared to YOLOv8m in terms of target loss. Considering these results, YOLOv8s was selected as the foundational network for this study.
Furthermore, the ScConv convolution [36] was analyzed in conjunction with various attention mechanisms, including SE attention [34], SKAttention [37], CoorAttention [38], ECA [39], and CBAM [40]. The impact of these modules on model performance was evaluated, and the results are presented in Table 3. The findings indicate that the ScConv convolution enhances the model’s expressive capability while reducing computational complexity. Additionally, incorporating attention mechanisms strengthens the model’s focus on key information and improves feature expression.

6. Conclusions

This study introduces a machine vision measurement technique, MV-SSRP, for precise assessment of the strain characteristics of rice plants under stress conditions and their corresponding stress–strain relationship. The proposed method employs a high-resolution camera and an image processing algorithm to enable non-contact measurement of rice plants. By analyzing the morphological alterations of stalks at varying bending angles and extracting geometric parameters, including stalk length, diameter, and bending angle, this method circumvents potential errors and damages associated with conventional measurement techniques, thereby enhancing measurement precision and efficiency. Additionally, the study employed a specialized mechanical testing apparatus to systematically apply incremental external loads to rice stalks, simultaneously monitoring and documenting the deformation data of the plants and producing stress–strain curves. This method holds significant importance in elucidating the mechanical characteristics of rice plants and enhancing cultivation and harvesting methodologies.
However, this study acknowledges certain limitations. Firstly, the experiments were conducted on rice samples of specific varieties grown under particular environmental conditions, necessitating further validation for applicability to other varieties and environments. Secondly, the strain response relationships of rice plants under multi-point loading conditions require further in-depth investigation to improve the model’s predictive accuracy and generalizability. Future research will address these limitations and explore broader agricultural machine vision applications, including, but not limited to, expanding the diversity of experimental samples, studying complex strain relationships under multi-point loading conditions, and investigating innovative applications of machine vision technology in precision agricultural management and automated harvesting processes.

Author Contributions

Conceptualization, W.Y.; methodology, W.Y., S.D. and I.G.; software, X.Z. and S.K.; validation, X.Z. and S.K.; formal analysis, W.Y., X.Z. and S.D.; investigation, W.Y. and X.Z.; resources, W.Y.; data curation, X.Z.; writing—original draft preparation, W.Y. and X.Z.; writing—review and editing, W.Y. and X.C.; visualization, X.Z. and S.K.; supervision, W.Y.; project administration, W.Y.; funding acquisition, W.Y. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Natural Science Foundation of Jiangxi Province (Grant No. 20212BAB202015); Jiangxi Provincial Special Program 03 and 5G Projects (Grant No. 20232ABC03A18).

Data Availability Statement

The code and data for this study experiment can be downloaded and accessed through the following link: https://github.com/bks520/MV-SSRP (accessed on 1 July 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, R.; Li, X.; Tan, M.; Xin, L.; Wang, X.; Wang, Y.; Jiang, M. Inter-provincial differences in rice multi-cropping changes in main double-cropping rice area in China: Evidence from provinces and households. Chin. Geogr. Sci. 2019, 29, 127–138. [Google Scholar] [CrossRef]
  2. Su, Y.; He, S.; Wang, K.; Shahtahmassebi, A.R.; Zhang, L.; Zhang, J.; Zhang, M.; Gan, M. Quantifying the sustainability of three types of agricultural production in China: An emergy analysis with the integration of environmental pollution. J. Clean. Prod. 2020, 252, 119650. [Google Scholar] [CrossRef]
  3. Shi, W.; Wang, M.; Liu, Y. Crop yield and production responses to climate disasters in China. Sci. Total Environ. 2021, 750, 141147. [Google Scholar] [CrossRef] [PubMed]
  4. Shah, L.; Yahya, M.; Shah, S.M.A.; Nadeem, M.; Ali, A.; Ali, A.; Wang, J.; Riaz, M.W.; Rehman, S.; Wu, W. Improving lodging resistance: Using wheat and rice as classical examples. Int. J. Mol. Sci. 2019, 20, 4211. [Google Scholar] [CrossRef]
  5. Song, X.; Hao, Y.; Wang, S.; Zhang, L.; Liu, H.; Yong, F.; Dong, Z.; Yuan, Q. Dynamic mechanical response and damage evolution of cemented tailings backfill with alkalized rice straw under SHPB cycle impact load. Constr. Build. Mater. 2022, 327, 127009. [Google Scholar] [CrossRef]
  6. Demey, M.L.; Mishra, R.C.; Van Der Straeten, D. Sound perception in plants: From ecological significance to molecular understanding. Trends Plant Sci. 2023, 28, 825–840. [Google Scholar] [CrossRef]
  7. Nutan, K.K.; Rathore, R.S.; Tripathi, A.K.; Mishra, M.; Pareek, A.; Singla-Pareek, S.L. Integrating the dynamics of yield traits in rice in response to environmental changes. J. Exp. Bot. 2020, 71, 490–506. [Google Scholar] [CrossRef]
  8. Chen, Z.; Li, P.; Jiang, S.; Chen, H.; Wang, J.; Cao, C. Evaluation of resource and energy utilization, environmental and economic benefits of rice water-saving irrigation technologies in a rice-wheat rotation system. Sci. Total Environ. 2021, 757, 143748. [Google Scholar] [CrossRef]
  9. Stubbs, C.J.; Oduntan, Y.A.; Keep, T.R.; Noble, S.D.; Robertson, D.J. The effect of plant weight on estimations of stalk lodging resistance. Plant Methods 2020, 16, 128. [Google Scholar] [CrossRef]
  10. Tang, Z.; Zhang, B.; Wang, B.; Wang, M.; Chen, H.; Li, Y. Breaking paths of rice stalks during threshing. Biosyst. Eng. 2021, 204, 346–357. [Google Scholar] [CrossRef]
  11. Zargar, O.; Zhao, Z.; Li, Q.; Zou, J.; Pharr, M.; Finlayson, S.; Muliana, A. A Photoacoustic Method to Measure the Young’s Modulus of Plant Tissues. Exp. Mech. 2023, 63, 1321–1333. [Google Scholar] [CrossRef]
  12. Rehman, T.U.; Mahmud, M.S.; Chang, Y.K.; Jin, J.; Shin, J. Current and future applications of statistical machine learning algorithms for agricultural machine vision systems. Comput. Electron. Agric. 2019, 156, 585–605. [Google Scholar] [CrossRef]
  13. Li, Y.; Huang, Q.; Pei, X.; Jiao, L.; Shang, R. RADet: Refine feature pyramid network and multi-layer attention network for arbitrary-oriented object detection of remote sensing images. Remote Sens. 2020, 12, 389. [Google Scholar] [CrossRef]
  14. Liu, Z.; Cai, Y.; Wang, H.; Chen, L.; Gao, H.; Jia, Y.; Li, Y. Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6640–6653. [Google Scholar] [CrossRef]
  15. Yang, X.; Yan, J.; Ming, Q.; Wang, W.; Zhang, X.; Tian, Q. Rethinking rotated object detection with gaussian wasserstein distance loss. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 11830–11841. [Google Scholar]
  16. Fan, Q.; Cui, G.; Zhao, Z.; Shen, J. Obstacle Avoidance for Microrobots in Simulated Vascular Environment Based on Combined Path Planning. IEEE Robot. Autom. Lett. 2022, 7, 9794–9801. [Google Scholar] [CrossRef]
  17. Chen, Y.; Wu, Z.; Zhao, B.; Fan, C.; Shi, S. Weed and corn seedling detection in field based on multi feature fusion and support vector machine. Sensors 2020, 21, 212. [Google Scholar] [CrossRef] [PubMed]
  18. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  19. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  20. Lu, Y.; Yi, S.; Zeng, N.; Liu, Y.; Zhang, Y. Identification of rice diseases using deep convolutional neural networks. Neurocomputing 2017, 267, 378–384. [Google Scholar] [CrossRef]
  21. Yang, X.; Yan, J.; Feng, Z.; He, T. R3det: Refined single-stage detector with feature refinement for rotating object. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; pp. 3163–3171. [Google Scholar] [CrossRef]
  22. Xie, X.; Cheng, G.; Wang, J.; Yao, X.; Han, J. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3520–3529. [Google Scholar] [CrossRef]
  23. Yang, Z.; Liu, S.; Hu, H.; Wang, L.; Lin, S. Reppoints: Point set representation for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9657–9666. [Google Scholar] [CrossRef]
  24. Han, J.; Ding, J.; Xue, N.; Xia, G.-S. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2786–2795. [Google Scholar] [CrossRef]
  25. Huang, J.; Liu, W.; Zhou, F.; Peng, Y. Effect of multiscale structural parameters on the mechanical properties of rice stems. J. Mech. Behav. Biomed. Mater. 2018, 82, 239–247. [Google Scholar] [CrossRef]
  26. Yadav, S.; Singh, U.M.; Naik, S.M.; Venkateshwarlu, C.; Ramayya, P.J.; Raman, K.A.; Sandhu, N.; Kumar, A. Molecular mapping of QTLs associated with lodging resistance in dry direct-seeded rice (Oryza sativa L.). Front. Plant Sci. 2017, 8, 1431. [Google Scholar] [CrossRef]
  27. Challapalli, A.; Li, G. 3D printable biomimetic rod with superior buckling resistance designed by machine learning. Sci. Rep. 2020, 10, 20716. [Google Scholar] [CrossRef] [PubMed]
  28. Zhou, F.; Huang, J.; Liu, W.; Deng, T.; Jia, Z. Multiscale simulation of elastic modulus of rice stem. Biosyst. Eng. 2019, 187, 96–113. [Google Scholar] [CrossRef]
  29. Shi, Y.; Jiang, Y.; Wang, X.; Thuy, N.T.D.; Yu, H. A mechanical model of single wheat straw with failure characteristics based on discrete element method. Biosyst. Eng. 2023, 230, 1–15. [Google Scholar] [CrossRef]
  30. Ottesen, M.A.; Larson, R.A.; Stubbs, C.J.; Cook, D.D. A parameterised model of maize stem cross-sectional morphology. Biosyst. Eng. 2022, 218, 110–123. [Google Scholar] [CrossRef]
  31. Stubbs, C.J.; Larson, R.; Cook, D.D. Mapping spatially distributed material properties in finite element models of plant tissue using computed tomography. Biosyst. Eng. 2020, 200, 391–399. [Google Scholar] [CrossRef]
  32. Richely, E.; Bourmaud, A.; Placet, V.; Guessasma, S.; Beaugrand, J. A critical review of the ultrastructure, mechanics and modelling of flax fibres and their defects. Prog. Mater. Sci. 2022, 124, 100851. [Google Scholar] [CrossRef]
  33. Baley, C.; Gomina, M.; Breard, J.; Bourmaud, A.; Davies, P. Variability of mechanical properties of flax fibres for composite reinforcement. A review. Ind. Crops Prod. 2020, 145, 111984. [Google Scholar] [CrossRef]
  34. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar] [CrossRef]
  35. Al-Zube, L.A.; Robertson, D.J.; Edwards, J.N.; Sun, W.; Cook, D.D. Measuring the compressive modulus of elasticity of pith-filled plant stems. Plant Methods 2017, 13, 99. [Google Scholar] [CrossRef]
  36. Li, J.; Wen, Y.; He, L. Scconv: Spatial and channel reconstruction convolution for feature redundancy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 6153–6162. [Google Scholar] [CrossRef]
  37. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 510–519. [Google Scholar] [CrossRef]
  38. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 13713–13722. [Google Scholar] [CrossRef]
  39. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542. [Google Scholar] [CrossRef]
  40. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 4–14 September 2018; pp. 3–19. [Google Scholar] [CrossRef]
Figure 1. Rice plant strain image acquisition device.
Figure 2. MV-SSRP model framework.
Figure 3. Rotating target recognition process.
Figure 4. Modeling of stress–strain relationship in rice plants.
Figure 5. Rice plant strain raw dataset.
Figure 6. MV-SSRP model. (a) Model-training precision-recall; (b) model-labeling plot after training completion.
Figure 7. Comparative experimental results between MV-SSRP and the original network model. (a) Training set; (b) validating set.
Figure 8. Model performance evaluation. (a) Average accuracy; (b) normalized confusion matrix.
Figure 9. Statistical analysis of predicted deformation values for various rice plants using the MV-SSRP model.
Table 1. Comparative experimental data table of different rotating target detection network models used on this study dataset.

Method                  Precision (%)   mAP@.5 (%)   mAP@.5:.95 (%)
R3Det                   85.3            81.5         41.7
Rotated Faster R-CNN    87.6            83.2         44.3
Rotated RepPoints       86.7            84.1         45.8
ReDet                   83.9            80.7         40.5
MV-SSRP (Ours)          93.4            97.6         52.6
Table 2. Comparative data of four evaluation metrics for various YOLOv8OBB network architectures.

Network Model    Accuracy (%)   Recall Rate (%)   Mean Accuracy mAP (%)   Target Loss Obj_loss
YOLOv8n-OBB      83.1           87.3              87.1                    0.0056
YOLOv8l-OBB      86.3           87.7              89.4                    0.0044
YOLOv8m-OBB      81.9           85.6              81.9                    0.0038
YOLOv8x-OBB      86.1           87.1              86.3                    0.0052
YOLOv8s-OBB      88.6           93.5              93.8                    0.0045
Table 3. Comparative data of four evaluation metrics for YOLOv8s-OBB with different modules added.

Network Model                  Accuracy (%)   Recall Rate (%)   Average Accuracy mAP (%)   Target Loss Obj_loss
4CBAM + YOLOv8sOBB             88.9           90.2              91.7                       0.0048
4CoorAttention + YOLOv8sOBB    86.2           86.7              85.2                       0.0046
4ECA + YOLOv8sOBB              84.5           83.5              82.8                       0.0049
4SE + YOLOv8sOBB               89.1           89.6              92.2                       0.0046
4SKAttention + YOLOv8sOBB      91.9           92.9              96.8                       0.005
MV-SSRP (Ours)                 93.4           92.6              97.6                       0.0052