Article

A Model for Yield Estimation Based on Sea Buckthorn Images

1 College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
2 Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sustainability 2023, 15(14), 10872; https://doi.org/10.3390/su151410872
Submission received: 26 April 2023 / Revised: 9 July 2023 / Accepted: 9 July 2023 / Published: 11 July 2023

Abstract

Sea buckthorn is an extremely drought-tolerant, resilient and sustainable crop that can be grown in areas with harsh climates and scarce resources, providing a source of nutrition and income for the local population. Image-based yield estimation allows better management of sea buckthorn cultivation to improve its productivity and sustainability, and the error in yield information caused by fruit occlusion can be substantially reduced by combining and analysing image features extracted with binocular cameras. In this paper, mature wild sea buckthorn in the mountainous area north of Hohhot City, Inner Mongolia Autonomous Region, was used as the study target. First, complete images of sea buckthorn branches were collected with binocular cameras and features were extracted: the colour index of the fruits, the number of fruits and four texture parameters (ASM, CON, COR and HOM). The features significantly correlated with sea buckthorn fruit weight were selected by correlation analysis and introduced into a BP neural network to train a yield estimation model. The results showed that the best yield estimation model combined the COR index with the colour index and the number of fruits, achieving a coefficient of determination R2 = 0.99267 and a root mean square error RMSE = 0.5214.

1. Introduction

Sea buckthorn is a sustainable crop that grows in marginal areas and has a wide range of economic and ecological values. It is not only used as a dietary supplement to boost the immune system, but also as an important industrial raw material for beverages, health products, pharmaceuticals, etc. [1,2]. Sea buckthorn cultivation does not require costly agricultural techniques and large amounts of fertilisers and pesticides and can be carried out on relatively infertile land [3]. Therefore, sea buckthorn cultivation is considered a sustainable agricultural practice. However, there are challenges associated with sea buckthorn cultivation and management, such as the difficulty of estimating and measuring yields. To address these issues and to improve the yield and quality of sea buckthorn, image-based yield estimation techniques for sea buckthorn have been proposed. This paper will explore the significance of sea buckthorn yield estimation technology in improving sea buckthorn production efficiency, reducing resource wastage, promoting economic development and increasing farmers’ income, with the aim of contributing to the promotion of sustainable development of sea buckthorn cultivation and society at large.
Yield is an important indicator for evaluating sea buckthorn cultivation, and rapid, accurate yield estimation is important for improving the efficiency of sea buckthorn breeding. Current crop yield estimation methods fall into three categories: traditional, spectral and image-based. Traditional estimation relies mainly on manual sampling, which is costly, time-consuming and prone to human error [4]. Because spectral data carry rich information covering reflectance in multiple bands, spectral techniques have unique advantages for obtaining crop growth parameters. For example, He et al. successfully estimated county-level maize production in China from 2015 to 2019 by integrating remotely sensed spectral images and climate data to calculate spectral indices using a random forest model [5]. However, spectral processing also suffers from expensive instruments and slow data processing. Image-based crop estimation can be divided into monocular and binocular vision according to the image acquisition method. For example, Wang et al. used the YOLOv5 algorithm for position detection of sainfoin in images along with yield estimation [6]. Zhao et al. constructed a rapid yield prediction model for rice based on two-dimensional images of rice spikes, which can give an initial estimate of field yield, though it assumes uniform spike growth across the paddy to ensure model accuracy [7].
The binocular vision yield estimation method can obtain the depth information of the image through feature matching, which has the advantages of wide application scenarios and high measurement accuracy compared to monocular vision and has the advantages of small data volume and fast processing speed compared to spectral processing [8].
Currently, yield estimation of sea buckthorn mainly relies on manual rough estimation, i.e., the total length of fruiting branches is first estimated by visual inspection or measurement, then two to three representative fruiting branches are selected, their length and number of fruit grains are measured and then the total number of fruits is calculated. Finally, the fruit yield is calculated based on the weight of 100 fruits [9]. As this method is time-consuming, labour-intensive and has a high error rate, a more accurate and efficient method is needed to address the issue of sea buckthorn yield measurement.
To this end, this study takes wild sea buckthorn as the research object. Image information of sea buckthorn branches was acquired using simulated binocular photography under laboratory conditions; morphological, colour and texture features were extracted from the binocular camera images through image processing techniques; and a neural-network-based yield estimation model was constructed from these features and verified to give reliable estimates. Compared with traditional manual estimation, this approach calculates sea buckthorn yield in real time from dual-camera images with high accuracy, high speed and low cost. It also provides a theoretical basis and technical support for preliminary research on yield estimation with UAV binocular vision cameras, as well as new ideas and methods for yield measurement of other fruit trees.

2. Materials and Methods

2.1. Experimental Materials

In this study, wild sea buckthorn from the mountainous area north of Hohhot, Inner Mongolia Autonomous Region (111°48′52″ E, 40°56′17″ N) was used. A total of 120 samples of sea buckthorn branches of different lengths and densities were collected in late September 2020 and photographed in the laboratory from two fixed positions, using a camera and a mobile phone to simulate binocular vision; the setup is shown in Figure 1. The camera was a Nikon D3300 (Nikon, Tokyo, Japan) with a resolution of 4496 × 3000, and the phone was an iPhone XR (Apple Inc., Cupertino, CA, USA) with a resolution of 4032 × 3024; both stored images in JPG format. Matlab 2019a was used to process the captured images.

2.2. Test Method

Firstly, complete images of sea buckthorn branches were captured by a binocular camera and the images were imported into Matlab software for feature extraction. The extracted features included extracting the colour index of sea buckthorn fruits using the K-Means algorithm, identifying the number of sea buckthorn fruits using the Hough circular transformation algorithm and calculating the image texture feature parameters using the grey scale co-occurrence matrix. Through the correlation calculation of these feature parameters, the features with significant correlation to the weight of sea buckthorn fruit were screened, the obtained correlation features were introduced into the BP neural network model for training and then the sea buckthorn estimation model was obtained. The specific experimental process is shown in Figure 2.

3. Data Processing

3.1. Colour Index Feature Extraction

The RGB colour space generates different colours by mixing and overlaying the three primary colours red, green and blue. Because the primary values of the channels are highly linearly correlated, a wide variety of colours can be produced easily, but RGB is not ideal for measuring colour differences since it is not designed around the properties of the human visual system. The Lab colour space is widely considered a better choice. A schematic of the Lab colour space is shown in Figure 3: all colour information lies in the chromaticity layers a* and b*, while luminance lies in L*. The Euclidean distance in Lab space can therefore be used as a perceptually meaningful measure of the difference between two colours [10], helping to determine more accurately how similar or different two colours are.
The K-Means algorithm is a commonly used unsupervised clustering algorithm for dividing a given dataset into K distinct categories; the clustering results can support deeper analysis of the data [11]. The value of K is usually set from experience and needs to be adjusted flexibly in practice to achieve the best clustering. When executing the algorithm, K cluster centroids are randomly initialised and their positions are then continuously adjusted through iterative optimisation until convergence. The core of K-Means is thus to iteratively update the cluster centroids until the clustering result converges and reaches an optimum.
In the specific implementation of the K-Means algorithm, K clustering centroids need to be selected randomly first. Then, for each data point, the Euclidean distance to each cluster centroid is calculated and assigned to the cluster with the closest distance. Next, an averaging operation is performed on the coordinates of the data points in each cluster to update the positions of the cluster centroids. This is an iterative process until the cluster centroids no longer change or change very little, then the algorithm converges and the K-Means clustering results are obtained.
J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\| x_i - u_j \right\|^2    (1)

where the x_i are the n data points assigned to cluster j and u_j is the centroid of cluster j.
By minimising this objective, K-Means yields high intra-class similarity and low inter-class similarity, converging to a local optimum, which makes it well suited to colour image segmentation. The original RGB image shown in Figure 4 was converted to a Lab image, the a* and b* layers were extracted and colour-based segmentation with K-Means was applied to extract the sea buckthorn fruits. The results are shown in Figure 5.
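The assignment/update iteration described above can be sketched as a minimal NumPy implementation. This is an illustration only (the study used Matlab); the function name and the optional explicit-initialisation parameter are assumptions for the sketch:

```python
import numpy as np

def kmeans(X, k, init=None, n_iter=100, seed=0):
    """Minimal K-Means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    # Initialise centroids from k random data points unless given explicitly.
    centroids = X[rng.choice(len(X), size=k, replace=False)] if init is None else np.array(init, float)
    for _ in range(n_iter):
        # Assignment step: each point goes to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        new = centroids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                new[j] = members.mean(axis=0)
        if np.allclose(new, centroids):  # converged: centroids no longer move
            break
        centroids = new
    return labels, centroids
```

For segmentation, each pixel's (a*, b*) pair would be a row of `X`, and the cluster whose centroid is closest to the fruit colour would be kept as the foreground mask.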
As shown in Figure 6, after converting the segmented sea buckthorn fruit image into a binary image, noise such as branch fragments remains and some fruits contain holes. To eliminate this noise, holes in the binary image were filled, regions smaller than 350 pixels were removed using small-area removal under eight-connectivity and a morphological closing operation was applied to fill the remaining gullies, giving the complete binary image of the fruit region shown in Figure 7.
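These cleanup steps (hole filling, small-area removal under eight-connectivity, morphological closing) can be sketched with SciPy's `ndimage` module; this is an illustrative reimplementation, not the Matlab code used in the study, and the function name is assumed:

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_area=350):
    """Fill holes, drop 8-connected components below min_area, then close gaps."""
    mask = ndimage.binary_fill_holes(mask)
    # Label 8-connected regions (3x3 structuring element) and measure their sizes.
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    # Keep only regions whose pixel count reaches the area threshold.
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))
    # Morphological closing fills the remaining narrow gullies.
    return ndimage.binary_closing(keep, structure=np.ones((3, 3)))
```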
The colour index of a sea buckthorn image was then obtained by counting the foreground pixels in the resulting binary fruit image; this value is a good indicator of the proportion of the captured sample image occupied by fruit.
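Given the cleaned binary mask, the colour index reduces to a pixel count; a minimal sketch (function name illustrative), returning both the absolute count and the fruit fraction of the image:

```python
import numpy as np

def colour_index(mask):
    """Foreground pixel count of a binary fruit mask, plus the fruit fraction."""
    mask = np.asarray(mask, dtype=bool)
    return int(mask.sum()), float(mask.mean())
```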

3.2. Hough Circle Transform to Identify the Number of Fruits

Hough circle detection is a common image feature extraction technique for detecting objects with circular features. It determines the presence of circular objects by a voting computation over the mapping between image space and parameter space [12,13]. For a circle of given radius and centre, each of its points in image space corresponds to a cone in the three-dimensional parameter space; conversely, all points on a circle in image space correspond to the intersection of a cluster of cones in parameter space [14]. During the Hough circle transform, the image is scanned pixel by pixel to compute the centres and radii of all parameter circles corresponding to each pixel, and these parameters are then tallied by voting to identify the most likely circular features. However, because the parameter space is three-dimensional, detection is strongly affected when the image contains many interfering edges. The Hough transform processing flow chart is shown in Figure 8.
To avoid the effect of noise during the detection of sea buckthorn fruit regions, the resulting binarised fruit region image can be used as a mask to mark the non-fruit regions in the original colour image, thus achieving a colour image that retains only the sea buckthorn fruit regions. Next, the colour image containing only sea buckthorn fruits is Hough transformed to find the round fruits therein, and finally these detected fruits are labelled onto the original unprocessed sea buckthorn image and the number of detected sea buckthorn fruits is recorded. Through this process, the influence of other distracting edges in the image can be effectively reduced, enabling the Hough circle detection algorithm to detect sea buckthorn fruits more accurately, thus improving the accuracy and reliability of detection. The final detection results are shown in Figure 9, where the detected round fruits are marked on the original image, making the area of the fruit more visible.
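As a minimal illustration of the voting idea, a NumPy accumulator can be sketched for a single known radius (the transform proper also searches over radius, which adds the third parameter dimension; the function name and threshold parameter are assumptions of this sketch):

```python
import numpy as np

def hough_circle_centres(edges, radius, rel_threshold=0.9):
    """Vote for circle centres of one known radius in a binary edge map."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    # Offsets of points on a circle of the given radius (the voting template).
    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    dy = np.round(radius * np.sin(angles)).astype(int)
    dx = np.round(radius * np.cos(angles)).astype(int)
    for y, x in zip(*np.nonzero(edges)):
        cy, cx = y - dy, x - dx          # candidate centres for this edge pixel
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    # Report accumulator cells near the maximum vote count as centres.
    return np.argwhere(acc >= rel_threshold * acc.max()), acc
```

Masking the image to fruit regions first, as described above, shrinks the edge set and therefore the number of spurious votes.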

3.3. Extraction of Texture Parameters

The grey-level co-occurrence matrix (GLCM) is a classical method for characterising texture. It captures statistics of regional texture by describing relationships in the grey space of an image, such as orientation, neighbourhood spacing and magnitude of change [15]. Because the raw co-occurrence matrix has high dimensionality and a large data volume, Haralick proposed in the 1970s a set of 14 feature parameters derived from it to describe image texture [16,17]. Among these, energy (ASM), contrast (CON), correlation (COR) and homogeneity (HOM) are widely used in medical image processing, computer vision and other fields, and these four parameters were chosen in this paper to form the image texture feature space. Specifically, energy (ASM) is the sum of the squares of the co-occurrence matrix entries and measures the uniformity of the grey-level distribution; contrast (CON) describes the degree of difference between grey values; correlation (COR) describes the degree of correlation between pixel pairs; and homogeneity (HOM) describes the similarity of neighbouring grey values. Computing these parameters characterises image texture more accurately and provides a reliable basis for subsequent analysis and processing.
(1)
Energy, also known as the angular second moment, measures the uniformity of the image's grey-level distribution and the coarseness of its texture; it is calculated as shown in Equation (2).
ASM = \sum_{i=0}^{K-1} \sum_{j=0}^{K-1} \left[ P(i,j \mid d, \theta) \right]^2    (2)
(2)
Contrast, also known as moment of inertia, measures the distribution of pixel matrix values and the amount of local variation in an image, reflecting the sharpness of the image and the depth of grooves in the texture. The formula is shown in Equation (3).
CON = \sum_{i=1}^{g} \sum_{j=1}^{g} (i - j)^2 \, m(i,j)    (3)
(3)
Correlation measures the similarity of grey levels along the row or column direction; its value therefore reflects the local grey-level correlation, with larger values indicating stronger correlation. The formula is shown in Equation (4).
COR = \sum_{i} \sum_{j} \frac{(i - \mu_i)(j - \mu_j) \, p(i,j)}{\sigma_i \sigma_j}    (4)
(4)
Homogeneity, also called the inverse difference moment, reflects the homogeneity of the image texture and measures the amount of local variation. It is calculated as shown in Equation (5).
HOM = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{P(i,j \mid d, \theta)}{1 + (i - j)^2}    (5)
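Equations (2)-(5) can be illustrated with a small NumPy sketch that builds the co-occurrence matrix for one offset and evaluates the four features. This is an assumption-laden illustration (the study used Matlab's GLCM routines; the function name and the single-offset simplification are ours):

```python
import numpy as np

def glcm_features(img, levels=8, d=(0, 1)):
    """Co-occurrence matrix for offset d, then ASM, CON, COR and HOM."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    # Count grey-level pairs separated by the offset d = (dy, dx).
    for y in range(h - d[0]):
        for x in range(w - d[1]):
            P[img[y, x], img[y + d[0], x + d[1]]] += 1
    P /= P.sum()                                   # normalise to probabilities
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    asm = (P ** 2).sum()                           # energy, Eq. (2)
    con = ((i - j) ** 2 * P).sum()                 # contrast, Eq. (3)
    cor = ((i - mu_i) * (j - mu_j) * P).sum() / (s_i * s_j)  # correlation, Eq. (4)
    hom = (P / (1 + (i - j) ** 2)).sum()           # homogeneity, Eq. (5)
    return asm, con, cor, hom
```

On a perfect checkerboard, for instance, the horizontal offset only ever pairs different grey levels, so contrast is maximal and correlation is strongly negative.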

3.4. Correlation Analysis

Correlation analysis is a statistical method for studying the relationship between random variables: it examines whether phenomena depend on one another and, where a dependence exists, determines its direction and strength. In this study, correlation analysis was performed between the actual yield of sea buckthorn and six feature parameters extracted from the images: the colour index, the number of fruits and the four texture parameters ASM, CON, COR and HOM. The correlation coefficient r was used to grade the strength of association, from slight linear correlation through real and significant linear correlation to high linear correlation as |r| increases. For two random variables A and B, each with N scalar observations, the correlation coefficient is calculated as
r = \frac{1}{N-1} \sum_{i=1}^{N} \left( \frac{A_i - \mu_A}{\sigma_A} \right) \left( \frac{B_i - \mu_B}{\sigma_B} \right)    (6)

where \mu_A and \sigma_A are the mean and standard deviation of A, and \mu_B and \sigma_B are the mean and standard deviation of B.
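The formula can be checked with a short NumPy sketch (the helper name is illustrative; the result matches `np.corrcoef` up to floating-point error):

```python
import numpy as np

def corr(a, b):
    """Sample correlation coefficient r between two observation vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    # ddof=1: sample standard deviation, matching the 1/(N-1) factor.
    za = (a - a.mean()) / a.std(ddof=1)
    zb = (b - b.mean()) / b.std(ddof=1)
    return (za * zb).sum() / (n - 1)
```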
Correlation analysis was performed between the actual yield and the six image features: the colour index, the number of fruits and the four texture parameters ASM, CON, COR and HOM. The colour index and the number of fruits detected by the Hough circle transform each correlated well with actual yield, with correlation coefficients r of 0.9707 and 0.9140, respectively, both indicating high correlation. Among the four texture parameters, COR had r = 0.4393, a real correlation, while ASM, CON and HOM had correlation coefficients of −0.1193, 0.0224 and −0.0224, respectively, showing only slight correlation.

4. Model Construction

A BP neural network is a multilayer feed-forward artificial neural network characterised by continuous transfer functions and an error back-propagation algorithm. It was proposed in 1986 by scientists led by Rumelhart and McClelland to address the limitations of earlier artificial neural networks on non-linear problems [18]. The BP algorithm minimises the mean square error by continuously adjusting the weights and thresholds of the network, improving its fitting ability [19]. In this study, a three-layer network consisting of an input layer, a hidden layer and an output layer was used to construct the yield estimation model: the input layer receives the raw features, the hidden layer maps them to a higher-dimensional representation through a non-linear transformation and the output layer converts these features into the final yield estimate. Hyperparameters such as the learning rate, momentum coefficient and number of iterations must be set before training; their values are shown in Table 1. Using a BP neural network in this way allows yield to be predicted more accurately, improving production efficiency and economic benefit.
After constructing a feed-forward neural network, feature parameters with high correlation coefficients were selected to combine with the actual yield of sea buckthorn to train the sea buckthorn estimation model. Of the 120 sets of sea buckthorn data, 70% of them were selected as the training set to train the neural network, and the remaining 30% of the sample data were used as the test set to verify the reliability of the model.
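Table 1 specifies a tanh hidden layer trained with Bayesian regularisation in Matlab; the back-propagation mechanics themselves can be illustrated with plain gradient descent in NumPy. Everything here (function name, initialisation, learning rate, epoch count) is an illustrative assumption, not the configuration of Table 1:

```python
import numpy as np

def train_bp(X, y, hidden=12, lr=0.01, epochs=2000, seed=0):
    """Three-layer BP network: input -> tanh hidden layer -> linear output,
    trained by gradient descent on the mean squared error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # hidden-layer activations
        out = (H @ W2 + b2).ravel()              # linear output layer
        err = out - y
        losses.append(float(np.mean(err ** 2)))
        # Error back-propagation: gradients of the MSE w.r.t. each weight.
        g_out = (2.0 / n) * err[:, None]
        gW2, gb2 = H.T @ g_out, g_out.sum(axis=0)
        g_hid = (g_out @ W2.T) * (1.0 - H ** 2)  # tanh'(z) = 1 - tanh(z)^2
        gW1, gb1 = X.T @ g_hid, g_hid.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return (W1, b1, W2, b2), losses
```

In the paper's setting, each row of `X` would hold the selected correlated features (colour index, fruit count, COR) for one branch sample and `y` the measured fruit weight.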

5. Results and Analysis

5.1. Correlation between Image Features and Sea Buckthorn Yield

The yield of sea buckthorn and the corresponding extracted image features (colour index, number of fruits, ASM, CON, COR and HOM) are shown in Table 2, and the correlation between these features and actual yield was analysed. Among the four texture parameters, COR had a correlation coefficient of 0.4393, a real correlation, while ASM, CON and HOM had coefficients of −0.1193, 0.0224 and −0.0224, respectively, showing only slight correlation.

5.2. Results of Yield Estimation Model Validation Analysis

The validation results of the neural-network-based sea buckthorn yield estimation model are shown below; feature parameters with at least a real correlation were selected as inputs and the actual yield of sea buckthorn as output.
(1)
A yield estimation model was constructed using only the texture feature COR as input. On validation it achieved a coefficient of determination R2 = 0.4433 and root mean square error RMSE = 3.7643; the model performed poorly.
(2)
When the sea buckthorn colour index and the identified number of fruits were used as model inputs and the actual fruit yield as output, the model predicted well, with coefficient of determination R2 = 0.98379 and root mean square error RMSE = 0.6916. The validation of this yield prediction model is shown in Figure 10.
(3)
The best prediction was achieved when all feature parameters with at least a real correlation were used, i.e., when the colour index, the identified number of fruits and the texture feature COR were combined as model inputs and the actual fruit yield was the output. On validation this model achieved a coefficient of determination R2 = 0.99267 and root mean square error RMSE = 0.5214. The yield prediction model validation results are shown in Figure 11.

6. Conclusions

(1)
This study investigated the correlation between the features extracted based on sea buckthorn images and the actual yield of sea buckthorn. The results showed that both the colour index and the number of sea buckthorn fruits correlated with the actual yield of sea buckthorn with coefficients of 0.9707 and 0.9140, respectively, showing high correlation characteristics. The colour index and the number of sea buckthorn fruits contributed significantly to the accuracy of the sea buckthorn yield estimation model. They were used as a basis for estimating the yield of sea buckthorn with good results.
(2)
The correlation between the texture index and the actual yield of sea buckthorn was lower than that between the colour index and the number of sea buckthorn fruits. The correlation between COR and actual yield was moderate, with a correlation coefficient of 0.4393. The correlation between the other three texture indices, ASM, CON and HOM, and the measured yield was low, all being slightly correlated.
(3)
When the texture indices alone were used as input to construct the sea buckthorn estimation model, the yield estimation model coefficient of determination obtained was extremely low and the model estimation was poor. Combining the COR index with the colour index and the number of sea buckthorn fruits gave the best estimates with a model coefficient of determination R2 = 0.99267 and root mean square error RMSE = 0.5214. The results showed that the COR index contributed positively to the accuracy of the estimation model, but its contribution was low, and COR alone could not be relied upon for yield estimation. When COR is combined with other parameters that are highly correlated with yield, it can then be used as an auxiliary basis for yield estimation and can improve the estimation of the model to some extent.

7. Discussion

In this study, a yield estimation model for sea buckthorn was constructed and yield prediction was achieved from image information with good results, but some shortcomings remain. Because sea buckthorn fruits grow densely, fruits in the images are occluded and stacked on one another; this is the main reason why the number of fruits identified by the Hough transform correlates less strongly with yield than the colour index does. There are also areas for improvement: (1) the image pre-processing methods need strengthening, since practical applications of yield estimation will involve more complex image content and noise interference, so the pre-processing pipeline must be further improved; (2) estimating yield from branch images is still some distance from practical deployment, and estimation of yield for whole sea buckthorn plants should be studied on the basis of this research.

Author Contributions

Y.D.: Conceptualization, Methodology, Software, Investigation, Formal Analysis, Writing—Original Draft; H.W.: Conceptualization, Methodology, Software, Investigation, Formal Analysis, Writing—Original Draft; C.W.: Visualization, Investigation; C.Z.: Resources, Supervision; Z.Z.: Conceptualization, Funding Acquisition, Resources, Supervision, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key Project of the Education Department of Inner Mongolia Autonomous Region under Grant No. NJZZ22509.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data cannot be made public at this time due to the confidentiality requirements of the project team for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sui, M.; Liu, G.-D. Research progress on the comprehensive value of sea buckthorn. Shihezi Sci. Technol. 2020, 2, 1–2. [Google Scholar] [CrossRef]
  2. Guan, B.; Sun, P. Research progress of sea buckthorn resources and its cultivation technology. Anhui Agric. Sci. 2014, 22, 7401–7403+7520. [Google Scholar] [CrossRef]
  3. Sun, W.P.; Ma, N.; Dang, Y.Y. Extraction of various effective components from sea buckthorn pomace and its antioxidant properties. Food Ind. 2018, 6, 151–155. Available online: https://CNKI:SUN:SPGY.0.2018-06-039 (accessed on 20 May 2023).
  4. Wang, J.P.; Wu, H.Q.; Wang, D.J.; Xuan, J.W.; Guo, T.; Li, Y.K. A model for wheat yield estimation based on UAV visible light images and physiological indicators. J. Wheat Crops 2021, 10, 1307–1315. [Google Scholar]
  5. He, Y.; Qiu, B.; Cheng, F.; Chen, C.; Sun, Y.; Zhang, D.; Lin, L.; Xu, A. National Scale Maize Yield Estimation by Integrating Multiple Spectral Indexes and Temporal Aggregation. Remote Sens. 2023, 15, 414. [Google Scholar] [CrossRef]
  6. Wang, F.; Bian, X.; Wang, H.; Li, X. Monitoring and information management system for sainfoin cultivation based on YOLOv5 and ResNet. Inf. Technol. Informatiz. 2023, 4, 83–86. [Google Scholar]
  7. Zhao, S.; Zheng, H.; Chi, M.; Chai, X.; Liu, Y. Rapid yield prediction in paddy fields based on 2d image modelling of rice panicles. Comput. Electron. Agric. 2019, 162, 759–766. [Google Scholar] [CrossRef]
  8. Zeng, H.W.; Lei, J.B.; Tao, J.F.; Liu, C.L. A machine vision-based yield measurement method for grain combine harvesters. J. Agric. Mach. 2021, 52, 281–289. [Google Scholar]
  9. Jin, J.; Wen, X.; Gao, F.; Gu, Y.; Wang, D.; Guo, H.; Te, J. A method for estimating sea buckthorn fruit yield survey. Int. Seabuckthorn Res. Dev. 2013, 3, 1–7. [Google Scholar]
  10. Zou, Q.; Yang, L.; Peng, L.; Zheng, Q. Research on leaf segmentation algorithm based on Lab space and K-Means clustering. Agric. Mech. Res. 2015, 9, 222–226. [Google Scholar] [CrossRef]
  11. Xu, L.; Lv, J.D. Image segmentation of poplar plum based on homomorphic filtering and K-mean clustering algorithm. J. Agric. Eng. 2015, 14, 202–208. [Google Scholar] [CrossRef]
  12. Cheng, L.; Wan, X.; Zhang, S.; Wang, C.Y.; Pan, X.W. Identification of log transport vehicles in forest areas based on YCbCr and Hough transformed circles. For. Resour. Manag. 2020, 4, 140–145. [Google Scholar] [CrossRef]
  13. Gong, X.; Zhang, N. Improvement of circle detection algorithm based on Hough transform. Inf. Technol. 2020, 6, 89–93+98. [Google Scholar] [CrossRef]
  14. Lin, Y.; Zhao, H.; Yang, Z.; Lin, M. An equal-length log volume detection system combining deep learning and Hough transform. J. For. Eng. 2021, 6, 136–142. [Google Scholar] [CrossRef]
  15. Wu, Z.; Liu, X.; Zheng, L.; Zhou, K. A method for fabric defect detection based on grayscale co-occurrence matrix feature images. Micromach. Appl. 2015, 21, 47–50+54. [Google Scholar] [CrossRef]
  16. Li, J.X.; Zhao, S.; Jin, H.; Li, Y.F.; Guo, Y. Extraction of seismic damage information of buildings from high-resolution remote sensing imagery combining texture and morphological features. J. Seismol. 2019, 5, 658–670+681. [Google Scholar] [CrossRef]
  17. Wang, H.B.; Xie, Y.F. Improved grayscale co-occurrence matrix for surface defect detection of printed materials. Packag. Eng. 2020, 23, 272–278. [Google Scholar] [CrossRef]
  18. Jiao, L.-C.; Yang, S.-Y.; Liu, F.; Wang, S.-G.; Feng, Z.-X. Seventy years of neural networks: A review and outlook. J. Comput. Sci. 2016, 8, 1697–1716. [Google Scholar]
  19. Wei, L.-X.; Dong, J.-J.; Chen, C.-L.; Zhao, F. A BP neural network-based prediction model for urban freight generation. J. Shanghai Marit. Univ. 2020, 4, 50–54+86. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of image acquisition.
Figure 2. Flow chart of experimental design.
Figure 3. Lab color space schematic. L* represents luminance; a* represents the component from green to red; b* represents the component from blue to yellow.
Figure 4. Specimen of original sea buckthorn.
Figure 5. Fruit image after clustering segmentation.
Figure 6. Fruit image after binarization.
Figure 7. Fruit image after morphological noise reduction.
Figure 8. Hough transform processing flow chart.
Figure 9. Sea buckthorn fruits identified by Hough circle transform.
Figure 10. Validation results of the yield estimation model established by the color index and the number of fruits.
Figure 11. Validation results of production estimation model.
Table 1. Model parameters set before training.

Parameter Name | Numerical Value
Number of neurons in the hidden layer | 12
Hidden layer activation function | f(x) = tanh(x) = sinh(x)/cosh(x) = (e^x − e^(−x)) / (e^x + e^(−x))
Maximum number of training epochs | 1000
Learning rate | 1 × 10^(−7)
Training method | Bayesian regularisation
Table 2. Image feature parameters corresponding to the actual fruit weight of sea buckthorn.

Actual Production | Colour Index | Number of Fruits | ASM | CON | COR | HOM
15.04 | 184959 | 94 | 0.482063485 | 0.021726066 | 0.973004419 | 0.989138846
24.45 | 293472 | 165 | 0.452011555 | 0.023546681 | 0.977990012 | 0.988227846
10.66 | 130838 | 79 | 0.537593893 | 0.018221283 | 0.978911523 | 0.990890347
9.5 | 100719 | 48 | 0.646356205 | 0.015612977 | 0.977545958 | 0.992194203
7.6 | 90863 | 54 | 0.493857581 | 0.02511561 | 0.966078936 | 0.987445952
12.25 | 124752 | 70 | 0.510514075 | 0.019541713 | 0.977337629 | 0.990243382
6.1 | 66598 | 36 | 0.48271356 | 0.018471932 | 0.969579455 | 0.990764133
7.64 | 79300 | 41 | 0.624830359 | 0.016594512 | 0.97562573 | 0.991706204
Du, Y.; Wang, H.; Wang, C.; Zhang, C.; Zong, Z. A Model for Yield Estimation Based on Sea Buckthorn Images. Sustainability 2023, 15, 10872. https://doi.org/10.3390/su151410872