Article

Intelligent Measurement of Frontal Area of Leaves in Wind Tunnel Based on Improved U-Net

1 College of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
2 College of Electrical and Information Engineering, Heilongjiang University of Technology, Jixi 158100, China
3 School of Civil Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(17), 2730; https://doi.org/10.3390/electronics11172730
Submission received: 1 August 2022 / Revised: 25 August 2022 / Accepted: 29 August 2022 / Published: 30 August 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

Research on the aerodynamic characteristics of leaves is part of the study of wind-induced tree disasters and has relevance to plant biological processes. The frontal area, which varies with the structure of leaves, is an important physical parameter in studying the aerodynamic characteristics of leaves. In order to measure the frontal area of a leaf in a wind tunnel, a method based on improved U-Net is proposed. First, a high-speed camera was used to collect leaf images in a wind tunnel; secondly, the collected images were corrected, cropped and labeled, and the dataset was then expanded by scaling transformation; thirdly, by reducing the depth of each layer of the encoder and decoder of U-Net and adding batch normalization (BN) and dropout layers, the model parameters were reduced and the convergence speed was accelerated; finally, the images were segmented with the improved U-Net to measure the frontal area of the leaf. The training set was divided into three groups in the experiment. The experimental results show that the MIoUs based on the improved U-Net trained on the three datasets were 97.67%, 97.78% and 97.88%, respectively. The improved U-Net model improved the measurement accuracy significantly when the dataset was small. Compared with the manually labeled image data, the RMSEs of the frontal areas measured by the models based on the improved U-Net were 1.56%, 1.63% and 1.60%, respectively. The R2 values of the three measurements were all 0.9993. The frontal area of a leaf can be accurately measured based on the proposed method.

1. Introduction

Wind imposes a stress on trees to which their response can range from minor movement of leaves, branches and stems to catastrophic failure in the form of stem breakage and uprooting [1]. Wind damage can reduce the wood production of trees and cause huge economic losses and even loss of life [2,3,4]. The mechanisms, losses and control technology of wind damage to trees are currently active research fields. The common basis for these studies is an understanding of the aerodynamics of trees [5]. The stem, branch and leaf are three important components of trees, and the aerodynamic study of their behavior is the basis for determining the aerodynamics of trees [6]. A leaf is the smallest unit of a single tree, and leaf damage is the first effect of wind-induced tree disasters [7]. For a fully leafed tree, the drag of its leaves should be the largest force: because the leaves are positioned further from the ground than the trunk and branches, they contribute a greater fraction of the torque about the base [8]. However, leaves are also a source of vibration damping, which reduces the degree of damage [9,10]. Therefore, it is necessary to understand the aerodynamic characteristics of leaves as part of the study of wind-induced tree disasters [5]. In addition, research on the aerodynamic characteristics of leaves also has relevance to the biological processes of plants [11], such as photosynthesis [12], gaseous exchange [13], water retention and herbivore attacks [14,15,16].
At the leaf scale, wind is known to affect the time-averaged position of the leaf, as well as its shape, a mechanism generically referred to as reconfiguration [9,17,18]. This is a form of self-protection by which plants reduce the drag their leaves exert on them. Wind tunnel experiments are the main tool for studying the reconfiguration of leaves under wind load [5,9,10,18,19,20,21]: the reconfiguration of leaves is observed at different wind speeds to explore their aerodynamic characteristics. Vogel first described the reconfiguration phenomenon of leaves and provided data on drag reduction due to reconfiguration in broad leaves [8,9]. A typical relationship between drag and velocity for a rigid body is shown in (1):
$$F = \frac{1}{2} C_d \rho v^2 A \quad (1)$$
where C_d is the drag coefficient, ρ is the air density, A is the frontal area and v is the wind speed. For a rigid bluff body, the frontal area does not vary with wind speed; however, this is not the case for leaves, which change shape with wind speed. Vogel found that the drag coefficients of leaves decrease with increasing wind speed [9], and frontal area, wind speed and aerodynamic force coefficients affect each other. However, it is difficult to measure the change in frontal area over time synchronously, which limits accurate analysis and dynamic modeling of leaf behavior under wind load. Therefore, a method to measure the time-varying frontal area of leaves under wind load in a wind tunnel is of great importance to the study of leaf aerodynamics.
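To make Eq. (1) concrete, the short check below evaluates it for one operating point; the drag coefficient and frontal area are illustrative assumptions, not values from this study.

```python
# Numerical check of Eq. (1) under assumed values (not data from this study).
rho = 1.225          # air density, kg m^-3 (standard sea-level value)
C_d = 0.5            # assumed drag coefficient of the leaf
A = 15e-4            # assumed frontal area: 15 cm^2 expressed in m^2
v = 10.0             # wind speed, m/s

F = 0.5 * C_d * rho * v ** 2 * A   # drag force in newtons
print(f"F = {F:.4f} N")            # F = 0.0459 N
```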
At present, the frontal area is mainly measured by photogrammetry. Three approaches are common: using coordinate paper, using image-editing software and automatic segmentation by a program after setting a threshold.
  • Using coordinate paper: Some researchers pasted coordinate paper on the wind tunnel wall and used a camera to photograph the projection of the object on the wall. The frontal area of the object was obtained from the size of the projected image on the coordinate paper [20,22]. This method requires manual operation and is time-consuming and laborious.
  • Using image software: In order to segment the images quickly, some researchers placed two concentrated flashlights on the ceiling and downstream of the wind tunnel, turned off the other lights and lit the objects up against a black tunnel background to increase the contrast of the object images. The measured objects were then segmented with the “Photoshop CS5” tool and the frontal area was calculated by counting RGB values [23,24]. Although this method can measure the frontal area semi-automatically using image processing software, it requires a good experimental environment and manual operation. These two methods are typically used to measure the mean frontal area or its value at a particular time.
  • Automatic segmentation by the program after setting the threshold: In order to further improve the speed of data processing, automatic processing by a program is adopted. Hao used a specially designed white model tree in the experiment, which appeared white in grayscale images [25]. By setting a threshold, the program determines whether each pixel in a grayscale image is white (tree) or black (no tree). In this process, all “tree pixels” are assigned bright white, and the frontal area of the tree is estimated by counting the white pixels in the processed image (a minimal sketch of this thresholding step is shown after this list). Although this method calculates the frontal area automatically, it places specific requirements on the color of the object.
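The sketch below illustrates the thresholding approach just described; it is a minimal reconstruction, assuming a grayscale frame file and a threshold of 200, neither of which is specified in the cited studies.

```python
import cv2
import numpy as np

# Threshold-based segmentation as described above (hypothetical file name
# and threshold value; both are assumptions for illustration).
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Pixels brighter than the threshold become "object" (255), others background (0).
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# The frontal area in pixels is the count of white pixels.
object_pixels = int(np.count_nonzero(binary))
print(object_pixels)
```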
The advantages and disadvantages of the above methods are summarized in Table 1. Measuring how the frontal area of leaves changes with time requires collecting many pictures (hundreds to thousands). Manual and semi-manual methods are obviously impractical for processing such a large number of pictures. The current automatic processing method, however, relies on threshold-based image segmentation, which places high demands on the environment and requires the object to be clearly distinguishable from the background. Joseph used the thresholding method to segment images of a crop panicle to detect its frontal area and movement. In the experiment, the crop panicle was colored red to make it distinguishable from the background; in field tests, however, the method worked poorly in low light, because the crop panicle could not be clearly distinguished from the background [26]. Leaves curl in the wind, so leaves in a picture still contain shadows even under good lighting. Shadows are areas lacking light, which makes it difficult to distinguish the edge regions of leaves from the background and renders threshold segmentation ineffective.
The above methods all take the imaged area of the object as its frontal area. The most critical step is to segment the object image accurately and then calculate the frontal area by counting the pixels of the segmented image. Therefore, in order to measure the frontal area of a leaf, an automatic segmentation method is needed for images of leaves in the wind. Currently, many image segmentation methods are used for leaf images. In the laboratory, leaf image segmentation mainly adopts threshold segmentation against a white background [27,28]. Methods for leaf segmentation with a natural background include the graph-based approach, model-based approach, region-based active contours, 3D-histogram-based segmentation and Chamfer matching [29]. These methods are semi-automatic, require user interaction to segment the region of interest and are not suitable for automatic segmentation of images of leaves in the wind.
With the development of machine learning, deep learning technology is widely used in agriculture [30], including leaf and crop classification [31,32] and crop image segmentation [33,34]. However, deep learning methods often require training on large labeled datasets, and labeling data is time-consuming. U-Net [35] is one of the classical semantic segmentation models [36] and can train a deep convolutional neural network on small datasets. It has the advantages of a simple structure and high segmentation performance, and researchers have measured different objects based on it: Yu performed intelligent measurement of fish morphological characteristics [37], Paulauskaite-Taraseviciene measured garments automatically [38] and Li measured the body shapes of goats and cattle under different backgrounds [39]. However, the dataset composed of different poses of a leaf in the wind contains only two categories (leaf and background), and its number of feature combinations is far smaller than that of the PhC-U373 and DIC-HeLa datasets used by U-Net. With the original U-Net filter depths, the network does not converge easily, and the larger the number of parameters in each layer, the longer the training takes.
To solve the above problems, an improved U-Net-based image segmentation method is proposed to segment leaf images in a wind tunnel, which performs well on small datasets. Adding a batch normalization (BN) layer after each convolution layer lets the network train and converge faster. A dropout layer is added at the end of each layer of the encoder and decoder to alleviate the over-fitting aggravated by the introduction of BN, making the network less dependent on local features. Because the images contain few feature combinations, the depth of each layer of the original U-Net encoder and decoder is reduced to speed up convergence. The experimental results show that the improved network improves the accuracy of leaf image segmentation, especially when the amount of training data is small, and achieves leaf segmentation and frontal area measurement in the wind tunnel. The main contributions of this paper are as follows:
  • A method for the automatic and accurate measurement of the frontal area of leaves in a wind tunnel based on improved U-Net is presented. The method only needs a small amount of labeled data and places no special requirements on the experimental environment or the object to be measured.
  • The performance of image segmentation and the measurement accuracy of frontal area are compared between the proposed improved U-Net and the original U-Net.

2. Materials and Methods

2.1. Data Collection

Data collection was completed in the wind tunnel laboratory at Northeast Forestry University, Harbin, China. The wind tunnel is a closed-return type with a maximum wind speed of 60 m/s; the turbulence intensity of the flow field and the non-uniformity of the velocity field are both less than 0.5%. The test section measured 1.0 m (height) × 0.8 m (width) × 5 m (length). The wind speed in this test was varied gradually from 2.99 m/s to 20.05 m/s in increments of 0.5 m/s to 1.0 m/s. For more details on the wind tunnel, refer to Jiang [5].
A schematic diagram of the data acquisition system adopted in this study is shown in Figure 1a. The leaf and camera were placed along the direction of the incoming wind to complete image acquisition, with a distance of about 0.65 m between the camera and the leaf at rest. The leaf was taken from a branch in the middle of the canopy of a healthy B. platyphylla Sukaczev tree in the test forest farm of Northeast Forestry University, Harbin, China, and had no apparent damage on its surface. The leaf was used for no more than 2 h in the test. It was fixed in the wind tunnel using a bracket made from an aluminum tube with a diameter of 6 mm, similar to that of the small twigs connecting leaves in nature; the bracket extended into the wind tunnel from the sidewall. It was stably anchored to the floor outside the wind tunnel via a tripod so that it was not affected by vibrations at high wind speeds, as shown in Figure 1b. The reconfiguration of the leaf was recorded by a high-speed camera (MER-230-168U3M/C, Daheng (Group) Co., Ltd., Beijing, China). The resolution of the camera was 1920 (H) × 1200 (V), the focal length was set at 35 mm and the frame rate was 168 frames per second. The camera was fixed on an iron rod in the wind tunnel parallel to the direction of the airflow, as shown in Figure 1b, facing the leaf at rest, and was connected to the computer via a USB 3.0 cable. The images were collected automatically by StreamPix8 software (Daheng (Group) Co., Ltd., Beijing, China); a captured image is shown in Figure 1c.
Data were collected at 15 sets of wind speeds. In order to avoid the influence of transient wind speed, each set of data was collected after the wind speed was set to be stable for 40 s.

2.2. Data Preprocessing

2.2.1. Lens Distortion Correction

Lens distortion is an inherent property of optical lenses, which distorts the image and affects the accuracy of photogrammetry. OpenCV (version 3.4.2, Intel, Santa Clara, CA, USA), an open-source computer vision library, was used to correct lens distortion. First, 30 images of the checkerboard from different views (directions) were taken with the focal length kept constant. Then, the camera internal parameters and distortion parameters were calculated using the cv2.calibrateCamera() function in OpenCV. Internal parameters and distortion coefficients are inherent properties of the camera and remain fixed as long as the internal structure of the camera does not change. Finally, the cv2.undistort() function in OpenCV was used to correct the distortion.
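A minimal sketch of this calibration step follows; the 9 × 6 inner-corner pattern and file paths are assumptions, since the paper does not state them.

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry and file paths are assumptions for illustration.
pattern = (9, 6)                                   # inner corners per row/column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.png"):        # the 30 calibration views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients from the calibration views.
_, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort every captured frame with the fixed intrinsics.
frame = cv2.imread("frame.png")
undistorted = cv2.undistort(frame, K, dist)
```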

2.2.2. Building the Dataset and Data Augmentation

First, after lens distortion correction, the leaf region of each image was cropped to improve the processing speed of the program. The imaging regions of the leaf under the 15 groups of wind speed were known in this experiment, so the cropping area was set and all images were cropped automatically by the program. The size of the cropped image was 256 × 256, as shown in Figure 2a. Then, the first 30 pictures captured at each wind speed were labeled with labelme software [40] to obtain mask images. The mask images were converted to binary images in uint8 format and used as label images in training; a converted binary image is shown in Figure 2b. A total of 450 pictures were labeled. At each wind speed, the first 20 pictures were selected for the training set and the last 10 for the test set, giving 300 pictures in the training set and 150 pictures in the test set.
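A sketch of the cropping and label-conversion step is given below; the crop origin and file names are assumptions, and the labelme annotation is assumed to have already been exported as a mask image.

```python
import cv2
import numpy as np

x0, y0 = 600, 400                                  # assumed crop origin per wind speed
frame = cv2.imread("undistorted.png")
crop = frame[y0:y0 + 256, x0:x0 + 256]             # fixed 256 x 256 window

# labelme mask (exported as an image) -> binary uint8 label: leaf = 1, background = 0.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
label = (mask[y0:y0 + 256, x0:x0 + 256] > 0).astype(np.uint8)
```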
In order to generate more images for deep learning, data augmentation was adopted; it is an efficient way to increase the amount of training data. Because the wind direction was unchanged, a scaling transformation was used to simulate changes in leaf position along the direction of the wind, which expanded the dataset to twice its original size. The scaling transformation randomly scales the length and width of the image while the resolution of the image stays unchanged. In this experiment, the scaling ratio was in the range [0, 0.05].
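One way to implement this augmentation is sketched below: each side is scaled up by a random ratio in [0, 0.05] and the result is cropped back to the original resolution. The crop-back step is an assumption about how "the resolution does not change" was enforced.

```python
import cv2
import numpy as np

def random_scale(image, mask, max_ratio=0.05, rng=None):
    """Randomly scale width and height by up to max_ratio, then crop back
    so the output resolution matches the input."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    sx = 1.0 + rng.uniform(0.0, max_ratio)
    sy = 1.0 + rng.uniform(0.0, max_ratio)
    img = cv2.resize(image, (int(w * sx), int(h * sy)))
    msk = cv2.resize(mask, (int(w * sx), int(h * sy)),
                     interpolation=cv2.INTER_NEAREST)  # keep labels binary
    return img[:h, :w], msk[:h, :w]
```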

2.3. Image Segmentation Based on Improved U-Net

2.3.1. Improved U-Net Model

The data used in the experiment were images of a single leaf at different wind speeds, which contain a small number of feature combinations. The original U-Net was therefore improved as follows. First, the depth of each layer of the encoder and decoder was reduced: the number of filters in the first encoder layer was reduced from 64 to 16, each subsequent layer was reduced by the same factor of four, and the last encoder layer therefore had 256 filters. Second, a batch normalization (BN) layer was added after each convolution layer, so that the input of the activation function follows a normal distribution with mean 0 and variance 1, allowing the network to train and converge faster. Finally, a dropout layer was added at the end of each layer of the encoder and decoder. The dropout layer discards activation values with a certain probability, reduces the interaction between layers, alleviates the over-fitting aggravated by the introduction of BN, makes the network less dependent on local features and enhances the generalization ability of the model. The improved U-Net model architecture is shown in Figure 3.
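A minimal Keras sketch matching this description is shown below (encoder filter counts 16-32-64-128 with a 256-filter bottleneck, BN after every convolution and spatial dropout closing each block). Details not stated in the paper, such as transposed-convolution upsampling and the sigmoid output, are assumptions.

```python
from tensorflow.keras import layers, models

def conv_block(x, filters, dropout=0.05):
    # Two 3x3 convolutions, each followed by BN and ReLU, then spatial dropout.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return layers.SpatialDropout2D(dropout)(x)

def improved_unet(input_shape=(256, 256, 1), base=16):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for f in (base, base * 2, base * 4, base * 8):        # 16, 32, 64, 128
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base * 16)                          # 256-filter bottleneck
    for f, skip in zip((base * 8, base * 4, base * 2, base), reversed(skips)):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # leaf vs. background
    return models.Model(inputs, outputs)
```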

2.3.2. Image Segmentation Experiment

In the experiment, mean intersection over union (MIoU) was used to evaluate image segmentation performance. The formula for MIoU is shown in (2), where k is the number of object classes (so there are k + 1 classes including the background) and p_ij is the number of pixels of class i predicted as class j.
$$\mathrm{MIoU} = \frac{1}{k+1} \sum_{i=0}^{k} \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \quad (2)$$
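A small helper implementing Eq. (2) from a confusion matrix might look as follows; it assumes integer label maps and that both classes occur in the evaluated images.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes=2):
    """MIoU per Eq. (2); here the classes are background (0) and leaf (1)."""
    cm = np.bincount(y_true.ravel() * num_classes + y_pred.ravel(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)                                   # p_ii
    union = cm.sum(axis=1) + cm.sum(axis=0) - inter       # rows + columns - p_ii
    return float(np.mean(inter / union))
```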
The algorithm was implemented in Python with Keras. The experiment was carried out on an NVIDIA GeForce RTX 2080 Ti GPU. Cross-entropy was used as the loss function, Adam was used as the optimizer, the learning rate was set to 10^−6, the spatial dropout ratio was set to 0.05 and the batch size was set to 5. To ensure the reliability of the experiment and the adequacy of network training, the model was trained for 200 epochs.
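The configuration above translates into roughly the following training call; random placeholder tensors stand in for the preprocessed data of Section 2.2, and binary cross-entropy is the assumed form of the cross-entropy loss for this two-class task.

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Placeholder data so the snippet runs standalone; real inputs come from Section 2.2.
train_images = np.random.rand(20, 256, 256, 1).astype("float32")
train_masks = (np.random.rand(20, 256, 256, 1) > 0.5).astype("float32")

model = improved_unet()                      # from the architecture sketch above
model.compile(optimizer=Adam(learning_rate=1e-6),
              loss="binary_crossentropy",    # cross-entropy for two classes
              metrics=["accuracy"])
model.fit(train_images, train_masks, batch_size=5, epochs=200)
```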
To evaluate the segmentation performance of the improved U-Net, it was compared with the original U-Net. To evaluate the effect of dataset size on segmentation performance, the training set was divided into groups A, B and C. Group A contained 5 consecutive pictures at each wind speed (75 pictures in total); group B contained 10 consecutive pictures at each wind speed (150 in total); group C contained all the data in the training set (300 in total). U-Net and the improved U-Net were trained on the three datasets, and the segmentation performance of the trained models was compared on the same test set. Random seeds were fixed during the experiment to ensure that the training and testing data for U-Net and the improved U-Net were consistent.

2.4. Calculating the Area of a Single Pixel

The area of a single pixel was calculated using a checkerboard of known area placed in the same plane as the leaf at rest. The cv2.findChessboardCorners() function in the OpenCV library was used to find the inner corners of the checkerboard. Then, the cv2.cornerSubPix() function was used to refine the detected corners, and 34 points were used for calculating the area, shown as red points in Figure 4.
Because some tilt of the checkerboard during operation is unavoidable, the shape formed by the 34 points is not exactly rectangular. The discretized Green's formula was therefore used to calculate the area, as shown in (3), where D is the region to be measured and (x_i, y_i) are the discrete points on its boundary curve. Traversing the points in the counterclockwise direction, the area S_D of the shape formed by the 34 points was calculated.
$$S_D = \frac{1}{2} \sum_i \left[ x_i \left( y_{i+1} - y_i \right) - y_i \left( x_{i+1} - x_i \right) \right] = \frac{1}{2} \sum_i \left( x_i y_{i+1} - y_i x_{i+1} \right) \quad (3)$$
The area S_P of a single pixel is calculated by (4), where S_T is the actual area of the checkerboard region.
$$S_P = \frac{S_T}{S_D} \quad (4)$$
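The sketch below strings Eqs. (3) and (4) together. The corner-pattern size, the re-ordering of the detected grid into a boundary loop and the physical checkerboard area S_T are all assumptions for illustration.

```python
import cv2
import numpy as np

gray = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, (17, 2))   # assumed 17 x 2 grid
assert found
corners = cv2.cornerSubPix(
    gray, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
pts = corners.reshape(-1, 2)                 # 34 (x, y) points in row-major order

# Re-order the two 17-point rows into one counter-clockwise boundary loop.
loop = np.vstack([pts[:17], pts[17:][::-1]])

def shoelace_area(p):
    """Discretized Green's formula, Eq. (3)."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - y * np.roll(x, -1)))

S_D = shoelace_area(loop)                    # enclosed area in pixel^2
S_T = 40.0                                   # assumed checkerboard area, cm^2
S_P = S_T / S_D                              # area of one pixel, Eq. (4)
```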

2.5. Calculating the Frontal Area

The number of leaf pixels in the segmented image was counted, and the frontal area was then calculated by (5), where n is the number of leaf pixels and S is the frontal area.
$$S = n \times S_P \quad (5)$$
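Continuing the sketches above, Eq. (5) reduces to a pixel count on the thresholded network output; the 0.5 probability cut-off is an assumption.

```python
import numpy as np

# test_images: preprocessed test tensors as in Section 2.2; model and S_P
# come from the earlier sketches.
pred = model.predict(test_images)            # probabilities in [0, 1]
leaf_mask = pred[0, :, :, 0] > 0.5           # assumed 0.5 cut-off
n = int(np.count_nonzero(leaf_mask))         # number of leaf pixels
S = n * S_P                                  # frontal area, cm^2 (Eq. (5))
```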

2.6. Evaluation Metrics of Measurement Accuracy

The measured data were compared with the data obtained from the labeled binary images of the test set to evaluate measurement accuracy. The coefficient of determination (R2) and root mean square error (RMSE) were used to evaluate the accuracy of, and relationship between, the measured values and the reference values, as shown in (6)–(8):
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{n}} \quad (6)$$
$$\mathrm{RMSE\%} = 100 \times \frac{\mathrm{RMSE}}{\bar{y}} \quad (7)$$
$$R^2 = \frac{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}{\sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2} \quad (8)$$
where n is the number of measured targets, y_i is the frontal area measured by the improved U-Net method, ŷ_i is the manually labeled measurement of the frontal area (reference value) and ȳ is the mean of the reference values.
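A direct transcription of Eqs. (6)–(8) is given below; note that R2 follows the ratio form defined in Eq. (8) rather than the usual 1 − SSres/SStot form.

```python
import numpy as np

def accuracy_metrics(measured, reference):
    """RMSE, relative RMSE (%) and R2 as defined in Eqs. (6)-(8)."""
    y = np.asarray(measured, dtype=float)       # improved U-Net measurements
    y_hat = np.asarray(reference, dtype=float)  # manually labeled references
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    rmse_pct = 100.0 * rmse / y_hat.mean()
    y_bar = y_hat.mean()
    r2 = np.sum((y - y_bar) ** 2) / np.sum((y_hat - y_bar) ** 2)
    return rmse, rmse_pct, r2
```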

2.7. Method Flow

The flow chart for measuring the frontal area of the leaf based on the improved U-Net is shown in Figure 5 and includes the following steps. First, different wind speeds were set in the wind tunnel and 15 sets of picture data were collected. Second, the lens distortion of the original images was corrected and the leaf region was cropped to 256 × 256. Third, the training set was expanded to twice its original size by scaling transformation. Then, the expanded training set was fed into the improved U-Net for training, and the trained model was used to segment the images of the test set. Finally, the leaf pixels in each segmented image were counted and the frontal area of the leaf was calculated.

3. Results

Comparison experiments were carried out on the three datasets A, B and C using the methods introduced in Section 2 and evaluated with training time, MIoU, RMSE and R2.

3.1. The Performance of Improved U-Net Model

First, the improved U-Net model and the original U-Net model were trained on A, B and C, respectively, and then the results of the trained models were compared on the same test set. The training times of the original U-Net on A, B and C were 771.49 s, 1537.18 s and 3000.24 s, respectively, while the training times of the improved U-Net on A, B and C were 493.02 s, 964.12 s and 1884.57 s, respectively, as shown in Table 2.
The training time of the improved U-Net model was reduced on all three datasets, to about 0.63 times that of the original U-Net model, because the improved model has fewer parameters and therefore trains faster.
The MIoUs of the test set were 97.30%, 97.59% and 97.86% based on the original U-Net training on A, B and C, respectively. The MIoUs of the test set were 97.67%, 97.78% and 97.88% based on the improved U-Net training on A, B and C, respectively, as shown in Table 3.
On all three datasets, the MIoUs of the improved U-Net model on the test set were higher than those of the original U-Net model. The MIoU improved most on dataset A, indicating that the improvement is greater when the dataset is smaller.
The results of segmentation based on U-Net and the improved U-Net are shown in Figure 6. Some obvious segmentation differences are marked with rectangular boxes for better display.
It can be seen from Figure 6 that the improved U-Net segmented better in areas with a complex background and in areas where the leaf curled to form a hole. Moreover, the segmentation results of the original U-Net models trained on the three datasets differed obviously in the boxed areas, showing that the original U-Net was affected by the size of the dataset, whereas the results of the improved U-Net models trained on the three datasets were close in the boxed areas. The improved U-Net was less affected by the size of the dataset and still segmented well when the dataset was small.

3.2. Measurement Accuracy of Frontal Area

After training, the pictures of the test set were segmented, the frontal areas were calculated from the segmentation results and the calculated results were compared with the manually labeled frontal areas. The RMSEs of the frontal areas measured by the models trained on A, B and C based on the original U-Net were 0.1865 cm2 (2.08%), 0.1483 cm2 (1.66%) and 0.1464 cm2 (1.64%), respectively; the R2 values were 0.9988, 0.9993 and 0.9993, respectively. The RMSEs of the frontal areas measured by the models trained on A, B and C based on the improved U-Net were 0.1401 cm2 (1.56%), 0.1456 cm2 (1.63%) and 0.1431 cm2 (1.60%), respectively, and the R2 values of all three measurements were 0.9993, indicating that the measurements were closely related to the reference values, as shown in Table 4 and Figure 7.
On all three datasets, the RMSEs of the frontal areas measured by models trained with the improved U-Net were lower. Moreover, the RMSE and R2 values improved most on dataset A, indicating that the improved U-Net model improves measurement accuracy significantly when the dataset is small.

3.3. Frontal Area Varying with Wind Speed

The frontal areas at the 15 wind speeds measured by the models trained on A are shown in Figure 8. With the increase in wind speed, the frontal area showed a downward trend. When the wind speed was 3.05 m/s, the leaf was slightly deformed and the frontal area changed little. When the wind speed was 4.25 m/s, the leaf oscillated at low frequency and the variation in frontal area increased. When the wind speed was 6.15 m/s, the frontal area changed the most, and the leaf opened and closed. When the wind speed was 7.25 m/s and 8.35 m/s, the leaf settled into a stable U shape and the frontal area changed little. When the wind speed was 9.45 m/s, a local extreme point appeared in the frontal area, at which time the leaf vibrated at high frequency. From 10.60 m/s to 14.95 m/s, the leaf was conical and stable, and the frontal area changed little. At 15.75 m/s and 16.65 m/s, the leaf began to vibrate again and the variation in frontal area increased. At 17.75 m/s and 18.90 m/s, the leaf was conical and stable again and the frontal area changed little.

4. Discussion

The improved U-Net can be used to segment the leaf in the wind with a result close to that of manual segmentation, which can be used to measure the frontal area of a leaf. By labeling a small amount of data (only five pictures need to be labeled at each wind speed in this experiment), a highly accurate segmentation model can be trained to accurately measure the frontal area.
In the process of building the dataset, we selected continuous pictures instead of random selection. This is because the leaf was continuously deformed in the wind. Each picture was different, and continuous images can represent the changes in the leaf. If the images were randomly selected, the images may be similar, which may lead to over-fitting in the training process.
The frontal area can be measured by photogrammetry, but errors remain in the measured results. Current photogrammetric methods regard the imaged area of the object as its frontal area. However, according to the principle of pinhole imaging, parts of the object close to the camera image larger and parts far from the camera image smaller. In this experiment, the parts of the leaf do not lie in a single plane, so regarding the imaged area as the frontal area is not strictly accurate. In addition, the position of the checkerboard introduces measurement error: the checkerboard lies in the plane of the leaf at rest, but the leaf deviates from this plane in the wind. The calculated area of a single pixel is therefore the pixel area in the checkerboard plane, not in the plane where the leaf actually is, which leads to measurement error. To solve these problems, the frontal area of the leaf should be measured based on 3D techniques in future work.

5. Conclusions

In this study, an improved U-Net model is proposed. Based on this model, images can be segmented accurately when only a small number of images are labeled, and the frontal area of a leaf in a wind tunnel can be measured with high accuracy, verifying the effectiveness of deep learning for this measurement task. An undamaged leaf of a B. platyphylla Sukaczev tree from the forest farm of Northeast Forestry University was used as the research object. The frontal areas measured by the trained improved U-Net model agreed closely with the manually labeled data. The main conclusions are as follows:
  • The improved U-Net model trains faster, with a training time about 0.63 times that of the original U-Net model.
  • The MIoUs on the test set of the improved U-Net model trained on the three datasets were 97.67%, 97.78% and 97.88%, respectively. Compared with the original U-Net, the performance is improved, and the improvement is largest on a small dataset. The segmentation results are better in areas with a complex background and in areas where the leaf curled to form a hole.
  • Compared with the manually labeled image data, the RMSEs of the frontal areas measured by the models trained on the three datasets based on the improved U-Net were 0.1401 cm2 (1.56%), 0.1456 cm2 (1.63%) and 0.1431 cm2 (1.60%), respectively. The R2 values of the three measurements were 0.9993. The frontal area of the leaf can be accurately measured based on the proposed method.
Therefore, this method only needs to label a few pictures to train a highly accurate segmentation model to measure the frontal area of a leaf. In future research, the algorithm for measuring the frontal area of a leaf based on three-dimensional technology will be further studied to make the measured frontal area closer to the real area.

Author Contributions

Conceptualization, X.Y., A.W. and H.J.; methodology, X.Y.; validation, X.Y. and A.W.; formal analysis, X.Y.; investigation, X.Y.; resources, X.Y.; data curation, X.Y. and H.J.; writing—original draft preparation, X.Y. and A.W.; writing—review and editing, X.Y., A.W. and H.J.; visualization, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Heilongjiang Province of China, grant number LH2020C091; the 2022 Special Foundation Project of Fundamental Scientific Research Professional Expenses for Undergraduate Universities in Heilongjiang Province.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moore, J.; Gardiner, B.; Sellier, D. Tree Mechanics and Wind Loading. In Plant Biomechanics; Geitmann, A., Gril, J., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 79–106. [Google Scholar]
  2. Bayat, M.; Ghorbanpour, M.; Zare, R.; Jaafari, A.; Pham, B.T. Application of Artificial Neural Networks for Predicting Tree Survival and Mortality in the Hyrcanian forest of Iran. Comput. Electron. Agric. 2019, 164, 104929. [Google Scholar] [CrossRef]
  3. Bayat, M.; Noi, P.T.; Zare, R.; Bui, D.T. A Semi-Empirical Approach Based on Genetic Programming for the Study of Biophysical Controls on Diameter-Growth of Fagus Orientalis in Northern Iran. Remote Sens. 2019, 11, 1680. [Google Scholar] [CrossRef]
  4. Bayat, M.; Bettinger, P.; Heidari, S.; Hamidi, S.K.; Jaafari, A. A Combination of Biotic and Abiotic Factors and Diversity Determine Productivity in Natural Deciduous Forests. Forests 2021, 12, 1450. [Google Scholar] [CrossRef]
  5. Jiang, H.; Xin, D.; Zhang, H. Wind-tunnel study of the aerodynamic characteristics and mechanical response of the leaves of Betula platyphylla Sukaczev. Biosyst. Eng. 2021, 207, 162–176. [Google Scholar] [CrossRef]
  6. Ennos, R. Compliance in plants. WIT Trans. State Art Sci. Eng. 2005, 20, 21–37. [Google Scholar]
  7. Peng, Y.B.; Ai, X.Q.; Cheng, Y.Y.; Li, J. Wind-induced vibration analysis and dynamic reliability assessment of stochastic urban trees. Chin. Q. Mech. 2017, 38, 478–486. [Google Scholar]
  8. Vogel, S. Leaves in the lowest and highest winds: Temperature, force and shape. New Phytol. 2009, 183, 13–26. [Google Scholar] [CrossRef]
  9. Vogel, S. Drag and Reconfiguration of Broad Leaves in High Winds. J. Exp. Bot. 1989, 40, 941–948. [Google Scholar] [CrossRef]
  10. Shao, C.P.; Chen, Y.J.; Lin, J.Z. Wind induced deformation and vibration of a Platanus acerifolia leaf. Acta Mech. Sin. 2012, 28, 583–594. [Google Scholar] [CrossRef]
  11. De Langre, E. Plant Vibrations at all scales: A Review. J. Exp. Bot. 2019, 70, 3521–3531. [Google Scholar] [CrossRef]
  12. Burgess, A.J.; Retkute, R.; Preston, S.P.; Jensen, O.E.; Pound, M.P.; Pridmore, T.P.; Murchie, E.H. The 4-dimensional plant: Effects of wind-induced canopy movement on light fluctuations and photosynthesis. Front. Plant Sci. 2016, 7, 1392. [Google Scholar] [CrossRef] [PubMed]
  13. Nikora, V. Hydrodynamics of aquatic ecosystems: An interface between ecology, biomechanics and environmental fluid mechanics. River Res. Appl. 2010, 26, 367–384. [Google Scholar] [CrossRef]
  14. Yamazaki, K. Gone with the wind: Trembling leaves may deter herbivory. Biol. J. Linn. Soc. 2011, 104, 738–747. [Google Scholar] [CrossRef]
  15. Appel, H.M.; Cocroft, R.B. Plants respond to leaf vibrations caused by insect herbivore chewing. Oecologia 2014, 175, 1257–1266. [Google Scholar] [CrossRef]
  16. Warren, J. Is wind-mediated passive leaf movement an effective form of herbivore defence? Plant Ecol. Evol. 2015, 148, 52–56. [Google Scholar] [CrossRef]
  17. Gosselin, F.; De Langre, E.; Machado-Almeida, B.A. Drag reduction of flexible plates by reconfiguration. J. Fluid Mech. 2010, 650, 319–341. [Google Scholar] [CrossRef]
  18. Tadrist, L.; Saudreau, M.; De Langre, E. Wind and gravity mechanical effects on leaf inclination angles. J. Theor. Biol. 2014, 341, 9–16. [Google Scholar] [CrossRef]
  19. Miller, L.A.; Santhanakrishnan, A.; Jones, S.; Hamlet, C.; Mertens, K.; Zhu, L. Reconfiguration and the reduction of vortex-induced vibrations in broad leaves. J. Exp. Biol. 2012, 215, 2716–2727. [Google Scholar] [CrossRef]
  20. Yu, K.J.; Shao, C.P. Wind tunnel investigation of the aerodynamic characteristics of purple wisteria compound leaves. Chin. J. Theor. Appl. Mech. 2019, 1, 245–262. [Google Scholar]
  21. Zhu, Y.; Shao, C. The steady and vibrating statuses of tulip tree leaves in wind. Theor. Appl. Mech. Lett. 2017, 7, 30–34. [Google Scholar] [CrossRef]
  22. Kinugasa, T.; Sagayama, T.; Gantsetseg, B.; Liu, J.; Kimura, R. Effect of simulated grazing on sediment trapping by single plants: A wind-tunnel experiment with two grassland species in Mongolia. CATENA 2021, 202, 105262. [Google Scholar] [CrossRef]
  23. Cao, J.; Tamura, Y.; Yoshida, A. Wind tunnel study on aerodynamic characteristics of shrubby specimens of three tree species. Urban For. Urban Green. 2012, 11, 465–476. [Google Scholar] [CrossRef]
  24. Zheng, S.; Guldmann, J.M.; Liu, Z.; Zhao, L.; Wang, J.; Pan, X. Predicting the influence of subtropical trees on urban wind through wind tunnel tests and numerical simulations. Sustain. Cities Soc. 2020, 57, 102116. [Google Scholar] [CrossRef]
  25. Hao, Y.; Kopp, G.A.; Wu, C.H.; Gillmeier, S. A wind tunnel study of the aerodynamic characteristics of a scaled, aeroelastic, model tree. J. Wind. Eng. Ind. Aerodyn. 2020, 197, 104088. [Google Scholar] [CrossRef]
  26. Joseph, G.M.D.; Mohammadi, M.; Sterling, M.; Baker, C.J.; Gillmeier, S.G.; Soper, D. Determination of crop dynamic and aerodynamic parameters for lodging prediction. J. Wind. Eng. Ind. Aerodyn. 2020, 202, 104169. [Google Scholar] [CrossRef]
  27. Horaisová, K.; Kukal, J. Leaf classification from binary image via artificial intelligence. Biosyst. Eng. 2016, 142, 83–100. [Google Scholar] [CrossRef]
  28. Zhao, C.; Chan, S.S.; Cham, W.K.; Chu, L.M. Plant identification using leaf shapes—A pattern counting approach. Pattern Recognit. 2015, 48, 3203–3215. [Google Scholar] [CrossRef]
  29. Brindha, G.J.; Gopi, E.S. An hierarchical approach for automatic segmentation of leaf images with similar background using kernel smoothing based Gaussian process regression. Ecol. Inform. 2021, 63, 101323. [Google Scholar]
  30. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  31. Lee, S.H.; Chan, C.S.; Mayo, S.J.; Remagnino, P. How deep learning extracts and learns leaf features for plant classification. Pattern Recognit. 2017, 71, 1–13. [Google Scholar] [CrossRef]
  32. Tang, J.; Wang, D.; Zhang, Z.; He, L.; Xin, J.; Xu, Y. Weed identification based on K-means feature learning combined with convolutional neural network. Comput. Electron. Agric. 2017, 135, 63–70. [Google Scholar] [CrossRef]
  33. Arribas, J.I.; Sánchez-Ferrero, G.V.; Ruiz-Ruiz, G.; Gómez-Gil, J. Leaf classification in sunflower crops by computer vision and neural networks. Comput. Electron. Agric. 2011, 78, 9–18. [Google Scholar] [CrossRef]
  34. Dias, P.A.; Tabb, A.; Medeiros, H. Apple flower detection using deep convolutional networks. Comput. Ind. 2018, 99, 17–28. [Google Scholar] [CrossRef]
  35. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. Lect. Notes Comput. Sci. 2015, 9351, 234–241. [Google Scholar]
  36. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  37. Yu, C.; Hu, Z.H.; Han, B.; Wang, P.; Zhao, Y.C.; Wu, H.M. Intelligent Measurement of Morphological Characteristics of Fish Using Improved U-Net. Electronics 2021, 10, 1426. [Google Scholar]
  38. Paulauskaite-Taraseviciene, A.; Noreika, E.; Purtokas, R.; Lagzdinyte-Budnike, I.; Daniulaitis, V.; Salickaite-Zukauskiene, R. An Intelligent Solution for Automatic Garment Measurement Using Image Recognition Technologies. Appl. Sci. 2022, 12, 4470. [Google Scholar] [CrossRef]
  39. Li, K.; Teng, G. Study on Body Size Measurement Method of Goat and Cattle under Different Background Based on Deep Learning. Electronics 2022, 11, 993. [Google Scholar] [CrossRef]
  40. Torralba, A.; Russell, B.C.; Yuen, J. LabelMe: Online Image Annotation and Applications. Proc. IEEE 2010, 98, 1467–1484. [Google Scholar] [CrossRef]
Figure 1. Image acquisition device. (a) Schematic diagram of data acquisition system; (b) fixed position of leaf and camera; (c) image captured.
Figure 2. Image preprocessing. (a) The picture after lens distortion correction and cropping; (b) the labeled binary image.
Figure 3. The improved U-Net model architecture.
Figure 4. The 34 points used for calculating area.
Figure 5. The flow chart for measuring the frontal area of the leaf based on the improved U-Net.
Figure 6. Comparison of segmentation results of two models trained on three training sets. (a) Original image; (b) segmentation result of improved U-Net model trained on A; (c) segmentation result of improved U-Net model trained on B; (d) segmentation result of improved U-Net model trained on C; (e) mask image; (f) segmentation result of U-Net model trained on A; (g) segmentation result of U-Net model trained on B; (h) segmentation result of U-Net model trained on C.
Figure 7. Comparison of frontal areas measured by the two models trained on the three datasets against reference values. (a) Original U-Net, dataset A; (b) improved U-Net, dataset A; (c) original U-Net, dataset B; (d) improved U-Net, dataset B; (e) original U-Net, dataset C; (f) improved U-Net, dataset C.
Figure 8. The frontal areas at 15 groups of wind speeds measured by the models trained on A.
Table 1. Advantages and disadvantages of three methods for measuring frontal area.

Method | Advantages | Disadvantages
Using coordinate paper | Images can be accurately segmented by manual operation | Manual operation; time-consuming and laborious; not suitable for processing a large number of images
Using image software | Semi-automatic image segmentation | Requires a good experimental environment and manual operation; not suitable for processing a large number of images
Automatic segmentation by the program after setting the threshold | Automatic image segmentation; suitable for processing a large number of images | Special requirements for the color of the object and the experimental environment; the object must be clearly distinguished from the background

Table 2. Comparison of training time between two models.

Model | Training Time (s)/A | Training Time (s)/B | Training Time (s)/C
U-Net | 771.49 | 1537.18 | 3000.24
Improved U-Net | 493.02 | 964.12 | 1884.57

Table 3. Comparison of MIoU between two models.

Model | MIoU (%)/A | MIoU (%)/B | MIoU (%)/C
U-Net | 97.30 | 97.59 | 97.86
Improved U-Net | 97.67 | 97.78 | 97.88

Table 4. Comparison of RMSEs of the frontal areas measured by two models.

Model | RMSE (%)/A | RMSE (%)/B | RMSE (%)/C
U-Net | 2.08 | 1.66 | 1.64
Improved U-Net | 1.56 | 1.63 | 1.60