Article

Identifying and Counting Tobacco Plants in Fragmented Terrains Based on Unmanned Aerial Vehicle Images in Beipanjiang, China

1 Karst Research Institute, Guizhou Normal University, Guiyang 550001, China
2 School of Geography and Environmental Sciences, Guizhou Normal University, Guiyang 550001, China
3 National Engineering Technology Research Center for Karst Rocky Desertification Control, Guiyang 550001, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(13), 8151; https://doi.org/10.3390/su14138151
Submission received: 16 May 2022 / Revised: 1 July 2022 / Accepted: 2 July 2022 / Published: 4 July 2022
(This article belongs to the Special Issue Smart Farming and Bioenergy Feedstock Crops)

Abstract

Refined extraction of tobacco plant information is the basis of efficient yield estimation. Tobacco planting in the plateau mountainous areas of China is characterized by scattered distribution, uneven growth, and mixed or intercropped crops, which makes it difficult to accurately extract information on individual tobacco plants. The study area is the Beipanjiang topographically fragmented area in China, where a Phantom 4 Pro V2.0 quadrotor unmanned aerial vehicle was used to collect images of the tobacco planting areas. Using the visible-light bands, the Excess Green Index, Normalized Green-Red Difference Vegetation Index, and Excess Green Minus Excess Red Index were compared to select the best color index calculation method for tobacco plants. Low-pass filtering was used to enhance the tobacco plant information and suppress noise from weeds, corn plants, and rocks. Combined with field measurements of tobacco plants, a computer-interactive interpretation method performed gray-level segmentation on the enhanced image and extracted the tobacco plant information. The method is suitable for identifying tobacco plants in plateau mountainous areas: the detection rates in the test and verification areas were 96.61% and 97.69%, and the completeness was 95.66% and 96.53%, respectively. This study can provide fine-grained data support for refined tobacco plantation management in topographically fragmented areas with large exposed rock areas and irregular planting plots.

1. Introduction

Tobacco, as a high-value crop, is an important source of income, employment, and tax revenue in both developed and developing countries. China ranks first in the global tobacco-growing sector, accounting for 32% of total production. The tobacco industry has a significant market size, and its tax revenue accounts for a large proportion of national fiscal revenue [1]. Yunnan and Guizhou are the main flue-cured tobacco growing areas in China. Most of the tobacco planting areas in these regions lie in plateau mountainous terrain, where the fields are fragmented and the planting is scattered; this makes the manual collection of tobacco production statistics laborious and makes it difficult to meet the need for fast and timely monitoring of tobacco planting [2]. A timely grasp of the spatial distribution, planting area, growth, yield, and disaster losses of tobacco is important for accurate yield estimation, which can assist enterprises in rationally arranging tobacco harvests and refined tobacco management, and can support government decision making.
Tobacco is mostly planted in the plateau areas of China, and its distribution is extensive and scattered. The efficiency of manual reporting is low and is significantly affected by human factors. Remote sensing technology has advantages such as large coverage, quick access to information, and low cost, which compensate for the shortcomings of manual surveys, and it has been widely used in large-scale crop yield estimation and planting area statistics. Remote sensing data sources range from medium-resolution images (Landsat) to high-spatial-resolution satellite images (such as SPOT-5, China-Brazil Earth Resources Satellite 02B, ZY-1 02C, and ZY-3) [3,4,5,6,7]. Related work covers optical images and synthetic aperture radar (SAR) [8]; platforms ranging from high-altitude satellite remote sensing to low-altitude unmanned aerial vehicle (UAV) remote sensing [9,10,11]; monitoring methods ranging from pixel-based index calculation with statistical methods to object-oriented, machine learning, and deep learning methods; and identification targets ranging from whole regions to individual tobacco plants [12,13].
Optical data are easily affected by cloudy and rainy weather in mountainous areas, so effective information cannot easily be obtained; moreover, the data acquisition cycle is long, which is not conducive to the dynamic monitoring of crop growth. SAR data can be captured throughout the day and are unaffected by cloudy and rainy weather; however, mountainous terrain can easily interfere with the SAR backscatter information [14,15]. For the rapid acquisition of crop information in the complex habitats of plateau and mountainous areas, unmanned aerial vehicles (UAVs) have a wide range of prospective applications: they offer high resolution, low cost, and a short data acquisition cycle, and are thus suitable for extracting crop information in the complex habitats of plateau areas [16,17,18].
Guizhou Province lies in a low-latitude plateau area; the total area of tobacco plantations is large, but their spatial distribution is scattered, and tobacco is generally mixed or intercropped with other crops [19,20]. Therefore, it is difficult to accurately extract the planted area using large-scale low-to-medium-resolution satellite image data, and small plots are easily missed or misinterpreted. In Southwestern China, it is also difficult to obtain high-spatial-resolution satellite remote sensing images. UAV remote sensing has therefore become the main means of monitoring tobacco-planting areas owing to its flexibility, high spatial resolution, and low cost. Object-oriented image analysis and deep learning are the main classification methods for remote sensing images with high spatial resolution, but classifying crops with deep learning involves complex models, large amounts of sample data, long training times, low portability, and sometimes low accuracy. There have been extensive studies on the application of UAV visible-light remote sensing in agricultural information extraction, crop type identification, and growth monitoring [21,22,23], and plant-protection UAVs can also provide a reference for crop pest control [24]. At the same time, color indices can identify crops and enable morphological feature extraction [25]; the OTSU threshold method can be used to extract plants from visible-light images [26]; and leaf polygons can be extracted from visible-light images to obtain average plant size and plant counts [27]. In addition to plant morphology, plant information can also be extracted by color indices and related methods. Color stretching of images of different maize varieties achieved an extraction accuracy of 76–83%, although the impact of weeds on maize plant recognition was not eliminated [28]; characteristic information such as the green index can improve crop extraction accuracy to more than 90% and effectively support crop yield estimation [29]. There are relatively few studies on the monitoring of economic crops in the mountainous areas of the Southwest Plateau, where scattered crop distribution, complex growth environments, and large terrain fluctuations persist; therefore, the characteristic economic crops of this area need to be identified quickly and accurately [30].
In view of this, and considering the fragmented crop plots and cloudy, foggy weather of karst mountain areas, existing UAV-based research on single tobacco plant recognition still has shortcomings: although recognition accuracy is high, the calculation methods are relatively complex and the processing workflows are cumbersome. This study focused on the tobacco-planting base in Beipanjiang Town, Zhenfeng County, Guizhou Province, in the southwestern mountainous area of China, as its research area. A Phantom 4 Pro V2.0 UAV was used to collect high-resolution digital orthophoto maps (DOM). Tobacco plant information was extracted using methods such as color index calculation and filter enhancement to obtain an efficient and scalable process for extracting tobacco plantation information, providing refined data for monitoring crop planting in areas with complex terrain and a decision-making basis for the fine management of tobacco.

2. Materials and Methods

2.1. Overview of the Study Area

The research area has a complex environment and comprises a test area and a verification area. Beipanjiang Town, Zhenfeng County, Guizhou Province, located in the mountainous area of southwest China, has a karst landform. The altitude is 1324–1966.8 m, and the climatic conditions in the study area are suitable for tobacco cultivation. As shown in Figure 1, the center point of the test area is 105°35′54.597″ E, 25°36′37.063″ N; the cultivated land there is highly fragmented, and the exposed rock area is large. The center point of the verification area is 105°35′50.205″ E, 25°36′4.522″ N, and the land types mainly include corn, weeds, shrubs, and cement roads. The total amount of cultivated land resources in the area is insufficient, and the soil layer is shallow, discontinuous in distribution, and low in water-retention capacity and drought resistance.

2.2. Data Collection and Preprocessing

The Phantom 4 Pro V2.0 has a quad-rotor foldable body and is equipped with a camera containing a 1-inch CMOS image sensor with 20 effective megapixels. The hovering accuracy is as follows: vertical, ±0.1 m (when visual positioning is working normally) and ±0.5 m (when GPS positioning is working normally); horizontal, ±0.3 m (when visual positioning is working normally) and ±1.5 m (when GPS positioning is working normally). Because the weather conditions and time of data collection affect the color differences in the images, data were collected between 12:00 and 14:00, when the incident sunlight is nearly perpendicular to the ground surface, which reduces the shadows cast by crops under oblique sunlight. The study area is a karst mountainous area, and the images were collected from 12:00 to 13:30 on 12 June 2021. The light conditions were sufficient and the wind was force 3, which met the safe operating conditions of the UAV; on a sunny day, cloud cover in the mountains can be avoided. The height difference within the survey area is large, and the overall slope is 45°. Altizure was used to plan the data acquisition route. To ensure data accuracy while avoiding data redundancy, the course overlap was set to 85%, the side overlap to 70%, and the flight altitude to 90 m.
The Phantom 4 Pro V2.0 quadrotor UAV was used to obtain RGB true-color images, and Pix4Dmapper was used to stitch the high-resolution UAV images into digital orthophotos of the survey area. Pix4Dmapper performed, among other steps, aerial photo screening, aerotriangulation and image overlap matching, interior orientation, bundle block adjustment, and camera self-calibration to apply tilt and projection-difference corrections to the UAV remote sensing images. It was then used to complete image mosaicking, color balancing, and cropping to obtain the final orthophoto image data, as shown in Figure 2.
In the experiment, UAV images of two areas were selected as data: the test area and the verification area, as shown in Figure 2 and Figure 3. The test area image was 10,424 × 12,082 pixels with a pixel size of 0.027 m. It is located in the region northwest of the Beipanjiang River, on a hillside whose elevation gradually decreases from northwest to southeast. The planting area was irregular, the tobacco leaves were dark green, and the leaves were relatively large. The test area was a tobacco-growing area within a complex scene that included roads, shrubs, rocks, houses, and bare soil.
The verification area image was 13,720 × 16,412 pixels with a pixel size of 0.03 m. It lies to the south of the test area, and its terrain gradually decreases from northeast to southwest. The planting area was irregular, the tobacco leaves were mostly cyan, and the leaves were small. In both images the number of tobacco plants could be effectively counted by visual inspection, providing an effective reference for extracting the number of tobacco plants at a later stage. However, the tobacco, surrounding shrubs, and rocks occluded one another, the tobacco planting boundaries were unclear, and the shrub and weed ranges overlapped, making later extraction difficult.

2.3. Data Feature Analysis

Different feature types have different values in the red (R), green (G), and blue (B) bands [31]. To explore the difference between the target tobacco plants and surrounding features, this study set up a test area and a verification area located west of the Beipanjiang River, which provided a sound foundation for testing the feasibility of the research method. Color histograms were constructed from the R, G, and B channels over the value range 0–255, which facilitates obtaining color information from the visible-light images. As can be seen from Figure 4, the green-band values of the tobacco RGB curve are between 165 and 215, much higher than those of the red band (135–195) and the blue band (135–170), indicating that tobacco has an obvious signature in the G band. The weed RGB curve values, between 125 and 180, partially coincide with those of tobacco plants, which is one of the factors that interferes with tobacco plant extraction. The rock RGB curves show obvious troughs, in which the B band intersects the R and G bands. The B-band values of the corn RGB curve are between 80 and 155, while those of tobacco are between 165 and 215; the two are not easily confused and interfere little with the correct extraction of tobacco plants.
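As a minimal illustration of how such per-band DN profiles can be sampled (the paper performs this step interactively in ENVI 5.5; the orthophoto file name and transect coordinates below are placeholders, not values from the study):

```python
# Minimal sketch of sampling per-band DN values along a transect, analogous to
# the RGB curves in Figure 4. The file name and transect endpoints are placeholders.
import numpy as np
import cv2
import matplotlib.pyplot as plt

img = cv2.cvtColor(cv2.imread("dom_test_area.tif"), cv2.COLOR_BGR2RGB)

row, col0, col1 = 1200, 800, 1000          # hypothetical transect across a tobacco plant
cols = np.arange(col0, col1)
for band, name in enumerate(("R", "G", "B")):
    plt.plot(cols - col0, img[row, cols, band], label=name)   # DN value vs. distance

plt.xlabel("Sampling distance (pixels)")
plt.ylabel("DN value")
plt.legend()
plt.show()
```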

3. Methodology

3.1. Technical Route

Combining parameters such as chromatic aberration, texture features, and geometric size from the UAV visible-light images, the Excess Green Index (EXG), Normalized Green-Red Difference Vegetation Index (NGRDI), and Excess Green Minus Excess Red Index (EXG-EXR) of tobacco plants were compared at the seedling stage, and the most applicable index was selected. Image filtering then retained the frequency information of the tobacco plants while suppressing the information of other objects in the image. Subsequently, the image thresholding method was used to perform optimal threshold segmentation. Finally, target identification of tobacco plants and extraction of the number of plants were realized, as shown in Figure 5.

3.2. Color Index

Current vegetation index research is mainly based on the visible and near-infrared bands, as in the normalized difference vegetation index and the ratio vegetation index. Such indices require additional bands, and the corresponding remote sensing images generally have a high acquisition cost and poor timeliness and spatial resolution. Vegetation indices based only on visible light mainly include the Excess Green Index (EXG) [32,33,34], the Normalized Green-Red Difference Vegetation Index (NGRDI), and the Excess Green Minus Excess Red Index (EXG-EXR) [35]. The EXG index proposed by Woebbecke et al. [36] has been widely used in recent years and has high accuracy in distinguishing vegetation from soil. The land cover types in the study area mainly include tobacco, weeds, shrubs, cement roads, retaining walls, and weed-proof cloth. The EXG was used to separate vegetation from non-vegetation. The calculation equations are given below.
EXG = 2G − R − B        (1)
NGRDI = (G − R) / (G + R)        (2)
EXG − EXR = 3G − 2.4R − B        (3)
In the equations, G, R, and B denote the green, red, and blue channel bands, respectively. In ENVI 5.5, the R, G, and B bands are referenced as b1, b2, and b3, respectively, in the band-math expressions.
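For readers reproducing this step outside ENVI, a minimal sketch of Equations (1)–(3) in Python is shown below; the orthophoto file name is a placeholder, and the paper itself uses ENVI 5.5 band math:

```python
# Minimal sketch of the three visible-band color indices (Eqs. 1-3), computed
# with NumPy instead of the ENVI band-math expressions used in the paper.
import numpy as np
import cv2

def color_indices(rgb: np.ndarray):
    """rgb: float array of shape (H, W, 3) holding the R, G, B bands."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b                        # Excess Green Index, Eq. (1)
    ngrdi = (g - r) / (g + r + 1e-6)             # Normalized Green-Red Difference, Eq. (2)
    exg_exr = 3.0 * g - 2.4 * r - b              # EXG minus EXR (EXR = 1.4R - G), Eq. (3)
    return exg, ngrdi, exg_exr

# Placeholder file name for the UAV orthophoto
bgr = cv2.imread("dom_test_area.tif")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32)
exg, ngrdi, exg_exr = color_indices(rgb)
```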
Image filtering calculation:
The image after color index calculation still contains shrubs, weeds, and other noise that interferes with the extraction of the target tobacco plants; therefore, this noise must be suppressed for tobacco plant target recognition. Image filtering can suppress noise in the target image while preserving image detail. In this study, high-pass filtering [37,38], low-pass filtering [39], and directional filtering [40,41] were compared in an attempt to eliminate the noise of other crops in the image while overcoming boundary effects, so that the gray values of ground objects transition smoothly from the center to the edge, improving the accuracy of plant extraction.

3.2.1. High Pass Filter

High-pass filtering eliminates the low-frequency components of the image while maintaining its high-frequency information. It can be used to enhance edge information between different regions by using a transform kernel with a high central value, typically surrounded by negative weights. The default high-pass filter in ENVI uses a 3 × 3 transform kernel (the center value is 8 and the outer values are −1); the size of the high-pass filter transform kernel must be an odd number. The Gaussian high-pass transfer function is
H(u, v) = 1 − exp[−D²(u, v) / (2D0²)]        (4)
where D0 is the cutoff distance measured from the origin of the frequency rectangle, and D(u, v) is the distance from the point (u, v) to the center of the frequency rectangle.
Assuming that the image resulting from the Excess Green Index calculation has size M × N, its Fourier transform has the same size and the center of the frequency rectangle is located at (M/2, N/2). The distance between a point and the center of the frequency rectangle is given by Equation (5), and the filtered image is given by Equation (6):
D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2)        (5)
g(x, y) = F⁻¹[H(u, v) F(u, v)]        (6)
where M × N is the size of the EXG calculation result image, (M/2, N/2) is the center of the frequency rectangle, F(u, v) is the Fourier transform of the image, and F⁻¹ denotes the inverse Fourier transform.
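A minimal NumPy sketch of this frequency-domain Gaussian high-pass filter is shown below, assuming a single-band input such as the EXG result; the paper uses ENVI's built-in filter rather than custom code:

```python
# Sketch of the frequency-domain Gaussian high-pass filter of Eqs. (4)-(6).
import numpy as np

def gaussian_highpass(img: np.ndarray, d0: float) -> np.ndarray:
    """Apply H(u, v) = 1 - exp(-D^2 / (2*D0^2)) to a 2-D image (e.g., the EXG result)."""
    m, n = img.shape
    u = np.arange(m).reshape(-1, 1) - m / 2
    v = np.arange(n).reshape(1, -1) - n / 2
    d2 = u ** 2 + v ** 2                          # D^2(u, v), the square of Eq. (5)
    h = 1.0 - np.exp(-d2 / (2.0 * d0 ** 2))       # transfer function, Eq. (4)
    f = np.fft.fftshift(np.fft.fft2(img))         # centred spectrum F(u, v)
    g = np.fft.ifft2(np.fft.ifftshift(h * f))     # filtered image, Eq. (6)
    return np.real(g)
```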

3.2.2. Low-Pass Filter

Low-pass filtering preserves the low-frequency component information in the image while eliminating its high-frequency information. In ENVI, low-pass filtering is implemented with the IDL “SMOOTH” function applied to the selected target image. This function uses boxcar averaging, with the box size determined by the size of the transform kernel; the default transform kernel size is 3 × 3. However, low-pass filtering can easily over-suppress high-frequency components, resulting in blurred image edges. The ideal low-pass transfer function is
H(u, v) = { 1, D(u, v) ≤ D0;  0, D(u, v) > D0 }        (7)
D(u, v) = [(u − P/2)² + (v − Q/2)²]^(1/2)        (8)
where D0 represents the radius of the passband and D(u, v) is the distance from the point (u, v) to the center of the P × Q frequency rectangle.
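The sketch below illustrates both variants under the same assumptions: a 3 × 3 boxcar average analogous to ENVI's default low-pass (the IDL SMOOTH behavior described above) and the ideal low-pass transfer function of Equations (7) and (8):

```python
# Two hedged low-pass variants: a boxcar average analogous to ENVI's default
# low-pass filter, and the ideal low-pass filter of Eqs. (7)-(8).
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_lowpass(img: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Boxcar averaging with a kernel x kernel window (ENVI default is 3 x 3)."""
    return uniform_filter(img.astype(np.float32), size=kernel)

def ideal_lowpass(img: np.ndarray, d0: float) -> np.ndarray:
    """Frequency-domain ideal low-pass: keep frequencies with D(u, v) <= D0."""
    p, q = img.shape
    u = np.arange(p).reshape(-1, 1) - p / 2
    v = np.arange(q).reshape(1, -1) - q / 2
    d = np.sqrt(u ** 2 + v ** 2)                  # D(u, v), Eq. (8)
    h = (d <= d0).astype(np.float32)              # H(u, v), Eq. (7)
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(h * f)))
```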

3.2.3. Directional Filtering

Directional filtering is an edge enhancement method whose purpose is to selectively enhance image features along a specific direction component (for example, a gradient); in the output image, areas of uniform pixel values become 0, while areas of differing pixel values appear as brighter edges. The main steps to implement directional filtering in ENVI are as follows: (1) select Filters > Convolutions > Directional; (2) adjust the standard filter settings in the Convolution Parameters dialog box and enter the desired direction (in degrees) in the text box marked “Angle”.
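ENVI's directional filter is applied through the menu steps above; as a rough, hedged approximation of the same idea, the sketch below combines Prewitt responses into the gradient component along a chosen angle (this is not ENVI's exact kernel):

```python
# Approximate directional edge enhancement: project horizontal and vertical
# Prewitt responses onto a chosen direction. Uniform areas map to ~0 and edges
# along the chosen angle are emphasised. This mimics, but is not identical to,
# ENVI's Directional convolution filter.
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
PREWITT_Y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float32)

def directional_filter(img: np.ndarray, angle_deg: float) -> np.ndarray:
    img = img.astype(np.float32)
    theta = np.deg2rad(angle_deg)
    gx = convolve(img, PREWITT_X)                 # horizontal gradient response
    gy = convolve(img, PREWITT_Y)                 # vertical gradient response
    return np.cos(theta) * gx + np.sin(theta) * gy
```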

3.3. Threshold Segmentation

The main role of image segmentation is to distinguish regions with different meanings in the image by finding an optimal threshold; the approach has a clear physical meaning and is easy to operate. The basic principle is as follows: set different feature thresholds and divide the image pixels into several categories. Let the original image be f(x, y); a feature value T is found in f(x, y) according to certain rules and used to segment the image into two parts. Assuming that b0 = 0 (black) and b1 = 1 (white), i.e., the image is binarized [42,43], the core idea is that when the threshold T maximizes the between-class variance of the target and the background, T is the optimal threshold for identifying and extracting the target objects.
W0 = N0 / (MN)        (9)
W1 = N1 / (MN)        (10)
N0 + N1 = MN        (11)
W0 + W1 = 1        (12)
μ = W0μ0 + W1μ1        (13)
σ = W0(μ0 − μ)² + W1(μ1 − μ)²        (14)
where W0 is the proportion of target pixels in the whole image; W1 is the proportion of background pixels in the whole image; μ0 is the average gray level of the target pixels; μ1 is the average gray level of the background pixels; μ is the overall average gray level of the image; M × N is the size of the image; N0 is the number of pixels with gray level less than T; N1 is the number of pixels with gray level greater than or equal to T; and σ is the between-class variance.
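A compact sketch of this criterion is given below: it exhaustively tests each candidate threshold T on an 8-bit gray-level image and keeps the T that maximizes the between-class variance (using the equivalent form W0·W1·(μ0 − μ1)², which equals Equation (14)):

```python
# Minimal Otsu sketch for Eqs. (9)-(14). Assumes the filtered EXG image has been
# scaled to 8-bit gray levels (0-255) before thresholding.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()       # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class proportions W0, W1
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0      # mean gray level of pixels < T
        mu1 = (levels[t:] * p[t:]).sum() / w1      # mean gray level of pixels >= T
        var = w0 * w1 * (mu0 - mu1) ** 2           # equals W0(mu0-mu)^2 + W1(mu1-mu)^2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```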

3.4. Accuracy Verification

To quantitatively evaluate the feasibility of this method, and with reference to OTSU-based evaluation studies [44,45], we defined the true positives (TP) as the number of correctly classified tobacco plants in the extraction results, the false positives (FP) as the number of wrongly classified tobacco plants in the extraction results, and the false negatives (FN) as the number of tobacco plants omitted from the extraction results. The evaluation indices, namely the branching factor (BF), detection rate (DP), and completeness (QP), are calculated as
BF = FP / TP        (15)
DP = TP / (TP + FN) × 100%        (16)
QP = TP / (TP + FN + FP) × 100%        (17)
where BF increases with the number of misclassified tobacco plants; DP is the percentage of correctly classified tobacco plants; and QP represents the quality of the tobacco plant extraction results, with higher QP values indicating better extraction. On the whole, the BF value is negatively correlated with the DP and QP values: the smaller the BF value and the larger the DP and QP values, the better the extraction effect [46,47,48].
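These indices are simple ratios of the TP, FP, and FN counts; the sketch below evaluates them on the test-area counts reported later in Section 4.3 as a consistency check:

```python
# The three evaluation indices of Eqs. (15)-(17), checked against the test-area
# counts reported in Section 4.3 (TP = 57,594, FP = 119, FN = 404).
def accuracy_metrics(tp: int, fp: int, fn: int):
    bf = fp / tp                                   # branching factor, Eq. (15)
    dp = tp / (tp + fn) * 100.0                    # detection rate (%), Eq. (16)
    qp = tp / (tp + fn + fp) * 100.0               # completeness (%), Eq. (17)
    return bf, dp, qp

bf, dp, qp = accuracy_metrics(tp=57594, fp=119, fn=404)
print(f"BF = {bf:.3f}, DP = {dp:.1f}%, QP = {qp:.1f}%")  # ~0.002, 99.3%, 99.1%
```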

4. Results and Analysis

4.1. Exponential Image

Field investigation revealed that tobacco plants are often accompanied by rocks and weeds because of the complex growth environment, which increases the interference in the precise extraction of tobacco plants. To improve the separability of soil and vegetation, the EXG, NGRDI, and EXG-EXR indices were independently calculated using the band-math tool in ENVI 5.5. Table 1 shows that, as the growth environment of the tobacco plants became more complex, the soil–vegetation separability in the NGRDI results gradually decreased, as did that in the EXG-EXR results, whereas the EXG retained a higher ability to extract tobacco plants in complex environments and could better distinguish soil, rocks, and weeds.
In the EXG image of the bare soil area, the tobacco plants were black, the soil background was gray, and the weeds were bright white; the extraction effect of tobacco plants was good, and the object types in the bare soil area could be distinguished. In the image calculated by NGRDI, the tobacco plants and weeds were gray and the soil background was black; tobacco plants and weeds were easily mixed in this result, leading to wrong extraction. In the EXG result image with rock as the background, the tobacco plants were gray and the rocks were black, and the difference between the two was clear. In the NGRDI image, the rocks were scattered and bright white, the tobacco plants were black, and the soil was gray. In the EXG-EXR result image, the tobacco plants were gray and the rocks were black, but the extraction effect for seedling-stage tobacco was poor. In the EXG result image with weeds as the background, both tobacco and weeds were gray and the soil was black. Notably, some parts of the NGRDI image gave results differing from the other color indices: weeds and some tobacco were bright white while the remaining tobacco and the soil were black, so the extraction effect was not ideal. In summary, the NGRDI is prone to mixed extraction of weeds against a weed background, resulting in wrong extraction of tobacco plants; the EXG-EXR easily omits seedling-stage tobacco against a rock background; and the EXG is more efficient in extracting tobacco against weed and rock backgrounds and is more suitable for extracting tobacco plants in topographically fragmented areas.
The sample statistics computed in SPSS 2.0 are shown in Table 2. The gray-scale images respond differently to different ground targets: the DN values of tobacco, weeds, and corn are positive, while the DN values of rock (non-vegetated land) are negative. It can be seen that, by calculating the Excess Green Index, vegetated and non-vegetated land were successfully separated.
According to the statistical characteristics of the four ground features in Table 2, the DN values of tobacco range from 54 to 131, those of weeds from 62 to 142, and those of corn from 70 to 104. The three overlap within the range of 70 to 104, which causes obvious confusion and is not conducive to the extraction of tobacco plants. The average DN value of weeds is 101.24, higher than that of tobacco and corn, and the average DN value of tobacco, 95.74, is close to that of weeds, indicating only limited differences between tobacco and weeds in the statistical characteristics of their DN values. The standard deviation of tobacco is 15.07, greater than that of weeds and corn, indicating that the tobacco values are widely dispersed. The study therefore needed to further separate tobacco, weeds, and corn.

4.2. Image Filtering Enhancement

After the EXG color index was calculated for the test and verification areas, the tobacco plant information was preserved to a certain extent and the noise from rocks and weeds was suppressed, but some noise remained. To increase the accuracy of plant extraction, high-pass, low-pass, and directional filtering were compared for image filtering enhancement. In ENVI 5.5, we selected the “Convolutions and Morphology” tool and the Gaussian high-pass kernel, with the default 3 × 3 transform kernel size, to filter out information at specific frequencies and enhance the images of the study area.
The image results after processing with the different filtering methods show that high-pass filtering cannot enhance relatively complete tobacco plant information and cannot distinguish the surrounding objects well. Directional filtering is slightly better than high-pass filtering for enhancing crop information, but its accuracy is low. Low-pass filtering can enhance the tobacco plant information and filters out the information of surrounding objects well. After low-pass filtering, the average value of the EXG image is 77 and the dispersion of the DN values is reduced; within the DN range of 62–76, the borders of the tobacco plants are better preserved and the main information of the image is effectively retained. Comparing the study area before and after low-pass filtering shows that the DN values of rocks and weeds differ considerably from those of tobacco, so the tobacco plants can be better separated from soil and weeds. The DN values increase in the order blue (soil, rocks) < gray (weeds) < green (tobacco plants).

4.3. Tobacco Plant Extraction

Based on the image obtained after low-pass filtering, a threshold t was set as the gray-level segmentation threshold between tobacco plants and other objects. We selected color slices in ENVI 5.5 and, through multiple threshold parameter adjustments, set a threshold to separate tobacco plants from the other background values and applied a mask to extract the tobacco plots. The class with DN values of 62–76 isolated and identified tobacco plants in ENVI; therefore, the patch size was set according to the threshold of this class in ArcGIS. The gray-level segmentation results at t = 65 were loaded into ArcGIS 10.2. The total area of all patches was 5336.23 m², the largest patch area was 8.79 m², the smallest was 0.00078 m², and the average patch area was 0.012 m². In the threshold extraction process, the area of a single tobacco plant was used to split larger patches and eliminate fine patches. As shown in Figure 6, when the gray-level segmentation threshold was t = 65 and the plant area was S = 5.18 dm², 404 plants were missed (FN) and 119 plants were incorrectly extracted (FP) in the study area, and the actual number of plants extracted (TP) was 57,594. The total number of tobacco plants counted during the field investigation was 58,117; the calculated branching factor (BF) was 0.002, the detection rate (DP) was 99.3%, and the completeness (QP) was 99.1%.
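The patch-based counting described above was carried out interactively in ArcGIS 10.2; the sketch below shows one way the same logic could be scripted, labeling connected patches, discarding fine noise patches, and splitting over-sized patches by the single-plant area. The noise cutoff used here is an assumption for illustration, not a parameter from the paper:

```python
# Hedged sketch of patch-based plant counting: label connected patches in the
# binary tobacco mask, drop patches far smaller than a single plant, and split
# over-sized patches by the single-plant area (S = 5.18 dm^2 = 0.0518 m^2).
import numpy as np
from scipy.ndimage import label

def count_plants(mask: np.ndarray, pixel_size_m: float = 0.027,
                 plant_area_m2: float = 0.0518) -> int:
    labels, n_patches = label(mask)                       # connected tobacco patches
    areas = np.bincount(labels.ravel())[1:] * pixel_size_m ** 2
    min_area = 0.1 * plant_area_m2                        # assumed fine-patch cutoff
    count = 0
    for a in areas:
        if a < min_area:
            continue                                      # eliminate fine noise patches
        count += max(1, int(round(a / plant_area_m2)))    # split patches of merged plants
    return count
```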
To further test the applicability of this research method, the Phantom 4 Pro V2.0 quadrotor UAV was used to collect verification images of the tobacco planting area at the tobacco planting base of Zhenfeng County from 12:00 to 13:00 on 13 June 2021. The conditions were clear with a wind force of 4. The tobacco plants were in the growing period, and weeds and rocks were scattered between the rows of plants. The DN values calculated from the EXG index ranged from 62 to 77, and distinguishing the tobacco plants from weeds in the verification area was difficult. The EXG calculation results were then processed with the same low-pass filter convolution as in the test area; the DN values converged to 70–78, and the low-frequency information of the tobacco plants was well preserved. When the gray-level segmentation threshold was t = 65 and the plant area was S = 5.18 dm², 1685 plants were missed (FN) and 880 plants were incorrectly extracted (FP) in the verification area, and the actual number of plants extracted (TP) was 71,256. Through field investigation, the total number of tobacco plants was estimated to be 71,469; the calculated BF was 0.012, the DP was 97.69%, and the QP was 96.53%.
The method proposed in this study was preliminarily verified in the test area and the verification area, which demonstrates that it is feasible for identifying and counting tobacco plants during the summer growth period; it is better to calculate the EXG during the vigorous period of plant growth. Following the method described above, the image calculated from the EXG index was processed by low-pass filtering in the complex environment of the verification area, and the extraction results for tobacco plants with a plant area of S = 5.18 dm² are shown in Table 3. According to the terrain and tobacco growing environment, the extraction results were divided into flat areas, slope areas, bare rock areas, and weed areas. Because the larger rocks were located at higher positions, they easily blocked the tobacco plants below them, and the UAV images show these partially blocked tobacco plants; at the same time, the large coverage area of the rocks provided a clearer and more consistent background. The effect of weeds on tobacco plant extraction should be judged according to the size of the weed area and its distance from the tobacco plants. When the weed area is large, it can be eliminated by threshold segmentation; when the weed area is similar in size to the tobacco plants, or when weeds and tobacco plants grow together, misinterpretation can easily occur. In general, the detection rate and completeness of this method are 96.61% and 95.66%, respectively, which are reasonable extraction results. In the verification area, the number of seedling-stage tobacco plants was 809, which was relatively large; thus, the extraction accuracy of tobacco plants in the verification area was reduced.
As can be seen from Table 4 and Table 5, this method has a good extraction effect during the vigorous growth period of tobacco. Wrong extraction mainly occurs when a tobacco plant is in the seedling stage, its growth is not obvious, or it is covered by other vegetation. Omissions occurred where plants were sheltered by vegetation or close to shrubs and the morphological characteristics of the tobacco plants were not obvious.

5. Discussion

(1) Applicability of the method. This study achieves the identification and counting of tobacco plants in fragmented terrain at low cost and with high efficiency, overcoming factors such as fragmented plots, scattered planting, and steep terrain. The feasibility of counting plants can provide basic data for accurate yield estimation at the level of single tobacco plants. In this study, the tobacco vegetation characteristics were enhanced by calculating the EXG, and the target characteristics were further enhanced by low-pass filtering; the target detection rate for tobacco plant identification and counting in the fragmented terrain area reached 96.61%, the completeness was 95.66%, and the branching factor was 0.012. This is a good recognition accuracy; however, it is still susceptible to interference from rocks, weeds, and other field backgrounds.
(2) Parameter selection. Mining suitable color features from tobacco images according to the existing RGB color system is an important step in target recognition based on tobacco color features. In addition, by analyzing the color changes of tobacco plants in different periods, a time series data set of tobacco plant growth can be constructed to analyze how the characteristics of tobacco plants change over time. The identification and counting of tobacco plants based on time series remote sensing could also be applied to issues such as tobacco diseases and insect pests and fine management; this is an important direction for future research.
(3) Analysis of the influencing factors. The tobacco plots in the study area had extensive rock coverage and many weeds, which increased the number of erroneous extractions of the target crop. Weeds and tobacco canopies growing together in sheets confuse the identification of tobacco plants, and larger rocks may obscure tobacco, thereby reducing the counted number of tobacco plants. A ground investigation found that some tobacco plants did not survive in the early stages and were replanted by farmers at a later date. Therefore, when the survey was conducted in June, some seedling-stage tobacco plants were present, which increased the difficulty of identification.
(4) Restriction of terrain factors. Distortion in ultra-low-altitude UAV remote sensing images is mainly attributed to geometric distortions caused by the sensors and terrain fluctuations [46]. The terrain of the karst mountainous area is fragmented, and the tobacco planting plots are complex in composition with large height differences, so it is difficult for UAVs to fly at ultra-low altitude close to the ground to collect high-resolution images. Using a fixed flight altitude to collect the images, we found that the tobacco plants are mainly distributed on mountain tops and mountainsides, and a decline in target image information also occurs. Therefore, future research should determine how terrain factors affect the information content of ultra-low-altitude crop targets and identify multi-level visible-light identification features and information attenuation rules for crops at different altitudes, which could effectively improve the quality of visible-light remote sensing images.
(5) Method limitations. During our research, we found that color indices combined with low-pass filtering can achieve good target identification of tobacco plants, but the result is easily disturbed by weeds. Building on the two-dimensional image features used in this study, future research should consider introducing the three-dimensional structural features of tobacco plants into the identification method. To rectify missed and false detections in UAV images, morphological methods need to be considered further, and accurate extraction should be conducted based on the shape of tobacco. For weeds and crops with similar spectral characteristics, which are easily extracted by mistake, joint “morphological-spectral” features could be used to eliminate the interference caused by similar spectra. Moreover, the wrong extraction of weeds and shrubs has not been fully resolved in this paper; in future studies, in-depth research is required to identify better methods of extracting target crops with high precision.

6. Conclusions

Our research explores the applicability and efficiency of using quadrotor UAV visible-light images for precision agriculture monitoring in complex mountainous areas. The results show that the quadrotor UAV is easy to operate, low in cost, can obtain highly accurate images of tobacco plots and identify tobacco plants accurately, and has unique advantages for monitoring special economic crops in fragmented terrain areas. At the same time, this method can monitor the growth of tobacco plants under different agricultural management practices, and can also provide a reference for agricultural refinement, field yield estimation, and modern agricultural management in karst mountainous areas.
In contrast to previous studies, the method proposed in this paper does not require a large number of samples and is highly transferable; it overcomes comprehensive factors such as large terrain height differences and the uneven growth status of tobacco plants, and realizes the target detection of tobacco plants in fragmented terrain, with detection rates of 96.61% and 97.69% in the test and verification areas, respectively. The method introduces low-pass filtering into the identification of crop plants: it filters out the high-frequency part of the image and retains the low-frequency information of the tobacco plants, while suppressing the slowly changing background and improving the accuracy of traditional color-index-based identification of crop plants.
The traditional method of estimating tobacco yield by hand is insufficient to scientifically support the development of refined tobacco planting statistics and improved production management. This study enables the accurate extraction of tobacco plants, which provides a reference method for refined tobacco production. In future research, the registration and fusion of point cloud data and optical images should be completed based on high-precision spatial position relationships, and the color feature information of the target image should be assigned to the matched point cloud data to improve the accuracy of tobacco recognition. At the same time, the differences in the color and morphological characteristics of tobacco at different growth stages should be considered when extracting tobacco plants.

Author Contributions

All authors contributed to this paper. Funding acquisition: Z.Z.; Methods: Y.W.; Acquisition of UAV data: Y.W., W.Z.; Ground and image verification: Y.W., T.Z., W.Z.; Visualization: Y.W., D.H.; Writing and editing: Y.W., D.H.; Review guide: Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China Regional Project (41661088); Guizhou Province “High-level Innovative Talents Training Program ‘Hundred’ Level Talents” Project (Qian Kehe Platform Talent [2016]5674); Guizhou Provincial Science and Technology Fund-funded Project “Fundamentals of Guizhou Science and Technology Cooperation-ZK[2021] General 194” Guizhou Province Graduate Research Fund Project (Qianjiao Cooperation YJSCXJH(2020)103); Guizhou Provincial Graduate Research Fund Project (Qianjiao Cooperation YJSKYJJ[2021]090).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Department of Rural Social and Economic Survey, National Bureau of Statistics. China Rural Statistical Yearbook; China Statistics Press: Beijing, China, 2019. [Google Scholar]
  2. Tao, J.; Shen, G.M.; Xu, Y.M.; Liang, H.B. Application prospect of remote sensing in monitoring and management of tobacco Planting. Acta Table Sin. 2015, 21, 111–116. [Google Scholar] [CrossRef]
  3. Ren, C.; Ju, H.; Zhang, H.; Huang, J. Forest land type precise classification based on SPOT5 and GF-1 images. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 894–897. [Google Scholar] [CrossRef]
  4. Peng, G.; Deng, L.; Cui, W.; Ming, T.; Shen, W. Remote sensing monitoring of tobacco field based on phenological characteristics and time series image—A case study of Chengjiang County, Yunnan Province, China. Chin. Geogr. Sci. 2009, 19, 186–193. [Google Scholar] [CrossRef]
  5. Wang, Z.; Chen, Y.; Mo, J.; Wang, X. Recognition of flue-cured tobacco crop based on spectral characteristics extracted from HJ-1 remote sensing images. Tob. Sci. Technol. 2014, 1, 72–76. [Google Scholar]
  6. Xu, W.; Lan, Y.; Li, Y.; Luo, Y.; He, Z. Classification method of cultivated land based on UAV visible light remote sensing. Int. J. Agric. Biol. Eng. 2019, 12, 103–109. [Google Scholar] [CrossRef] [Green Version]
  7. Xie, Z.; Chen, Y.; Lu, D.; Li, G.; Chen, E. Classification of land cover, forest, and tree species classes with ZiYuan-3 multispectral and stereo data. Remote Sens. 2019, 11, 164. [Google Scholar] [CrossRef] [Green Version]
  8. Betbeder, J.; Laslier, M.; Hubert-Moy, L.; Burel, F.; Baudry, J. Synthetic Aperture Radar (SAR) images improve habitat suitability models. Landsc. Ecol. 2017, 32, 1867–1879. [Google Scholar] [CrossRef]
  9. Li, Z.M.; Zhao, J.; Lan, Y.B.; Cui, X.; Yang, H.B. Research on crop classification based on UAV visible light image. J. Northwest Univ. Agric. For. Sci. Technol. (Nat. Sci. Ed.) 2020, 48, 137–144+154. [Google Scholar] [CrossRef]
  10. Lu, H.; Ma, L.; Fu, X.; Liu, C.; Wang, Z.; Tang, M. Landslides information extraction using object-oriented image analysis paradigm based on deep learning and transfer learning. Remote Sens. 2020, 12, 752. [Google Scholar] [CrossRef] [Green Version]
  11. Qu, X.Z. The application of UAV in agricultural and rural modernization. China Agric. Resour. Zoning 2021, 42, 159+216. [Google Scholar]
  12. Fan, Z.; Lu, J.; Gong, M.; Xie, H. Automatic tobacco plant detection in UAV images via deep neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 876–887. [Google Scholar] [CrossRef]
  13. Pan, H.Z.; Chen, Z.G. Application of UAV hyperspectral remote sensing data in inversion of winter wheat leaf area index. China Agric. Resour. Zoning 2018, 39, 32–37. [Google Scholar]
  14. Li, Q.; Yan, L.; Huang, D.; Zhou, Z.; Zhang, Y. Construction of a small sample dataset and identification of Pitaya trees (Selenicereus) based on UAV image on close-range acquisition. J. Appl. Remote Sens. 2022, 16, 024502. [Google Scholar] [CrossRef]
  15. Sun, Y.; Li, Z.L.; Luo, J.; Wu, T.; Liu, N. Farmland parcel-based crop classification in cloudy/rainy mountains using Sentinel-1 and Sentinel-2 based deep learning. Int. J. Remote Sens. 2022, 43, 1054–1073. [Google Scholar] [CrossRef]
  16. Huang, D.H.; Zhou, Z.F.; Peng, R.W.; Zhu, M.; Yin, L.J.; Zhang, Y.; Xiao, D.N.; Li, Q.X.; Hu, L.W.; Huang, Y.Y. Challenges and main research progress of low altitude remote sensing of crops in southwest plateau mountainous areas. J. Guizhou Norm. Univ. (Nat. Sci. Ed.) 2021, 39, 51–59. [Google Scholar] [CrossRef]
  17. Huang, L.; Wu, X.; Peng, Q.; Yu, X. Depth Semantic Segmentation of Tobacco Planting Areas from Unmanned Aerial Vehicle Remote Sensing Images in Plateau Mountains. J. Spectrosc. 2021, 2021, 6687799. [Google Scholar] [CrossRef]
  18. Zhou, Z.F.; Li, B.; Jia, L.H. Discussion on the Application of Synthetic Aperture Radar Technology in Quantitative Monitoring of Tobacco Planting in Karst Mountains. Bull. Surv. Mapp. 2012, 1, 246–248. [Google Scholar]
  19. Li, W.; Zhang, B.; Zhang, Y. Analysis on the characteristics of tobacco planting suitability in karst areas of Guizhou. Mod. Agric. Sci. Technol. 2016, 11, 82. [Google Scholar]
  20. Fu, Y.; Zhou, Z.F.; Wang, K.; Hu, Y. Research on the suitability of tobacco planting based on the karst plateau in Guizhou. Jiangsu Agric. Sci. 2014, 42, 92–95. [Google Scholar] [CrossRef]
  21. Gao, M.G.; Zhang, J.S.; Pan, Y.Z.; Duan, Y.M.; Zhang, D.J. Combining vegetation index and crop height to retrieve winter wheat leaf area index. China Agric. Resour. Zoning 2020, 41, 49–57. [Google Scholar]
  22. Salamí, E.; Barrado, C.; Pastor, E. UAV flight experiments applied to the remote sensing of vegetated areas. Remote Sens. 2014, 6, 11051–11081. [Google Scholar] [CrossRef] [Green Version]
  23. Berni, J.; Zarco-Tejada, P.J.; Suárez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738. [Google Scholar] [CrossRef] [Green Version]
  24. Zhang, S.X. Research on the advantages and effects of plant protection drones on rice pest control. Agric. Eng. Technol. 2021, 41, 55–57. [Google Scholar] [CrossRef]
  25. Zhao, B.Q.; Ding, Y.C.; Cai, X.B.; Xie, J.; Liao, Q.X.; Zhang, J. Identification of plant number at seedling stage by mechanical direct seeding based on low-altitude uav remote sensing technology. Trans. Chin. Soc. Agric. Eng. 2017, 33, 115–123. [Google Scholar]
  26. Long, Y.; Liu, C.H.; Shuai, D.; Zhang, F.G. Image segmentation of canola based on color similarity in color space. In Proceedings of the 2nd International Conference on Computer Science and Application Engineering, Hohhot, China, 22–24 October 2018; pp. 1–6. [Google Scholar]
  27. Yang, W.; Xu, W.; Wu, C.; Zhu, B.; Chen, P.; Zhang, L.; Lan, Y. Cotton hail disaster classification based on drone multispectral images at the flowering and boll stage. Comput. Electron. Agric. 2021, 180, 105866. [Google Scholar] [CrossRef]
  28. Gnädinger, F.; Schmidhalter, U. Digital counts of maize plants by unmanned aerial vehicles (UAVs). Remote Sens. 2017, 9, 544. [Google Scholar] [CrossRef] [Green Version]
  29. García-Martínez, H.; Flores-Magdaleno, H.; Ascencio-Hernández, R.; Khalil-Gardezi, A.; Tijerina-Chávez, L.; Mancilla-Villa, O.R.; Vázquez-Peña, M.A. Corn grain yield estimation from vegetation indices, canopy cover, plant density, and a neural network using multispectral and RGB images acquired with unmanned aerial vehicles. Agriculture 2020, 10, 277. [Google Scholar] [CrossRef]
  30. Wu, M.Q.; Cui, Q.C.; Zhang, L.; Zhao, N. Tobacco field monitoring and classification method study in mountainous area. Remote Sens. Technol. Appl. 2011, 23, 305–309. [Google Scholar]
  31. Zhu, M.; Zhou, Z.F.; Zhao, X.; Huang, D.H.; Jiang, Y. Identification and extraction method of pitaya individual plant in karst plateau canyon area based on UAV remote sensing. Trop. Geogr. 2019, 39, 502–511. [Google Scholar] [CrossRef]
  32. Bajocco, S.; De Angelis, A.; Salvati, L. A satellite-based green index as a proxy for vegetation cover quality in a Mediterranean region. Ecol. Indic. 2012, 23, 578–587. [Google Scholar] [CrossRef]
  33. Wang, W.; Gao, X.; Cheng, Y.; Ren, Y.; Zhang, Z.; Wang, R. QTL Mapping of Leaf Area Index and Chlorophyll Content Based on UAV Remote Sensing in Wheat. Agriculture 2022, 12, 595. [Google Scholar] [CrossRef]
  34. Yang, B.; Zhu, W.; Rezaei, E.E.; Li, J.; Sun, Z.; Zhang, J. The Optimal Phenological Phase of Maize for Yield Prediction with High-Frequency UAV Remote Sensing. Remote Sens. 2022, 14, 1559. [Google Scholar] [CrossRef]
  35. Shuai, Y.M.; Yang, J.; Wu, H.; Yang, C.Y.; Xu, X.C.; Liu, M.Y.; Liu, T.; Liang, J. Analysis of multi-angle reflection characteristics of rice canopy quadrat based on UAV observation. Remote Sens. Technol. Appl. 2021, 36, 342–352. [Google Scholar]
  36. Woebbecke, D.M.; Meyer, G.E.; Bargen, K.V.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  37. Tang, Y.L.; Liu, R.; Wang, H.Y. A principal component analysis image fusion method based on high-pass filtering. Beijing Surv. Mapp. 2018, 32, 499–503. [Google Scholar] [CrossRef]
  38. Jia, Y.H. Comparison of pixel-based remote sensing image fusion methods. Surv. Mapp. Inf. Eng. 1997, 4, 29–31. [Google Scholar]
  39. Xu, W.C.; Huang, W.; Li, Y.F.; Liu, J.F. Application of low-pass filtering and gray value adjustment in image enhancement. Laser Infrared 2012, 42, 458–462. [Google Scholar]
  40. Zhao, H.J.; Tan, J.X.; Lei, J.; Shu, Z.X. Power line extraction from UAV lidar based on geometric features. Bull. Surv. Mapp. 2020, 145–150. [Google Scholar] [CrossRef]
  41. Liu, J. Research on Attitude Solution Algorithm of Small Unmanned Aerial Vehicle Based on Fuzzy Adaptive. Master’s Thesis, Nanhua University, Hengyang, China, 2019. [Google Scholar] [CrossRef]
  42. Chen, Q.; Zhao, L.; Lu, J.; Kuang, G.; Wang, N.; Jiang, Y. Modified two-dimensional Otsu image segmentation algorithm and fast realization. IET Image Process. 2012, 6, 426–433. [Google Scholar] [CrossRef]
  43. Ji, W.; Tao, Y.; Zhao, D.A.; Yang, J. Research on Iterative Threshold Segmentation Method of Apple Tree Branches Based on CLAHE. Trans. Chin. Soc. Agric. Mach. 2014, 45, 69–75. [Google Scholar]
  44. Fu, G.P.; Chen, T.C.; Zhang, S.A.; Huang, W.F.; Yang, C.Y.; Zhu, L.X. Banana string recognition based on fusion of background saturation compression and Difference Threshold Segmentation. J. Chin. Agric. Mech. 2021, 42, 151–158. [Google Scholar]
  45. Zeng, Y.Y.; Xie, G.S.; Zhang, J.C. Low-light image segmentation based on intercept histogram and Otsu fusion. Adv. Laser Optoelectron. 2021, 58, 219–227. [Google Scholar]
  46. Yuan, X.; Wu, L.; Peng, Q. An improved Otsu method using the weighted object variance for defect detection. Appl. Surf. Sci. 2015, 349, 472–484. [Google Scholar] [CrossRef] [Green Version]
  47. Chen, L.; Xu, B.; Zhao, C.; Duan, D.; Cao, Q.; Wang, F. Application of Multispectral Camera in Monitoring the Quality Parameters of Fresh Tea Leaves. Remote Sens. 2021, 13, 3719. [Google Scholar] [CrossRef]
  48. Wierzbicki, D.; Kedzierski, M.; Fryskowsk, A. Assesment of the Influence of Uav Image Quality on the Orthophoto Production. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Overview of the study area.
Figure 2. DOM diagram of the test area. Note: A refers to exposed rock area, B refers to flat area, and C refers to topographic relief area; (a1,b1,c1) scale is 1:400, (a2,b2,c2) scale is 1:200, (a3,b3,c3) scale is 1:50.
Figure 3. DOM diagram of the verification area. Note: A and C are flat areas, and B is terrain broken areas; (a1,b1,c1) scale is 1:400, (a2,b2,c2) scale is 1:200, (a3,b3,c3) scale is 1:50.
Figure 4. Image extraction RGB value curve. Note: The abscissa is the sampling distance, and the ordinate is the DN value of the feature image.
Figure 5. Method flow chart.
Figure 6. Precision evaluation of tobacco extraction.
Table 1. Index calculation results in different environments.
Rows (background): bare soil area, rocks, weeds. Columns: original image, EXG, NGRDI, and EXG-EXR result images.
Table 2. Sample statistics table.
Land Type    Average Value    Standard Deviation    Minimum    Median    Maximum
Tobacco      95.74            15.07                 54         98        131
Weed         101.24           11.97                 62         103       142
Rock         −11.09           3.70                  −18        −11       20
Corn         86.33            8.07                  70         87        104
Table 3. Image enhancement results.
Columns: original image, high-pass, low-pass, and directional filtering results. Note: high-pass threshold segmentation set to 8–43, low-pass threshold segmentation set to 62–76, directional threshold segmentation set to 10–36.
Table 4. Results of tobacco extraction.
Columns: extraction result images for the flat area, slope area, rocky outcrops, and weed area.
Table 5. Statistics of different extraction types.
Rows: example image chips of true positives, false positives, and false negatives. Columns: types I–V.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
