Article

Residual Mulching Film Detection in Seed Cotton Using Line Laser Imaging

1 College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, China
2 Key Laboratory of Northwest Agricultural Equipment, Ministry of Agriculture and Rural Affairs, Shihezi 832000, China
3 Smart Farm Digital Equipment Technology Innovation Center, Xinjiang Production and Construction Corps, Shihezi 832000, China
* Authors to whom correspondence should be addressed.
Agronomy 2024, 14(7), 1481; https://doi.org/10.3390/agronomy14071481
Submission received: 14 June 2024 / Revised: 2 July 2024 / Accepted: 7 July 2024 / Published: 9 July 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Due to the widespread use of mulching film in cotton planting in China, residual mulching film mixed with machine-picked cotton poses a significant hazard to cotton processing. Detecting residual mulching film in seed cotton has become particularly challenging due to the film’s semi-transparent nature. This study constructed an imaging system combining an area array camera and a line scan camera. A detection scheme was proposed that utilized features from both image types. To simulate online detection, samples were placed on a conveyor belt moving at 0.2 m/s, with line lasers at a wavelength of 650 nm as light sources. For area array images, feature extraction was performed to establish a partial least squares discriminant analysis (PLS-DA) model. For line scan images, texture feature analysis was used to build a support vector machine (SVM) classification model. Subsequently, image features from both cameras were merged to construct an SVM model. Experimental results indicated that detection methods based on area array and line scan images had accuracies of 75% and 79%, respectively, while the feature fusion method achieved an accuracy of 83%. This study demonstrated that the proposed method could effectively improve the accuracy of residual mulching film detection in seed cotton, providing a basis for reducing residual mulching film content during processing.

1. Introduction

Cotton, as a vital commodity, holds an indispensable position in both agriculture and the textile industry. During cotton sales and processing, the impurity content affects grade assessment and price [1]. With the widespread adoption of mulching film technology in China, the content of residual mulching film in seed cotton has increased. A higher residual film content can cause processing difficulties in the textile chain, susceptibility to bleaching spots, uneven dyeing, and reduced quality of cotton yarn and finished products, ultimately affecting the economic market and exports [2]. In recent years, detecting residual mulching film in cotton has become an increasingly prominent challenge, largely because of the film's semi-transparent character. Line laser imaging is an important application of lasers as light sources, offering the advantage of high-energy focusing in the detection field. When illuminated at an angle, the uneven surface of cotton fibers scatters light diffusely in all directions, so the reflected brightness is weak and the reflectance performance is poor. Residual mulching film on the surface, by contrast, reflects light specularly, giving higher reflectivity and better reflectance performance [3,4]. By exploiting the difference between these two reflectance behaviors, residual mulching film can be discriminated from seed cotton. Hua et al. [5] analyzed the reflectance of lint cotton fibers and white foreign fibers, including mulching film, under 650 nm line laser irradiation. They found that both lint cotton and white foreign fiber images exhibited distinct peaks in their grayscale histograms. A simple image binarization segmentation algorithm detected most of the white foreign fibers, with an average detection rate of 92.08% across six types of these fibers.
Notably, the detection rate for transparent mulching film was 77.5%. Liu et al. [6,7] proposed an algorithm for identifying white foreign fibers in lint cotton based on the intensity difference between white foreign fibers and the light reflected from the lint cotton surface under 650 nm line laser irradiation, together with the distribution characteristics of the surface lint in the image. The algorithm achieved an average detection success rate of more than 87%, with the success rate for semi-transparent plastic mulch reaching 95%. Wang et al. [8] used their optimized line laser imaging parameters (wavelength 658 nm, optical power 55 mW, exposure time 36 μs) to acquire 730 lint cotton images; the identification rate of white foreign fiber samples reached 93.7%. Their study showed that the missed samples were mainly poorly reflective residual mulching film. Zhang et al. [9] employed a two-light-source imaging method using an LED and a 658 nm line laser to acquire 840 images. They found that a simple binarized image segmentation algorithm identified white foreign fibers in lint cotton at a rate of 84.1% and dark and colored foreign fibers at 93.9%, outperforming either laser imaging or LED imaging alone. The foreign fibers missed under two-light illumination were mainly semi-transparent mulching film.
In most studies focusing on detecting foreign fibers in lint cotton, the detection rate for white foreign fibers, including residual mulching film, has been relatively high. However, detecting mulching film at the seed cotton stage could facilitate managing foreign fiber content at its source: at this stage, the mulching film has not yet been broken down into more complex foreign fibers, making its detection especially beneficial for quality control. Wei et al. [10] acquired images of seed cotton and 21 types of foreign fibers using two-light-source illumination with a white LED and a red line laser. An improved Sobel edge detection algorithm was used to detect the foreign fibers in the R and S channels. The experimental results showed detection rates of 74.7% for white foreign fibers and 70.8% for colored foreign fibers; light blue, light green, and transparent mulch were difficult to detect because their colors closely resembled the shade of seed cotton and their surfaces were nonreflective. He et al. [11,12] reported foreign fiber detection rates in seed cotton of 90.3% and 86.7%, respectively, using artificial intelligence methods based on deep learning and CNN neural networks under LED illumination or two-light-source illumination combining an LED and a line laser. However, those studies only detected white and colored fibers and did not address foreign fibers such as colorless transparent mulching film.
In the aforementioned literature, there was no specific detection of transparent and semi-transparent mulching film in seed cotton. Line laser imaging methods therefore hold significant potential for detecting mulching film within seed cotton. Accordingly, this study aimed to apply image feature fusion technology to combine area array and line scan image features and to analyze the effectiveness of this technology in detecting mulching film in seed cotton. The specific objectives were to (1) construct a two-camera imaging system to acquire both area array and line scan images; (2) analyze features of residual mulching film and seed cotton in both types of images; and (3) establish support vector machine (SVM) models and compare recognition rates of mulching film in the area array and line scan images, as well as recognition by image feature fusion.

2. Materials and Methods

2.1. Sample Preparation

Seed cotton and residual mulching film samples were collected from cotton fields in Shihezi, Xinjiang. Before the experiment, the seed cotton was manually flattened and spread to form a continuous, uniform layer that measured 35 cm in width and 50 cm in length, with a thickness ranging from 2 to 4 cm. This cotton layer was placed on a conveyor belt. The dimensions of the residual mulching film samples were approximately 2 cm in width and 4 cm in length (Figure 1). During the experiment, the residual mulching film samples were scattered randomly on the surface of the seed cotton layer, and all samples moved together with the conveyor belt.

2.2. Imaging System

Two industrial cameras equipped with optical lenses for image capture were fixed in a rack directly above the conveyor belt and connected to the computer interface through data transmission cables (Figure 2). The line scan camera used in the installation was a GigE industrial camera (Linea GigE 4K, Teledyne DALSA Inc., Waterloo, ON, Canada), with a resolution of 4096 × 2 and a maximum line rate of 48 kHz. Sapera CamExpert (Sapera LT SDK, Teledyne DALSA Inc., Waterloo, ON, Canada) software (https://www.teledynedalsa.com/en/products/imaging/vision-software/sapera-lt/) was used to adjust the camera’s parameters, including the captured image size, the line scanning frequency, and the exposure time (Table 1). The area array camera used was a CMOS camera (HF867, Green Vision Forest, Shenzhen, China), with a resolution of 640 × 480. The line laser (FU650B100-GD16-WLD, Fuzhe Technology Co., Ltd., Shenzhen, China) output was a red laser in the 650 nm band, with adjustable power ranging from 0 to 100 mW. The angle of incidence of the line laser was adjusted using a universal mount. The lens (AF 35 mm f/2D, Nikon Inc., Tokyo, Japan) was a full-frame, wide-angle, fixed-focus lens that maintains stable image quality over time.
The imaging system was encapsulated with black light-shielding cloth to minimize the interference of external light. A total of 120 images were acquired for the area array and another 120 images for the line scan. From the collected 120 area array images, 96 were randomly selected as the training set, and 24 as the test set. Similarly, from the collected 120 line scan images, 96 were randomly selected as the training set, and 24 as the test set.

2.3. Data Processing

2.3.1. Area Array Image

The collected area array images of samples were preprocessed using MATLAB R2020a (the MathWorks Inc., Natick, MA, USA). This mainly included the removal of the raw image’s background, linear interpolation, and local quadratic regression operations to reduce noise interference caused by illumination, exposure unevenness, and electronic equipment, thereby improving the quality of the images [13].
There was a laser line shown in the area array image, and there were differences in the straightness and gray value of the line when the laser illuminated seed cotton and mulching film. The unevenness of the seed cotton surface led to large volatility of the line straightness extracted from the area array image, which increased the difficulty of analyzing the overall laser line trend in the image. Using the local quadratic regression method, a trend line was fitted to the line laser and used as a baseline for each image, with data points that deviated significantly from this baseline being identified as outliers.
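The baseline-and-outlier step above can be sketched as follows. This is a simplified sliding-window variant of local quadratic regression (the paper's smoother settings beyond the frac value are not specified, so the window handling and the 3-sigma outlier rule here are illustrative assumptions):

```python
import numpy as np

def local_quadratic_baseline(y, frac=0.15):
    """Fit a local quadratic trend line to a 1-D signal.

    For each point, a quadratic polynomial is fitted to the
    neighbouring frac * len(y) points and evaluated at that point;
    a plain sliding-window variant of LOWESS without robustness weights.
    """
    n = len(y)
    half = max(2, int(frac * n) // 2)
    x = np.arange(n, dtype=float)
    baseline = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        coeffs = np.polyfit(x[lo:hi], y[lo:hi], deg=2)
        baseline[i] = np.polyval(coeffs, x[i])
    return baseline

def flag_outliers(y, baseline, k=3.0):
    """Mark points deviating from the baseline by more than k sigma."""
    resid = y - baseline
    return np.abs(resid) > k * resid.std()
```

A point where the laser line jumps away from the fitted trend (e.g. a specular spike over film) is flagged while the smooth undulation of the cotton surface is absorbed into the baseline.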
The laser line straightness in the image, the gray value features in the laser line direction, and the gray value features in the direction vertical to the laser line of the preprocessed images were extracted. These features were then used as inputs to train a partial least squares discriminant analysis (PLS-DA) model. The PLS-DA model was used for identifying seed cotton and mulching film [14,15].

2.3.2. Line Scan Image

The collected line scan images were first enhanced using contrast enhancement techniques. These methods involved image intensity adjustment, traditional histogram equalization, and contrast-limited adaptive histogram equalization (CLAHE) [16].
The simple linear iterative clustering (SLIC) algorithm, a superpixel segmentation method based on gridded k-means clustering, clusters pixels to efficiently generate compact and nearly uniform superpixel blocks. The SLIC algorithm achieves superpixel segmentation through the steps of initializing seed points, assigning labels, and calculating a distance metric. Specifically, the algorithm first distributes seed points uniformly within the image, then moves each seed point to the local gradient minimum, then defines a search region around each seed point, assigns a class label to each pixel, and associates it with the nearest seed point. The SLIC algorithm requires few parameters, essentially only the number of superpixel blocks [17,18]. A superpixel block is a small region composed of adjacent pixels that share similar characteristics, such as color, brightness, and texture. These small regions preserve the local structural features of the image and typically do not destroy the boundary information of the objects within the image, retaining most of the valid information for subsequent image analysis.
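The seed-initialization and label-assignment steps can be sketched in numpy as below. This is a deliberately reduced SLIC: the gradient-based seed adjustment and the post-hoc connectivity enforcement of the full algorithm are omitted, the image is grayscale, and the compactness weight m is an illustrative choice:

```python
import numpy as np

def slic_superpixels(img, k=16, m=10.0, n_iter=5):
    """Cluster a grayscale image into roughly k compact superpixels
    using SLIC's combined intensity + spatial distance."""
    h, w = img.shape
    step = int(np.sqrt(h * w / k))          # grid spacing between seeds
    ys = np.arange(step // 2, h, step)
    xs = np.arange(step // 2, w, step)
    # Each center is (intensity, row, col).
    centers = np.array([[img[y, x], y, x] for y in ys for x in xs], float)
    rows, cols = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), int)
    for _ in range(n_iter):
        dist = np.full((h, w), np.inf)
        for ci, (cv, cy, cx) in enumerate(centers):
            # Restrict the search to a 2*step window around the seed.
            y0, y1 = max(0, int(cy) - step), min(h, int(cy) + step + 1)
            x0, x1 = max(0, int(cx) - step), min(w, int(cx) + step + 1)
            dc = img[y0:y1, x0:x1] - cv
            ds = np.hypot(rows[y0:y1, x0:x1] - cy, cols[y0:y1, x0:x1] - cx)
            d = np.hypot(dc, (m / step) * ds)   # combined SLIC distance
            better = d < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = ci
        for ci in range(len(centers)):   # move centers to cluster means
            mask = labels == ci
            if mask.any():
                centers[ci] = [img[mask].mean(),
                               rows[mask].mean(), cols[mask].mean()]
    return labels
```

In practice a library implementation (e.g. scikit-image's slic) would be used; the sketch only shows how the intensity and spatial terms are balanced.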
The residual mulching film was separated from the complex seed cotton background by classifying these superpixel blocks. During classification, the texture features of the superpixel blocks were analyzed, and feature vectors were constructed using a gray-level co-occurrence matrix (GLCM) [19,20]. The GLCM is widely used for texture analysis and image classification. It captures comprehensive information about an image in terms of direction, scale, and magnitude of change by tallying, along a specific direction and distance, how often pairs of gray values co-occur. Texture features with strong discriminative power can be derived from the GLCM, such as correlation, contrast, energy, homogeneity, and entropy [21,22]. Correlation is a statistical measure of the linear relationship between the gray levels of paired pixels; contrast reflects the gray-level difference between the two points of an image point pair; energy reflects uniformity and smoothness; homogeneity reflects the degree of homogeneity of the image; entropy is a measure of randomness reflecting the complexity of the gray-level distribution; and maximum probability, the largest element in the GLCM, represents the most frequent pixel pair in the image. These feature vectors were fed into a support vector machine (SVM) classification model [23,24].
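The GLCM features named above can be computed as in the following numpy sketch. The quantization to 8 gray levels and the single horizontal offset are assumptions for illustration; the paper does not state its level count, distances, or directions:

```python
import numpy as np

def glcm_features(patch, levels=8, offset=(0, 1)):
    """Texture features from a gray-level co-occurrence matrix.

    patch: 2-D array with values in [0, 1). It is quantised to `levels`
    gray levels; pixel pairs at `offset` (here horizontal neighbours)
    are counted, and the normalised matrix yields the features used
    in this paper.
    """
    q = np.clip((patch * levels).astype(int), 0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)          # count co-occurring gray pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    eps = 1e-12
    return {
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + eps),
        "contrast": ((i - j) ** 2 * p).sum(),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
        "entropy": -(p * np.log2(p + eps)).sum(),
        "max_probability": p.max(),
    }
```

A perfectly uniform patch gives energy 1 and contrast 0, while a noisy patch gives high entropy and low energy, which is the contrast the classifier exploits.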

2.3.3. Image Feature Fusion

Information fusion is a multilevel, multistep process that involves the association, correlation, and synthesis of data and information obtained from different sources [25,26,27]. Information fusion primarily includes pixel-level fusion [28], feature-level fusion [29], and decision-level fusion [30]. The feature-level fusion method was used in this paper to deal with area array image features and line scan image features.
As is shown in Figure 3, the steps of image feature fusion were as follows: read the area array image and line scan image; extract the laser line straightness features, the laser line direction gray value features, and the vertical laser line direction gray value features from the area array image; and extract the texture features from the line scan image. The feature data and the corresponding image position coordinates were then formed into a feature vector. This feature vector was input into an SVM model to output the classification result for seed cotton and residual mulching film. Coordinate information in the feature vector was read to establish a correspondence between image coordinates and classification results, thereby determining the classification result at a certain location in the image. Based on the recognized classification results, the channel location of the residual mulching film was identified.
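The fusion step above amounts to concatenating the two feature groups and the region's coordinates into one vector per region. A minimal sketch, with hypothetical feature names and values:

```python
import numpy as np

def fuse_features(area_feats, texture_feats, coord):
    """Concatenate area array features (straightness, laser-line gray
    value, vertical-direction gray derivative), line scan texture
    features (correlation, contrast, energy, homogeneity, entropy),
    and the (row, col) coordinate of the region into one fused vector."""
    return np.concatenate([np.asarray(area_feats, float),
                           np.asarray(texture_feats, float),
                           np.asarray(coord, float)])

def fuse_batch(area_list, texture_list, coords):
    """Stack fused vectors for a batch of regions into an SVM input matrix."""
    return np.vstack([fuse_features(a, t, c)
                      for a, t, c in zip(area_list, texture_list, coords)])
```

Keeping the coordinates in the vector is what later lets the classification result be mapped back to an image position, as described in the text.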

2.4. Assessment

In order to meet the actual needs of cotton production and processing, it is necessary to determine the location on the conveyor belt where the residual mulching film is situated. Based on the spatial distribution of the conveyor channels, the image area corresponding to the moving direction within the camera's field of view was divided, and a correspondence between the spatial position of the conveyor and the image area was established (Figure 4). Using this correspondence, the conveyor channel in which the mulching film was located was determined from the film's position in the image. The channels were numbered in sequence as channel 1, channel 2, channel 3, channel 4, and channel 5. In this study, the length and width of the residual mulching film samples were greater than 20 mm, and the width of each conveyor belt channel was 20 mm.
As the mulching film must be detected and rejected on a cotton processing line, it is necessary to determine whether residual mulching film is present in a channel; this was judged from the ratio of the pixel area of residual mulching film to the total pixel area of the channel. According to preliminary tests, if the threshold ratio was set below 0.07, larger balls of seed cotton might be mistakenly recognized as residual mulching film, increasing the test error; if it was set above 0.07, smaller elongated pieces of mulching film might be missed, lowering the detection rate. A ratio of 0.07 was therefore considered appropriate. As shown in Figure 4, if the pixel area of the mulching film region in channel 3 accounted for more than 7% of the total pixel area of the channel, the presence of mulching film in channel 3 was considered confirmed.
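The channel decision rule can be sketched directly; the binary classification mask and the equal-width strip split are assumptions matching the five 20 mm channels described above:

```python
import numpy as np

def film_channels(film_mask, n_channels=5, ratio=0.07):
    """Decide, per conveyor channel, whether residual film is present.

    film_mask: binary image (1 = pixel classified as film). The image
    is split into n_channels equal strips along the column axis,
    matching the channel layout, and a channel is flagged when film
    pixels exceed `ratio` of its area.
    """
    strips = np.array_split(film_mask, n_channels, axis=1)
    return [bool(s.mean() > ratio) for s in strips]
```

Because the mask is binary, the per-strip mean is exactly the film-to-channel pixel area ratio that the 0.07 threshold is applied to.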

3. Results

3.1. Area Array Image Analysis

3.1.1. Image Preprocessing

As is shown in Figure 5a, the research object was the information of the laser line region in the area array image of the seed cotton and mulching film. The image information was easily interfered with by the background (non-target region), necessitating the removal of the background. The original RGB image was first converted to a grayscale image. According to the analysis of preliminary experimental data, the laser line area should encompass an image region of 60 pixels near the maximum gray value of each row. The image value of the laser line area was converted to 1, and the background was set to 0, forming a binary mask to remove the background. Then, the red channel of the original image was extracted and multiplied with the binary mask previously created (Figure 5b). Linear interpolation was used to compensate for the lack of information caused by concave regions in the image and to reduce the influence of interference information (Figure 5c).
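The background-removal mask can be sketched as below; the sketch operates on a single channel (standing in for the extracted red channel) and the per-row 60-pixel window follows the description above:

```python
import numpy as np

def laser_line_mask(gray, band=60):
    """Keep a `band`-pixel window around the brightest pixel of each
    row (the laser line) and zero everything else, implementing the
    binary-mask background removal described in the text."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    peaks = gray.argmax(axis=1)          # column of max gray per row
    half = band // 2
    for r, c in enumerate(peaks):
        mask[r, max(0, c - half):min(w, c + half)] = True
    return np.where(mask, gray, 0)
```

Multiplying the channel by this mask keeps only the laser line region for the subsequent straightness and gray value analysis.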
Under line laser illumination, mulching film appeared straighter than seed cotton (Figure 5c). The straightness of the laser line in the image was extracted. Initially, the maximum gray level of each row in the image was extracted, and its corresponding positional coordinates were noted. Subsequently, a feature matrix was constructed using the coordinates corresponding to the maximum gray level of each row. This matrix was arranged in ascending order following the laser line direction. Finally, the data curve of the feature matrix was plotted. The local quadratic regression method was used to smooth the straightness curves, with a selected frac value of 0.15. Figure 5d,e show the straightness curve of the laser line before (d) and after (e) smoothing. The straightness of the laser line was later used as one of the features to analyze the area array images.

3.1.2. Image Feature Extraction

After the background removal, the average gray value within the laser line region was extracted along the direction of the line. In Figure 6a,b, there was no discernible difference in straightness between the seed cotton and the mulching film. However, when the gray value in the laser line was extracted in Figure 6c, the mulching film displayed a higher gray value than the seed cotton. The derivatives of the gray value were also extracted, as depicted in Figure 6d. The derivatives showed noticeably higher values near the junction of the seed cotton and the mulching film; this derivative was treated as the gray value feature in the direction vertical to the laser line. Overall, Figure 6 demonstrates greater differences in the gray value and its derivatives between seed cotton and mulching film compared to the differences observed in straightness. Therefore, the gray value and its derivatives were used as image features for the classification model.
Using the preprocessed area array image for analysis, three image features were extracted: laser line straightness, gray value features in the direction of the laser line, and gray value features in the direction vertical to the laser line. The analysis indicated that no single feature could, on its own, accurately distinguish the seed cotton area from the residual mulching film area in an area array image. Therefore, all three features served as the inputs to a PLS-DA model to improve the overall classification accuracy.

3.2. Line Scan Image Analysis

3.2.1. Image Enhancement

The raw image, expressed in dark red, made the mulching film indistinct (Figure 7). After intensity adjustment with a contrast limitation threshold of 100, the mulching film became clearer, though its shape remained vague. Using traditional histogram equalization, the image features of the mulching film were enhanced, but the seed cotton became saturated. By applying CLAHE (contrast limited adaptive histogram equalization), not only were the image details enhanced, but the background was also effectively suppressed. Consequently, images processed with CLAHE were utilized for subsequent classification.
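For reference, CLAHE can be sketched as tile-wise clipped histogram equalization. This is a deliberately reduced version: the tile grid and clip limit are illustrative, and the bilinear blending between neighbouring tile mappings used by full CLAHE is omitted, so tile seams may remain:

```python
import numpy as np

def clahe(img, tiles=(4, 4), clip=0.02, bins=256):
    """Minimal CLAHE sketch for a float image in [0, 1]: per-tile
    histogram equalisation with the histogram clipped at `clip` of the
    tile area and the clipped excess redistributed uniformly."""
    out = np.empty_like(img, dtype=float)
    for ts in np.array_split(np.arange(img.shape[0]), tiles[0]):
        for cs in np.array_split(np.arange(img.shape[1]), tiles[1]):
            tile = img[np.ix_(ts, cs)]
            hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
            limit = max(1, int(clip * tile.size))
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess // bins
            cdf = np.cumsum(hist).astype(float)
            cdf /= cdf[-1]                    # map through the clipped CDF
            idx = np.clip((tile * (bins - 1)).astype(int), 0, bins - 1)
            out[np.ix_(ts, cs)] = cdf[idx]
    return out
```

Clipping the histogram is what limits contrast amplification in near-uniform regions such as the seed cotton background, which is why CLAHE suppresses the background better than plain histogram equalization.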

3.2.2. Superpixel Segmentation

The image was subjected to superpixel segmentation to extract underlying hierarchical features. In this process, pixels exhibiting similar characteristics, such as brightness, smoothness, and texture, were grouped into superpixel blocks (Figure 8). The cyan net-like lines in the figure represent the image segmentation boundaries established by the SLIC algorithm. Although these boundaries originally had no actual width, they were depicted with a width of one pixel to enhance visualization. In most pixel regions, the segmentation lines aligned well with the actual edges of the objects. However, instances of inconsistent boundary alignment were observed. The concurrent presence of seed cotton and mulching film within the same superpixel block could complicate the classification of the pixel block, thereby affecting the accuracy of mulching film recognition. The parameter K in the SLIC clustering method, which determines the number of superpixels created (thereby dividing the image into approximately K superpixel regions), significantly influences segmentation outcomes. Selecting an appropriate value for K can mitigate the impact on recognition accuracy. Multiple tests have shown that a K value of 400 results in more precise superpixel segmentation, with improved edge accuracy and enhanced recognition performance.

3.2.3. Texture Feature

The superpixel blocks of 52 samples each of seed cotton and residual mulching film were individually extracted to show the texture features (Figure 9). Texture features of each block, including maximum probability, correlation, contrast, energy, homogeneity, and entropy, were calculated separately for the two groups. Among these features, the differences in maximum probability and correlation between the two groups were comparatively subtle. Homogeneity demonstrated more pronounced differences between the groups. While the contrast features also exhibited significant differences, they had greater variance, implying less consistency in the feature distinctions. Conversely, the energy and entropy features indicated significant differences with relatively smaller variances, denoting better stability in their feature distinctions. The correlation, contrast, energy, homogeneity, and entropy were used as the input features for each block.

3.2.4. SVM Classification

To build the SVM classification model, from the 120 collected images, 96 were randomly selected to form the training set, while the remaining 24 served as the test set. Texture features were extracted from the superpixels of both the residual mulching film and seed cotton. As shown in Figure 10, due to the irregular shapes of the superpixel blocks, their texture features were characterized using grayscale histogram statistics. In this study, five texture features (correlation, contrast, energy, homogeneity, and entropy) were utilized. Constructing a texture feature vector for a superpixel block involved the following steps: first, a grayscale histogram was drawn for the superpixel block; second, the five texture features were calculated from the histogram and combined into a feature vector E = (Correlation, Contrast, Energy, Homogeneity, Entropy); finally, the feature vector was normalized.
The specific details of the SVM classification model used in the experiment were as follows: The model’s kernel function was the Gaussian kernel. Residual mulching film superpixels were labeled as +1, and seed cotton superpixel blocks were labeled as −1. The trained model was tested using the test set to obtain the confidence level for each superpixel block. If the confidence level was above 0, it was classified as a residual mulching film superpixel and labeled +1. If the confidence level was below or equal to 0, it was categorized as a seed cotton superpixel with the label −1. Figure 11 demonstrates that the model could accurately classify most superpixel blocks. Test results indicated that the residual mulching film superpixel blocks had an accuracy rate of 88.78%, the seed cotton superpixel blocks had an accuracy rate of 83.22%, and the overall correct classification rate was 86%.
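The classification scheme above (Gaussian kernel, labels ±1, decision by the sign of the confidence value) maps directly onto scikit-learn's SVC. The five-feature vectors below are synthetic placeholders standing in for normalized texture vectors, not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical normalised 5-feature texture vectors:
# film superpixels labelled +1, seed cotton superpixels labelled -1.
film = rng.normal(0.7, 0.08, size=(60, 5))
cotton = rng.normal(0.3, 0.08, size=(60, 5))
X = np.vstack([film, cotton])
y = np.array([1] * 60 + [-1] * 60)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
conf = svm.decision_function(X)        # per-block confidence value
pred = np.where(conf > 0, 1, -1)       # > 0 -> film, <= 0 -> cotton
accuracy = (pred == y).mean()
```

The sign of decision_function reproduces the confidence rule described in the text: positive confidence is labeled +1 (film), non-positive confidence is labeled -1 (cotton).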

3.3. Test Results

The area array image training set samples and the line scan image training set samples were used as the training set for image fusion processing. The processing methods were the same as those for the individual processing of area array images and line scan images. Subsequently, the area array image test set samples and the line scan image test set samples were used as the test set for image fusion processing, and the SVM model was used to identify and test the images in the test set. The experimental results are shown in Table 2. In the test set, the accuracy of identifying residual mulching film in seed cotton was 75% when using the area array imaging alone, 79% when using the line scan imaging alone, and 83% when using the image feature fusion method. Therefore, it can be concluded that the image fusion method effectively improves the identification accuracy of residual mulching film in seed cotton.

4. Discussion

In Figure 12a, shadows were generated in the area array images of seed cotton and residual mulching film due to the uneven cotton layer. The advantage of high-energy focusing of the line laser reduced the misclassification rate of the shadows in area array images. However, when the cotton layer was flat, the straightness and other image features of seed cotton and mulching film were quite similar, leading to a small portion of seed cotton being misclassified as mulching film. In Figure 12b, when the line scan images were analyzed, the image texture features at the boundary between seed cotton and residual film were quite similar, making it easy for seed cotton to be misclassified as mulching film.
In this study, a red line laser with a wavelength of 650 nm was employed, while other wavelengths were not tested. Wang et al. [31] found significant optical characteristic differences between lint cotton and mulching film in the blue light (456.1 nm), red light (605.8 nm, 628.9 nm, 685.6 nm), and near-infrared (805.4 nm, 837 nm, 1006.3 nm) bands. Future research could consider using the wavelengths mentioned in their study.
The samples of cotton and mulching film on the conveyor belt drifted due to wind resistance, resulting in blurry images. In this study, the conveyor belt speed was set to 0.2 m/s to maintain image quality. On the production line, the suction generated by fans accelerates the movement of seed cotton and mulching film, creating a discrepancy between this experiment and actual conditions. In future research, the imaging system will be further improved to adapt to the rapid movement of samples.

5. Conclusions

An imaging system for identifying residual mulching film in seed cotton was developed, utilizing both area array images and line scan images. A recognition scheme that integrates features from both types of images was proposed. For the features extracted from the area array images, methods such as background removal, linear interpolation, local quadratic regression, feature extraction, and PLS-DA model recognition were employed. The experimental results indicated a residual mulching film detection accuracy of 75%. For the features extracted from the line scan images, methods such as image enhancement, superpixel segmentation, texture feature extraction, and SVM model recognition were applied. The results showed a residual mulching film detection accuracy of 79%. To further improve the accuracy of residual mulching film recognition, the method of image feature fusion that leverages the characteristics of both imaging techniques was employed. The experimental results demonstrated a residual mulching film detection accuracy of 83%, suggesting that this method was feasible. This study provides a reference for the detection of residual mulching film in seed cotton. In future studies, different wavelengths of laser can be employed for detection, and the imaging system can be further improved to adapt to the rapid movement of seed cotton and residual mulching film samples in the cotton processing line.

Author Contributions

Conceptualization, R.Z., S.W. and Z.W.; methodology, S.W., M.Z. and Z.W.; software, Z.W. and Z.Z.; validation, S.W. and Z.W.; formal analysis, S.W. and Z.W.; investigation, R.Z.; resources, R.Z. and M.Z.; data curation, Z.W. and M.Z.; writing—original draft preparation, S.W., M.Z. and Z.W.; writing—review and editing, M.Z. and R.Z.; visualization, Z.Z., S.W., Z.W. and M.Z.; supervision, R.Z. and M.Z.; project administration, R.Z.; funding acquisition, R.Z. and M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2022YFD2002400), the National Natural Science Foundation of China (32101613), the Science and Technology Bureau of Xinjiang Production and Construction Corps (2022DB003), the Science and Technology Planning Project of the 12th Division of Xinjiang Production and Construction Corps (SRS2022011), and the Guiding Science and Technology Plan Project of Xinjiang Production and Construction Corps (2023ZD053).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Seed cotton and residual mulching film samples.
Figure 2. Structure sketch of the imaging system.
Figure 3. Flowchart of image feature fusion.
Figure 4. Correspondence between the spatial position of the conveyor belt and the image area (the numbers in blue circles indicate the detection channels).
Figure 5. Area array image preprocessing: (a) raw image; (b) grayscale image after background removal (the red arrow and box mark information lost in concave regions); (c) image after linear interpolation (the red arrow marks the compensated region); (d) laser line straightness before smoothing; (e) after smoothing.
Figure 6. Images with more pronounced gray value features along the laser line direction: (a) area array image of seed cotton and residual mulching film; (b) straightness feature; (c) gray value feature along the laser line direction; (d) derivative of gray value perpendicular to the laser line direction.
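The laser line straightness feature used for the area array images (Figure 6b) can be sketched as locating the line's center row in each image column and measuring how far those centers deviate from a straight fit. The intensity-weighted centroid and the threshold value below are illustrative assumptions:

```python
def line_centers(gray, threshold=128):
    """Per-column, intensity-weighted center row of the laser line.

    `gray` is a 2-D list of gray values. Columns with no pixel above
    `threshold` yield None (e.g., shadowed concave regions, which the
    paper fills by linear interpolation).
    """
    h, w = len(gray), len(gray[0])
    centers = []
    for x in range(w):
        num = den = 0.0
        for y in range(h):
            v = gray[y][x]
            if v >= threshold:
                num += y * v
                den += v
        centers.append(num / den if den else None)
    return centers

def straightness(centers):
    """RMS deviation of the line centers from their least-squares line.

    A smooth film surface keeps the laser line nearly straight (small
    value); fluffy cotton bends it (large value).
    """
    pts = [(x, y) for x, y in enumerate(centers) if y is not None]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    slope = sxy / sxx if sxx else 0.0
    resid = [(y - (my + slope * (x - mx))) ** 2 for x, y in pts]
    return (sum(resid) / n) ** 0.5
```

A perfectly flat laser line gives a straightness of zero; local bumps in the cotton surface raise it, which is the discriminative cue the PLS-DA model exploits.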
Figure 7. Image contrast enhancement of 2 samples (the yellow arrow points to the mulching film).
Figure 8. Image enhancement and superpixel segmentation of 2 samples (the yellow arrow points to the mulching film).
Figure 9. Texture features of seed cotton and residual mulching film. (a) Maximum probability; (b) correlation; (c) contrast; (d) energy; (e) homogeneity; (f) entropy.
Figure 10. Superpixel blocks of samples from the training set: (a) seed cotton; (b) residual mulching film.
Figure 11. Recognition results of the SVM model for residual mulching film in four image samples.
Figure 12. Misidentified images: (a) area array image (the red box indicates a flat cotton layer under the line laser); (b) line scan image (the red box indicates cotton misclassified as mulching film).
Table 1. Parameters of the imaging system.
Imaging System | Camera | Laser Power | Exposure Time | Sampling Time | Conveyor Belt Speed | Laser Tilt Angle | Distance to the Surface of Seed Cotton
Area array system | Area array | 30 mW | - | 300 ms | 0.2 m/s | 10° | 14 cm
Line scan system | Line scan | 100 mW | 100 ms | - | 0.2 m/s | 10° | 14 cm
Table 2. Test results.
Methods of Analysis | Recognition Accuracy (Training Set) | Recognition Accuracy (Test Set)
Area array system | 83% | 75%
Line scan system | 88% | 79%
Feature fusion | 90% | 83%
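The feature-level fusion behind the last row of Table 2 can be sketched as standardizing each modality's feature vectors and concatenating them before SVM classification. The z-score normalization below is an assumption (the paper does not specify its scaling), and the SVM itself is omitted:

```python
import math

def zscore_columns(rows):
    """Standardize each feature column to zero mean and unit variance.

    `rows` is a list of feature vectors (one per sample). Constant
    columns are mapped to zero to avoid division by zero.
    """
    cols = list(zip(*rows))
    out = []
    for col in cols:
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
        out.append([(x - mu) / sd if sd > 0 else 0.0 for x in col])
    return [list(r) for r in zip(*out)]

def fuse(area_rows, line_rows):
    """Feature-level fusion: standardize each modality, then concatenate.

    `area_rows` holds the area array features (e.g., straightness, gray
    value statistics) and `line_rows` the line scan texture features for
    the same samples, in the same order.
    """
    a = zscore_columns(area_rows)
    b = zscore_columns(line_rows)
    return [ra + rb for ra, rb in zip(a, b)]
```

Standardizing per modality keeps the physically different feature scales (pixel deviations versus GLCM statistics) from dominating the SVM's margin; the fused vectors are then fed to the classifier exactly like any single-modality feature set.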

Wang, S.; Zhang, M.; Wen, Z.; Zhao, Z.; Zhang, R. Residual Mulching Film Detection in Seed Cotton Using Line Laser Imaging. Agronomy 2024, 14, 1481. https://doi.org/10.3390/agronomy14071481

