Article

Convolutional Neural Network Model for Variety Classification and Seed Quality Assessment of Winter Rapeseed

Piotr Rybacki, Janetta Niemann, Kiril Bahcevandziev and Karol Durczak

1 Department of Agronomy, Faculty of Agronomy, Horticulture and Bioengineering, Poznań University of Life Sciences, Dojazd 11, 60-632 Poznań, Poland
2 Department of Genetics and Plant Breeding, Faculty of Agronomy, Horticulture and Bioengineering, Poznań University of Life Sciences, Dojazd 11, 60-632 Poznań, Poland
3 Agricultural College of Coimbra (ESAC/IPC), Research Centre for Natural Resources, Environment and Society (CERNAS), 3045-601 Coimbra, Portugal
4 Department of Biosystems Engineering, Faculty of Environmental and Mechanical Engineering, Poznań University of Life Sciences, Wojska Polskiego 50, 60-637 Poznań, Poland
* Author to whom correspondence should be addressed.
Sensors 2023, 23(5), 2486; https://doi.org/10.3390/s23052486
Submission received: 20 December 2022 / Revised: 16 February 2023 / Accepted: 20 February 2023 / Published: 23 February 2023
(This article belongs to the Section Sensor Networks)

Abstract:
The main objective of this study is to develop an automatic classification model for winter rapeseed varieties and to assess seed maturity and damage based on seed colour using a convolutional neural network (CNN). A CNN with a fixed architecture was built, consisting of an alternating arrangement of five classes (Conv2D, MaxPooling2D, and Dropout), for which a computational algorithm was developed in the Python 3.9 programming language, creating six models depending on the type of input data. Seeds of three winter rapeseed varieties were used for the research. Each imaged sample weighed 20.000 g. For each variety, 125 weight groups of 20 samples were prepared, with the weight of damaged or immature seeds increasing by 0.161 g from group to group. Each of the 20 samples in a weight group differed in its seed distribution. The validation accuracy of the models ranged from 80.20% to 85.60%, with an average of 82.50%. Higher accuracy was obtained when classifying mature seed varieties (average of 84.24%) than when classifying the degree of maturity (average of 80.76%). It can be stated that classifying seeds as fine as rapeseed is a complex process that creates major problems and constraints: samples belonging to the same weight group differ distinctly in their seed distribution, which causes the CNN model to treat them as different.

1. Introduction

Rapeseed (Brassica napus L.) is the second largest source of vegetable oil in the world, after soya, and the first on the European continent [1,2,3,4,5,6]. According to a report by the United States Department of Agriculture Foreign Agricultural Service [7], global rapeseed production for the 2021/2022 season amounted to 73.86 million tonnes. Eurostat [8] and the International Grains Council report that European Union countries produced over twenty million tonnes of seeds. This means that EU rapeseed production, although 9% lower than the record 2020/2021 season, is more than 5% higher than the average of the last five years [8].
The value of rapeseed seeds, which are a raw material for the oil industry, depends strictly on both the harvesting technology (maturity, amount of damage) and the conditions and methods of postharvest handling, especially drying, cleaning, transport, and storage. Therefore, from the point of view of technological value, an immensely important problem in rapeseed production is the reduction of seed quality losses. Mechanical damage also initiates unfavourable chemical and biological transformations in the seeds, which result, inter alia, from their morpho-anatomical structure and chemical composition [2,3,9,10,11].
Mechanical damage to rapeseed seeds most often occurs during harvesting and transport where, according to Wang et al. [12], mechanical forces act on the seeds at the speed with which moving machinery parts strike them. This speed, along with moisture as well as seed size, maturity, drying time, and storage conditions, has a decisive influence on the number and degree of damage. Inappropriate operation of the machinery used contributes largely to this type of damage. As research shows, up to 15% of micro- and macro-damage to seeds occurs during harvesting because of inadequate adjustment of individual harvester components to the maturity and moisture of the harvested crop.
The resistance of rapeseed seeds to mechanical damage also depends significantly on their moisture content. A higher water content means greater seed flexibility and greater susceptibility to deformation. At a low water content, on the other hand, seeds become hard and brittle, and external stresses contribute to the formation of cracks and to seeds splitting in half. According to Stępniewski et al. [13] and Szwed [14], the dynamic strength of seeds initially increases as the water content rises and then decreases. Seeds with a moisture content below 7% are the least resistant, and their harvesting and transport (especially pneumatic transport) cause major damage.
The outer surface of the seed has numerous pores and holes that increase the surface area of the seed coat. They also increase the coefficient of wall friction, which affects the contact between the seeds and the working elements of the threshing unit. In the polar part of the seed, there is a dent caused by the radicle located under the seed coat. The fact that the seed is not a homogeneous, smooth sphere, with a sphericity index ranging from 0.85 to 0.9, suggests that there are areas on the seed coat surface that are especially susceptible to cracking [15,16,17,18]. The edge of the seed coat, cracked under contact stresses, is nonhomogeneous and jagged, influenced by the unevenness of the seed coat.
Deep neural networks (DNNs) and convolutional neural networks (CNNs) are modern, dynamically developing tools used to solve tasks of multilevel complexity, e.g., image analysis and object recognition [19,20,21], face and action recognition [22], driver monitoring [23,24,25], and quality assessment of products, agricultural produce, and other biological materials [26,27]. DNNs have significantly improved the accuracy of speech recognition [28,29,30] and related tasks, such as machine translation [31], natural language processing [32], and sound generation [33]. Models generated by DNNs play an important role in understanding the genetics of illnesses such as autism and spinal muscular atrophy [34]. They are also used in medical imaging for detecting cancer of the skin [35], the brain [36], and the breast [37]. Deep neural networks are also used in robotics for programming manipulators [38], motion path planning for ground robots [39], visual navigation [40], aircraft control [41], and autonomous vehicle trajectory control [42].
The possibility of using CNNs and image analysis to assess the quality of food products and roots, and to identify weeds, diseases, and pests of crops, currently attracts the interest of many researchers [43,44,45,46,47]. Computer image analysis has become one of the main techniques used in agriculture to assess seeds and grains in terms of quality losses, quantifying their degree of mechanical damage, maturity stage, disease infestation, or contamination with other plant species [48]. Due to its noninvasive character and increasing computing power, machine image analysis has significant advantages over labour-intensive and costly methods that destroy the material being assessed [49,50]. Computer techniques enable the implementation of precision agriculture technology in agronomic treatments, balanced fertilization, and strip spot application of plant protection products [51,52].
The use of computer image analysis to assess the quality of seeds, leaves, inflorescences, or even whole rapeseed plants is presented in numerous studies. Xia et al. [53] used hyperspectral image analysis (HSI) to detect stress in rapeseed plants caused by prolonged waterlogging. The authors studied images of leaves of two rapeseed varieties collected during three periods of plant growth under waterlogged conditions. Kong et al. [54] used hyperspectral imaging with a spectral range of 384–1034 nm to detect Sclerotinia sclerotiorum on rapeseed stems, and Zhao et al. [55] on petals. The hyperspectral imaging method was also used by Zhang et al. [56] to determine the soluble protein content, by Zhang et al. [57] the soluble sugar content, and by Bao et al. [58], in turn, to detect glutamic acid in rapeseed leaves. Olivos-Trujillo et al. [59] used near-infrared spectroscopy (NIR) and image analysis to determine the fat content and other qualitative parameters of rapeseed seeds. In that study, the authors presented three predictive models, of which the ANN-based (Artificial Neural Network) model had the highest accuracy. Zhang et al. [60], in turn, used hyperspectral imaging of leaves to rapidly estimate rapeseed yield. Image analysis is also a good method to assess the nutrition level of plants and estimate the amounts of micro- and macro-nutrients, and it is a great tool to support decision-making on mineral fertilization under precision agriculture conditions [61]. The development of artificial intelligence and the use of CNNs in agricultural practice allow rapid and highly accurate identification of objects and non-destructive diagnostics of real-world objects, including plant materials. CNN models are essential in the application of ‘Agriculture 4.0’ technology and digital data analysis. With this in mind, the authors set themselves the main objective of developing an automatic classification model for winter rapeseed varieties using a CNN, based on an evaluation of seed maturity and seed damage expressed through seed coat colour. In this study, an attempt was made to develop a CNN structure and an algorithm describing that structure, in order to facilitate the identification of oilseed rape seeds and their degree of damage. In agricultural practice, the ability to quickly assess the degree of seed damage is important for storage and for suitability for the processing industry.

2. Material and Research Method

2.1. Data Set Preparation

Seeds of three winter rapeseed varieties, i.e., Atora F1, Californium, and Graf F1, were used for this study; they were obtained from the Dłoń experimental station (51°41′23″ N, 17°04′10″ E) of the Poznań University of Life Sciences. The experimental plots were characterized by soil quality class III, a heavy soil type, and the good rye complex of agricultural suitability. The mean annual temperature was 9.93 °C, and the annual sum of precipitation was 553.67 mm. The seeds were cleaned on sieves, at which time all foreign bodies, such as dust, soil residues, stones, and siliques, were removed from the samples. The seeds were then stored in paper bags at room temperature (20–25 °C). Each imaged sample weighed 20.000 g, which allowed it to tightly cover the bottom of the plate, and for each variety 125 weight groups were prepared, with the weight of damaged or immature seeds increasing by 0.161 g from group to group. The partitioning thresholds were determined by the range of the laboratory scale and its minimum weighed amount, which was 160 mg. In each group there were, in turn, 20 samples of the same weight (i.e., 20.000 g) but with differently spread rapeseed seeds. The imaged samples were labelled with a code containing the variety symbol and a sequence number, i.e., atora.0–atora.2499, californium.0–californium.2499, and graf.0–graf.2499. A detailed list of sample codes and seed weights is shown in Table 1.
Images of the rapeseed seeds were taken with a digital camera with a 1/2.3-inch class sensor of 4288 × 3216 pixels (14 megapixels). The camera was equipped with a 36× optical zoom lens; its shortest focal length was 24 mm, corresponding to a maximum aperture of 1:2.9. Seed imaging was performed at maximum zoom, with the imaged surface at a distance of 40 cm from the lens. Imaging was carried out in a chamber with a black, non-reflective surface, illuminated by three light sources at 800 lumens. The image files were stored in the camera’s internal memory and saved at 96 dpi (2139 × 1888 pixels) in the computer’s memory (Figure 1).

2.2. Defining Seeds Classification Criteria

In optical object recognition and classification, it is very important to select appropriate features of the analyzed image that describe the objects unambiguously. The analyzed images of rapeseed seeds contain small, low-contrast objects, which was a determinant in the selection of their resolution. In the classification and recognition of seed images of the Atora F1, Californium, and Graf F1 varieties, the basic criterion was the colour of the mature seeds or, at the same sample weight, the colour of the immature seeds, i.e., the different weight groups were compared in pairs. When assessing the degree of seed maturity in individual rapeseed varieties, it was assumed that mature seeds suitable for long-term storage contain no more than 1% immature or damaged seeds. Therefore, among the samples analyzed, the first and second weight groups were considered mature, i.e., samples atora.0–atora.39, californium.0–californium.39, and graf.0–graf.39. These weight groups were treated as one set for the given variety, which was compared with the others, considered non-compliant.

2.3. Experimental Set Up

In this study, the algorithms were developed in the Python 3.9 programming language using the scientific computing libraries (environments) TensorFlow 2.0, Keras, SciPy, and NumPy. The TensorFlow 2.0 library is a scalable and cross-platform programming interface for running machine learning algorithms. Keras is a specialized API (Application Programming Interface) intended for creating neural networks, originally designed as a support class for the TensorFlow 2.0 library. SciPy, on the other hand, is an open-source Python library used to solve scientific and mathematical problems. It is built on the NumPy extension and allows the user to manipulate and visualize data with a wide range of high-level commands.
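As a minimal sketch, the software stack described above reduces to the following imports; the version checks are illustrative and not part of the published code:

```python
import numpy as np            # array storage for image data
import scipy                  # scientific and mathematical utilities
import tensorflow as tf       # machine learning back end
from tensorflow import keras  # high-level API for building the CNN

# Confirm the environment roughly matches the versions used in the study.
print(tf.__version__)         # expected: 2.x
print(np.__version__)
```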

2.4. Loading and Pre-Processing a Data Set

Conceptually, an image in its simplest single-channel form (e.g., binary, monochrome, greyscale, or black and white) is a two-dimensional function f(x, y) mapping a coordinate pair to a real number related to the intensity (colour) of a given point. An image can have multiple channels; in an RGB image, for example, the colour is represented using three channels: red, green, and blue. For an RGB colour image, each pixel at the coordinates (x, y) can be represented by a triple (Ir(x,y), Ig(x,y), Ib(x,y)). To be processed, the image f(x, y) must be digitized in spatial and amplitude terms. Digitization of the spatial coordinates (x, y) is called image sampling, and amplitude digitization is called grey-level quantization. The pixel value for a channel is usually represented as an integer in the range 0–255 or a floating-point value in the range 0–1. The image is stored as a file, of which there are many different types. Each file usually holds data that can be extracted as 2D arrays for binary or greyscale images and as 3D arrays for RGB colour images.
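A small worked example may make this representation concrete; the array dimensions below follow the stored image size given in Section 2.1, and the pixel coordinates are arbitrary:

```python
import numpy as np

# Hypothetical RGB image: rows (height) x columns (width) x 3 channels,
# with intensities stored as 8-bit unsigned integers in [0, 255].
img = np.zeros((1888, 2139, 3), dtype=np.uint8)

# The triple (Ir(x,y), Ig(x,y), Ib(x,y)) for a point (x, y); note that
# NumPy indexes the row (y) first and the column (x) second.
x, y = 100, 50
r, g, b = img[y, x]

# The same image quantized to floating-point values in [0, 1].
img_float = img.astype(np.float32) / 255.0
```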
When working with the rapeseed seed images, they are loaded into NumPy arrays using the “uint8” data type (i.e., unsigned, 8-bit fixed-point numbers), which takes values in the range [0, 255], quite sufficient for storing pixel information in RGB images. Two TensorFlow 2.0 modules were used to prepare the data set: tf.io, for loading and storing data, and tf.image, for decoding raw content and resizing images.
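A minimal sketch of this loading step, assuming a hypothetical file name (the published code is in the GitHub repository cited below); the 200 × 200 target size anticipates the CNN input described in Section 2.5:

```python
import tensorflow as tf

raw = tf.io.read_file('atora.0.jpg')         # tf.io: read the raw bytes
img = tf.image.decode_jpeg(raw, channels=3)  # tf.image: decode to a uint8 tensor
img = tf.image.resize(img, [200, 200])       # resize (returns float32)
img = tf.cast(img, tf.uint8)                 # back to 8-bit pixel values
print(img.shape, img.dtype)                  # (200, 200, 3) uint8
```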
Firstly, the contents of the files were checked and a list of image names of the rapeseed seed samples was generated using the pathlib library. The images were then visualized and resized according to code 1, available at https://github.com/piotrrybacki/seed-quality-CNN (accessed on 19 December 2022) (Figure 2).
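A sketch of what the listing step in code 1 might look like; the directory name and file extension are assumptions, and the published version is in the repository above:

```python
from pathlib import Path

data_dir = Path('rapeseeds_images')          # hypothetical source directory
files = sorted(data_dir.glob('*.jpg'))

print(f'{len(files)} images found')          # expected: 7500
for f in files[:5]:                          # preview the first few names
    print(f.name)                            # e.g., atora.0.jpg
```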
The displayed file list shows that the data set contains 7500 images of winter rapeseed seeds, 2500 per variety, and occupies approximately 9.5 GB. The images of the rapeseed seed samples were arranged in two ways, depending on the type of analysis being conducted. For the recognition of rapeseed variety, the images were divided into three subsets: a learning set containing 4500 samples (1500 samples from each variety), and validation and test sets containing 1500 samples each (500 samples of each variety). For the seed maturity assessment, each variety was in turn divided into a learning set containing 1500 samples and validation and test sets containing 500 samples each. Depending on the type of analysis conducted, the models based on the proposed CNN architecture were labelled according to the data in Table 2.
Listing 2 shows code 2 (https://github.com/piotrrybacki/seed-quality-CNN; accessed on 19 December 2022) for automatically copying images from the source directory to the learning, validation, and testing directories.
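A hedged sketch of that copying step: the directory layout and the contiguous index ranges are assumptions (the published code assigns images based on a random sequence, as noted in Section 3), while the 1500/500/500 per-variety split follows the text above:

```python
import shutil
from pathlib import Path

src = Path('rapeseeds_images')               # hypothetical source directory
splits = {
    'train': range(0, 1500),
    'validation': range(1500, 2000),
    'test': range(2000, 2500),
}

for variety in ('atora', 'californium', 'graf'):
    for split, indices in splits.items():
        dst = Path('data') / split / variety
        dst.mkdir(parents=True, exist_ok=True)
        for i in indices:                    # copy each sample image
            shutil.copy(src / f'{variety}.{i}.jpg', dst / f'{variety}.{i}.jpg')
```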

2.5. Multilayer Architecture of CNN Network

The network was implemented using the Keras interface. Due to the extensive analysis of the imaged seeds, the overall structure of the CNN is an alternating arrangement of five classes: Conv2D (with the ReLU activation function), MaxPooling2D, and Dropout. By default, the Conv2D class assumes that the input data are compatible with the NHWC format, where N stands for the number of images in the batch, H and W designate the height and width of the image, respectively, and C is the number of channels. As shown in Figure 3, each convolutional layer was followed by a pooling layer for subsampling, reducing the size of the feature map. The MaxPooling2D class creates max-pooling layers. The pool_size = 2 argument specifies the size of the window used to calculate the maximum value, and the strides parameter configures the step of the pooling layer. The use of the Dropout class allows the construction of a dropout layer for regularization, where the rate argument determines the probability of input units being dropped during network learning. When calling this layer, it is possible to regulate its operation with the training argument, which determines whether the call occurs during learning or inference.
The input tensor was arbitrarily transformed to 200 × 200 feature maps, finally producing 7 × 7 feature maps just before the flattening layer. The depth of the feature maps gradually increases through the network from 32 to 128, while their size decreases (from 200 × 200 to 7 × 7). As the model under development uses binary classification, the network ends with two Dense layers: one with a dimension of 512 and a ReLU activation function, and a second with a dimension of 1 and a sigmoid activation function. Listing 3, attached at https://github.com/piotrrybacki/seed-quality-CNN (accessed on 19 December 2022), shows the programming code for the model in Figure 3.
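For reference, a sketch of the architecture reconstructed from the layer shapes and parameter counts in Table 3; the dropout rate and the optimizer are assumptions not specified in the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(200, 200, 3)),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.2),                      # rate assumed
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.2),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.2),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid'),    # binary output, as in the text
])
model.compile(optimizer='adam',               # optimizer assumed
              loss='binary_crossentropy', metrics=['accuracy'])
model.summary()  # reproduces the 6,795,457 parameters listed in Table 3
```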
Considering the name of each file (sample image), the algorithm automatically sorted and copied the images to the appropriate directory, from which they were then loaded by the CNN model, depending on the type of comparison executed.
The next stage of the developed model is to plot the loss curves and the analysis and prediction accuracy values according to code 4, available at https://github.com/piotrrybacki/seed-quality-CNN (accessed on 19 December 2022).
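A sketch of that plotting step; `history` is the object returned by Keras model training, and `train_ds` and `val_ds` are placeholders for the training and validation data sets prepared above:

```python
import matplotlib.pyplot as plt

history = model.fit(train_ds, validation_data=val_ds, epochs=30)

epochs = range(1, len(history.history['loss']) + 1)
plt.plot(epochs, history.history['loss'], label='training loss')
plt.plot(epochs, history.history['val_loss'], label='validation loss')
plt.plot(epochs, history.history['accuracy'], label='training accuracy')
plt.plot(epochs, history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()
```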
The final stage of the analysis is to display the prediction results in the form of probabilities of belonging to each class (variety or maturity) and to transform them into predicted classes using the function tf.argmax, which selects the class with the highest probability of membership and assigns the corresponding label, i.e., the name of the variety or the maturity state. This was done for a group of 10 examples in each model, and both the input data and the predicted labels were visualized according to code 5, attached at https://github.com/piotrrybacki/seed-quality-CNN (accessed on 19 December 2022).
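A sketch of that prediction step for a batch of 10 test images; `test_batch` is a placeholder. With the single sigmoid output used here, picking the most probable class reduces to thresholding at 0.5, whereas tf.argmax would apply to a per-class probability vector:

```python
import tensorflow as tf

probs = model.predict(test_batch)            # shape (10, 1): sigmoid outputs

# 'True' = mature seeds, 'False' = immature seeds, as in the maturity models.
labels = ['True' if p > 0.5 else 'False' for p in probs[:, 0]]
for i, label in enumerate(labels):
    print(f'sample {i}: mature = {label}')
```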

3. Results of the Analysis

The result of the conducted analyses is a proposal for a CNN architecture and code in Python 3.9 that enable the automatic comparison and recognition of fully mature rapeseed seed varieties and the assessment of their degree of immaturity or damage. Table 3 summarizes the changes in feature map size depending on the layer number of the developed CNN model. As the data show, each hidden layer of the CNN model shrinks the feature maps, yielding 6,795,457 parameters at the output.
The code developed for the proposed CNN architecture allowed images to be automatically sorted into the training, validation, and test directories. Then, based on a random sequence, the algorithm trained the individual models without operator intervention and validated them; the results are shown in Figure 4.
The main objective of the analyses was to develop as accurate a model as possible for classifying oilseed rape seeds; therefore, the primary measure was validation accuracy. As can be seen in Figure 5, this accuracy initially increased up to 30 epochs, then stabilised up to 40 epochs, after which it decreased, possibly due to overtraining of the model. Therefore, 30 epochs were adopted as the optimal value.
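As an aside, not part of the published pipeline: a common safeguard against the overtraining observed beyond 40 epochs is to stop training when validation accuracy stops improving, rather than fixing the epoch count by hand, e.g.:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once validation accuracy has not improved for 5 epochs and
# roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_accuracy', patience=5,
                           restore_best_weights=True)
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=50, callbacks=[early_stop])
```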
As presented in Table 4, the validation accuracy of the models ranged from 80.20% to 85.60%, with an average of 82.50%. Higher accuracy was obtained when classifying mature seed varieties (average of 84.24%) than when classifying the degree of maturity (average of 80.76%). The highest accuracy (85.60%) was obtained for the RAPESEEDS_CG model, classifying mature seeds of the Californium and Graf F1 varieties, and the lowest (83.24%) for RAPESEEDS_AC, classifying mature seeds of the Atora F1 and Californium varieties. When assessing seed maturity, in turn, the highest accuracy (81.17%) was obtained for the RAPESEEDS_GQ model, classifying the Graf F1 variety, and the lowest (80.20%) for the RAPESEEDS_AQ model, classifying the maturity of the Atora F1 variety.
The final result of the analysis was to display, according to code 5, the prediction results as probabilities of belonging to each class. The developed algorithm searched for the class with the highest probability of membership and assigned the corresponding label: the variety name in the models RAPESEEDS_AC, RAPESEEDS_CG, and RAPESEEDS_GA, and, for maturity in the models RAPESEEDS_AQ, RAPESEEDS_CQ, and RAPESEEDS_GQ, a conventional label of “True” for mature seeds or “False” for immature seeds. This was done for a group of 10 examples in each model, and both the input data and the predicted labels were visualized (Figure 6). For four models (RAPESEEDS_GA, RAPESEEDS_AQ, RAPESEEDS_CQ, and RAPESEEDS_GQ), three out of 10 samples were misidentified; one model (RAPESEEDS_CG) misidentified one imaged sample, and one model (RAPESEEDS_AC) misidentified two samples.

4. Discussion

The identification and classification of rapeseed and cereal seeds have become an important part of their storage and further processing, where information on their type and quality is required. Seed classification of the rapeseed varieties Bristol, Californium, Dexter, Finesse, Licord, Orkan, and Valeska was conducted by Kurtulmuş and Ünal [62] using algorithms programmed in the Python 2.7 language and the SciPy, NumPy, and scikit-image environments. Using various prediction methods, they achieved an overall classification accuracy of 99.24%, claiming that it was even possible to achieve 100.00% model accuracy. However, such an accuracy rate is not recommended in machine learning and computer image analysis, due to the danger of overtraining the model. Research on the classification of rapeseed seeds has also been conducted by Zou et al. [63], who used the potential of visible and near-infrared spectra and a Back Propagation Neural Network (BPNN), proposing a model with 100.00% accuracy. Sun et al. [64], in turn, used a CNN to recognize rapeseed plants in the field. They studied the effect of adding hidden convolutional layers on model accuracy and showed that increasing the number of hidden layers does not significantly improve the accuracy of the model, obtaining the highest average recognition accuracy of 93.54% and the minimum loss function value of 0.206 with three convolutional layers. Jung et al. [65], on the other hand, applied three CNN architectures to recognize rapeseed in early growth stages, rotating the plant images in 10° steps, and achieved validation accuracies ranging from 13.04% to 88.89%, with an average of 58.34%. Comparing those results with the model proposed in this study, which has an average validation accuracy of 82.50%, it can be concluded that it meets expectations for the accuracy of rapeseed seed recognition. According to the research of Ni et al. [66] on the classification of corn seeds and Lin et al. [67] on the classification of rice seeds, it is possible to obtain higher accuracy (over 90%) with larger research objects, as it is easier to analyze their texture.
Zhang et al. [68] proposed a CNN-based algorithm for citrus fruit detection, quality classification, and automatic identification of the five most common diseases. The authors tested several state-of-the-art network architectures on a set of 1524 images taken under field conditions in different orchards at different time intervals, scales, angles, and lighting conditions. They obtained a fruit identification precision and accuracy of 87.2% and 89.0%, respectively. Bernardes et al. [69] used CNN methods to discriminate Fusarium head blight (FHB)-infected seeds of the wheat cultivar TBIO Toruk. The models achieved 99% accuracy in detecting FHB in seeds. These results suggest the potential of imaging technology and deep learning models for accurate seed classification.
Howard et al. [70], on the other hand, introduced a CNN-based model architecture called MobileNets for object detection, geolocalisation, fine-grained classification, and face recognition, while Hamid et al. [71] used the MobileNetV2 convolutional neural network to classify 14 different seed classes, achieving accuracies of 98% and 95% in the training and test sets, respectively. Albarrak et al. [72] also used MobileNetV2 with a dataset containing eight different classes of date fruit, achieving 99% accuracy. The proposed model was also compared with other existing models, such as AlexNet, VGG16, InceptionV3, ResNet, and MobileNets.

5. Conclusions

This study proposes an automatic classification model for winter rapeseed seeds of three varieties and an assessment of their degree of maturity based on colour contrast using a CNN. A CNN with a fixed architecture was built, consisting of an alternating arrangement of five classes (Conv2D, MaxPooling2D, and Dropout), for which a computational algorithm was developed in the Python 3.9 programming language using the scientific computing environments TensorFlow 2.0, Keras, SciPy, and NumPy, creating six models depending on the type of input data. The algorithm proposed in this study, described with code, allows the number of classes to be changed smoothly, and the images copied to the training, validation, and test directories to be changed and randomized, making data analysis much easier.
The validation accuracy of the models presented in this study ranged from 80.20% to 85.60%, with an average of 82.50%. Higher accuracy was obtained when classifying mature seed varieties (average of 84.24%) than when classifying the degree of maturity within a single variety (average of 80.76%). This is because immature or damaged seeds of the varieties tested did not differ significantly in colour: after damage to the seed coat, the seeds were a similar yellow colour. These results can be seen in Figure 6, where for four models three out of 10 samples were misidentified, one model misidentified one imaged sample, and one misidentified two samples. It should be added that, in the case of variety classification, all samples came from the same weight groups.
In conclusion, it can be stated that classifying seeds as fine as rapeseed is a complex process that creates major problems and constraints, as samples belonging to the same weight groups differ distinctly in seed distribution, which causes the CNN model to treat them as different. With this in mind, it is advisable to continue research and analysis on a vision-based seed classification model. The proposed model will be extended to classify seeds based on their texture; analysis based on two criteria is expected to significantly increase the accuracy of the model.

Author Contributions

Conceptualization, P.R. and J.N.; methodology, P.R.; software, P.R.; validation, P.R., J.N., K.B. and K.D.; formal analysis, J.N. and K.B.; investigation, P.R. and K.B.; resources, J.N.; data curation, P.R., J.N., K.B. and K.D.; writing—original draft preparation, P.R., J.N., K.B. and K.D.; writing—review and editing, P.R. and J.N.; funding acquisition, P.R. and J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was co-financed within the framework of the Polish Ministry of Science and Higher Education’s program “Regional Excellence Initiative” in the years 2019–2023 (No. 005/RID/2018/19), financing amount 12,000,000 PLN.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available at https://github.com/piotrrybacki/seed-quality-CNN (accessed on 19 December 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arrutia, F.; Binnera, E.; Williams, P.; Waldronc, K.W. Oilseeds beyond oil: Press cakes and meals supplying global protein requirements. Trends Food Sci. Technol. 2020, 100, 88–102. [Google Scholar] [CrossRef]
  2. Campbell, L.; Rempel, C.B.; Wanasundara, J.P. Canola/Rapeseed Protein: Future Opportunities and Directions—Workshop Proceedings of IRC 2015. Plants 2016, 5, 17. [Google Scholar] [CrossRef] [PubMed]
  3. Fairhurst, S.M.; Cole, L.J.; Kocarkova, T.; Jones-Morris, C.; Evans, A.; Jackson, G. Agronomic Traits in Oilseed Rape (Brassica napus) Can Predict Foraging Resources for Insect Pollinators. Agronomy 2021, 11, 440. [Google Scholar] [CrossRef]
  4. Poisson, E.; Trouverie, J.; Brunel-Muguet, S.; Akmouche, Y.; Pontet, C.; Pinochet, X.; Avice, J.-C. Seed Yield Components and Seed Quality of Oilseed Rape Are Impacted by Sulfur Fertilization and Its Interactions with Nitrogen Fertilization. Front. Plant Sci. 2019, 10, 458. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Zhang, C.; Feng, X.; Wang, J.; Liu, F.; He, Y.; Zhou, W. Mid-infrared spectroscopy combined with chemometrics to detect Sclerotinia stem rot on oilseed rape (Brassica napus L.) leaves. Plant Methods 2017, 13, 1–9. [Google Scholar] [CrossRef] [Green Version]
  6. Roberts, H.R.; Dodd, I.C.; Hayes, F.; Ashworth, K. Chronic tropospheric ozone exposure reduces seed yield and quality in spring and winter oilseed rape. Agric. For. Meteorol. 2022, 316, 108859. [Google Scholar] [CrossRef]
  7. USDA, Oilseeds: World Markets and Trade. 2022. Available online: https://www.fas.usda.gov/data/oilseeds-world-markets-and-trade (accessed on 12 May 2022).
  8. Eurostat. Available online: https://ec.europa.eu/eurostat (accessed on 22 May 2022).
  9. Namazkar, S.; Stockmarr, A.; Frenck, G.; Egsgaard, H.; Terkelsen, T.; Mikkelsen, T.; Ingvordsen, C.H.; Jørgensen, R.B. Concurrent elevation of CO2, O3 and temperature severely affects oil quality and quantity in rapeseed. J. Exp. Bot. 2016, 67, 4117–4125. [Google Scholar] [CrossRef] [Green Version]
  10. Niemann, J.; Bocianowski, J.; Wojciechowski, A. Effects of genotype and environment on seed quality traits variability in interspecific cross-derived Brassica lines. Euphytica 2018, 214, 193. [Google Scholar] [CrossRef] [Green Version]
  11. Niemann, J.; Wojciechowski, A.; Janowicz, J. Broadening the variability of quality traits in rapeseed through interspecific hybridization with an application of immature embryo culture. Biotechnologia 2012, 2, 109–115. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, G.; Guan, Z.; Mu, S.; Tang, Q.; Wu, C. Optimization of operating parameter and structure for seed thresher device for rape combine harvester. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2017, 33, 52–57. [Google Scholar] [CrossRef]
  13. Stępniewski, A.; Szot, B.; Sosnowski, S. Uszkodzenia nasion rzepaku w pozbiorowym procesie obróbki [Damage to rapeseed seeds in the post-harvest handling process]. Acta Agrophysica 2003, 2, 195–203. [Google Scholar]
  14. Szwed, G. Kształtowanie fizycznych i technologicznych cech nasion rzepaku w modelowych warunkach przechowywania [Shaping the physical and technological properties of rapeseed seeds under model storage conditions]. Acta Agrophysica 2000, 27, 3–116. [Google Scholar]
  15. Gupta, S.K.; Delseny, M.; Kader, J.C. Advances in botanical research. Incorporating advances in plant pathology. In Rapeseed Breeding; Academic Press: Cambridge, MA, USA, 2007; ISBN 978-0-12-374098-4. [Google Scholar]
  16. Reddy, G.V.P. Integrated Management of Insect Pests on Canola and Other Brassica Oilseed Crops; CPI Group (UK) Ltd.: Croydon, UK, 2017; ISBN 978-1-78064-820-0. [Google Scholar]
  17. Liu, T.; Tao, B.; Wu, H.; Wen, J.; Yi, B.; Ma, C.; Tu, J.; Fu, T.; Zhu, L.; Shen, J. Bn. YCO affects chloroplast development in Brassica napus L. Crop. J. 2020, 9, 992–1002. [Google Scholar] [CrossRef]
  18. Kirkegaard, J.A.; Lilley, J.M.; Berry, P.M.; Rondanini, D.P. Chapter 17—Canola. Crop Physiol. Case Hist. Major Crops 2021, 2021, 518–549. [Google Scholar] [CrossRef]
  19. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  20. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Szegedy, C.; Toshev, A.; Erhan, D. Deep neural networks for object detection. Adv. Neural Inf. Process. Syst. 2013, 2553–2561. [Google Scholar]
  22. Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
  23. Lemley, J.; Bazrafkan, S.; Corcoran, P. Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision. IEEE Consum. Electron. Mag. 2017, 6, 48–56. [Google Scholar] [CrossRef] [Green Version]
  24. Lemley, J.; Bazrafkan, S.; Corcoran, P. Smart Augmentation Learning an Optimal Data Augmentation Strategy. IEEE Access 2017, 5, 5858–5869. [Google Scholar] [CrossRef]
  25. Lemley, J.; Bazrafkan, S.; Corcoran, P. Transfer learning of temporal information for driver action classification. In Proceedings of the 28th Modern Artificial Intelligence and Cognitive Science Conference, Fort Wayne, IN, USA, 28–29 April 2017. [Google Scholar] [CrossRef]
  26. Bordes, A.; Chopra, S.; Weston, J. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 26–28 October 2014. [Google Scholar]
  27. Jean, S.; Cho, K.; Memisevic, R.; Bengio, Y. On using very large target vocabulary for neural machine translation. arXiv 2015, arXiv:1412.2007. [Google Scholar]
  28. Ma, J.; Sheridan, R.P.; Liaw, A.; Dahl, G.E.; Svetnik, V. Deep Neural Nets as a Method for Quantitative Structure–Activity Relationships. J. Chem. Inf. Model. 2015, 55, 263–274. [Google Scholar] [CrossRef]
  29. Mikolov, T.; Deoras, A.; Povey, D.; Burget, L.; Cernocky, J. Strategies for training large scale neural network language models. Proc. Autom. Speech Recognit. Underst. 2011, 196–201. [Google Scholar] [CrossRef]
  30. Sainath, T.N.; Mohamed, A.; Kingsbury, B.; Ramabhadran, B. Deep convolutional neural networks for LVCSR. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 8614–8618. [Google Scholar] [CrossRef]
  31. Wang, Q.; Ma, X. Machine Translation Quality Assessment of Selected Works of Xiaoping Deng Supported by Digital Humanistic Method. Int. J. Appl. Linguistics Transl. 2021, 7, 59–68. [Google Scholar] [CrossRef]
  32. Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural Language Processing (Almost) from Scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
  33. van den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv 2016, arXiv:1609.03499v2. [Google Scholar]
  34. Zhou, J.; Troyanskaya, O.G. Predicting effects of noncoding variants with deep learning–based sequence model. Nat. Methods 2015, 12, 931–934. [Google Scholar] [CrossRef] [Green Version]
  35. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef] [PubMed]
  36. Jermyn, M.; Mok, K.; Mercier, J.; Desroches, J.; Pichette, J.; Saint-Arnaud, K.; Bernstein, L.; Guiot, M.-C.; Petrecca, K.; Leblond, F. Intraoperative brain cancer detection with Raman spectroscopy in humans. Sci. Transl. Med. 2015, 7, 274ra219. [Google Scholar] [CrossRef] [PubMed]
  37. Wang, K.; Wu, F.; Seo, B.R.; Fischbach, C.; Chen, W.; Hsu, L.; Gourdon, D. Breast cancer cells alter the dynamics of stromal fibronectin-collagen interactions. Matrix Biol. 2016, 60–61, 86–95. [Google Scholar] [CrossRef]
  38. Levine, S.; Pastor, P.; Krizhevsky, A.; Ibarz, J.; Quillen, D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robot. Res. 2017, 37, 421–436. [Google Scholar] [CrossRef]
  39. Pfeiffer, M.; Schaeuble, M.; Nieto, J.; Siegwart, R.; Cadena, C. From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 1527–1533. [Google Scholar] [CrossRef] [Green Version]
  40. Gupta, S.; Davidson, J.; Levine, S.; Sukthankar, R.; Malik, J. Cognitive Mapping and Planning for Visual Navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 2616–2625. [Google Scholar]
  41. Dong, Y.; Tao, J.; Zhang, Y.; Lin, W.; Ai, J. Deep Learning in Aircraft Design, Dynamics, and Control: Review and Prospects. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 2346–2368. [Google Scholar] [CrossRef]
  42. Shalev-Shwartz, S.; Shammah, S.; Shashua, A. Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. arXiv 2016, arXiv:abs/1610.03295. [Google Scholar]
  43. Bhupendra, M.K.; Miglani, A.; Kankar, P.K. Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset. Comput. Electron. Agric. 2022, 195, 106811. [Google Scholar] [CrossRef]
  44. Hashim, N.; Onwude, D.I.; Maringgal, B. Chapter 15—Technological advances in postharvest management of food grains. Res. Technol. Adv. Food Sci. 2022, 371–406. [Google Scholar] [CrossRef]
  45. Ni, J.; Liu, B.; Li, J.; Gao, J.; Yang, H.; Han, Z. Detection of Carrot Quality Using DCGAN and Deep Network with Squeeze-and-Excitation. Food Anal. Methods 2022, 15, 1432–1444. [Google Scholar] [CrossRef]
  46. Sun, D.; Robbins, K.; Morales, N.; Shu, Q.; Cen, H. Advances in optical phenotyping of cereal crops. Trends Plant Sci. 2021, 27, 191–208. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, J.; Qu, M.; Gong, Z.; Cheng, F. Online double-sided identification and eliminating system of unclosed-glumes rice seed based on machine vision. Measurement 2021, 187, 110252. [Google Scholar] [CrossRef]
  48. Patel, K.K.; Kar, A.; Jha, S.N.; Khan, M.A. Machine vision system: A tool for quality inspection of food and agricultural products. J. Food Sci. Technol. 2012, 49, 123–141. [Google Scholar] [CrossRef] [Green Version]
  49. Mahajan, S.; Das, A.; Sardana, H.K. Image acquisition techniques for assessment of legume quality. Trends Food Sci. Technol. 2015, 42, 116–133. [Google Scholar] [CrossRef]
  50. Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef] [Green Version]
  51. Rybacki, P.; Przygodziński, P.; Osuch, A.; Blecharczyk, A.; Walkowiak, R.; Osuch, E.; Kowalik, I. The Technology of Precise Application of Herbicides in Onion Field Cultivation. Agriculture 2021, 11, 577. [Google Scholar] [CrossRef]
  52. Rybacki, P.; Przygodziński, P.; Blecharczyk, A.; Kowalik, I.; Osuch, A.; Osuch, E. Strip spraying technology for precise herbicide application in carrot fields. Open Chem. 2022, 20, 287–296. [Google Scholar] [CrossRef]
  53. Xia, J.; Cao, H.; Yang, Y.; Zhang, W.; Wan, Q.; Xu, L.; Ge, D.; Zhang, W.; Ke, Y.; Huang, B. Detection of waterlogging stress based on hyperspectral images of oilseed rape leaves (Brassica napus L.). Comput. Electron. 2019, 159, 59–68. [Google Scholar] [CrossRef]
  54. Kong, W.; Zhang, C.; Huang, W.; Liu, F.; He, Y. Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems. Sensors 2018, 18, 123. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Zhao, Y.R.; Yu, K.Q.; Li, X.; He, Y. Detection of Fungus Infection on Petals of Rapeseed (Brassica napus L.) Using NIR Hyperspectral Imaging. Sci. Rep. 2016, 6, 38878. [Google Scholar] [CrossRef]
  56. Zhang, C.; Liu, F.; Kong, W.; He, Y. Application of Visible and Near-Infrared Hyperspectral Imaging to Determine Soluble Protein Content in Oilseed Rape Leaves. Sensors 2015, 15, 16576–16588. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Zhang, C.; Liu, F.; Kong, W.; Cui, P.; He, Y.; Zhou, W. Estimation and Visualization of Soluble Sugar Content in Oilseed Rape Leaves Using Hyperspectral Imaging. Trans. ASABE 2016, 59, 1499–1505. [Google Scholar] [CrossRef]
  58. Bao, Y.; Kong, W.; Liu, F.; Qiu, Z.; He, Y. Detection of Glutamic Acid in Oilseed Rape Leaves Using Near Infrared Spectroscopy and the Least Squares-Support Vector Machine. Int. J. Mol. Sci. 2012, 13, 14106–14114. [Google Scholar] [CrossRef] [Green Version]
  59. Olivos-Trujillo, M.; Gajardo, H.A.; Salvo, S.; González, A.; Muñoz, C. Assessing the stability of parameters estimation and prediction accuracy in regression methods for estimating seed oil content in Brassica napus L. using NIR spectroscopy. In Proceedings of the 2015 CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), Santiago, Chile, 28–30 October 2015; pp. 25–30. [Google Scholar] [CrossRef]
  60. Zhang, X.; Liu, F.; He, Y.; Gong, X. Detecting macronutrients content and distribution in oilseed rape leaves based on hyperspectral imaging. Biosyst. Eng. 2013, 115, 56–65. [Google Scholar] [CrossRef]
  61. Hu, J.; Xu, X.; Liu, L.; Yang, Y. Application of Extreme Learning Machine to Visual Diagnosis of Rapeseed Nutrient Deficiency. In International Forum on Digital TV and Wireless Multimedia Communications; Springer: Singapore, 2018; pp. 238–248. [Google Scholar] [CrossRef]
  62. Kurtulmuş, F.; Ünal, H. Discriminating rapeseed varieties using computer vision and machine learning. Expert Syst. Appl. 2015, 42, 1880–1891. [Google Scholar] [CrossRef]
  63. Zou, Q.; Fang, H.; Liu, F.; Kong, W.; He, Y. Comparative Study of Distance Discriminant Analysis and Bp Neural Network for Identification of Rapeseed Cultivars Using Visible/Near Infrared Spectra. In Computer and Computing Technologies in Agriculture IV. CCTA 2010. IFIP Advances in Information and Communication Technology; Li, D., Liu, Y., Chen, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; p. 347. [Google Scholar] [CrossRef] [Green Version]
  64. Sun, Z.; Guo, X.; Xu, Y.; Zhang, S.; Cheng, X.; Hu, Q.; Wang, W.; Xue, X. Image Recognition of Male Oilseed Rape (Brassica napus) Plants Based on Convolutional Neural Network for UAAS Navigation Applications on Supplementary Pollination and Aerial Spraying. Agriculture 2022, 12, 62. [Google Scholar] [CrossRef]
  65. Jung, M.; Song, J.S.; Hong, S.; Kim, S.; Go, S.; Lim, Y.P.; Park, J.; Park, S.G.; Kim, Y.-M. Deep Learning Algorithms Correctly Classify Brassica rapa Varieties Using Digital Images. Front. Plant Sci. 2021, 12, 738685. [Google Scholar] [CrossRef] [PubMed]
  66. Ni, C.; Wang, D.; Vinson, R.; Holmes, M.; Tao, Y. Automatic inspection machine for maize kernels based on deep convolutional neural networks. Biosyst. Eng. 2018, 178, 131–144. [Google Scholar] [CrossRef]
  67. Lin, P.; Li, X.L.; Chen, Y.M.; He, Y. A deep convolutional neural network architecture for boosting image discrimination accuracy of rice species. Food Bioprocess Technol. 2018, 11, 765–773. [Google Scholar] [CrossRef]
  68. Zhang, X.; Xun, Y.; Chen, Y. Automated identification of citrus diseases in orchards using deep learning. Biosyst. Eng. 2022, 223, 249–258. [Google Scholar] [CrossRef]
  69. Bernardes, R.C.; De Medeiros, A.; da Silva, L.; Cantoni, L.; Martins, G.F.; Mastrangelo, T.; Novikov, A.; Mastrangelo, C.B. Deep-Learning Approach for Fusarium Head Blight Detection in Wheat Seeds Using Low-Cost Imaging Technology. Agriculture 2022, 12, 1801. [Google Scholar] [CrossRef]
  70. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  71. Hamid, Y.; Wani, S.; Soomro, A.B.; Alwan, A.A.; Gulzar, Y. Smart Seed Classification System based on MobileNetV2 Architecture. In Proceedings of the 2nd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia, 25–27 January 2022; pp. 217–222. [Google Scholar]
  72. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification. Sustainability 2022, 14, 6339. [Google Scholar] [CrossRef]
Figure 1. Seed samples of oilseed rapeseed varieties: (a) Atora F1, (b) Californium, (c) Graf F1.
Figure 2. Visualization of images of rape seed samples.
Figure 3. Diagram of the implemented CNN network.
Figure 4. Visualization of loss function curves and learning accuracy and validation for the created CNN in models: (a) RAPESEEDS_AC, (b) RAPESEEDS_CG, (c) RAPESEEDS_GA, (d) RAPESEEDS_AQ, (e) RAPESEEDS_CQ, (f) RAPESEEDS_GQ.
Figure 5. Accuracy of models depending on the number of epochs: (a) in the training process, (b) in the validation process.
Figure 6. Input of oilseed rape seed images with their predicted labels. Red box indicates incorrect model reading.
Table 1. Weight groups of the imaged samples of the three winter rapeseed varieties (ripe and unripe seeds) with the respective weights.

| Weight Group | Ripe Seeds [g] | Unripe Seeds [g] | Research Sample (Atora) | Research Sample (Californium) | Research Sample (Graf) |
| 1 | 20.000 | 0.000 | atora.0–atora.19 | californium.0–californium.19 | graf.0–graf.19 |
| 2 | 19.839 | 0.161 | atora.20–atora.39 | californium.20–californium.39 | graf.20–graf.39 |
| 3 | 19.677 | 0.323 | atora.40–atora.59 | californium.40–californium.59 | graf.40–graf.59 |
| 4 | 19.516 | 0.484 | atora.60–atora.79 | californium.60–californium.79 | graf.60–graf.79 |
| 5 | 19.355 | 0.645 | atora.80–atora.99 | californium.80–californium.99 | graf.80–graf.99 |
| 6 | 19.194 | 0.807 | atora.100–atora.119 | californium.100–californium.119 | graf.100–graf.119 |
| 7 | 19.032 | 0.968 | atora.120–atora.139 | californium.120–californium.139 | graf.120–graf.139 |
| 8 | 18.871 | 1.129 | atora.140–atora.159 | californium.140–californium.159 | graf.140–graf.159 |
| 9 | 18.710 | 1.290 | atora.160–atora.179 | californium.160–californium.179 | graf.160–graf.179 |
| 10 | 18.548 | 1.452 | atora.180–atora.199 | californium.180–californium.199 | graf.180–graf.199 |
| 11 | 18.387 | 1.613 | atora.200–atora.219 | californium.200–californium.219 | graf.200–graf.219 |
| … | … | … | … | … | … |
| 115 | 1.612 | 18.388 | atora.2280–atora.2299 | californium.2280–californium.2299 | graf.2280–graf.2299 |
| 116 | 1.450 | 18.550 | atora.2300–atora.2319 | californium.2300–californium.2319 | graf.2300–graf.2319 |
| 117 | 1.289 | 18.711 | atora.2320–atora.2339 | californium.2320–californium.2339 | graf.2320–graf.2339 |
| 118 | 1.128 | 18.872 | atora.2340–atora.2359 | californium.2340–californium.2359 | graf.2340–graf.2359 |
| 119 | 0.967 | 19.033 | atora.2360–atora.2379 | californium.2360–californium.2379 | graf.2360–graf.2379 |
| 120 | 0.805 | 19.195 | atora.2380–atora.2399 | californium.2380–californium.2399 | graf.2380–graf.2399 |
| 121 | 0.644 | 19.356 | atora.2400–atora.2419 | californium.2400–californium.2419 | graf.2400–graf.2419 |
| 122 | 0.483 | 19.517 | atora.2420–atora.2439 | californium.2420–californium.2439 | graf.2420–graf.2439 |
| 123 | 0.321 | 19.679 | atora.2440–atora.2459 | californium.2440–californium.2459 | graf.2440–graf.2459 |
| 124 | 0.160 | 19.840 | atora.2460–atora.2479 | californium.2460–californium.2479 | graf.2460–graf.2479 |
| 125 | 0.000 | 20.000 | atora.2480–atora.2499 | californium.2480–californium.2499 | graf.2480–graf.2499 |
Table 2. Description of the rapeseed classification models.

| Model CNN | Description |
| RAPESEEDS_AC | Classification of rapeseed varieties Atora F1–Californium |
| RAPESEEDS_CG | Classification of rapeseed varieties Californium–Graf F1 |
| RAPESEEDS_GA | Classification of rapeseed varieties Graf F1–Atora F1 |
| RAPESEEDS_AQ | Maturity evaluation of rape variety Atora F1 |
| RAPESEEDS_CQ | Maturity evaluation of rape variety Californium |
| RAPESEEDS_GQ | Maturity evaluation of rape variety Graf F1 |
Table 3. Variation of feature map size according to the layer number of the developed CNN model.

| Layer (Type) | Output Shape | Param # |
| conv2d (Conv2D) | (None, 198, 198, 32) | 896 |
| max_pooling2d (MaxPooling2D) | (None, 99, 99, 32) | 0 |
| dropout (Dropout) | (None, 99, 99, 32) | 0 |
| conv2d_1 (Conv2D) | (None, 97, 97, 64) | 18,496 |
| max_pooling2d_1 (MaxPooling2D) | (None, 48, 48, 64) | 0 |
| dropout_1 (Dropout) | (None, 48, 48, 64) | 0 |
| conv2d_2 (Conv2D) | (None, 46, 46, 128) | 73,856 |
| max_pooling2d_2 (MaxPooling2D) | (None, 23, 23, 128) | 0 |
| dropout_2 (Dropout) | (None, 23, 23, 128) | 0 |
| conv2d_3 (Conv2D) | (None, 21, 21, 128) | 147,584 |
| max_pooling2d_3 (MaxPooling2D) | (None, 10, 10, 128) | 0 |
| dropout_3 (Dropout) | (None, 10, 10, 128) | 0 |
| flatten (Flatten) | (None, 12800) | 0 |
| dense (Dense) | (None, 512) | 6,554,112 |
| dense_1 (Dense) | (None, 1) | 513 |

Total params: 6,795,457; Trainable params: 6,795,457; Non-trainable params: 0.
Table 4. Average accuracy and loss values of the training and validation process for the CNN models over 30 epochs.

| Model CNN | Training Accuracy [%] | Validation Accuracy [%] | Training Loss | Validation Loss |
| RAPESEEDS_AC | 93.11 | 83.24 | 0.18 | 0.38 |
| RAPESEEDS_CG | 94.19 | 85.60 | 0.17 | 0.43 |
| RAPESEEDS_GA | 90.90 | 83.88 | 0.19 | 0.32 |
| RAPESEEDS_AQ | 93.53 | 80.20 | 0.19 | 0.37 |
| RAPESEEDS_CQ | 90.35 | 80.90 | 0.18 | 0.43 |
| RAPESEEDS_GQ | 92.23 | 81.17 | 0.17 | 0.41 |
| Average | 92.39 | 82.50 | 0.18 | 0.39 |
