Article

The Use of Computer Vision to Improve the Affinity of Rootstock-Graft Combinations and Identify Diseases of Grape Seedlings

Marina Rudenko, Yurij Plugatar, Vadim Korzin, Anatoliy Kazak, Nadezhda Gallini and Natalia Gorbunova

1 Institute of Physics and Technology, V.I. Vernadsky Crimean Federal University, 295007 Simferopol, Russia
2 Nikitsky Botanical Gardens—National Scientific Center of Russian Academy of Sciences, 298648 Yalta, Russia
3 Humanitarian Pedagogical Academy, V.I. Vernadsky Crimean Federal University, 295007 Simferopol, Russia
* Author to whom correspondence should be addressed.
Inventions 2023, 8(4), 92; https://doi.org/10.3390/inventions8040092
Submission received: 30 May 2023 / Revised: 30 June 2023 / Accepted: 15 July 2023 / Published: 19 July 2023
(This article belongs to the Special Issue Recent Advances and New Trends in Signal Processing)

Abstract

This study explores the application of computer vision to improve the selection of rootstock-graft combinations and to detect diseases in grape seedlings. Computer vision has various applications in viticulture, but published research has not reported its use in rootstock-graft selection, which defines the novelty of this work. This paper presents elements of a technology for applying computer vision to rootstock-graft combinations, including the analysis of grape seedling cuttings. This analysis allows for a more accurate determination of the compatibility between rootstock and graft, as well as the detection of potential seedling diseases. Using computer vision to automate the grafting of grape cuttings can replace manual labor, ensure economic efficiency and reliability, and facilitate the monitoring of seedling development to determine the appropriate planting time. Image processing algorithms play a vital role in automatically determining seedling characteristics such as trunk diameter and the presence of damage, and they also aid in identifying diseases and defects, which is crucial for assessing overall seedling quality. Automating these processes increases efficiency, improves quality, and reduces costs by cutting manual labor and waste. To fulfill these objectives, a unique robotic assembly line for grafting grape cuttings is planned, equipped with two conveyor belts, a delta robot, and a computer vision system. By incorporating image processing algorithms and advanced robotics, this technology has the potential to revolutionize the viticulture industry. Training a computer vision system to analyze data on rootstock and graft grape varieties makes it possible to reduce the number of defects by half, and implementing a semi-automated computer vision system can improve crossbreeding efficiency by 90%. Reducing the time spent on pairing selection is another significant advantage: manual selection takes 1 to 2 min, the semi-automated system reduces this to 30 s, and the prospect of further automation promises 10–15 s, which will significantly increase the productivity and efficiency of the process. A further notable advantage is the increased accuracy and precision of pairing selection: computer vision algorithms can analyze a wide range of factors, including size, shape, color, and structural characteristics, to make more informed decisions when matching rootstock and graft varieties, leading to better compatibility and improved overall grafting success rates.

1. Introduction

Viticulture is a traditional branch of the agro-industrial complex in the south of Russia. The favorable soil and climatic conditions of this region allow for growing high yields of grapes across various ripening periods. However, the potential of these conditions is not yet fully utilized, primarily because of miscalculations made during the establishment and operation of industrial plantations. One of the main reasons for the unsatisfactory state of perennial plantations lies in the quality of the planting material. The future yield of the plantations, the period of their productive operation, the volume of investment, and the payback period all depend on the quality of the seedlings. In vineyards planted with certified planting material, the service life is increased 1.5 times, productivity is increased by 30–50%, and the volume of capital investment is significantly reduced, which is accompanied by an increase in the competitiveness of domestic grape and wine producers [1,2,3,4].
The duration of the productive period of rootstock combinations will largely depend on the affinity, which is determined by the proximity of the components in terms of morphological and anatomical features, the type of metabolism, and the agricultural technology used. Affinity can also change depending on the environmental conditions. A grafted plant consists of two parts that are different in their biological properties, each of which reacts in its own way to certain environmental conditions by changing the metabolic rate. With the same type of reaction of both parts, i.e., while maintaining a similar nature of their metabolism, the grafted plant retains high viability and productivity [5,6,7,8,9].
An analysis of numerous studies has shown that the splicing of rootstock combinations depends on many factors, which, according to their overt manifestations, can be conditionally divided into mechanical and physiological compatibility. Mechanical compatibility is determined by a high level of splicing of the graft components. Studies have shown that mechanical incompatibility can be linked to differences in the anatomical structure of the grafting components. Thus, in grapes with equal rootstock and scion diameters, there can be significant differences in the structural ratio of the core area to the wood. This may affect the mechanical strength of the rootstock combination, and the fusion will not be circular. Under mechanical stress on the seedling, breaks are observed at the grafting site.
The probability of the coincidence of the tissues of the core and wood in rootstock varieties with the graft, according to many authors, is considered one of the main factors in the high level of fusion of grafted cuttings after stratification, as well as the release of standard planting material from grape schools [2,10,11,12,13,14].
This study aims to use computer vision to improve the selection of a suitable graft to rootstock based on the analysis of a section of a grape seedling.
Computer vision (CV) is receiving increasing attention in agriculture, as it represents a set of methods that allow for the automatic collection of images and precise and efficient extraction of valuable information [15]. CV-based systems typically include an image acquisition phase and image analysis methods that can distinguish areas of interest for detection and classification such as plants and leaves [16,17]. There are many image analysis methods, but deep learning (DL) [18] is considered the most effective for detecting objects in agriculture such as diseases [19], crops and weeds [20], and fruits [21]. Deep learning is based on machine learning and is capable of automatically extracting features from unstructured data [22]. Convolutional neural networks (CNN) are being tested for various tasks to support precision agriculture production systems, including phenological monitoring [23,24]. Thus, high-performance plant phenotyping methods [25] based on CV in combination with DL (CV and DL) can assess the spatiotemporal dynamics of crop features related to their phenophases in the context of precision agriculture [26,27,28].
However, to the best of our knowledge, computer vision technologies have not been applied to improve the affinity of rootstock-scion combinations of vine seedlings. This is a differential and innovative aspect of the present work, the purpose of which is to show a new method for the analysis of seedlings, which will reduce the need for laborious and time-consuming manual operations and checks.
Computer vision algorithms can automatically determine the main characteristics of the seedling, such as the diameter of the stem, the number and location of the vessels, the presence of damage, etc. To achieve this, we need to take a photo of a cut of a seedling and process it using special computer vision programs. Based on these data, it is possible to determine which scion will be most suitable for a given seedling and rootstock. This approach can significantly speed up and simplify the process of selecting a scion for a rootstock, which will save time and money on growing grapes.
Figure 1 shows a schematic representation of the use of computer vision to improve the affinity of rootstock-graft combinations.
In addition to the benefits mentioned above, computer vision can also assist in identifying and diagnosing diseases or pests affecting vine seedlings. By analyzing images of the seedlings, computer vision algorithms can detect visual cues or abnormalities that indicate the presence of diseases or pests. This early detection can help farmers take timely actions to prevent the spread of diseases and minimize crop losses.
Furthermore, computer vision can provide valuable insights into the growth and development of vine seedlings. By analyzing images captured at different stages of growth, computer vision algorithms can track the progress of key phenological traits such as leaf emergence, bud break, and fruit development. This information can aid in optimizing irrigation, fertilization, and other agronomic practices to ensure optimal growth conditions for the vines.
The integration of computer vision with deep learning techniques such as convolutional neural networks (CNNs) enables a more accurate and robust analysis of vine seedlings. CNNs are capable of learning complex patterns and features from large datasets, allowing them to effectively classify and identify various characteristics of the seedlings. This includes distinguishing between different scion varieties, assessing the health and quality of the seedlings, and predicting their future growth potential.
The application of computer vision in the context of rootstock-scion combinations for vine seedlings offers a novel and efficient approach to streamlining the selection process. By automating the analysis of seedling characteristics, farmers can save time and resources, leading to more precise and successful grafting decisions.

2. Materials and Methods

2.1. Plant Materials

The studies were carried out at the V.I. Vernadsky Crimean Federal University’s grape grafting complex. The objects of research were vines of rootstock and graft varieties, from which stratified cuttings of various variety-rootstock combinations, represented by technical zoned varieties, were subsequently produced. For the graft group, the native varieties Dzhevatkara, Sarypandas, Ekim Kara, Kefessia, and Kokur Beliy grafted onto the rootstocks Berlandieri x Riparia Kober 5BB, Riparia x Rupestris 101–14, and Berlandieri x Riparia CO4 were studied. Most of the rootstock varieties are interspecific hybrids of American species with European varieties of the species Vitis vinifera, which gives them certain morphological and anatomical features distinguishing them from varieties of European and Asian origin of the species Vitis vinifera. These differences lead to varying degrees of incompatibility, since the development of the wood and, consequently, of the conducting system may differ depending on the variety-rootstock combination. The output of stratified cuttings, along with other factors, depends on the preservation of the buds in the eyes of the graft vine.
In 2022, manually conducted grafting operations at the complex numbered approximately 260,000. With the implementation of the semi-automated system in 2023, this number increased to half a million grafts. The goal for 2024 is to reach maximum capacity, aiming for one million grafts per year.
A training dataset of 1000 pairs of photographs was used to train the computer vision system.
The detailed economic calculations and analysis are presented in a separate publication, providing a comprehensive understanding of the economic implications and benefits of implementing a semi-automated computer vision system in grape grafting processes.
Before grafting, as part of the process of connecting the rootstock and graft parts of the cutting, the quality of the graft and rootstock material was assessed in three repetitions of 30 pieces each. From the second half of November to the first ten days of December, the rootstock and graft vines were harvested and their biometric indicators were assessed. After soaking in a 0.5% solution of chinosol, the vines were kept in a refrigerator at +2 to +4 °C until the grafting campaign was carried out. In March, the preserved vines of the rootstock and graft varieties were prepared for the production of grafts. The rootstock vine was cut to a length of at least 40 cm (lower cut under the eye), with the eyes on the vine blinded using a knife. The graft vine was cut into one-eyed cuttings, leaving part of the internodes (below the eye without a length limitation, but not less than 5 cm; above the eye, within 1.5 cm). The prepared grafts, in accordance with the scheme of the experiment, were bundled, labeled, and placed in boxes, which were transferred to the stratification chamber. Stratification was carried out using an open method on water in a humid atmosphere at an air temperature of 25–27 °C. The relative air humidity was maintained within 80–90%.
Biometric assessment of quality is important when studying the vines of graft and rootstock varieties. The biometric indicators include the total length of the vines, m; the average length of the vine, m; the average internode length, cm; the average diameter of the vine, mm; the average core diameter, mm; the average ratio of the total diameter of the vine to the diameter of the core; the average cross-sectional area of the vine, mm2; the average cross-sectional area of the core, mm2; the average cross-sectional area of the wood, mm2; and the conditional coefficient of shoot ripening. It was found that the internode lengths of the graft and rootstock vines differed, on average, across varieties and years. This is consistent with the fact that, in the methods for determining grape varieties for varietal compliance, the biometric indicators of the vine can change depending on the conditions of the place and year of cultivation [2,4,5,6,7,8,9,10,11].

2.2. Fuzzy Classification Model for Characteristics and Damage of Vine Seedlings

The vector of results for the characteristics and damage of vine seedlings should be supported by metrics that characterize the significance and proximity of a seedling’s state to the set of traits of a particular class. In practice, one method of confirmation is comparison with similar conclusions made previously by specialists. To achieve this, a database of “reference” classes of characteristics and lesions of grape seedlings, characteristic of a particular localization of agricultural production, can be formed, containing the characteristics and descriptions of objects in the image. By comparing against these references, a specialist or an artificial intelligence can draw reasonable conclusions according to certain rules based on proximity to the standards. The problem of fuzzy classification based on image analysis can thus be reduced to the classical problem of classifying the elements of a set of characteristics and damage of grape seedlings using a tuple of fuzzy variables, together with an additional neural network for fuzzy classification.
Fuzzy classifier software modules are available under the MIT license. However, we decided to build our own fuzzy classifier, which allowed us to enter and take into account information and expert assessments from specialists via given membership functions. The proposed mechanism made it possible to tune the classifier more accurately and flexibly, taking into account the difficulty of formalizing expert knowledge and opinions, and to retrain the fuzzy model. Under these conditions, we used the method of convolving tuples of image objects by class, forming a fuzzy estimate of their form factors: the fill factor (the share of the area occupied by class objects), the presence factor (the share of class objects in the tuple), and the degree of severity of the class structure (an assessment of the degree to which an object is assigned to a class). The estimates of the SNA of the entire image, the informativeness estimate (the share of the area occupied by all objects), and the uniformity of the distribution of objects were also converted to fuzzy variables.
This study uses fuzzy classification rules, each of which describes one of the types of damage to grapes in the data set. The antecedent of a rule is a fuzzy description in the n-dimensional property space, and the consequent is a fuzzy class label from the given set (1):
$\mathbf{x} = (x_1, x_2, \ldots, x_j, \ldots, x_n), \quad j = 1, 2, \ldots, n;$
where $n$ denotes the number of properties, $\mathbf{x}$ is the input vector, and $A_{ij}$ are the fuzzy sets, represented by fuzzy relations between the output of the i-th rule and the input vector or the previous fuzzy rule. The degree of activation of the i-th rule of the set of $M$ rules is calculated as (2):
$a_i(\mathbf{x}) = \prod_{j=1}^{n} A_{ij}(x_j), \quad i = 1, 2, \ldots, M.$
The classifier output is determined by the rule with the highest degree of activation, $a_{i^*}$ (3):
$y = g(a_{i^*}), \quad i^* = \arg\max_{1 \le i \le M} a_i.$
At this stage, we have implemented only 21 rules for drawing a conclusion regarding the type of lesion and its degree. The degree of confidence in the solution is given by the normalized degree of rule triggering (4):
$\mathrm{Conf} = \dfrac{a_{i^*}}{\sum_{i=1}^{M} a_i}.$
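As an illustration of Equations (2)–(4), the minimal sketch below implements rule activation as a product of membership degrees, winner-take-all selection, and the normalized confidence. The trapezoidal membership function and all names are illustrative assumptions, not the classifier actually deployed:

```python
import numpy as np

def trapmf(a, b, c, d):
    """Build a trapezoidal membership function on knots a <= b <= c <= d
    (our illustrative choice; the paper uses expert-given memberships)."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu

def activations(x, rules):
    """Degree of activation a_i(x) of each rule: product of the
    memberships A_ij(x_j) over the n properties (Equation (2))."""
    return np.array([np.prod([A(xj) for A, xj in zip(rule, x)])
                     for rule in rules])

def classify(x, rules, labels):
    """Winning class label (Equation (3)) and normalized confidence
    Conf = a_i* / sum(a_i) (Equation (4))."""
    a = activations(x, rules)
    i_star = int(np.argmax(a))
    conf = float(a[i_star] / a.sum()) if a.sum() > 0 else 0.0
    return labels[i_star], conf
```

Here, each rule is a tuple of n membership functions, one per property, e.g., rules = [(trapmf(0, 1, 2, 3), trapmf(5, 6, 7, 8)), ...].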
However, grape damage often combines several classes, which requires reporting all reliably detected lesions in the image together with their fuzzy confidence estimates. Therefore, the result is determined by Vector (5):
$\mathbf{a}_d = (a_{d1}, a_{d2}, \ldots, a_{dm}),$
where $a_i(\mathbf{x}) > D_m$, with $D_m$ being the threshold value for a given class of grape damage. This also makes it possible to combine the estimates and conclusions obtained from several images of the same source location. The union is a convolution using the maximum and average values of the class activation metrics. The classifier output selects the strongest score among the set of $L$ images acquired at the selected location (6):
$y_{\mathrm{loc}} = g(a_{l i^*}), \quad i^* = \arg\max_{1 \le i \le M} \max_{1 \le l \le L} a_{li}, \quad i = 1, 2, \ldots, M.$
The complex lesion vector collapses the estimates using the maximum weighted value over the set of $L$ images relative to the threshold $D_m$ (7):
$\mathbf{a}_l = (a_{l1}, a_{l2}, \ldots, a_{lm}), \quad i = 1, 2, \ldots, m.$
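The following sketch shows one possible reading of the per-location convolution in Equations (5)–(7), assuming the activation vectors of the L images are stacked into an L × M array; the function name and threshold handling are our assumptions:

```python
import numpy as np

def location_verdict(a_images, thresholds):
    """Convolve per-image activation vectors from one location into a
    single verdict. a_images has shape (L, M): L images, M damage classes.
    thresholds holds the per-class values D_m from Equation (5)."""
    a_images = np.asarray(a_images, dtype=float)
    a_max = a_images.max(axis=0)               # maximum activation per class
    a_mean = a_images.mean(axis=0)             # mean activation, for reporting
    detected = a_max > np.asarray(thresholds)  # classes passing their D_m
    i_star = int(a_max.argmax())               # winning class, Equation (6)
    return detected, a_max, a_mean, i_star
```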
In the future, the rules will be reinforced by tracking the dynamics of the development of the process when using certain factors in the processing and protection of grapes. This a posteriori information forms a set of meta-rules or reflex patterns that are evaluated and ranked by disease, proven potency, and grape variety. Given the variety of drugs and technologies for combating grape diseases, we have chosen the path of accumulating the proposed templates and technologies as rules that have their own fuzzy assessment of effectiveness µ.

3. Results

3.1. Search for an Object by Color

Color is the property of a body to reflect or emit visible radiation of a certain spectral composition and intensity. We are surrounded by color indicators everywhere, including traffic lights, white and yellow road markings, branded products, and road signs. For example, for visually impaired people, yellow circles are glued to the doors of shops (yellow is the only color they see) so that, for example, a glass showcase is not confused with a glass door. There are also yellow stripes at pedestrian crossings. This color is one of the brightest and makes it easier for people with poor eyesight to navigate the city. Searching by color is influenced by many factors, for example, illumination. We must also not forget that the visible color is the result of an interaction between the spectrum of the emitted light and the surface: if a white sheet is illuminated by the light of a red bulb, the sheet will also appear red [29].
Object localization is the most difficult aspect of object detection. There are many ways to search for objects in an image; one of them is to slide windows of different sizes across the image. This method is called “exhaustive search”. Exhaustive search consumes enormous computing resources, since an object must be detected in thousands of windows even for a small image. To improve the method and increase the processing speed, we optimized the algorithm by generating windows at several fixed size ratios instead of 23 enlargements of a few pixels each. This optimization increased the speed of the algorithm, but the efficiency still leaves much to be desired.
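A sketch of the window-generation idea described above: instead of enlarging one window a few pixels at a time, candidate windows are enumerated at a handful of fixed sizes and aspect ratios. The window sizes and stride here are placeholders, not the values used in the study:

```python
def sliding_windows(img_w, img_h,
                    scales=((64, 64), (96, 128), (128, 128)), step=16):
    """Enumerate candidate detection windows (x, y, w, h) at a few fixed
    sizes and aspect ratios, sliding each across the image with a fixed
    stride; all numeric values are illustrative."""
    for w, h in scales:
        for y in range(0, img_h - h + 1, step):
            for x in range(0, img_w - w + 1, step):
                yield x, y, w, h
```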
This paper discusses a selective search algorithm that combines the exhaustive method with segmentation (a method of separating objects of different shapes in an image by assigning different colors to them). To determine the color characteristics and highlight the contours in the image, a program was written that processes the RGB channels of an image captured by video camera no. 1. The results of this program are shown in Figure 2. The resulting images were obtained by coloring the original image in red, green, and blue and defining the mask [30].
Using this output, the pixels of the required color can be found with a function that selects the image pixels falling within a specified color range. This function examines the array elements (pixels) one by one and checks whether each pixel’s color lies between the corresponding values of two bound matrices.
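In OpenCV, this pixel-wise range check is provided by cv2.inRange(). A minimal sketch follows; the file path and color bounds are placeholders, not the thresholds used in the study:

```python
import cv2
import numpy as np

img = cv2.imread("cutting.jpg")             # frame from camera no. 1 (path illustrative)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # HSV separates hue from illumination

# Lower/upper bounds of the target color range; the values are placeholders
lower = np.array([35, 60, 60])
upper = np.array([85, 255, 255])

# inRange checks every pixel against the two bound matrices and returns
# a binary mask: 255 where the pixel's color falls inside the range
mask = cv2.inRange(hsv, lower, upper)
selected = cv2.bitwise_and(img, img, mask=mask)  # keep only in-range pixels
```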

3.2. Algorithm for Contour Analysis of the Image of a Cut of a Grape Seedling for the Selection of the Optimal Graft to the Rootstock

The contour analysis algorithm is one of the most important and useful methods for describing, recognizing, comparing, and searching for graphic images (objects). A contour is the outer outline of an object.
The contour analysis is performed under the following conditions:
The contour contains sufficient information about the shape of the object;
If the object has the same brightness as the background, or the image is noisy with interference, the object may lack a clear border, making contour selection impossible (the method succeeds only with a clearly defined object against a contrasting background and without interference);
The overlapping of objects or their grouping leads to the contour being incorrectly selected and not corresponding to the border of the object;
The internal points of the object are not taken into account. Therefore, a number of restrictions are imposed on the scope of the contour analysis, which are mainly related to the problems of contour detection in images.
Among the methods for obtaining a binary image, one can single out, for example, thresholding or selecting an object by color. The OpenCV library contains convenient methods for detecting and manipulating image edges. The findContours() function is used to find contours in a binary image; it can find outer and nested contours and determine their nesting hierarchy. Other frequently used functions include minAreaRect(), which returns the smallest rectangle that can enclose the contour, possibly rotated relative to the image coordinate system. An example of how contour analysis works is shown in Figure 3a. When the algorithm runs with the cv.CHAIN_APPROX_SIMPLE parameter (Figure 3b), redundant points are removed and the contour is compressed, which saves memory; with cv.CHAIN_APPROX_NONE, by contrast, all boundary points are preserved [31].
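A minimal OpenCV sketch of the pipeline just described (binarization, findContours() with a nesting hierarchy, and a rotated bounding rectangle); the Otsu threshold and file path are our assumptions:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("cutting.jpg"), cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# RETR_TREE recovers the full nesting hierarchy of outer and inner contours;
# CHAIN_APPROX_SIMPLE compresses straight segments down to their end points.
# (OpenCV 4.x returns two values; 3.x returns three.)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    rect = cv2.minAreaRect(cnt)      # smallest (possibly rotated) rectangle
    (cx, cy), (w, h), angle = rect   # center, side lengths, rotation angle
```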

3.3. Development of a Software Module for the Analysis of Grape Seedlings

To find the radii of the selected contours with the support of two cameras, the following method was developed. The investigated object moved along the assembly line, and when it entered the frame of video camera no. 2, its contour was located by subtracting the plain white background. As soon as the middle of the plant reached the center of the frame, the assembly line stopped and the system focused on the prevailing color of the resulting contour. It then started processing the image from video camera no. 1. With the color determined, a specific color palette was used to find the contour of the ellipse and locate its center.
Before use, the camera was calibrated. Calibration allows us to take into account the distortion introduced by the optical system. Camera calibration comes down to obtaining the intrinsic and extrinsic parameters of the camera from available photos or videos captured by it. It is usually performed at the initial stage of many computer vision tasks, and the procedure also allows distortion in photos and videos to be corrected.
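For reference, the standard OpenCV chessboard calibration flow looks roughly as follows; the board size and file paths are illustrative, as the paper does not state which calibration target was used:

```python
import glob
import cv2
import numpy as np

# 3D corner grid of a 9x6 chessboard (board size is an assumption)
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):        # calibration shots (path illustrative)
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6))
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover the intrinsic matrix, distortion coefficients, and per-view extrinsics
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("cutting.jpg"), K, dist)
```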
Before the edge detector was applied, the image was converted to grayscale to reduce the computational cost. First, the image was smoothed with a Gaussian filter to remove noise; the detector itself can be approximated by the first derivative of a Gaussian. Then came the calculation of the gradients: object contours were marked where the image gradient contained the largest values. After that, non-maxima were suppressed, and only local maxima were marked as boundaries. This was followed by double-threshold filtering. The output contained clear contours on a black-and-white binary image. The results of the analysis of a current image of grape cuttings are shown in Figure 4.
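This pre-processing chain corresponds to the classic Canny pipeline; a sketch with a placeholder kernel size and thresholds:

```python
import cv2

img = cv2.imread("cutting.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # grayscale cuts computational cost
blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)  # suppress noise before differentiation

# Canny internally computes gradients, thins them by non-maximum suppression,
# and applies double-threshold (hysteresis) filtering; thresholds are placeholders
edges = cv2.Canny(blurred, 50, 150)
```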

3.4. Analysis of the Results

Here, we consider the results obtained during the testing of the plant contour analysis algorithm and present comparative results for five different seedlings. Each seedling has two differently colored circles in the center (Figure 5a), which are needed to assess its quality and together form a two-dimensional representation of its internal structure. The contour selected on the seedling is displayed as a red connecting line (Figure 5b), but the program also highlighted unnecessary parts due to excess material in the images (such as the film visible in the picture), which is why the algorithm did not determine the contour perfectly.
The next step was to convert the image to a black-and-white format (Figure 5c). After that, we used a mask to select the contours by cutting off the extra points (Figure 5d) [32,33]. As the top view of the contour shows (Figure 5d), the algorithm selected an extra part of the seedling because of incorrect source data (it captured a part of the film in which the seedling was wrapped).
Next, we calculated the radii of the ellipses and determined how much area the inner ellipse occupied relative to the outer one (Figure 6a). If it occupied more than half of the seedling, the seedling was considered unsuitable. However, the visual processing in this case was incorrect, and a type I error occurred because we used incorrect image data. To account for this, the allowable error in the calculation of the ellipses can be increased (Figure 6b), but this is only suitable for special cases, such as seedlings with external physical interference.
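A sketch of how the ellipse-area test might be implemented with cv2.fitEllipse(); the helper names and the tolerance parameter are ours, while the “more than half” rejection rule is taken from the text:

```python
import cv2

def core_fraction(contours):
    """Fit ellipses to the two largest contours (assumed to be the outer
    edge of the cut and the inner core) and return the core's share of
    the cut area. fitEllipse needs at least 5 contour points."""
    outer, inner = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    (_, (ow, oh), _) = cv2.fitEllipse(outer)   # (center, (axes), angle)
    (_, (iw, ih), _) = cv2.fitEllipse(inner)
    return (iw * ih) / (ow * oh)               # the pi/4 factors cancel

def is_suitable(contours, limit=0.5, tolerance=0.0):
    """A seedling passes if the core takes up no more than half of the cut;
    'tolerance' loosens the limit for noisy images (cf. Figure 6b)."""
    return core_fraction(contours) <= limit + tolerance
```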
Let us move on to the second test. This seedling was illuminated by a flash (Figure 7a), which is why the white background strongly predominated on the left side and led to the incorrect operation of the algorithm. We used image processing based on information about the color of the object. First, the algorithm colored the original picture in red, green, and blue (Figure 7b). Next came the calculation of the mask for all three colors: red, green, and blue (Figure 7c). Then came the testing for the set of HSV colors (Figure 7d).
The next step was to calculate the mask for all three colors: red, green, and blue (Figure 8a). As we can see in the images, too much noise was obtained during the determination, which did not allow us to conduct a qualitative assessment of the seedling. The next step was to use the contour analysis algorithm (Figure 8b), but due to the image being overexposed by the flash, the algorithm could not highlight the center. Next, we converted the image to a black-and-white format (Figure 8c). After that, we used a mask to select the contours by cutting off the extra points (Figure 8d). It turned out that there was a very small part in the center that the algorithm did not highlight. However, visually, since the center was quite light, indicating that the dead part occupied a small space, the seedling was considered to be of high quality.
The next photo of a seedling used for testing is shown in Figure 9a. The image of the seedling turned out well, without unnecessary defects or noise. The next step used the contour analysis algorithm (Figure 9b). After that, we used a mask to select the contours by cutting off the extra points (Figure 9c). The further use of the contour selection algorithm is shown in Figure 9d. The algorithm clearly highlighted the contours of the seedling along the edges and at the center, which means that the dead part occupied a small area and the seedling was of high quality.
The fourth seedling had damage in the very center in the form of a fault (Figure 10a). The next step used the contour analysis algorithm (Figure 10b). After that, we used a mask to select the contours by cutting off the extra points (Figure 10c). The further use of the contour selection algorithm is shown in Figure 10d. It was not possible to qualitatively highlight the contours of the seedling, as there were too many variations in this picture, and the algorithm could not cope with the task of evaluating the dead part relative to the healthy center. Due to the pronounced defects, the seedling was considered to be of poor quality.
We conducted one last test. This seedling (Figure 11a) was without visible damage, but the image was slightly blurry. The next step used the contour analysis algorithm (Figure 11b). Then, we used a mask to select the contours by cutting off the extra points (Figure 11c,d). This qualitatively highlighted the contours of the seedling. A small black area was left unselected, and it was not clear how critical this was; the problem may have been due to the blurriness of the image as a whole. The algorithm coped with its task and determined the seedling to be of high quality.

3.5. Pre-Training of the Neural Network for Detecting Specified Object Classes (Grape Diseases) in Images

This section presents images demonstrating the results of pre-training the neural network for detecting grape diseases in images. Thanks to this process, the authors obtained accurate and reliable data on the condition of grapevines, which allows for improving crop quality and preserving plant health. The dataset labeling information for the initial training is shown in Figure 12. The dataset labeling information for the final training is shown in Figure 13.
The confusion matrix is a table that visualizes the effectiveness of the classification algorithm by comparing the predicted value of the target variable with its actual value. The performance graph for the algorithm used in the initial models is presented in Figure 14, and it can be seen that the performance was at a low level. The performance graph for the algorithm used in the final model training is presented in Figure 15, and it can be seen that the performance was at a sufficiently high level.
To evaluate the quality of the algorithm’s performance on each class separately, we used precision and recall metrics. Precision can be interpreted as the proportion of objects classified as positive by the classifier that are truly positive, whereas recall shows the proportion of positive class objects found by the algorithm out of all positive class objects.
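These metrics follow directly from the confusion matrix; a small helper, assuming rows index true classes and columns index predicted classes:

```python
import numpy as np

def precision_recall(cm):
    """Per-class precision and recall from a confusion matrix cm, where
    cm[i, j] counts objects of true class i predicted as class j.
    Classes that are never predicted (zero column sum) yield NaN."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                 # correctly classified objects per class
    precision = tp / cm.sum(axis=0)  # TP / (TP + FP): column-wise totals
    recall = tp / cm.sum(axis=1)     # TP / (TP + FN): row-wise totals
    return precision, recall
```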
Figure 16 shows the plots for the final model training, which indicate that the accuracy increased and the number of undetected affected areas decreased with each epoch, indicating the correctness of the training and high key performance indicators, as well as the absence of model overfitting.
The accuracy and recall increased with each epoch, indicating that the model was improving and becoming more accurate at detecting affected areas; the absence of overfitting makes this an even more positive result.

4. Discussion

As noted above, the monitoring of phenology and seedling growth using computer vision has received considerable attention in the literature [20,24,25,34,35,36,37,38,39,40,41]. It is therefore important to relate our proposal to these related works. Although each of these papers is devoted to quantifying some aspect of the early stages of plant development, they include a wide variety of approaches that are related to the word “seedling”.
A number of studies are devoted to the use of computer vision and robotics for the winter pruning of vines, reducing the need for human intervention. At the heart of this approach is an architecture that allows a robot to easily move through vineyards, identify vines with high accuracy, and approach them for pruning [42,43].
However, we did not find studies and publications on the use of computer vision and the automation of the processes of grafting grape cuttings.
The appearance of a seedling is determined visually. On examination, a person may not notice the damage to the seedling, so some authors have proposed solutions based on computer vision.
Some of the techniques used in computer vision include image segmentation, object recognition, feature extraction, and image classification. These techniques are used to analyze and interpret visual data to provide insights, detect anomalies, and make predictions. In addition, computer vision can help determine the quality of grape seedlings, which is also important when choosing a suitable scion for a rootstock. Analysis of a cutting section of a seedling can reveal the presence of diseases, damage, or other defects that can affect the growth and development of the plant. Thus, computer vision can help growers increase the efficiency of growing grapes and obtain a better harvest.
Also, computer vision can help improve the study of grape seedlings and grape diseases by providing accurate and fast data on plant health. For example, computer vision can automatically detect and classify grape diseases, allowing us to quickly and accurately determine which plants need treatment. We can also use computer vision to analyze the growth and development of grape seedlings, which will help determine the optimal conditions for growing them. In addition, computer vision can help collect data on grape varieties, which will improve the selection and cultivation of grapes.
There are grape diseases that can only be identified by cutting grape seedlings. For example, bacterial canker of grapes manifests itself in the form of dark spots on the wood and can lead to the death of the plant. Root rot caused by a fungus can also be detected in a cut; this disease can weaken the plant and reduce yields. Therefore, when buying grape seedlings, it is recommended to pay attention to the condition of the roots and wood, as well as to take preventive measures against the development of diseases.
Computer vision can be used to examine vine seedlings at the scion to determine the presence of disease or other problems. To achieve this, it is necessary to cut a vine seedling and obtain an image using a microscope or other imaging device. The image can then be processed using computer vision algorithms that can automatically detect the presence of diseases or other problems on the grape seedling. For example, algorithms can look for signs, such as changes in color or texture, which may indicate the presence of a disease.
Unlike scenarios where UAV images integrated with an artificial neural network (ANN) model are analyzed, we use fixed-scale macro photography to obtain accurate sizes. We also apply filters to the image to highlight areas and evaluate geometric indicators. These are different approaches.
Such approaches can be useful for the rapid and accurate detection of grape diseases, which can help vine growers and farmers take more effective disease control measures and increase yields. In the context of analyzing grape seedlings, computer vision techniques can be used to identify and classify different types of damage, such as disease and frost damage, based on the visual features present in the seedling images. This can then inform decisions regarding the treatment and management of the plants.
This study is expected to contribute to the field of viticulture by providing an automated and accurate method for analyzing seedlings, which will enable farmers to detect disease or frost damage early enough to prevent devastating crop losses. The proposed solution is also expected to reduce the need for labor-intensive and time-consuming manual inspections while improving the accuracy and reliability of disease and frost-damage detection.
This research relates to the field of viticulture, where there is a growing interest in the use of computer vision techniques for plant analysis and disease detection. The authors of this study used existing computer vision methods and techniques, such as image segmentation, object recognition, feature extraction, and image classification, to develop a software module for the analysis of grape seedlings. These methods are widely used in various fields, including medical imaging, robotics, and surveillance.
In addition, the authors proposed their original approach to the use of fuzzy logic models in the classification of the characteristics and damage of grape seedlings. Fuzzy logic is a mathematical approach that allows us to display uncertainty and inaccuracy in data. It has been used in a variety of applications, including control systems, decision making, and pattern recognition.
In general, the proposed approach is based on existing research in the field of viticulture and computer vision and also presents new methods and techniques for the analysis of grape seedlings. The software complex mentioned in this study implements the best approaches to vineyard control systems using computer vision technologies. The decision support system software can be adapted to solve other similar tasks. The commercialization plan of the software product is aimed at automating and robotizing agriculture, and it will serve as a basis for the development of the next type of similar software.
The results of this study show the efficiency of using artificial neural networks to classify grape lesions in images. Solving these problems is impossible without the widespread use of convolutional neural networks. However, the most valuable conclusions are based on a comprehensive assessment of the recognition results for both the entire image and individual detected objects. The apparatus of fuzzy logic, with the help of linguistic and fuzzy variables, allows the obtained results to be presented as a full-fledged expert conclusion.
The developed localization system copes with its assigned task but also has its disadvantages. The system is sensitive to background noise, which can lead to biased results, especially for images with ragged contours. Computer vision algorithms do not always work correctly, especially when there are superfluous materials in the images, such as crumpled or cut film, when the images are strongly overexposed, or when the seedlings are heavily damaged.

5. Conclusions

This study discusses the use of computer vision to increase the affinity of rootstock-graft combinations and detect diseases in grape seedlings. Domestic viticulture faces a very important task: to rapidly multiply the production of its own planting material. Partial mechanization in viticulture has low efficiency and quality due to manual labor and a large amount of waste. Automating the grafting of grape cuttings on the basis of computer vision will help solve this problem.
CV analyzes the cut image of a grape seedling to find the optimal graft for the rootstock. This technology will allow evaluating the parameters and viability of seedlings; controlling the positioning of seedlings before grafting and the quality of the grafting result; and monitoring the development of seedlings to determine the time of planting. For this, image processing algorithms are used that automatically determine the characteristics of the seedling such as the diameter of the trunk, the presence of damage, etc.
This paper also presents a mathematical model based on fuzzy logic for classifying the features of grapevine cuts. The model is based on cluster analysis of the obtained images to optimize rootstock combinations and diagnose diseases or pests of grape seedlings. An algorithm for contour analysis of the image of a grape seedling is proposed. A software module for the analysis of grape seedlings is presented, and an analysis of the results obtained is carried out. This project can be used by agronomists and growers as a tool to quickly and accurately identify growth problems and take appropriate action to eliminate them. In addition, the project can be used for educational purposes to improve knowledge about computer vision and its applications in agriculture.
Furthermore, computer vision can assist in identifying diseases and defects in seedlings, which is crucial for determining their quality. The benefits of automating this process include increased efficiency, improved quality, and cost reduction through a decrease in manual labor and waste.
Based on these studies, it will be possible to develop an automated line for grafting grape cuttings. The line will be equipped with two conveyor belts, a delta robot, and a computer vision system, which will increase the productivity of the process.

Author Contributions

Conceptualization, A.K. and M.R.; software, M.R.; validation, M.R. and V.K.; writing—review and editing, N.G. (Nadezhda Gallini) and A.K.; project administration, N.G. (Natalia Gorbunova) and Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ollat, N.; Bordenave, L.; Tandonnet, J.P.; Boursiquot, J.M.; Marguerit, E. Grapevine rootstocks: Origins and perspectives. Acta Hortic. 2016, 1136, 11–22. [Google Scholar] [CrossRef]
  2. Migicovsky, Z.; Cousins, P.; Jordan, L.M.; Myles, S.; Striegler, R.K.; Verdegaal, P.; Chitwood, D.H. Grapevine rootstocks affect growth-related scion phenotypes. Plant Direct 2019, 5, e00324. [Google Scholar] [CrossRef] [PubMed]
  3. Harris, Z.N.; Pratt, J.E.; Kovacs, L.G.; Klein, L.L.; Kwasniewski, M.T.; Londo, J.P.; Wu, A.S.; Miller, A.J. Grapevine scion gene expression is driven by rootstock and environment interaction. BMC Plant Biol. 2023, 23, 211. [Google Scholar] [CrossRef] [PubMed]
  4. Bianchi, D.M.; Ricciardi, V.; Pozzoli, C.; Grossi, D.; Caramanico, L.; Pindo, M.; Stefani, E.; Cestaro, A.; Brancadoro, L.; De Lorenzis, G. Physiological and Transcriptomic Evaluation of Drought Effect on Own-Rooted and Grafted Grapevine Rootstock (1103P and 101–14MGt). Plants 2023, 12, 1080. [Google Scholar] [CrossRef] [PubMed]
  5. Bozzolo, A.; Thomas, A.L.; Harris, J.L.; Liu, C.; Kwasniewski, M.T.; Striegler, R.K. Performance of ‘Chambourcin’ Winegrape on 10 Different Root Systems in Southern Missouri, USA. HortTechnology 2023, 33, 253–261. [Google Scholar] [CrossRef]
  6. Egorov, E.A.; Shadrina, Z.h.A.; Kochyan, G.A. Assessment of the state and prospects of development of viticulture and nursery breeding in the Russian Federation. Fruit Grow. Vitic. South Russ. 2020, 61, 1–15. [Google Scholar]
  7. Boso, S.; Santiago, J.; Martínez, M. The influence of 110-Ritcher and SO4 rootstocks on the performance of scions of Vitis vinifera L. cv. Albariño clones. Span. J. Agric. Res. 2008, 6, 96. [Google Scholar] [CrossRef] [Green Version]
  8. Lafontaine, M. Rootstock Effect on Quality; Lafontaine, M., Schultz, H., Eds.; Workshop; Geisenheim Research Centre: Geisenheim, Germany, 2001. [Google Scholar]
  9. Becker, A. Newly Bred Varieties of Phylloxera Tolerant Rootstocks; Becker, A., Herrmann, J., Eds.; Workshop; Geisenheim Research Centre: Geisenheim, Germany, 2001. [Google Scholar]
  10. Wallis, C.M.; Wallingford, A.K.; Chen, J. Grapevine rootstock effects on scion sap phenolic levels, resistance to Xylella fastidiosa infection, and progression of Pierce’s disease. Front. Plant Sci. 2013, 4, 502. [Google Scholar] [CrossRef] [Green Version]
  11. Waite, H.; Whitelaw-Weckert, M.; Torley, P. Grapevine propagation: Principles and methods for the production of high-quality grapevine planting material. N. Z. J. Crop. Hortic. Sci. 2015, 43, 144–161. [Google Scholar] [CrossRef]
  12. Cookson, S.J.; Ollat, N. Grafting with rootstocks induces extensive transcriptional re-programming in the shoot apical meristem of grapevine. BMC Plant Biol. 2013, 13, 147. [Google Scholar] [CrossRef]
  13. Mudge, K.; Janick, J.; Scofield, S.; Goldschmidt, E.E. A History of Grafting. Hortic. Rev. 2009, 35, 437–493. [Google Scholar] [CrossRef]
  14. Assunção, M.; Santos, C.; Brazão, J.; Eiras-Dias, J.E.; Fevereiro, P. Understanding the molecular mechanisms underlying graft success in grapevine. BMC Plant Biol. 2019, 19, 396. [Google Scholar] [CrossRef] [Green Version]
  15. Potgieter, A.B.; Zhao, Y.; Zarco-Tejada, P.J.; Chenu, K.; Zhang, Y.; Porker, K.; Biddulph, B.; Dang, Y.P.; Neale, T.; Roosta, F.; et al. Evolution and application of digital technologies to predict crop type and crop phenology in agriculture. Silico Plants 2021, 3, diab017. [Google Scholar] [CrossRef]
  16. Yang, W.; Feng, H.; Zhang, X.; Zhang, J.; Doonan, J.; Batchelor, W.D.; Xiong, L.; Yan, J. Crop Phenomics and High-Throughput Phenotyping: Past Decades, Current Challenges, and Future Perspectives. Mol. Plant 2020, 13, 187–214. [Google Scholar] [CrossRef] [Green Version]
  17. Tripathi, M.K.; Maktedar, D.D. A role of computer vision in fruits and vegetables among various horticulture products of agriculture fields: A survey. Inf. Process. Agric. 2020, 7, 183–203. [Google Scholar] [CrossRef]
  18. Narvaez, F.; Reina, G.; Torres-Torriti, M.; Kantor, G.; Cheein, F. A Survey of Ranging and Imaging Techniques for Precision Agriculture Phenotyping. IEEE/ASME Trans. Mechatron. 2017, 22, 2428–2439. [Google Scholar] [CrossRef]
  19. Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921. [Google Scholar] [CrossRef]
  20. Katal, N.; Rzanny, M.; Mäder, P.; Wäldchen, J. Deep Learning in Plant Phenological Research: A Systematic Literature Review. Front. Plant Sci. 2022, 13, 805738. [Google Scholar] [CrossRef]
  21. Arya, S.; Sandhu, K.S.; Singh, J. Deep learning: As the new frontier in high-throughput plant phenotyping. Euphytica 2022, 218, 47. [Google Scholar] [CrossRef]
  22. Yalcin, H. Plant phenology recognition using deep learning: Deep-Pheno. In Proceedings of the 6th International Conference on Agro-Geoinformatics, Fairfax, VA, USA, 7–10 August 2017; pp. 1–5. [Google Scholar] [CrossRef]
  23. Correia, D.L.; Bouachir, W.; Gervais, D.; Pureswaran, D.; Kneeshaw, D.D.; De Grandpre, L. Leveraging Artificial Intelligence for Large-Scale Plant Phenology Studies from Noisy Time-Lapse Images. IEEE Access 2020, 8, 13151–13160. [Google Scholar] [CrossRef]
  24. Mann, H.M.R.; Iosifidis, A.; Jepsen, J.U.; Welker, J.M.; Loonen, M.J.J.E.; Høye, T.T. Automatic flower detection and phenology monitoring using time-lapse cameras and deep learning. Remote Sens. Ecol. Conserv. 2022, 8, 765–777. [Google Scholar] [CrossRef]
  25. Samiei, S.; Rasti, P.; Vu, J.; Buitink, J.; Rousseau, D. Deep learning-based detection of seedling development. Plant Methods 2020, 16, 103. [Google Scholar] [CrossRef] [PubMed]
  26. Roy, A.M.; Bhaduri, J. A Deep Learning Enabled Multi-Class Plant Disease Detection Model Based on Computer Vision. AI 2021, 2, 413–428. [Google Scholar] [CrossRef]
  27. Fuentes, A.F.; Yoon, S.; Park, D.S. Deep Learning-Based Phenotyping System with Glocal Description of Plant Anomalies and Symptoms. Front. Plant Sci. 2019, 10, 1321. [Google Scholar] [CrossRef] [PubMed]
  28. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Mohimont, L.; Alin, F.; Rondeau, M.; Gaveau, N.; Steffenel, L.A. Computer Vision and Deep Learning for Precision Viticulture. Agronomy 2022, 12, 2463. [Google Scholar] [CrossRef]
  30. Kiktev, N.; Lendiel, T.; Pasichnyk, N.; Khort, D.; Kutyrev, A. Using IoT Technology to Automate Complex Biotechnical Objects. In Proceedings of the 2021 IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&T), Kharkiv, Ukraine, 5–7 October 2021; pp. 17–22. [Google Scholar] [CrossRef]
  31. Oblizanov, A.; Shevskaya, N.; Kazak, A.; Rudenko, M.; Dorofeeva, A. Evaluation Metrics Research for Explainable Artificial Intelligence Global Methods Using Synthetic Data. Appl. Syst. Innov. 2023, 6, 26. [Google Scholar] [CrossRef]
  32. Kazak, A.; Plugatar, Y.; Johnson, J.; Grishin, Y.; Chetyrbok, P.; Korzin, V.; Kaur, P.; Kokodey, T. The Use of Machine Learning for Comparative Analysis of Amperometric and Chemiluminescent Methods for Determining Antioxidant Activity and Determining the Phenolic Profile of Wines. Appl. Syst. Innov. 2022, 5, 104. [Google Scholar] [CrossRef]
  33. Abisha, A.; Bharathi, N. Review on Plant health and Stress with various AI techniques and Big data. In Proceedings of the 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, India, 30–31 July 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar]
  34. Molina, M.; Jiménez-Navarro, M.J.; Martínez-Álvarez, F.; Asencio-Cortés, G. A Model-Based Deep Transfer Learning Algorithm for Phenology Forecasting Using Satellite Imagery. In Proceedings of the Hybrid Artificial Intelligent Systems, Bilbao, Spain, 22–24 September 2021; pp. 511–523. [Google Scholar] [CrossRef]
  35. Pearse, G.; Watt, M.S.; Soewarto, J.; Tan, A.Y. Deep Learning and Phenology Enhance Large-Scale Tree Species Classification in Aerial Imagery during a Biosecurity Response. Remote Sens. 2021, 13, 1789. [Google Scholar] [CrossRef]
  36. Rodrigues, L.; Magalhães, S.A.; da Silva, D.Q.; dos Santos, F.N.; Cunha, M. Computer Vision and Deep Learning as Tools for Leveraging Dynamic Phenological Classification in Vegetable Crops. Agronomy 2023, 13, 463. [Google Scholar] [CrossRef]
  37. Jiang, Y.; Li, C.; Paterson, A.H.; Robertson, J.S. DeepSeedling: Deep convolutional network and Kalman filter for plant seedling detection and counting in the field. Plant Methods 2019, 15, 141. [Google Scholar] [CrossRef] [Green Version]
  38. de Villiers, H.A.; Otten, G.; Chauhan, A.; Meesters, L. Autoencoder-based 3D representation learning for industrial seedling abnormality detection. Comput. Electron. Agric. 2023, 206, 107619. [Google Scholar] [CrossRef]
  39. Aguiar, A.S.; Magalhães, S.A.; dos Santos, F.N.; Castro, L.; Pinho, T.; Valente, J.; Martins, R.; Boaventura-Cunha, J. Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models. Agronomy 2021, 11, 1890. [Google Scholar] [CrossRef]
  40. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  41. Yang, B.; Xu, Y. Applications of deep-learning approaches in horticultural research: A review. Hortic. Res. 2021, 8, 123. [Google Scholar] [CrossRef]
  42. Millan, B.; Diago, M.P.; Aquino, A.; Palacios, F.; Tardaguila, J. Vineyard pruning weight assessment by machine vision: Towards an on-the-go measurement system. OENO One 2019, 53. [Google Scholar] [CrossRef]
  43. Silwal, A.; Yandún, F.; Nellithimaru, A.K.; Bates, T.; Kantor, G.A. Bumblebee: A Path Towards Fully Autonomous Robotic Vine Pruning. Field Robot. 2021, 2, 1661–1696. [Google Scholar] [CrossRef]
Figure 1. Schematic representation of the use of computer vision to improve the affinity of rootstock-graft combinations.
Figure 2. The results of the program developed for the RGB color of an image captured by video camera no. 1. (Initial picture (a); Applying a mask using color to the original image (b); Colorized original picture (c); Selecting colors in the image (mask 1) (d); Selecting colors in the image (mask 2) (e); Selecting colors in the image (mask 3) (f); Selecting colors in the image (mask 4) (g); Selecting colors in the image (mask 5) (h)).
Figure 3. An example of how contour analysis works (a), and when the algorithm is running with the cv. CHAIN_APPROX_SIMPLE parameter (b).
Figure 4. Display of analysis results.
Figure 5. Results obtained from testing the plant contour analysis algorithm (Original image (a); Selection of the contour on the seedling (b); Converted image in black and white (c); Using an algorithm to extract contours (d)).
Figure 6. Calculation of the radii of ellipses (Calculation of the radii of ellipses without taking into account the error for images with noise (a); Calculation with error (b)).
Figure 7. Image processing based on information about the color of the object (Original image (a); Colorized original picture RGB (b); Masked image (c); Colorized original HSV picture (d)).
Figure 8. Definition of a mask of all three colors: red, green, blue (Applying a mask (a); Applying the Contour Analysis Algorithm (b); Converted image in black and white (c); Using an algorithm to extract contours (d)).
Figure 9. The use of the contour analysis algorithm (Original image (a); Path selection (b); Converted image in black and white (c); Using an algorithm to extract contours (d)).
Figure 10. The use of the contour analysis algorithm to evaluate damage in the center of a seedling in the form of a fault (Original image (a); Path selection (b); Converted image in black and white (c); Using an algorithm to extract contours (d)).
Figure 11. The use of the contour analysis algorithm for an image that is slightly blurry (Original image (a); Path selection (b); Converted image in black and white (c); Using an algorithm to extract contours (d)).
Figure 12. Dataset labeling information for initial training.
Figure 13. Dataset labeling information for final training.
Figure 14. Efficiency graph for the algorithm used in initial models.
Figure 15. Efficiency graph for the algorithm used in the final training of the model.
Figure 16. Graphs of metric changes, average precision, and loss function.
