Article

Development of Continuum Robot Arm and Gripper for Harvesting Cherry Tomatoes

by Azamat Yeshmukhametov 1,2, Koichi Koganezawa 3, Yoshio Yamamoto 4, Zholdas Buribayev 1,*, Zhassuzak Mukhtar 1,5 and Yedilkhan Amirgaliyev 5

1 Department of Information Science, Kazakh National University, Al-Farabi 71, Almaty 050040, Kazakhstan
2 Department of Robotics, Nazarbayev University, Nur-Sultan 010000, Kazakhstan
3 Department of Mechanical Engineering, Tokai University, Hiratsuka 151-8677, Japan
4 Department of Precision Engineering, Tokai University, Hiratsuka 151-8677, Japan
5 Laboratory of Artificial Intelligence and Robotics, Institute of Information and Computational Technologies, Pushkin 125, Almaty 050000, Kazakhstan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 6922; https://doi.org/10.3390/app12146922
Submission received: 31 May 2022 / Revised: 27 June 2022 / Accepted: 28 June 2022 / Published: 8 July 2022
(This article belongs to the Section Robotics and Automation)

Abstract

Smart farming is becoming one of the central topics in modern technology. Contemporary farming expands robot applications by using AI to recognize variable patterns. Moreover, agriculture demands a safe robot: the surrounding workspace is fragile and confined, so the robot must adapt to extremely constrained working environments. Therefore, this paper presents a novel tomato harvesting robot arm based on a continuum robot structure. The flexible backbone of the proposed continuum arm provides safe and efficient work in a confined workspace. The paper consists of four parts: the first describes the robot design and the newly designed tomato harvesting gripper tool; the second describes the machine learning model for detecting matured tomatoes and the distance-measuring technique with a single camera; the third explains the robot kinematics and control algorithms; and the final part reports the experimental results. In the conducted experiment, the proposed robot harvested a single tomato in 56 s, and the tomato recognition accuracy was 96 percent.

1. Introduction

Digitalization of the contemporary agriculture industry refers to the broad application of AI (artificial intelligence), which involves robotics, big data, and machine learning [1]. Based on statistics from the World Bank, the average age of agricultural workers worldwide is over 50, and modern employment trends show an urbanizing population, meaning the younger generation is mostly not interested in farming and agriculture [2]. Despite the labor shortage in the agriculture field, food consumption is projected to increase over the next decades [3]. Therefore, to overcome this labor shortage, robots and smart farming technology should compensate for human labor in the agriculture industry [4].
The tomato is one of the most widely consumed fruits in the worldwide market, with a consumption rate that increases gradually year by year. Manual harvesting of tomatoes is typical labor-intensive work, which makes human labor impractical in terms of effectiveness. Moreover, tomatoes are very soft and prone to bruising, which makes it difficult to introduce an automatic harvesting system [5]. Furthermore, one of the challenging issues in tomato harvesting is separating the tomato from the stem in a gentle way; this separation step is mostly neglected in the design of gripper tools for grabbing the tomato [6]. Additionally, the proposed design should be safe during the harvesting process, because the surrounding workspace environment is quite fragile for rigid bodies.
The proposed tomato harvesting robotic system consists of three components: a moving platform to carry the manipulator, a manipulator based on a continuum robot structure, and a grasping tool. Additionally, the software includes the tomato recognition algorithm and the control process.
In the 2010s, researchers developed various types of robots for tomato harvesting; many utilized KUKA, Universal Robots, and SCARA manipulators on mobile robot platforms to collect tomatoes in greenhouses [7,8,9]. However, commercially available manipulators are designed to work in a structured environment, i.e., a factory or manufacturing area where the workspace is constant and does not change. The agricultural field changes as plants grow, and such a variable working environment demands a new technical solution for harvesting robots [10]. One proposed technical solution, by Dr. Tokunawa, is a continuum manipulator with a flexible structure, which has proven to be safe with wide reachability but possesses a low payload capacity [11]. A similar harvesting robot arm was proposed by Henten et al., who designed and developed a cucumber harvesting robot arm with a thermal cutter, for which the successful harvesting rate was reported to be 74% [12,13].
Furthermore, a competing solution by Zhao et al. is a dual-arm SCARA robot, where one arm holds the tomato and the other cuts the stem. This requires additional training for tomato stem detection, and the two arms must work synchronously with high precision [7]. A similar solution was proposed by Kounalakis et al., who used a UR robot manipulator with an RGBD camera for detection and a cutting tool for tomato separation [14]. Moreover, detaching the tomato is also challenging. For instance, Wang et al. developed a gripper with a clamp mechanism to cut the stem after grasping the tomato [15], similar to Zhao et al. Furthermore, Hiroaki et al. proposed a plucking gripper with an infinite rotational joint to automatically detach tomatoes [16], but its detaching success rate in real application was only 60%. A similar gripper design was also proposed by the Root AI company, whose Virgo robot has a SCARA-type arm that detaches tomatoes by twisting them after grasping [17]. Panasonic Co., Ltd., also presented a commercially available tomato picker robot that takes only 2–3 s per picking cycle, making it the fastest machine reported [18]. However, the above-mentioned prototypes cannot detach tomatoes together with their sepal, which may cause problems for tomato durability during transportation.
The last crucial part of the harvesting process is a trained tomato detection system based on a machine learning model. Many solutions have been proposed to recognize tomatoes and discriminate between matured and immature ones [19,20,21,22,23,24,25]. One popular method of tomato detection uses image data obtained by an RGB stereo camera [26,27,28,29]. The most popular detection model is YOLO; its main advantage is that it can run on low-computation devices, such as microprocessor-based boards like the Raspberry Pi [30].
This research presents a novel tomato harvesting robot arm based on a continuum robot structure, together with the design of a new grasping tool with a passive stem-cutting mechanism. Moreover, this research provides a machine learning model for tomato recognition and a control algorithm for the tomato harvesting process. The proposed robotic arm is named TakoBot [31,32] (Tako means octopus in Japanese; Bot comes from robot). This paper is organized in the following order: design concept, kinematic/kinetic formulation, development of the recognition system, and experimental results, followed by some concluding remarks.

2. Robot Design

2.1. Continuum Part Design

According to the intended application, the robot should have the following features:
- A flexible structure to work in confined workspaces;
- Decent precision of motion;
- A payload capacity of more than 100 g;
- Tomato grasping with no damage;
- The ability to discriminate tomato ripeness;
- The ability to detach tomatoes.
The proposed continuum manipulator, named TakoBot, was designed to meet the first three requirements above.
TakoBot is a discrete hyper-redundant cable-driven continuum robot arm. It consists of three main parts: the continuum arm, the pretension unit, and the control box. The continuum part has two sections: the first section is located at the distal portion and the second at the proximal portion, as shown in Figure 1. Each section contains five serially connected segments driven by four wires, each encapsulated by an individual compression spring; therefore, a total of eight wires drive the TakoBot. One segment consists of two spacer discs interconnected by a universal joint and four compression springs (see Figure 2). The four wires that govern the motions of the manipulator are encapsulated in the four compression springs, as shown in Figure 2b; this arrangement allows the springs to be compressed with minimal probability of unpredictable buckling. In the center of each disc, a linear bearing is mounted so the disc can slide along the linear shaft (10 mm sliding length) between the adjacent universal joints. This sliding-disc mechanism avoids local excessive concentration of spring compression by evenly distributing the spring force from segment to segment, which as a result contributes to stabilizing the manipulator's motion.

2.2. Actuating Unit

TakoBot pulls and releases wires using linear lead screws whose screw rods are rotated by stepping motors with a rated torque of 0.49 N·m. In total, TakoBot utilizes four stepping motors: two motors for the first section and two for the second section. Each motor drives two wires using the push-pull principle. Each wire is fixed with a wire sleeve to the screw housing enclosing the screw nut, which slides along a linear shaft to prevent the screw nut from rotating (Figure 3).

2.3. Pretension Part

Holding a certain level of wire tension in wire-driven continuum robots during the entire task motion is a challenging issue. Some studies have reported that wire slacking sometimes occurs in wire-driven actuation systems, especially when multiple wires are used. Furthermore, the proposed manipulator employs push-pull actuation (one motor drives two wires in a push-pull manner), which worsens the condition in return for reducing the number of actuators. In a one-wire-one-actuator system, on the other hand, all motors must work synchronously to keep the required tension; technically, such an approach requires complicated feedback control using additional tension sensors. As a countermeasure, the proposed device prevents cable slack during movement in a passive, mechanical way. The developed pretension mechanism compensates the tension of all eight cables simultaneously, to a certain level, without using sensors. This enhances the device's stability and applicability in severe environments.
The pretension mechanism (PtM) consists of five parts: a pretension octagon base, an inner part, springs with shafts, rollers, and roller holders. The octagon base and inner part are connected by paired linear shafts, and the roller holders slide along the shaft on the mounted linear bearings. To prevent wire friction, the PtM device is equipped with idler pulleys (Figure 4).

2.4. Gripper Design

Designing a gripper tool for harvesting tomatoes is a challenging issue because the tomato is a soft and juicy fruit. Grasping must be gentle to prevent failures such as overpressure. Moreover, the tool design must consider detaching the tomato from its stem (Figure 5). Therefore, we designed a gripper tool with a semi-spherical cup for grasping spherical objects such as a tomato. For detaching, we added cutting blades on the edges of the cup. This design makes it possible to grasp and separate the tomato in one consecutive procedure, which improves the harvesting time (Figure 6). The gripper cup size (30–35 mm) was selected based on the average size of a cherry tomato.
Compared with other grippers, this prototype separates the tomato sequentially and requires no sensors for control. The lack of electronics allows the robot to work in wet and highly humid environments. Furthermore, the proposed gripper cuts the tomato together with its sepal, which helps increase the storage time of harvested tomatoes, whereas other prototypes leave the sepal on the stem.

3. Tomato Recognition System

Recognition of matured tomatoes is also a critical issue for tomato harvesting robots. In this research, we employed machine learning based on neural networks to distinguish matured tomatoes from immature ones and from other similar fruits. As a tomato classifier, we utilized the YOLO (You Only Look Once) neural network to recognize different types of tomatoes. The network architecture of YOLO v5 consists of three consecutive parts: Backbone, Neck, and Head (see Figure 7). All collected data are sent to CSPDarknet for feature extraction (Backbone). The obtained features are then transferred to PANet for feature aggregation (Neck). In the end, the results, such as class, confidence, position, and object dimensions, are produced (Head) [32].
In the Backbone stage, the CSPDarknet uses a modified convolutional neural network for connecting layers of the deep learning network with the effect of alleviating vanishing gradient problems. It uses a CSPNet strategy to split the base layer feature map into two parts and then merges them through an inter-stage hierarchy. This splitting and merging will increase the gradient flow through the network [33].
In the next Neck stage, the PANet is used to improve the instance segmentation process by preserving the spatial information of the object. The PANet is a feature extractor that generates multiple layers of feature map information. It effectively segments and stores spatial information, which helps in localizing pixels to form a mask in the next stage [34].
In the Head stage, the final evaluation is performed. It applies anchor blocks to extract features and generate the final output vectors containing the predicted bounding box coordinates (center, height, width), forecast confidence score, and probability classes.
In this research, we classified tomatoes into three classes labeled by digits: “0” stands for a red, matured tomato; “1” for a green, immature tomato; and “2” for a yellow tomato evaluated as one turning red and maturing. When recognizing tomatoes in photographic images obtained by the camera, the neural network indicates them by circumscribing them with rectangles and labeling them with the above-mentioned digits in the left corner.
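As a rough illustration of this inference step, the sketch below loads a custom-trained YOLOv5 checkpoint through the PyTorch Hub interface and maps the three class indices to the labels above. The weight file name, image name, and confidence threshold are our assumptions for illustration, not artifacts from the paper.

```python
# Minimal inference sketch (assumptions: a custom-trained YOLOv5 checkpoint
# "tomato_best.pt" and the ultralytics/yolov5 PyTorch Hub interface; the file
# and image names are ours, not the paper's).
import torch

CLASS_NAMES = {0: "ripe (red)", 1: "immature (green)", 2: "turning (yellow)"}

# Load the custom model through the YOLOv5 PyTorch Hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="tomato_best.pt")
model.conf = 0.5  # confidence threshold (assumed value)

results = model("greenhouse_frame.jpg")  # single-image inference
for *xyxy, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = xyxy
    print(f"{CLASS_NAMES[int(cls)]}: conf={conf:.2f}, "
          f"box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```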
To assess the performance of machine learning algorithms, many assessment metrics have been developed. We employ metrics based on a confusion matrix (see Figure 8), which contains four combinations: True Positive (TP)—the number of objects that the classifier evaluates as positive and that are actually positive; True Negative (TN)—the number of objects classified as negative that actually belong to the negative class; False Positive (FP)—the number of objects classified as positive that are actually negative; False Negative (FN)—the number of objects that the classifier evaluates as negative but that are actually positive.
Figure 8 shows a normalized confusion matrix for the multiclass (i.e., three-class) classification. The diagonal shows the proportion of TP combinations for each class: 94% of objects in class 0, 96% in class 1, and 92% in class 2 were classified correctly. Additionally, 1% of objects in class 0 were erroneously predicted as class 1 and another 1% as class 2; the remaining 4% were not assigned to any class and are therefore false negatives. For class 1, the remaining 4% beyond the TP solutions are likewise false negatives. For class 2, 1% of all objects were erroneously classified as class 0 and another 1% as class 1; the proportion of false-negative decisions for class 2 is 6%.
Based on these matrix combinations, the main metrics of the algorithm's classification ability are calculated: precision, recall, and accuracy. The precision metric is the ratio between the true-positive results (TP) and all positively classified objects (TP and FP); it represents the ability to distinguish a given class from all other classes. As shown in Figure 9, the precision metric rapidly increases as the iterations progress, which shows that high recognition accuracy was achieved at an early stage.
The recall metric is also determined using TP results, but instead of false-positive decisions (FP), it takes into account the number of objects classified as negative that are actually positive (FN). This metric evaluates the ability to detect a certain class and shows how many positive examples are lost in classification: the higher the recall, the fewer correct predictions are lost. In other words, the recall metric is associated with the confidence of a trained neural network. In a neural network, the neuron weights are associated with accuracy: the higher the weights, the more accurate the trained model. Figure 10 shows that the initial recall value is less than 0.5, indicating poor algorithm quality at the start, but it rapidly increases to almost one.
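For concreteness, a minimal sketch of these definitions is given below; the counts in the example are illustrative, not the paper's.

```python
# Sketch of the metrics described above, computed from raw TP/FP/FN/TN counts.
def precision(tp: int, fp: int) -> float:
    # Share of positively classified objects that are truly positive.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Share of actual positives that the classifier recovered.
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # Share of all decisions that were correct.
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative numbers only (not the paper's): a class with 94 TP, 2 FP, 6 FN.
print(precision(94, 2))  # ~0.979
print(recall(94, 6))     # 0.94
```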
In YOLO, one of the significant metrics is the confidence metric, which indicates the reliability of the classifier's predictions. As the confidence threshold increases, the precision metric increases and the recall decreases. Figure 11 shows that the reliability threshold was 0.966 (the mean of the class 0, 1, and 2 values), which means that almost all classes achieve near-ideal accuracy.
The results of this trial of assessment confirm that the YOLOv5 algorithm has high accuracy, enough to recognize ripe tomatoes.
We also experimented with the confidence metric. We prepared real cherry tomatoes obtained from a grocery store and fake cherry tomatoes printed with dimensions and shapes similar to the real ones. When we set up the fake tomatoes for recognition, the neural network recognized them as real cherry tomatoes, but with only 70–75 percent confidence. When we then placed a real tomato next to the fake ones, the network instantly changed its decision and recognized the real tomato with 96 percent confidence (Figure 12).
Figure 13 shows real experimental results of tomato recognition, classified into the three classes: “0” for red (ripened), “1” for green (not ripened), and “2” for yellow (expected to ripen soon). The experiment was conducted in an agricultural greenhouse near Almaty, Kazakhstan. For the dataset, we collected more than 1500 photos of tomato plants as reference data to train the neural network. A borescope camera with 3-megapixel resolution was used for dataset collection.
The main difference between YOLO and other convolutional neural network (CNN) algorithms used for object detection is that it recognizes objects very quickly in real time: the entire image is input at once and passes through the convolutional neural network only once.
The main technical difference of YOLOv5 is that it is implemented in the PyTorch framework, which does not require a special API to work with the Python programming language. PyTorch also uses a dynamic graph model, which makes it easier for machine learning practitioners to write code.
When detecting objects, YOLOv5 shows relatively good results in recognizing smaller objects compared to Faster R-CNN [35].
Additionally, the Mask R-CNN model was trained on the same dataset.
Table 1 shows the results of calculating the mAP (mean average precision), mAR (mean average recall), and F1 score metrics:
Based on this table, the YOLOv5 metrics show a very good result compared to Mask R-CNN. Thus, it can be argued that the Mask R-CNN algorithm trained poorly, since its values of mAP = 0.13 and F1 score = 0.23 are low.
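As a quick cross-check, the F1 score is the harmonic mean of precision and recall; plugging Table 1's mAP/mAR pairs into that formula approximately reproduces the listed F1 values.

```python
# F1 as the harmonic mean of precision and recall; plugging in Table 1's
# mAP/mAR pairs approximately reproduces the listed F1 scores.
def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

print(round(f1(0.90, 0.87), 2))  # ~0.88 for YOLOv5 (table lists 0.89)
print(round(f1(0.13, 0.82), 2))  # ~0.22 for Mask R-CNN (table lists 0.23)
```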
To measure the distance between a tomato and the camera, we used a measurement method based on a single camera (Figure 14). This allows calibrating the relative position of the gripper tool for a clean grasp of the tomato without any damage (Figure 15).
The camera establishes a one-to-one relationship between the object and its image. Using this principle, we can deduce the distance from the camera to an object ($d$) from known parameters: the focal length ($f$), the radius of the tomato in the image plane ($r$), and the radius of the tomato in the object plane ($R$), with $d = fR/r$. However, a drawback of this method is the size limitation: the recognition program requires the real size of the object and compares it with a predefined size. For instance, in this research we applied the method only to cherry tomatoes of about 30 mm in diameter, which means the distance measurement is applicable only to tomatoes of about that size [36].
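A minimal sketch of this pinhole-model range estimate is below; the focal length and radii are illustrative assumptions, not calibrated values from the paper.

```python
# Sketch of the single-camera range estimate d = f * R / r described above.
# All numeric values are illustrative assumptions, not calibrated parameters.
def distance_mm(focal_px: float, real_radius_mm: float,
                image_radius_px: float) -> float:
    """Distance from camera to a sphere of known size (pinhole model)."""
    return focal_px * real_radius_mm / image_radius_px

# Example: ~15 mm cherry-tomato radius, 800 px focal length, 40 px image radius.
print(distance_mm(800, 15.0, 40.0))  # 300.0 mm
```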

4. Kinematic and Kinetic Formulations

4.1. Forward Kinematic Formulation

Coordinate systems are set at every universal joint.
The homogeneous coordinate transformation matrices are

$$H_{0,1} = \begin{pmatrix} R & u_{0,1} \\ 0 & 1 \end{pmatrix}, \quad u_{0,1} = \begin{pmatrix} x_0 \\ y_0 \\ l_0 \end{pmatrix}$$

$$H_{i-1,i} = \begin{pmatrix} R & u_{i-1,i} \\ 0 & 1 \end{pmatrix}, \quad u_{i-1,i} = \begin{pmatrix} 0 \\ 0 \\ L \end{pmatrix}, \quad (i = 2, \dots, n)$$

$$R = R_z(\theta_{zi}) R_x(\theta_{xi}) R_y(\theta_{yi})$$

where $x_0$ and $y_0$ give the initial position of the base, $R_x(\theta_{xi})$ and $R_y(\theta_{yi})$ are the rotation matrices of the ith universal joint with its two rotation angles $\theta_{xi}$ and $\theta_{yi}$, $R_z(\theta_{zi})$ is the rotation matrix of the ith disk with rotation angle $\theta_{zi}$ about the axial axis, and $L$ is the length between neighboring universal joints (Figure 16).
Multiplying the H-matrices successively, we obtain unit vectors and the position vector of the ith coordinate system;
$$H_{0,i} = H_{0,1} H_{1,2} \cdots H_{i-1,i} = \begin{pmatrix} i_i & j_i & k_i & u_i \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

where $u_i$ is the position of the ith universal joint $U_i$ $(i = 1, \dots, n-1)$.
The position vector $p_n$ of the end-point $P_n$ and the positions of the sliding plates $P_i$ $(i = 1, \dots, n-1)$ of the manipulator are obtained by

$$\begin{pmatrix} p_i \\ 1 \end{pmatrix} = H_{0,i} \begin{pmatrix} 0 & 0 & l_i & 1 \end{pmatrix}^T, \quad (i = 1, \dots, n)$$

where $l_i$ is the axial length between the ith universal joint and the ith plate, which varies as the plate slides along the rods, except $l_n$, which is the fixed length between the nth universal joint and the most distal plate.
The position vectors of the eight wire holes at the base plate, $a_i, c_i, \hat{a}_i, \hat{c}_i$ for the first section and $b_i, d_i, \hat{b}_i, \hat{d}_i$ for the second section, are determined as

$$a_0 = \begin{pmatrix} a_x \\ a_y \\ 0 \end{pmatrix}, \quad b_0 = \begin{pmatrix} b_x \\ b_y \\ 0 \end{pmatrix}, \quad c_0 = \begin{pmatrix} c_x \\ c_y \\ 0 \end{pmatrix}, \quad d_0 = \begin{pmatrix} d_x \\ d_y \\ 0 \end{pmatrix},$$
$$\hat{a}_0 = \begin{pmatrix} \hat{a}_x \\ \hat{a}_y \\ 0 \end{pmatrix}, \quad \hat{b}_0 = \begin{pmatrix} \hat{b}_x \\ \hat{b}_y \\ 0 \end{pmatrix}, \quad \hat{c}_0 = \begin{pmatrix} \hat{c}_x \\ \hat{c}_y \\ 0 \end{pmatrix}, \quad \hat{d}_0 = \begin{pmatrix} \hat{d}_x \\ \hat{d}_y \\ 0 \end{pmatrix}$$
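To make the chained-transform construction concrete, the sketch below composes one homogeneous matrix per universal joint with $R = R_z R_x R_y$ and reads off the end-point position. The joint angles, unit spacing, and tip offset are illustrative assumptions, not identified robot parameters.

```python
# Numerical sketch of the forward kinematics above. All numbers are
# illustrative assumptions, not the paper's robot parameters.
import numpy as np

def rot_x(t): c, s = np.cos(t), np.sin(t); return np.array([[1,0,0],[0,c,-s],[0,s,c]])
def rot_y(t): c, s = np.cos(t), np.sin(t); return np.array([[c,0,s],[0,1,0],[-s,0,c]])
def rot_z(t): c, s = np.cos(t), np.sin(t); return np.array([[c,-s,0],[s,c,0],[0,0,1]])

def homogeneous(th_z, th_x, th_y, u):
    # 4x4 transform with R = Rz(th_z) Rx(th_x) Ry(th_y) and translation u.
    H = np.eye(4)
    H[:3, :3] = rot_z(th_z) @ rot_x(th_x) @ rot_y(th_y)
    H[:3, 3] = u
    return H

L, l_n, n = 0.04, 0.02, 10            # unit spacing and tip offset [m], assumed
angles = [(0.0, 0.05, 0.03)] * n      # (th_z, th_x, th_y) per joint, assumed

H = homogeneous(*angles[0], [0.0, 0.0, 0.06])   # base transform H_{0,1}
for th in angles[1:]:
    H = H @ homogeneous(*th, [0.0, 0.0, L])     # chain H_{i-1,i}
p_n = (H @ np.array([0.0, 0.0, l_n, 1.0]))[:3]  # end-point position p_n
print(p_n)
```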

4.2. Kinetic Formulation

TakoBot has two actuating sections: the first section (the distal part) and the second section (the proximal part). Each section is operated by four actuating wires driven by two motors, so eight cables are actuated in total. The kinetic formulation describes the motion in terms of forces, combining the springs and the motor angles. Moreover, this formulation must also account for the pretension mechanism in order to calculate the wire tensions.
The second section has $m$ units and the first section has $n-m$ units. The four pairs of wires are labeled $a$ and $\hat{a}$, $b$ and $\hat{b}$, $c$ and $\hat{c}$, and $d$ and $\hat{d}$.
The equilibrium of moments at $U_n$, belonging to the first section, is

$$(S_{a,n} - f_a)\,\overline{a_n a_{n-1}} \times (a_n - u_n) + (S_{\hat{a},n} - f_{\hat{a}})\,\overline{\hat{a}_n \hat{a}_{n-1}} \times (\hat{a}_n - u_n) + (S_{c,n} - f_c)\,\overline{c_n c_{n-1}} \times (c_n - u_n) + (S_{\hat{c},n} - f_{\hat{c}})\,\overline{\hat{c}_n \hat{c}_{n-1}} \times (\hat{c}_n - u_n) + m_w (p_n - u_n) \times g = \mathbf{0}$$

where $\overline{a_n a_{n-1}} = \dfrac{a_n - a_{n-1}}{|a_n - a_{n-1}|}$, etc., $m_w$ is the payload applied at the end-point, and $g$ is the gravitational acceleration vector.
The equilibrium of moments at $U_i$ $(i = m+1, \dots, n-1)$, belonging to the first section, is

$$(S_{a,i} - f_a)\,\overline{a_i a_{i-1}} \times (a_i - u_i) + (S_{\hat{a},i} - f_{\hat{a}})\,\overline{\hat{a}_i \hat{a}_{i-1}} \times (\hat{a}_i - u_i) + (S_{c,i} - f_c)\,\overline{c_i c_{i-1}} \times (c_i - u_i) + (S_{\hat{c},i} - f_{\hat{c}})\,\overline{\hat{c}_i \hat{c}_{i-1}} \times (\hat{c}_i - u_i) + m_p \sum_{k=i+1}^{n} (p_k - u_i) \times g = \mathbf{0}$$

where $f_a, f_{\hat{a}}, f_c, f_{\hat{c}}$ are the wire tensions and $S_{a,i}, S_{\hat{a},i}, S_{c,i}, S_{\hat{c},i}$ $(i = m+1, \dots, n)$ are the spring tensions of the ith unit. "$\times$" denotes the cross product and "$|\cdot|$" the modulus of a vector. $m_p$ is the mass of one unit, including the plate, the rod, and the universal joint (Figure 17).
The spring tensions are obtained as

$$S_{a,i} = k(L - |a_i - a_{i-1}|), \quad S_{\hat{a},i} = k(L - |\hat{a}_i - \hat{a}_{i-1}|), \quad S_{c,i} = k(L - |c_i - c_{i-1}|), \quad S_{\hat{c},i} = k(L - |\hat{c}_i - \hat{c}_{i-1}|)$$

with spring coefficient $k$. Equations (9) and (10) contain $3(n-m)$ equations in $4(n-m)-1$ variables: the angles $\theta_{xi}, \theta_{yi}, \theta_{zi}$ $(i = m+1, \dots, n)$ of the $n-m$ universal joints and the slide lengths of the plates $l_i$ $(i = m+1, \dots, n-1)$.
The equilibrium of forces at the ith plate $(i = m+1, \dots, n-1)$ is

$$\Big[ S_{a,i+1}\,\overline{a_{i+1} a_i} + S_{a,i}\,\overline{a_i a_{i-1}} + S_{\hat{a},i+1}\,\overline{\hat{a}_{i+1} \hat{a}_i} + S_{\hat{a},i}\,\overline{\hat{a}_i \hat{a}_{i-1}} + S_{c,i+1}\,\overline{c_{i+1} c_i} + S_{c,i}\,\overline{c_i c_{i-1}} + S_{\hat{c},i+1}\,\overline{\hat{c}_{i+1} \hat{c}_i} + S_{\hat{c},i}\,\overline{\hat{c}_i \hat{c}_{i-1}} + (n-i)\,m_p\,g \Big] \cdot (p_i - u_i) = 0$$

Equation (10) provides $n-m-1$ equations. Combined with (7) and (8), we obtain $4(n-m)-1$ equations, sufficient in number to solve for the $4(n-m)-1$ variables $\theta_{x,i}, \theta_{y,i}, \theta_{z,i}$ $(i = m+1, \dots, n)$ and $l_i$ $(i = m+1, \dots, n-1)$ for a given set of wire tensions $f_a, f_{\hat{a}}, f_c, f_{\hat{c}}$.
The equilibrium of moments at $U_m$, the universal joint located at the most distal position of the second section, is

$$S_{a,m+1}\,\overline{a_{m+1} a_m} \times (a_m - u_m) + (S_{b,m} - f_b)\,\overline{b_m b_{m-1}} \times (b_m - u_m) + S_{\hat{a},m+1}\,\overline{\hat{a}_{m+1} \hat{a}_m} \times (\hat{a}_m - u_m) + (S_{\hat{b},m} - f_{\hat{b}})\,\overline{\hat{b}_m \hat{b}_{m-1}} \times (\hat{b}_m - u_m) + S_{c,m+1}\,\overline{c_{m+1} c_m} \times (c_m - u_m) + (S_{d,m} - f_d)\,\overline{d_m d_{m-1}} \times (d_m - u_m) + S_{\hat{c},m+1}\,\overline{\hat{c}_{m+1} \hat{c}_m} \times (\hat{c}_m - u_m) + (S_{\hat{d},m} - f_{\hat{d}})\,\overline{\hat{d}_m \hat{d}_{m-1}} \times (\hat{d}_m - u_m) + \Big( m_w (p_n - u_m) + m_p \sum_{k=m+1}^{n-1} (p_k - u_m) \Big) \times g = \mathbf{0}$$

For the second section, we can derive equations similar to (8)–(10) by replacing $\{a_i, \hat{a}_i, c_i, \hat{c}_i\}$ with $\{b_i, \hat{b}_i, d_i, \hat{d}_i\}$ and $\{S_{a,i}, S_{\hat{a},i}, S_{c,i}, S_{\hat{c},i}\}$ with $\{S_{b,i}, S_{\hat{b},i}, S_{d,i}, S_{\hat{d},i}\}$ for $i = 1, \dots, m-1$ in (8) and for $i = 1, \dots, m$ in (9) and (10).
As a result, we obtain $4m$ equations including (11), sufficient in number to solve for the $4m$ variables $\theta_{x,i}, \theta_{y,i}, \theta_{z,i}$ and $l_i$ $(i = 1, \dots, m)$ for a given set of wire tensions $f_b, f_{\hat{b}}, f_d, f_{\hat{d}}$ (Figure 17).
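Numerically, these balances form a square nonlinear system that is solved for the joint angles and slide lengths at each given set of wire tensions. The sketch below shows one way such a solve could be organized with scipy.optimize.root; the residual function here is a stand-in placeholder, not the paper's statics model.

```python
# The moment/force balances above form a square nonlinear system: 4(n-m)-1
# equations in the joint angles and slide lengths for given wire tensions.
# This is only a solver-pattern sketch; `equilibrium_residual` is a dummy
# stand-in for the full equilibrium equations, not the paper's implementation.
import numpy as np
from scipy.optimize import root

def equilibrium_residual(x, wire_tensions):
    # x packs (theta_x, theta_y, theta_z) per unit plus the slide lengths l_i.
    # A real residual would evaluate every cross-product moment balance; here
    # we use a trivial linear system so the sketch runs end to end.
    A = np.eye(len(x))
    return A @ x - 0.001 * np.resize(wire_tensions, len(x))

n_units = 5                                     # assumed number of units
f = np.array([2.0, 2.1, 1.9, 2.0])              # illustrative tensions [N]
x0 = np.zeros(4 * n_units - 1)                  # initial guess
sol = root(equilibrium_residual, x0, args=(f,))
print(sol.success, sol.x[:4])
```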

4.3. Pretension Mechanism Formulation

Each pretension spring receives a force of $2f$; therefore,

$$2 f_\sigma = k_p u_{p\sigma}, \quad \sigma = a, b, c, d$$
$$2 f_{\hat{\sigma}} = k_p u_{p\hat{\sigma}}, \quad \hat{\sigma} = \hat{a}, \hat{b}, \hat{c}, \hat{d}$$

where $u_{p\sigma}$ and $u_{p\hat{\sigma}}$ are the compression lengths of the pretension springs, whose spring constant is $k_p$. $u_{p\sigma}$ and $u_{p\hat{\sigma}}$ are determined by the motor rotation angle and the wire length (Figure 18):

$$2 u_{p\sigma} = 2 \bar{u}_{p\sigma} + \frac{\lambda \varphi_\sigma}{2\pi} + \sum_{i=1}^{n} |\sigma_i - \sigma_{i-1}| - nL$$
$$2 u_{p\hat{\sigma}} = 2 \bar{u}_{p\hat{\sigma}} - \frac{\lambda \varphi_\sigma}{2\pi} + \sum_{i=1}^{n} |\hat{\sigma}_i - \hat{\sigma}_{i-1}| - nL$$

where $2\bar{u}_{p\sigma}$ and $2\bar{u}_{p\hat{\sigma}}$ are the initially preset compression lengths of the pretension springs.
Substituting Equation (12) into Equation (13), the wire tensions $f_a, f_{\hat{a}}, f_c, f_{\hat{c}}, f_b, f_{\hat{b}}, f_d, f_{\hat{d}}$ are determined by the four motor angles $\phi_a, \phi_b, \phi_c, \phi_d$ (Figure 19) as

$$f_a = \tfrac{1}{2} k_p \left( 2\bar{u}_{pa} + \frac{\lambda \phi_a}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |a_i - a_{i-1}| - nL \Big) \right), \quad f_{\hat{a}} = \tfrac{1}{2} k_p \left( 2\bar{u}_{p\hat{a}} - \frac{\lambda \phi_a}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |\hat{a}_i - \hat{a}_{i-1}| - nL \Big) \right)$$
$$f_c = \tfrac{1}{2} k_p \left( 2\bar{u}_{pc} + \frac{\lambda \phi_c}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |c_i - c_{i-1}| - nL \Big) \right), \quad f_{\hat{c}} = \tfrac{1}{2} k_p \left( 2\bar{u}_{p\hat{c}} - \frac{\lambda \phi_c}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |\hat{c}_i - \hat{c}_{i-1}| - nL \Big) \right)$$
$$f_b = \tfrac{1}{2} k_p \left( 2\bar{u}_{pb} + \frac{\lambda \phi_b}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |b_i - b_{i-1}| - nL \Big) \right), \quad f_{\hat{b}} = \tfrac{1}{2} k_p \left( 2\bar{u}_{p\hat{b}} - \frac{\lambda \phi_b}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |\hat{b}_i - \hat{b}_{i-1}| - nL \Big) \right)$$
$$f_d = \tfrac{1}{2} k_p \left( 2\bar{u}_{pd} + \frac{\lambda \phi_d}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |d_i - d_{i-1}| - nL \Big) \right), \quad f_{\hat{d}} = \tfrac{1}{2} k_p \left( 2\bar{u}_{p\hat{d}} - \frac{\lambda \phi_d}{4\pi} + \tfrac{1}{2} \Big( \sum_{i=1}^{n} |\hat{d}_i - \hat{d}_{i-1}| - nL \Big) \right)$$

where $\phi_\sigma$ is the motor rotation angle generating the pretension, $\lambda$ is the lead of the screw rod, and $k_p$ is the spring constant of the pretension spring.

4.4. Inverse Kinematic Solution

For a given set of variables $\theta_{x,i}, \theta_{y,i}, \theta_{z,i}$ $(i = 1, \dots, n)$ and $l_i$ $(i = 1, \dots, n-1)$, we calculate the end-point position by Equation (6):

$$\begin{pmatrix} p_n \\ 1 \end{pmatrix} = H_{0,n} \begin{pmatrix} 0 \\ 0 \\ l_n \\ 1 \end{pmatrix} = \begin{pmatrix} i_n & j_n & k_n & r_n \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ l_n \\ 1 \end{pmatrix} = \begin{pmatrix} k_n l_n + r_n \\ 1 \end{pmatrix}$$
Taking the total differential of $p_n = k_n l_n + r_n$ with respect to $\theta_{x,i}, \theta_{y,i}, \theta_{z,i}$ $(i = 1, \dots, n)$, $l_i$ $(i = 1, \dots, n-1)$, and the motor angles $\phi_a, \phi_b, \phi_c, \phi_d$,

$$\Delta p_n = \frac{\partial p_n}{\partial v} \Delta v + \frac{\partial p_n}{\partial \phi} \Delta \phi$$

where $v = (\theta_{x1}, \dots, \theta_{xn}, \theta_{y1}, \dots, \theta_{yn}, \theta_{z1}, \dots, \theta_{zn}, l_1, \dots, l_{n-1}) \in \mathbb{R}^{4n-1}$, $\phi = (\phi_a, \phi_b, \phi_c, \phi_d)$, $\partial p_n / \partial v \in \mathbb{R}^{3 \times (4n-1)}$, and $\partial p_n / \partial \phi \in \mathbb{R}^{3 \times 4}$.
Meanwhile, let $w = (w_1, w_2, \dots, w_{4n-1})^T = 0_{4n-1}$ represent the $4n-1$ equations provided by Equations (7), (8), (10), and (11), which also contain $\theta_{x,i}, \theta_{y,i}, \theta_{z,i}$ $(i = 1, \dots, n)$, $l_i$ $(i = 1, \dots, n-1)$, and the motor angles $\phi_a, \phi_b, \phi_c, \phi_d$. Taking the total differential of $w = 0_{4n-1}$ as well, we have

$$\Delta w = \frac{\partial w}{\partial v} \Delta v + \frac{\partial w}{\partial \phi} \Delta \phi = 0_{4n-1}$$
where $\partial w / \partial v \in \mathbb{R}^{(4n-1) \times (4n-1)}$ and $\partial w / \partial \phi \in \mathbb{R}^{(4n-1) \times 4}$. Since $\partial w / \partial v$ is a square matrix, we can solve (19) for the vector $\Delta v$:

$$\Delta v = -\left( \frac{\partial w}{\partial v} \right)^{-1} \frac{\partial w}{\partial \phi} \Delta \phi$$
Substituting (17) into (15), we have

$$\Delta p_n = -\frac{\partial p_n}{\partial v} \left( \frac{\partial w}{\partial v} \right)^{-1} \frac{\partial w}{\partial \phi} \Delta \phi + \frac{\partial p_n}{\partial \phi} \Delta \phi = \left( \frac{\partial p_n}{\partial \phi} - \frac{\partial p_n}{\partial v} \left( \frac{\partial w}{\partial v} \right)^{-1} \frac{\partial w}{\partial \phi} \right) \Delta \phi = J \Delta \phi$$

which can be solved for $\Delta \phi$ by using a generalized inverse of the Jacobian $J \in \mathbb{R}^{3 \times 4}$:

$$\Delta \phi = J^{\dagger} \Delta p_n + P(J) \Psi$$

where $J^{\dagger} \in \mathbb{R}^{4 \times 3}$ is a generalized inverse of $J$, $P(J) \in \mathbb{R}^{4 \times 4}$ is a null-space projection operator of $J$, and $\Delta \phi_N \in \mathbb{R}^4$ is a correction of $\phi$ that minimizes a positive scalar potential $\varphi$ by making use of the redundant actuation. We use $J^{\dagger} = J^T (J J^T)^{-1}$ and $P(J) = I - J^{\dagger} J$.
Equation (19) provides a variety of motor angles $\Delta \phi$ for a given position and direction variation $\Delta p_n$.
Applying the Euler method, we have the following variational equation,

$$\varphi + \frac{\partial \varphi}{\partial \phi} \Delta \phi_N = 0$$

which is solved by

$$\Delta \phi_N = -\varphi \, \frac{(\partial \varphi / \partial \phi)^T}{(\partial \varphi / \partial \phi)(\partial \varphi / \partial \phi)^T}$$

As a candidate for $\varphi$, we take $\varphi = k_{nz}^2$, where $k_{nz}$ is the z component of $k_n$, the unit vector of the end-point oriented in the axial direction. This means the axial direction of the end-point stays on a horizontal plane as far as possible while keeping the designated position (Figure 20).
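To make the redundancy-resolution step concrete, the sketch below evaluates $\Delta\phi = J^{\dagger}\Delta p_n + P(J)\Psi$ with numpy; the Jacobian here is randomly generated for illustration, whereas in the paper it comes from the statics model.

```python
# Sketch of the redundancy-resolution step above: dphi = J^+ dp + P(J) psi
# with J^+ = J^T (J J^T)^{-1} and P(J) = I - J^+ J. The Jacobian is random
# for illustration only; it is not derived from the paper's model.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 4))          # 3D end-point error, 4 motor angles
dp = np.array([0.01, -0.005, 0.0])       # desired end-point correction [m]
psi = rng.standard_normal(4)             # null-space preference direction

J_pinv = J.T @ np.linalg.inv(J @ J.T)    # right pseudo-inverse (full row rank)
P = np.eye(4) - J_pinv @ J               # null-space projector
dphi = J_pinv @ dp + P @ psi             # motor-angle update

print(np.allclose(J @ dphi, dp))         # True: null-space term adds no error
```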

5. Control

TakoBot’s control architecture consists of two main parts: software and hardware (Figure 21). The work process starts with the software: it first scans for ripe tomatoes; after detection, the camera measures the distance, and the measured information is used to calculate the robot’s inverse kinematics. The inverse kinematics solution and the tomato coordinates are then sent to the Arduino board, which drives the motors so that the gripper reaches the desired position.
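A minimal sketch of such a host-to-Arduino link is shown below using pyserial. The port name, baud rate, and message format are our assumptions, not the paper's protocol.

```python
# A minimal sketch of the host-to-Arduino link described above, via pyserial.
# The port name, baud rate, and message format are assumptions for
# illustration, not the paper's actual protocol.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
    x, y, z = 120.0, 45.0, 310.0  # hypothetical target tomato coordinates [mm]
    ser.write(f"GOTO {x:.1f} {y:.1f} {z:.1f}\n".encode())
    reply = ser.readline().decode().strip()  # e.g., an ack from the firmware
    print(reply)
```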
TakoBot has six motors: one micro-servo motor for the gripper tool and five bipolar stepping motors, of which four control the manipulator and one controls the linear slider. Power consumption is divided into two parts as well: the motors and motor drivers (TMC2208) run on 12 V, while the Arduino board and the gripper’s micro-servo motor run on 6 V.
For tomato selection, we made an algorithm that measures a priority value P, defined below. First, the camera detects several tomatoes. Next, the algorithm constructs a two-dimensional frame that covers all detected tomatoes. Subsequently, it measures the horizontal location H and the vertical location V of each detected tomato and calculates the priority value P = H + V. The picking process starts from the tomato with the lowest value of P and proceeds to the highest. The tomato picking diagram is illustrated in Figure 22.
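A small sketch of this priority rule is given below; the detection structure and field names are our assumptions for illustration.

```python
# Sketch of the picking-priority rule above: P = H + V measured inside the
# frame that bounds all detections; harvest in ascending order of P.
# The Detection structure and its field names are assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float  # horizontal center in the image [px]
    y: float  # vertical center in the image [px]

def picking_order(detections: list[Detection]) -> list[Detection]:
    # H and V are offsets from the top-left corner of the bounding frame.
    x0 = min(d.x for d in detections)
    y0 = min(d.y for d in detections)
    return sorted(detections, key=lambda d: (d.x - x0) + (d.y - y0))

tomatoes = [Detection(320, 180), Detection(120, 260), Detection(500, 90)]
for t in picking_order(tomatoes):
    print(t)  # picks (120, 260) first, then (320, 180), then (500, 90)
```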

6. Experiment

As a controller, we used an Arduino Uno board with TMC2208 motor drivers for the stepping motors. For continuous work, we also installed a fan to cool the electronic parts.
For the experiment, we fabricated cherry tomatoes and hung them in front of the manipulator. The task was to reach and grasp a tomato, detach it from the stem, and put it into the basket. During the experiment, we also tested the robot’s manipulability, such as reaching the object from various angles (Figure 23).
The total length of the robot arm is 800 mm: the slender (mobile) part is 400 mm, and the other 400 mm is the control box and actuators. The slider length of the platform is 1000 mm, but only 600 mm is available because the stationary part of the manipulator occupies 400 mm.
In the conducted experiment, TakoBot demonstrated high feasibility for working in a confined workspace, as well as high reachability; a single arm was enough to perform the given task. In the real-world experiment, additional obstacles were placed in the workspace to test the robot arm’s reachability and obstacle avoidance capability. In this experiment, we used hard white paper to imitate a confined workspace.
During the experiment, we discovered that the harvesting time increased in the presence of obstacles: the robot’s slender part spent more time adapting to the newly constrained environment and reaching the object (Figure 24). Furthermore, the tomato grasping process also takes more time, consuming almost half of the whole harvesting cycle. However, the grasping success rate was sufficiently high: the manipulator was able to grasp all detected tomatoes, albeit spending more time to complete the task.

7. Conclusions

This paper described a new continuum robot design and a state-of-the-art application of the robot in the agriculture industry. Continuum robots generally have limited payload capacity; therefore, extensive research and design iterations were conducted to improve the payload capacity by means of a pretension mechanism. Such pick-and-place capability for continuum robots opens new horizons of robot application. Moreover, based on the experimental results, this robot can work safely in collaboration with humans. The proposed kinematic and kinetic formulations are also simplified for broad application. In addition, the proposed gripper tool for tomato separation demonstrated a reliable, simple technical solution for the harvesting process.
Additionally, the tomato separation process, the tomato grasping tool design, the AI-based tomato recognition system, and the control principle (the kinematic formulation of the hardware and the control device) were explained.
However, it was found that the tomato harvesting process took an average of 56 s per tomato (Figure 25), which is slower than human work. This cycle-time issue could be improved by optimizing the robot size and the control algorithm.
The future plan is to adapt the robot to the real-world environment and improve the control algorithm by implementing a nonlinear model predictive control method and collecting data to optimize the robot trajectory. Such improvements could decrease the total harvesting time and increase the success rate of tomato recognition.

Author Contributions

Conceptualization, A.Y., K.K. and Y.Y.; methodology, Z.B.; software, A.Y., Z.M. and Z.B.; validation, A.Y.; resources, K.K. and Y.Y.; investigation, A.Y.; dataset collection, Z.M. and Z.B.; writing—original draft preparation, A.Y.; writing—review and editing, A.Y., K.K. and Y.Y.; supervision, Y.A.; funding acquisition, Z.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP08857573).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Altalak, M.; Uddin, M.A.; Alajmi, A.; Rizg, A. Smart Agriculture Applications Using Deep Learning Technologies: A Survey. Appl. Sci. 2022, 12, 5919. [Google Scholar] [CrossRef]
  2. The World Bank. Global Consumption Database for 2019, Fresh or Chilled Vegetables Section; The World Bank: Washington, DC, USA, 2019. [Google Scholar]
  3. Kitzes, J.; Wackernagel, M.; Loh, J.; Peller, A.; Goldfinger, S.; Cheng, D.; Tea, K. Shrink and share humanity’s present and future ecological footprint. Philos. Trans. Roy. Soc. Lond. B Biol. Sci. 2008, 363, 467–475. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Li, Y.; Feng, Q.; Li, T.; Xie, F.; Liu, C.; Xiong, Z. Advance of Target Visual Information Acquisition Technology for Fresh Fruit Robotic Harvesting: A Review. Agronomy 2022, 12, 1336. [Google Scholar] [CrossRef]
  5. Jun, J.; Kim, J.; Seol, J.; Kim, J.; Son, H.I. Towards an Efficient Tomato Harvesting Robot: 3D Perception, Manipulation, and End-Effector. IEEE Access 2021, 9, 17631–17640. [Google Scholar] [CrossRef]
  6. Gao, J.; Zhang, F.; Zhang, J.; Yuan, T.; Yin, J.; Guo, H.; Yang, C. Development and evaluation of a pneumatic finger-like end-effector for cherry tomato harvesting robot in greenhouse. Comput. Electron. Agric. 2022, 197, 106879. [Google Scholar] [CrossRef]
  7. Zhao, Y.; Gong, L.; Liu, C.; Huang, Y. Dual-Arm Robot Design and Testing for Harvesting Tomato in Greenhouse; International Federation of Automatic Control; Elsevier: Amsterdam, The Netherlands, 2016. [Google Scholar]
  8. Ling, X.; Zhao, Y.; Gong, L.; Liu, C.; Wang, T. Dual-Arm Cooperation and Implementing for Robotic Harvesting Tomato using Binocular Vision; Robotics and Autonomous Systems; Elsevier: Amsterdam, The Netherlands, 2019. [Google Scholar]
  9. Feng, Q.; Wang, X.; Wang, G.; Li, Z. Design and test of tomatoes harvesting robot. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 2–5 August 2015. [Google Scholar]
  10. Fujinaga, T.; Yasukawa, S.; Ishii, K. Evaluation of tomato fruit harvestability for robotic harvesting. In Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII), Iwaki, Japan, 11–14 January 2021; pp. 35–39. [Google Scholar] [CrossRef]
  11. Takaaki, T.; Koichi, O.; Akinori, H. 1 segment continuum manipulator for automatic harvesting robot: Prototype and modeling. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation, Takamatsu, Japan, 6–9 August 2017. [Google Scholar]
  12. Van Henten, E.J.; Hemming, J.; Van Tuijl, B.A.J.; Kornet, J.G.; Meuleman, J.; Bontsema, J.; Van Os, E.A. An Autonomous Robot for Harvesting Cucumbers in Greenhouses. Auton. Robot. 2002, 13, 241–258. [Google Scholar] [CrossRef]
  13. Van Henten, E.J.; Hemming, J.; van Tuiji, B.; Kornet, J.; Bontsema, J.; van Os, E. Field test of an autonomous cucumber picking robot. Biosyst. Eng. 2003, 86, 305–313. [Google Scholar] [CrossRef]
  14. Kounalakis, N.; Kalykakis, E.; Pettas, M.; Makris, A.; Kavoussanos, M.M.; Sfakiotakis, M.; Fasoulas, J. Development of a Tomato Harvesting Robot: Peduncle Recognition and Approaching. In Proceedings of the 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 11–13 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  15. Hayashi, S.; Shigematsu, K.; Yamamoto, S.; Kobayashi, K.; Kohno, Y.; Kamata, J.; Kurita, M. Evaluation of a strawberry-harvesting robot in a field test. Biosyst. Eng. 2010, 105, 160–171. [Google Scholar] [CrossRef]
  16. Hiroaki, Y.; Kotaro, N.; Takaomi, H.; Masayuki, I. Development of an autonomous tomato harvesting robot with rotational plucking gripper. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016. [Google Scholar]
  17. Root AI Company. Intro Virgo. 2019. Available online: https://root-ai.com/#intro (accessed on 20 November 2020).
  18. Panasonic Company. Introducing AI-equipped Tomato Harvesting Robots to Farms May Help to Create Jobs. 2018. Available online: https://news.panasonic.com/global/stories/2018/57801.html (accessed on 20 November 2020).
  19. Chen, X.; Yang, S.X. A practical solution for ripe tomato recognition and localisation. J. Real-Time Image Process. 2013, 8, 35–51. [Google Scholar] [CrossRef]
  20. Huang, L.; Yang, S.X.; He, D. Abscission Point Extraction for Ripe Tomato Harvesting Robots. Intell. Autom. Soft Comput. 2012, 18, 751–763. [Google Scholar] [CrossRef]
  21. Arefi, A.; Mollags, A.M.; Mollazade, K.; Teimourlou, R.F. Recognition and localization of ripen tomato based on machine vision. Aust. J. Crop. Sci. 2011, 5, 1144–1149. [Google Scholar]
  22. Zhang, F. Ripe Tomato Recognition with Computer Vision. In Proceedings of the 2015 International Industrial Informatics and Computer Engineering Conference, Xi’an, China, 10–11 January 2015; Atlantis Press: Paris, France, 2015; pp. 466–469. [Google Scholar]
  23. Benavides, M.; Cantón-Garbín, M.; Sánchez-Molina, J.A.; Rodríguez, F. Automatic Tomato and Peduncle Location System Based on Computer Vision for Use in Robotized Harvesting. Appl. Sci. 2020, 10, 5887. [Google Scholar] [CrossRef]
  24. Malik, M.H.; Zhang, T.; Li, H.; Zhang, M.; Shabbir, S.; Saeed, A. Mature Tomato Fruit Detection Algorithm Based on improved HSV and Watershed Algorithm. IFAC PapersOnLine 2018, 51, 431–436. [Google Scholar] [CrossRef]
  25. Yuanshen, Z.; Liang, G.; Yixiang, H.; Chengliang, L. Robust tomato recognition for robotic harvesting using feature images fusion. Sensors 2016, 16, 173. [Google Scholar] [CrossRef] [Green Version]
  26. Yoshida, T.; Fukao, T.; Hasegawa, T. A Tomato Recognition Method for Harvesting with Robots Using Point Clouds. In Proceedings of the 2019 IEEE/SICE International Symposium on System Integration, Paris, France, 14–16 January 2019. [Google Scholar]
  27. Yoshida, T.; Fukao, T.; Hasegawa, T. Fast Detection of Tomato Peduncle Using Point Cloud with a Harvesting Robot. J. Robot. Mechatron. 2018, 30, 180–186. [Google Scholar] [CrossRef]
  28. Xiangyu, C.; Krishneel, C.; Yoshimaru, T.; Kotaro, N.; Hiroaki, Y.; Kei, O.; Masayuki, I. Reasoning–Based Vision Recognition for Agricultural Humanoid Robot toward Tomato Harvesting. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015. [Google Scholar]
  29. Biqing, L.; Yongfa, L.; Hongyan, Z.; Shiyong, Z. The design and Realization of Cherry Tomato Harvesting Robot based on IOT. Int. J. Online Biomed. Eng. 2016, 12, 23–26. [Google Scholar]
  30. Magalhães, S.; Castro, L.; Moreira, G.; dos Santos, F.; Cunha, M.; Dias, J.; Moreira, A. Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse. Sensors 2021, 21, 3569. [Google Scholar] [CrossRef] [PubMed]
  31. Yeshmukhametov, A.; Koganezawa, K.; Yamamoto, Y. Design and Kinematics of Cable-Driven Continuum Robot Arm with Universal Joint Backbone. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 2444–2449. [Google Scholar] [CrossRef]
  32. Yeshmukhametov, A.; Koganezawa, K.; Yamamoto, Y. A Novel Discrete Wire-Driven Continuum Robot Arm with Passive Sliding Disc: Design, Kinematics and Passive Tension Control. Robotics 2019, 8, 51. [Google Scholar] [CrossRef] [Green Version]
  33. Yin, H.; Chai, Y.; Yang, S.X.; Mittal, G.S. Ripe Tomato Recognition and Localization for a Tomato Harvesting Robotic System. In Proceedings of the 2009 International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009; pp. 557–562. [Google Scholar] [CrossRef]
  34. Sural, S.; Qian, G.; Pramanik, S. Segmentation and histogram generation using the HSV color space for image retrieval. In Proceedings of the International Conference on Image Processing, Barcelona, Spain, 22–25 September 2002; Volume 2. [Google Scholar] [CrossRef]
  35. Wu, W.; Liu, H.; Li, L.; Long, Y.; Wang, X.; Wang, Z.; Chang, Y. Application of local fully Convolutional Neural Network combined with YOLO v5 algorithm in small target detection of remote sensing image. PLoS ONE 2021, 16, e0259283. [Google Scholar] [CrossRef] [PubMed]
  36. Cao, Y.-T.; Wang, J.-M.; Sun, Y.-K.; Duan, X.-J. Circle Marker Based Distance Measurement Using a Single Camera. Lect. Notes Softw. Eng. 2013, 1, 376–380. [Google Scholar] [CrossRef] [Green Version]
Figure 1. TakoBot design and experimental setup.
Figure 2. (a) TakoBot segment design and (b) transparent view.
Figure 3. TakoBot wire-actuating unit.
Figure 4. TakoBot pretension mechanism design and structure. (a) Pretension mechanism CAD view; (b) Wire routing schematics and pretension device structure.
Figure 5. (a) Gripper CAD design and (b) Fabricated prototype.
Figure 6. Gripper working process.
Figure 7. Architecture of YOLOv5.
Figure 8. Confusion matrix.
Figure 9. An example of the growing precision metric in the iteration process.
Figure 10. An example of the growing recall metric in the iteration process.
Figure 11. Assessment of precision for all classes for the YOLO v5 model.
Figure 12. Experimental test with real and fake tomato recognition.
Figure 13. Testing of the tomato recognition in a real environment.
Figure 14. Tomato detection architecture.
Figure 15. The gripper and camera allocation and the way to measure object location by a single camera [36]. (a) CAD view of gripper and camera structure. (b) Principal working diagram of the camera for distance measurement.
Figure 16. TakoBot kinematic structure.
Figure 17. Cable eyelet arrangement for end and mid-section discs.
Figure 18. Pretension mechanism structure.
Figure 19. End-section kinetic structure.
Figure 20. End-effector orientation vector.
Figure 21. TakoBot control architecture.
Figure 22. Tomato picking priority diagram.
Figure 23. TakoBot object reaching angles.
Figure 24. Tomato harvesting procedure. (a) scanning and reaching for the tomato, (b) grasping the tomato, (c) separating the tomato, (d) putting it in the basket.
Figure 25. Tomato harvesting experiment timeline and trajectory graph.
Table 1. Comparison table of YOLOv5 and Mask R-CNN.

Metric      YOLOv5    Mask R-CNN
mAP         0.90      0.13
mAR         0.87      0.82
F1 score    0.89      0.23
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

