**2. Interactive Cognition Phenotyping Method**

In many industrial applications, machine vision techniques for object detection and measurement are mature. Industrial robots generally use non-interactive, passive detection methods to perceive the surrounding environment. Occlusion and overlap rarely exist in industrial scenes; in other words, the scenes are structured, so non-interactive cognition methods can largely meet the cognition requirements. In field environments, however, simple machine vision inspection methods are not compatible with complex unstructured agricultural scenes [25], and it is difficult to phenotype crops in occluded scenes. To solve this problem, we propose a new phenotyping paradigm of interactive cognition. A phenotyping robot is introduced to interact with the surrounding plants, mimicking breeding experts' manual operations of removing occlusion and overlap while performing phenotyping in the field. A detectable, quasi-structured environment is thereby constructed, so full cognition of the crops can be achieved with machine vision-based detection methods.

### *2.1. Interactively Cognitive Humanoid Field Phenotyping Robot*

In order to interact with crops and construct various detectable scenes for phenotyping, the robot needs high operational dexterity. To reproduce the phenotyping operations of experts, we used a bio-inspired design methodology to design the humanoid robot body. The robot body, shown in Figure 1, is based on the open-source project InMoov [26] and has been redesigned to improve its adaptability to the agricultural working environment. The shoulder and arm have five degrees of freedom, which ensures both the completion of complex, human-like actions and sufficient workspace for movement. The manipulator is a humanoid mechanical hand inspired by an open-source project [27]. The hand has one degree of freedom: its five fingers grip and stretch simultaneously, so phenotyping actions such as separating ears and handling stalks can be performed.

**Figure 1.** Interactively cognitive humanoid phenotyping robot.

The robot is placed on a field truss platform that can move along tracks in the field, allowing it to reach a suitable position to interact with the plants under analysis. It moves along two mutually perpendicular horizontal tracks at a speed of 0.1 m/s to 0.3 m/s, and it can descend 25 cm toward the ground and rise 75 cm above it.

The liftable line-structured light system mounted on the chest of the robot body is used for environmental detection and cognition. The system consists of a Basler acA2500-14gc color camera and a line laser module that scans up and down, driven by a stepper motor. The camera has a resolution of 2590 × 1942 px, a frame rate of 14 fps, and a 1/2.5-inch sensor. The scanning speed is approximately 20 mm/s and the scanning stroke is 500 mm. With this structured light system, 3D reconstruction of plants and measurement of many phenotypic parameters can be realized.
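At its core, line-structured-light depth recovery is a ray-plane intersection: each laser pixel is back-projected through the calibrated camera onto the known laser plane. The following minimal sketch illustrates the principle; the intrinsic matrix and laser-plane parameters are illustrative placeholders, not the actual calibration of our system.

```python
import numpy as np

# Assumed pinhole intrinsics for a 2590 x 1942 px sensor (illustrative values).
K = np.array([[2500.0,    0.0, 1295.0],
              [   0.0, 2500.0,  971.0],
              [   0.0,    0.0,    1.0]])

# Laser plane n . X = d in camera coordinates, from plane calibration
# (hypothetical orientation and offset, in millimetres).
n = np.array([0.0, 0.7071, 0.7071])
d = 300.0

def triangulate(u, v):
    """Back-project pixel (u, v) onto the laser plane -> 3D point in mm."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    t = d / (n @ ray)                               # ray-plane intersection
    return t * ray

# One detected laser pixel -> one 3D surface point; sweeping the laser over
# the 500 mm stroke stacks these profiles into the plant point cloud.
point = triangulate(1400.0, 1000.0)
```

Stacking the profile recovered at each stepper-motor position over the scanning stroke yields the full point cloud used for phenotypic measurement.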

An interactive system consisting of a Raspberry Pi, a microphone, and a PiCamera is mounted on the robot's head. The PiCamera streams live video of the field to the server built on the Raspberry Pi. The video stream delay is about 0.5 s, and the resolution is 1280 × 960 at a 30-fps frame rate.

### *2.2. Interactive Cognition Phenotyping Process*

When the robot moves to the front of the plant under analysis, it can actively interact with the plant to build a more detectable environment if occlusion or overlap is present. As shown in Figure 2, when the plant is sheltered by other plants, the robot arm can push those plants aside to remove the occlusion; the plant can then be detected by the vision system and full phenotypic data can be acquired. Similarly, when the back part of a plant is occluded by its front part, the same active interaction process can be used to build a detectable environment for phenotyping.

**Figure 2.** The robot removing occlusion.

The robot operates on a field truss platform and can move along two mutually perpendicular horizontal tracks. A fixed position in the field is taken as the origin of the absolute coordinate system, and the two moving directions are the X-axis and Y-axis, respectively. We use the drive signals of the servo motors as odometers: when the robot moves to a position to measure a specific plant, the distance traveled along each direction is calculated from the pulse count of the drive signals, so the geographic coordinates of the robot can be determined. The relative distance of the plant to the robot is then measured by the pre-calibrated structured light system, from which the geographic coordinates of the plant follow. In our experimental field plot, where the longest moving distance is 50 m, the measurement error of the robot's geographic coordinates is approximately 2 cm; within the robot's operating space, the structured light measurement error is approximately 0.1 cm. In this manner, an electronic map of every plant in the field can be established, and the phenotypic data of every plant measured by the robot platform can be recorded on the map. With the electronic map, the platform can measure the same plant at different growth stages, establishing a full-growth-cycle phenotype database that provides complete phenotypic data for crop breeding.
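The two-step localization described above (pulse-count odometry plus the structured-light relative offset) can be sketched as follows; the encoder resolution constant is hypothetical, not a specification of the actual servo motors.

```python
# Assumed drive-pulse resolution along each track (pulses per metre).
PULSES_PER_M = 10000

def robot_position(pulses_x, pulses_y):
    """Odometry: accumulated drive pulses -> robot (X, Y) in metres."""
    return pulses_x / PULSES_PER_M, pulses_y / PULSES_PER_M

def plant_position(robot_xy, offset_xy):
    """Add the structured-light relative offset (metres) to the robot pose
    to obtain the plant's absolute coordinates on the electronic map."""
    return robot_xy[0] + offset_xy[0], robot_xy[1] + offset_xy[1]

rx, ry = robot_position(123400, 56700)        # -> (12.34, 5.67) m
px, py = plant_position((rx, ry), (0.42, -0.15))
```

Each plant's absolute coordinates, computed this way, become its key on the electronic map, so repeated measurements across growth stages can be attributed to the same individual plant.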

With the introduction of the robot technique and the active interactive cognition method, the efficiency and accuracy of automatic phenotyping can be considerably improved, which is required to relieve the "phenotyping bottleneck". In addition, with the electronic map, automatic phenotyping over full growth cycles can be realized.

**3. Bio-Inspired Operational Forms**

In natural agricultural environments, it is extremely difficult for robots to perform fully autonomous measurement and cognition; to date, operations in these unstructured scenes cannot reach relatively high accuracy. As a result, phenotyping schedules and operations need to be formulated first. Given the humanoid structure of the robot, a bio-inspired solution is proposed: by mimicking the phenotyping operations of breeding experts, the phenotyping operational schedules are regularized.

The human–robot interactive (HRI) technique is used to regularize the phenotyping schedule. Breeding experts remotely control the robot platform through the HRI system to perform interactive phenotyping operations. The HRI framework is shown in Figure 3.

**Figure 3.** Human–robot interactive (HRI) framework for interactive phenotyping.
