**1. Introduction**

Physical pest control with lasers is widely regarded as an effective way to reduce the environmental pollution and human health risks associated with chemical pesticides [1,2]. Since 1980, many researchers have explored pest elimination with lasers [3–5]. These studies demonstrated that laser radiation can damage the exoskeleton and underlying tissues of pests, disrupt the anabolism of tissue cells, and ultimately kill the pests [6,7]. Li et al. [5] found that the 24 h mortality rate of fourth-instar larvae of *Pieris rapae* (L.) (Lepidoptera: Pieridae) reached 100% under the optimal working parameter combination of a laser power of 7.5 W, an irradiation area of 6.189 mm², an exposure time of 1.177 s, and an irradiation position in the middle of the abdomen. Therefore, to make laser pest control applicable in engineering settings, a pest control device must focus the laser accurately on the middle of the pest's abdomen so that the concentrated energy kills the pest precisely.
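
As a back-of-the-envelope check (not a value stated in [5]), the reported optimum corresponds to an energy dose of roughly 1.43 J/mm², computed as power × exposure time / irradiated area:

```python
def laser_fluence(power_w: float, exposure_s: float, area_mm2: float) -> float:
    """Energy per unit area (J/mm^2) delivered during a laser exposure."""
    return power_w * exposure_s / area_mm2

# Parameters reported by Li et al. [5] for P. rapae larvae.
dose = laser_fluence(power_w=7.5, exposure_s=1.177, area_mm2=6.189)
print(f"{dose:.3f} J/mm^2")  # prints "1.426 J/mm^2"
```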

In this respect, machine vision technology can be applied to identify pests in the field [8,9]. However, most pests have protective coloration for defense, and with the intensive planting of crops the image background is complex and pest image features are inconspicuous [10]. Moreover, prior research on pest identification has mainly focused on classifying and counting pest species, with little attention paid to the 3D location of pests. Therefore, this study integrates deep learning and binocular vision to accurately identify the pest and locate the laser strike point on it, providing technical support for robotic pest control in vegetables.

**Citation:** Li, Y.; Feng, Q.; Lin, J.; Hu, Z.; Lei, X.; Xiang, Y. 3D Locating System for Pests' Laser Control Based on Multi-Constraint Stereo Matching. *Agriculture* **2022**, *12*, 766. https:// doi.org/10.3390/agriculture12060766

Academic Editor: Andrea Colantoni

Received: 12 April 2022 Accepted: 25 May 2022 Published: 27 May 2022



The mask regional convolutional neural network (Mask R-CNN) first proposed by He et al. [11] can be used for instance segmentation and detection of pest images and has produced multiple research results in pest detection tasks [12,13]. Wang et al. [14] constructed a *Drosophila* instance segmentation model that automatically detects and segments images of the *Drosophila* wing, chest, and abdomen, with an average precision of 94%. Instance segmentation obtains target contour information without image morphological processing and is therefore well suited to the accurate pest identification required by laser pest control. However, the above methods segment RGB images of pests in specific environments, such as laboratories [15] and yellow sticky traps [16]. Existing algorithms still struggle to accurately segment pest targets with protective coloration in field environments.
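
For downstream use, each instance returned by a Mask R-CNN-style model is a binary mask, from which the bounding box can be derived directly. A minimal NumPy sketch (the mask here is a toy array, not model output):

```python
import numpy as np

def mask_to_box(mask: np.ndarray):
    """Bounding box (x_min, y_min, x_max, y_max) of a binary instance mask,
    such as Mask R-CNN produces per detected instance. None for an empty mask."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 6x6 mask with a 2-row by 3-column blob.
m = np.zeros((6, 6), dtype=bool)
m[2:4, 1:4] = True
print(mask_to_box(m))  # prints "(1, 2, 3, 3)"
```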

As an extension of computer vision technology, near-infrared (NIR) imaging is widely used in insect species identification [17] and plant disease monitoring [18]. Sankaran et al. [19] used visible-near-infrared and thermal imaging to rapidly identify citrus greening with an average precision of 87%. Luo et al. [20] used NIR imaging to track and monitor the structure and physiological phenology of Mediterranean tree-grass ecosystems under seasonal drought. Our team [21] proposed a monocular camera unit with an 850 nm optical bandpass filter to capture images for pest identification, and the NIR image was confirmed to highlight the gray-level difference between the larvae of *P. rapae* and the vegetable leaves (Figure 1).
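
One simple way to exploit such a gray-level difference is a global Otsu threshold on the NIR intensity histogram; this is only an illustrative baseline (the system described here uses Mask R-CNN for segmentation, not thresholding):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Threshold (0-255) maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mean_all = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        if cum == 0 or cum == total:
            continue
        w0 = cum / total                                   # weight of dark class
        m0 = cum_mean / cum                                # mean of dark class
        m1 = (mean_all * total - cum_mean) / (total - cum) # mean of bright class
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal "NIR" intensities: dark pest pixels vs. bright leaf pixels.
vals = np.concatenate([np.full(500, 60, np.uint8), np.full(500, 190, np.uint8)])
print(otsu_threshold(vals))  # a threshold separating the two modes
```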

**Figure 1.** Comparison of near-infrared imaging of *Pieris rapae* on cabbage leaves. (**a**) Original image. (**b**) Near-infrared image. During image acquisition, *P. rapae* and cabbage leaves were placed in a black box, and an 850 nm infrared filter was installed on the camera to collect near-infrared images under an 850 nm ring light source. The original image was captured without the filter, under a white ring light of the same power as the 850 nm source.

After the pests in the field are identified and segmented, the laser strike point is located in three dimensions based on binocular stereo vision. Stereo matching is a key factor affecting the location accuracy of binocular vision. According to the constraint range and search strategy, matching algorithms can be divided into local [22,23], global [24,25], and semi-global [26,27] stereo matching. However, the larvae of *P. rapae* are small targets: the average widths of the 4th and 5th instar larvae are only 1.564 mm and 2.738 mm, respectively [28]. Computing a dense, image-wide parallax map with the above stereo matching methods for such small targets results in low matching efficiency and poor location accuracy. Therefore, on the basis of the determined operation range, a multi-constraint method was used to narrow the candidate matching region and thereby improve the efficiency and accuracy of stereo matching.
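
Once a pixel is matched across the rectified stereo pair, its depth follows from standard pinhole triangulation, Z = f·B/d (focal length f in pixels, baseline B, disparity d). A minimal sketch with illustrative camera parameters, not the paper's calibration:

```python
def triangulate(u: float, v: float, disparity: float,
                f: float, baseline: float, cx: float, cy: float):
    """3D point (X, Y, Z) in the left-camera frame from a rectified stereo match.
    f: focal length in pixels; baseline in the desired output unit (e.g. mm)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid match")
    z = f * baseline / disparity          # depth from disparity
    x = (u - cx) * z / f                  # back-project pixel offset to metric X
    y = (v - cy) * z / f
    return x, y, z

# Illustrative values: f = 1200 px, B = 60 mm, principal point (640, 360).
print(triangulate(u=700, v=400, disparity=24.0, f=1200.0,
                  baseline=60.0, cx=640.0, cy=360.0))  # prints "(150.0, 100.0, 3000.0)"
```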

In this study, we designed a 3D locating system for laser-based pest control to address the above problems of inconspicuous pest image features, unclear strike-point location, and inefficient matching algorithms. A binocular camera unit with an 850 nm optical filter was designed to capture pest images. A ResNet50-based Mask R-CNN extracted the bounding box and segmentation mask of the *P. rapae* pixel area, and the laser strike point was located in the middle of the pest abdomen, which was extracted through an improved ZS thinning algorithm with smoothing iterations. Furthermore, a multi-constraint matching method was applied to the stereo-rectified images. The sub-pixel target points in the left and right images were optimally matched by fitting the optimal parallax value with the most similar features between the template areas of the two images. The 3D coordinates of each laser strike point were then computed from its pixel coordinates in the two images. Finally, the recognition and localization performance of the system for targets at different locations was evaluated on a field test platform. The results can serve as a theoretical reference for automatic laser strikes by pest control robots.
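
The classic ZS (Zhang-Suen) thinning on which the improved algorithm builds can be sketched as follows; this is the standard two-sub-iteration formulation and does not reproduce the paper's smoothing-iteration improvements:

```python
import numpy as np

def zhang_suen_thin(img: np.ndarray) -> np.ndarray:
    """Skeletonize a 0/1 binary image with the classic Zhang-Suen algorithm."""
    img = img.copy().astype(np.uint8)

    def neighbours(y, x, im):
        # P2..P9, clockwise from the pixel directly above (y-1, x).
        return [im[y-1, x], im[y-1, x+1], im[y, x+1], im[y+1, x+1],
                im[y+1, x], im[y+1, x-1], im[y, x-1], im[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):          # the two sub-iterations of ZS
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    n = neighbours(y, x, img)
                    b = sum(n)       # number of foreground neighbours
                    # a = number of 0->1 transitions around the circular sequence
                    a = sum((n[i] == 0 and n[(i + 1) % 8] == 1) for i in range(8))
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:   # delete in batch after scanning
                img[y, x] = 0
                changed = True
    return img

# Thin a 3-pixel-thick horizontal bar down to (roughly) a 1-pixel centerline.
bar = np.zeros((7, 12), dtype=np.uint8)
bar[2:5, 1:11] = 1
skeleton = zhang_suen_thin(bar)
print(bar.sum(), "->", skeleton.sum())  # pixel count shrinks
```

In the paper's pipeline, the midpoint of such a skeleton serves as the candidate abdomen strike point; here the input is a synthetic bar for illustration.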

#### **2. Materials and Methods**
