*Article* **A Real-Time Shrimp with and without Shells Recognition Method for Automatic Peeling Machines Based on Tactile Perception**

**Xueshen Chen, Yuesong Xiong, Peina Dang, Chonggang Tao, Changpeng Wu, Enzao Zhang and Tao Wu \***

College of Engineering, South China Agricultural University, Guangzhou 510642, China **\*** Correspondence: wt55pub@scau.edu.cn

**Abstract:** Accurate, automatic, real-time recognition of shrimp with and without shells is key to improving the efficiency of automatic peeling machines and reducing labor costs. Existing methods cannot achieve high accuracy in the absence of target samples because there are too many shrimp species to assemble a complete dataset. In this paper, we propose a tactile recognition method with universal applicability. First, we obtained tactile data, e.g., the texture and hardness of the shrimp surface, through a novel layout of two sensors of the same type, and constructed fusion features based on energy and nonstationary volatility (ENSV). Second, the ENSV features were input to an adaptive recognition boundary model (ARBM) for training to obtain the recognition boundary of shrimp with and without shells. Finally, the effectiveness of the proposed model was verified by comparison with other tactile models. The method was tested on different shrimp species; the overall recognition accuracy and the accuracies for shrimp with and without shells were 88.2%, 87.0%, and 89.4%, respectively, verifying the generalizability of the proposed method. This method can help improve the efficiency of automatic peeling machines and reduce labor costs.

**Keywords:** shrimp; automatic peeling machines; tactile perception; recognition

#### **1. Introduction**

The shrimp industry is a key sector of the fishing industry [1]. Research on equipment for the automated processing of shrimp is important because manual processing not only leads to low productivity and high production costs but also reduces the quality of shrimp products [2,3]. The typical process used by shrimp peeling equipment is to first remove the head of the shrimp, followed by the shell, by squeezing it through a roller sleeve [4,5]. The automated recognition of shrimp with and without shells must be explored because existing automatic peeling machines are not perfect and require the secondary manual recognition of shrimp with shells.

Machine vision is widely used as a nondestructive detection technique for the quality evaluation and body measurement of shrimp [6–8]. Some scholars have implemented shrimp detection tasks by extracting color, shape, and texture features from images and combining them with machine learning models [9–11]. Deep learning, which can automatically learn feature representations from raw image pixels without relying on handcrafted features, has achieved great success in image recognition [12,13]. Zhang et al. proposed a YOLOv3 multisource fish detection framework based on multiscale fusion and identified fish bodies in fish images with a CenterNet target detection network, achieving an average accuracy of 90.2% [14]. Conrady et al. constructed a sea bream recognition model based on a mask region-based convolutional neural network (R-CNN) with good accuracy [15]. However, visual methods can only recognize samples similar to the training samples [16]. As there are more than 2000 shrimp species, it is difficult to obtain a comprehensive sample dataset. In addition, visual processing is mainly performed on video [17], which limits its application in shrimp identification because of its long training time and high equipment requirements.

**Citation:** Chen, X.; Xiong, Y.; Dang, P.; Tao, C.; Wu, C.; Zhang, E.; Wu, T. A Real-Time Shrimp with and without Shells Recognition Method for Automatic Peeling Machines Based on Tactile Perception. *Agriculture* **2023**, *13*, 422. https://doi.org/10.3390/agriculture13020422

Academic Editors: Jin Yuan, Wei Ji and Qingchun Feng

Received: 24 October 2022; Revised: 14 January 2023; Accepted: 30 January 2023; Published: 10 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Tactile sensing is another form of perception, one that is unaffected by shrimp species. Tactile sensing recognizes and detects objects by analyzing their tactile time-series signals [18,19]. It is widely used in different fields owing to its high processing speed and recognition accuracy for objects with large force differences [20–22]. Wang's team and Zhang's team applied principal component analysis (PCA) to reduce the dimensionality of tactile signals and recognized different objects using machine learning methods [23,24]. Keser's team and Qin's team used the discrete wavelet transform (DWT) to generate feature vectors from tactile sample signals and then classified the tactile signals [25,26]. In the abovementioned studies, most of the tactile data on the tested objects are homogeneous, and whole or partial features are directly extracted for recognition based on manual experience. However, the tactile data obtained from the surface of the shrimp are inhomogeneous, and shrimp with shells have complex and variable shell attachment sites, making it difficult to obtain accurate experimental results.

In this paper, we propose a method to identify shrimp with and without shells by tactile sensation. First, we use two sensors of the same type to obtain tactile data on the texture and hardness of the shrimp surface, and construct fusion features based on energy and nonstationary volatility (ENSV). Then, based on the feature distribution of the ENSV, an adaptive recognition boundary model (ARBM) is constructed. Finally, we verify the feasibility and generalizability of the proposed method. The main contributions of this study are as follows.


#### **2. Materials and Methods**

#### *2.1. Experimental Setup*

In this study, a tactile sensor was developed. When the tactile sensor slides across the surface of an object, it senses the surface texture and hardness information of the object and transmits signals over time through two sensing cells. The tactile sensor consists of four carbon fiber plates (Zesheng Carbon Fiber Products Factory, Zhongshan, China) and two piezoelectric film polyvinylidene fluoride (PVDF) sensors (Jiangmen Antai Electronics Co., Ltd., Jiangmen, China). The fabrication of the tactile sensor proposed in this study is simple, as shown in Figure 1a.

Four carbon fiber plates were offset and stacked in turn. When the tactile sensor touches an object, it amplifies the vibration features to reveal the tactile features of the object. Two piezoelectric film PVDF sensors, each with a copper block embedded at one end, increase the visibility and recognizability of the tactile signal.

One piezoelectric film PVDF sensor (Sensor A) is horizontally installed in the middle of four carbon fiber plates. The copper block extends out of the carbon fiber plate and is in a suspended state. In this manner, the piezoelectric film PVDF sensor can obtain the surface texture information when the object is touched by the tactile sensor. The other piezoelectric film PVDF sensor (Sensor B) is installed in the middle of the longest carbon fiber plate. The copper block faces downward along the carbon fiber sheet. In this manner, the piezoelectric film PVDF sensor can obtain the hardness information when the object is touched by the tactile sensor. The material specifications of the tactile sensor are listed in Table 1.

**Figure 1.** Schematic diagram of tactile sensor. (**a**) The physical diagram of tactile sensor; (**b**) diagram of experimental setup of tactile time-series acquisition.


**Table 1.** Material and structural parameters of the tactile sensors.

With the conveyor belt speed fixed, the carbon fiber plate of the tactile sensor scans the surface of the shrimp. Sensor A captures information about the texture of the shrimp's body by vibrating as the carbon fiber plate comes into contact with the shrimp. Sensor B, which is bent by the contact force, captures information about the hardness of the shrimp. Shrimp with shells generally have a hard and rough surface, whereas shrimp without shells have a soft and smooth surface. These differential data on the shrimp's body surface are obtained through the two sensing units.

The experimental setup for tactile time-series acquisition is shown in Figure 1b. First, the shrimp were transported by a conveyor belt. When a shrimp passed the laser sensor, data acquisition from the tactile sensor began. Then, an Arduino (Shanghai Longzhan Information Technology Co., Ltd., Shanghai, China) collected the output signals of the tactile sensor, which were passed to a Bluetooth module and wirelessly transmitted to the computer in real time for processing and analysis. The data visualization interface was built in LabVIEW. Finally, the obtained tactile signals were processed in MATLAB.

#### *2.2. Data Processing*

Tactile time-series data were obtained from shrimp with and without shells. First, discrete tactile data were preprocessed by theoretical waveform analysis. Second, the ENSV features were extracted from the preprocessed tactile data. Finally, the ENSV was input into the ARBM to obtain the recognition models of shrimp with and without shells.

#### 2.2.1. Tactile Signal Acquisition and Preprocessing

The tactile sensors described in Section 2.1 were used to acquire tactile data from the shrimp. All samples were placed on a conveyor belt moving at a speed of 0.1 m/s for tactile data acquisition. Taking into account the distance between the end of the tactile sensor and the laser sensor, data acquisition started 2000 ms after the laser sensor was activated, to analyze the data efficiently and reduce storage space. The sampling frequency of the tactile sensor's analog signal was set to 1300 Hz, twice the tactile data frequency, to ensure that the tactile data do not overlap in the frequency domain. To capture the complete tactile sensing process of a shrimp, the data capacity of one sample was set to 5000 data points (2500 each for Sensor A and Sensor B). Acquisition ended after 5000 data points had been collected for each sample.

The raw signal plot is shown in Figure 2a. The blue waveform represents the data acquired by Sensor A, i.e., the shrimp surface texture information. The red waveform represents the data acquired by Sensor B, i.e., the shrimp hardness information. During the dynamic process of data acquisition, the raw output signal contains a DC component, which leads to a nonzero starting signal and different starting values for the two sensing cells.

**Figure 2.** Data preprocessing process diagram. (**a**) Waveform diagram of the original tactile signal; (**b**) Waveform diagram of the tactile data after preprocessing.

When extracting energy features, direct calculation on the raw signal would result in large energy values for each segment; when extracting nonstationary volatility, direct calculation would result in small nonstationary volatility values for each segment. Both cases affect the recognition accuracy of the sensor. Therefore, it is necessary to filter the DC component of the signal. However, once the DC component is filtered, the tactile signal contains values less than 0, which leads to errors in the calculation of the nonstationary volatility values. The data after preprocessing are shown in Figure 2b.

To reduce the interference of DC components in feature extraction, two tactile signal preprocessing methods were used. These are the direct filtering of the DC components from tactile signals when extracting energy features, and the minimum value filtering of tactile signals when extracting nonstationary volatility features. The specific raw signal processing is expressed as Formulas (1) and (2).

$$SC\_N = TS\_N' - \overline{TS\_N'} \tag{1}$$

$$SM\_N = TS\_N' - \min\left(TS\_N'\right) \tag{2}$$

where $SC_N$ is the DC-filtered signal, *N* is the number of sampling points per sensing cell (*N* = 2500), $TS_N'$ is the original tactile signal, $\overline{TS_N'}$ is the average value of the raw tactile signal, $SM_N$ is the minimum-filtered signal, and $\min(TS_N')$ is the minimum value of the raw tactile signal.
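The two preprocessing schemes of Formulas (1) and (2) amount to mean subtraction and minimum subtraction. A minimal sketch (Python with NumPy assumed; the function name is illustrative):

```python
import numpy as np

def preprocess(ts: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Apply the two DC-removal schemes of Formulas (1) and (2).

    sc: mean-subtracted signal, used for the energy feature.
    sm: minimum-subtracted signal (non-negative), used for the
        nonstationary-volatility feature.
    """
    sc = ts - ts.mean()   # Formula (1): filter the DC component
    sm = ts - ts.min()    # Formula (2): shift so all values are >= 0
    return sc, sm
```

The minimum-subtracted signal is guaranteed non-negative, which avoids the errors noted above when computing nonstationary volatility.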

Training a model directly on raw tactile signals to recognize different objects requires a complex learning process that ignores detailed feature information about surface texture and hardness [27]. Segmenting the preprocessed tactile signal taps into these details, reducing the signal length processed at each instant while preserving the signal characteristics [28]. The sliding window method was used to segment the data with a fixed step size to ensure data continuity after segmentation. The effects of data segmentation are shown in Figure 3a,b. The number of segments is calculated by

$$i = \frac{N - Sl + Ss}{Ss} \tag{3}$$

where *i* is the number of segments (*i* is an integer), *N* is the length of the preprocessed data, *Sl* is the window data length, and *Ss* is the sliding step of the segments. The window data length and sliding step length were set as 50 and 10, respectively.
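The sliding-window segmentation of Formula (3) can be sketched as follows (NumPy assumed; with *N* = 2500, *Sl* = 50, and *Ss* = 10, this yields *i* = 246 segments):

```python
import numpy as np

def segment(signal: np.ndarray, sl: int = 50, ss: int = 10) -> np.ndarray:
    """Slide a window of length sl with step ss over the signal.

    The number of segments follows Formula (3): i = (N - Sl + Ss) / Ss.
    """
    i = (len(signal) - sl + ss) // ss
    return np.stack([signal[k * ss : k * ss + sl] for k in range(i)])
```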

**Figure 3.** Schematic diagram of tactile data segmentation. (**a**) Schematic diagram of tactile signal segmentation with DC component filtering; (**b**) schematic diagram of tactile signals segmentation with minimum value filtering.

#### 2.2.2. ENSV Features Extraction

Machine learning techniques combined with feature extraction methods can improve the recognition accuracy as well as speed-up the training process. We selected the ENSV features as the feature vector for the recognition of shrimp with and without shells. The energy feature in ENSV characteristics can well reflect the changes of the force on the sensor, and the nonstationary volatility feature can make the sensor more clear in the force process. The fusion of the two features can reduce the interference of invalid information and amplify the tactile differences between shrimp with and without shells.

First, the energy features of each segment were extracted after DC component filtering. Second, the nonstationary volatility features of each segment were extracted after minimum filtering. Finally, the energy and nonstationary volatility features were fused to obtain the identification feature vector. The feature extraction process is shown in Figure 4.

In the process of acquiring tactile signals, there is a difference in the blocking force between the tactile sensor and shrimp with and without shells. The surface of shrimp without shells is smooth and soft, producing a small blocking force. In contrast, the surface of shrimp with shells is rough and hard, producing a large blocking force. To describe the process of changing force on the tactile sensor as it slides across the shrimp surface, we extracted the energy of each segment as a feature after DC component filtering. The calculation formula is expressed as (4). The effect is shown in Figure 4a.

$$E_i = \frac{\sum_{n=(i-1)Ss}^{(i-1)Ss+Sl} (Sq_n)^2}{Sl} \tag{4}$$

where $E_i$ is the average energy of each segment and $Sq_n$ is the tactile datum at sampling point *n* of each segment after DC component filtering (*n* = 1, 2, ..., 2500). The segmented energy feature vector is expressed as (5).

$$E = (E\_1, E\_2 \dots E\_i) \tag{5}$$
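Formulas (4) and (5) reduce to a mean-square energy per window. A minimal sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def energy_features(sc: np.ndarray, sl: int = 50, ss: int = 10) -> np.ndarray:
    """Average energy of each window of the DC-filtered signal.

    Formula (4): E_i = sum(Sq_n^2) / Sl over the window; Formula (5)
    collects the per-window values into one feature vector.
    """
    i = (len(sc) - sl + ss) // ss
    return np.array([np.sum(sc[k * ss : k * ss + sl] ** 2) / sl
                     for k in range(i)])
```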

**Figure 4.** Feature extraction process diagram. (**a**) The result of the energy features of each segment; (**b**) the result of the nonstationary degree of fluctuation of each segment; (**c**) the result of feature fusion.

When the tactile sensor is not in contact with the shrimp, the tactile sensor data is stable. When the tactile sensor is in contact with the shrimp, it deforms and vibrates, and tactile signals produce nonstationary volatilities. This type of volatility differs from that of a stationary signal. We extracted the nonstationary volatility of each segment as a feature after minimum value filtering. The effect is shown in Figure 4b.

Ideally, for stationary volatility data, the sum of squares of any two tactile data points is equal to two times the square of the initial value. Let *Swn* be the value of any sampling point in a segment after the minimum value is deleted. *Swn*+*m* is the value of exploring *m* sampling points backwards from the nth sampling point. These are expressed as

$$Sw(n) = (Sw\_n)^2 + (Sw\_{n+m})^2\tag{6}$$

where *Sw* is a function that varies with the sampling point *n*, represented as *Sw*(*n*). In an ideal case, the *Sw* is constant for stationary data. The mathematical expectation of the *Sw* in a certain segment is

$$ESw = \frac{\sum_{n=(i-1)Ss}^{(i-1)Ss+Sl} Sw(n)}{Sl} \tag{7}$$

The relative mean square deviation (*σWi*) of *Sw*(*n*) and its mathematical expectation *ESw* is

$$
\sigma W\_i = \frac{\sqrt{E\left\{\left[Sw(n) - ESw\right]^2\right\}}}{ESw} \tag{8}
$$

*σWi* increases with the degree of nonstationarity. If the data are stationary under ideal conditions, then *σWi* = 0. The degree of nonstationary volatility of the feature vector is expressed as

$$\sigma W = (\sigma W_1, \sigma W_2, \dots, \sigma W_i) \tag{9}$$

This feature amplifies the textural and hardness characteristics of the tactile sensor during contact with the shrimp, and reduces the data interference in the noncontact state. The expression of this feature is provided in (10) and illustrated in Figure 4c.

$$V = E \odot \sigma W \tag{10}$$
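Putting Formulas (4)–(10) together, the ENSV fusion feature can be sketched as follows (NumPy assumed). The look-ahead offset *m* of Formula (6) is not specified in the text, so *m* = 1 below is an assumption:

```python
import numpy as np

def ensv_features(sc: np.ndarray, sm: np.ndarray,
                  sl: int = 50, ss: int = 10, m: int = 1) -> np.ndarray:
    """Fuse energy and nonstationary volatility, Formulas (4)-(10).

    sc -- DC-filtered signal (Formula 1); sm -- minimum-filtered signal
    (Formula 2). m = 1 is an assumed look-ahead offset for Formula (6).
    """
    i = (len(sc) - sl + ss) // ss                       # Formula (3)
    e = np.empty(i)
    sigma = np.empty(i)
    for k in range(i):
        win_c = sc[k * ss : k * ss + sl]
        e[k] = np.sum(win_c ** 2) / sl                  # Formula (4)
        # NumPy clips the slice at the signal end for the last window.
        win_m = sm[k * ss : k * ss + sl + m]
        sw = win_m[:-m] ** 2 + win_m[m:] ** 2           # Formula (6)
        esw = sw.mean()                                 # Formula (7)
        # Formula (8): relative mean-square deviation (0 if stationary)
        sigma[k] = np.sqrt(np.mean((sw - esw) ** 2)) / esw if esw > 0 else 0.0
    return e * sigma                                    # Formula (10)
```

For a perfectly stationary window, `sw` is constant, so the segment's volatility and hence its fused feature are zero, which is how the fusion masks the noncontact portions of the signal.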

#### 2.2.3. ARBM Construction

As mentioned in the introduction, shrimp with shells have complex shell attachment sites. Therefore, we propose an ARBM to solve this problem. First, the ENSV feature vectors were pretrained using a back-propagation (BP) neural network fitting model, as shown in Figure 5. Then, the feature vectors of shrimp samples with and without shells were assumed to be located in different circular regions, and the center of each class was calculated, as shown in Figure 6a. Finally, the radius of the recognition boundary was obtained by training. The recognition boundary of shrimp without shells was retained, while that of shrimp with shells was discarded. The shrimp with and without shells are located outside and inside the boundary, respectively, as shown in Figure 6b.

**Figure 5.** Pretraining flow chart.

**Figure 6.** Boundary training and recognition schematic. (**a**) The boundary training schematic; (**b**) the recognition schematic.

#### Pretraining

We pretrained the model using shrimp with and without shells as prior knowledge. This yields more compact clustering of the ENSV feature vector distributions of each class. Artificial neural networks were used to classify the feature dataset [29]. In general, BP neural networks do not impose strict data distribution requirements and can automatically transform the initial "bottom" feature representation into the "top" feature representation through multilevel, nonlinear transformations [30]. This part uses ENSV as prior knowledge for the pretraining process. The data from Sensor A and Sensor B are trained separately. The number of nodes in the input layer corresponds to the number of segments per sample, and the model input is the ENSV feature vectors extracted from the sample. The number of neurons in the hidden layer is 10. The number of nodes in the output layer corresponds to whether the shrimp has a shell (1 for shrimp with shells and 0 for shrimp without shells), as shown in Figure 5.

#### Boundary Training

In this section, we input the data into the pretrained neural network fitting model to obtain the fitted values of shrimp with and without shells from the two sensors. The values from Sensor A and Sensor B are then placed in the same two-dimensional coordinate system. The flow chart of boundary training and recognition is shown in Figure 6.

Pretraining uses the ENSV features of Sensor A and Sensor B as input quantities and shrimp with and without shells as output quantities. This process groups shrimp by their class and separates different classes. To make data computation more efficient and improve real-time processing, we use a circular boundary defined by only two parameters (radius and cluster center) to simplify the data analysis. Before training the recognition model, the centers of the feature vector distributions of shrimp with and without shells must be determined. Shrimp with shells are one class, while shrimp without shells are another. The sample dataset of a class is treated as a cluster, and the cluster centers are determined by calculating the mean feature vector of each cluster.

$$c\_k = \frac{1}{|D\_k|} \sum\_{(Q\_j, Y\_j) \in D\_k} Q\_j \tag{11}$$

where $D_k = \{(Q_1, Y_1), \dots, (Q_j, Y_j)\}$ is the set of ENSV feature vectors and their labels, with $Y_j = 1$ for shrimp with shells and $Y_j = 0$ for shrimp without shells; $|D_k|$ is the number of samples marked as the same class; and $c_k$ is the cluster center.
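Formula (11) is a per-class mean. A minimal sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def cluster_center(features: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Formula (11): the mean ENSV feature vector of all samples
    labelled k (k = 1 with shells, k = 0 without)."""
    return features[labels == k].mean(axis=0)
```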

Define $\Delta_k$ as the radius of the recognition boundary relative to the circle center $c_k$. The ENSV should satisfy the following constraint:

$$\forall Q\_j \in D\_k, \|Q\_j - c\_k\|\_2 \le \Delta\_k \tag{12}$$

where $\|Q_j - c_k\|_2$ represents the Euclidean distance between $Q_j$ and $c_k$. The SoftPlus activation function was used to map the radius parameter to the radius.

$$
\Delta\_k = \log\left(1 + e^{\widehat{\Delta\_k}}\right) \tag{13}
$$

where $\Delta_k$ is the cluster radius and $\widehat{\Delta_k}$ is the radius parameter.

On the one hand, it is hoped that the recognition boundary can surround most shrimp with and without shells. On the other hand, it is also hoped that the boundary of the circle is not too far from the center of the cluster. Therefore, the following boundary loss function is adopted.

$$\mathcal{L}\_{b} = \frac{1}{M} \sum\_{j=1}^{M} \left[ \delta\_{j} \left( \left\| Q\_{j} - c\_{Y\_{j}} \right\|\_{2} - \Delta\_{Y\_{j}} \right) + (1 - \delta\_{j}) \left( \Delta\_{Y\_{j}} - \left\| Q\_{j} - c\_{Y\_{j}} \right\|\_{2} \right) \right] \tag{14}$$

where *M* is the total number of shrimp samples with and without shells, and $Y_j$ is the label of the *j*th sample. $\delta_j$ is defined as

$$\delta\_{\dot{j}} = \begin{cases} 1, \|Q\_{\dot{j}} - c\_k\|\_2 > \Delta\_{Y\_{\dot{j}}} \\ 0, \|Q\_{\dot{j}} - c\_k\|\_2 \le \Delta\_{Y\_{\dot{j}}} \end{cases} \tag{15}$$

Then, the radius parameter $\widehat{\Delta_k}$ is optimized using stochastic gradient descent:

$$
\widehat{\Delta\_k} = \widehat{\Delta\_k} - \eta \frac{\partial L\_b}{\partial \widehat{\Delta\_k}} \tag{16}
$$

where *η* is the learning rate of the boundary parameters. The gradient $\frac{\partial L_b}{\partial \widehat{\Delta_k}}$ is calculated by

$$\frac{\partial L\_b}{\partial \widehat{\Delta}\_k} = \frac{\sum\_{j=1}^M \delta'(Y\_j = k) \cdot (-1)^{\delta\_j}}{\sum\_{j=1}^M \delta'(Y\_j = k)} \cdot \frac{1}{1 + e^{-\widehat{\Delta}\_k}} \tag{17}$$

If $Y_j = k$, then $\delta'(Y_j = k) = 1$; otherwise, $\delta'(Y_j = k) = 0$. In this way, the learned radius parameters not only surround most shrimp with and without shells but also avoid the cluster centers of each type.
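The boundary-training loop of Formulas (13)–(17) can be sketched for a single class as follows (NumPy assumed; the learning rate and epoch count are illustrative, and `dist` holds precomputed distances of the class samples to their cluster center):

```python
import numpy as np

def train_radius(dist: np.ndarray, lr: float = 0.05, epochs: int = 200) -> float:
    """Learn the recognition-boundary radius for one class,
    Formulas (13)-(17)."""
    delta_hat = 0.0                                    # radius parameter
    for _ in range(epochs):
        radius = np.log1p(np.exp(delta_hat))           # Formula (13): softplus
        outside = dist > radius                        # Formula (15): delta_j = 1
        # Formula (17): mean of (-1)^delta_j over the class samples,
        # times the softplus derivative (a sigmoid).
        grad_sign = np.where(outside, -1.0, 1.0).mean()
        grad = grad_sign / (1.0 + np.exp(-delta_hat))
        delta_hat -= lr * grad                         # Formula (16)
    return float(np.log1p(np.exp(delta_hat)))
```

Samples outside the boundary push the radius outward and samples inside pull it in, so the loop settles where the two forces of the loss in Formula (14) balance.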

After learning the centers and recognition boundary radii of shrimp with and without shells, we discarded the boundary of shrimp with shells and retained that of shrimp without shells. This is because shrimp with shells have different shell attachment areas, resulting in a wider spatial distribution of feature vectors for tactile recognition. In contrast, the feature vectors of shrimp without shells are relatively fixed. The presence of interference samples would affect the recognition accuracy if training were conducted using only the samples of shrimp with shells. In addition, the maximum number of shrimp with shells must be recognized to ensure the effectiveness of the industrial production process.

During the test, the distance between the test sample and the class center of shrimp without shells was calculated. When the distance is less than the radius of the boundary of the shrimp without shells, it is judged as shrimp without shells; otherwise, it is judged as shrimp with shells.
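The test-time decision above reduces to a single distance comparison against the retained no-shell boundary (illustrative sketch, NumPy assumed):

```python
import numpy as np

def is_shell_on(q: np.ndarray, center_noshell: np.ndarray,
                radius_noshell: float) -> bool:
    """Judge a test sample: outside the no-shell boundary means the
    shrimp is recognized as still having its shell."""
    return bool(np.linalg.norm(q - center_noshell) > radius_noshell)
```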

#### **3. Results and Discussion**

Two experiments were conducted to evaluate the performance of the proposed method in recognizing shrimp with and without shells. In one experiment, the species Macrobrachium rosenbergii was selected as the training sample, and the trained ARBM was compared with other tactile recognition models. In the other experiment, five different shrimp species were selected for testing, and the trained ARBM was compared with vision models. The overall recognition accuracy for shrimp with and without shells is the performance evaluation index, expressed as (18)–(20).

$$AT = \frac{TP + TN}{TP + TN + FP + FN} \tag{18}$$

$$AS = \frac{TP}{TP + FN} \tag{19}$$

$$AP = \frac{TN}{FP + TN} \tag{20}$$

where *AT* is the overall recognition accuracy, *TP* is the number of correct recognitions of shrimp with shells, *TN* is the number of correct recognitions of shrimp without shells, *FP* is the number of incorrect recognitions of shrimp with shells, *FN* is the number of incorrect recognitions of shrimp without shells, *AS* is the recognition accuracy of shrimp with shells, and *AP* is the recognition accuracy of shrimp without shells.
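As a quick check of Formulas (18)–(20), a hypothetical confusion count of TP = 85, TN = 92, FP = 8, FN = 15 gives *AT* = 88.5%, *AS* = 85.0%, and *AP* = 92.0%:

```python
def accuracies(tp: int, tn: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Formulas (18)-(20): overall AT, shell-on AS, shell-off AP."""
    at = (tp + tn) / (tp + tn + fp + fn)   # Formula (18)
    a_s = tp / (tp + fn)                   # Formula (19)
    ap = tn / (fp + tn)                    # Formula (20)
    return at, a_s, ap
```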

#### *3.1. Compare Different Tactile Recognition Models*

To verify the validity of the ENSV-ARBM, we selected headless Macrobrachium rosenbergii shrimps as our experimental samples. The samples of Macrobrachium rosenbergii had a length of 8.2–9.8 cm and a weight of 35.2–40.1 g. Five hundred (500) shrimp with shells and another 500 without shells were examined.

First, the speed of the conveyor was fixed at 0.1 m/s. The shrimp passed the tactile sensor at specific time intervals, which had to be longer than the time required to fully acquire the tactile sensation of one shrimp. After each shrimp passed the tactile sensor, the corresponding tag was manually recorded and the tactile data were saved. The experiment was conducted on the MATLAB R2022a 64-bit platform using a 2.7 GHz notebook computer with an Intel(R) Core(TM) CPU and 8 GB of RAM. The samples for the tactile recognition of shrimp are shown in Figure 7. The device described in Section 2.1 was used for data collection. Tactile data were collected from all experimental samples (i.e., 500 shrimp with shells and 500 without shells). Finally, 70% of the shrimp with and without shells were randomly selected as the training set, 15% as the validation set, and 15% as the test set.

**Figure 7.** Plot of raw data of shrimp with and without shells.

The identification of shrimp with and without shells is based on the difference in their waveforms. When the tactile sensor scans a shrimp with a shell, the grooves on its body cause the sensor to produce a more pronounced jitter and oscillation signal. As the surface area of the shrimp shell increases, the duration of the oscillation signal generated by the sensor decreases. Sensor B, located on the outermost carbon fiber plate, detects a distinct protruding waveform owing to the increased hardness of the shrimp's body. In contrast, when the tactile sensor scans a shrimp without a shell, the friction gradually increases as it slides over the smooth, soft body. As a result, the waveforms obtained from Sensors A and B are smoother and contain less energy. The results of the comparisons with tactile perception methods proposed in the literature are listed in Table 2.

**Table 2.** Comparison of the results of the proposed scheme with other tactile methods.


The statistical results in Table 2 show that the ENSV-ARBM method has the highest *AT*, *AS*, and *AP* of 88.7%, 85.3%, and 92.0%, respectively. The ENSV feature is a fusion of the energy and nonstationary volatility features, in which the energy feature reflects the dynamic changes in the force during the tactile process, and the nonstationary volatility feature extracts the fluctuating data during the contact between the sensor and the shrimp surface. The fusion of these two features can effectively amplify the differences in surface texture and hardness between shrimp with and without shells as well as mask invalid data, identifying the physically significant features of both types of shrimp. The ARBM is a recognition model based on the spatial distribution of the data, which enables the secondary classification of shrimp with and without shells. The model uses the distribution boundary of the sample space of shrimp without shells as a classification boundary in the presence of uncertainty regarding the shell attachment surface of shrimp samples with shells.

The results in Table 2 demonstrate the effectiveness of the ENSV-ARBM-based tactile recognition of shrimp with and without shells. The overall recognition rate is better than that of the other tactile recognition algorithms. Shrimp without shells are smooth and soft to the touch, whereas shrimp with shells are rough and hard to the touch. By effectively extracting the texture and hardness tactile features of different shrimp body surfaces, the recognition accuracy of shrimp with and without shells can be improved. In addition, the stable tactile data boundary of shrimp without shells mitigates the problem of the complex shell attachment locations of shrimp with shells.

#### *3.2. Compare Different Vision Recognition Models*

To verify the generalizability of the proposed ENSV-ARBM for the tactile recognition of shrimp with and without shells, we selected five different shrimp species for comparative experiments using the machine vision approach and the tactile approach described in Section 3.1. These include Panulirus argus, Macrobrachium rosenbergii, Penaeus chinensis, Oratosquilla oratoria, and Metapenaeus ensis, as shown in Figure 8. There were 100 shrimp with shells and 100 shrimp without shells for each species.

**Figure 8.** Photos of five different shrimp species. (**a**) Panulirus argus; (**b**) Macrobrachium rosenbergii; (**c**) Penaeus chinensis; (**d**) Penaeus japonicus; and (**e**) Metapenaeus ensis.

After decapitation, we measured the size and weight of the shrimp samples using a ruler and an electronic scale, respectively. The samples of Panulirus argus had a length of 13.1–15.9 cm and a weight of 69.3–72.5 g. The samples of Macrobrachium rosenbergii had a length of 8.2–9.8 cm and a weight of 35.2–40.1 g. The samples of Penaeus chinensis had a length of 10.7–12.8 cm and a weight of 33.3–39.8 g. The samples of Oratosquilla oratoria had a length of 11.3–13.1 cm and a weight of 35.4–42.5 g. The samples of Metapenaeus ensis had a length of 7.3–8.4 cm and a weight of 28.7–32.0 g.

In the area of tactile recognition, we selected the ENSV-ARBM-based tactile recognition method for our experiments. The model trained in Section 3.1 was used to test the five different shrimp species. In the area of visual recognition, we used an industrial camera (HIKVISION) with a CMOS sensor as the data source. The sensor size is 22.3 mm × 14.9 mm, the effective pixel count is 18 million, and the acquired image resolution is 2928 × 3904 pixels. To ensure the fairness of the assessment, samples of 500 shrimp with shells and 500 without shells were photographed on the conveyor belt. The image information obtained was fed into the YOLOv3 and R-CNN frameworks for training. The trained models were then applied to test the recognition of the five different shrimp species with and without shells. The experimental results are listed in Tables 3 and 4.


**Table 3.** Comparison of results between the proposed scheme and other tactile methods.

**Table 4.** Comparison of results between the proposed scheme and other vision methods.


From Table 3, we can see that the proposed tactile perception method outperforms the other tactile methods. From Table 4, in terms of average overall recognition accuracy, the ENSV-ARBM-based tactile recognition method exhibited the best performance, with *AT*, *AS*, and *AP* of 88.2%, 87.0%, and 89.4%, respectively. It was followed by YOLOv3. R-CNN exhibited the worst performance, with *AT*, *AS*, and *AP* of 82.8%, 83.8%, and 81.8%, respectively. With regard to the recognition accuracy for each shrimp species, the vision recognition methods performed better than the tactile approach for Macrobrachium rosenbergii and Panulirus argus.

Macrobrachium rosenbergii and Panulirus argus, with and without shells, were easily distinguished visually. Penaeus chinensis and Metapenaeus ensis have transparent shells; hence, the visual recognition methods misidentified shrimp with shells as shrimp without shells when the shells were attached to the tail. For Penaeus japonicus, shrimp both with and without shells showed a black color; hence, the visual methods misidentified shrimp without shells as shrimp with shells.

The ENSV-ARBM-based tactile recognition method identifies whether a shrimp has a shell mainly through the dynamic changes in the texture and hardness of the shrimp surface. Although the flesh and shell of different shrimp species have different forms, textures, and colors, the variations in texture and hardness are similar. Machine vision requires comprehensive training samples, whereas the tactile method only identifies the physical features of texture and hardness of shrimp with or without shells; hence, it has better universality. The experimental results demonstrate the universality of ENSV-ARBM-based tactile recognition and its good results for the recognition of different shrimp species.
