**Figure 15.** The distribution of embeddings from the old leaf task.

*3.3. Continual Tasks Experiment with Our Proposed Method*

As is known, due to the forgetting problem, the basic CNN model cannot balance new and old tasks. Taking the sequential tasks from the crop pest dataset to the plant leaf dataset as an example, we used the designed GAN model to abstract the most important information of the old task (crop pest) and generated the abstracted images as memory for the future task, automatically ignoring the trivial details. When a new task comes, the abstracted images in memory are retrieved, mixed with the new dataset, and then fed to the metric learning model.
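As a rough sketch of this replay step (the `memory_generator` interface is hypothetical, standing in for sampling the trained GAN), mixing the retrieved abstracted images with the new task's data could look like:

```python
import random

def build_training_set(new_task_data, memory_generator, n_replay):
    """Mix generated 'abstracted' old-task samples with the new task's data.

    new_task_data: list of (image, label) pairs for the incoming task.
    memory_generator: hypothetical sampling function for the trained GAN;
        each call returns one (image, label) pair reconstructed from the
        stored old-task memory.
    n_replay: how many old-task samples to replay alongside the new data.
    """
    replayed = [memory_generator() for _ in range(n_replay)]
    mixed = list(new_task_data) + replayed
    random.shuffle(mixed)  # interleave old and new samples before training
    return mixed
```

The combined set is then fed to the metric learning model as usual, so the model sees old and new categories in the same training stream.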

Owing to this mechanism, the metric learning model can accumulate knowledge and better understand what it has learnt. The stored memory can be expanded, as can its ability to handle more continual tasks. The distribution of the model's output embeddings, corresponding to the testing images from both new and old tasks, is shown in Figure 16.

The results show the ability of our method to continually distinguish the similarity between input paired images and classify the testing images. All the categories from new and old tasks are separated clearly, which means that the metric learning model performs well on both new and old tasks, alleviating the forgetting problem. Compared with Section 3.2, catastrophic forgetting is alleviated by 50% for the crop pest task and 40% for the plant leaf task.

In addition, the results presented above are clear and easily assessed. However, this is not always the case if we want to go further, e.g., to evaluate the grouping results quantitatively. In our opinion, the sum of the nearest distances between the centers of groups is a good choice. In detail, first, the center point of each group is calculated as the mean of its embeddings; then, for every group center, the nearest distance to the other centers is found; and finally, these nearest distances are summed, giving what we call the score. The evaluation is proportional to the score: the larger the score, the better the model's performance.
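The three steps above can be sketched in a few lines (a minimal NumPy version, assuming the embeddings and their class labels are available as arrays):

```python
import numpy as np

def grouping_score(embeddings, labels):
    """Sum of nearest-neighbour distances between group centers.

    embeddings: (n, d) array of the model's output embeddings.
    labels: length-n array of class ids for those embeddings.
    A larger score means the group centers are further apart, i.e. the
    embedding separates the categories better.
    """
    # Step 1: center of each group = mean of its embeddings.
    centers = np.array([embeddings[labels == c].mean(axis=0)
                        for c in np.unique(labels)])
    # Steps 2-3: for each center, distance to its nearest other center; sum.
    score = 0.0
    for i, c in enumerate(centers):
        others = np.delete(centers, i, axis=0)
        score += np.linalg.norm(others - c, axis=1).min()
    return score
```

For two well-separated groups the score is simply twice the inter-center distance, since each center's nearest neighbour is the other.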


**Figure 16.** The distribution of embeddings from new and old tasks.

### **4. Discussion**

We conduct the discussion about this work from the following three aspects.

*4.1. Idea and Contents*

The existing traditional models cannot accumulate the knowledge from old tasks, which means that they are all task specific, only focusing on the current task while forgetting the prior ones. This lack of flexibility is quite different from humans' learning style. Besides, at present, there are mainly two basic types of neural network learning principles: probability-based learning via back-propagation of error, and similarity-based metric comparison. The former is more mature, but metric-based similarity learning is closer to biological learning.

So, from the bio-inspired perspective, we imitated the way biology learns and remembers, and proposed a continual metric learning method based on memory storage and retrieval to balance old and new tasks. Through several comparative experiments, it was found that the basic metric learning model can perform a single task excellently, distinguishing different categories well. However, when faced with continual tasks, the obvious forgetting problem occurs, and its poor flexibility costs it the ability to deal with old tasks. The addition of memory storage and retrieval in our method helps alleviate the forgetting problem: all the categories from old and new tasks can be separated clearly, with good performance on both old and new tasks.

*4.2. Contributions to Existing Research*


We proposed an ANN-based continual classification method via memory storage and retrieval, combining CNN and GAN technology, on common agricultural datasets such as the crop pest dataset and the plant leaf dataset. The key contributions are twofold: few data and high flexibility.

As is known, a large dataset is the basic requirement for existing typical deep neural networks. However, the collection and labelling of big datasets are laborious and time consuming, so research based on few data is a promising direction. The metric learning used in this work requires only a few raw data, because what it cares about is the paired inputs: although the size of the raw dataset is small, the number of combinations of pairs from the same category and from different categories can be expanded hundreds of times. Besides, the proposed continual learning method based on memory storage and retrieval increases the flexibility of the classification model, allowing it to balance old and new tasks by accumulating knowledge and alleviating forgetting. This can be regarded as another small step towards more intelligent and flexible studies in agriculture.
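The pair-expansion argument is easy to make concrete. A minimal sketch (the function name is illustrative): from n raw images, n(n−1)/2 labelled pairs can be formed, each marked as same-category or different-category:

```python
from itertools import combinations

def make_pairs(dataset):
    """Build (image_a, image_b, same_class) triples for metric learning.

    dataset: list of (image, label) pairs. Even a small raw dataset yields
    far more training pairs than it has images, which is why metric
    learning can work with few raw data.
    """
    pairs = []
    for (xa, ya), (xb, yb) in combinations(dataset, 2):
        pairs.append((xa, xb, int(ya == yb)))  # 1 = same category, 0 = different
    return pairs
```

With 10 images across 5 categories this already gives 45 pairs; a raw dataset of a few hundred images expands into tens of thousands of paired inputs.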
