Article
Peer-Review Record

Simultaneous Velocity and Texture Classification from a Neuromorphic Tactile Sensor Using Spiking Neural Networks

Electronics 2024, 13(11), 2159; https://doi.org/10.3390/electronics13112159
by George Brayshaw 1,2,*, Benjamin Ward-Cherrier 1 and Martin J. Pearson 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 22 April 2024 / Revised: 24 May 2024 / Accepted: 30 May 2024 / Published: 1 June 2024
(This article belongs to the Special Issue Neuromorphic Devices, Circuits, Systems and Their Applications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. There are too few texture colours and texture types; the task is too simple compared with visual classification tasks. The authors should add more types to the texture database, add categories similar to the existing textures, or demonstrate that these textures are representative for machine-touch tasks.

2. The introduction needs to be reorganised. The current introduction devotes too much space at the outset to past work, making it difficult for readers to grasp the purpose of this work and its implementation in good time. The authors could condense the discussion of other studies or, if they feel it necessary, move it into a separate section.

3. It is recommended that the authors rephrase the subsections 2.4 Training and 2.5 Metrics. For the indicators and parameters involved in the article, mathematical symbols should be used rather than text descriptions alone.

4. In Figure 7, why is the accuracy for profile 5 of the Vel Prof classifier so low? Please provide a more detailed explanation; this situation is usually caused by overfitting.

5. Overall, the topic of this study is relatively novel and has strong practical significance. However, overfitting and a lack of generalisation may be important problems limiting this research. Compared with texture classification work in machine vision, the number of classes in the experiment is small and the training data are limited; the results also show signs of potential overfitting. It is recommended that the authors adopt mature classification tools and use pre-trained models to improve generalisation.

Comments on the Quality of English Language

The authors have high-quality presentation skills and the article is written fluently. However, some grammatical errors remain; please check the entire article again. In addition, the authors should note the difference between scientific writing and everyday writing. There are many articles on research using deep learning in the mechanical field; I can recommend one as a reference for scientific writing: Xu, Q., Nie, Z., Xu, H., Zhou, H., Attar, H.R., Li, N., Xie, F. and Liu, X.J., 2022. SuperMeshing: A new deep learning architecture for increasing the mesh density of physical fields in metal forming numerical simulation. Journal of Applied Mechanics, 89(1), p.011002.

Author Response

1.

The colour of the texture does not affect performance in any way, as this is a purely tactile task. The sensor is optical, but as discussed within the paper, the camera records the displacement of markers across a soft tip. The camera therefore does not view the texture directly; rather, it records the displacement of pins in response to tactile stimuli. The operation of an event camera is agnostic to RGB colour values, as discussed in the cited literature.

The textures used, as mentioned in the related section of the paper, span the range of human perceptual dimensions of texture.

“Based on prior studies, humans identify textures using distinct tactile dimensions [40]; these being roughness, hardness, temperature and stickiness. Our dataset aims to include textures that encompass the full range of each perceptual dimension.”

 

2.

We have separated our introduction into Introduction and Related Work sections in order to more clearly differentiate between literature review and introductory sections. This shortens the introduction and makes it easier to understand the work’s place within the field. Other studies have been introduced in the related works section, which focusses on detailing the current status of the field.

 

3.

Thank you for this comment. We have replaced our text descriptions in Section 4.3 with detailed equations to clarify how we derive values for each of our metrics.

 

4.

Like the reviewer, we noticed this same result in our confusion matrix during writing. We have added some discussion of this to the paper:

“The confusion seen within Vel_Prof sets (4-7, 8-11) indicates that the system identifies the movement of the sensor across the texture well, failing only to correctly classify the precise acceleration/deceleration for that profile. Interestingly, the confusion displayed by Vel_Prof 5 (37%) with profiles 6 and 7 further indicates that the network is confused within each movement profile shape. We hypothesise that this confusion is potentially due to the temporal cropping of our data to the first 1000 ms. Fig. 3 showcases how, for our constant-acceleration and triangular velocity profiles, similar trajectories are followed during this period of each sample, whereas constant-velocity profiles appear to be more differentiable.”

5.

Thank you for recognising the novelty and significance of the work. As mentioned in response to point 1, this is not a vision classification task, but rather a purely tactile task. Within the tactile field, the number of textures and samples collected is in line with current state-of-the-art (SOA) approaches. The cited works below utilise datasets of between 8 and 12 textures:

Gupta, Anupam Kumar, et al. "Spatio-temporal encoding improves neuromorphic tactile texture classification." IEEE Sensors Journal 21.17 (2021): 19038-19046.

Huang, Shiyao, and Hao Wu. "Texture recognition based on perception data from a bionic tactile sensor." Sensors 21.15 (2021): 5224.

Das, Shemonto, Vinicius Prado da Fonseca, and Amilcar Soares. "Active learning strategies for robotic tactile texture recognition tasks." Frontiers in Robotics and AI 11 (2024): 1281060.

Reviewer 2 Report

Comments and Suggestions for Authors

1. Please give a more detailed description of Table 1 so that readers can better understand the main content of your research.
2. The paper contains a large number of experimental parameter settings; please check these carefully to ensure that the experiments can be reproduced.
3. Can the authors elaborate on the current research status and the main shortcomings in this field in a dedicated paragraph or section?

Author Response

1.

We have improved the description of Table 1 (now Table 2) within the caption to clarify that this table shows the grid search space for our training parameters. We clarify here that the peak values reported are the grid search parameters that yielded the highest accuracy classifiers:

“Table 2: Search Space for the performed training parameter grid search. The Peak Values shown here are the training parameters that produced the highest accuracy classifiers.”
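As an illustration of the procedure the caption describes, a minimal exhaustive grid search might look like the following sketch. The parameter names and ranges here are placeholders for illustration only, not the actual values from Table 2:

```python
from itertools import product

# Hypothetical search space; the real parameters and ranges are those
# listed in Table 2 of the manuscript.
search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64],
    "hidden_neurons": [128, 256],
}

def run_grid_search(train_and_evaluate):
    """Train one classifier per parameter combination and return the
    combination that yielded the highest validation accuracy."""
    best_acc, best_params = -1.0, None
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        params = dict(zip(keys, values))
        acc = train_and_evaluate(params)  # returns validation accuracy
        if acc > best_acc:
            best_acc, best_params = acc, params
    return best_params, best_acc
```

The "Peak Values" column of Table 2 then corresponds to the `best_params` returned by such a search.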

 

2.

We have looked over our methodology and can confirm that our quoted parameters are correct and can be reproduced. We have plans to release the collected dataset used.

 

3.

We added some recent publications to our literature review and discussed their contributions and shortcomings, in order to give a more complete review of the field:

“Coa et al. [21] present a zero-shot approach to identify previously unseen textures based on learned perceptual dimensions. Using an optical tactile sensor, this work achieves accuracies of 83\% when classifying these unseen textures. However, the collection of data within this experiment relies on the controlled pressing of a tactile sensor (GelSight) against the texture surface. This methodology creates detailed tactile images of the texture surface but does not examine the surface during movement or the application of shear forces, an important effect during tactile interactions that we seek to investigate here. Yang et al. [22] highlight the importance of these shear forces when identifying texture using tactile sensors.”

Reviewer 3 Report

Comments and Suggestions for Authors

Please find the review comments attached.

Comments for author File: Comments.pdf

Author Response

Thank you for your general overview. Please find specific comments addressed below.

1.

We have separated our introduction into Introduction and Related Work sections in order to more clearly differentiate between literature review and introductory sections.

Within this field it is often difficult to draw direct comparisons with other works due to the unique characteristics of each experimental setup (number of textures, contact method, differing sensor modalities) resulting in a lack of standardised benchmarking tests. We have instead demonstrated incremental improvements to our system relative to prior works, expanding to different exploratory movements and classifying said movements and textures simultaneously.

 

We added some recent publications to our literature review in response to your comments.

 

2.

Thank you for your suggestion. We have added an outline of the paper in the final paragraph of the introduction:

“This paper is structured as follows. Section 2 showcases related works in the field of artificial texture classification. Section 3 contains the equipment and experimental setup used to collect the tactile dataset used within this work. Section 4 demonstrates our preprocessing, network design, optimisations and analysis techniques. Section 5 details the results of our data collection, network optimisations, and analysis for our networks. Section 6 discusses our findings, potential limitations of the work and recommendations for further work. Finally, Section 7 briefly summarises the work and contributions presented.”

 

3.

We have now split our “Materials and Methods” section into “Experimental Setup” and “Proposed Method” in order to differentiate between hardware and software methodologies. Our contributions are mentioned at the end of our introduction section:

“Towards the understanding of these trade-offs we present the following contributions:

  • The development of an end-to-end tactile neuromorphic system capable of movement-invariant texture classification.

  • Simultaneous classification of the movement profile of a tactile sensor across different surfaces.

  • A multi-objective optimisation analysis of network size, activity and accuracy to fit edge platform constraints.”

 

4.

We have described the dataset collection process in detail in Section 3.2. Table 1 has been added to detail the features of each of our unprocessed and preprocessed datasets.

 

5.

Section 3.2, Dataset Collection, explains the formation of the dataset used within this work. We highlight that these are real-world textures, explored using a neuroTac sensor with a robotic arm controlling movement. The following paragraph from the same section details the use of this dataset:

“Our collected dataset comprised 100 iterations of each texture at each Vel_Prof, resulting in 14400 samples (12 textures × 12 Vel_Profs × 100 iterations). A validation set was initially split from the dataset at a ratio of 0.2. From the remaining 80% of the dataset, a train/test split of 0.6/0.4 was applied. This resulted in a validation split of 2880 samples, a training split of 6912 samples, and a testing split of 4608 samples. For the results in this work, the classification performance is given by the accuracy of the networks when classifying across the separated validation set.”
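The split arithmetic quoted above can be verified with a few lines of Python. This is a sketch reproducing the stated figures, not the authors' actual pipeline code:

```python
# Reproduce the dataset-split arithmetic from Section 3.2.
n_textures, n_vel_profs, n_iterations = 12, 12, 100
total = n_textures * n_vel_profs * n_iterations  # 14400 samples

validation = round(total * 0.2)        # 20% held out first: 2880 samples
remaining = total - validation         # 11520 samples
train = round(remaining * 0.6)         # 6912 samples
test = round(remaining * 0.4)          # 4608 samples
```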

We have chosen these textures to try and cover a wide range of textural tactile dimensions as denoted by the following:

“Based on prior studies, humans identify textures using distinct tactile dimensions [40]; these being roughness, hardness, temperature and stickiness. Our dataset aims to include textures that encompass the full range of each perceptual dimension.”

 

6.

Thank you for this comment. We recognise the usefulness of the suggested metrics in aiding understanding of classifier performance. Here we have chosen to display our classification results through confusion matrices (Figure 7) which inherently contain information pertaining to f1-score, precision, and recall. Additionally, these matrices highlight the confusion between individual classes. This is of interest within this application as it allows us to infer perceptual similarities between our classes.
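To illustrate the point that a confusion matrix subsumes these metrics, the sketch below derives per-class precision, recall and F1 score from an arbitrary confusion matrix. This is an illustrative helper, not code from the paper:

```python
import numpy as np

def metrics_from_confusion(cm):
    """Derive per-class precision, recall and F1 from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                   # correct predictions per class
    precision = tp / cm.sum(axis=0)    # column sums: predictions per class
    recall = tp / cm.sum(axis=1)       # row sums: true samples per class
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```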

 

7.

Hardware development is not the focus of the work presented, and thus we refer the reader to prior work detailing the development of the sensor:

“The tactile sensor used within this work is the neuroTac optical tactile sensor first presented in [19]”

We have added the following sentence referring the reader to a publication in which the specific workings of the neuromorphic hardware platform is described.

“Loihi2 is a neuromorphic processing platform designed specifically for the deployment of SNNs [6].”

 

8.

Within our “Related Works” section we discuss the SOA works published in artificial tactile texture classification. For example:

“Gupta et al. [25] report accuracies of up to 98% when using a support vector machine (SVM) to classify texture data from a neuromorphic sensor.”

“Prior works by the authors demonstrate the feasibility of the neuroTac sensor […] These works, while presenting high classification performances of 94%, are trained and tested on texture samples explored using limited movement profiles, constrained to a constant velocity and contact force.”

“Taunyazoz et al. [23] present results from a texture classification experiment using an iCub robot for data collection. […]. Across this heterogeneous dataset, the system presented achieves a classification accuracy of 98%, showing generalisation across the varied exploratory movements.”

As mentioned in response to point 1, it is often difficult, if not impossible, to draw direct comparisons to other SOA works within this specific field, due to the unique experimental setup of each published work.

 

9.

Thank you for this, these issues have now been corrected.

 

10.

Thank you for this, this has been checked and all acronyms are now fully defined at their first mention.

 

11.

Thank you for the suggestion. We have had the manuscript proof-read and corrected some minor writing and typographical errors.

 

12.

The training procedure, including relevant parameters, is described in Section 4.2. The SLAYER training algorithm used within this work is referenced for further reading here:

“Within this work we train our SNNs using the SLAYER algorithm [44] employed by the Lava framework (github.com/lava-nc/lava-dl). This algorithm allows for the training of SNNs in a process similar to gradient back propagation.”
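For readers unfamiliar with gradient-based training of SNNs, the sketch below illustrates the general idea in isolation: the non-differentiable spike function is kept in the forward pass, while the backward pass substitutes a smooth approximation of its derivative so that backpropagation can proceed. This is a generic surrogate-gradient illustration (here a fast-sigmoid surrogate), not the specific credit-assignment scheme used by SLAYER [44]:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike function.
    Emits 1 where the membrane potential reaches threshold, else 0."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: replace the Heaviside's zero/undefined derivative
    with a smooth fast-sigmoid approximation, peaking at the threshold."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2
```

During training, `surrogate_grad` stands in for the derivative of `spike` wherever the chain rule requires it, which is what makes "a process similar to gradient back propagation" possible for spiking neurons.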

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Dear authors,

I appreciate the authors addressing almost all of my concerns in the revised version. As a minor correction, I recommend moving the added URL (github.com/lava-nc/lava-dl) to a footnote and hyperlinking it.

Good luck
