
Simultaneous Velocity and Texture Classification from a Neuromorphic Tactile Sensor Using Spiking Neural Networks

by George Brayshaw 1,2,*, Benjamin Ward-Cherrier 1 and Martin J. Pearson 2

1 School of Engineering Mathematics and Technology, University of Bristol, Bristol BS8 1QU, UK
2 School of Engineering, The University of the West of England, Bristol BS16 1QY, UK
* Author to whom correspondence should be addressed.
Electronics 2024, 13(11), 2159; https://doi.org/10.3390/electronics13112159
Submission received: 22 April 2024 / Revised: 24 May 2024 / Accepted: 30 May 2024 / Published: 1 June 2024
(This article belongs to the Special Issue Neuromorphic Devices, Circuits, Systems and Their Applications)

Abstract

The neuroTac, a neuromorphic visuo-tactile sensor that leverages the high temporal resolution of event-based cameras, is ideally suited to applications in robotic manipulators and prosthetic devices. In this paper, we pair the neuroTac with Spiking Neural Networks (SNNs) to achieve a movement-invariant neuromorphic tactile sensing method for robust texture classification. Alongside this, we demonstrate the ability of this approach to extract movement profiles from purely tactile data. Our systems achieve accuracies of 95% and 83% across their respective tasks (texture and movement classification). We then seek to reduce the size and spiking activity of our networks with the aim of deployment to edge neuromorphic hardware. This multi-objective optimisation investigation using Pareto frontiers highlights several design trade-offs, where high activity and large network sizes can both be reduced by up to 68% and 94% at the cost of slight decreases in accuracy (8%).

1. Introduction

Despite their rapid evolution, the proficiency of robotic manipulators and prosthetics lags far behind human performance and reliability, even for simple tasks. Physical interactions rely on a plethora of heterogeneous sensory inputs, often utilising both visual and tactile cues to perform basic tasks such as grasping, pouring and palpating. Expanding the sensory capabilities of manipulators is, therefore, an important issue for research as we look toward more competent systems. With the introduction of additional sensing modalities comes the delicate matter of developing algorithms to integrate and utilise these tactile senses, such that the robot is able to shape its interactions accordingly. The ability to classify texture is one such component of tactile perception, affording the manipulator richer knowledge of its environment and of how to approach, continue or avoid different interactions. More complex and capable systems often come at the cost of increased processing power, latency and/or weight. Brain-inspired neuromorphic hardware and processing techniques are well placed to facilitate further advancement while avoiding these common drawbacks, offering the potential to reduce power consumption [1,2,3] and decrease latency [4,5] thanks to the relative sparsity of their data.
Several processing platforms are available for the deployment of neuromorphic systems, including but not limited to Intel’s Loihi 1 [6] and its successor Loihi 2 [7], IBM’s TrueNorth [8], and SpiNNaker [9]. These systems leverage neuromorphic processing techniques to provide energy-efficient edge platforms for the practical implementation of SNNs. As with any edge computing platform, constraints exist that limit the number of neurons and weighted connections they are able to update in real time. To properly exploit their energy and computational benefits, engineers must give careful consideration to design choices that impact memory and power consumption.
As we push toward more human-level tactile interaction, the fusion of tactile sensing and computer-vision-based cues in robotic manipulation systems has yielded positive results for various tasks [10,11]. Within the domain of active prosthetics, vision-based cues are less useful, since the user provides the visual input and control over the device. This, coupled with the potential occlusion of vision in real-world environments, encourages a move towards purely tactile sensing solutions.
In order to properly utilise these technological advances, edge neuromorphic solutions must be devised, which presents unique challenges to development. Limitations, such as constraints on how textures are explored and the encoding of non-spiking inputs, may restrict the ability of these systems to perform in real-world classification tasks, especially where environments are much less structured. Accordingly, a pragmatic trade-off exists when engineering these systems based on their intended applications. Towards establishing an understanding of these trade-offs, we present the following contributions:
  • The development of an end-to-end tactile neuromorphic system capable of movement-invariant texture classification.
  • The simultaneous classification of the movement profile of a tactile sensor across different surfaces.
  • A multi-objective optimisation analysis of network size, activity and accuracy to fit edge platform constraints.
This paper is structured as follows. Section 2 showcases related works in the field of artificial texture classification. Section 3 describes the equipment and experimental setup used to collect the tactile dataset used within this work. Section 4 details our preprocessing, network design, optimisation and analysis techniques. Section 5 presents the results of our data collection, network optimisations and analysis. Section 6 discusses our findings, the potential limitations of the work and recommendations for further work. Finally, Section 7 briefly summarises the work and contributions presented.

2. Related Works

Tactile texture classification has been widely explored using a multitude of different sensing technologies. These include capacitive [12], triboelectric [13], multimodal sensors such as the BioTac [14,15], and optical sensors. Open-source optical tactile sensors such as the TacTip [16] and GelSight [17] provide large spatial dimensionality in comparison to sensors of the same size while demonstrating sub-millimetre accuracy across a range of tactile tasks. While other sensor solutions may require additional encoding steps to convert their data into a spiking format for use with neuromorphic hardware, optical sensors are able to exploit event-based vision technology. Event-based cameras are able to operate at frequencies of up to 10 kHz [18], outperforming standard video cameras by over two orders of magnitude. Prior works by the authors demonstrate the feasibility of the neuroTac [19], a neuromorphic optical tactile sensor, for texture classification using both traditional machine learning algorithms [20] and Spiking Neural Networks [2]. These works, while presenting high classification performances of up to 94%, are trained and tested on texture samples explored using limited movement profiles, constrained to a constant velocity and contact force. Other State of the Art (SOTA) works encounter similar issues with constrained experimental conditions. Cao et al. [21] present a zero-shot approach to identify previously unseen textures based on perceptual dimensions learned from training on other textures. Using an optical tactile sensor (GelSight), this work achieves an accuracy of 83% when classifying these unseen textures. However, the collection of data within this experiment relies on the controlled pressing of the tactile sensor against the texture surface. This methodology creates detailed tactile images of the texture surface but does not examine the surface during movement or under the application of shear forces, an important effect during tactile interactions that we seek to investigate here. Yang et al. [22] highlight the importance of these shear forces when identifying texture using tactile sensors.
In real-world applications, texture exploration cannot be assumed to utilise a consistent movement, especially when a user controls the motion of the sensor across the surface, as is the case with a prosthetic device. It is therefore advisable for the classification system to be either generalised across a wide range of movements, textures, and contact forces or to be able to perceive and utilise the movement of the sensor across the surface to inform its decisions. Lieber and Bensmaia [23] note how tactile afferent responses are dependent on the scanning speed used to interact with a given texture, while cortical neurons are able to disentangle these signals, enabling our innate ability for movement-invariant texture discrimination. Similarly, Boundy-Singer et al. [24] suggest that different mechanisms within the brain work to compensate for different movements in order to maintain this high level of perception.
A simple approach toward movement-invariant texture discrimination is to train a machine learning model on a dataset utilising various exploratory patterns, allowing the training step to decouple the complex relationships between sensor output and texture features. Taunyazov et al. [25] present results from a texture classification experiment using an iCub robot for data collection. They note how compliance in the limbs of the robot allows a single Degree of Freedom (DOF) to control the movement of the robot’s sensorised hand across the textures, with said compliance providing different perturbations during the collection of each sample. Across this heterogeneous dataset, the system presented achieves a classification accuracy of 98%, showing generalisation across the varied exploratory movements. A similar, passive touch approach, in which the texture is moved under a stationary sensor, is used within [26]. Both of these works utilise standard machine learning approaches to achieve these classification results. Gupta et al. [27] report accuracies of up to 98% when using a Support Vector Machine (SVM) to classify texture data from a neuromorphic sensor. Their experiment adjusted the angle at which the sensor was positioned, providing some perceived variance in the movement relative to the sensor array.
Whereas many works seek to achieve texture classification regardless of exploratory movement, the classification of specific movements is an area not often explored within the artificial tactile sensing literature. Kinesthesis within robotic systems is often performed by processing the output of a purpose-built sensor [28,29]. A task perhaps analogous to classifying known movements is tactile slip detection, where specific movements must be detected amid noisy signals. Works such as those presented in [30,31,32] must be able to disentangle task-specific tactile signals from movements caused by object slippage. The ability of these slip detection algorithms to detect slippage when interacting with previously unseen objects is also a key factor in their applicability to real-world interactions. A similar objective is sought within our work: the classification of different velocity profiles from a tactile sensor, with the goal of this classification being invariant to the contact surface.
Neuromorphic systems offer the potential for decreased latency and power efficiency while still providing strong performance in classification tasks. To use SNNs in different applications, one must consider all of these aspects during design and optimisation. Many works exist that investigate the impacts of different neuromorphic hardware [33,34,35], encoding schemes [36] and neuron models [37] but all seek to either maximise task performance or minimise one or more of power consumption, latency or memory footprint [2,38,39]. Optimising across these metrics becomes even more important when working on edge applications such as prosthetic devices. These optimisations are discussed within [40], where an exploration into spiking activity versus classification accuracy is performed for a neuromorphic tactile texture task. In this work, we introduce more dimensions to this optimisation problem by including metrics for SNN size, with memory usage being a key constraint on edge platforms.

3. Experimental Setup

3.1. neuroTac Sensor

The tactile sensor used within this work is the neuroTac optical tactile sensor, first presented in [19]. The sensor mimics human tactile perception by combining the biomimetic design of the open-source TacTip sensor [16] with an event-based camera. The sensor’s soft, domed surface is embedded with internal pin-like structures with markers to emulate the distributed, discrete nature of mechanoreceptors in human skin. The deflection of these markers, resulting from deformation of the sensor’s skin, is tracked by the event-based camera to produce a rich, natively spiking output in response to contact. The version of the neuroTac used in this experiment utilises a DVXplorer (iniVation) event camera, of higher resolution than those used in prior works with the neuroTac [19,20,41], and a tip with 331 pins laid out in concentric circles, stereographically mapped to the inside of the sensor skin (see Figure 1).

3.2. Dataset Collection

Within this work, we expand on the range of naturalistic textures explored in previous works by the authors towards a more comprehensive tactile dataset [2,20]. Based on prior studies, humans identify textures using distinct tactile dimensions [42]. These dimensions are roughness, hardness, temperature and stickiness. Our dataset aims to include textures that encompass the full range of each perceptual dimension. Figure 2 shows the textures that are explored within this work.
In order to collect the datasets used during this work, a Franka Research 3 robotic arm (RoboDK Software S.L., Barcelona, Spain) is used to move the neuroTac across the textures with a constrained downwards force of F = 1.51 ± 0.5 N, approximating the forces recorded for human tactile texture exploration by Smith et al. [43]. The downward force applied by the tip of the sensor is verified by a ROBOTIQ FT 300-S force/torque sensor (ROBOTIQ Europe, Vaulx-en-Velin, France) at both the beginning and end of each sample. The setup for this movement is shown in Figure 1.
Within our experiment, the sensor is moved across the textures using different velocity profiles (Vel_Prof): constant velocity, constant acceleration and a triangle-shaped profile of constant acceleration followed by an equal deceleration. Each is explored at different starting velocities or rates of change, as shown in Figure 3. The velocities achieved by these movement profiles (≤60 mm/s) are consistent with human exploratory patterns for texture, as presented in [44].
Our collected dataset comprises 100 iterations of each texture at each Vel_Prof, resulting in 14,400 samples (12 textures × 12 Vel_Prof × 100 iterations). A separate test set was initially split from the dataset at a ratio of 0.2. From the remaining 80% of the dataset, a train/validation split of 0.6/0.4 was applied. This resulted in a test split of 2880 samples, a training split of 6912, and a validation split of 4608. For the results in this work, classification performance is given by the accuracy of the networks across the test set. Details of the collected and preprocessed datasets are shown in Table 1.
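For concreteness, the split arithmetic can be reproduced with a short sketch (a minimal illustration using scikit-learn; the random seed and any stratification over texture/Vel_Prof labels are assumptions, as the text does not specify them):

```python
import numpy as np
from sklearn.model_selection import train_test_split

samples = np.arange(14_400)  # 12 textures x 12 Vel_Prof x 100 iterations

# Hold out 20% as the test set, then split the remainder 0.6/0.4.
rest, test = train_test_split(samples, test_size=0.2, random_state=0)
train, val = train_test_split(rest, test_size=0.4, random_state=0)

print(len(train), len(val), len(test))  # 6912 4608 2880
```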

4. Proposed Method

4.1. Networks

Preprocessing

The large input space provided by the iniVation DVXplorer event camera within the neuroTac (640 × 480) must be reduced in order to increase processing throughput and to enable eventual deployment to Loihi 2, a neuromorphic processing platform designed specifically for the deployment of SNNs [7]. This reduction is performed by a combination of cropping and pooling of the camera frame. Cropping removes pixels at the edges of the frame that do not have a view of the sensor’s internal pins. A pooling operation is then performed to further reduce the input space. This operation is inspired by the neuromorphic pooling algorithm outlined by Rizzo et al. [45], as utilised in prior work [2]. A pooling kernel (of size k_x × k_y) is shifted a set distance (stride) over the 2D input space. At each timestep, if the kernel contains a number of spikes above a given threshold p_thresh, it outputs a spike to the corresponding pixel in the output mapping. By tuning the kernel size (k_x and k_y), stride, and spiking threshold (p_thresh), we are able to reduce spatial dimensionality and perform basic noise filtering within one operation. The following parameters were used for the pooling of samples within this work: k_x = 4, k_y = 4, stride = 4, and p_thresh = 1. This pooling resulted in an output size of 78 × 78. We found that further pooling resulted in an unacceptably large degradation of performance.
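As a concrete sketch, the pooling stage can be expressed over a binarised event tensor as follows (a minimal NumPy illustration; the 312 × 312 cropped input size is inferred from the 78 × 78 output with k = 4 and stride = 4, and the inclusive spike-count comparison is our assumption):

```python
import numpy as np

def pool_events(frames, k=4, stride=4, p_thresh=1):
    """Spatially pool a binary event tensor of shape (T, H, W).

    At each timestep a k x k window slides over the frame with the given
    stride; the output pixel spikes when the window holds at least
    p_thresh events (the inclusive comparison is an assumption).
    """
    T, H, W = frames.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    pooled = np.zeros((T, out_h, out_w), dtype=np.uint8)
    for i in range(out_h):
        for j in range(out_w):
            window = frames[:, i * stride:i * stride + k,
                            j * stride:j * stride + k]
            pooled[:, i, j] = window.sum(axis=(1, 2)) >= p_thresh
    return pooled

# A cropped 312 x 312 frame pooled with k = 4, stride = 4 yields 78 x 78.
events = (np.random.rand(100, 312, 312) < 0.001).astype(np.uint8)
print(pool_events(events).shape)  # (100, 78, 78)
```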
Moving the sensor across the surface with different Vel_Prof leads to samples with inconsistent temporal lengths, i.e., higher-velocity movements across textures of a fixed size inherently result in shorter samples. One option to combat this is to pad shorter samples so that their length matches that of the slower movements. This, however, is undesirable, as the network may learn to identify samples from the absence of spikes at the end of certain samples rather than from the features present within the spiking patterns. Therefore, samples collected from the sensor are cropped in the temporal domain to maintain an even sample length across all Vel_Prof. A beneficial side effect of this temporal cropping is an overall reduction in dataset size, leading to faster training. For the training of our networks in this article, samples were cropped to 1000 ms, as sketched below.
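The temporal crop itself is a simple slice over the time axis (a sketch assuming a 1 ms timestep, matching the 1000 ms crop used here):

```python
def crop_temporal(frames, length_ms=1000, dt_ms=1):
    """Keep only the first length_ms of an event tensor of shape (T, H, W)."""
    return frames[: length_ms // dt_ms]
```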
Figure 4 demonstrates how the observed spiking patterns change for a given sample during each step of this preprocessing.

4.2. Training

Within this work, we train our SNNs using the SLAYER algorithm [46] as implemented in the Lava framework (https://github.com/lava-nc/lava-dl, accessed on 7 January 2024). This algorithm allows SNNs to be trained in a process similar to gradient backpropagation. Our networks all comprise densely connected layers of current-based (CUBA) Leaky Integrate-and-Fire (LIF) neurons [47]. The neuron parameters are set within Lava to a voltage threshold of 1.25, a voltage decay constant of 0.03, a current decay constant of 0.25, and a time constant (τ) of 0.03; these values were found through manual tuning and are kept consistent for each model. A random dropout is applied to 10% of neurons in each layer of each network to reduce overfitting. Each network was trained for 100 epochs with an initial learning rate of 0.01 that decays by 33% every 20 epochs. Our networks all feature one neuron in the output layer for each potential output class, with classification determined by the output neuron that accrues the highest number of spikes within the sample time.
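A network of this form can be sketched in lava-dl’s SLAYER API roughly as below (a minimal illustration based on the public lava-dl examples; the exact block arguments and anything not stated in the text are assumptions, not the authors’ code):

```python
import torch
import lava.lib.dl.slayer as slayer

class TextureSNN(torch.nn.Module):
    def __init__(self, hidden_size=425):
        super().__init__()
        neuron_params = {
            'threshold': 1.25,       # voltage threshold
            'current_decay': 0.25,   # current decay constant
            'voltage_decay': 0.03,   # voltage decay constant
            'tau_grad': 0.03,        # surrogate-gradient time constant
            'dropout': slayer.neuron.Dropout(p=0.10),  # 10% random dropout
        }
        # Dense CUBA LIF layers: 78 x 78 input -> hidden -> 12 outputs.
        self.blocks = torch.nn.ModuleList([
            slayer.block.cuba.Dense(neuron_params, 78 * 78, hidden_size),
            slayer.block.cuba.Dense(neuron_params, hidden_size, 12),
        ])

    def forward(self, spikes):
        # spikes: (batch, 78 * 78, timesteps) binary event tensor
        for block in self.blocks:
            spikes = block(spikes)
        return spikes
```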
Data sparsity is a driving factor behind the power efficiency of neuromorphic hardware. While high spiking activity is analogous to larger amounts of information being processed, it also correlates with higher energy overheads, even when running on purpose-built neuromorphic hardware [48,49]. A potential trade-off therefore exists between information processing and energy efficiency. The previously mentioned limitations of edge hardware further constrain the size and complexity of the networks we seek to train and deploy. With the goal of deploying to the edge in the future, we investigate the effects of altering network size and spiking activity within this work.
Several training parameters can be altered in order to elicit different behaviours from the network. The Spike Rate loss function used by Lava SLAYER utilises a true_rate parameter (0 < true_rate ≤ 1) to control the network’s spiking activity. This parameter acts as a set point, encouraging the network to attain a target spiking rate in the correctly classified neuron. For example, at true_rate = 1.0 the loss function encourages the network to output a spike in the correct output neuron at every timestep. The error is then calculated based on this set point and the network’s ability to produce this level of spiking activity. Increasing this set point therefore increases the overall spiking activity of the network. An opposing false_rate parameter works conversely to reduce the spiking activity of the incorrect output neurons. This false_rate parameter remains set at 0.02 for all networks within this work.
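In lava-dl, this behaviour corresponds to the SpikeRate loss paired with rate-based prediction for the winner-take-most readout described above (a sketch; the reduction mode is an assumption):

```python
import lava.lib.dl.slayer as slayer

# Drive the correct output neuron toward true_rate spikes per timestep and
# the incorrect output neurons toward false_rate.
error = slayer.loss.SpikeRate(true_rate=0.9, false_rate=0.02, reduction='sum')

# Classification: the output neuron with the highest spike count wins.
classifier = slayer.classifier.Rate.predict
```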

4.2.1. Texture Classifier

Our first method for movement-invariant texture classification is the training of a network using data from all Vel_Prof, for all textures, without considering the difference in Vel_Prof. This, ideally, results in a simple, effective and robust classification system without the complications of further features.
An input layer of size 6084, followed by a single hidden layer and a final output layer of size 12, provides one output neuron per texture. The size of the hidden layer was selected based on a grid search detailed in Figure 5 and Table 2.

4.2.2. Velocity Profile Classifier

Here, we introduce a Vel_Prof classification network. The structure of this network is similar to that of the texture classifier, and it is trained using the same data. This network, however, is trained to classify the sample as one of the 12 movement profiles described in Section 3.2.
The same grid search used for the texture classifier was performed upon this network, again shown in Figure 5 and Table 2.

4.3. Metrics

As mentioned previously, when appraising our SNNs for their utility on edge platforms we must also consider the constraints of this application domain. In order to achieve this, we use the following metrics that account for performance, the management of resources, and power consumption.

4.3.1. Accuracy

An objective measure of classification performance, calculated as the ratio of true-positive (TP) and true-negative (TN) classifications to the number of samples within the test set (N). This is shown below in Equation (1).

$$\mathrm{Accuracy} = \frac{TP + TN}{N} \quad (1)$$

4.3.2. Total Number of Weights

The memory resource limitations of edge platforms often limit the size of the network we are able to construct. The total number of weighted connections (W_t) within our network is an easily comparable metric that we actively seek to minimise within this work. Equation (2) shows the calculation for this metric, given that our network layers are densely connected.

$$W_t = \sum_{l=0}^{L-1} i_l \, i_{l+1} \quad (2)$$

where W_t is the total number of weighted connections in the network, L is the number of layers in the network, and i_l is the number of neurons in layer l.

4.3.3. Spiking Activity

As discussed above (Section 4.2), spiking activity is analogous to power consumption. With one of the advantages of neuromorphic hardware being its relatively low energy footprint, we also seek to minimise the spiking activity within our networks. Spiking activity (S) is defined in Equation (3).
$$S = \frac{1}{N} \sum_{n=0}^{N} \sum_{i=0}^{I} s_i \quad (3)$$

where n is the current sample, N is the total number of samples in the test set, I is the total number of neurons in the network, and s_i is the number of spikes that occur for neuron i within the current sample.
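Both resource metrics follow directly from Equations (2) and (3); a small sketch (the layer sizes match the texture classifier’s 6084-425-12 architecture, reproducing the W_t ≈ 2.59 × 10⁶ reported in Table 3):

```python
import numpy as np

def total_weights(layer_sizes):
    """Equation (2): weighted connections in a densely connected network."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def spiking_activity(spike_counts):
    """Equation (3): mean total spikes per test sample; spike_counts is (N, I)."""
    return spike_counts.sum(axis=1).mean()

print(total_weights([6084, 425, 12]))  # 2,590,800 ~ 2.59e6 (cf. Table 3)
```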

4.4. Optimisation Techniques

4.4.1. Grid Search

A grid search optimisation is performed on each of our networks in order to determine the highest performing networks in terms of accuracy across the search space. The full search space and optimisation results from the performed grid search are shown within Table 2. From the results of our grid search, different analysis techniques allow us to analyse potential trade-offs when choosing a network to deploy for a specific application.
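The search itself can be laid out as a simple product over the two parameter sets from Table 2 (a sketch; `train_and_evaluate` is a hypothetical stand-in for one SLAYER training run and returns placeholder values here so the example executes):

```python
from itertools import product

hidden_sizes = list(range(25, 501, 25))                   # {25n : 25n <= 500}
true_rates = [round(0.2 + 0.1 * n, 1) for n in range(8)]  # 0.2 .. 0.9

def train_and_evaluate(hidden_size, true_rate):
    """Hypothetical stand-in for one training run, returning (accuracy, S, W_t).

    Accuracy and S are dummies here; W_t follows Equation (2).
    """
    w_t = 6084 * hidden_size + hidden_size * 12
    return 0.0, 0.0, w_t

results = {}
for hidden_size, true_rate in product(hidden_sizes, true_rates):
    results[(hidden_size, true_rate)] = train_and_evaluate(hidden_size, true_rate)
```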

4.4.2. Pareto Frontier Analysis

Pareto frontiers [50] present a series of equally efficient solutions across a multi-dimensional search space, providing insights into potential design trade-offs when optimising for specific metrics. We look to find Pareto front approximations from our grid search, enabling us to find optimal solutions that trade off between each of our three metrics: accuracy, the average spiking activity (S) of the network, and the total number of weights present in the network (W_t). We form Pareto frontiers for both our texture and Vel_Prof classifiers, seeking a series of equally efficient solutions depending on the choice of metric. We do so using a skyline query [51].
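In this three-metric case, a skyline query reduces to filtering out dominated grid search points (a minimal sketch; accuracy is negated so that every objective is minimised):

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows of (accuracy, S, W_t).

    Accuracy is maximised; spiking activity and weight count are minimised.
    """
    costs = np.asarray(points, dtype=float).copy()
    costs[:, 0] = -costs[:, 0]  # flip accuracy into a cost
    front = []
    for i, c in enumerate(costs):
        # i is dominated if some point is no worse everywhere, better somewhere.
        dominated = np.any(np.all(costs <= c, axis=1) &
                           np.any(costs < c, axis=1))
        if not dominated:
            front.append(i)
    return front

# Example rows drawn from Table 3, plus one dominated configuration.
grid = [[0.95, 9.07, 2.59], [0.68, 0.13, 0.15],
        [0.87, 0.75, 0.15], [0.60, 5.00, 2.00]]
print(pareto_front(grid))  # [0, 1, 2] -- the last point is dominated
```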

5. Results

5.1. Data Inspection

Figure 6 shows several raster plots comparing the data output from the neuroTac sensor when moved across different textures with the same Vel_Prof, and the effects different Vel_Prof have on the output from the same texture.
Figure 6a,b showcases the variance in spiking activity produced at the same Vel_Prof depending on the texture being explored. Visual inspection suggests a large loss of data due to temporal clipping; however, prior works have shown that the initial movement of the neuroTac across a texture is sufficient for accurate classification [2,20].
Large differences in spiking patterns are also observed for the same texture across different movement profiles. Figure 6c,d showcases this, with the higher speed achieved by Vel_Prof 3 evidenced by a shorter sample length when compared to profile 2. As previously mentioned, the temporal clipping in our preprocessing is implemented to avoid this imbalance.

5.2. Grid Search Results

In order to determine the optimal hidden layer size of our texture and Vel_Prof classifiers, we perform a grid search using the parameters noted in Table 2. The results of this optimisation are shown in both Table 3 and Figure 5. These finalised values indicate that the classifiers generally achieve optimal accuracy towards the upper limits of the search space. Increasing network activity, via an increase in the target true_rate, is also shown to increase the accuracy of our texture and Vel_Prof classifiers.
Our texture classifier achieves high accuracies of up to 0.95. As intuitively expected, classification performance tends to increase alongside an increase in network size and spiking activity, with Spearman’s rank correlation coefficients with accuracy of ρ = 0.51 and ρ = 0.89, respectively. However, classification performance tends to plateau during the grid search, improving by only ∼1% as the hidden layer size increases from 200 to 500 nodes. This may indicate that the weights present within smaller networks contain sufficient capacity to learn the correct spatio-temporal features from our data.
Our Vel_Prof classifier provides slightly lower accuracies of up to 0.82. Spearman’s rank correlations within this grid search again yield strong correlations between network size and accuracy (ρ = 0.71), and between spiking activity and accuracy (ρ = 0.81). As with the texture classifier, this grid search also appears to converge, with smaller networks (hidden layer size of 125) reporting accuracies only ∼1% lower than the large networks with hidden layers of 500.
Figure 7 presents the confusion matrices for our highest accuracy texture and Vel_Prof classifiers. While the texture classification network performs well across all textures, its highest rates of error come from the MDF, denim and satin textures. The confusion shown between the two wooden textures of MDF and plywood is perhaps expected. Our profile classifier demonstrates its lowest classification performance among the linear acceleration profiles (4–7), with 77% accuracy recorded for these profiles; a similar performance is seen for the triangular acceleration profiles (8–11), with an accuracy of 78%. Comparatively high performance is achieved for the constant velocity Vel_Prof (0–3), with an accuracy of 95%.

5.3. Pareto Front Analysis

Figure 8 shows a 3D scatter plot of our grid search, with tests lying on the Pareto frontier indicated. These frontier tests are the non-dominated solutions that provide equally efficient trade-offs across the three metrics we look to optimise. While many tests exist along these frontiers, we summarise the solutions within Table 3, presenting the points along the frontier where one may choose to fully minimise or maximise one of our metrics. For both our texture and Vel_Prof classifiers, where there are several solutions along the frontier that minimise a given metric, we prioritise the solution that returns the highest accuracy.

5.4. Comparative Analysis

Table 3 encompasses the solutions found through our methodologies.
Pareto front analysis of our texture classifier grid search returns the peak network when searching the frontier for maximum accuracy. When looking to minimise spiking activity, we see a decrease of 98.6% in S, alongside a similar decrease in W_t (94%), when compared to the peak accuracy solution. These gains come at the cost of a 28% decrease in accuracy. Searching for minimum network size solutions along the frontier results in a classifier that again reduces W_t by 94% and reduces S by 91%, with a smaller loss of 8% in accuracy.
Similar results are shown for our Vel_Prof classifier. The Pareto front again includes the peak network parameters as its maximum-accuracy point. Minimising S along this front results in respective 95% and 94% reductions in S and W_t, at the expense of a large 49% reduction in accuracy. Finding the smallest network from this search results in the same 94% reduction in W_t; however, the comparatively modest 68% reduction in S leaves classification performance only 14% below peak accuracy.

6. Discussion

This work has presented a series of SNNs able to classify a variety of naturalistic textures or movements based on input from a neuromorphic tactile sensor. We then provide an investigation into the effects of network size and spiking activity on the classification performance of our systems, with the goal of finding solutions that can be deployed to constrained edge neuromorphic platforms.
From our grid search detailed in Section 5.2, we can see that the larger texture and Vel_Prof classifiers achieved higher performance than smaller networks, regardless of spiking activity. This is expected, as larger networks contain a higher capacity for learning. Interestingly, despite the much larger input space afforded by the DVXplorer camera, the hidden layer size for both networks trended towards the network size observed in prior work [2].
As observed within Figure 7, MDF and plywood provide the highest confusion during classification across all texture classifiers. We assume that their similarity as wooden textures leads to this confusion and posit that it may also lead to classification confusion within human subjects. While this confusion is noted here, these occurrences are minimal, with all texture classifiers reporting ≥95% accuracy.
Relatively low classification performance was seen for our Vel_Prof classifiers when compared to our texture classifiers (peak 82%). From the confusion matrix of our peak performance network, our classifier performs well at identifying the different constant velocities (profiles 0–3). The main confusion occurs within profiles that require acceleration across the texture. The confusion seen within the Vel_Prof sets (4–7, 8–11) indicates that the system responds well to the movement of the sensor across the texture, failing only to correctly classify the precise acceleration/deceleration of said profile. Interestingly, the confusion displayed by Vel_Prof 5 (37%) with profiles 6 and 7 further indicates that the network is confused within each movement profile shape. We hypothesise that this confusion is due to the temporal cropping of our data to the first 1000 ms. Figure 3 showcases how, for our constant acceleration and triangle-shaped velocity profiles, similar trajectories are followed during this period of each sample, whereas constant velocity profiles appear to be more differentiable.
Our Pareto frontier analysis presents a viable method for design engineers to tailor power, size and performance based on the requirements of different edge applications. For applications such as active prosthetics, where power consumption is often cited as a key design consideration [52,53], our analysis provides a solution with an S 98.6% lower than that found by simply optimising an SNN for accuracy. The Pareto solutions that provide the smallest, lowest-spiking networks among our texture classifiers exceed the human baseline accuracy of ≈65% [2,20], despite their relatively poor performance when compared to the peak accuracy classifiers. This indicates that human-level texture classification is possible even in constrained edge environments.
Additional steps to further reduce W_t, such as pruning, are not investigated in this work, which focuses primarily on the parameters used to train our networks. Such techniques could be applied to trained networks to further reduce network size, and in turn spiking activity, for even more highly constrained environments; they have proven effective for reducing the size of large-scale spiking networks [54].
The integration of our texture and Vel_Prof classifiers into a combined tactile system is a logical next step for this work. Prior studies have demonstrated that combining proprioceptive and tactile senses using Neural Networks (NNs) generally increases texture classification performance [22,28]. Alternative approaches using reservoir computing paradigms could also be explored, as their largely interconnected neuronal structure mimics areas of the brain [55]. These architectures have also previously been explored on edge neuromorphic platforms, demonstrating their utility [56].
Finally, we plan to deploy the networks developed within this work to Loihi2, a neuromorphic processor designed for Spiking Neural Network implementation [7]. This will enable embedded processing within robotic hands that require rapid and low power processing solutions.

7. Conclusions

In this work, we present an end-to-end system utilising a neuromorphic tactile sensor and a Spiking Neural Network to classify naturalistic textures in a movement-invariant manner. A parallel network utilising the same sensory input is also introduced to classify movements. Our classifiers achieve accuracies of 95% and 82% for textures and velocity profiles (Vel_Prof), respectively. After performing a grid search over the training parameters controlling network size and spiking activity, we analyse the Pareto frontier to discuss potential design trade-offs for the deployment of these networks to edge hardware environments. In doing so, we are able to reduce the size of our texture classification network by up to 94% and its spiking activity by up to 68% while still maintaining an accuracy (87%) above human baseline performance.

Author Contributions

Conceptualisation, G.B., B.W.-C. and M.J.P.; methodology, G.B.; software, G.B.; validation, G.B.; formal analysis, G.B.; investigation, G.B.; resources, B.W.-C.; data curation, G.B.; writing—original draft preparation, G.B.; writing—review and editing, G.B., B.W.-C. and M.J.P.; visualisation, G.B.; supervision, B.W.-C. and M.J.P.; project administration, B.W.-C. and M.J.P.; and funding acquisition, G.B. and B.W.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the EPSRC Centre for Doctoral Training in Future Autonomous and Robotic Systems (FARSCOPE) at the Bristol Robotics Laboratory and a Royal Academy of Engineering Research Fellowship on “Shared autonomy neuroprosthetics: Bridging the gap between artificial and biological touch” (RF\202021\20\171).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOF  Degree of Freedom
MDF  Medium-Density Fiberboard
NN  Neural Network
SLAYER  Spike LAYer Error Reassignment algorithm
SNN  Spiking Neural Network
SOTA  State of the Art
SVM  Support Vector Machine

References

  1. Liu, D.; Yu, H.; Chai, Y. Low-Power Computing with Neuromorphic Engineering. Adv. Intell. Syst. 2021, 3, 2000150. [Google Scholar] [CrossRef]
  2. Brayshaw, G.; Ward-Cherrier, B.; Pearson, M. A Neuromorphic System for the Real-time Classification of Natural Textures. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024. [Google Scholar]
  3. Zhu, J.; Zhang, T.; Yang, Y.; Huang, R. A comprehensive review on emerging artificial neuromorphic devices. Appl. Phys. Rev. 2020, 7, 011312. [Google Scholar] [CrossRef]
  4. Davies, M. Lessons from Loihi: Progress in Neuromorphic Computing. In Proceedings of the 2021 Symposium on VLSI Circuits, Kyoto, Japan, 13–19 June 2021; pp. 1–2, ISSN 2158-5636. [Google Scholar] [CrossRef]
  5. Singh, S.; Sarma, A.; Lu, S.; Sengupta, A.; Narayanan, V.; Das, C.R. Gesture-SNN: Co-optimizing accuracy, latency and energy of SNNs for neuromorphic vision sensors. In Proceedings of the 2021 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), Boston, MA, USA, 26–28 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  6. Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  7. Orchard, G.; Frady, E.P.; Rubin, D.B.D.; Sanborn, S.; Shrestha, S.B.; Sommer, F.T.; Davies, M. Efficient Neuromorphic Signal Processing with Loihi 2. In Proceedings of the 2021 IEEE Workshop on Signal Processing Systems (SiPS), Coimbra, Portugal, 19–21 October 2021; pp. 254–259, ISSN 2374-7390. [Google Scholar] [CrossRef]
  8. DeBole, M.V.; Taba, B.; Amir, A.; Akopyan, F.; Andreopoulos, A.; Risk, W.P.; Kusnitz, J.; Ortega Otero, C.; Nayak, T.K.; Appuswamy, R.; et al. TrueNorth: Accelerating From Zero to 64 Million Neurons in 10 Years. Computer 2019, 52, 20–29. [Google Scholar] [CrossRef]
  9. Furber, S.B.; Galluppi, F.; Temple, S.; Plana, L.A. The SpiNNaker Project. Proc. IEEE 2014, 102, 652–665. [Google Scholar] [CrossRef]
  10. Björkman, M.; Bekiroglu, Y.; Högman, V.; Kragic, D. Enhancing visual perception of shape through tactile glances. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3180–3186, ISSN 2153-0866. [Google Scholar] [CrossRef]
  11. Bekiroglu, Y.; Song, D.; Wang, L.; Kragic, D. A probabilistic framework for task-oriented grasp stability assessment. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3040–3047, ISSN 1050-4729. [Google Scholar] [CrossRef]
  12. Papakostas, T.; Lima, J.; Lowe, M. A large area force sensor for smart skin applications. In Proceedings of the 2002 IEEE SENSORS, Orlando, FL, USA, 12–14 June 2002; Volume 2, pp. 1620–1624. [Google Scholar] [CrossRef]
  13. Song, Z.; Yin, J.; Wang, Z.; Lu, C.; Yang, Z.; Zhao, Z.; Lin, Z.; Wang, J.; Wu, C.; Cheng, J.; et al. A flexible triboelectric tactile sensor for simultaneous material and texture recognition. Nano Energy 2022, 93, 106798. [Google Scholar] [CrossRef]
  14. Fishel, J.A.; Loeb, G.E. Sensing tactile microvibrations with the BioTac—Comparison with human sensitivity. In Proceedings of the 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, Italy, 24–27 June 2012; pp. 1122–1127, ISSN 2155-1782. [Google Scholar] [CrossRef]
  15. Fishel, J.; Loeb, G. Bayesian Exploration for Intelligent Identification of Textures. Front. Neurorobotics 2012, 6, 4. [Google Scholar] [CrossRef]
  16. Ward-Cherrier, B.; Pestell, N.; Cramphorn, L.; Winstone, B.; Giannaccini, M.E.; Rossiter, J.; Lepora, N.F. The TacTip Family: Soft Optical Tactile Sensors with 3D-Printed Biomimetic Morphologies. Soft Robot. 2018, 5, 216–227. [Google Scholar] [CrossRef]
  17. Yuan, W.; Dong, S.; Adelson, E.H. GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force. Sensors 2017, 17, 2762. [Google Scholar] [CrossRef]
  18. Brandli, C.; Berner, R.; Yang, M.; Liu, S.C.; Delbruck, T. A 240 × 180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor. IEEE J. Solid State Circuits 2014, 49, 2333–2341. [Google Scholar] [CrossRef]
  19. Ward-Cherrier, B.; Pestell, N.; Lepora, N.F. NeuroTac: A Neuromorphic Optical Tactile Sensor applied to Texture Recognition. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2654–2660, ISSN 2577-087X. [Google Scholar] [CrossRef]
  20. Brayshaw, G.; Ward-Cherrier, B.; Pearson, M. Temporal and Spatio-temporal domains for Neuromorphic Tactile Texture Classification. In Proceedings of the 2022 Annual Neuro-Inspired Computational Elements Conference (NICE ’22), Online, 28 March–1 April 2022; pp. 50–57. [Google Scholar] [CrossRef]
  21. Cao, G.; Jiang, J.; Bollegala, D.; Li, M.; Luo, S. Multimodal zero-shot learning for tactile texture recognition. Robot. Auton. Syst. 2024, 176, 104688. [Google Scholar] [CrossRef]
  22. Yang, J.H.; Kim, S.Y.; Lim, S.C. Effects of Sensing Tactile Arrays, Shear Force, and Proprioception of Robot on Texture Recognition. Sensors 2023, 23, 3201. [Google Scholar] [CrossRef] [PubMed]
  23. Lieber, J.D.; Bensmaia, S.J. The neural basis of tactile texture perception. Curr. Opin. Neurobiol. 2022, 76, 102621. [Google Scholar] [CrossRef] [PubMed]
  24. Boundy-Singer, Z.M.; Saal, H.P.; Bensmaia, S.J. Speed invariance of tactile texture perception. J. Neurophysiol. 2017, 118, 2371–2377. [Google Scholar] [CrossRef] [PubMed]
  25. Taunyazov, T.; Koh, H.F.; Wu, Y.; Cai, C.; Soh, H. Towards Effective Tactile Identification of Textures using a Hybrid Touch Approach. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4269–4275, ISSN 2577-087X. [Google Scholar] [CrossRef]
  26. Lima, B.M.R.; da Fonseca, V.P.; de Oliveira, T.E.A.d.; Zhu, Q.; Petriu, E.M. Dynamic Tactile Exploration for Texture Classification using a Miniaturized Multi-modal Tactile Sensor and Machine Learning. In Proceedings of the 2020 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 24 August–20 September 2020; pp. 1–7, ISSN 2472-9647. [Google Scholar] [CrossRef]
  27. Gupta, A.K.; Ghosh, R.; Swaminathan, A.N.; Deverakonda, B.; Ponraj, G.; Soares, A.B.; Thakor, N.V. A Neuromorphic Approach to Tactile Texture Recognition. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 1322–1328. [Google Scholar] [CrossRef]
  28. Rostamian, B.; Koolani, M.; Abdollahzade, P.; Lankarany, M.; Falotico, E.; Amiri, M.; Thakor, N.V. Texture recognition based on multi-sensory integration of proprioceptive and tactile signals. Sci. Rep. 2022, 12, 21690. [Google Scholar] [CrossRef] [PubMed]
  29. Sachs, N.A.; Loeb, G.E. Development of a BIONic Muscle Spindle for Prosthetic Proprioception. IEEE Trans. Biomed. Eng. 2007, 54, 1031–1041. [Google Scholar] [CrossRef] [PubMed]
  30. Chen, W.; Khamis, H.; Birznieks, I.; Lepora, N.F.; Redmond, S.J. Tactile Sensors for Friction Estimation and Incipient Slip Detection—Toward Dexterous Robotic Manipulation: A Review. IEEE Sensors J. 2018, 18, 9049–9064. [Google Scholar] [CrossRef]
  31. James, J.W.; Pestell, N.; Lepora, N.F. Slip Detection With a Biomimetic Tactile Sensor. IEEE Robot. Autom. Lett. 2018, 3, 3340–3346. [Google Scholar] [CrossRef]
  32. Bulens, D.C.; Lepora, N.F.; Redmond, S.J.; Ward-Cherrier, B. Incipient Slip Detection with a Biomimetic Skin Morphology. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 8972–8978, ISSN 2153-0866. [Google Scholar] [CrossRef]
  33. Huynh, P.K.; Varshika, M.L.; Paul, A.; Isik, M.; Balaji, A.; Das, A. Implementing Spiking Neural Networks on Neuromorphic Architectures: A Review. arXiv 2022, arXiv:2202.08897. [Google Scholar] [CrossRef]
  34. Young, A.R.; Dean, M.E.; Plank, J.S.; Rose, G.S. A Review of Spiking Neuromorphic Hardware Communication Systems. IEEE Access 2019, 7, 135606–135620. [Google Scholar] [CrossRef]
  35. Ielmini, D.; Ambrogio, S. Emerging neuromorphic devices. Nanotechnology 2019, 31, 092001. [Google Scholar] [CrossRef] [PubMed]
  36. Guo, W.; Fouda, M.E.; Eltawil, A.M.; Salama, K.N. Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems. Front. Neurosci. 2021, 15, 638474. [Google Scholar] [CrossRef] [PubMed]
  37. Yamazaki, K.; Vo-Ho, V.K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef] [PubMed]
  38. Gallego, G.; Delbruck, T.; Orchard, G.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.J.; Conradt, J.; Daniilidis, K.; et al. Event-Based Vision: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 154–180. [Google Scholar] [CrossRef]
  39. Falanga, D.; Kleber, K.; Scaramuzza, D. Dynamic obstacle avoidance for quadrotors with event cameras. Sci. Robot. 2020, 5, eaaz9712. [Google Scholar] [CrossRef] [PubMed]
  40. Ali, H.A.H.; Abbass, Y.; Gianoglio, C.; Ibrahim, A.; Oh, C.; Valle, M. Neuromorphic Tactile Sensing System for Textural Features Classification. IEEE Sensors J. 2024, 24, 17193–17207. [Google Scholar] [CrossRef]
  41. Macdonald, F.L.A.; Lepora, N.F.; Conradt, J.; Ward-Cherrier, B. Neuromorphic Tactile Edge Orientation Classification in an Unsupervised Spiking Neural Network. Sensors 2022, 22, 6998. [Google Scholar] [CrossRef] [PubMed]
  42. Hollins, M.; Bensmaïa, S.; Karlof, K.; Young, F. Individual differences in perceptual space for tactile textures: Evidence from multidimensional scaling. Percept. Psychophys. 2000, 62, 1534–1544. [Google Scholar] [CrossRef] [PubMed]
  43. Smith, A.M.; Basile, G.; Theriault-Groom, J.; Fortier-Poisson, P.; Campion, G.; Hayward, V. Roughness of simulated surfaces examined with a haptic tool: Effects of spatial period, friction, and resistance amplitude. Exp. Brain Res. 2010, 202, 33–43. [Google Scholar] [CrossRef]
  44. Callier, T.; Saal, H.P.; Davis-Berg, E.C.; Bensmaia, S.J. Kinematics of unconstrained tactile texture exploration. J. Neurophysiol. 2015, 113, 3013–3020. [Google Scholar] [CrossRef]
  45. Rizzo, C.P.; Schuman, C.D.; Plank, J.S. Neuromorphic Downsampling of Event-Based Camera Output. In Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference (NICE ’23), San Antonio, TX, USA, 11–14 April 2023; pp. 26–34. [Google Scholar] [CrossRef]
  46. Shrestha, S.B.; Orchard, G. SLAYER: Spike layer error reassignment in time. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar] [CrossRef]
  47. Cavallari, S.; Panzeri, S.; Mazzoni, A. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks. Front. Neural Circuits 2014, 8, 12. [Google Scholar] [CrossRef] [PubMed]
  48. Martinelli, F.; Dellaferrera, G.; Mainar, P.; Cernak, M. Spiking Neural Networks Trained with Backpropagation for Low Power Neuromorphic Implementation of Voice Activity Detection. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 8544–8548, ISSN 2379-190X. [Google Scholar] [CrossRef]
  49. Fontanini, R.; Esseni, D.; Loghi, M. Reducing the Spike Rate in Deep Spiking Neural Networks. In Proceedings of the International Conference on Neuromorphic Systems (ICONS ’22), Knoxville, TN, USA, 27–29 July 2022; pp. 1–8. [Google Scholar] [CrossRef]
  50. Tušar, T.; Filipič, B. Visualization of Pareto Front Approximations in Evolutionary Multiobjective Optimization: A Critical Review and the Prosection Method. IEEE Trans. Evol. Comput. 2015, 19, 225–245. [Google Scholar] [CrossRef]
  51. Tiakas, E.; Papadopoulos, A.N.; Manolopoulos, Y. Skyline queries: An introduction. In Proceedings of the 2015 6th International Conference on Information, Intelligence, Systems and Applications (IISA), Corfu, Greece, 6–8 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
  52. Esposito, D.; Savino, S.; Cosenza, C.; Gargiulo, G.D.; Fratini, A.; Cesarelli, G.; Bifulco, P. Study on the Activation Speed and the Energy Consumption of “Federica” Prosthetic Hand. In Proceedings of the XV Mediterranean Conference on Medical and Biological Engineering and Computing—MEDICON 2019, Coimbra, Portugal, 26–28 September 2019; Henriques, J., Neves, N., de Carvalho, P., Eds.; Springer: Cham, Switzerland, 2020; pp. 594–603. [Google Scholar] [CrossRef]
  53. Smail, L.C.; Neal, C.; Wilkins, C.; Packham, T.L. Comfort and function remain key factors in upper limb prosthetic abandonment: Findings of a scoping review. Disabil. Rehabil. Assist. Technol. 2021, 16, 821–830. [Google Scholar] [CrossRef] [PubMed]
  54. Iglesias, J.; Eriksson, J.; Grize, F.; Tomassini, M.; Villa, A.E.P. Dynamics of pruning in simulated large-scale spiking neural networks. Biosystems 2005, 79, 11–20. [Google Scholar] [CrossRef]
  55. Seoane, L.F. Evolutionary aspects of reservoir computing. Philos. Trans. R. Soc. B Biol. Sci. 2019, 374, 20180377. [Google Scholar] [CrossRef]
  56. Gaurav, R.; Stewart, T.C.; Yi, Y.C. Spiking Reservoir Computing for Temporal Edge Intelligence on Loihi. In Proceedings of the 2022 IEEE/ACM 7th Symposium on Edge Computing (SEC), Seattle, WA, USA, 5–8 December 2022; pp. 526–530. [Google Scholar] [CrossRef]
Figure 1. (Left) Exploded view of the neuroTac optical tactile sensor. The event camera (DVXplorer) is encased within a 3D-printed mount, facing the inside of a compliant tip through a lens. An LED ring lights the internal pins of the tip, allowing the camera to capture the movement of these pins as a stream of events. (Top Right) The inside of the tip. The particular tip used within this work uses 331 pins in concentric rings. (Bottom Right) Experimental setup showing the neuroTac deployed on the FRANKA arm.
Figure 2. Textures classified within this work. From 0–11: aluminium, acrylic, Medium-Density Fiberboard (MDF), plywood, fake leather, denim, silicon, foam, satin, velvet, fake fur and wool.
Figure 3. The Vel_Prof of the sensor tip during experimentation. (Left) Constant velocity shows a rapid acceleration as the arm moves to a set velocity. This velocity is maintained during sensing. (Centre) Constant acceleration shows a constant increase in velocity during recording. (Right) A triangle-shaped Vel_Prof includes the initial linear acceleration of the sensor across the surface before an equal constant deceleration once the midpoint of the movement has been reached.
Figure 4. Raster plot representations of a sample during each step of preprocessing. The flattened input vector of the sample decreases from 307,200 to 6084 neurons, greatly reducing network size and processing throughput. A raw output sample from the neuroTac sensor is shown in (a). This raw sample is then cropped spatially, resulting in the large reduction in neurons shown in (b). The shaded region of plot (c) represents the temporal cropping step applied between plots (c,d). Each of these steps is further described in Section 4.1.
Figure 5. The results of grid search optimisation for the texture classification and profile classifier networks.
Figure 6. Top: Figures depicting the difference in spiking output from the neuroTac for textures (a) velvet (Label 9) and (b) fake fur (Label 10) when moved across the surface using the same linear acceleration (Vel_Prof 4). Bottom: Figures showing the difference in spiking output from the sensor when moved across the silicon texture (Label 6) using two different linear velocities ((c) Vel_Prof 3 and (d) Vel_Prof 2). The shaded region to the right of each plot demonstrates the data discarded by the temporal crop performed during preprocessing, as explained in Section 4.1.
Figure 7. The confusion matrices produced by the highest accuracy texture classification network and Vel_Prof network.
Figure 8. Results of our grid search plotted as a 3D scatter. Tests that fall along the Pareto frontier are shown in red, with other tests in blue.
Table 1. Features of raw and preprocessed datasets.
| Dataset | Number of Samples | Number of Textures | Number of Movements | Spatial Resolution (Pixels) | Sample Length (ms) |
|---|---|---|---|---|---|
| Raw Data | 14,400 | 12 | 12 | 640 × 480 | 1400–4800 |
| Processed Data | - | - | - | 78 × 78 | 1000 |
Table 2. Search Space for the performed training parameter grid search. The Peak Values shown here are the training parameters that produced the highest accuracy classifiers.
| Parameter | Search Space | Classifier | Peak Value |
|---|---|---|---|
| Hidden Layer Size | S = {25n : n ∈ ℕ, 25n ≤ 500} | Texture | 425 |
| | | Vel_Prof | 450 |
| True Rate | S = {0.2 + 0.1n : n ∈ ℕ₀, 0 ≤ n ≤ 7} | Texture | 0.9 |
| | | Vel_Prof | 0.3 |
Table 3. Comparison of our networks analysed in this work. Bold values indicate optimal values for that metric across the performed tests.
| Classifier | Method | Accuracy | S (×10⁶) | W_t (×10⁶) |
|---|---|---|---|---|
| Texture | Peak | **0.95** | 9.07 | 2.59 |
| | Pareto Accuracy | **0.95** | 9.07 | 2.59 |
| | Pareto Spiking Activity | 0.68 | **0.13** | **0.15** |
| | Pareto Network Size | 0.87 | 0.75 | **0.15** |
| Vel_Prof | Peak | **0.83** | 1.70 | 2.74 |
| | Pareto Accuracy | **0.83** | 1.70 | 2.74 |
| | Pareto Spiking Activity | 0.42 | **0.08** | **0.15** |
| | Pareto Network Size | 0.71 | 0.54 | **0.15** |
