Article

A Scalable, Multi-Core, Multi-Function, Integrated CMOS/Memristor Sensor Interface for Neural Sensing Applications

Centre for Electronics Frontiers, IMNS, School of Engineering, University of Edinburgh, Edinburgh EH9 3JL, UK
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(1), 30; https://doi.org/10.3390/electronics14010030
Submission received: 29 November 2024 / Revised: 20 December 2024 / Accepted: 23 December 2024 / Published: 25 December 2024
(This article belongs to the Special Issue Analog and Mixed Circuit: Design and Applications)

Abstract

This paper presents the architecture, design, and testing results of a scalable, multi-core, multi-function sensor interface, integrating CMOS technology and memristor elements for efficient neuromorphic and bio-inspired analysis. The architecture leverages the high-density and non-volatile properties of memristors to support different analysis functions. Each processing core is equipped with hybrid CMOS/memristor arrays, enabling real-time parallel acquisition and analysis, and each can be configured independently. The system facilitates communication between cores and is fully scalable. The first implementation supports 16 input channels, storing 256 neural signal samples, and the second implementation supports 576 input channels, storing 9k neural signal samples.

1. Introduction

Neural signal acquisition and analysis play an important role in the diagnosis and monitoring of neurological disorders affecting the functioning of the brain, spine, and nervous system. These include conditions such as epilepsy and seizures, the consequences of injury, and neurodegenerative diseases such as dementia [1,2]. Advances in neural signal analysis have been enabled by the implementation of machine learning (ML) algorithms on silicon, facilitating low-latency detection of neurological disorders [3,4,5,6,7]. Although still in their infancy, these applications are developing rapidly and are likely to become increasingly informative, accurate, and more complex as our understanding of the brain and the mechanisms by which it operates improves.
A neural recording system typically consists of an acquisition layer using neural probes and electrode arrays, a signal conditioning layer, one or more processing layers to classify activity, and a presentation layer to summarize results, as shown in Figure 1.
The acquisition of neural signals inevitably requires contact with the subject, using sensors that are either non-invasive or invasive. Non-invasive methods, such as electroencephalography (EEG), rely on electrodes attached to the scalp; the intervening tissue conducts sufficiently that electrical potentials originating some distance beneath the scalp can be detected. The signals detected are the averaged activity of a large population of neurons. Being non-invasive, this approach is preferred, but it lacks spatial resolution, has low signal amplitude, is sensitive to the environment, and can be affected by interference from involuntary body movements or other activity. Invasive methods may consist of electrodes placed in the subdural space, as in electrocorticography (ECoG), or probes inserted directly into the cerebral cortex. The latter yields more information due to better spatial resolution, allows accurate targeting, and provides improved SNR. Invasive methods may also facilitate obtaining signals from individual neurons. Since invasive methods require a surgical procedure, which may cause some trauma to the target area, they are only warranted when there is the potential to improve quality of life. The various probe types are illustrated in Figure 2.
Signals obtained from the brain have the characteristics shown in Figure 3. All typically have amplitudes on the order of microvolts. Local field potentials, accumulated from the activity of many neurons, occupy a low frequency range from 0.1 Hz to ~300 Hz, while action potentials are neuron-specific and occupy a range from 300 Hz to ~10 kHz.

2. Materials and Methods

2.1. Neural Analysis

Processing initially requires signal acquisition and/or sample capture to present a flow of data for analysis. The low amplitude of signals obtained from neural probes makes them too weak for direct analysis, so they require conditioning, accomplished by a combination of gain and filtering. Neural signals are continuous but slowly varying; incoming data can thus be sampled and then processed with little loss of information. The interpretation of neural data requires assessment of amplitude and temporal properties. The location of probes is also important, given that different parts of the brain are associated with different functions and disorders.
Although captured data have typically been amplified by a gain stage, signal amplitude components still reflect neural health. For example, the absence of responses, or the difference between the amplitudes of responses to different stimuli, is informative. Declining amplitude over time is indicative of degenerative disorders due to reduced synaptic strength (plasticity), affecting cognitive function [8]. Temporal components are prevalent and more easily observed; these include frequency, waveform shape, and waveform form factor. Waveform frequencies tend to indicate the level of activity within the brain; this is evident during activities that are intense with regard to sensory inputs, are motion-rich, and require motor responses/commands and rapid decision making. Frequency offers insights into the speed at which neurons are operating (the axon propagation velocity), which is also a reflection of neuron refraction (the speed at which neurons can fire successively). Comparison of current activity rates against previously recorded rates may assist diagnosis [9].
The waveform shape of neural signals differs depending on the types of neurons participating in the activity. Neurons can loosely be classified as excitatory or inhibitory, promoting or preventing action potentials. Imbalances between these two types are evident in epilepsy, where there is an excess of excitatory activity, and in autism, where there is an excess of inhibitory activity. Again, comparing these waveform shapes against legacy data may reveal abnormalities. A further temporal component is oscillations, often referred to as brain waves. Such oscillations have been detected in different frequency bands and are associated with different forms of brain activity, as follows:
  • Delta, 0.5–4 Hz, deep sleep;
  • Theta, 4–8 Hz, creativity, intuition, shallow sleep;
  • Alpha, 8–12 Hz, relaxation, imagination, concentration;
  • Beta, 12–30 Hz, reasoning, logic, alertness;
  • Gamma, 30–100 Hz, memory, attention, schizophrenia.
These oscillations are not fully understood, but appear to be a form of communication mechanism between different parts of the brain [10].
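For illustration, the band boundaries listed above can be expressed as a simple lookup (a hypothetical helper, not part of the system described in this paper):

```python
# Map an oscillation frequency (Hz) to its conventional band, per the list
# above. Boundary frequencies are assigned to the higher band.
BANDS = [
    ("Delta", 0.5, 4.0),
    ("Theta", 4.0, 8.0),
    ("Alpha", 8.0, 12.0),
    ("Beta", 12.0, 30.0),
    ("Gamma", 30.0, 100.0),
]

def classify_band(freq_hz):
    """Return the band name for a frequency, or None if out of range."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return None
```
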
In summary, neural analysis requires three main processes: the capture of neural activity, the identification of signal types, and the sorting of results to extract information. The identification of features requires analysis of amplitude, frequency, waveform shape, and waveform frequency band.

2.2. Neural Analysis System Practicalities

With regard to performance, neural analysis must be completed in a timely manner to facilitate real-time systems. It must therefore exhibit low latency and not be constrained by competition for resources such as memory access. Given that it may be performed close to the signal source, positioned as an edge device, it must also be energy-efficient. These issues impose speed and power requirements. Neural signals from a subject do not remain consistent over time; any system must therefore be configurable, such that the parameters/coefficients involved in the analysis can be reprogrammed without requiring hardware updates. This is a critical requirement for systems that may be implanted.
The ability to interface to a large probe count, capturing a stream of samples from each, requires high throughput and parallelism. The required connectivity is inevitably dense, and it is desirable that it is optimized, with an appropriate trade-off of signal count against bandwidth. When scaling, however, it is desirable that the various interfaces remain consistent.
Finally, the system may be re-tasked to support different analysis functions, beyond simple updates to parameters/coefficients. It is desirable that the system can be re-configured to re-deploy any circuitry that may have become redundant as a consequence of that re-tasking. Again, this is very important for systems that may be implanted.

2.3. Memristors

The suggestion of memristance is attributed to Leon Chua, who in 1971 postulated the existence of a fourth passive element alongside the resistor, capacitor, and inductor, with this element occupying the position shown in Figure 4.
Memristors were first realized in 2008 and their behavior is well documented [11,12]. The device exhibits a unique pinched hysteresis loop, acting as an electrically programmable resistor. The characteristic is bi-directional, allowing the resistance to be programmed to a low-resistance state (LRS), a high-resistance state (HRS), or some value between these states. The programmed resistance is non-volatile, making the device ideal for memory/storage-related applications. Memristors are typically formed as a vertical stack including a metal oxide layer, such as TiO2, where the resistance is electrically modulated by the migration of oxygen vacancies. Given that the width/length of a typical stack is a fraction of a micron, memristors are compact devices and excellent candidates for integration. Memristors have since been employed in a variety of memory, reconfigurable, and neuromorphic applications, including compute-in-memory.
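As an aside, the pinched-hysteresis behavior can be reproduced with the well-known linear ionic drift model; the sketch below uses illustrative parameter values, not those of the fabricated devices:

```python
import numpy as np

# Minimal linear ionic drift memristor model (after the 2008 HP realization).
# All parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16e3      # low/high resistance states (ohms)
D = 10e-9                      # oxide thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

def simulate(v, dt, w0=0.5):
    """Integrate the doped-region width w under a voltage drive v(t).
    Returns the memristance trace M(t)."""
    w = w0 * D
    M = np.empty_like(v)
    for k, vk in enumerate(v):
        m = R_ON * (w / D) + R_OFF * (1 - w / D)  # instantaneous memristance
        i = vk / m                                # Ohm's law at this instant
        w += MU_V * (R_ON / D) * i * dt           # linear drift of doped region
        w = min(max(w, 0.0), D)                   # hard bounds keep w physical
        M[k] = m
    return M

# One cycle of a 1 Hz sinusoidal drive traces a pinched hysteresis loop.
t = np.linspace(0.0, 1.0, 10000)
M = simulate(np.sin(2 * np.pi * t), dt=t[1] - t[0])
```
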
Neural signal technology has, to date, implemented machine learning (ML) algorithms on low-power integrated circuit chips for low-latency detection of neurological disorders on the edge. Such processors lack versatility, executing algorithms specific to solitary use cases and are not readily customized or reconfigured. In prior studies, it was demonstrated that the intrinsic resistive switching characteristics of memristors can be utilized for the detection of neural activities and epilepsy biomarkers with higher energy efficiency than conventional processing methods [13,14]. To utilize memristors and exploit the benefits of geometry and future scaling, it is desirable to integrate memristors directly into digital/analog circuit structures. To this end, a BEOL fabrication process has been developed [15]. This process allows memristors to be deposited onto the surface of CMOS wafers.

3. Application of Memristors

As previously discussed, the signals from neural probes are low amplitude with noise content, and these require conditioning before subsequent analysis. It is essential that the signal path includes a gain stage; this could be local to the probe electronics or the front-end of a neural processor. Noise is present due to muscle movement, mechanical movement, and/or electrical disturbances. These noise artefacts are best removed by filtering within the front-end gain stage and prior to any analysis [16].
Analysis is required for incoming data received over a given time interval; it is therefore appropriate that data are sampled during that interval and each sample is stored until the analysis is completed, after which it may be discarded. Two options are prevalent for accomplishing this. The first is a sample-hold circuit that holds a sampled value on a capacitor structure; such circuits are sensitive to leakage and disturbances, so the held value degrades with storage time. The second is conversion of a sampled value to a digital representation (using an ADC), which is then stored in static logic; this requires a larger circuit, with control logic and static memory. A third solution utilizes memristors, which are small and have both state dependence and a continuous characteristic, allowing high-density storage of analog information. Sample capture/storage can thus be accomplished using memristors, with a memristor acting as a sensor, integrating an applied bias current and manifesting the result as resistance. This is herein termed Memristive Integrated Sensing (MIS) [13,14].

3.1. Identification of Signal Types

The activity captured during a sampling interval represents both local field potentials and action potentials (spikes). From Section 2.1, the activity captured requires analysis in respect of amplitude, frequency, waveform shape, and frequency band.

3.1.1. Amplitude

The activity captured during the sampling interval requires comparison versus a reference level to establish whether features such as spikes are evident. This thresholding process may require adjustment of the reference levels based on the incoming signal strength and is thus adaptive. This can be obtained from data captured via MIS, using a bespoke ADC with controllable thresholds.
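One common software analogue of such adaptive thresholding scales a robust estimate of the incoming noise level; the median-based estimator of Quiroga et al. is used here purely for illustration and is not the chip's threshold circuit:

```python
import numpy as np

# Adaptive spike detection: the threshold tracks the incoming signal
# strength via a robust (median-based) noise estimate.
def adaptive_threshold(x, k=4.0):
    """Return a detection threshold scaled from a robust noise estimate."""
    sigma = np.median(np.abs(x)) / 0.6745  # robust std estimate for Gaussian noise
    return k * sigma

def detect_spikes(x, k=4.0):
    """Return the indices of samples exceeding the adaptive threshold."""
    thr = adaptive_threshold(x, k)
    return np.flatnonzero(np.abs(x) > thr)
```

Because the threshold is derived from the data itself, the same code handles recordings of different amplitude without retuning.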

3.1.2. Frequency

The frequency of activity can be extracted by inspection of the signal amplitude for a procession of samples over a given duration. This can similarly be obtained from data captured via MIS. If features occur at different amplitudes, multiple ADC conversions, each with adjusted thresholds, can be used to separate each.

3.1.3. Waveform Shapes

Any features detected require comparison against one or more signal templates to establish whether they match known signal types, where each template may be a signature associated with a specific neuron type. Spikes may also overlap, in which case the resulting feature may not match any known type; however, additional templates can be used that represent the typical result where specific combinations of spikes coincide. This comparison can be accomplished using memristors arranged in a crossbar structure and configured for Vector Matrix Multiplication (VMM) [17]. Each captured sample within the sampling interval is multiplied by weights represented by memristor resistance. The weights are chosen to reflect the target wave shape (i.e., the template), and the accumulated result from the VMM is indicative of a match to that shape. This is herein termed Template Matching (TM) [18,19].
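A behavioral sketch of TM as a single VMM, with an ordinary weight matrix standing in for the memristor conductances and toy sinusoidal templates (shapes and values are illustrative):

```python
import numpy as np

# Template matching as one vector-matrix multiply: each crossbar column
# holds one template; the column with the largest accumulated result
# indicates the best match.
def template_match(samples, templates):
    """samples: (16,) captured window. templates: (16, n_templates).
    Returns the index of the best-matching template."""
    scores = samples @ templates        # one VMM evaluates all templates
    return int(np.argmax(scores))

# Two toy templates: one- and two-cycle sinusoids over the window.
t = np.linspace(0.0, 1.0, 16)
templates = np.stack([np.sin(2 * np.pi * t),
                      np.sin(4 * np.pi * t)], axis=1)
```
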

3.1.4. Waveform Frequency Band

Brain waves have been observed in specific frequency bands and activity captured in the sampling interval may show these. Ideally, these need to be detected and the frequency band determined. This can be accomplished using memristors in a similar manner to TM, by using a crossbar structure to act as a Finite Impulse Response (FIR) filter [20]. Each column of a crossbar can encode the weights corresponding to the coefficients associated with different frequency bands. The VMM thus produces an accumulated result indicative of a frequency band. This is herein termed FIR Filtering.
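The crossbar FIR bank can likewise be sketched as one VMM per sample window; the low-pass/high-pass coefficient pair below is a toy example, not the chip's actual band filters:

```python
import numpy as np

# FIR filtering as a crossbar VMM: each column stores the coefficient set
# for one band, so applying the sixteen delayed samples yields every
# band's filter output in a single multiply.
def fir_bank(window, coeffs):
    """window: (16,) most recent samples, newest last.
    coeffs: (16, n_bands) FIR coefficient columns.
    Returns one filter output per band."""
    return window[::-1] @ coeffs        # reversed window = one convolution step

lowpass = np.ones(16) / 16.0            # 16-tap moving average
highpass = np.zeros(16)
highpass[0], highpass[1] = 1.0, -1.0    # first-difference (high-pass) taps
coeffs = np.stack([lowpass, highpass], axis=1)
```

A constant (DC) window passes the moving average unchanged and is rejected by the first-difference column, as expected of a low-pass/high-pass pair.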

3.2. Sorting of Results to Extract Information

The features detected require sorting to cluster those of similar type and to examine their rate of occurrence. Various clustering algorithms have been developed for this purpose. Once clustered, each cluster is examined to establish whether the form factor of its contents is sufficiently similar to another cluster that the two can be merged, or whether its contents are sufficiently disparate that the cluster should be split. Finally, validation tests may be used to determine metrics that represent the quality and consistency of the sorted data. Many sorting and validation algorithms are computationally intensive, and these activities are assumed to be performed remotely from the signal source, at the application level.

4. Development of a Memristor-Based System

The concept of a memristor-based solution is shown in Figure 5. A memristor VMM crossbar fulfills many of the analysis requirements, and the use of memristors in general satisfies the system requirements. It has very few stages and, hence, low latency. Furthermore, it is not constrained by resource limitations with respect to memory access. Since a crossbar relies on static logic, which is only active during analysis, it is by definition energy-efficient. The parameters/coefficients (or weights) are represented by programmable memristor resistance, and such values can be updated electrically and arbitrarily. It would be limiting if only one operation were permitted at any given time: some probe groups may be located to serve a specific purpose and require only one analysis function, which may not be required by others, while other probe groups may be active in unison, all requiring a function such as TM. Given the quest for high probe count, a distributed approach is appropriate. The first concept of a memristor-based system for neural analysis was introduced in [21]. This presented the multi-core solution shown in Figure 6, where all cores support a common set of analysis modes but can also be configured independently. Such a core is referred to as a Process Element (PE).
The system has two main parts: the sensing Front-End channels and an array of PEs. Each PE comprises a memristive crossbar, DAC, ADC, and Reconfigurable Interface; the latter organizes the appropriate stimuli to effect VMM, FIR, TM, MIS, and reservoir computing (RC) modes. A PE accepts analog inputs, which may be connected to neural probes, and presents analog outputs carrying the VMM result; it also accepts a digital, serialized input and presents a digital, serialized output. To support functional parallelism, a PE is capable of multiple operating modes, and these operations are applied to a given probe group. PEs may be used in a shallow, highly parallel configuration, with all PEs performing a common operation; in a partitioned configuration, with groups of PEs performing different operations; or in a deep configuration such as a Feed Forward Neural Network, where PEs are connected successively, the outputs of one PE feeding the inputs of another. Any PE may connect to any other PE in this manner, so the connectivity is arbitrary. The system can be scaled in two dimensions on a silicon die and in three dimensions by stacking such die.

4.1. FFN/RCN

A Feed Forward Neural Network (FFN) is shown in Figure 7. The input layer would be connected to probe signals, and each successive intermediate stage supports additional complexity; each stage, however, adds training workload.
A variation on this theme is Reservoir Computation, illustrated in Figure 8 [14,22,23,24]. This is a recurrent network with a randomly connected and initialized reservoir. The input layer is fed into the reservoir using randomly assigned weights. The reservoir is a fixed connection arrangement of neurons, which introduces non-linearities. Since the input-to-hidden and hidden-to-hidden connections are random, learning is only applied to the output layer. The internal weights are typically either high or low values, making the reservoir sparse but containing strong paths. Being recurrent, the reservoir stores some level of history and thus responds to both previous and current stimuli (an example of an Echo State Network [25]); the reservoir can thus be considered a network of oscillators. The output layer weights are adjusted to produce the desired responses.
A PE can function as a physical RC, equivalent to the minimum complexity echo state network and governed by the following equations:
x(t+1) = H(V·s(t+1) + W·x(t))
y(t+1) = U·x(t)
where
  • s is the input vector (mapped to the sensing channel outputs);
  • V is the input connection weights (fixed unity magnitude for all inputs, with random signs);
  • x is the reservoir internal state (mapped to memristor resistance);
  • W is the reservoir internal connection weights, 0 or 1 (mapped to word line selection);
  • H is the reservoir activation function (mapped to the nonlinear memristor voltage/resistance dynamics);
  • U is the trained weights for the output layer (mapped to source line selection);
  • y is the reservoir output.
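A behavioral sketch of these equations, with tanh standing in for the memristor nonlinearity H and a simple cycle topology for W (sizes and values are illustrative):

```python
import numpy as np

# Minimum-complexity echo state network following the equations above.
rng = np.random.default_rng(1)
N = 16                                   # reservoir nodes (one crossbar column)
V = rng.choice([-1.0, 1.0], size=N)      # unity input weights, random signs
W = np.roll(np.eye(N), 1, axis=1)        # simple cycle reservoir (0/1 entries)
U = rng.normal(0.0, 0.1, size=N)         # output-layer weights (untrained here)

def step(x, s):
    """One reservoir update: x(t+1) = H(V*s(t+1) + W@x(t)), y(t+1) = U@x(t)."""
    y = U @ x                            # output from the current state
    x_next = np.tanh(V * s + W @ x)      # tanh stands in for H
    return x_next, y

# Drive the reservoir with one cycle of a sinusoid.
x = np.zeros(N)
for s in np.sin(np.linspace(0.0, 2 * np.pi, 50)):
    x, y = step(x, s)
```

The cycle matrix W can be swapped for other 0/1 topologies (e.g., a delay line) without changing the update rule, mirroring the word-line reconfiguration described later for RC mode.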

4.2. Interconnection

Interconnecting PE blocks such that the analog outputs of one PE can connect to the analog inputs of any other PE would require an analog switch matrix that would be a significant challenge to implement. The matrix would likely be a large structure, with many resistive paths and heavy capacitive loads, unless littered with buffer repeaters that compromise energy efficiency. It is therefore not considered efficient to pass analog traffic beyond the nearest neighboring PE.
To support arbitrary interconnection, a PE supports input and output paths where the incoming and/or outgoing data are available as a digital, serialized stream conveyed using shift registers. A PE has a serial input to an internal shift register to optionally convert an incoming digital, serial stream to an analog representation. A PE would thus have analog and digital input options. A PE also has a serial output from an internal shift register, optionally holding a digital representation of the analog PE result. A PE would thus have analog and digital output options. With serial connections, the transport of data from any PE to any other PE becomes trivial. The transmission of serial data can easily be accomplished at clock speeds beyond 100 MHz, allowing serial registers in different PE to be concatenated, reducing pin count. In a speed-aggressive circumstance, serial registers could be controlled independently or concatenated to a shallower depth. It should be noted that the transmission time from a source PE to a destination PE is consistent but not uniform; this depends on their relative positions within the serial shift register chain(s).
PEs can be interconnected by analog means, with the analog outputs of one PE connected in parallel to the analog inputs of its nearest neighbor (this connection is dedicated), or by digital means, with a serial connection from a PE to any other PE (this connection is arbitrary).
To support scaling, PEs may be arranged in a two-dimensional array with three interconnection types: analog successive, digital arbitrary, and hybrid. This is illustrated in Figure 9. Digital arbitrary interconnection is not limited to the transfer of bytes or words; data can be exchanged at the bit level. If a two-dimensional structure were integrated on silicon, a three-dimensional array would also be possible by stacking die or stacking packaged devices.

5. Operating Modes

With a reconfigurable interface (RI) the 1T1R memristor crossbar (XBAR) can be configured to operate in multiple modes, described as follows and illustrated in Figure 10.

5.1. Calibration

Calibration mode allows the resistances of all memristors to be read in a row-progressive manner. In this mode, the bit lines are connected to the I2V (at the common mode voltage), the source lines are set to V_Read, and each word line (row) is active in sequence on successive master clock cycles. The output voltage of the I2V represents the current V_Read/R_m, from which the resistance R_m can be determined.

5.2. Initialization

Initialization mode allows the resistance of all memristors to be programmed and read back; this mode would be used for forming. Note that the stochastic nature of memristor programming requires a closed-loop system, such that the desired final value is achieved by a process of iteration. When writing, the bit lines are disconnected from the I2V and set to V_Write in sequence, the source lines are set to the (sixteen) DAC values, and the word lines are all active. When reading, the bit lines are connected to the I2V (at the common mode voltage), the source lines are set to V_Read, and each word line is active in sequence on successive master clock cycles.

5.3. FIR

In FIR mode, the memristors are initialized with the FIR coefficients. FIR mode has bit lines connected to the I2V at common mode voltage, the source lines set to sixteen input samples with incremental deltas generated by a sample-hold delay-line, and the word lines all active.

5.4. TM

TM mode is the same as FIR mode, but with the memristors initialized with the TM coefficients.

5.5. Feed Forward Network

The Feed Forward Network (FFN) mode allows PE to be concatenated, bypassing the Front-End circuits. This mode is required for the successive and hybrid PE interconnection schemes.

5.6. MIS

MIS mode requires the programming of a memristor's value in sympathy with an analog input. This is accomplished using a switched-capacitor amplifier pulser circuit, which produces a pulse-width modulated (PWM) representation of the analog input voltage together with a sign output. The sign output is applied to the source line and the PWM stream is applied to the bit line. When the incoming analog voltage is higher than the pulser common mode voltage, the sign output is low and the PWM has a 20% duty cycle (i.e., 80% active); conversely, when the incoming analog voltage is lower, the sign is high and the PWM has an 80% duty cycle (i.e., 20% active). When writing, the bit lines are connected to the PWM pulse sources, modulated by the analog input voltage. The bit lines are active in sequence on successive master clock cycles, the source lines are connected to the pulser sign bit, and the word lines are enabled in sequence every 32 master clock cycles. When reading, the bit lines are connected to the I2V (at the common mode voltage) and the source lines are set to V_Read. There are two options for the word lines: either all word lines enabled, or word lines enabled in sequence every 32 master clock cycles. The memristors are thus written in sequence: row 0 (WL<0>) from BL<0> to BL<15>, row 1 (WL<1>) from BL<0> to BL<15>, and so on. The memristors can be read back in the same sequence or by reading the bit lines with all word lines active concurrently.
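The pulser's sign/duty selection described above reduces to a simple rule (sketched below; v_cm denotes the pulser common mode voltage, and the 20%/80% figures are taken from the text):

```python
# Sign and active-fraction selection of the MIS pulser, per the description
# above: input above the common mode voltage -> sign low, 80% active pulses;
# input below -> sign high, 20% active pulses.
def pulser(v_in, v_cm):
    """Return (sign, active_fraction) for one analog input sample."""
    if v_in > v_cm:
        return 0, 0.80   # sign output low, PWM 80% active
    else:
        return 1, 0.20   # sign output high, PWM 20% active
```
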

5.7. RC

The support of RC is implemented as follows and illustrated in Figure 11. Each column of memristors is used to represent one 16-node reservoir; each PE thus consists of 16 reservoirs to process 16-channel inputs. The bit lines represent spatial channels and the source lines represent temporal states. Each row of memristors thus represents one internal node in the reservoirs, which processes one sample of the input time-series. To process 16 channels, the SC amplifiers are time-division multiplexed.
There are three phases to RC operation. In the first phase, the initial product is formed by the RI. The SC amplifiers can be configured to apply a positive or negative sign (weight) to the sensor input samples; thus, the sensing channel sample s is multiplied by a random sign V to produce V·s(t+1). The product is then summed with the current crossbar output W·x(t), where W (either 0 or 1) is multiplied by the memristor resistance x. This produces the sum V·s(t+1) + W·x(t). Given that the reservoir internal connection coefficients are implemented on the word lines, these can easily be re-configured to support multiple reservoir topologies (e.g., Simple Cycle Reservoir, Delay Line Reservoir, etc.). In the second phase, the product/sum generated in the first phase is used to program the crossbar memristors, which respond to the stimulus with function H; the memristor states become x(t+1). In the third phase, external weights U are applied to the crossbar source lines via the DAC. The crossbar output is then y(t+1) = U·x(t), and the reservoir dynamic is complete.

6. Implementation and Testing

The implementation is shown in Figure 12 and Figure 13, and the test setup is shown in Figure 14. The PE (red outline) measures 1.4 × 1.2 mm. The 16-Channel Front-End (yellow outline) measures 1.1 × 0.6 mm. The silicon is processed to the 5th metal for the full CMOS version and to the 4th metal for the memristor version.
The source line (SL), word line (WL), bit line (BL), and crossbar (XBAR) mode signals are measured from the chip to validate its reconfigurable operations. In Figure 15, the chip was configured to operate in calibration mode by programming the on-chip shift registers. Due to the limitation of the oscilloscope channel count, only one of the sixteen SLs, WLs, and BLs is monitored at a time. During calibration, the XBAR is always in READ mode; as shown in the figure, the reference voltages for memristor resistance (MR) readout are applied via the SL and BL, and the WLs are enabled (active low) one at a time to read out the MR line by line.
Figure 16 shows the measured waveforms when the chip is configured in the initialization (memristor electro-forming and programming) mode. During initialization, the XBAR mode alternates between WRITE and READ to program the memristors to a desired resistance value in a 'write-and-verify' manner. During the WRITE phase, all WLs are enabled, the programming signals are applied via the SLs, and the selection of the memristors for programming is achieved via the BLs. In the READ phase, the WLs are enabled one at a time and the MR readout reference voltages are applied via the SLs and BLs, as in calibration mode.
Figure 17 shows the measured waveforms when the chip is configured in FIR mode. For FIR filtering, the input signal passes through a sample-and-hold delay-line implemented by an on-chip switched-capacitor amplifier array (SCA), as described in [21]. The delayed samples of the input signal are applied via the SLs to the memristors, which realize the FIR filter coefficients. A reference voltage is applied via the BLs to read out the resulting currents through the I2V block shown in Figure 10. Note that the delay-line signal on the SL has troughs, as shown in Figure 17, because of the SCA reset phases. To ensure that the reset phase does not corrupt the resulting current, the memristors are enabled in a discrete-time manner, with the WLs enabled for only half a cycle; the SCA resets are thus only applied while the WLs are disabled.
Figure 18 shows the measured waveforms during TM mode. As explained previously, TM mode operates in a similar manner to FIR mode; the difference is that TM mode has an additional option to enable only one WL at a time, as shown in the figure. The purpose of this option is to facilitate template matching normalization.
Figure 19 shows the measured waveforms during VMM mode for feed forward network computation. The chip is tested with a ramp input signal, which is sampled by the on-chip SCAs and then applied to the SLs. The VMM results are read out via the BLs. As in the FIR and TM modes, the WLs are enabled for only half a cycle to avoid corruption by the SCA reset phases.
Figure 20 shows the measured waveforms during MIS mode. The chip is tested with an input ramp signal, which is sampled and converted to pulse-width modulated (PWM) signals through the on-chip SCA and PWM logic [21]. The PWM pulses are applied to the memristors via the BLs, and the polarity of the pulses is applied via the SLs. As shown in the figure, the SL polarity flips as the input signal (marked in purple) crosses the threshold. In MIS mode, the XBAR alternates between WRITE and READ phases. In the WRITE phase, the PWM pulses are applied to the memristors with one WL enabled at a time. During the READ phase, all WLs are enabled to read out the MR values.
Figure 21 shows the measured waveforms during RC mode. The mode is tested with an input ramp signal (V·s(t+1)) and a 2.5 V common-mode signal that emulates the reservoir dynamic feedback (W·x(t)). The on-chip SCA calculates the sum V·s(t+1) + W·x(t), and the result is applied to the memristors via the SLs, as shown in the figure. In the output phase, the reservoir output layer coefficients are applied on the SLs (via the on-chip DAC) and the results are read out via the BLs with all WLs enabled.
Finally, Figure 22 shows the verification of the readout circuits using fixed test resistances.

7. Discussion and Conclusions

This paper has presented the detailed operating principles and up-scaling strategies of the reconfigurable memristor/CMOS neural signal processor introduced in [21]. While [21] reported pre-silicon verification in software simulation, this paper reports the post-silicon hardware measurement results that validate the reconfigurable operation of the chip. Note that, since the design and verification of the Front-End channels were presented in [16], they are not repeated in this paper. Our next step is to verify the process for integrating the memristors onto the CMOS, shown in Figure 23, and then to test the processing functionalities with the integrated memristors present. Moreover, the system described has been extended to the 2D matrix shown in Figure 24.
The device comprises a 6 × 6 array of PEs, with each PE supporting 16 inputs and storing 16 samples per input (a total of 6 × 6 × 16 × 16 = 9216 = 9 × 1024 samples). The die measures 9.855 × 9.855 mm. The evaluation version shown has 306 bond pads, while the final version has 130. In conclusion, this paper has presented a scalable, multi-core, multi-function, integrated CMOS/memristor sensor interface for neural sensing applications. The system has been demonstrated on silicon as a single-instance implementation supporting 16 channels and 256 samples, and a 2D implementation has been developed supporting 576 channels and 9k samples.
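The capacity figures quoted above follow directly from the array dimensions; a quick arithmetic check:

```python
# PE array dimensions of the upscaled design
cores = 6 * 6                 # 6 x 6 array of Process Elements
inputs_per_pe = 16            # input channels per PE
samples_per_input = 16        # stored samples per input

channels = cores * inputs_per_pe                   # total input channels
capacity = channels * samples_per_input            # total stored samples
```

This yields 576 channels and 9216 samples, i.e. the "9 × 1024" (9k) capacity stated for the 2D implementation.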

Author Contributions

Conceptualization, G.R., S.W. and X.J.; Data curation, S.W.; Formal analysis, G.R.; Funding acquisition, T.P. and S.W.; Investigation, G.R.; Methodology, G.R.; Project administration, S.W.; Resources, T.P.; Supervision, T.P., A.S. and S.W.; Validation, G.R., S.W. and S.S.; Visualization, G.R., S.W. and S.S.; Writing—original draft, G.R.; Writing—review and editing, S.W., A.S., S.S. and T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by EPSRC FORTE Programme Grant (EP/R024642/1), the RAEng Chair in Emerging Technologies under Grant CiET1819/2/93, and the Royal Society under grant IEC/NSFC/223067.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank Shady Agwa for support in digital design flow, Fan Yang for test board design, and Patrick Foster for support in PCB assembly.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AP: Action Potentials
BEOL: Back End of Line
CEF: Centre for Electronics Frontiers
ECoG: Electrocorticography
EEG: Electroencephalography
EP: Evoked Potentials
ERP: Event-Related Potentials
FFN: Feed-Forward Network
FIR: Finite Impulse Response
FORTE: Functional Oxide Reconfigurable Technologies
HF: High Frequency
LFP: Local Field Potentials
MIS: Memristive Integrated Sensing
PE: Process Element
RC: Reservoir Computing
RCN: Reservoir Compute Network
TM: Template Matching
VMM: Vector Matrix Multiplication

References

  1. Drew, L. Decoding the business of brain–computer interfaces. Nat. Electron. 2023, 6, 90–95. [Google Scholar] [CrossRef]
  2. Pei, D.; Vinjamuri, R. Advances in Neural Signal Processing; IntechOpen: London, UK, 2020; ISBN 978-1-83968-396-1. [Google Scholar]
  3. Tsai, C.W.; Jiang, R.; Zhang, L.; Zhang, M.; Wu, L.; Guo, J.; Yan, Z.; Yoo, J. SciCNN: A 0-Shot-Retraining Patient-Independent Epilepsy-Tracking SoC. In Proceedings of the 2023 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 19–23 February 2023. [Google Scholar]
  4. Shin, U.; Ding, C.; Zhu, B.; Vyza, Y.; Trouillet, A.; Revol, E.C.; Lacour, S.P.; Shoaran, M. NeuralTree: A 256-Channel 0.227-μJ/Class Versatile Neural Activity Classification and Closed-Loop Neuromodulation SoC. IEEE J. Solid-State Circuits 2022, 57, 3243–3257. [Google Scholar] [CrossRef] [PubMed]
  5. Chua, A.; Jordan, M.I.; Muller, R. SOUL: An Energy-Efficient Unsupervised Online Learning Seizure Detection Classifier. IEEE J. Solid-State Circuits 2022, 57, 2532–2544. [Google Scholar] [CrossRef]
  6. Wang, Y.; Sun, Q.; Luo, H.; Chen, X.; Wang, X.; Zhang, H. 26.3 A Closed-Loop Neuromodulation Chipset with 2-Level Classification Achieving 1.5 Vpp CM Interference Tolerance, 35 dB Stimulation Artifact Rejection in 0.5 ms and 97.8% Sensitivity Seizure Detection. In Proceedings of the 2020 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 16–20 February 2020; pp. 406–408. [Google Scholar]
  7. O’Leary, G.; Groppe, D.M.; Valiante, T.A.; Verma, N.; Genov, R. NURIP: Neural Interface Processor for Brain-State Classification and Programmable-Waveform Neurostimulation. IEEE J. Solid-State Circuits 2018, 53, 3150–3162. [Google Scholar] [CrossRef]
  8. Appelbaum, L.G.; Shenasa, M.A.; Stolz, L.; Daskalakis, Z. Synaptic plasticity and mental health: Methods, challenges and opportunities. Neuropsychopharmacology 2022, 48, 113–120. [Google Scholar] [CrossRef]
  9. Seidl, A.H. Regulation of Conduction Time along Axons. Neuroscience 2014, 276, 126–134. [Google Scholar] [CrossRef] [PubMed]
  10. Clayton, M.S.; Yeung, N.; Kadosh, R.C. The roles of cortical oscillations in sustained attention. Trends Cogn. Sci. 2015, 19, 188–195. [Google Scholar] [CrossRef]
  11. Omar, E.; Aly, H.H.; Fedawy, M. A Brief introduction to Memristor Device. IJAEBS 2023, 4, 171–198. [Google Scholar] [CrossRef]
  12. Cavallini, M.; Hemmatian, Z.; Riminucci, A.; Prezioso, M.; Morandi, V.; Murgia, M. Regenerable Resistive Switching in Silicon Oxide Based Nanojunctions. Adv. Mater. 2012, 24, 1197–1201. [Google Scholar] [CrossRef]
  13. Gupta, I.; Serb, A.; Khiat, A.; Zeitler, R.; Vassanelli, S.; Prodromakis, T. Real-time encoding and compression of neuronal spikes by metal-oxide memristors. Nat. Commun. 2016, 7, 12805. [Google Scholar] [CrossRef] [PubMed]
  14. Liu, Z.; Tang, J.; Gao, B.; Li, X.; Yao, P.; Lin, Y.; Liu, D.; Hong, B.; Qian, H.; Wu, H. Multichannel parallel processing of neural signals in memristor arrays. Sci. Adv. 2020, 6, 2–10. [Google Scholar] [CrossRef] [PubMed]
  15. Mifsud, A.; Shen, J.; Feng, P.; Xie, L.; Wang, C.; Pan, Y.; Maheshwari, S.; Agwa, S.; Stathopoulos, S.; Wang, S.; et al. A CMOS-based Characterisation Platform for Emerging RRAM Technologies. In Proceedings of the 2022 IEEE International Symposium on Circuits and Systems (ISCAS), Austin, TX, USA, 27 May–1 June 2022. [Google Scholar]
  16. Jiang, X.; Sbandati, C.; Reynolds, G.; Wang, C.; Papavassiliou, C.; Serb, A.; Prodromakis, T.; Wang, S. A Neural Recording System With 16 Reconfigurable Front-end Channels and Memristive Processing/Memory Unit. In Proceedings of the 2023 IEEE NEWCAS Conference, Edinburgh, UK, 26–28 June 2023. [Google Scholar]
  17. Wan, W.; Kubendran, R.; Schaefer, C.; Eryilmaz, S.B.; Zhang, W.; Wu, D.; Deiss, S.; Raina, P.; Qian, H.; Gao, B.; et al. A compute-in-memory chip based on resistive random-access memory. Nature 2022, 608, 504–512. [Google Scholar] [CrossRef]
  18. Shi, Y.; Ananthakrishnan, A.; Oh, S.; Liu, X.; Hota, G.; Cauwenberghs, G.; Kuzum, D. High Throughput Neuromorphic Brain Interface with CuOx Resistive Crossbars for Real-time Spike Sorting. In Proceedings of the 2021 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 11–15 December 2021; pp. 16.5.1–16.5.4. [Google Scholar]
  19. Abhang, P.; Mehotra, S. Technological Basics of EEG Recording and Operation of Apparatus. In Introduction to EEG- and Speech-Based Emotion Recognition; Academic Press: Cambridge, MA, USA, 2016. [Google Scholar]
  20. Liu, Z.; Tang, J.; Gao, B.; Yao, P.; Li, X.; Liu, D.; Zhou, Y.; Qian, H.; Hong, B.; Wu, H. Neural signal analysis with memristor arrays towards high-efficiency brain–machine interfaces. Nat. Commun. 2020, 11, 4234. [Google Scholar] [CrossRef] [PubMed]
  21. Reynolds, G.; Jiang, X.; Serb, A.; Prodromakis, T.; Wang, S. An Integrated CMOS/Memristor Bio-Processor for Re-configurable Neural Signal Processing. In Proceedings of the 2023 IEEE Biomedical Circuits and Systems Conference (BioCAS), Toronto, ON, Canada, 19–21 October 2023. [Google Scholar]
  22. Moon, J.; Ma, W.; Shin, J.H.; Cai, F.; Du, C.; Lee, S.H.; Lu, W.D. Temporal data classification and forecasting using a memristor-based reservoir computing system. Nat. Electron. 2019, 2, 480–487. [Google Scholar] [CrossRef]
  23. Zhong, Y.; Tang, J.; Li, X.; Gao, B.; Qian, H.; Wu, H. Dynamic memristor-based reservoir computing for high-efficiency temporal signal processing. Nat. Commun. 2021, 12, 408. [Google Scholar] [CrossRef] [PubMed]
  24. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef] [PubMed]
  25. Rodan, A.; Tiňo, P. Minimum complexity echo state network. IEEE Trans. Neural Netw. 2011, 22, 131–144. [Google Scholar] [CrossRef]
Figure 1. Acquisition flow, showing the four functional layers.
Figure 2. Brain probe types (not to scale). Multiple probes at multiple locations may be employed simultaneously.
Figure 3. Brain signal types associated with location of measurement.
Figure 4. The memristor (in red) and other passive elements with respect to the fundamental electrical quantities.
Figure 5. Memristor-based analysis concept. Neural sensor signals are applied to a memristor crossbar structure, which can be configured to perform different modes of operation.
Figure 6. General architecture. All blocks except the Front-End are contained within a Process Element (PE) block. The Reconfigurable Interface (RI) configures the PE sub-blocks to facilitate the various operating modes.
Figure 7. General scheme of a Feed Forward Neural Network.
Figure 8. General scheme of a Reservoir Compute Network.
Figure 9. PE Chaining, showing the different manner in which PE blocks may be interconnected. Analog connections are always from a PE to its nearest neighbor, whereas digital connections are arbitrary.
Figure 10. Summary of the internal stimulus used to accomplish each operating mode. The system can be easily scaled with crossbar size.
Figure 11. RC sequence.
Figure 12. Front-End and PE Layout.
Figure 13. Die Image.
Figure 14. Test Setup.
Figure 15. Measured chip output waveforms (1T1R crossbar control voltages) under calibration mode.
Figure 16. Measured chip output waveforms (1T1R crossbar control voltages) under initialization mode.
Figure 17. Measured chip output waveforms (1T1R crossbar control voltages) under FIR filtering mode.
Figure 18. Measured chip output waveforms (1T1R crossbar control voltages) under TM mode.
Figure 19. Measured chip output waveforms (1T1R crossbar control voltages) under VMM mode.
Figure 20. Measured chip output waveforms (1T1R crossbar control voltages) under MIS mode.
Figure 21. Measured chip output waveforms (1T1R crossbar control voltages) under RC mode.
Figure 22. Verification of readout when measuring fixed resistances. Delta is percentage error between measured resistance (Rm) and test resistance (Rt).
Figure 23. Image of memristor integration onto CMOS.
Figure 24. Layout of an upscaled processor design.

Share and Cite

MDPI and ACS Style

Reynolds, G.; Jiang, X.; Wang, S.; Serb, A.; Stathopolous, S.; Prodromakis, T. A Scalable, Multi-Core, Multi-Function, Integrated CMOS/Memristor Sensor Interface for Neural Sensing Applications. Electronics 2025, 14, 30. https://doi.org/10.3390/electronics14010030

