1. Introduction
Electrophysiology, the recording and analysis of the electrical fields generated by the electrical activity of cells, remains one of the key tools used in neuroscience to investigate cognitive functions. Modern electrophysiology techniques allow the recording of large-scale cell populations, capturing both single-neuron dynamics and aggregated activity [1]. The resulting signals span a large range of frequencies, with useful information spread throughout the spectrum, which makes their processing a challenge.
Combined with appropriate behavioral tasks, electrophysiological recordings in freely moving animals have provided invaluable information for understanding sensory processing, spatial navigation, memory formation, and decision making, among other examples. However, electrophysiological studies in behaving animals have traditionally been performed under well-controlled but severely constrained laboratory conditions, in relatively small arenas or task apparatus, and involving a limited and often artificial (e.g., pressing a lever) repertoire of behaviors. Therefore, more natural and elaborate experimental conditions in ecologically meaningful contexts are required [2].
The need for open and meaningful spaces conflicts with the tethered nature of most electrophysiology systems, which require a physical connection between the electrodes implanted in the experimental subject and the recording equipment. While this does not pose an issue for small, enclosed spaces and simple maze topologies [1,3,4,5,6,7], it precludes large arenas with enriched environments and social experiments with complex interactions between conspecifics (e.g., [8]). Wiring limits the distance the subjects are able to travel and can become tangled with environmental objects or damaged by the animals themselves. To overcome these shortcomings, wireless electrophysiology systems have been developed [9,10].
There are two main approaches to wireless acquisition: data loggers and radio transmission. Data loggers are battery-powered devices that store all recorded data in a local non-volatile storage medium. They have been used in a variety of animals, from fish [11] to birds [12]. The main drawbacks of this approach are its limited storage capacity and the impossibility of performing closed-loop experiments that depend on real-time data.
Radio transmission sends data to a remote receiver in real time. Multiple methods exist for encoding data in a radio stream. Analog neural data can be modulated, with multiple channels merged via time multiplexing and sent over a carrier frequency [13,14]. While an analog transmitter requires less energy [13], analog signals are more susceptible to noise than digital signaling, and the absence of an arbitration protocol prevents multiple devices from sharing the same frequencies.
Digital transmission can use simple one-directional carrier modulation [9], which by itself adds noise resistance, or more complex protocols, which can add extra features such as synchronization, arbitration, or bidirectional control [15]. Widespread digital protocols include Bluetooth [16], a low-power protocol designed for data rates up to 2 Mbit/s; Bluetooth Low Energy [17], a slightly slower (up to 1.37 Mbit/s) version with reduced power needs; and Wi-Fi 802.11b/g [18], a high-speed protocol with data rates of up to 54 Mbit/s and advanced arbitration capabilities, but with higher power requirements. Some projects have developed custom protocols, allowing the power-performance trade-off to be fine-tuned [19].
Power is the main bottleneck of wireless devices, limiting data rates and operating life. Lowering power consumption allows longer operation, increased data rates, and reduced battery weight, so minimizing power requirements is a goal for every wireless device. In radiofrequency systems, the bottleneck stems from the power demands of high-bandwidth data transmission [16]. Different approaches exist to reduce energy consumption in these devices. For example, developing the core hardware as a custom Application-Specific Integrated Circuit (ASIC) allows electronics highly optimized for the task [9,13], at the expense of increased development and production costs. Another line of improvement, since the bulk of the power budget goes to radiofrequency transmission, is the development of specialized protocols that can outperform generalist, commercial ones [20]. Research is also ongoing in fields such as antenna optimization [21,22], both to further reduce the power needs of radiofrequency signals and to optimize wireless power transmission.
A different approach, compatible with the previous ones and applicable to both data loggers and radiofrequency transmission, is to reduce the bandwidth needs of the data. A neural recording that includes fast activity transients, such as spikes, requires a sampling rate of at least 20 kS/s [23,24]. Combined with the multichannel acquisition typical of modern high-density electrophysiology recordings, this results in bandwidths of tens of megabits per second [22]. Compression techniques can reduce these bandwidth needs, which in turn decreases the power consumption of the wireless transmitter.
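As a back-of-the-envelope illustration (the channel count and bit depth below are illustrative assumptions, not figures from this work), the raw rate of a multichannel recording adds up quickly:

```python
# Back-of-the-envelope raw bandwidth for a multichannel recording.
# Channel count and ADC resolution are illustrative assumptions.
SAMPLE_RATE = 20_000      # samples per second per channel (20 kS/s)
BITS_PER_SAMPLE = 16      # typical ADC resolution
CHANNELS = 64             # hypothetical high-density headstage

raw_bps = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
print(f"{raw_bps / 1e6:.2f} Mbit/s")  # 20.48 Mbit/s
```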
A wide variety of compression methods can be applied to neural signals [25]. An important consideration when choosing a compression algorithm for a wireless implant is its complexity and power requirements. Some algorithms, such as wavelet compression [26], can yield excellent compression ratios but require circuitry capable of handling advanced mathematical operations. This creates two disadvantages. First, extra computational needs often translate into extra power consumption, which diminishes the overall power saving. Second, the circuit requirements limit the number of devices on which such algorithms can be implemented: small, ultra-low-power commercial Integrated Circuits (ICs) commonly lack these advanced capabilities, leaving such algorithms feasible only on higher-end devices.
In contrast, algorithms with lower computational requirements, and lower compression ratios due to their simplicity, are often applied to only a particular part of the signal spectrum. For example, low-frequency Local Field Potentials (LFPs) tend to have high inter-channel redundancy, making high compression ratios achievable with simple techniques [27]. High-frequency spikes, in contrast, are sparse events, so spike-detection algorithms can be used and compression performed only on the discrete, individual events [28,29,30,31]. Both techniques can be combined, with the same device compressing and sending LFPs and spikes separately [32,33]. These approaches, however, cannot provide a complete, continuous view of the entire acquired signal.
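The event-based schemes mentioned above rest on detectors of the following kind. This is a minimal amplitude-threshold sketch (the threshold value and refractory handling are simplified assumptions, not a method from the cited works):

```python
def detect_spikes(samples, threshold, refractory=20):
    """Return indices where |signal| crosses `threshold`,
    skipping `refractory` samples after each detection."""
    events = []
    i = 0
    while i < len(samples):
        if abs(samples[i]) >= threshold:
            events.append(i)
            i += refractory  # skip the rest of the putative spike waveform
        else:
            i += 1
    return events

# Toy trace: baseline noise with two large deflections.
trace = [0, 1, -2, 1, 60, 80, 30, 2, 1, 0, -70, -90, -20, 3]
print(detect_spikes(trace, threshold=50, refractory=3))  # [4, 10]
```

Only the detected windows would then be compressed and transmitted, which is why such schemes cannot reconstruct the continuous signal between events.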
A fundamental characteristic of a compression algorithm is how accurately the original signal can be reconstructed after compression. Lossless algorithms produce, after decompression, signals identical to the original ones, whereas lossy algorithms, which generally achieve higher compression, introduce signal distortions [25,34].
An example of a low-resource lossy compression algorithm is compressed sensing [35,36]. This method samples a signal below the Nyquist rate, thus reducing data size [37] with negligible power or resource requirements in the encoder. The computational burden lies entirely in the decoder, which must reconstruct the signal through complex mathematical operations [35,36,38,39]. Compressed sensing is, however, a lossy algorithm that introduces distortions in the data; moreover, both its compression efficiency and its signal distortion are affected by acquisition noise [40]. While the simplicity of the encoder makes it a good candidate for wireless devices [31,33,41], the distortions it introduces limit its applicability.
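The asymmetry between encoder and decoder can be made concrete with a sketch of the encoder side alone: compressed-sensing encoding is just a multiplication by a fixed measurement matrix (the sizes and the Bernoulli matrix below are illustrative assumptions; the costly sparse-reconstruction decoder is omitted):

```python
import random

random.seed(0)

N = 128   # original window length (samples)
M = 32    # number of measurements kept (4x reduction)

# Fixed +/-1 Bernoulli measurement matrix, known to both sides.
phi = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]

def cs_encode(window):
    """y = phi @ x: the entire computational cost on the implant side."""
    return [sum(p * x for p, x in zip(row, window)) for row in phi]

window = [i % 7 for i in range(N)]  # toy signal
y = cs_encode(window)
print(len(y), "measurements from", N, "samples")
```

The decoder must solve an underdetermined system (typically via L1 minimization) to recover the window, which is where both the computational cost and the reconstruction distortion appear.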
Here we describe a lossless compression algorithm for brain electrophysiology, able to reduce bandwidth and, by extension, transmission power requirements, along with a novel hardware implementation focused on resource minimization. It compresses data to 40–60% of its original size with no signal distortion and requires little processing power. The algorithm is based on a combination of delta compression and Huffman coding, both of which require little computational power and thus add minimal extra power needs. The implementation is optimized to minimize hardware resources and requires no specialized hardware, so it can be used in a wide range of devices, including low-cost or small ultra-low-power ICs. This, coupled with its ability to be configured for any number of channels and sampling rates, offers great flexibility for designing battery-powered wireless acquisition devices suited to different experimental needs.
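The two stages are simple enough to sketch in plain Python (an illustrative re-implementation, not the authors' hardware design; here the Huffman dictionary is built from the data itself, whereas a hardware encoder would use a precomputed one):

```python
import heapq
from collections import Counter
from itertools import count

def delta_encode(samples):
    # First sample kept absolute; the rest become small differences.
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

def huffman_dict(symbols):
    """Build a prefix-free code from symbol frequencies."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {s: ""}) for s, f in Counter(symbols).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (fa + fb, next(tick), merged))
    return heap[0][2]

samples = [100, 101, 101, 103, 102, 102, 101, 100]
deltas = delta_encode(samples)           # mostly small values around zero
code = huffman_dict(deltas)              # short codes for frequent deltas
bits = "".join(code[d] for d in deltas)

# Lossless round trip back to the original samples.
inv = {v: k for k, v in code.items()}
decoded, buf = [], ""
for bit in bits:
    buf += bit
    if buf in inv:
        decoded.append(inv[buf])
        buf = ""
assert delta_decode(decoded) == samples
```

Delta coding concentrates the symbol distribution around zero, which is exactly what makes the subsequent Huffman stage effective.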
In addition to the algorithm, a low-overhead communications protocol was designed to allow compressed data to be efficiently shared between components in cases where the compression device is separate from the wireless transmitter or storage controller. Finally, a hardware prototype implementing all the designs was built and tested in vivo, validating the compression ratios and the resulting reduction in transmission power consumption, which decreased in roughly the same proportion as the bandwidth.
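One way such a protocol can tolerate packet loss (a hedged sketch under assumed framing, not the exact protocol specified in this work) is to start each packet with the absolute value of its first sample, so delta decoding can resynchronize at every block boundary:

```python
def make_packets(samples, block=4):
    """Each packet: (sequence number, absolute first sample, deltas).
    Losing a packet corrupts only that block; the next one resyncs."""
    packets = []
    for seq, i in enumerate(range(0, len(samples), block)):
        chunk = samples[i:i + block]
        deltas = [b - a for a, b in zip(chunk, chunk[1:])]
        packets.append((seq, chunk[0], deltas))
    return packets

def decode_packets(packets):
    out = []
    for _seq, first, deltas in packets:
        v = first
        out.append(v)
        for d in deltas:
            v += d
            out.append(v)
    return out

data = [10, 12, 11, 13, 14, 13, 15, 16, 18]
pkts = make_packets(data)
assert decode_packets(pkts) == data
# Drop packet 1: the stream still recovers from packet 2 onward.
survivors = [p for p in pkts if p[0] != 1]
print(decode_packets(survivors))  # [10, 12, 11, 13, 18]
```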
Figure 1 shows an overview of the developed systems and devices.
5. Discussion
Studying complex and ecologically meaningful behaviors in animals is necessary to move experimental cognitive neuroscience forward [54], but it requires experimental conditions closer to natural ones, or even experiments in the real world. This often implies large spaces filled with elements such as obstacles, hiding places, or even burrows, and environments shared by multiple animals. All these elements render devices tethered to the animals impractical, as the wiring would limit mobility and animal-animal or animal-context interactions.
Wireless implants can record brain activity during extended periods of time while allowing free movement of animals in complex environments, opening the possibility of a new generation of neurophysiological investigations in behaving animals.
For a wireless device, autonomy is crucial, and power usage is often the most limiting factor. Wireless data transmission has large power requirements that scale with the data rate, as higher rates require faster and more powerful signal processing. Reducing the data rate lowers the power needs, either by using slower, less power-demanding protocols or by powering up the transmitter only for brief periods to send small bursts while keeping it in a powered-down, low-power state most of the time.
Compression is an efficient technique to reduce the data rate, but only if the power needed for compression is lower than the power saved by the rate reduction. In addition, some compression methods can compromise the integrity of the data. A lossless compression system for brain electrophysiology must faithfully transmit all the information contained in the wide range of the signals, which spans from 0 Hz to several kHz. This is the case for the compression algorithm presented here, which has demonstrated both a low energy footprint and reduced power consumption during wireless transmission. This reduction was demonstrated on a regular Wi-Fi IEEE 802.11g chip. While useful for testing, this device is designed for high bandwidth rather than low power, with high energy consumption in static link usage. Using custom wireless protocols or specialized low-power devices would further reduce transmission power needs. Especially interesting are the recent developments in IoT-related wireless protocols and devices, such as IEEE 802.11ah [55], designed for low-power transmission while supporting a variety of data rates.
Although this compression scheme was originally designed for wireless transmission, it could easily be adapted to other electrophysiology applications. Data loggers are an immediate example: the algorithm would add negligible extra power and resource requirements while doubling the effective capacity of storage devices, thus greatly increasing system autonomy. Wired acquisition systems can also benefit from compression, as link bandwidth often limits the maximum channel count in headstages. An example of such an ultra-high-channel-count system that could benefit from integrating more probes per headstage is Neuropixels, a family of high-density CMOS-based neural probes [56,57].
This flexibility of usage is reinforced by the low-resource nature of the development. Its minimal hardware requirements make the algorithm easy to fit into existing designs and implementable in a wide variety of devices. This also matters for power consumption: unless highly optimized custom chips are used, devices with more hardware resources tend to be larger and draw more power. Its low resource usage makes the algorithm implementable in simple, low-power, commercial chips.
The focus on implementability in a diverse range of low-power, commercially available devices, including low-end ones, imposes hard limits on the algorithm complexity and, by extension, its performance. Similar algorithms focusing on lossless or near-lossless compression can achieve ratios near 20% [58] by separating LFPs and single spikes and compressing them independently. This has the downside of defining a hard frequency threshold, with the risk of losing data in the middle band; moreover, band separation requires digital filter circuitry, which might not be present in all commercial devices. Conversely, results similar to ours, approximately 50% reduction, can be achieved by exploiting spatial redundancy [59]. Although this approach requires some extra resources, which are more easily accommodated in a custom-made ASIC, it could also be used on much existing hardware. Coupling both algorithms could yield better results by exploiting both temporal and spatial characteristics. For comparison, lossy algorithms can reduce data to below 10% of the original size [41] by introducing distortion into the neural signals, or by focusing on specific parts of them, such as compressing and transmitting spikes only [39].
The developed transmission protocol further reinforces the flexibility of the algorithm and its implementation by maintaining long-term signal integrity in cases where data losses are expected. This can occur with ultra-low-power wireless transmission protocols, where the drawback of spending less energy on link maintenance is the possibility of short transmission interruptions, with their associated packet losses. Being able to recover from such events makes the complete design suitable for almost any situation.
Data integrity and compression efficiency are two elements that must always be balanced. In this work, the compression algorithm was developed with the former in mind, being virtually lossless, with compression noise below the noise floor of the acquisition chip. There are, however, ways to increase the compression ratio at the cost of introducing noise into the signal. One is in the delta coding step. As seen in Figure 4(B2), large delta values are rare and often the result of acquisition artifacts. These uncommon, large values could be removed by trimming the most significant bits, further reducing word width [60]. Any time such a large jump occurred, whether naturally or through an acquisition artifact, the DC offset of the signal would drift from its real value while most of its characteristics were maintained; the signal would then be corrected at the start of the following block. Another way to increase compression would be to trim even more low-order bits before delta coding. This would result in a loss of resolution, with an equivalent quantization noise that grows with the number of trimmed bits. Conversely, if an acquisition chip with a lower noise floor were used, the number of discarded bits could be lowered, albeit with a slight impact on compression ratios.
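Assuming rounding to the nearest step (a sketch; the original trimming scheme may differ), removing n low-order bits behaves like adding uniform quantization noise with an RMS of roughly 2^n/sqrt(12) LSB, which can be checked numerically:

```python
import random
from math import sqrt

random.seed(1)
n = 3                      # number of low bits trimmed (assumption)
q = 1 << n                 # quantization step, in LSB units

samples = [random.randrange(0, 1 << 16) for _ in range(50_000)]
trimmed = [q * round(s / q) for s in samples]  # round to nearest multiple of 2^n

rms = sqrt(sum((a - b) ** 2 for a, b in zip(samples, trimmed)) / len(samples))
print(f"measured {rms:.3f} LSB, predicted {q / sqrt(12):.3f} LSB")
```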
Compression efficiency can also be improved without degrading signal quality by optimizing the Huffman dictionary. Section 4.3 shows how building a customized dictionary from data previously recorded in the same experimental animals can increase compression. Understanding the specific factors behind these improvements could help increase performance further; they are likely related to the physical properties of the experiment, such as electrode impedance and acquisition rate, which affect how the signal varies over time and thus the output of the delta coding. More research on this topic is needed to further optimize the procedure presented here.
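The benefit of a fitted dictionary can be illustrated on synthetic delta distributions (the signals, the escape-code width, and the helper functions below are illustrative assumptions, not the recordings of Section 4.3): a code built from matching delta statistics spends fewer bits per sample than one built from a mismatched distribution.

```python
import heapq
import random
from collections import Counter
from itertools import count

def huffman_lengths(freqs):
    """Code length per symbol for a Huffman code built from `freqs`."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (fa + fb, next(tick),
                              {s: l + 1 for s, l in {**a, **b}.items()}))
    return heap[0][2]

def avg_bits(deltas, lengths, fallback=16):
    # Symbols absent from the dictionary fall back to a fixed-width escape.
    return sum(lengths.get(d, fallback) for d in deltas) / len(deltas)

random.seed(2)
narrow = [round(random.gauss(0, 2)) for _ in range(20_000)]   # "same animal"
wide = [round(random.gauss(0, 30)) for _ in range(20_000)]    # mismatched stats

fitted = huffman_lengths(Counter(narrow))
mismatched = huffman_lengths(Counter(wide))
print(avg_bits(narrow, fitted), "<", avg_bits(narrow, mismatched))
```

The gap between the two averages is a toy analogue of the improvement reported for dictionaries tailored to each animal's recordings.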