Article

A Spiking Neural Network Based Wind Power Forecasting Model for Neuromorphic Devices

by Juan Manuel González Sopeña 1,*, Vikram Pakrashi 2,3,4 and Bidisha Ghosh 1,5

1 QUANT Group, Department of Civil, Structural and Environmental Engineering, Trinity College Dublin, D02 PN40 Dublin, Ireland
2 UCD Centre for Mechanics, Dynamical Systems and Risk Laboratory, School of Mechanical & Materials Engineering, University College Dublin, D04 V1W8 Dublin, Ireland
3 SFI MaREI Centre, University College Dublin, D04 V1W8 Dublin, Ireland
4 The Energy Institute, University College Dublin, D04 V1W8 Dublin, Ireland
5 CONNECT: SFI Research Centre for Future Networks & Communications, Trinity College Dublin, D02 PN40 Dublin, Ireland
* Author to whom correspondence should be addressed.
Energies 2022, 15(19), 7256; https://doi.org/10.3390/en15197256
Submission received: 5 September 2022 / Revised: 25 September 2022 / Accepted: 28 September 2022 / Published: 2 October 2022
(This article belongs to the Special Issue Intelligent Forecasting and Optimization in Electrical Power Systems)

Abstract

Many authors have reported the use of deep learning techniques to model wind power forecasts. For shorter-term prediction horizons, the training and deployment of such models is hindered by their computational cost. Neuromorphic computing provides a new paradigm to overcome this barrier through the development of devices suited for applications where latency and low energy consumption play a key role, as is the case in real-time short-term wind power forecasting. The use of biologically inspired algorithms adapted to the architecture of neuromorphic devices, such as spiking neural networks, is essential to maximize their potential. In this paper, we propose a short-term wind power forecasting model based on spiking neural networks adapted to the computational abilities of Loihi, a neuromorphic device developed by Intel. A case study is presented with real wind power generation data from Ireland to evaluate the ability of the proposed approach, reaching a normalised mean absolute error of 2.84% for one-step-ahead wind power forecasts. The study illustrates the plausibility of developing neuromorphic devices aligned with the specific demands of the wind energy sector.

1. Introduction

A large number of machine learning (ML) and deep learning (DL) models have been developed and applied to time series data of a varied nature for tasks such as forecasting [1], classification [2], and clustering [3]. This trend has also been observed in the field of wind power forecasting (WPF) [4], particularly in the use of artificial neural networks (ANNs) [5], which are usually trained with the backpropagation algorithm [6]. Recurrent neural networks, such as gated recurrent units (GRUs) [7] and long short-term memory (LSTM) neurons [8], can learn temporal features of wind data, whereas convolutional neural networks (CNNs) capture spatial ones [9]. Other ML algorithms that have been applied in the literature are support-vector machines [10], random forests [11], gradient boosting machines [12], and neuro-fuzzy models [13,14]. DL methods such as deep neural networks are built by stacking multiple layers between the input and output layers to extract higher-level features from the data [15]. Deep neural architectures such as deep belief networks [16], deep convolutional networks [17], and N-BEATS [18] have been applied in the WPF literature. Furthermore, the abilities of ML/DL as a modelling tool have proven valuable for solar power forecasting [19] and renewable energy systems [20].
Accurate WPFs can be estimated using ML/DL architectures considering different types of data collected at a wind farm [21,22]. However, such models may be associated with a high computational cost, a critical factor for edge computing [23], including applications for renewable energy [24]. For instance, the low latency inherent in neuromorphic devices can be critical for transmission system operators managing the grid in real time and for the decision-making process of traders participating in electricity markets, specifically when correcting their positions in intraday markets. Thus, neuromorphic computing provides an alternative to the computational complexity of ML/DL models [25] through the development of devices inspired by the energy-efficient nature of biological systems, such as Intel's Loihi chip [26]. The architecture of spiking neural networks (SNNs) [27] more closely resembles that of biological neurons, making them well suited for implementation on neuromorphic devices to unleash their potential in terms of low latency and lower energy consumption. However, training SNNs remains a challenge, as the well-known backpropagation algorithm cannot be applied due to the non-differentiable nature of spikes. The current approaches to train spiking DL algorithms can be broadly divided into online and offline approaches [28]. Online approaches first implement an SNN in neuromorphic hardware, leveraging on-chip plasticity to train the spiking network and evolve its parameters as new data arrive [29]. This category includes online approximations of the backpropagation algorithm [30,31] and evolving SNNs [32]. In offline approaches, by contrast, the SNN is trained before the model is deployed. These can be further divided into two categories, depending on how the training stage is performed. One possibility is to train a conventional ANN using the backpropagation algorithm and later map the parameters onto an equivalent SNN model [33]. This approach is known as ANN-to-SNN conversion. Alternatively, a direct training approach uses a variation of error backpropagation to directly optimize the parameters of an SNN [34].
In addition, the research community has been developing specific software platforms to implement applications based on SNNs. For instance, Nengo [35] is software based on the principles of the Neural Engineering Framework (NEF), a theoretical framework to implement large-scale neural models with cognitive abilities [36]. This software was later extended with the sister library NengoDL [37], which aims to combine the principles of neuromorphic modelling with the well-known deep learning framework TensorFlow [38] to build deep spiking neural models by ANN-to-SNN conversion. Alternatively, other frameworks can directly train SNNs, such as the Spike Layer Error Reassignment (SLAYER) algorithm proposed by Shrestha and Orchard [39]. Recently, in October 2021, Intel's Neuromorphic Computing Lab released the first version of Lava [40], an open-source software framework to implement neuromorphic applications for the Intel Loihi architecture [41].
The features of SNNs are maximized within the framework provided by neuromorphic computing. However, no study to date has realistically attempted to model short-term WPFs using SNNs while considering the current computational abilities of neuromorphic devices. It should be remembered that neuromorphic computing is still in its infancy, so the goal is to reach an acceptable level of performance to build up our knowledge regarding the implementation of spiking-based models in WPF, not to outperform the current well-established neural network models [28]. Therefore, we propose an SNN model for short-term WPF, adapted to the hardware capacity of current state-of-the-art neuromorphic devices, particularly the neuromorphic chip Loihi developed by Intel. The aim of this study is not solely to achieve highly accurate WPFs, but also to design WPF models that efficiently leverage the power efficiency of neuromorphic processors. The proposed forecasting approach was designed by applying the modelling framework provided by NengoDL to build spiking neuron models, and NengoLoihi, a complementary library, to implement such models on Loihi hardware.
The rest of this paper is structured as follows. Section 2 describes the ANN-to-SNN conversion method used to train spiking neural networks, as well as the spiking model architecture tailored to Loihi hardware. Section 3 presents a case study using this methodology for short-term WPF, using real data from an Irish wind farm. Section 4 contains the concluding remarks and the scope for future research work.

2. Methodology

NengoDL [37] is a modelling framework that combines tools to design biological neuronal models with the optimization methods used to train ML/DL models. Such optimization methods are usually incompatible with SNNs, as spikes are not differentiable. NengoDL links SNNs and these optimization methods by performing the necessary transformations to apply the ANN-to-SNN conversion method proposed by Hunsberger and Eliasmith [42], which allows for the use of a rate-based version of the spiking model in the training stage and the SNN for inference. The design of this rate-based approximation is key to successfully mapping its parameters onto a spiking network, so the parameters of the network must be carefully tuned to ensure a minimal loss of performance during the conversion, and the architecture of the model must be tailored to subsequently build the network on Loihi hardware. Six steps were followed to build and evaluate the performance of our proposed SNN model within the framework provided by NengoDL, as follows:
  • Build the non-spiking neural model as usual. The network must be designed considering the specific requirements for its implementation on Loihi hardware, such as the communication with the chip.
  • Train the equivalent rate-based network with the methodology described by Hunsberger and Eliasmith [42], the default method implemented in NengoDL to train SNNs.
  • Replace the activation functions with their spiking counterparts. We used spiking Rectified Linear Unit (ReLU) activations for the inference process. The activation profile of this function is restricted by the discretization required for the Loihi chip [43], leading to discrepancies compared to the theoretical spiking ReLU activation (Figure 1). Such discrepancies increase for higher firing rates due to this discretization. Furthermore, the Loihi chip can only fire a spike once per timestep, limiting its firing rate to a maximum of 1000 Hz. This constraint does not exist otherwise, and multiple spikes could, in theory, be fired simultaneously and exceed that value [44].
  • Run the network using the NengoDL framework, setting parameters such as the number of timesteps that each input will present to the spiking model, allowing for the network to settle and spike in the given timeframe, and the firing rate scale, letting the network spike at a higher rate. These preliminary results will help us monitor the neural activities and tune the parameters of the SNN.
  • Once an acceptable model performance is reached, we need to configure some additional parameters to set up the SNN for Loihi and simulate it for either Loihi hardware or the emulator [45] to replicate the chip’s behavior. This is achieved with the extra functionalities provided by the library NengoLoihi.
  • Collect and evaluate the results. One-step-ahead point predictions are calculated, and the normalized mean absolute error (NMAE) [46] is the metric used to measure the accuracy of these forecasts.
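As a concrete illustration of the metric used in the last step, the NMAE can be sketched as follows. Normalising the mean absolute error by the installed capacity is a common convention in wind power forecasting; the capacity and power values below are made up for illustration and are not taken from the paper.

```python
# Normalized mean absolute error (NMAE) in percent: the mean absolute error
# scaled by a normaliser, here assumed to be the installed capacity.

def nmae(y_true, y_pred, capacity):
    """NMAE (%) of forecasts y_pred against observations y_true."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return 100.0 * mae / capacity

# Example with made-up numbers (power in MW, 10 MW installed capacity):
nmae_pct = nmae([4.0, 5.0, 6.0], [4.2, 4.8, 6.4], capacity=10.0)
# mae = (0.2 + 0.2 + 0.4) / 3, so NMAE is roughly 2.67%
```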
In the remainder of this section, we introduce how the ANN-to-SNN conversion is performed and the model architecture chosen to forecast wind power.

2.1. ANN-to-SNN Conversion

The non-differentiable nature of spikes impedes the use of the backpropagation algorithm to train spiking neurons [47]. ANN-to-SNN conversion circumvents this by mapping the parameters of a trained ANN onto an equivalent SNN. Thus, the main challenge is how to train the non-spiking model so that there is only a small loss of performance in the conversion process. The first step is choosing an adequate spiking activation function. Cao et al. [48] established an equivalence between the ReLU activation function [49] and the spiking neuron's firing rate. The ANN-to-SNN conversion method implemented in NengoDL was proposed by Hunsberger and Eliasmith [42]. This method is valid both for linear activation functions (such as ReLU) and for non-linear ones, such as the leaky integrate-and-fire (LIF) neuron, achieved by smoothing the equivalent rate equation employed to train the ANN. To understand this, let us look at the equation governing the dynamics of an LIF neuron:
$$\tau_{RC}\,\frac{\mathrm{d}v(t)}{\mathrm{d}t} = -v(t) + I(t)$$
where $\tau_{RC}$ is the membrane time constant, $v(t)$ is the membrane voltage, and $I(t)$ is the input current. The neuron fires a spike when the voltage reaches a certain threshold $V$, after which the potential is reset for a certain period of time (known as the refractory period $\tau_{ref}$). The dynamics of the neuron are recovered after the refractory period $\tau_{ref}$ has ended. If a constant input current $j$ is given to the neuron, the steady-state firing rate (i.e., the inverse of the time it takes the neuron to reach the threshold and fire a spike) can be determined as:
$$r(j) = \left[\tau_{ref} + \tau_{RC}\,\log\!\left(1 + \frac{V}{\rho(j - V)}\right)\right]^{-1}$$
where $\rho(x) = \max(x, 0)$. However, this function is not differentiable everywhere, so the LIF rate equation is softened to address this problem and allow for the use of the backpropagation algorithm [42]. The hard maximum $\rho$ is replaced by a soft maximum $\rho_1$, defined as:
$$\rho_1(x) = \gamma\,\log\left(1 + e^{x/\gamma}\right)$$
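The equations above can be checked numerically. The sketch below Euler-integrates the LIF membrane equation and compares the empirical spike count over one second against the steady-state rate, computed with the softened maximum in place of the hard one. All parameter values are illustrative choices, not those used in the paper.

```python
import math

# Numerical check of the LIF equations: Euler-integrate
# tau_RC * dv/dt = -v(t) + I(t) and compare the empirical firing rate
# against the steady-state rate r(j) built on the soft maximum rho_1.

def rho_soft(x, gamma=0.01):
    """Soft maximum rho_1(x) = gamma*log(1 + exp(x/gamma)); tends to max(x, 0) as gamma -> 0."""
    return gamma * math.log1p(math.exp(x / gamma))

def lif_rate(j, tau_ref=0.002, tau_rc=0.02, v_th=1.0):
    """Steady-state LIF firing rate (Hz) for a constant input current j."""
    return 1.0 / (tau_ref + tau_rc * math.log1p(v_th / rho_soft(j - v_th)))

def simulate_lif(current, t_end=1.0, dt=0.001, tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Euler integration of the membrane equation; returns spike times."""
    v, refractory, spikes, t = 0.0, 0.0, [], 0.0
    while t < t_end:
        if refractory > 0.0:
            refractory -= dt                   # potential held at reset
        else:
            v += dt * (-v + current) / tau_rc  # membrane dynamics
            if v >= v_th:                      # threshold crossed: spike
                spikes.append(t)
                v, refractory = 0.0, tau_ref
        t += dt
    return spikes

empirical = len(simulate_lif(current=2.0))  # spikes in 1 s ~ rate in Hz
theoretical = lif_rate(2.0)                 # roughly 63 Hz for these values
```

The small gap between the two rates comes from the Euler discretization; shrinking `dt` narrows it.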
After training the conventional ANN, the parameters of the SNN are identical to those of its non-spiking counterpart, with only the neurons themselves changing. The performance of the spiking network can be further enhanced by tuning additional parameters. For instance, if a linear activation function is used for the spiking forecasting model, the spiking firing rate can easily be increased after training by applying a scale to the input weights of the neurons to make them spike at a faster rate. The output of the network is divided by the same scale so as not to affect the behavior of the trained network. This way of proceeding is not optimal for non-linear activation functions. Instead, the firing rates can be optimized during training with regularization, encouraging the neurons to spike at a target firing rate [43]. Furthermore, a synaptic filter can be applied to reduce any noise found in the output of the spiking network.
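The firing-rate scaling trick for linear activations relies on the positive homogeneity of ReLU: scaling a neuron's input weights by $s$ and dividing its output by $s$ leaves the rate-based function unchanged, while the spikes fire $s$ times faster. A minimal sketch with made-up weights:

```python
# For a ReLU activation, relu(s * x) / s == relu(x) for any s > 0, so scaling
# the input weights and dividing the output by the same scale preserves the
# trained network's behavior while raising the firing rate. Weights and
# inputs below are arbitrary illustrative values.

def relu(x):
    return max(x, 0.0)

def neuron_output(weights, inputs, scale=1.0):
    drive = scale * sum(w * x for w, x in zip(weights, inputs))
    return relu(drive) / scale   # divide the output by the same scale

w = [0.5, -0.3, 0.8]
x = [1.0, 2.0, 0.5]
baseline = neuron_output(w, x)            # unscaled neuron
scaled = neuron_output(w, x, scale=5.0)   # spikes 5x faster, same output
```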

2.2. Spiking Model Architecture

The model architecture (Figure 2) is slightly different from that of conventional ANNs, as it has to be adapted to the requirements of the Loihi hardware. The first distinctive feature of this network is the off-chip layer. This layer is a prerequisite for transmitting any information to the hardware, as the chip only communicates with spikes. Thus, this initial layer is run off-chip and converts the input into spikes [44]. The rest of the network is run on the hardware. A convolutional layer (conv-layer) and a regular fully connected layer (dense-layer) are used to process the data and generate the forecast. The convolutional layer consists of filters in the form of convolutions:
$$(f * g)(t) = \int f(\tau)\,g(t - \tau)\,\mathrm{d}\tau$$
where $(f * g)$ denotes the convolution between the functions $f$ and $g$, in which $f$ can be considered a filter or kernel and $g$ the input data. On the right-hand side, $g(t - \tau)$ indicates that the input data $g$ are reversed and shifted by a certain time $t$. It is important to note that not all types of neural networks are currently available in this ANN-to-SNN conversion framework (e.g., LSTM neurons are not supported). The activation function of all three layers is a spiking ReLU for inference. The equivalent ANN used during training follows the same architecture, including the off-chip layer, although in that case it behaves as a regular convolutional layer. Rate-based ReLU activation functions are used instead during the training stage.
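The discrete analogue of this integral is what a conv-layer actually computes. A minimal 1-D sketch with a moving-average kernel over a short made-up series:

```python
# Discrete 1-D analogue of (f*g)(t) = integral of f(tau) * g(t - tau) dtau:
# the kernel f is multiplied against the reversed, shifted input g. Only
# 'valid' positions (no padding) are computed, purely for illustration.

def conv1d_valid(kernel, signal):
    k, n = len(kernel), len(signal)
    return [
        sum(kernel[j] * signal[i + k - 1 - j] for j in range(k))
        for i in range(n - k + 1)
    ]

# A 3-point moving-average kernel applied to a short illustrative series:
smoothed = conv1d_valid([1 / 3, 1 / 3, 1 / 3], [0.0, 3.0, 6.0, 3.0, 0.0])
```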
Following up on our previous work [46,50], this model architecture is applied to 10-min resolution wind power data. The data used in this paper were collected from a wind turbine at a site located in Ireland (the exact location cannot be disclosed for confidentiality reasons) over a two-and-a-half-year period (January 2017 to June 2019). As input, the model uses previous wind power observations to provide one-step-ahead forecasts as the output.
The wind power data were preprocessed using the variational mode decomposition (VMD) algorithm [51]. In particular, the data were decomposed into 8 subseries (known as modes) with different levels of complexity [52], giving us the opportunity to examine and adapt the SNN architecture under varied conditions. The forecasts of each mode were later aggregated to subsequently estimate the WPF [53].
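The decompose-forecast-aggregate pattern described above can be sketched as follows. The VMD algorithm itself is replaced here by a trivial two-mode split, and the per-mode spiking model by naive persistence, purely to illustrate how the per-mode forecasts are summed back into a WPF; neither stand-in reflects the actual models used in the paper.

```python
# Sketch of the decompose -> forecast -> aggregate pattern. A real pipeline
# would extract 8 modes with VMD and fit a spiking model per mode; here a
# trivial additive split and a persistence "model" illustrate the mechanics.

def decompose(series):
    """Placeholder for VMD: split into a smoothed trend plus a residual mode."""
    trend = [(series[max(i - 1, 0)] + series[i]) / 2 for i in range(len(series))]
    residual = [s - t for s, t in zip(series, trend)]
    return [trend, residual]  # modes sum back to the original series

def persistence_forecast(mode):
    """Stand-in for the per-mode SNN: predict the last observed value."""
    return mode[-1]

power = [0.2, 0.35, 0.5, 0.45, 0.6]       # normalised wind power (made up)
modes = decompose(power)
forecast = sum(persistence_forecast(m) for m in modes)  # aggregate the modes
```

Because the decomposition is additive, aggregating the per-mode forecasts recovers a forecast of the original series.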

3. Results

First, examples using a synthetic sine wave signal and load data are given to clarify some details of the steps that need to be followed to successfully convert an ANN model into a spiking one. Later, a case study is presented using data from an Irish wind farm.

3.1. Synthetic Signal Forecasting

Before applying the methodology to wind power data, let us present an example with a simpler signal (a synthetic sine wave) to clarify and explain in more detail how the parameters are tuned to achieve a good performance with the spiking network. For simplicity, the example using this signal was run within the NengoDL framework, so any additional parameters used to implement the model on Loihi hardware can be dismissed (such as the off-chip layer); therefore, a basic feedforward neural network (FFNN) model was used instead of the previously described model architecture, which suffices to accurately predict such a basic signal.
During the initial evaluation of the spiking network model (Steps 3 and 4), considering the discretization of the activation function required for Loihi hardware is important in order to later transfer our model without a significant drop in performance. Therefore, we must be particularly careful when scaling the firing rate of the spikes, as very high rates will not work on Loihi hardware. Let us examine the implications of disregarding this point with the example shown in Figure 3: we build the FFNN model (Step 1) and train it with a rate-based (i.e., non-spiking) ReLU activation (Step 2). Then, we replace the activation with its spiking counterpart, scaling the firing rate with a high enough value (Step 3). The neural activities of three neurons when presenting an input are shown in Figure 3a,b, having replaced the ReLU activation function with the theoretical spiking ReLU and the discretized version for Loihi, respectively. Two of these neurons (shown in green and yellow) fire very fast in the first case, but their behavior is diminished in the second one due to the activation profile, impacting the performance of the model when all the input vectors comprising the testing set are presented to the network (Step 4), as displayed in Figure 3c. Thus, the firing rate of this network should be lowered to satisfy the hardware specifications required in the following steps to implement the model on neuromorphic devices.
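The saturation effect described above can be sketched numerically: with at most one spike per 1 ms timestep, the realizable firing rate on Loihi caps at 1000 Hz, so a rate-coded ReLU neuron driven beyond that point loses information. This sketch models only the hard cap, not Loihi's full quantized activation profile.

```python
# Why very high firing-rate scales fail on Loihi: the chip emits at most one
# spike per timestep, so with dt = 1 ms the rate saturates at 1/dt = 1000 Hz.
# Only the cap is modelled here; the real chip also quantizes the activation.

def ideal_rate(drive):
    """Theoretical spiking ReLU: rate proportional to the positive drive."""
    return max(drive, 0.0)

def loihi_rate(drive, dt=0.001):
    """At most one spike per timestep: rate clipped at 1/dt Hz."""
    return min(max(drive, 0.0), 1.0 / dt)

moderate = (ideal_rate(300.0), loihi_rate(300.0))    # both rates agree
too_fast = (ideal_rate(5000.0), loihi_rate(5000.0))  # clipped to 1000 Hz
```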
The tuning of the firing rate scale, as well as the amplitude of the spikes, is essential to achieve a good forecasting accuracy while finding a balance between the firing rates (enough spikes must be generated to transmit the information through the network) and the sparsity of spiking networks (leveraging the promise of low energy consumption by neuromorphic devices). Following the same example, let us fix a certain spiking amplitude and experiment with different firing rate scales to find this trade-off, considering the Loihi-tailored spiking ReLU activation. The neural activities of the same three neurons are shown in Figure 4a for a scale of 1 (i.e., keeping the same input weights as the original SNN), in Figure 4b for a scale of 5 (a linear scale of 5 is applied to the inputs of the neurons), and in Figure 4c for a scale of 50. As expected, the spikes fire much faster as this parameter increases, with the spikes being almost indistinguishable in the latter case, thus reducing the sparsity of the network. For the neural activities shown in Figure 4a,b, the mean firing rates are low (6 and 30.9 Hz) and the firing is sparser, meaning that both are, in principle, better suited to this application. The preliminary results computed within NengoDL (Figure 4d) indicate that a scale of 5 provides a slightly better performance, making it the most adequate value for this parameter. Naturally, tuning these parameters is a harder task when dealing with more complex data and more complex spiking architectures, as we will see in the following section.
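Mean population firing rates like the 6 Hz and 30.9 Hz values quoted above can be computed from recorded spike trains by dividing the total spike count by the number of neurons and the observation time. A small sketch with synthetic spike trains:

```python
# Computing a mean firing rate across a population from recorded spike
# trains. The spike trains below are synthetic lists of spike times (s),
# invented for illustration.

def mean_firing_rate(spike_trains, duration):
    """Average spikes per second per neuron over the observation window."""
    total_spikes = sum(len(train) for train in spike_trains)
    return total_spikes / (len(spike_trains) * duration)

# Three neurons observed for 0.5 s, with 4, 2, and 0 spikes respectively:
trains = [[0.01, 0.12, 0.30, 0.44], [0.05, 0.40], []]
rate = mean_firing_rate(trains, duration=0.5)  # (4 + 2 + 0) / (3 * 0.5) Hz
```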

3.2. Load Forecasting

Let us consider another example using real data to calculate one-step-ahead forecasts. In particular, short-term load forecasting is of interest due to its close relation with WPF, as both are necessary to operate and maintain the stability of the electrical grid [54]. Furthermore, load demand data show regular daily and weekly patterns, which are not observed in wind power data [55], so a model architecture formed of CNNs is a good candidate to extract such features [56]. Records of aggregated hourly demand data from Ireland can be found on the European Network of Transmission System Operators for Electricity (ENTSO-E) website [57]. The available measurements were recorded between 2016 and 2018.
As usual, we built and trained the rate-based equivalent of the model, and subsequently replaced the activation functions. Then, the spike parameters were tuned without specifying any hardware requirements, and we monitored the initial results to choose the best values for these parameters. Some of these initial forecasts are shown in Figure 5. The existing patterns in the load data were captured by the model, and adjusting the spike parameters is fairly straightforward. The dashed red line (obtained using an amplitude of 0.05 and a firing rate scale of 50) matches the test data more closely than the rest, so these values were chosen for the implementation on Loihi's emulator (or the hardware itself, if available).
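The parameter selection described above can be sketched as a small search: evaluate each (amplitude, firing-rate scale) pair on a validation metric and keep the best. The candidate grid and the toy scoring function below are entirely hypothetical; the paper selects amplitude 0.05 and scale 50 by inspecting the forecasts in Figure 5.

```python
# Hedged sketch of the spike-parameter tuning loop: score each candidate
# (amplitude, scale) pair and keep the minimizer. toy_evaluate is a made-up
# stand-in for "run the spiking model and measure the validation error".

def tune_spike_parameters(candidates, evaluate):
    """Return the (amplitude, scale) pair with the lowest validation error."""
    return min(candidates, key=lambda pair: evaluate(*pair))

def toy_evaluate(amplitude, scale):
    # Invented score whose minimum sits at the values chosen in the paper.
    return abs(amplitude - 0.05) + abs(scale - 50) / 100

grid = [(a, s) for a in (0.01, 0.05, 0.1) for s in (5, 50, 500)]
best = tune_spike_parameters(grid, toy_evaluate)
```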
As indicated in Step 5, the network must be further adjusted to run on Loihi. In our particular case, we must indicate which layers are run on- and off-chip, but other adjustments might be needed for more complex networks, such as distributing the connections of the network over multiple cores on Loihi [44]. Figure 6a shows that neurons are effectively firing in each layer, whereas Figure 6b compares the initial forecasts that were obtained previously while tuning the spike parameters (the red dashed line) with the load forecasts emulating the Loihi chip (dash-dot green line). It can be observed that the model architecture translates well to the emulator after fine-tuning those hardware specifications, resulting in load forecasts similar to the initial evaluation of Step 4.

3.3. Wind Power Forecasting

Noting the computational abilities of current neuromorphic devices, let us apply the proposed model architecture to build the spiking forecasting models for each mode extracted from the Irish wind power data after preprocessing with the VMD algorithm. The library Nengo [35] was used to simulate neuromorphic algorithms, together with the extensions NengoDL [37] for deep learning and NengoLoihi to emulate the behavior of Loihi hardware. The remaining neural-network-based models were implemented using Keras with the TensorFlow backend [38,58].
Following the proposed methodology, we first built the model (Step 1) and trained the rate-based neural network model to set its network parameters (Step 2). Then, we transformed it into an SNN by switching the activation functions to spiking ones (Step 3). In Step 4, we set empirical values for the amplitude and firing rate of the spikes (Table 1) within the NengoDL framework until we obtained a reasonable performance from the spiking model. The spiking amplitude modulates the amount of information transmitted to the subsequent layers of the network, whereas the firing rate adjusts how fast the spikes are fired. If the firing rate is high, the behavior will be closer to that of the non-spiking model, and thus the performance will increase, but at the cost of losing the characteristic temporal sparsity provided by the spikes [59]. In addition, a high firing rate will lead to detrimental results on Loihi because of the discrepancy resulting from discretizing the spiking activation function (as shown in Figure 1). The low mean firing rates of these preliminary results (Table 2) suggest that the selected parameters are potentially good for implementation on Loihi. Afterwards, we configured some additional parameters to run the model on the Loihi emulator (Step 5). In particular, we must indicate which part of the model is run off-chip (in this case, the off-chip layer we use to communicate with the chip) and how long each input vector is presented to the network (in our case, 0.4 s each).
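The 0.4 s presentation window corresponds to 400 timesteps at a 1 ms resolution. How the point forecast is read out of the spiking output over that window is sketched below; averaging the amplitude-weighted spikes over the tail of the window, after the network has settled, is an assumption made for illustration rather than a detail taken from the paper.

```python
# Decoding a point forecast from a spiking output neuron over the 0.4 s
# presentation window. The settle-then-average readout and all values here
# are illustrative assumptions, not the paper's exact decoding scheme.

def decode_output(spike_counts, amplitude, dt=0.001, settle_frac=0.5):
    """Average amplitude-weighted spike rate over the tail of the window."""
    start = int(len(spike_counts) * settle_frac)  # skip the settling phase
    tail = spike_counts[start:]
    return amplitude * sum(tail) / (len(tail) * dt)  # (spikes/s) * amplitude

steps = round(0.4 / 0.001)           # 0.4 s presentation -> 400 timesteps
spikes = [0] * 200 + [1, 0] * 100    # settles, then fires every other step
output = decode_output(spikes, amplitude=0.05)
```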
The information recorded in Steps 4 and 5 is shown in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 for modes 1–8, respectively. Part a of these figures shows the neural activities of each layer (limited to 5 neurons for illustrative purposes). These neural activities correspond to the first input vector fed to the model, and produce the first point forecast, shown in part b. This constitutes one of the main differences in comparison to ANNs. Neurons in regular ANNs are static entities, which are activated every time a new input arrives at the model, whereas neurons of SNN models are only activated if certain dynamic conditions are met. From a user perspective, the neural activities help us visualize the mean firing rates shown in Table 2: modes 1, 3, and 8 exhibit higher firing rates, which translates into a large number of spikes being generated during this timeframe, whereas the rest of the modes present a more sparse behavior, resulting in fewer spikes. In some cases, such as mode 4 (Figure 10) and mode 5 (Figure 11), the neurons of the off-chip layer need a long time to settle and thus start to spike, delaying the neural response of subsequent layers. Even if temporal sparsity is a desirable feature in a spiking model, in the sense that a smaller number of spikes means a lower consumption of energy (as a non-activated neuron consumes no energy), it might occasionally be advisable to fine-tune the firing rate of the off-chip layer to propagate the information faster through the rest of the network, as delays in the neural responses could degrade model performance. Quoting an exact power consumption figure for the chip would be misleading in the current context, since the entire hardware is active during implementation while only a minute fraction is actually used for the proposed problem. Under such circumstances, power figures will become relevant with sector-customised chips and a better handling of SNN architectures for DL, a direction in which industry and current research are quickly moving.
At this stage, the performance of our models can finally be examined (Step 6). The model is designed to provide one-step-ahead point forecasts (Figure 15). The dashed red lines show the forecasts obtained while tuning the model using the NengoDL framework in Step 4. While this preliminary model is able to forecast increasing/decreasing trends of power generation, it is not as accurate for high or low power-generation scenarios. Nonetheless, this initial assessment allows us to prepare our model for Loihi (dash-dot green line), which demonstrates the same skill in detecting increasing/decreasing trends of power generation as the preliminary model, while showing a better ability to forecast high/low power-generation values. This difference in performance also arises from the model architecture itself. When the model is initially evaluated outside the Loihi framework, the fact that the first layer only serves to generate spikes is not accounted for. Such nuance is captured when the model is configured for implementation on Loihi. Additionally, we observe that the forecasts are not as accurate as those of the non-spiking VMD-GRU model that we used in a previous study on the same data [50], and the outputs are generally noisier. However, this is an expected outcome given the current limitations of neuromorphic hardware.
In conclusion, we have successfully transformed a non-spiking neural model into a spiking one with a reasonably good performance, achieving a 2.84% NMAE for one-step-ahead forecasts with the model adapted to neuromorphic hardware. This type of proof of concept is helpful not only to show that industrial applications such as WPF modelling can be transferred to non-von Neumann architectures such as neuromorphic computing, but also to provide guidelines to the manufacturers of such hardware (e.g., Intel with the development of devices such as Loihi) to cater to the industry's needs.

4. Conclusions

Neuromorphic computing provides a new paradigm to build energy-efficient, low-latency algorithms in contrast to the current state-of-the-art ML/DL strategies, thus potentially reducing the computational cost of training and deploying artificial-intelligence-based models. In particular, SNNs aim to learn in a more biologically plausible manner [60] by more closely mimicking the spike-based transmission of information that occurs in the brain [61]. At present, the two major challenges for the use and implementation of SNNs are (1) the training of such models, as the well-established training strategies based on the backpropagation algorithm applied to ML/DL cannot be directly used because spikes are not differentiable, and (2) the implementation of SNNs on neuromorphic hardware, as SNNs must be tailored to cater to the specific requirements of the hardware. The first challenge has been addressed with different approaches to date, such as ANN-to-SNN conversion and the use of variations of error backpropagation to directly train SNNs. The second challenge is hardware-dependent, and should be addressed according to the requisites of the hardware used to implement the SNN. Additionally, there is currently a lack of studies applying neuromorphic computing to practical cases that are useful for both research and industrial practice, such as the design of WPF models [62].
In this paper, we adopt an ANN-to-SNN conversion approach to forecast wind power, and obtain these WPFs by emulating or running the spiking model on the neuromorphic hardware Loihi [41]. The SNNs are designed using the framework provided by the software Nengo [35,37]. First, we build and train the non-spiking neural network. After training, we map the parameters and replace the activation functions with their spiking counterparts, which are used during the prediction stage. Then, without considering hardware-specific constraints, some preliminary results are evaluated to tune spike-related parameters such as the firing rate or the amplitude of the spikes. Finally, the SNN is further adjusted to run on the hardware emulator (or on Loihi itself, if available) to obtain the WPFs. Following these steps, we reached our goal of achieving a good level of performance with the proposed spiking architecture, obtaining an NMAE of 2.84% for one-step-ahead forecasts when the model is emulated on Loihi.
As neuromorphic computing is not yet a well-established technology, there is ample room for future research. First, the proposed ANN-to-SNN conversion approach for short-term WPF can be refined further by tuning the firing rate of each layer individually and by using synaptic filters to smooth the output. Second, the modelling of spiking neural networks can be improved by (1) training the network directly, to increase the energy efficiency of neuromorphic devices, and (2) using online approximations of the backpropagation algorithm to adjust network parameters as new data arrive. Third, it will soon be possible to implement more complex biologically inspired neural networks, as the computational capability of neuromorphic hardware continues to increase with the development of new devices such as Loihi 2 [63]. Since reducing computational cost is one of the main reasons to use neuromorphic devices, any future line of research must assess model performance in terms of energy consumption to verify that it achieves a significant reduction compared with conventional computer architectures. To achieve that, the spiking neural models must not only be emulated but also run on a real neuromorphic device so that this feature can be measured realistically.
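On the synaptic-filter point: a synapse in this context is typically modelled as a first-order low-pass filter applied to the spike train. The sketch below is a forward-Euler approximation of such a filter (Nengo's built-in Lowpass synapse uses an exact discretisation, so treat this as illustrative):

```python
import numpy as np

def lowpass_filter(signal, tau=0.01, dt=0.001):
    """Apply a first-order low-pass ('synaptic') filter with time
    constant tau (s) to a sampled signal, e.g. a spike train."""
    alpha = dt / tau                  # smoothing factor; assumes dt << tau
    out = np.empty(len(signal), dtype=float)
    state = 0.0
    for i, s in enumerate(signal):
        state += alpha * (s - state)  # exponential moving average
        out[i] = state
    return out
```

Filtering the spiking model's output trades responsiveness for smoothness: a larger `tau` removes more spike-induced ripple from the forecast but delays the response to changes in the input.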

Author Contributions

Conceptualization, J.M.G.S., V.P. and B.G.; methodology, J.M.G.S. and B.G.; software, J.M.G.S.; validation, J.M.G.S., V.P. and B.G.; formal analysis, J.M.G.S.; investigation, J.M.G.S., V.P. and B.G.; resources, J.M.G.S., V.P. and B.G.; data curation, J.M.G.S., V.P. and B.G.; writing—original draft preparation, J.M.G.S. and B.G.; writing—review and editing, J.M.G.S., V.P. and B.G.; visualization, J.M.G.S.; supervision, V.P. and B.G.; project administration, V.P. and B.G.; funding acquisition, V.P. and B.G. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the funding of SEAI WindPearl Project 18/RDD/263.

Data Availability Statement

Relevant computed data are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank George Vathakkattil Joseph and Aasifa Rounak for their technical support. Bidisha Ghosh would like to acknowledge the support of ENABLE (Grant number 16/SP/3804) and Connect Center (Grant number 13/RC/2077_P2). Vikram Pakrashi would like to acknowledge the support of SFI MaREI centre (Grant number RC2302_2), the resources and support of Intel Neuromorphic Research Community, Accenture NeuroSHM project with UCD, Science Foundation Ireland NexSys 21/SPP/3756, and UCD Energy Institute.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN      Artificial Neural Network
CNN      Convolutional Neural Network
DL       Deep Learning
ENTSO-E  European Network of Transmission System Operators for Electricity
FFNN     Feedforward Neural Network
GRU      Gated Recurrent Unit
LIF      Leaky Integrate-and-Fire
LSTM     Long Short-Term Memory
ML       Machine Learning
NEF      Neural Engineering Framework
NMAE     Normalized Mean Absolute Error
ReLU     Rectified Linear Unit
SLAYER   Spike Layer Error Reassignment
SNN      Spiking Neural Network
VMD      Variational Mode Decomposition
WPF      Wind Power Forecasting

References

  1. Lim, B.; Zohren, S. Time-series forecasting with deep learning: A survey. Philos. Trans. R. Soc. A 2021, 379, 20200209. [Google Scholar] [CrossRef] [PubMed]
  2. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef] [Green Version]
  3. Ma, Q.; Zheng, J.; Li, S.; Cottrell, G.W. Learning representations for time series clustering. Adv. Neural Inf. Process. Syst. 2019, 32, 3781–3791. [Google Scholar] [CrossRef]
  4. Wang, Y.; Zou, R.; Liu, F.; Zhang, L.; Liu, Q. A review of wind speed and wind power forecasting with deep neural networks. Appl. Energy 2021, 304, 117766. [Google Scholar] [CrossRef]
  5. Marugán, A.P.; Márquez, F.P.G.; Perez, J.M.P.; Ruiz-Hernández, D. A survey of artificial neural network in wind energy systems. Appl. Energy 2018, 228, 1822–1836. [Google Scholar] [CrossRef] [Green Version]
  6. Rumelhart, D.E.; Durbin, R.; Golden, R.; Chauvin, Y. Backpropagation: The basic theory. In Backpropagation: Theory, Architectures and Applications; Psychology Press: London, UK, 1995; pp. 1–34. [Google Scholar]
  7. Wang, R.; Li, C.; Fu, W.; Tang, G. Deep learning method based on gated recurrent unit and variational mode decomposition for short-term wind power interval prediction. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3814–3827. [Google Scholar] [CrossRef] [PubMed]
  8. Li, C.; Tang, G.; Xue, X.; Chen, X.; Wang, R.; Zhang, C. The short-term interval prediction of wind power using the deep learning model with gradient descend optimization. Renew. Energy 2020, 155, 197–211. [Google Scholar] [CrossRef]
  9. Yildiz, C.; Acikgoz, H.; Korkmaz, D.; Budak, U. An improved residual-based convolutional neural network for very short-term wind power forecasting. Energy Convers. Manag. 2021, 228, 113731. [Google Scholar] [CrossRef]
  10. He, Y.; Li, H.; Wang, S.; Yao, X. Uncertainty analysis of wind power probability density forecasting based on cubic spline interpolation and support vector quantile regression. Neurocomputing 2021, 430, 121–137. [Google Scholar] [CrossRef]
  11. Lahouar, A.; Slama, J.B.H. Hour-ahead wind power forecast based on random forests. Renew. Energy 2017, 109, 529–541. [Google Scholar] [CrossRef]
  12. Landry, M.; Erlinger, T.P.; Patschke, D.; Varrichio, C. Probabilistic gradient boosting machines for GEFCom2014 wind forecasting. Int. J. Forecast. 2016, 32, 1061–1066. [Google Scholar] [CrossRef]
  13. Mohammadzaheri, M.; Mirsepahi, A.; Asef-afshar, O.; Koohi, H. Neuro-fuzzy modeling of superheating system of a steam power plant. Appl. Math. Sci 2007, 1, 2091–2099. [Google Scholar]
  14. Mohammadzaheri, M.; AlQallaf, A.; Ghodsi, M.; Ziaiefar, H. Development of a fuzzy model to estimate the head of gaseous petroleum fluids driven by electrical submersible pumps. Fuzzy Inf. Eng. 2018, 10, 99–106. [Google Scholar] [CrossRef] [Green Version]
  15. Bengio, Y. Learning Deep Architectures for AI; Now Publishers Inc.: Hannover, MA, USA, 2009. [Google Scholar]
  16. Wang, K.; Qi, X.; Liu, H.; Song, J. Deep belief network based k-means cluster approach for short-term wind power forecasting. Energy 2018, 165, 840–852. [Google Scholar] [CrossRef]
  17. Hong, Y.Y.; Rioflorido, C.L.P.P. A hybrid deep learning-based neural network for 24-h ahead wind power forecasting. Appl. Energy 2019, 250, 530–539. [Google Scholar] [CrossRef]
  18. Putz, D.; Gumhalter, M.; Auer, H. A novel approach to multi-horizon wind power forecasting based on deep neural architecture. Renew. Energy 2021, 178, 494–505. [Google Scholar] [CrossRef]
  19. Munawar, U.; Wang, Z. A framework of using machine learning approaches for short-term solar power forecasting. J. Electr. Eng. Technol. 2020, 15, 561–569. [Google Scholar] [CrossRef]
  20. Nam, K.; Hwangbo, S.; Yoo, C. A deep learning-based forecasting model for renewable energy scenarios to guide sustainable energy policy: A case study of Korea. Renew. Sustain. Energy Rev. 2020, 122, 109725. [Google Scholar] [CrossRef]
  21. González Sopeña, J.; Pakrashi, V.; Ghosh, B. Can we improve short-term wind power forecasts using turbine-level data? A case study in Ireland. In Proceedings of the 2021 IEEE Madrid PowerTech, Madrid, Spain, 28 June–2 July 2021; pp. 1–6. [Google Scholar]
  22. González Sopeña, J.; Maury, C.; Pakrashi, V.; Ghosh, B. Turbine-Level Clustering for Improved Short-Term Wind Power Forecasting; Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2022; Volume 2265, p. 022052. [Google Scholar]
  23. Wang, X.; Han, Y.; Leung, V.C.; Niyato, D.; Yan, X.; Chen, X. Convergence of edge computing and deep learning: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 869–904. [Google Scholar] [CrossRef] [Green Version]
  24. Li, W.; Yang, T.; Delicato, F.C.; Pires, P.F.; Tari, Z.; Khan, S.U.; Zomaya, A.Y. On enabling sustainable edge computing with renewable energy resources. IEEE Commun. Mag. 2018, 56, 94–101. [Google Scholar] [CrossRef]
  25. Justus, D.; Brennan, J.; Bonner, S.; McGough, A.S. Predicting the computational cost of deep learning models. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 3873–3882. [Google Scholar]
  26. Lin, C.K.; Wild, A.; Chinya, G.N.; Cao, Y.; Davies, M.; Lavery, D.M.; Wang, H. Programming spiking neural networks on Intel’s Loihi. Computer 2018, 51, 52–61. [Google Scholar] [CrossRef]
  27. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Netw. 1997, 10, 1659–1671. [Google Scholar] [CrossRef]
  28. Davies, M.; Wild, A.; Orchard, G.; Sandamirskaya, Y.; Guerra, G.A.F.; Joshi, P.; Plank, P.; Risbud, S.R. Advancing neuromorphic computing with Loihi: A survey of results and outlook. Proc. IEEE 2021, 109, 911–934. [Google Scholar] [CrossRef]
  29. Stewart, K.; Orchard, G.; Shrestha, S.B.; Neftci, E. On-chip few-shot learning with surrogate gradient descent on a neuromorphic processor. In Proceedings of the 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Genova, Italy, 31 August–2 September 2020; pp. 223–227. [Google Scholar]
  30. Tavanaei, A.; Maida, A. BP-STDP: Approximating backpropagation using spike timing dependent plasticity. Neurocomputing 2019, 330, 39–47. [Google Scholar] [CrossRef] [Green Version]
  31. Bellec, G.; Scherr, F.; Subramoney, A.; Hajek, E.; Salaj, D.; Legenstein, R.; Maass, W. A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun. 2020, 11, 3625. [Google Scholar] [CrossRef]
  32. Kasabov, N.; Scott, N.M.; Tu, E.; Marks, S.; Sengupta, N.; Capecci, E.; Othman, M.; Doborjeh, M.G.; Murli, N.; Hartono, R.; et al. Evolving spatio-temporal data machines based on the NeuCube neuromorphic framework: Design methodology and selected applications. Neural Netw. 2016, 78, 1–14. [Google Scholar] [CrossRef] [Green Version]
  33. Diehl, P.U.; Neil, D.; Binas, J.; Cook, M.; Liu, S.C.; Pfeiffer, M. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; pp. 1–8. [Google Scholar]
  34. Taherkhani, A.; Belatreche, A.; Li, Y.; Maguire, L.P. DL-ReSuMe: A delay learning-based remote supervised method for spiking neurons. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3137–3149. [Google Scholar] [CrossRef]
  35. Bekolay, T.; Bergstra, J.; Hunsberger, E.; DeWolf, T.; Stewart, T.C.; Rasmussen, D.; Choo, X.; Voelker, A.; Eliasmith, C. Nengo: A Python tool for building large-scale functional brain models. Front. Neuroinform. 2014, 7, 48. [Google Scholar] [CrossRef]
  36. Eliasmith, C.; Anderson, C.H. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems; MIT Press: Cambridge, MA, USA, 2003. [Google Scholar]
  37. Rasmussen, D. NengoDL: Combining deep learning and neuromorphic modelling methods. Neuroinformatics 2019, 17, 611–628. [Google Scholar] [CrossRef] [Green Version]
  38. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: http://tensorflow.org (accessed on 27 September 2022).
  39. Shrestha, S.B.; Orchard, G. Slayer: Spike layer error reassignment in time. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 1412–1421. [Google Scholar]
  40. Intel’s Neuromorphic Computing Lab. Lava: A Software Framework for Neuromorphic Computing. 2021. Available online: https://github.com/lava-nc/lava (accessed on 25 March 2022).
  41. Davies, M.; Srinivasa, N.; Lin, T.H.; Chinya, G.; Cao, Y.; Choday, S.H.; Dimou, G.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 2018, 38, 82–99. [Google Scholar] [CrossRef]
  42. Hunsberger, E.; Eliasmith, C. Training spiking deep networks for neuromorphic hardware. arXiv 2016, arXiv:1611.05141. [Google Scholar]
  43. DeWolf, T.; Jaworski, P.; Eliasmith, C. Nengo and low-power AI hardware for robust, embedded neurorobotics. Front. Neurorobot. 2020, 14, 568359. [Google Scholar] [CrossRef] [PubMed]
  44. Applied Brain Research. Converting a Keras Model to an SNN on Loihi. 2021. Available online: https://www.nengo.ai/nengo-loihi/v1.0.0/examples/keras-to-loihi.html (accessed on 20 April 2022).
  45. Voelker, A.R.; Eliasmith, C. Programming neuromorphics using the Neural Engineering Framework. In Handbook of Neuroengineering; Springer Nature: Singapore, 2020; pp. 1–43. [Google Scholar]
  46. González Sopeña, J.; Pakrashi, V.; Ghosh, B. An overview of performance evaluation metrics for short-term statistical wind power forecasting. Renew. Sustain. Energy Rev. 2021, 138, 110515. [Google Scholar] [CrossRef]
  47. Tavanaei, A.; Ghodrati, M.; Kheradpisheh, S.R.; Masquelier, T.; Maida, A. Deep learning in spiking neural networks. Neural Netw. 2019, 111, 47–63. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Cao, Y.; Chen, Y.; Khosla, D. Spiking deep convolutional neural networks for energy-efficient object recognition. Int. J. Comput. Vis. 2015, 113, 54–66. [Google Scholar] [CrossRef]
  49. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 2020, 48, 1875–1897. [Google Scholar]
  50. González Sopeña, J.M.; Pakrashi, V.; Ghosh, B. Decomposition-based hybrid models for very short-term wind power forecasting. Eng. Proc. 2021, 5, 39. [Google Scholar]
  51. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544. [Google Scholar] [CrossRef]
  52. Tang, L.; Lv, H.; Yang, F.; Yu, L. Complexity testing techniques for time series data: A comprehensive literature review. Chaos Solitons Fractals 2015, 81, 117–135. [Google Scholar] [CrossRef]
  53. Ren, Y.; Suganthan, P.; Srikanth, N. A comparative study of empirical mode decomposition-based short-term wind speed forecasting methods. IEEE Trans. Sustain. Energy 2014, 6, 236–244. [Google Scholar] [CrossRef]
  54. Hong, T.; Fan, S. Probabilistic electric load forecasting: A tutorial review. Int. J. Forecast. 2016, 32, 914–938. [Google Scholar] [CrossRef]
  55. Quan, H.; Srinivasan, D.; Khosravi, A. Short-term load and wind power forecasting using neural network-based prediction intervals. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 303–315. [Google Scholar] [CrossRef] [PubMed]
  56. Sadaei, H.J.; e Silva, P.C.d.L.; Guimarães, F.G.; Lee, M.H. Short-term load forecasting by using a combined method of convolutional neural networks and fuzzy time series. Energy 2019, 175, 365–377. [Google Scholar] [CrossRef]
  57. ENTSO-E. Hourly Load Demand Data. 2021. Available online: https://www.entsoe.eu/data/power-stats/ (accessed on 20 April 2022).
  58. Chollet, F. Keras. 2015. Available online: https://github.com/fchollet/keras (accessed on 20 April 2022).
  59. Patel, K.; Hunsberger, E.; Batir, S.; Eliasmith, C. A spiking neural network for image segmentation. arXiv 2021, arXiv:2106.08921. [Google Scholar]
  60. Tan, C.; Šarlija, M.; Kasabov, N. Spiking neural networks: Background, recent development and the NeuCube architecture. Neural Process. Lett. 2020, 52, 1675–1701. [Google Scholar] [CrossRef]
  61. Kasabov, N.K. Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence; Springer: Berlin, Germany, 2019. [Google Scholar]
  62. Davies, M. Benchmarks for progress in neuromorphic computing. Nat. Mach. Intell. 2019, 1, 386–388. [Google Scholar] [CrossRef]
  63. Orchard, G.; Frady, E.P.; Rubin, D.B.D.; Sanborn, S.; Shrestha, S.B.; Sommer, F.T.; Davies, M. Efficient Neuromorphic Signal Processing with Loihi 2. In Proceedings of the 2021 IEEE Workshop on Signal Processing Systems (SiPS), Coimbra, Portugal, 19–21 October 2021; pp. 254–259. [Google Scholar]
Figure 1. Spiking ReLU activation profile (based on DeWolf et al. [43]).
Figure 2. SNN model architecture.
Figure 3. (a) Neural activities using a spiking ReLU activation for inference (one input vector is shown to the network during 50 timesteps), (b) neural activities using the discretized version of the spiking ReLU activation, and (c) predictions over the testing set.
Figure 4. (a) Neural activities setting an amplitude = 0.01 and a firing rate scale = 1, (b) Neural activities setting an amplitude = 0.01 and a firing rate scale = 5, (c) Neural activities setting an amplitude = 0.01 and a firing rate scale = 50, and (d) predictions over the testing set.
Figure 5. Preliminary one-step-ahead load forecasts, setting different spike amplitudes and firing rates.
Figure 6. Results for one-step ahead load forecasts: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 7. Results for mode 1: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 8. Results for mode 2: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 9. Results for mode 3: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 10. Results for mode 4: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 11. Results for mode 5: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 12. Results for mode 6: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 13. Results for mode 7: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 14. Results for mode 8: (a) Neural activities of 5 neurons of each layer. One input vector is shown over 1000 timesteps. (b) Predictions over the testing set with the SNN architecture (dashed red line) and running the SNN on the Loihi emulator (dash-dot green line).
Figure 15. One-step ahead WPFs with the SNN architecture (dashed red line), running the SNN on the Loihi emulator (dash-dot green line) and a non-spiking VMD-GRU model (purple crosses) over the testing set.
Table 1. Main spiking network parameters.
          Neuron Type     Spiking Amplitude   Firing Rate Scale
Mode 1    Spiking ReLU    0.1                 25
Mode 2    Spiking ReLU    0.05                40
Mode 3    Spiking ReLU    0.01                70
Mode 4    Spiking ReLU    0.1                 90
Mode 5    Spiking ReLU    0.3                 200
Mode 6    Spiking ReLU    0.3                 200
Mode 7    Spiking ReLU    0.5                 400
Mode 8    Spiking ReLU    1.5                 500
Table 2. Mean firing rates (Hz) for each layer.
          Off-Chip Layer   Conv Layer   Dense Layer
Mode 1    8.1              8.3          12.0
Mode 2    1.9              1.8          2.1
Mode 3    7.2              4.9          2.5
Mode 4    1.6              1.2          1.0
Mode 5    1.2              1.3          1.0
Mode 6    1.1              1.0          1.0
Mode 7    1.4              1.1          1.0
Mode 8    3.7              9.6          11.5
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
