
Toward Better PV Panel’s Output Power Prediction; a Module Based on Nonlinear Autoregressive Neural Network with Exogenous Inputs

Department of Computer Engineering, An-Najah National University, Omar Ibn Al-Khattab St., PO Box 7, Nablus, Palestine
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(18), 3670; https://doi.org/10.3390/app9183670
Submission received: 27 July 2019 / Revised: 30 August 2019 / Accepted: 2 September 2019 / Published: 4 September 2019

Abstract

Much work has been carried out on modeling the output power of photovoltaic panels. Using artificial neural networks (ANNs), one can efficiently model the output power of heterogeneous photovoltaic (HPV) panels. However, because many different artificial neural network implementations exist, it has become hard to choose the best approach for a specific application. This raises the need for studies that develop models using the different neural network types and compare the efficiency of these types for that specific application. In this work, two neural network types, namely the nonlinear autoregressive network with exogenous inputs (NARX) and the deep feed-forward (DFF) neural network, have been developed and compared for modeling the maximum output power of HPV panels. Both neural networks have four exogenous inputs and two outputs. MATLAB/Simulink is used to evaluate the two proposed models under a variety of atmospheric conditions. A comprehensive evaluation, including a Diebold-Mariano (DM) test, is applied to verify the ability of the proposed networks. Moreover, the work further investigates the two developed neural networks through an actual implementation on a low-cost microcontroller. Both neural networks performed very well; however, the NARX model performance is much better than that of the DFF. Using the NARX network, a prediction of PV output power could be obtained with half the execution time required by the DFF neural network and with an accuracy of ±0.18 W.

1. Introduction

In the last decade, solar energy has been of great interest to investors, governments, energy operators, and international organizations because of its multiple environmental and economic benefits. To generate electricity from solar energy, one can use either solar thermal systems or photovoltaic (PV) panels. PV panel systems, however, have gained global acceptance and now play an important part in providing sustainable clean energy [1]. Consequently, over the past few years, there has been tremendous growth in PV panel usage all over the globe. This can be clearly seen in Figure 1, in which the x-axis represents the year and the y-axis represents the amount of power produced from PV panels in gigawatts (GW). As illustrated in Figure 1, the global capacity of PV panel installations increased from 6.7 GW to 16.0 GW in two years (2006–2008) and continued to increase, reaching 100.5 GW in the following four years (2008–2012). In total, the installed PV panel capacity worldwide has increased from 6.7 GW to 404 GW over the last 12 years. This continuous growth is anticipated to progress at a similar or even higher rate in the forthcoming years, which can be related to the technological benefits of PV panels. In summary, from 2006 to 2017 there was an exponential increase in the amount of power produced from PV panels [2]. This exponential growth in PV power production is proportional to the exponential increase in the number of PV panels installed to produce such power.
However, the power generated from a PV panel depends on a number of uncertain meteorological factors such as atmospheric temperature, PV panel temperature, and solar irradiance. Thus, accurate prediction of a PV panel's output power is highly difficult. The unpredictability of PV output power has negative effects on the quality, stability, and reliability of an electrical grid system, and it hinders any possibility of effective distribution scheduling and/or management of the generated power [3,4]. On the other hand, precise forecasting of PV output power can improve the reliability of the system; it allows an effective schedule to be kept that maintains power quality, stabilization, and secure operation of the grid [5].
As a result, precise prediction of PV power production is of great importance and constitutes a rich research area with many challenges. This is evident from the amount of research that has been carried out to forecast solar power and PV panel output power from different perspectives. Various methods have been implemented, including numerical weather prediction [2,6,7,8], statistical methods [2,9,10,11], and artificial neural networks (ANNs) [2,12,13,14,15,16]. Several studies have shown that ANNs are more appropriate than other techniques, especially when a complicated and non-linear relationship exists in the data without any prior assumption [17,18,19].
In addition, because of the excellent outcomes achieved in real applications [20,21,22,23], ANNs have become among the most popular models. One great benefit of the ANN model is its ability to model non-linear data associations. The authors in [24,25] employed neural networks for energy consumption approximation. Deb et al. [26] discussed a procedure to predict the daytime cooling energy load of institutional buildings. Paoli et al. [27] developed a multi-layer perceptron for solar radiation forecasting. Mandal et al. [16] presented one-hour-ahead forecasting of PV system power output by combining the wavelet transform with a radial basis function neural network. Saberian et al. [28] used both the feed-forward neural network (FFNN) and the general regression neural network (GRNN) to forecast the power output of photovoltaic cells, and concluded that the FFNN outperformed the GRNN.
All previously cited studies [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] modeled the output power of photovoltaic panels using static neural network techniques (e.g., FFNN). In real-life applications, there are many non-linear systems, such as PV panels, whose behavior varies according to their current state. Therefore, in the past few years, researchers have developed new dynamic forecasting techniques that improve the accuracy and stability of model prediction [29]. Among these techniques are the dynamic recurrent neural network (RNN) and the nonlinear autoregressive network with exogenous inputs (NARX) [30,31]. Wunsch et al. [32] applied NARX to obtain groundwater level forecasts for several wells in southwest Germany. In [33], a convolutional neural network combined with a recurrent neural network (RNN) is used for automatic lip-reading recognition. Ahmad et al. [34] used an autoregressive RNN with exogenous inputs and weather variables to provide a day-ahead forecast of hourly solar irradiance in New Zealand. In [35], an RNN combined with a wavelet neural network (forming a diagonal recurrent wavelet neural network) is presented to perform day-ahead solar irradiance forecasting. As opposed to recurrent networks, NARX has fewer feedback connections, which come only from the output neurons rather than from hidden states. Hence, NARX has lower implementation complexity than RNN, especially when it comes to hardware implementation. Moreover, there are not many studies showing the effect of using such techniques in the PV output power forecasting field.
Nevertheless, the field of PV output power prediction remains open, and much work is required toward obtaining better and more accurate techniques. In the field of PV panels and solar energy, ANNs still have many benefits to offer. Because of the many ANN models and variations, one needs to explore the different kinds of ANNs and their customization to reveal the benefits they can offer under different criteria. Among these criteria are finding small ANNs that fit a low-cost embedded system, providing higher accuracy, and supporting heterogeneity in terms of different geolocations and manufacturing characteristics.
Hence, this study focuses on modeling the output power of heterogeneous PV panels using the NARX neural network. Moreover, a neural network module based on the deep feed-forward (static) network is developed for comparison with and investigation of the proposed network. The deep feed-forward (DFF) network has gained a lot of attention in recent years due to its flexibility and accuracy in different applications. Finally, the work presents a comparison between the running times of both network implementations on a low-cost microcontroller. This implementation allows the proposed work to be used as a submodule in a real-time PV panel monitoring system.

2. Artificial Neural Networks Topology

In this work, two promising neural network architectures have been developed for modeling the output power of HPV panels. The first model is the DFF neural network, and the second is the NARX neural network. The two models were developed to handle PV panels from different manufacturers. The following subsections describe the topology and the training algorithm used to implement these models.

2.1. DFF Neural Network

A DFF network, also called a Multilayer Perceptron (MLP), is a neural network structure that is used in many applications. It is applied to both discrete and discontinuous problems [36,37].
MLPs consist of neurons, which are structured into layers. The input layer and the output layer are the first layer and the last layer, respectively. The hidden layers are the ones in between the first layer and the last layer.
There is a specific function for each layer in the MLP. The input signals are accepted through the input layer and redistributed to all the hidden layer neurons. The hidden layer stimulus patterns are propagated to the output layer, which produces the entire network output pattern. The hidden layer neurons detect the input patterns’ hidden features. The output layer uses these features to calculate the final output. Figure 2 shows the topology of the developed DFF neural network.
As illustrated in Figure 2, the developed DFF model is composed of four layers. The input layer accepts the panel's temperature, the solar radiation, and the panel's Short Circuit Current (SCC) and Open Circuit Voltage (OCV) at 1 kW/m2 and 25 °C. Each of the two hidden layers is composed of five neurons, and each of these neurons uses the hyperbolic tangent sigmoid transfer function as its activation function. The output layer consists of two nodes, which calculate the optimum operating voltage and the optimum operating current corresponding to the inputs at the input layer. Nodes of the output layer use a linear activation function.
Thus, the actual output of the neurons in the computational layers can be represented as in Equations (1)–(3):
Hf_j = \frac{2}{1 + e^{-2\left(\sum_{i=1}^{4} x_i w_{ij} - \theta_j\right)}} - 1   (1)

Hs_k = \frac{2}{1 + e^{-2\left(\sum_{j=1}^{5} Hf_j w_{jk} - \theta_k\right)}} - 1   (2)

Yo_l = \sum_{k=1}^{5} Hs_k w_{kl} - \theta_l   (3)
where Hf_j is the output of the jth node in the first hidden layer, θ_j is the threshold applied to neuron j, w_ij is obtained during the neural network training stage and represents the weight between the ith node in the input layer and the jth node in the first hidden layer, x_i is the value of input i, Hs_k is the output of the kth node in the second hidden layer, θ_k is the threshold applied to neuron k, w_jk is obtained during the training phase and represents the weight between the jth node in the first hidden layer and the kth node in the second hidden layer, Yo_l is the output of the lth node in the output layer, θ_l is the threshold applied to neuron l, and w_kl is obtained during the training phase and represents the weight between the kth node in the second hidden layer and the lth node in the output layer.
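For clarity, the forward pass defined by Equations (1)–(3) can be written compactly in MATLAB. The following is a minimal illustrative sketch, not the authors' implementation; the weight matrices W1, W2, W3 and the threshold vectors th1, th2, th3 are assumed to have been obtained from the training stage described in Section 2.3.

% Minimal sketch of the DFF forward pass in Equations (1)-(3).
% Assumed variables: x is the 4x1 input vector [temperature; radiation; SCC; OCV],
% W1 (5x4), W2 (5x5), W3 (2x5) are trained weight matrices, and th1 (5x1),
% th2 (5x1), th3 (2x1) are the corresponding thresholds.
tansig_fcn = @(n) 2 ./ (1 + exp(-2*n)) - 1;   % hyperbolic tangent sigmoid

Hf = tansig_fcn(W1*x - th1);   % first hidden layer, Equation (1)
Hs = tansig_fcn(W2*Hf - th2);  % second hidden layer, Equation (2)
Yo = W3*Hs - th3;              % linear output layer, Equation (3)
% Yo(1) is the optimum operating voltage and Yo(2) the optimum operating current.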

2.2. NARX Neural Network

NARX is a neural network with recurrent dynamic behavior [38,39,40] that has been used effectively for time series estimation problems. The main difference between the MLP and NARX is that NARX allows weighted feedback connections between layers of neurons. This allows NARX models to consider lagged (past period) values of variables, which makes them suitable for time series analysis. Although the literature offers many other approaches for time series analysis, such as the autoregressive integrated moving average and the autoregressive moving average, these approaches cannot cope with nonlinear problems [41,42]. NARX, however, can efficiently model time series with non-linear behavior.
The NARX neural network can be constructed in one of two architectures [31]: the parallel architecture (namely, the closed-loop architecture) and the series-parallel architecture (namely, the open-loop architecture); see Figure 3.
This study uses the open-loop architecture, because it results in a purely feed-forward network and because accurate past values of the time series are available. The NARX network behavior for time series forecasting can be mathematically modeled by Equation (4) [31]:
y(t) = f\left(y(t-1), y(t-2), \ldots, y(t-n_y), x(t-1), x(t-2), \ldots, x(t-n_x)\right)   (4)
This equation defines how a NARX network is used to forecast the value of a data series y(t). The equation makes use of the previous values of the y series and of another external series x. The unknown function f(·) is a mapping function, and the purpose of the network's training phase is to find an approximation of this mapping function. This is done through an optimization of the neuron biases and the network weights.
As mentioned above, NARX is a model with non-linear behavior that can approximate forthcoming values of a time series using its previously calculated values and some external data. In the presented work, NARX is used to estimate the optimum operating voltage and current. The network uses four exogenous inputs at time t − 1. In addition, it uses two time-series inputs that represent its previous outputs, i.e., the estimated optimum operating voltage and current at time t − 1. Figure 4 shows the developed NARX network discussed in this work.
As illustrated in Figure 4, the first layer has six input nodes. These are the panel temperature, the solar radiation, the panel OCV, the panel SCC, and the two output values previously obtained from the time series. The middle (hidden) layer has five nodes with a hyperbolic tangent sigmoid activation function. The last layer consists of two nodes, which calculate the two outputs of the network: the optimum operating current and voltage. A linear activation function is used in the nodes of the last layer.
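The NARX topology in Figure 4 maps directly onto MATLAB's narxnet function. The sketch below is illustrative only; it assumes the four exogenous series (temperature, radiation, OCV, SCC) are stored in the cell array X and the two targets (optimum voltage and current) in the cell array T, with one input delay and one output delay as listed in Table 3.

% Minimal sketch of building and training the open-loop NARX network.
% X is assumed to be a 1xN cell array of 4x1 exogenous input vectors and
% T a 1xN cell array of 2x1 target vectors [Vr; Ir].
net = narxnet(1, 1, 5, 'open', 'trainlm');   % 1-step input/output delays, 5 hidden neurons
net.layers{1}.transferFcn = 'tansig';        % hyperbolic tangent sigmoid hidden layer
net.layers{2}.transferFcn = 'purelin';       % linear output layer

[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T); % arrange the series and their delayed values
net  = train(net, Xs, Ts, Xi, Ai);           % Levenberg-Marquardt training
Yhat = net(Xs, Xi, Ai);                      % one-step-ahead predictions of [Vr; Ir]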

2.3. Training Algorithm

Many training methods have been developed for neural networks [43,44]; these include the gradient descent approach, the resilient back-propagation approach, the Levenberg-Marquardt (LM) approach, and the Gauss-Newton approach. The LM approach has been selected for training both of the developed topologies (see Figure 3 and Figure 4). The LM training method is less sensitive to local convergence problems and therefore provides better learning. Moreover, it offers a balanced tradeoff between stability and training speed [43].
The LM approach was developed to estimate second-order derivatives without requiring explicit computation of the Hessian matrix. Instead, it requires only the Jacobian matrix and the gradient vector.
Equation (5) is used to calculate the gradient vector of the loss function when the performance function is the mean squared error (MSE). Using the Jacobian matrix, the Hessian matrix can be estimated as shown in Equation (6) [43]:
\nabla f = J^{T} e   (5)

H \approx J^{T} J + \mu I   (6)
In Equations (5) and (6), e denotes the error vector, I represents the identity matrix, the combination coefficient μ is always greater than zero, and the Jacobian matrix J can be defined as in Equation (7):
J_{i,j} = \frac{\partial e_i}{\partial w_j}, \quad \text{where } j = 1, \ldots, m \text{ and } i = 1, \ldots, n   (7)
Equations (5) and (6) can be combined to form the update rule ∆w shown in Equation (8) [43]:
\Delta w = \left(J^{T} J + \mu I\right)^{-1} J^{T} e   (8)
The state diagram for the training algorithm is shown in Figure 5. The training process begins by calculating the loss (MSE), the Hessian approximation, and the gradient. The training objective is to minimize the loss (MSE) as much as possible. This is done by tuning the learning parameter µ as follows (a minimal sketch of this update is given after the list):
  • µ is multiplied by 10 when the current epoch MSE exceeds the previous value.
  • µ is multiplied by 0.1 when the MSE is less than or equal to the previous value.
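The core of the LM iteration in Equations (5)–(8), together with the µ adaptation listed above, can be summarized in a few lines. This is a schematic sketch rather than the toolbox code; w, J, and e stand for the weight vector, the Jacobian, and the error vector of the current epoch.

% Schematic sketch of one Levenberg-Marquardt update (Equations (5)-(8)).
% Assumed variables: w (weights), J (Jacobian of the errors w.r.t. w),
% e (error vector), mu (combination coefficient), mse_prev (previous epoch MSE).
g  = J' * e;                          % gradient of the loss, Equation (5)
H  = J' * J + mu * eye(numel(w));     % Hessian approximation, Equation (6)
dw = H \ g;                           % update step, Equation (8)
w  = w - dw;                          % apply the update

% (recompute the error vector e with the updated weights, then the loss)
mse_new = mean(e.^2);
if mse_new > mse_prev
    mu = mu * 10;                     % loss increased: behave more like gradient descent
else
    mu = mu * 0.1;                    % loss decreased: move toward Gauss-Newton
end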

2.4. Dataset Collection

Meteorological data consisting of solar irradiance and temperature were gathered in real time from the An-Najah Energy Research Center (ERC) [45]. The equipment used to collect the dataset consists of a temperature sensor measuring the PV panel body temperature (namely, the WE710 sensor with ±0.25 °C precision) and a high-accuracy solar radiation sensor (namely, the WE300 sensor with ±1% precision).
Despite the precision of the sensors used, the captured data are sometimes unreliable, noisy, or incomplete, primarily due to sensor error or noise in the connections. Hence, to overcome the noise in the obtained data, a moving average filter [46] was used; see Figure 6. The meteorological data were collected on an hourly basis in the period from February 2018 to July 2018. Anomalies in the data due to a broken or malfunctioning device/sensor were eliminated.
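As an illustration, the smoothing step can be reproduced with a simple centered moving average; the window length below is a placeholder, not the value used in this work.

% Minimal sketch of the moving average filter applied to the raw series.
% temp_raw and irr_raw are assumed column vectors of hourly samples; the
% 5-sample window is an illustrative choice only.
win = 5;
temp_smooth = movmean(temp_raw, win);   % PV panel temperature
irr_smooth  = movmean(irr_raw,  win);   % solar irradiance
% Equivalent FIR form: filter(ones(win,1)/win, 1, irr_raw)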
The targeted output voltage (Vr) and current (Ir) were collected by running a MATLAB code that analyzes the output characteristics of a validated PV model [47,48,49,50]. The dynamic behaviour of this PV model was validated previously by the authors for different types of PV modules under a variety of solar radiation and temperature conditions, with high agreement between the model outcomes and the experimental data. Results and validation of the model can be found in the authors' previous work published in [50].
Three different photovoltaic panels were used to obtain data: the Astronergy-CHSM6610P panel, the Sharp's-NUS0E3E panel, and the Lorentz mono-crystalline panel. Standard test conditions were used when recording the PV panels' manufacturer specifications, which are shown in Table 1.
To model the proposed neural networks, the collected data were separated into three parts: testing (25%), training (60%), and validation (15%). To bring all variables into the same range of values, a normalization process [49] was applied so that all data lie in the range (−1 to +1), using Equation (9):
A = (B - B_{\min}) \left[\frac{A_{\max} - A_{\min}}{B_{\max} - B_{\min}}\right] + A_{\min}   (9)
where A denotes the normalized value and B denotes the non-normalized value. When a result is obtained, it is mapped back to its original (non-normalized) range using Equation (10). Table 2 shows typical examples of the used datasets.
B = (A - A_{\min}) \left[\frac{B_{\max} - B_{\min}}{A_{\max} - A_{\min}}\right] + B_{\min}   (10)
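Equations (9) and (10) correspond to the min-max normalization provided by MATLAB's mapminmax; a minimal sketch under that assumption is shown below (X_raw and T_raw are illustrative names for the raw input and target matrices, one variable per row and one pattern per column, as in Table 2).

% Minimal sketch of Equation (9) normalization and Equation (10) de-normalization.
[xn, xs] = mapminmax(X_raw, -1, 1);   % inputs G, Tc, OCV, SCC -> [-1, 1], Equation (9)
[tn, ts] = mapminmax(T_raw, -1, 1);   % targets Vr, Ir -> [-1, 1], Equation (9)

% ... train the network on xn/tn and obtain normalized predictions yn ...

y = mapminmax('reverse', yn, ts);     % Equation (10): back to volts and amperes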

3. Results and Discussion

3.1. Neural Network Structure

Several trials were performed to determine the neural network configuration. Table 3 presents some important parameters of the selected configurations. The decisions were based on two important attributes: the number of nodes in the hidden layers, and the choice of each layer's activation function.
Choosing each layer's activation function proved to be an important issue. Previous studies showed that a multilayer network with linear output neurons and sigmoid hidden neurons can fit multi-dimensional mapping problems arbitrarily well [36,49]. Hence, in this study, the hyperbolic tangent activation function is used for the hidden layers and the linear activation function is used for the output layer. There is an advantage to using the hyperbolic tangent function, especially with the Levenberg-Marquardt training approach: because this function is differentiable, the neural network weights can be tuned more easily.
The last important parameter that must be studied to complete the structures of the neural networks is the number of nodes in each hidden layer. Table 4 compares the best MSE results for various numbers of hidden-layer neurons for both the NARX and the DFF networks.
As shown in Table 4, the best structure obtained for the DFF model has five neurons in each hidden layer; with this structure, the MSE for training and testing is 0.4167 and 0.4283, respectively. With the NARX model, however, the results were much better: with five neurons in one hidden layer, the MSE for testing and training is 0.006415 and 0.007589, respectively.
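A structure search such as the one summarized in Table 4 can be scripted by looping over candidate hidden-layer sizes; the sketch below uses the same toolbox calls as above, with illustrative variable names (xc and tc are the normalized input and target cell arrays, e.g., produced with con2seq).

% Illustrative sweep over candidate hidden-layer sizes for the NARX structure.
sizes = 2:8;
mse_val = zeros(size(sizes));
for k = 1:numel(sizes)
    net = narxnet(1, 1, sizes(k), 'open', 'trainlm');
    [Xs, Xi, Ai, Ts] = preparets(net, xc, {}, tc);
    net = train(net, Xs, Ts, Xi, Ai);
    Yhat = net(Xs, Xi, Ai);
    mse_val(k) = perform(net, Ts, Yhat);   % MSE over the prepared series
end
[~, best] = min(mse_val);
fprintf('Best NARX hidden-layer size: %d neurons\n', sizes(best));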

3.2. Evaluation of the Proposed Network Models

Before starting the simulation, the dataset was divided into three parts: testing (1680 cases), validation (840 cases), and training (5880 cases). The training and validation sets are used during the training stage: the training part is used to find the neural network biases and weights, while the validation set is used to minimize overfitting. Figure 7 shows the results throughout the training and validation stages.
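In the toolbox, this division into training, validation, and testing cases can be fixed before training; the sketch below is one possible way to reproduce the reported split, with index boundaries that are illustrative and assume the samples are already ordered as desired.

% Illustrative division of the 8400 samples into training (5880),
% validation (840), and testing (1680) cases using index-based division.
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:5880;      % used to fit the weights and biases
net.divideParam.valInd   = 5881:6720;   % used for early stopping (limits overfitting)
net.divideParam.testInd  = 6721:8400;   % held out for the final evaluation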
As shown in Figure 7, the measured error with the NARX model is much lower than with the DFF model. Moreover, Figure 8 illustrates the error autocorrelation function (for the NARX network), which describes how the prediction errors are correlated in time. It can be seen from Figure 8 that there is no significant correlation among the prediction errors; the only autocorrelation value outside the confidence interval occurs at lag 0.
After the training stage is done, the testing part of the dataset is used to evaluate the model and test its performance. Figure 9, Figure 10, Figure 11 and Figure 12 show a real-time comparison between the real data and the simulated results of the network models for PV panels from two different manufacturers.
The results in these figures are obtained from the average hourly solar radiation and temperature distributions during April 2018 (see Figure 6). The comparison of the NARX and DFF results demonstrates that NARX tracks the real-time data more closely than DFF.
To further verify the effectiveness and significance of the proposed models, we employed the Diebold-Mariano (DM) statistic [51]. The DM test compares the forecast accuracy of two models. The MSE is adopted as the loss function used to build the loss differential. The DM test results are presented in Table 5.
According to the DM test results presented in Table 5, the first two test-sample windows indicate that there is no significant difference between the two forecasting models (p-value > 0.05); however, in the third test-sample window, the forecasting performance of the NARX model is better than that of the DFF model (p-value < 0.05).
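For reference, the DM statistic with a squared-error loss can be computed as sketched below. This is a simplified version (no long-run variance correction), and e_dff and e_narx are assumed to be the forecast error vectors of the two models on the same test-sample window.

% Simplified sketch of the Diebold-Mariano test for one-step-ahead forecasts.
d    = e_narx.^2 - e_dff.^2;        % loss differential under the MSE loss
dbar = mean(d);
N    = numel(d);
dm   = dbar / sqrt(var(d) / N);     % DM statistic
p    = erfc(abs(dm) / sqrt(2));     % two-sided p-value under the asymptotic normal
% With this sign convention, a significantly negative dm (p < 0.05) indicates
% that the NARX forecasts are more accurate than the DFF forecasts, as in Table 5.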

3.3. Implementing the Two Networks on a Low-Cost Microcontroller

The aim of this work is to find an accurate forecasting neural network that is suitable for use as a sub-module in PV monitoring, PV fault detection, or any PV system that requires PV output power prediction. To further investigate the two developed networks, both trained ANNs were loaded on the same hardware, namely the ATmega2560 8-bit low-cost microcontroller. Both networks were implemented using the topologies shown in Figure 2 and Figure 4. Equations (1)–(3) were used for implementing the neurons, with weights (wij, wjk, and wkl) obtained from the training phase of the neural networks. The same dataset is used as input on both implementations, and each neural network's output is recorded along with the execution time; see Table 6. As shown in Table 6, there is a difference between the values obtained from the MATLAB simulation and the hardware implementation. This is because the hardware implementation on a low-cost microcontroller requires some rounding of values and a tradeoff between memory and value precision. Nevertheless, the NARX network showed better performance in terms of execution time, as well as better accuracy relative to the simulation results, than the DFF network. Using the NARX network, an output power value is obtained with an accuracy of ±0.18 W and with half the execution time required by the DFF network. Using the DFF network, the same output power calculation requires double the execution time of the NARX network and has a lower accuracy of ±0.59 W.
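The rounding effect can also be approximated offline before deployment by repeating the forward pass with reduced-precision weights. The sketch below is illustrative: W1, W2, W3 and th1, th2, th3 are assumed to be the exported DFF weights and thresholds, x is one input vector, and the normalization/de-normalization steps of Equations (9) and (10) are omitted for brevity.

% Illustrative comparison of double- vs. single-precision forward passes,
% approximating the value rounding performed on the low-cost microcontroller.
fwd = @(x, W1, th1, W2, th2, W3, th3) ...
      W3 * tanh(W2 * tanh(W1*x - th1) - th2) - th3;       % Equations (1)-(3)

y64 = fwd(x, W1, th1, W2, th2, W3, th3);                  % full double precision
y32 = double(fwd(single(x), single(W1), single(th1), ...
                 single(W2), single(th2), single(W3), single(th3)));

fprintf('Predicted-power shift due to reduced precision: %.4f\n', ...
        abs(prod(y64) - prod(y32)));                      % P = V x I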

4. Conclusions

This paper discusses a novel and more accurate heterogeneous photovoltaic (HPV) output power modeling topology using the nonlinear autoregressive network with exogenous inputs (NARX). The work compares the proposed NARX neural network structure with the commonly used deep feed-forward (DFF) neural network structure. Both structures have been used to model the maximum output power of HPV panels.
The training phases of the neural networks are performed periodically. Several simulations were carried out with different evaluation criteria. Both neural networks have decent modeling performance; nevertheless, NARX outperformed DFF, with a mean squared error of 0.007589.
Both models were further investigated by implementing them on real hardware for performance and execution time evaluation. The nonlinear autoregressive network with exogenous inputs outperforms the deep feed-forward neural network in both accuracy and execution time. Using the NARX neural network, a prediction of PV output power could be obtained with half the execution time required to obtain the same prediction with the deep feed-forward neural network and with an accuracy of ±0.18 W.
Future research will be carried out on using this network model in PV fault detection. The NARX model has the ability to capture the nonlinear association between predictors, such as irradiance and temperature, to determine an accurate maximum power point (MPP).

Author Contributions

Conceptualization, E.N. and S.S.; methodology, E.N. and S.S.; software, E.N. and S.S.; validation, E.N. and S.S.; formal analysis, E.N. and S.S.; investigation, E.N. and S.S.; resources, E.N. and S.S.; data curation, E.N. and S.S.; writing—original draft preparation, E.N. and S.S.; writing—review and editing, S.S.; visualization, E.N. and S.S.

Acknowledgments

We acknowledge the support of An Najah National University.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANNs            Artificial Neural Networks
DFF             Deep Feed-Forward
DM              Diebold-Mariano
DP              Data Pattern
FFNN            Feed-Forward Neural Network
G               Solar radiation
GRNN            General Regression Neural Network
HPV             Heterogeneous Photovoltaic
LM              Levenberg-Marquardt
MLP             Multilayer Perceptron
MSE             Mean Square Error
NARX network    Nonlinear Autoregressive network with exogenous inputs
NDP             Normalized Data Pattern
OCV             Open Circuit Voltage
PV              Photovoltaic
p-Value         Probability Value
RNN             Recurrent Neural Network
SCC             Short Circuit Current
Tc              Panel temperature
TDL             Tapped Delay Line

References

  1. Gielen, D.; Boshell, F.; Saygin, D.; Bazilian, M.D.; Wagner, N.; Gorini, R. The role of renewable energy in the global energy transformation. Energy Strategy Rev. 2019, 24, 38–50. [Google Scholar] [CrossRef]
  2. Kumar Das, U.; Soon Tey, K.; Seyedmahmoudian, M.; Mekhilef, S.; Idna Idris, M.Y.; Van Deventer, W.; Horanc, B.; Stojcevski, A. Forecasting of photovoltaic power generation and model optimization: A review. Renew. Sustain. Energy Rev. 2018, 81, 912–928. [Google Scholar] [CrossRef]
  3. Strzalka, A.; Alam, N.; Duminil, E.; Coors, V.; Eicker, U. Large scale integration of photovoltaics in cities. Appl. Energy 2012, 93, 413–421. [Google Scholar] [CrossRef]
  4. Woyte, A.; Van Thong, V.; Belmans, R.; Nijs, J. Voltage fluctuations on distribution level introduced by photovoltaic systems. IEEE Trans. Energy Convers. 2006, 21, 202–209. Available online: https://ieeexplore.ieee.org/abstract/document/1597338 (accessed on 27 July 2019). [CrossRef]
  5. Martin, L.; Zarzalejo, L.F.; Polo, J.; Navarro, A.; Marchante, R.; Cony, M. Prediction of global solar irradiance based on time series analysis: Application to solar thermal power plants energy production. Sol. Energy 2010, 84, 1772–1781. [Google Scholar] [CrossRef]
  6. Heinemann, D.; Lorenz, E.; Girodo, M. Forecasting of solar radiation. In Solar Energy Resource Management for Electricity Generation from Local Level to Global Scale; Nova Science Publishers: New York, NY, USA, 2006; Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.526.2530&rep=rep1&type=pdf (accessed on 27 July 2019).
  7. Perez, R.; Moore, K.; Wilcox, S.; Renn’e, D.; Zelenka, A. Forecasting solar radiation-preliminary evaluation of an approach based upon the national forecast database. Sol. Energy 2007, 81, 809–812. [Google Scholar] [CrossRef]
  8. Kudo, M.; Takeuchi, A.; Nozaki, Y.; Endo, H.; Sumita, J. Forecasting electric power generation in a photovoltaic power system for an energy network. Electr. Eng. Jpn. 2009, 167, 16–23. [Google Scholar] [CrossRef]
  9. Oudjana, S.H.; Hellal, A.; Mahamed, I.H. Short term photovoltaic power generation forecasting using neural network. In Proceedings of the 11th International Conference on Environment and Electrical Engineering (EEEIC), Venice, Italy, 17–19 May 2012; pp. 706–711. Available online: https://ieeexplore.ieee.org/document/6221469 (accessed on 27 July 2019).
  10. Huang, R.; Huang, T.; Gadh, R.; Li, N. Solar generation prediction using the ARMA model in a laboratory-level micro-grid. In Proceedings of the Third International Conference on Smart Grid Communications (SmartGridComm), Tainan, Taiwan, 5–8 November 2012; pp. 528–533. Available online: https://ieeexplore.ieee.org/abstract/document/6486039 (accessed on 27 July 2019).
  11. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  12. Yona, A.; Senjyu, T.; Funabshi, T.; Sekine, H. Application of neural network to 24- hours-ahead generating power forecasting for PV system. Ieej. Trans. Power Energy 2008, 128, 33–39. Available online: https://ieeexplore.ieee.org/document/4441657 (accessed on 27 July 2019). [CrossRef]
  13. Capizzi, G.; Napoli, C.; Bonanno, F. Innovative second-generation wavelets construction with recurrent neural networks for solar radiation forecasting. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1805–1815. Available online: https://ieeexplore.ieee.org/document/6320656 (accessed on 27 July 2019). [CrossRef]
  14. Mellit, A.; Pavan, A.M. A 24-h forecast of solar irradiance using artificial neural network: Application for performance prediction of a grid-connected PV plant at Trieste, Italy. Sol. Energy 2010, 84, 807–821. [Google Scholar] [CrossRef]
  15. Wang, F.; Mi, Z.; Su, S.; Zhao, H. Short-term solar irradiance forecasting model based on artificial neural network using statistical feature parameters. Energies 2012, 5, 1355–1370. [Google Scholar] [CrossRef]
  16. Mandal, P.; SwarroopMadhira, S.T.; Ul haque, A.; Meng, J.; Pineda, R.L. Forecasting Power Output of Solar Photovoltaic System Using Wavelet Transform and Artificial Intelligence Techniques. Procedia Comput. Sci. 2012, 12, 332–337. [Google Scholar] [CrossRef] [Green Version]
  17. Neto, A.H.; Fiorelli, F.A.S. Comparison between detailed model simulation and artificial neural network for forecasting building energy consumption. Energy Build. 2008, 40, 2169–2176. [Google Scholar] [CrossRef]
  18. Aydinalp-Koksal, M.; Ugursal, V.I. Comparison of neural network, conditional demand analysis, and engineering approaches for modeling end-use energy consumption in the residential sector. Appl. Energy 2008, 85, 271–296. [Google Scholar] [CrossRef]
  19. Cavaleri, L.; Asteris, P.G.; Psyllaki, P.P.; Douvika, M.G.; Skentou, A.D.; Vaxevanidis, N.M. Prediction of Surface Treatment Effects on the Tribological Performance of Tool Steels Using Artificial Neural Networks. Appl. Sci. 2019, 9, 2788. [Google Scholar] [CrossRef]
  20. Ding, N.; Benoit, C.; Foggia, G. Neural network-based model design for short-term load forecast in distribution systems. IEEE Trans. Power Syst. 2016, 31, 72–81. Available online: https://ieeexplore.ieee.org/document/7031444?tp=&arnumber=7031444 (accessed on 27 July 2019). [CrossRef]
  21. Panapakidis, I.P.; Dagoumas, A.S. Day-ahead electricity price forecasting via the application of artificial neural network based models. Appl. Energy 2016, 172, 132–151. [Google Scholar] [CrossRef]
  22. Chae, Y.T. Artificial neural network model for forecasting sub-hourly electricity usage in commercial buildings. Energy Build. 2016, 111, 184–194. [Google Scholar] [CrossRef]
  23. Macas, M.; Moretti, F.; Fonti, A.; Giantomassi, A.; Comodi, G.; Annunziato, M.; Pizzuti, S.; Capra, C.A. The role of data sample size and dimensionality in neural network based forecasting of building heating related variables. Energy Build. 2016, 111, 299–310. [Google Scholar] [CrossRef]
  24. Baca Ruiz, L.G.; Cuéllar, M.P.; Calvo-Flores, M.D.; Pegalajar Jiménez, M.D.C. An Application of Non-Linear Autoregressive Neural Networks to Predict Energy Consumption in Public Buildings. Energies 2016, 9, 684. [Google Scholar] [CrossRef]
  25. Karatasou, S.; Santamouris, M.; Geros, V. Modeling and predicting building’s energy use with artificial neural networks: Methods and results. Energy Build. 2006, 38, 949–958. [Google Scholar] [CrossRef]
  26. Deb, C. Forecasting diurnal cooling energy load for institutional buildings using Artificial Neural Networks. Energy Build. 2016, 121, 284–297. [Google Scholar] [CrossRef]
  27. Paoli, C.; Voyant, C.; Muselli, M.; Nivet, M.L. Forecasting of preprocessed daily solar radiation time series using neural networks. Sol. Energy 2010, 84, 2146–2160. [Google Scholar] [CrossRef] [Green Version]
  28. Saberian, A.; Hizam, H.; Radzi, M.A.M.; Ab-Kadir, M.Z.A.; Mirzaei, M. Modelling and prediction of photovoltaic power output using artificial neural networks. Int. J. Photoenergy 2014, 1–10. [Google Scholar] [CrossRef]
  29. Cao, Q.; Ewing, B.T.; Thompson, M.A. Forecasting wind speed with recurrent neural networks. Eur. J. Oper. Res. 2012, 221, 148–154. [Google Scholar] [CrossRef]
  30. Mohanty, S.; Patra, P.K.; Sahoo, S.S. Prediction of global solar radiation using nonlinear auto regressive network with exogenous inputs (narx). In Proceedings of the 2015 39th National Systems Conference (NSC), Odisha, India, 14–16 December 2015; Available online: https://ieeexplore.ieee.org/document/7489103 (accessed on 27 July 2019).
  31. Boussaada, Z.; Curea, O.; Remaci, A.; Camblong, H.; Bellaaj, N.M. A Nonlinear Autoregressive Exogenous (NARX) Neural Network Model for the Prediction of the Daily Direct Solar Radiation. Energies 2018, 11, 620. [Google Scholar] [CrossRef]
  32. Wunsch, A.; Liesch, T.; Broda, S. Forecasting groundwater levels using nonlinear autoregressive networks with exogenous input (NARX). J. Hydrol. 2018, 567, 743–758. [Google Scholar] [CrossRef]
  33. Lu, Y.; Li, H. Automatic Lip-Reading System Based on Deep Convolutional Neural Network and Attention-Based Long Short-Term Memory. Appl. Sci. 2019, 9, 1599. [Google Scholar] [CrossRef]
  34. Ahmad, A.; Anderson, T.N.; Lie, T.T. Hourly global solar irradiation forecasting for New Zealand. Sol. Energy 2015, 122, 1398–1408. [Google Scholar] [CrossRef] [Green Version]
  35. Cao, J.; Lin, X. Study of hourly and daily solar irradiation forecast using diagonal recurrent wavelet neural networks. Energy Convers. Manag. 2008, 49, 1396–1406. [Google Scholar] [CrossRef]
  36. Negnevitsky, M. Artificial Intelligence: A Guide to Intelligent Systems; Addison Wesley: Essex, UK, 2004. [Google Scholar]
  37. Abiodun, O.I.; Aman Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the art in artificial neural network applications: A survey. Heliyon 2018, 4, 1–41. [Google Scholar] [CrossRef]
  38. Pisoni, E.; Farina, M.; Carnevale, C.; Piroddi, L. Forecasting peak air pollution levels using NARX models. Eng. Appl. Artif. Intell. 2009, 22, 593–602. [Google Scholar] [CrossRef]
  39. Petroșanu, D.-M. Designing, Developing and Validating a Forecasting Method for the Month Ahead Hourly Electricity Consumption in the Case of Medium Industrial Consumers. Processes 2019, 7, 310. [Google Scholar] [CrossRef]
  40. Ma, Y.; Liu, H.; Zhu, Y.; Wang, F.; Luo, Z. The NARX Model-Based System Identification on Nonlinear, Rotor-Bearing Systems. Appl. Sci. 2017, 7, 911. [Google Scholar] [CrossRef]
  41. Aladag, C.H.; Egrioglu, E.; Kadilar, C. Forecasting nonlinear time series with a hybrid methodology. Appl. Math. Lett. 2009, 22, 1467–1470. [Google Scholar] [CrossRef] [Green Version]
  42. Yuan, C.; Liu, S.; Fang, Z. Comparison of China’s primary energy consumption forecasting by using ARIMA (the autoregressive integrated moving average) model and GM(1,1) model. Energy 2016, 100, 384–390. [Google Scholar] [CrossRef]
  43. Yu, H.; Wilamowski, B.M. Levenberg–Marquardt Training. In The Industrial Electronics Handbook, 2nd ed.; Intelligent Systems; CRC Press: Boca Raton, FL, USA, 2011; Volume 5, Available online: http://www.eng.auburn.edu/~wilambm/pap/2011/K10149_C012.pdf (accessed on 27 July 2019).
  44. Cömert, Z.; Kocamaz, A.F. A study of artificial neural network training algorithms for classification of cardiotocography signals. J. Sci. Technol. 2017, 7, 93–103. Available online: https://dergipark.org.tr/download/article-file/391300 (accessed on 27 July 2019). [CrossRef]
  45. Energy Research Center. 2019. Available online: https://www.najah.edu/en/community/scientific-centers/ (accessed on 27 July 2019).
  46. Smith, S.W. Digital Signal Processing: A Practical Guide for Engineers and Scientists; Spectrum, Inc.: San Diego, CA, USA, 2002. [Google Scholar]
  47. Natsheh, E.M.; Natsheh, A.R.; Albarbar, A.H. An automated tool for solar power systems. Appl. Sol. Energy 2014, 50, 221–227. Available online: https://link.springer.com/article/10.3103/S0003701X14040094 (accessed on 27 July 2019). [CrossRef]
  48. Samara, S.; Natsheh, E. Modeling the output power of heterogeneous photovoltaic panel’s based on artificial neural networks using low cost microcontrollers. Heliyon 2018, 4, 1–18. [Google Scholar] [CrossRef]
  49. Samara, S.; Natsheh, E. Intelligent Real-Time Photovoltaic Panel Monitoring System Using Artificial Neural Networks. IEEE Access. 2019, 7, 50287–50299. Available online: https://ieeexplore.ieee.org/abstract/document/8691444 (accessed on 27 July 2019). [CrossRef]
  50. Natsheh, E.M.; Albarbar, A. Photovoltaic model with MPP tracker for standalone/grid connected applications. In Proceedings of the IET Renewable Power Generation Conference, Edinburgh, UK, 6–8 September 2011. [Google Scholar]
  51. Tian, C.; Hao, Y. A Novel Nonlinear Combined Forecasting System for Short-Term Load Forecasting. Energies 2018, 11, 712. [Google Scholar] [CrossRef]
Figure 1. The worldwide growth of total photovoltaic (PV) systems installed from 2000 to 2017.
Figure 2. The developed deep feed-forward (DFF) network topology.
Figure 3. The two neural network architectures of nonlinear autoregressive network with exogenous inputs (NARX).
Figure 4. The developed NARX network topology.
Figure 5. Levenberg-Marquardt (LM) state diagram.
Figure 6. Average hourly temperature and solar radiation distributions throughout April 2018: (a) with the moving average filter; (b) without the moving average filter.
Figure 7. Regression of error for both the NARX and DFF neural networks.
Figure 8. NARX error autocorrelation: (a) zoomed-out view; (b) zoomed-in view.
Figure 9. Real-time comparison using the Astronergy-CHSM6610P PV panel: (a) actual power vs. the simulated DFF network output; (b) DFF voltage and current errors.
Figure 10. Real-time comparison using the Sharp's-NUS0E3E PV panel: (a) actual power vs. the simulated DFF network output; (b) DFF voltage and current errors.
Figure 11. Real-time comparison using the Astronergy-CHSM6610P PV panel: (a) actual power vs. the simulated NARX network output; (b) NARX voltage and current errors.
Figure 12. Real-time comparison using the Sharp's-NUS0E3E PV panel: (a) actual power vs. the simulated NARX network output; (b) NARX voltage and current errors.
Table 1. The used PV panels' manufacturer specifications.

Parameter                      Astronergy-CHSM6610P    Sharp's NUS0E3E    Lorentz Mono-Crystalline
Max power (Pm)                 225 W                   180 W              75 W
Voltage at max power           29.76 V                 23.7 V             16.5 V
Current at max power           7.55 A                  7.6 A              4.6 A
Open circuit voltage           36.88 V                 30 V               21.0 V
Short circuit current          8.27 A                  8.37 A             5.4 A
OCV temperature coefficient    −0.129 V/°C             −104 mV/°C         −60.8 mV/°C
SCC temperature coefficient    +0.052%/°C              +0.053%/°C         3.0 mA/°C
Table 2. Typical random examples of the training sets.

            1st Pattern      2nd Pattern      3rd Pattern       4th Pattern      5th Pattern      6th Pattern
            NDP      DP      NDP      DP      NDP      DP       NDP      DP      NDP      DP      NDP      DP
G (W/m2)    −0.0315  485     −0.1859  410     0.1256   565      −0.3467  330     0.01507  510     0.21608  610
Tc (°C)     0.0      20      −0.3333  15      0.3333   25       −0.6666  10      0.0      20      0.6666   30
OCV a       1.0      36.88   1.0      36.88   0.1335   30       0.1335   30      −1.0     21      −1.0     21
SCC a       0.9326   8.27    0.9326   8.27    1.0      8.37     1.0      8.37    −1.0     5.4     −1.0     5.4
Vr (V)      0.7510   30.44   0.8092   31.35   0.3538   23.615   0.4941   25.89   −0.0319  16.79   −0.0999  15.88
Ir (A)      −0.1131  3.56    −0.2573  2.99    0.0447   4.21     −0.4686  2.15    −0.4699  2.16    −0.3361  2.65
a These input parameters allow the proposed networks to handle different types of PV panels.
Table 3. Attributes of the proposed neural network models' structures. MSE: mean squared error.

Attribute                            NARX Structure          DFF Structure
Number of hidden layers              1                       2
Normalization interval of dataset    [−1, 1]                 [−1, 1]
Tapped delay line (TDL)              1 b                     0
Error                                MSE                     MSE
Training approach                    Levenberg-Marquardt     Levenberg-Marquardt

b One input delay and one output delay.
Table 4. Best-obtained results vs. the hidden layers' number of neurons.

NARX Model                                       DFF Model
Number of Neurons   MSE Train    MSE Test        Number of Neurons   MSE Train   MSE Test
2                   0.0365       0.0465          2 × 2               1.5618      2.9642
3                   0.0131       0.0315          3 × 3               0.7764      0.8642
4                   0.00964      0.010065        4 × 4               0.5831      0.6945
5                   0.007589     0.006415        4 × 5               0.4816      0.4951
6                   0.010832     0.009921        5 × 5               0.4167      0.4283
7                   0.009412     0.007052        5 × 6               0.4576      0.4891
8                   0.008641     0.007731        6 × 5               0.4317      0.5172
Table 5. Results of the Diebold-Mariano (DM) test.

              Average Value
              First Test-Sample Window (406)   Second Test-Sample Window (912)   Third Test-Sample Window (1680)
DM test c     1.6873                           1.3783                            −6.5539
p-value       0.0915                           0.1681                            <0.00001

c Significance level is 5%.
Table 6. Execution time and accuracy results of the DFF and NARX network hardware implementations on a low-cost microcontroller. Inputs: G (W/m2), Tc (°C), OCV (V), SCC (A); output power in W.

                                      MATLAB Simulation                        DFF Hardware Implementation      NARX Hardware Implementation
#    G     Tc    OCV     SCC    DFF Output Power    NARX Output Power    Output Power    Time (ms)        Output Power    Time (ms)
1    100   5     36.88   8.27   18.5462685          19.80749509          19.9485         13.328           19.4607         7.632
2    135   5     36.88   8.27   27.45394878         28.74701286          29.3508         13.312           28.557          7.584
3    195   10    36.88   8.27   41.66625045         42.9396561           43.0836         13.264           42.9358         7.68
4    315   15    36.88   8.27   69.91714602         71.1672504           69.7662         13.168           71.185          7.712
5    430   20    36.88   8.27   95.0832446          96.29310936          94.5768         13.232           96.5568         7.616
6    100   5     30      8.37   12.25372005         13.14547358          11.8704         13.408           13.4136         7.504
7    135   5     30      8.37   18.60995015         19.53232499          18.0553         13.408           19.6812         7.552
8    195   10    30      8.37   30.05208908         30.98741522          29.1856         13.616           30.8238         7.648
9    315   15    30      8.37   52.92692456         53.88320892          52.269          13.568           53.664          7.648
10   430   20    30      8.37   74.6095383          75.55625954          74.2356         13.44            75.831          7.44
11   100   5     21      5.4    2.53894176          2.79904892           2.3448          12.64            2.6946          7.184
12   135   5     21      5.4    4.6246709           4.97279076           4.68            12.704           5.1604          7.168
13   195   10    21      5.4    9.40729383          9.839436255          9.654           12.88            10.0772         7.424
14   315   15    21      5.4    19.81514307         20.26637574          19.9056         12.848           20.2059         7.392
15   430   20    21      5.4    29.38812465         29.82476539          29.232          12.592           30.082          7.408
