Article

Artificial Neural Network Simulation of Energetic Performance for Sorption Thermal Energy Storage Reactors

by Carla Delmarre, Marie-Anne Resmond, Frédéric Kuznik, Christian Obrecht, Bao Chen and Kévyn Johannes
1 Université de Lyon, CNRS, INSA-Lyon, Université Claude Bernard Lyon1, CETHIL UMR5008, 69621 Villeurbanne, France
2 LafargeHolcim Innovation Center, 95 rue du Montmurier BP15, 38291 Saint-Quentin-Fallavier, France
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Energies 2021, 14(11), 3294; https://doi.org/10.3390/en14113294
Submission received: 29 April 2021 / Revised: 28 May 2021 / Accepted: 31 May 2021 / Published: 4 June 2021
(This article belongs to the Special Issue Advances in Heat Storage and Transformation Systems)

Abstract
Sorption thermal heat storage is a promising solution to improve the development of renewable energies and to promote a rational use of energy, both for industry and households. These systems store thermal energy through physico-chemical sorption/desorption reactions, also termed hydration/dehydration. Their introduction to the market requires assessing their energy performance, usually analysed through numerical simulation of the overall system. To address this, physical models are commonly developed and used. However, simulations based on such models are time-consuming, which prevents their use for yearly simulations. Artificial neural network (ANN)-based models, which are known for their computational efficiency, may overcome this issue. Therefore, the main objective of this study is to investigate the use of an ANN model to simulate a sorption heat storage system, instead of using a physical model. The neural network is trained using experimental results in order to evaluate this approach on actual systems. By using a recurrent neural network (RNN) and the Deep Learning Toolbox in MATLAB, good accuracy is reached, and the predicted results are close to the experimental results. The root mean squared error for the prediction of the temperature difference during the thermal energy storage process is less than 3 K for both hydration and dehydration, the maximal temperature difference being, respectively, about 90 K and 40 K.

1. Introduction

Renewable energy deployment is often combined with Thermal Energy Storage (TES) to balance energy production and demand, e.g., storing summer heat for winter heating. Chemical and sorption TES have been identified as promising technologies to solve the seasonal mismatch of solar energy storage [1], offering high energy densities of around 600 kWh/m³ and 200 kWh/m³, respectively [2]. Despite their high energy densities, chemical and sorption TES suffer from their low technology readiness level (typically two to three), which justifies the intensive research on this topic over the last ten years [1,2,3,4,5,6,7,8]. The main scientific bottlenecks are improving heat and mass transfer in the TES during hydration and better integrating the TES within the system to increase the overall efficiency. Recently, the analysis of the energy chain of a zeolite TES has shown that approximately 70% of the absorbed energy is converted into useful heat released on discharge. However, approximately half of this heat is then directly lost at the outlet of the adsorbent bed, so that the overall system efficiency is 36% [9]. Numerical simulation is usually used to address such problems, with tools such as TRNSYS or EnergyPlus that require robust and time-efficient TES numerical models compatible with yearly simulation.
Detailed physical models of TES have been developed and reported in the literature [9,10]. Usually based on the analysis of heat and mass transfer in the reactor, they are complex and require large computational efforts, making them difficult to use for yearly simulations. One possible strategy to overcome this difficulty is the use of regression methods based on machine learning, which are considerably less demanding in terms of computation. Support-vector regression (SVR) is one classic option [11] for reproducing complex time series but, with the advances in machine learning, SVR approaches are nowadays generally outperformed by artificial neural network (ANN)-based techniques. Basically, an ANN is composed of different layers: an input layer receiving the information given to the network, hidden layers modelling the influence among the different elements, and an output layer which gives the desired results [12].
Recently, Scapino et al. [5] developed a potassium carbonate (K2CO3) TES model based on an artificial neural network (ANN). The network was trained with a set of simulated data produced by a physics-based model. It showed a significant reduction in simulation time and acceptable discrepancies with respect to the physics-based model. Given these promising results, the objective of our research work is to develop a neural network that assesses the performance of a zeolite TES. In our approach, however, experimental data instead of simulation data are used for the training and the validation of the neural network. We chose to use a recurrent neural network (RNN), as this category of ANN was designed for sequence modelling and therefore naturally applies to time series forecasting [13]. The main reason for the effectiveness of RNNs is their ability to capture past information from data and use it for upcoming sequence steps.
The studied sorption thermal energy storage system is presented in Figure 1. The main part of the system consists of two vertical packed-bed reactors (top and bottom, Figure 1b,d). Each cylindrical reactor (72 cm in diameter) can load 40 kg of zeolite material (Alfa Aesar, 13X, beads of 1.6 mm to 2.5 mm, see Figure 1a). An air treatment system (Figure 1c) drives the sorbate flow at various temperature and humidity levels into the reactors according to the different dehydration and hydration tests. At the inlet and outlet of the reactors, temperature sensors, hygrometers and flow meters are installed to measure the properties of the airflow. More details about the reactor system are available in [2]. The performance of the system can be explored by varying the charging temperature, airflow rate, relative humidity in desorption mode, bed thickness, and serial or parallel reactor test configurations [2]. Since we aimed to approach daily use cases, several sets of results were chosen (see Section 2.1) for ANN training and validation.

2. Methodology

2.1. Recurrent Neural Network

Recurrent neural networks (RNNs) form a category of ANNs which may be seen as general sequence processors. Contrary to more widespread ANNs, such as multilayered perceptrons, which propagate data forwards only, RNNs propagate data both forwards and backwards. This feature allows RNNs to process sequences of data, and notably to handle time-dependent situations. In this study, we chose to use long short-term memory (LSTM) recurrent neural networks [13]. LSTM is able to overcome the vanishing and exploding gradient problems [15] often encountered in recurrent networks. Basically, this is achieved via special gates which determine which past information should be passed on and which should be forgotten.
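For reference, a standard formulation of the LSTM cell (the notation below is a common one, not taken from [13] verbatim) reads:
$$ f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o), $$
$$ \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t), $$
where $x_t$ is the input at step $t$, $h_t$ the hidden state, $c_t$ the cell state, $\sigma$ the logistic function and $\odot$ the element-wise product. The forget gate $f_t$ and the input gate $i_t$ control which past and which new information enters the cell state; since $c_t$ is updated additively, gradients can propagate over long sequences without vanishing.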
In this work, the data were organised to fit the created RNN model. The relevant parameters (temperature, air flow, air humidity, temperature difference) were selected based on a study of the material properties. Firstly, the same experimental data covering both cycles (hydration/dehydration) were used both to train and to validate the RNN model, in order to verify the validity of this approach. Then, the data were separated into distinct training and validation sets and used with a second RNN, in order to provide a reliable evaluation of the accuracy of the model.
Training a neural network requires both a learning dataset and a validation dataset, with inputs and corresponding outputs. A large amount of learning data is recommended to obtain a well-performing neural network; the amount of validation data is usually significantly smaller. The root mean square error (RMSE, Equation (1)) was used to evaluate the accuracy of the model during the validation phase:
$$ \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^{2}}{n}}, \qquad (1) $$
where $\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n$ are the predicted values, $y_1, y_2, \ldots, y_n$ are the observed values, and $n$ is the number of observations.
It should be noted that the RMSE is the square root of the variance of the residuals, i.e., an absolute measure of fit: the lower its value, the more accurate the model. However, since the RMSE depends on the scale of the data used, there is no normalised value against which the reliability of the model can be assessed.
In order to construct a model which is as accurate as possible, the different parameters of an RNN (e.g., the number of hidden layers, the number of epochs and the batch size) are generally tested to find the best combination. The number of epochs corresponds to the number of times the whole training set is processed. As the number of epochs increases, the output of the neural network goes from underfitting, through an optimum, to overfitting. The batch divides the total amount of training data: its size corresponds to the number of training samples used in one pass. Therefore, the smaller this number, the longer the training of the neural network takes, since a smaller amount of data is used for each propagation. The number of hidden layers corresponds to the size of the network and the number of neurons used for learning. The larger this number, the larger the neural network and the longer it takes to train.
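As an illustration, the one-at-a-time parametric sweep reported in Section 3.1 and Appendix C could be scripted as sketched below; trainAndValidate() is a hypothetical helper wrapping the training and RMSE evaluation steps of Appendix A, not a function of the original code:

% Sketch of a one-at-a-time hyperparameter sweep (values from Appendix C).
% trainAndValidate() is an assumed helper that trains the network of
% Appendix A with the given (miniBatchSize, maxEpochs, numHiddenUnits)
% and returns the mean validation RMSE.
maxEpochsList = 100:100:1400;
default = 200;                     % the two remaining parameters are fixed at 200
rmseEpochs = zeros(numel(maxEpochsList),1);
for k = 1:numel(maxEpochsList)
    rmseEpochs(k) = trainAndValidate(default, maxEpochsList(k), default);
end
% Analogous loops over miniBatchSize (50 to 900) and numHiddenUnits
% (100 to 700) produce the curves of Figure 2a,c.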

2.2. Data Processing

The data sets originating from [2] must be processed before training the neural network. The parameters selected from the data as inputs are: the relative humidity measured at the entry of the reactor (RH_in), the temperature of the air flow at the entry of the reactor (T_in), and the rate of air flow inside the reactor (Q_v). The parameter selected as output is the temperature difference (ΔT) between the entry and the exit of the reactor. These parameters are used for both hydration and dehydration in order to construct a neural network that can predict both phases. The MATLAB code used for training and validation is given in Appendix A.
Six different experiments of hydration and three experiments of dehydration are used for the training, corresponding to a total of 15,700 measurements. For the validation of the model, three experiments of hydration and one of dehydration, 5500 measurements in total, are used.
All experiments used during the training process have to be in the same database. Yet, for proper training of the neural network, the different experiments have to be kept separate. Therefore, during data processing, all experiments are numbered: every data row from the first experiment is prefixed with 1, and so on. Thanks to a data processing code (see Appendix B), the neural network can then treat each experiment as a separate cycle. The experiments were arranged in random order.
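As a sketch, the experiment index expected by prepareData() (Appendix B) could be produced as follows; the file names and the column layout are illustrative, not those of the original experiments:

% Sketch: build the tagged training database expected by prepareData().
% Each row is prefixed with its experiment number (file names are
% hypothetical; columns assumed to be RHin, Tin, Qv, deltaT).
files = ["exp1.txt" "exp2.txt" "exp3.txt"];
allData = [];
for i = 1:numel(files)
    raw = dlmread(files(i));
    allData = [allData; [i*ones(size(raw,1),1) raw]]; %#ok<AGROW>
end
dlmwrite("TrainingX.txt", allData(:,1:4));    % experiment index + 3 inputs
dlmwrite("TrainingY.txt", allData(:,[1 5]));  % experiment index + output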

3. Results and Discussion

3.1. Determination of the Optimal RNN Parameters for Training

A parametric study on three RNN parameters (mini batch size, maximum number of epochs, number of hidden layers) was performed by analysing the RMSE of the outlet temperature; the lower the RMSE, the better the result. One parameter is varied at a time while the other two are kept at a constant value of 200. Figure 2 summarises the influence of each parameter, giving the RMSE as a function of its value. Figure 2a shows the influence of the batch size: a small mini batch size gives better results than a larger one. Indeed, fewer data go through one pass, and therefore the RNN updates itself more efficiently because it processes less data at a time. Figure 2b shows the influence of the maximum number of epochs. The corresponding curve has an erratically decreasing shape, showing that there is an optimum number of epochs (800 or 1300) that is not necessarily the largest one. Moreover, the curve has a repetitive shape, suggesting some determinism in the numbers of epochs for which training is efficient. Finally, Figure 2c shows an optimum number of hidden layers (400) that is not the highest one. It also suggests that an even number of hidden layers works better than an uneven one.
From Figure 2, an optimal combination can be extracted: a mini batch size between 100 and 300, 400 hidden layers, and 800 or 1300 epochs. Indeed, these settings gave the lowest RMSE in each simulation. Note that the calculation time increases considerably as the number of epochs or the number of hidden layers increases. For our data set, the calculation time remained below 3 h.

3.2. Validation of the Model with the Same Training and Verification Dataset

According to the previous section, the parameters of the RNN were set to (200, 800, 400), i.e., (mini batch size, number of epochs, number of hidden layers). First, the model was trained with the data sets referenced as training in Table 1 and Table 2, nine training experiments in total. Then, the results were compared against the same data sets, in order to determine whether the model is able to reproduce the results of the training data. The RMSEs are shown in Table 3 for the nine trainings. They range from 0.90 to 2.31, which is relatively low, and the mean value over these nine simulations is 1.60. This result indicates that the network correctly predicted the outcome of the data set that was used to create it.
Figure 3 and Figure 4 compare the results of the temperature difference as a function of the number of data points (the time step is 30 s for all the experiments) for hydration and dehydration, respectively. As seen in Figure 3 and Figure 4, the prediction curve correctly follows the shape of the experimental curve but is slightly offset.
The average RMSE is around 1.60, which validates the above method using RNNs: at this level of precision, the model can be considered accurate.

3.3. Validation of the Model with a Specific Dataset

Many simulations with different sets of parameters were run; the corresponding results can be found in Appendix C. The (200, 1300, 400) set of parameters gives the most accurate results for our dataset (Figure 5).
The RMSE values for the specific dataset are shown in Table 4. The average RMSE of the simulation is 2.37. The predictions given by the RNN are accurate but not perfect. For example, in panel (d), the prediction curve follows the experimental curve well, but is offset during all the drops in temperature. The other prediction curves, (a), (b), and (c), are more acceptable. Comparing Figure 5 to Figure 3, it appears that the prediction is less satisfactory when the neural network has never encountered the data. The RMSE is not the same for all the hydration simulations. This result could perhaps still be improved using better-suited parameters.

4. Conclusions

An RNN model suited for zeolite-based thermal energy storage was constructed. For the quoted experimental data, the RNN model gave accurate results for the prediction of the temperature of the air flow leaving the reactor, over complete cycles of both dehydration and hydration of zeolite. The calculation time of the RNN model is lower than that of physics-based models (less than 2 to 3 h to compute more than 15,700 estimations). Moreover, once the RNN model is trained, the calculation time to predict any given data set is almost instantaneous (less than a minute for more than 7500 estimations).
It should however be mentioned that a gap between the experimental results and the values predicted by the RNN remains. The accuracy of the predictions could still be improved, especially where significant changes in values occur over a short period of time. A possible remedy could be to use a more appropriate metric such as dynamic time warping (DTW) [16], which is less sensitive to time shifts although more computationally demanding.
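A minimal sketch of such an evaluation, assuming MATLAB's Signal Processing Toolbox function dtw and the variables of Appendix A:

% Sketch: DTW distance between predicted and measured cycles
% (dtw requires the Signal Processing Toolbox).
dtwDist = zeros(numel(XTest),1);
for i = 1:numel(XTest)
    dtwDist(i) = dtw(YTest{i}, YVerif{i});   % tolerant to time shifts
end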
The results from the RNN are nevertheless promising. It should be noted that the predictions for the dehydration cycles are often more accurate than for the hydration cycles, which may be due to the fact that the variation of the output parameter is twice as large in the dehydration cases as in the hydration cases. However, fewer experimental data are available for the dehydration cycles. Furthermore, the experiments covered only full hydration and dehydration cycles. It would be more realistic to model the reactors' behaviour using experiments combining both hydration and dehydration, to see whether an RNN is able to model such cases accurately.
Alternative machine-learning-based approaches for time series prediction could also be considered in future work. We plan to experiment with the gated recurrent unit (GRU) approach [17], which can be seen as a simplified version of LSTM and has been shown to often perform better than LSTM on small datasets such as ours. We may also evaluate the use of convolutional neural network (CNN)-based models. CNNs were originally devised for pattern recognition but have been successfully applied to time series forecasting in recent years [18]. They are known to be resistant to noise, which might be beneficial when using experimental datasets.
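For instance, a GRU variant would only require swapping the recurrent layer in the network definition of Appendix A (gruLayer is part of the Deep Learning Toolbox from R2020a); a sketch, reusing the variables defined there:

% Sketch: GRU variant of the network in Appendix A; only the recurrent
% layer changes with respect to the LSTM version.
layers = [ ...
    sequenceInputLayer(featureDimension)
    gruLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(50)
    dropoutLayer(0.5)
    fullyConnectedLayer(numResponses)
    regressionLayer];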

Author Contributions

Conceptualization, F.K. and K.J.; Data curation, B.C.; Investigation, C.D. and M.-A.R.; Methodology, C.O.; Software, C.D. and M.-A.R.; Supervision, F.K. and K.J.; Visualization, C.D. and M.-A.R.; Writing—original draft, F.K.; Writing—review and editing, C.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data sets used in this study originate from [2].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Training Code (MATLAB)

%% Create and train a Recurrent Neural Network (RNN)
%
% A set of data is used to train the network, where inputs
% and outputs are known. Another set of data is used to test
% (validate) the accuracy of the RNN.
%
% Graphs for each cycle will be plotted at the end of the
% program to compare the predicted values with the
% experimental ones.

%% Initialisation of the data
% Load the data from the files
filenamePredictorsX = fullfile("TrainingX.txt");
filenamePredictorsY = fullfile("TrainingY.txt");

% Put the data in matrix XTrain for the inputs and YTrain
% for the outputs using the prepareData() function
[XTrain] = prepareData(filenamePredictorsX);
[YTrain] = prepareData(filenamePredictorsY);

% Normalise matrix XTrain
mu = mean([XTrain{:}],2);
sig = std([XTrain{:}],0,2);

for i = 1:numel(XTrain)
    XTrain{i} = (XTrain{i} - mu) ./ sig;
end

%% Create the Neural Network

% Neural Network parameters
miniBatchSize = 100;
maxEpochs = 200;
numHiddenUnits = 200;

% Neural network initialisation
numResponses = size(YTrain{1},1);
featureDimension = size(XTrain{1},1);
layers = [ ...
    sequenceInputLayer(featureDimension)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')
    fullyConnectedLayer(50)
    dropoutLayer(0.5)
    fullyConnectedLayer(numResponses)
    regressionLayer];

options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'InitialLearnRate',0.01, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'Plots','training-progress', ...
    'Verbose',0);

%% Train the Neural Network

net = trainNetwork(XTrain, YTrain, layers, options);

%% Save the Neural Network to a file

save("netsave4","net");

%% Test the Neural Network

% Load the text files used to test the neural network and
% put them in a matrix

filenameResponsesX = fullfile("HDXValidation.txt");
filenameResponsesY = fullfile("HDYValidation.txt");
[XTest] = prepareData(filenameResponsesX);
[YVerif] = prepareData(filenameResponsesY);

% Normalise the test matrix (using the training statistics)

for i = 1:numel(XTest)
    XTest{i} = (XTest{i} - mu) ./ sig;
end

% Predict the outputs of the testing inputs through the
% neural network

YTest = predict(net, XTest);

%% Plot the results predicted with the neural network along
%  with the experimental results using the plotResults()
%  function

plotResults(XTest, YTest, YVerif)

%% Validation of the model

% Compute the RMSE at each cycle
% RMSE is an indicator of the discrepancy between
% experimental values and predicted values

RMSE = zeros(numel(XTest),1);

for i = 1:numel(XTest)
    RMSE(i) = sqrt(mean((YTest{i} - YVerif{i}).^2));
end

RMSE    % display the per-cycle RMSE values

%% plotResults() returns a figure for each test set plotting
%% the predicted values along with the experimental results.

function [] = plotResults(XTest, YTest, YVerif)
    for i = 1:numel(XTest)
        YTestcurrent = YTest{i}';
        YVerifcurrent = YVerif{i}';
        figure(i)
        plot(YVerifcurrent,'--')
        hold on
        plot(YTestcurrent,'.-')
        hold off
    end
end

Appendix B. Data Processing Code (MATLAB)

%% prepareData() returns a cell array composed of one matrix per
%% experimental test data set.

function [XTrain] = prepareData(filenamePredictors)

    dataTrain = dlmread(filenamePredictors);
    numObservations = max(dataTrain(:,1));   % number of experiments (first column)
    XTrain = cell(numObservations,1);

    for i = 1:numObservations
        idx = dataTrain(:,1) == i;           % rows belonging to experiment i

        X = dataTrain(idx,2:end)';
        XTrain{i} = X;
    end
end

Appendix C. Determination of Hyperparameters

| Try | MiniBatchSize | MaxEpoch | HiddenLayers | RMSE1 | RMSE2 | RMSE3 | RMSE4 | Mean RMSE |
|---|---|---|---|---|---|---|---|---|
| 1 | 200 | 100 | 200 | 18.55 | 16.08 | 24.34 | 14.73 | 18.43 |
| 2 | 200 | 200 | 200 | 16.34 | 15.00 | 4.68 | 14.44 | 12.62 |
| 3 | 200 | 300 | 200 | 16.11 | 14.95 | 8.23 | 13.44 | 13.18 |
| 4 | 200 | 400 | 200 | 16.28 | 15.21 | 2.86 | 26.21 | 15.14 |
| 5 | 200 | 500 | 200 | 16.73 | 5.80 | 3.90 | 17.56 | 11.00 |
| 6 | 200 | 600 | 200 | 16.18 | 15.83 | 2.82 | 13.18 | 12.00 |
| 7 | 200 | 700 | 200 | 23.64 | 21.49 | 4.26 | 12.28 | 15.42 |
| 8 | 200 | 800 | 200 | 2.56 | 1.96 | 2.43 | 2.25 | 2.30 |
| 9 | 200 | 900 | 200 | 2.42 | 4.38 | 2.39 | 7.78 | 4.24 |
| 10 | 200 | 1000 | 200 | 4.94 | 2.71 | 5.49 | 13.67 | 6.70 |
| 11 | 200 | 1100 | 200 | 22.92 | 3.17 | 4.37 | 3.68 | 8.54 |
| 12 | 200 | 1200 | 200 | 24.30 | 16.60 | 6.60 | 14.80 | 15.58 |
| 13 | 200 | 1300 | 200 | 3.30 | 2.10 | 4.90 | 2.60 | 3.23 |
| 14 | 200 | 1400 | 200 | 9.00 | 12.00 | 2.80 | 10.70 | 8.63 |
| 15 | 50 | 200 | 200 | 16.15 | 15.07 | 9.14 | 13.61 | 13.49 |
| 16 | 100 | 200 | 200 | 17.41 | 15.63 | 4.76 | 14.33 | 13.03 |
| 17 | 200 | 200 | 200 | 16.50 | 15.10 | 4.40 | 13.50 | 12.38 |
| 18 | 300 | 200 | 200 | 15.60 | 15.10 | 3.50 | 14.20 | 12.10 |
| 19 | 400 | 200 | 200 | 16.60 | 15.10 | 3.40 | 14.10 | 12.30 |
| 20 | 500 | 200 | 200 | 17.00 | 15.40 | 5.30 | 14.10 | 12.95 |
| 21 | 600 | 200 | 200 | 16.20 | 16.00 | 8.50 | 14.20 | 13.73 |
| 22 | 700 | 200 | 200 | 16.20 | 15.70 | 4.60 | 14.10 | 12.65 |
| 23 | 800 | 200 | 200 | 20.30 | 15.20 | 9.70 | 23.00 | 17.05 |
| 24 | 900 | 200 | 200 | 16.30 | 15.20 | 5.30 | 13.90 | 12.68 |
| 25 | 200 | 200 | 100 | 16.10 | 15.60 | 6.20 | 14.20 | 13.03 |
| 26 | 200 | 200 | 200 | 16.80 | 14.80 | 5.40 | 13.80 | 12.70 |
| 27 | 200 | 200 | 300 | 16.50 | 15.20 | 24.50 | 14.10 | 17.58 |
| 28 | 200 | 200 | 400 | 7.80 | 4.60 | 4.10 | 3.90 | 5.10 |
| 29 | 200 | 200 | 500 | 14.00 | 14.50 | 3.50 | 14.10 | 11.53 |
| 30 | 200 | 200 | 600 | 16.28 | 15.68 | 7.87 | 13.86 | 13.42 |
| 31 | 200 | 200 | 700 | 13.44 | 15.37 | 7.19 | 14.97 | 12.74 |
| 32 | 50 | 800 | 400 | 6.76 | 14.82 | 2.89 | 14.25 | 9.68 |
| 33 | 200 | 1300 | 200 | 3.26 | 2.68 | 3.25 | 15.99 | 6.30 |
| 34 | 200 | 800 | 200 | 17.19 | 12.46 | 5.29 | 15.64 | 12.65 |
| 35 | 200 | 900 | 200 | 24.11 | 7.18 | 3.88 | 12.98 | 12.04 |
| 36 | 200 | 1300 | 400 | 2.91 | 2.11 | 2.59 | 1.87 | 2.37 |

References

  1. Mahon, D.; Claudio, G.; Eames, P. A study of novel high performance and energy dense zeolite composite materials for domestic interseasonal thermochemical energy storage. Energy Procedia 2019, 158, 4489–4494. [Google Scholar] [CrossRef]
  2. Johannes, K.; Kuznik, F.; Hubert, J.L.; Durier, F.; Obrecht, C. Design and characterisation of a high powered energy dense zeolite thermal energy storage system for buildings. Appl. Energy 2015, 159, 80–86. [Google Scholar] [CrossRef]
  3. Chen, B.; Kuznik, F.; Horgnies, M.; Johannes, K.; Morin, V.; Gengembre, E. Physicochemical properties of ettringite/meta-ettringite for thermal energy storage: Review. Sol. Energy Mater. Sol. Cells 2019, 193, 320–334. [Google Scholar] [CrossRef]
  4. Hongois, S.; Kuznik, F.; Stevens, P.; Roux, J.J. Development and characterisation of a new MgSO4–zeolite composite for long-term thermal energy storage. Sol. Energy Mater. Sol. Cells 2011, 95, 1831–1837. [Google Scholar] [CrossRef]
  5. Scapino, L.; Zondag, H.A.; Diriken, J.; Rindt, C.C.; Van Bael, J.; Sciacovelli, A. Modeling the performance of a sorption thermal energy storage reactor using artificial neural networks. Appl. Energy 2019, 253, 113525. [Google Scholar] [CrossRef]
  6. Korhammer, K.; Druske, M.M.; Fopah-Lele, A.; Rammelberg, H.U.; Wegscheider, N.; Opel, O.; Osterland, T.; Ruck, W. Sorption and thermal characterization of composite materials based on chlorides for thermal energy storage. Appl. Energy 2016, 162, 1462–1472. [Google Scholar] [CrossRef]
  7. Fumey, B.; Weber, R.; Baldini, L. Sorption based long-term thermal energy storage—Process classification and analysis of performance limitations: A review. Renew. Sustain. Energy Rev. 2019, 111, 57–74. [Google Scholar] [CrossRef]
  8. Cabeza, L.; Martorell, I.; Miró, L.; Fernández, A.; Barreneche, C. 1-Introduction to thermal energy storage (TES) systems. In Advances in Thermal Energy Storage Systems; Cabeza, L.F., Ed.; Woodhead Publishing Series in Energy; Woodhead Publishing: Cambridge, UK, 2015; pp. 1–28. [Google Scholar] [CrossRef]
  9. Kuznik, F.; Gondre, D.; Johannes, K.; Obrecht, C.; David, D. Numerical modelling and investigations on a full-scale zeolite 13X open heat storage for buildings. Renew. Energy 2019, 132, 761–772. [Google Scholar] [CrossRef]
  10. Tatsidjodoung, P.; Le Pierrès, N.; Heintz, J.; Lagre, D.; Luo, L.; Durier, F. Experimental and numerical investigations of a zeolite 13X/water reactor for solar heat storage in buildings. Energy Convers. Manag. 2016, 108, 488–500. [Google Scholar] [CrossRef]
  11. Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.; Vapnik, V. Support vector regression machines. Adv. Neural Inf. Process. Syst. 1997, 9, 155–161. [Google Scholar]
  12. Lawrence, J. Introduction to Neural Networks: Design, Theory, and Applications; California Scientific Software: Nevada City, CA, USA, 1994. [Google Scholar]
  13. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  14. Kuznik, F.; Gondre, D.; Johannes, K.; Obrecht, C.; David, D. Sensitivity analysis of a zeolite energy storage model: Impact of parameters on heat storage density and discharge power density. Renew. Energy 2020, 149, 468–478. [Google Scholar] [CrossRef]
  15. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [Google Scholar] [CrossRef] [PubMed]
  16. Gold, O.; Sharir, M. Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic Barrier. ACM Trans. Algorithms 2018, 14. [Google Scholar] [CrossRef]
  17. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078. [Google Scholar]
  18. Koprinska, I.; Wu, D.; Wang, Z. Convolutional neural networks for energy time series forecasting. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
Figure 1. Pictures of the experimental setup and zeolite material, adapted from [14].
Figure 2. RMSE as a function of each RNN parameter when the other two are fixed at 200: (a) influence of the mini batch size; (b) influence of the maximum number of epochs; (c) influence of the number of hidden layers.
Figure 3. Outcome of the prediction: best (left) and worst (right) predictions for hydration (number of data points on the x-axis, ΔT on the y-axis).
Figure 4. Outcome of the prediction: best (left) and worst (right) predictions for dehydration (number of data points on the x-axis, ΔT on the y-axis).
Figure 5. Outcome of the prediction with the neural network of parameters (200, 1300, 400) (number of data points on the x-axis, ΔT on the y-axis): (a) is dehydration, (b–d) are hydration.
Table 1. Hydration data [2,14].

| Experiment | Tin min (°C) | Tin max (°C) | RHin min | RHin max | Qv min (m³·h⁻¹) | Qv max (m³·h⁻¹) | ΔT min (°C) | ΔT max (°C) | Volume of Data | Training or Validation |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 series | 19.71 | 20.39 | 0.0067 | 0.0073 | 127.9 | 189.8 | −0.16 | 38.4 | 2784 | training |
| 2 top | 19.38 | 20.48 | 0.0074 | 0.0089 | 49.09 | 90.22 | −0.40 | 37.69 | 2671 | validation |
| 2 bottom | 19.45 | 20.59 | 0.0074 | 0.0089 | 56.21 | 95.85 | −0.46 | 37.80 | 2671 | training |
| 4 top | 17.49 | 20.21 | 0.0046 | 0.0050 | 91.49 | 99.83 | −5.80 | 37.18 | 2272 | validation |
| 4 bottom | 17.31 | 20.26 | 0.0046 | 0.0050 | 85.28 | 92.56 | −7.61 | 37.18 | 2272 | training |
| 5 top | 16.92 | 20.12 | 0.0038 | 0.0047 | 72.45 | 99.15 | −0.48 | 26.52 | 2695 | training |
| 5 bottom | 17.09 | 20.18 | 0.0038 | 0.0047 | 17.24 | 92.65 | −0.53 | 26.60 | 2695 | training |
| 8 top | 18.80 | 19.97 | 0.0025 | 0.0030 | 55.10 | 65.82 | −0.12 | 36.68 | 2083 | training |
| 8 bottom | 18.85 | 20.18 | 0.0025 | 0.0030 | 52.50 | 62.37 | −0.52 | 36.60 | 2083 | validation |
Table 2. Dehydration data [2,14].

| Experiment | Tin min (°C) | Tin max (°C) | RHin min | RHin max | Qv min (m³·h⁻¹) | Qv max (m³·h⁻¹) | ΔT min (°C) | ΔT max (°C) | Volume of Data | Training or Validation |
|---|---|---|---|---|---|---|---|---|---|---|
| 4 top | 19.27 | 121.97 | 0.0047 | 0.0111 | 76.69 | 99.82 | −96.25 | −0.20 | 544 | validation |
| 4 bottom | 19.41 | 124.28 | 0.0047 | 0.0111 | 69.01 | 87.65 | −102.99 | −0.39 | 544 | training |
| 5 top | 18.92 | 123.13 | 0.0048 | 0.0052 | 75.68 | 101.53 | −100.04 | −0.60 | 980 | training |
| 5 bottom | 18.88 | 126.60 | 0.0048 | 0.0052 | 66.82 | 86.10 | −107.33 | −0.80 | 980 | training |
Table 3. RMSE values of the prediction (same dataset).

| RMSE1 | RMSE2 | RMSE3 | RMSE4 | RMSE5 | RMSE6 | RMSE7 | RMSE8 | RMSE9 | Average |
|---|---|---|---|---|---|---|---|---|---|
| 1.66 | 1.32 | 0.90 | 2.10 | 2.02 | 2.02 | 1.11 | 0.99 | 2.31 | 1.60 |
Table 4. RMSE values of the prediction (specific dataset).

| RMSE1 | RMSE2 | RMSE3 | RMSE4 | Average |
|---|---|---|---|---|
| 2.91 | 2.11 | 2.59 | 1.87 | 2.37 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
