Article

Suggesting a Stochastic Fractal Search Paradigm in Combination with Artificial Neural Network for Early Prediction of Cooling Load in Residential Buildings

1 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
2 Faculty of Civil Engineering, Duy Tan University, Da Nang 550000, Vietnam
3 Faculty of Civil Engineering, Technische Universität Dresden, 01069 Dresden, Germany
4 Institute of Structural Mechanics, Bauhaus-Universität Weimar, 99423 Weimar, Germany
5 John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
* Authors to whom correspondence should be addressed.
Energies 2021, 14(6), 1649; https://doi.org/10.3390/en14061649
Submission received: 6 January 2021 / Revised: 8 March 2021 / Accepted: 11 March 2021 / Published: 16 March 2021
(This article belongs to the Special Issue Smart Built Environment for Health and Comfort with Energy Efficiency)

Abstract

Early prediction of thermal loads plays an essential role in analyzing the energy performance of energy-efficient buildings. On the other hand, stochastic algorithms have recently shown high proficiency in dealing with this issue. For these reasons, this study is dedicated to evaluating an innovative hybrid method for predicting the cooling load (CL) in buildings with residential usage. The proposed model is a combination of artificial neural networks and stochastic fractal search (SFS–ANN). Two benchmark algorithms, namely the grasshopper optimization algorithm (GOA) and the firefly algorithm (FA), are also considered for comparison with the SFS. The non-linear effect of eight independent factors on the CL is analyzed using each model's optimal structure. Evaluation of the results showed that all three metaheuristic algorithms (with more than 90% correlation) can adequately optimize the ANN. In this regard, this tool's prediction error declined by nearly 23%, 18%, and 36% after applying the GOA, FA, and SFS techniques, respectively. Moreover, all used accuracy criteria indicated the superiority of the SFS over the benchmark schemes. Therefore, it is inferred that utilizing the SFS along with the ANN provides a reliable hybrid model for the early prediction of the CL.

1. Introduction

Buildings, vehicles, and industry are the three primary energy consumption sectors globally [1,2,3]. Among these, buildings consume a considerable share, which is anticipated to reach over 30% by 2040 [4]. On the other hand, people's high tendency to dwell in smart cities has resulted in the idea of developing energy-efficient structures [5,6]. Therefore, accurate analysis of buildings' energy performance (EPB) is a significant step toward this objective. In energy-efficient buildings, the heating load (HL) and cooling load (CL) demands account for the system energy consumption and are handled by the heating, ventilating, and air conditioning (HVAC) system [7] to provide comfortable indoor air conditions.
It is well established that the cost and use of energy affect human lives every day. In this sense, many issues arise in the context of energy consumption, such as acid rain, dependency on depleting supplies of fossil fuels, greenhouse gas emissions [8,9,10,11,12,13,14,15,16], and climate change [17,18,19,20], in addition to the environmental concerns that come along with energy supply [2,21,22,23,24,25,26,27,28,29,30,31,32,33]. In recent years, various techniques have been used for the optimal design of the HVAC system [34,35,36,37,38,39,40,41,42]. Ghahramani et al. [43], for instance, used a systematic model to optimize the performance of the HVAC system in office buildings in terms of temperature setpoints. Moreover, Ferreira et al. [44] used a soft computing method for controlling the HVAC system; implementing the proposed model led to about 50% savings in energy consumption. The effect of façade parameters (e.g., solar reflectance and U-value) was investigated by Ihara et al. [45], and it was deduced that the solar heat gain coefficient (SHGC) could have the most significant impact on energy consumption reduction.
In this way, some drawbacks of forward modeling approaches (e.g., modeling in simulation packages), such as large dimensions and inadequacy for occupied spaces [46], have driven engineers to employ artificial intelligence approaches for early estimation of energy [47,48,49]. Artificial intelligence refers to intelligent solutions demonstrated by machines, unlike the natural intelligence exhibited by animals and humans [50,51,52,53,54,55]. Artificial neural networks (ANNs), as one of the best-known AI-based solutions, have received increasing attention recently [56,57,58,59,60]. More technically, these include deep-learning-based solutions [61,62,63,64], machine learning [65,66,67], decision-making-based theories and feature-selection-based solutions [68,69,70], extreme learning machine solutions [71,72,73,74], and hybrid search algorithms that enhance the conventional multilayer perceptron, such as Harris hawks optimization [62,75], the whale optimizer [76,77], bacterial foraging optimization [78], chaos-enhanced grey wolf optimization [79], the moth-flame optimizer [72,80], many-objective sizing optimization [81,82,83,84,85,86], data-driven robust optimization [87], ant colony optimization [88], and global numerical optimization [89]. These techniques have been successfully employed in different fields such as building design [90,91,92,93,94,95,96,97], image processing/classification [97,98,99,100,101,102,103,104], and sustainability and environmental concerns [23,105,106,107]. Various studies have also been performed on predicting the HL and CL of residential buildings [39,41,42,108,109,110]. Zhou et al. [42] used an ANN and the nonlinear autoregressive with exogenous inputs (NARX) concept for accurate analysis of the thermal load in an academic building. In a similar effort, Koschwitz et al. [111] implemented an approximation of long-term urban heating load using nonlinear autoregressive exogenous recurrent neural networks (NARX RNNs). Roy et al. [112] showed a better performance of neural networks (with a variance accounted for of 99.76%) compared to other conventional methods such as gradient boosted machines.
Other predictors such as adaptive neuro-fuzzy inference systems (ANFISs) have shown high robustness for dealing with non-linear EPB problems [113]. Pezeshki and Mazinani [114] concluded that the ANFIS is superior to typical fuzzy logic for modeling buildings' thermal efficiency because it also enjoys the advantages of the ANN. Scholars such as Naji et al. [115] and Chou and Bui [116] have demonstrated the applicability of ANFIS and support vector-based methods. More recent studies have suggested coupling conventional predictors with metaheuristic search schemes for various purposes [41,117]. More significantly, for energy consumption modeling, these methods have gained high popularity. Satrio et al. [118] found the integration of ANNs and multi-objective genetic algorithms to be a capable tool for optimizing the HVAC system. Moayedi et al. [4] coupled an ANN with a firefly algorithm based on electromagnetism for predicting building energy consumption. The findings revealed their suggested model's superiority over conventional models such as the typical ANN, genetic programming, and the extreme learning machine. Tien Bui et al. [41] could reduce the HL prediction error of the ANN from 2.93 to 2.06 and 2.00 by applying the genetic algorithm and imperialist competition algorithm, respectively. This error also fell from 3.28 to 2.09 and 2.10 for CL estimation. Particle swarm optimization (PSO) is another capable optimizer that has been widely employed for creating hybrid models [119,120]. Goudarzi et al. [121] also used the PSO to develop a hybrid of the autoregressive integrated moving average and SVR for energy consumption modeling. Moreover, in studies by Nguyen et al. [122] and Moayedi et al. [123], novel metaheuristic techniques (e.g., elephant herding optimization, Harris hawks optimization, and the grasshopper optimization algorithm) were applied to enhance the prediction capability of the ANN. Despite the wide use of nature-inspired metaheuristic techniques for optimizing the HVAC system, the variety of these techniques motivated the authors to investigate the applicability of a novel member of this family, namely, stochastic fractal search, in this paper. To the best of our knowledge, this algorithm has not been previously used in this field.

2. Methodology

2.1. ANN

As suggested by McCulloch and Pitts [124], the basic idea of ANNs is rooted in the biological neural system of humans. The structure of this model comprises several processors named neurons, connected by so-called synaptic weights. As a potential advantage of this model, the ANN tries to analyze the non-linear association of a set of input-output data by establishing a set of mathematical relationships [125]. Figure 1 shows the structure of one of the most popular notions of ANNs, namely, the multi-layer perceptron (MLP). As shown, this network is composed of one hidden layer. However, based on the problem's complexity, it can contain two or more hidden layers [126,127].
Input nodes receive the data. Each node assigns (i.e., multiplies) a specific weight factor (W), and the obtained value is added to a bias factor (b). Lastly, an activation function (f) is applied to the whole term to generate the proposed neuron’s response. It is mathematically expressed by Equation (1) as follows:
$O = f\left(\sum I W + b\right).$
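As a minimal illustration of Equation (1) (not part of the original study), the following Python sketch computes the response of a single hidden neuron with the tangent-sigmoid activation used later in this work; the input, weight, and bias values are hypothetical.

```python
import numpy as np

def tansig(x):
    # Tangent-sigmoid activation; numerically equivalent to tanh(x)
    return np.tanh(x)

def neuron_response(inputs, weights, bias):
    # Equation (1): O = f(sum(I * W) + b)
    return tansig(np.dot(inputs, weights) + bias)

# Hypothetical example: one neuron fed by three inputs
I = np.array([0.3, -0.7, 0.5])   # input values
W = np.array([0.2, 0.6, -0.4])   # synaptic weights
b = 0.1                          # bias term
print(neuron_response(I, W, b))
```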

2.2. Stochastic Fractal Search

Proposed by Salimi [128], stochastic fractal search (SFS) is a recently developed metaheuristic algorithm. This method is based on the mathematical concept of fractals and the diffusion property. Mosbah and El-Hawary [129] used this approach for training an ANN. The SFS draws on two major processes, namely, diffusion and updating, for handling the optimization task. During diffusion, the points exploit the space by diffusing around their current location, which helps the algorithm protect the solution against local minima. This movement follows a so-called Gaussian walk process, which is mathematically described as follows [130]:
$GS_1 = G(mn_{Bp}, \delta) + (a_1\, p_b - a_2\, p_i)$
$GS_2 = G(mn_{p}, \delta)$
$\delta = \left| \frac{\log(z)}{z}\,(p_i - p_b) \right|,$
where G(mn, δ) indicates the Gaussian distribution function with mean mn and standard deviation δ, z denotes the current iteration, and the two terms $a_1$ and $a_2$ are random values varying from 0 to 1. Moreover, $mn_p$ and $mn_{Bp}$ are equal to $p_i$ (the position of a given individual) and $p_b$ (the position of the best individual), respectively. The role of the term $\log(z)/z$ lies in reducing the jump size, which leads to a more localized search.
For increasing the exploring ability of the population moving near the present location, two measures may be considered acting on every different vector index and the whole population. After ranking the population in the first measure, they receive a probability index based on the following equation:
$pp_i = \frac{\mathrm{rank}(p_i)}{N_{pop}},$
where $N_{pop}$ represents the number of agents. Equation (6) shows how the qth component of agent i is updated provided that $pp_i < a$ (a random number between 0 and 1); otherwise, no change takes place.
$p_i^{z+1}(q) = p_s^{z}(q) - a\left(p_u^{z}(q) - p_i^{z}(q)\right),$
where $p_u$ and $p_s$ are random individuals.
In the second measure, all individuals updated by Equation (6) are ranked again through Equation (5). Provided that $pp_i < a$, the current individual $p_i^{z+1}$ is updated by the following equations; otherwise, it remains unchanged:
$p_i^{n}(q) = p_i^{z+1} - \mu\left(p_s^{z+1} - p_u^{z+1}\right) \quad \text{for}\ \mu \le 0.5$
$p_i^{n}(q) = p_i^{z+1} + \mu\left(p_s^{z+1} - p_u^{z+1}\right) \quad \text{for}\ \mu > 0.5,$
where $p_s^{z+1}$ and $p_u^{z+1}$ denote random agents obtained from the first measure. Moreover, $\mu$ stands for a number generated by a Gaussian distribution [131].
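To make the two processes above concrete, the following Python sketch outlines one simplified SFS iteration: the Gaussian-walk diffusion of Equations (2)-(4) followed by the first updating measure of Equations (5) and (6). It is an illustrative reading of the algorithm, not the authors' MATLAB implementation; the rank ordering (better individuals modified less often) and all variable names are assumptions.

```python
import numpy as np

def evaluate(func, pop):
    # Evaluate the objective for every individual (row) in the population
    return np.array([func(x) for x in pop])

def sfs_iteration(pop, func, best, z):
    """One simplified SFS iteration: Gaussian-walk diffusion (Eqs. 2-4)
    followed by the first updating measure (Eqs. 5-6)."""
    n_pop, dim = pop.shape
    fitness = evaluate(func, pop)

    # --- Diffusion: each point performs a Gaussian walk around its location ---
    delta = np.abs(np.log(z) / z * (pop - best))                   # Eq. (4)
    a1, a2 = np.random.rand(2)
    gs1 = np.random.normal(best, delta) + (a1 * best - a2 * pop)   # Eq. (2)
    gs2 = np.random.normal(pop, delta)                             # Eq. (3)
    diffused = np.where(np.random.rand(n_pop, 1) < 0.5, gs1, gs2)
    better = evaluate(func, diffused) < fitness
    pop = np.where(better[:, None], diffused, pop)
    fitness = evaluate(func, pop)

    # --- First updating measure: rank-based component replacement ---
    # Assumed ordering: better individuals get a larger rank index, so they
    # are modified less often.
    ranks = n_pop - np.argsort(np.argsort(fitness))
    pp = ranks / n_pop                                             # Eq. (5)
    for i in range(n_pop):
        if pp[i] < np.random.rand():
            q = np.random.randint(dim)
            s, u = np.random.choice(n_pop, 2, replace=False)
            pop[i, q] = pop[s, q] - np.random.rand() * (pop[u, q] - pop[i, q])  # Eq. (6)

    fitness = evaluate(func, pop)
    best = pop[np.argmin(fitness)].copy()
    return pop, best
```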

2.3. Benchmark Optimizers

As mentioned, the grasshopper optimization algorithm (GOA) and the firefly algorithm (FA) are used to create benchmark hybrids of the ANN. Like any other algorithm, these techniques implement their own search method to attain a globally optimal solution for a given problem [132]. As the name implies, the GOA is inspired by the herding behavior of grasshoppers in nature. It was designed by Saremi et al. [133] around two major phases, namely exploration and exploitation. The herd members (i.e., the grasshoppers) fly to find food, and their swarming movement is affected by three factors: social interaction, gravity force, and wind advection. More details about the GOA can be found in previous studies [134,135,136].
The FA is also a capable nature-inspired search scheme proposed by Yang [137]. The FA represents the social flashing behavior of fireflies. The intensity variation of light and attractiveness formulation are two crucial ingredients of this algorithm [138]. Three basic rules of the FA are (i) the individuals are unisex and are attracted to each other regardless of gender, (ii) the more brightness, the higher attractiveness, and (iii) the objective function of the problem determines the brightness of a firefly. More mathematical information about the fireflies’ interaction is well-explained in references such as [139,140,141].
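As a rough illustration of the FA rules described above (and not the exact implementation used in this study), the following Python sketch applies the standard firefly position update, in which attractiveness decays exponentially with distance; the default α, β, and γ values mirror those reported in Section 4.1, and everything else is an assumption.

```python
import numpy as np

def firefly_move(pop, brightness, alpha=0.5, beta0=0.2, gamma=0.1):
    """One movement pass of the firefly algorithm: every firefly moves toward
    every brighter firefly; attractiveness decays with squared distance."""
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:          # rule (ii): brighter attracts
                r2 = np.sum((pop[j] - pop[i]) ** 2)    # squared distance
                beta = beta0 * np.exp(-gamma * r2)     # attractiveness
                new_pop[i] += beta * (pop[j] - pop[i]) \
                              + alpha * (np.random.rand(dim) - 0.5)  # random step
    return new_pop

# Toy usage on a minimization problem: brightness is the negative objective
pop = np.random.uniform(-1, 1, size=(20, 5))
brightness = -np.sum(pop ** 2, axis=1)
pop = firefly_move(pop, brightness)
```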

3. Data and Statistical Analysis

As is known, using reliable data plays an essential role in the proper development of intelligent models. In the case of this study, the information of a residential building is used. The database was produced by a computer simulation in the Ecotect [142] environment. Tsanas and Xifara [143] generated it, and it is now available at http://archive.ics.uci.edu/ml/datasets/Energy+efficiency (accessed on 13 December 2020).
By analyzing 12 different buildings (with a total volume of 771.75 m³), the HL and CL [144] of 768 cases are collected, where relative compactness ($RC = \frac{6V^{2/3}}{A}$, which relates the surface area (A) to the corresponding volume (V) [145]), surface area (SA), wall area (WA), roof area (RA), overall height (OH) [144,146], orientation, glazing area (GA, indicating the overall area measured using the rough opening consisting of the glazing, frame, and sash [147]), and glazing area distribution (GAD, indicating the distribution of the GA within the entire building) [144] are taken as the influential parameters (i.e., independent factors).
Note that the orientation consists of north, east, south, and west; the GA receives four values of 0%, 10%, 25%, and 40% of the floor area [144,146]; and the GAD is classified into six groups as follows: (i) uniform (one-fourth glazing on each side), (ii) north: 55% on the north side and 15% on each of the remaining sides, (iii) east: 55% on the east side and 15% on each of the remaining sides, (iv) south: 55% on the south side and 15% on each of the remaining sides, (v) west: 55% on the west side and 15% on each of the remaining sides, and (vi) no glazing area [148]. Figure 2 shows the types of buildings with respect to the RC, which are mostly orthogonal polyhedra, adapted from [119].
The statistical description of the parameters (in terms of average value, sample variance, standard error, minimum, and maximum) is presented in Table 1. Figure 3 also illustrates the relationship between the CL and the mentioned parameters.
In order to develop the proposed models, the dataset should be divided into two parts, namely, training and testing groups. In this study, the division follows the widely used proportion of 80:20. In other words, out of all 768 data, 80% (i.e., 614 samples) are used in the training phase for analyzing the relationship between the CL and the influential parameters, and the remaining 20% (i.e., 154 samples) are considered as testing data. This group is provided to the trained networks to assess their generalization capability.
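A minimal Python sketch of this division is given below; the Excel file name and column layout of the UCI repository version of the dataset are assumptions, and the random split will not reproduce the exact training/testing samples used in the study.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed layout of the UCI "Energy efficiency" dataset: the first eight
# columns are the inputs (RC, SA, WA, RA, OH, orientation, GA, GAD) and the
# tenth column is the cooling load (CL).
data = pd.read_excel("ENB2012_data.xlsx")
X = data.iloc[:, :8].values   # eight independent factors
y = data.iloc[:, 9].values    # cooling load

# 80:20 division -> 614 training and 154 testing samples out of 768
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)
```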
In this dataset, some nominal parameters (e.g., the orientations of north, east, south, and west) are represented by numerical values (Figure 3) to be readable for the proposed neural system. Such conversions are a common way of making the data compatible with the mechanism of the model. For example, Moayedi et al. [149] replaced some nominal conditioning factors of a geotechnical problem with values. Apart from this, ANNs are known as powerful classifiers (for diagnosis problems [150] and natural hazard prediction [151]), meaning that they can properly interpret the relationships between input parameters (or their representative values) and the given output by assigning weight factors. It is worth noting that the data are used in their original format (preferred over the normalized format) due to two main factors: first, as Figure 3 shows, there is no large scattering or abrupt change in the behavior of the samples; second, since a new method is being examined, it is better to keep the problem conditions in their original state in order to avoid unwanted effects. Nonetheless, the effect of normalization seems worth investigating in artificial intelligence models.

4. Results and Discussion

This study aims to evaluate the applicability of three metaheuristic algorithms, namely, stochastic fractal search, the grasshopper optimization algorithm, and the firefly algorithm, in the early estimation of the cooling load in energy-efficient buildings. In this research, the algorithms contribute to the problem by optimizing the parameters of an ANN used for estimating the CL by analyzing the influential parameters. The prediction results of the MLP neural network (MLPNN) and the GOA–ANN, FA–ANN, and SFS–ANN ensembles are presented and discussed in this part.

4.1. Hybridizing the MLPNN Using SFS, FA, and GOA

For executing the models, this study uses the MATLAB (an abbreviation of "matrix laboratory"; originally developed at the University of New Mexico, NM, USA) programming language on a laptop with the following processing characteristics: Intel(R) Core(TM) i5-4460 CPU @ 3.20 GHz and 6 GB of RAM. In order to optimize an ANN using optimizer algorithms, they should be mathematically synthesized [122,152]. However, before doing this, the best structure of the ANN should be determined. To this end, the influential parameters, especially the number of hidden neurons, should be optimized. Based on previous studies and the authors' experience, the tangent-sigmoid (Tansig) function is selected as the activation function of the ANN neurons. Then, 10 different MLPNN structures (with the number of hidden neurons varying from 1 to 10) are tested, and it is shown that six neurons generate the most reliable hidden layer. Note that this model was trained by the Levenberg-Marquardt algorithm [153], which is a powerful candidate for this task.
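The structure search described above can be sketched in Python as follows (illustrative only): scikit-learn's MLPRegressor with a tanh activation stands in for the Tansig/Levenberg-Marquardt MATLAB setup, which scikit-learn does not provide, and the file and column layout are assumed as before.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Reload and split the data as in Section 3 (column layout assumed)
data = pd.read_excel("ENB2012_data.xlsx")
X, y = data.iloc[:, :8].values, data.iloc[:, 9].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Sweep 1-10 hidden neurons; tanh stands in for Tansig, 'lbfgs' for the
# Levenberg-Marquardt trainer used in the study
best_rmse, best_n = np.inf, None
for n_hidden in range(1, 11):
    mlp = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, mlp.predict(X_te)) ** 0.5
    if rmse < best_rmse:
        best_rmse, best_n = rmse, n_hidden
print("best hidden neurons:", best_n, "RMSE:", best_rmse)
```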
Next, the predictive structure of the proposed MLPNN is provided to the GOA, FA, and SFS algorithms to construct the hybrid models. Given the training dataset, these algorithms try to find highly optimized weights and biases to build an efficient MLPNN. A total of 1000 repetitions is considered for each algorithm to ensure sufficient convergence. In each iteration, an evaluation is carried out by means of an objective function, which is set to the root mean square error (RMSE) in this study. The RMSE is formulated as follows:
$RMSE = \sqrt{\frac{1}{Q}\sum_{i=1}^{Q}\left(CL_{i\,observed} - CL_{i\,predicted}\right)^{2}},$
where Q is the number of involved data, and $CL_{i\,observed}$ and $CL_{i\,predicted}$ stand for the CL values obtained by Tsanas and Xifara [143] and by our intelligent models, respectively.
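Conceptually, each metaheuristic treats the weights and biases of the 8-6-1 network as one flat solution vector and minimizes the RMSE of Equation (9) over the training data. The Python sketch below illustrates this encoding under that assumption; the decoding order and variable names are illustrative, not taken from the original code.

```python
import numpy as np

N_IN, N_HID = 8, 6                         # eight inputs, six hidden neurons
DIM = N_IN * N_HID + N_HID + N_HID + 1     # hidden weights/biases + output weights/bias

def decode(theta):
    """Split a flat solution vector into the MLP weight and bias matrices."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_HID, N_IN); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, W2, b2

def rmse_objective(theta, X, y):
    """Equation (9): RMSE between observed and predicted CL."""
    W1, b1, W2, b2 = decode(theta)
    hidden = np.tanh(X @ W1.T + b1)        # Tansig hidden layer
    pred = hidden @ W2 + b2                # linear output neuron
    return np.sqrt(np.mean((y - pred) ** 2))

# Quick check with random data of the right shape
theta0 = np.random.uniform(-1, 1, DIM)
print(rmse_objective(theta0, np.random.rand(10, N_IN), np.random.rand(10)))
```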
The size of the working population is an influential factor for the performance of the implemented ensemble models. Thus, a trial-and-error process is adopted to ensure that the best population size is used during the optimization. Each model is implemented with nine population sizes of 10, 25, 50, 75, 100, 200, 300, 400, and 500, and the RMSE of the last iteration is recorded as the best response. The results are shown in Figure 4a. As is seen, the best optimization results are obtained for the GOA, FA, and SFS algorithms with population sizes of 400, 100, and 400, respectively. The optimization curves of the mentioned structures are also shown in Figure 4b-d. The other parameters of the SFS algorithm (maximum diffusion = 2 and the type of walk, which was the first Gaussian walk) and the three parameters of the FA algorithm (α = 0.5, β = 0.2, and γ = 0.1) were manually tuned based on previous experience or by trial and error. These models are then used to estimate the CL.

4.2. Accuracy Criteria

Along with the RMSE, the mean absolute error (MAE) is used to report the prediction error in both the training and testing phases. Moreover, the agreement between the measured and predicted results is evaluated by the coefficient of determination (R²), which varies from 0 to 1. The larger the R² is, the stronger the correlation between the results. These two indices (i.e., MAE and R²) are expressed by Equations (10) and (11), respectively.
$MAE = \frac{1}{Q}\sum_{i=1}^{Q}\left|CL_{i\,observed} - CL_{i\,predicted}\right|$
$R^{2} = 1 - \frac{\sum_{i=1}^{Q}\left(CL_{i\,predicted} - CL_{i\,observed}\right)^{2}}{\sum_{i=1}^{Q}\left(CL_{i\,observed} - \overline{CL}_{observed}\right)^{2}},$
where the average of the $CL_{i\,observed}$ values is denoted by $\overline{CL}_{observed}$.
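For reference, Equations (9)-(11) can be computed with a few lines of NumPy, as in the hedged sketch below (function and variable names are illustrative).

```python
import numpy as np

def accuracy_criteria(observed, predicted):
    """RMSE, MAE, and R^2 as defined in Equations (9)-(11)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    err = observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum((predicted - observed) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return rmse, mae, 1 - ss_res / ss_tot

# Example with dummy values
print(accuracy_criteria([20.1, 25.4, 30.2], [19.8, 26.0, 29.5]))
```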

4.3. Accuracy Evaluation

In the training phase, assessing the outputs showed that all used predictors acquired a good understanding of the relationship between the CL and the considered inputs. A comparison between the measured and predicted values of the CL is also presented in Figure 5. It can be observed that the models can correctly forecast the overall CL pattern.
In addition, the used error criteria indicate a reasonable error in the learning process. In this regard, the RMSE was 3.3916, 2.7725, 3.0812, and 2.2824 for the MLPNN, GOA–ANN, FA–ANN, and SFS–ANN, respectively. This demonstrates nearly 18% and 9% accuracy enhancement by incorporating the GOA and FA, and, more considerably, around 33% by employing the SFS. The reduction of the MAE values from 2.6511 to 1.8972 (by 28.44%), 2.1468 (by 19.02%), and 1.6272 (by 38.62%) is further evidence of the efficacy of the applied metaheuristic algorithms. Moreover, the correlation of the training results is reflected by the obtained R² values of 0.8804, 0.9156, 0.8957, and 0.9428.
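For clarity, these enhancement percentages are the relative RMSE reductions with respect to the standalone MLPNN; for the SFS–ANN in the training phase, for instance:

$\frac{RMSE_{MLPNN} - RMSE_{SFS\text{–}ANN}}{RMSE_{MLPNN}} \times 100\% = \frac{3.3916 - 2.2824}{3.3916} \times 100\% \approx 32.7\%.$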
Figure 6 illustrates the regression chart for the testing data. According to this figure, the correlation of the MLPNN results grew from 88% to over 90% after applying all three metaheuristic algorithms. It is also shown that, with an R² of 0.9401, the SFS–ANN produced the most consistent outputs, followed by the GOA–ANN with 0.9123 and the FA–ANN with 0.9006. A comparison between the estimated values for the largest and lowest CLs highlights the capability of the models better (Table 2).
Moreover, the evaluation of the accuracy is conducted by measuring the error of the testing data. In this sense, Figure 7a–h depicts the calculated error (= measured CL − predicted CL) for each sample, along with histograms showing the frequency of errors in specific ranges. The calculated RMSEs of 3.1663, 2.7628, 2.9411, and 2.2828 indicate around 12.74%, 7.11%, and 27.90% reductions in the generalization error of the ANN by using the GOA, FA, and SFS techniques, respectively. As for the MAE, these reductions are 22.89%, 17.54%, and 35.56%, as the mean absolute error fell from 2.4575 to 1.8951, 2.0265, and 1.5836.
Table 3 provides a summary of the results. Based on the resulted accuracy criteria, three major points can be deduced as follows:
(i).
All used artificial intelligence models are capable enough to learn properly, and subsequently, estimate the CL pattern for buildings with unseen conditions;
(ii).
Synthesizing with three metaheuristic algorithms of GOA, FA, and SFS can significantly improve the regular ANN competency;
(iii).
Comparing the efficiency of the used algorithms reveals the outstanding optimization capability of SFS in optimizing the ANN. After SFS, GOA outperforms FA.

4.4. Presenting the Neural Predictive Formula

As explained, all evidence asserted the superiority of the SFS technique in optimizing the ANN in this study. It means that the solution matrix (i.e., the ANN weights and biases) proffered by this scheme develops a more flexible neural network. Therefore, in this section, the SFS–ANN ensemble is presented in the form of a neural-metaheuristic formulation to predict the CL by taking the effective parameters into the equation. It is expressed by Equation (12), which is made of six weights and one bias term belonging to the unique output neuron. As can be derived, this formula needs to receive some inputs shown by A, B, ..., F. These parameters symbolize the responses of the hidden neurons (see Figure 1), which are calculated by Equation (13). Considering the number of input parameters and hidden neurons, the main matrix comprises eight columns and six rows, which is multiplied by the input vector. Next, the bias vector is added, and the Tansig activation function is applied. It is worth noting that to use this predictive formula, the input values must be normalized, meaning that the output of the equation is also normalized and should be transformed back to the real scale.
CL SFS-ANN = 0.1898 × A + 0.9542 × B − 0.3437× C + 0.0578 × D + 0.8368 × E − 0.8240 × F − 0.2414
$[A\ B\ C\ D\ E\ F]^{T} = \mathrm{Tansig}\left( W_{h} \times [RC\ SA\ WA\ RA\ OH\ Orientation\ GA\ GAD]^{T} + b_{h} \right),$ where
$W_{h} = \begin{bmatrix} 1.0336 & 0.2712 & 0.9112 & 0.6027 & 0.8083 & 0.1966 & 0.1951 & 0.0468 \\ 0.7463 & 0.4399 & 0.6948 & 0.6665 & 0.7741 & 0.5713 & 0.4305 & 0.5282 \\ 0.2431 & 0.8419 & 0.3706 & 0.4923 & 0.5359 & 0.0527 & 1.0335 & 0.7495 \\ 0.4635 & 0.9505 & 0.6423 & 0.0044 & 1.0053 & 0.5465 & 0.3986 & 0.2620 \\ 0.2864 & 0.1648 & 0.6859 & 1.1583 & 0.4441 & 0.1043 & 0.1328 & 0.9595 \\ 0.4652 & 0.9660 & 0.8049 & 0.8814 & 0.1853 & 0.5100 & 0.4416 & 0.0622 \end{bmatrix}, \quad b_{h} = \begin{bmatrix} 1.7514 \\ 1.0509 \\ 0.3503 \\ 0.3503 \\ 1.0509 \\ 1.7514 \end{bmatrix}$
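As a practical illustration of applying Equations (12) and (13) (a sketch, not part of the original paper), the Python function below evaluates the SFS–ANN formula for one normalized input vector; the hidden-layer matrix W_h and bias vector b_h must be filled with the values of Equation (13), and the result must then be rescaled to the real CL range.

```python
import numpy as np

# Output-layer weights and bias taken from Equation (12)
W_OUT = np.array([0.1898, 0.9542, -0.3437, 0.0578, 0.8368, -0.8240])
B_OUT = -0.2414

def predict_cl(x_norm, W_h, b_h):
    """Evaluate Equations (12)-(13) for one normalized input vector
    x_norm = [RC, SA, WA, RA, OH, Orientation, GA, GAD].
    W_h is the 6 x 8 hidden weight matrix and b_h the 6-element bias vector
    printed in Equation (13)."""
    hidden = np.tanh(W_h @ x_norm + b_h)   # hidden responses A..F, Equation (13)
    cl_norm = W_OUT @ hidden + B_OUT       # Equation (12)
    return cl_norm                          # still normalized; rescale afterwards

# Hypothetical usage (values supplied by the user):
# cl = predict_cl(x_normalized, W_h, b_h)
```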

4.5. Future Projects

This study presented a reliable application of optimized intelligent models for dealing with an important energy-related issue. Given the improvements observed, the suggested method could evolve into a more user-friendly platform (such as a graphical user interface) in order to create an effective early prediction system that yields the cooling load upon receiving certain input factors. This could be of great interest to engineers and architects aiming to achieve the most optimal design of a residential building (e.g., with respect to its geometry).
However, it is believed that the efficiency of the model can be increased by pursuing a number of ideas. Some of these ideas are related to the data. First, it is recommended to compare the results with the case of using normalized data, to investigate which type of data is more suitable for these simulations. Second, using an optimal number of inputs may lead to a less complex method and, consequently, fewer parameters to be optimized. Associating the HL along with the CL turns the problem into a double-target simulation, and the benefit should be weighed against the added complexity. Another potential subject could be working on different types of buildings in one specific study; by doing this, the model could be generalized to many other building usages. Lastly, further studies are recommended to focus on comparative efforts in order to point out the most suitable algorithm for coupling with neural networks or even other intelligent tools.

5. Conclusions

In this study, for the first time, three robust optimizers, namely stochastic fractal search, the grasshopper optimization algorithm, and the firefly algorithm, were applied to the problem of early cooling load prediction. These algorithms were used to optimize the performance of a neural network by adjusting its computational parameters. The findings revealed, firstly, the applicability of neural computing for analyzing the non-linear relationship between the CL and the eight influential parameters. Secondly, all three metaheuristic algorithms could promisingly optimize the ANN for the betterment of accuracy. Last but not least, a comparison among the results showed that the SFS-based hybrid model enjoys higher accuracy than the GOA- and FA-based models. The governing neural-metaheuristic relationship of the SFS–ANN was also presented in the form of a viable formula for mathematically predicting the CL.

Author Contributions

Conceptualization, methodology, H.M.; software, validation, writing—original draft preparation, H.M. and A.M.; writing—review and editing, visualization, project administration, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The support of the Alexander von Humboldt Foundation is acknowledged. Open access funding was provided by the Publication Fund of the TU Dresden.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Al-Homoud, M.S. Computer-aided building energy analysis techniques. Build. Environ. 2001, 36, 421–433. [Google Scholar] [CrossRef]
  2. Yang, W.; Zhao, Y.; Wang, D.; Wu, H.; Lin, A.; He, L. Using principal components analysis and IDW interpolation to determine spatial and temporal changes of surface water quality of Xin’anjiang river in Huangshan, China. Int. J. Environ. Res. Public Health 2020, 17, 2942. [Google Scholar] [CrossRef] [PubMed]
  3. Zuo, X.; Dong, M.; Gao, F.; Tian, S. The modeling of the electric heating and cooling system of the integrated energy system in the coastal area. J. Coast. Res. 2020, 103, 1022–1029. [Google Scholar] [CrossRef]
  4. Moayedi, H.; Gör, M.; Lyu, Z.; Bui, D.T. Herding Behaviors of grasshopper and Harris hawk for hybridizing the neural network in predicting the soil compression coefficient. Measurement 2020, 152, 107389. [Google Scholar] [CrossRef]
  5. Ahmadi-Karvigh, S.; Ghahramani, A.; Becerik-Gerber, B.; Soibelman, L. Real-time activity recognition for energy efficiency in buildings. Appl. Energy 2018, 211, 146–160. [Google Scholar] [CrossRef]
  6. Bibri, S.E.; Krogstie, J. Smart sustainable cities of the future: An extensive interdisciplinary literature review. Sustain. Cities Soc. 2017, 31, 183–212. [Google Scholar] [CrossRef]
  7. McQuiston, F.C.; Parker, J.D. Heating, Ventilating, and Air Conditioning: Analysis and Design; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 1982. [Google Scholar]
  8. Chen, Y.; He, L.; Guan, Y.; Lu, H.; Li, J. Life cycle assessment of greenhouse gas emissions and water-energy optimization for shale gas supply chain planning based on multi-level approach: Case study in Barnett, Marcellus, Fayetteville, and Haynesville shales. Energy Convers. Manag. 2017, 134, 382–398. [Google Scholar] [CrossRef]
  9. Chen, Y.; He, L.; Li, J.; Zhang, S. Multi-criteria design of shale-gas-water supply chains and production systems towards optimal life cycle economics and greenhouse gas emissions under uncertainty. Comput. Chem. Eng. 2018, 109, 216–235. [Google Scholar] [CrossRef]
  10. Chen, Y.; Li, J.; Lu, H.; Yan, P. Coupling system dynamics analysis and risk aversion programming for optimizing the mixed noise-driven shale gas-water supply chains. J. Clean. Prod. 2021, 278, 123209. [Google Scholar] [CrossRef]
  11. Cheng, X.; He, L.; Lu, H.; Chen, Y.; Ren, L. Optimal water resources management and system benefit for the Marcellus shale-gas reservoir in Pennsylvania and West Virginia. J. Hydrol. 2016, 540, 412–422. [Google Scholar] [CrossRef]
  12. Han, X.; Zhang, D.; Yan, J.; Zhao, S.; Liu, J. Process development of flue gas desulphurization wastewater treatment in coal-fired power plants towards zero liquid discharge: Energetic, economic and environmental analyses. J. Clean. Prod. 2020, 261, 121144. [Google Scholar] [CrossRef]
  13. He, L.; Chen, Y.; Zhao, H.; Tian, P.; Xue, Y.; Chen, L. Game-based analysis of energy-water nexus for identifying environmental impacts during Shale gas operations under stochastic input. Sci. Total Environ. 2018, 627, 1585–1601. [Google Scholar] [CrossRef]
  14. Li, Z.-G.; Cheng, H.; Gu, T.-Y. Research on dynamic relationship between natural gas consumption and economic growth in China. Struct. Chang. Econ. Dyn. 2019, 49, 334–339. [Google Scholar] [CrossRef]
  15. Liu, E.; Lv, L.; Yi, Y.; Xie, P. Research on the Steady Operation Optimization Model of Natural Gas Pipeline Considering the Combined Operation of Air Coolers and Compressors. IEEE Access 2019, 7, 83251–83265. [Google Scholar] [CrossRef]
  16. Su, Z.; Liu, E.; Xu, Y.; Xie, P.; Shang, C.; Zhu, Q. Flow field and noise characteristics of manifold in natural gas transportation station. Oil Gas Sci. Technol. Rev. 2019, 74, 70. [Google Scholar] [CrossRef]
  17. He, L.; Shen, J.; Zhang, Y. Ecological vulnerability assessment for ecological conservation and environmental management. J. Environ. Manag. 2018, 206, 1115–1125. [Google Scholar] [CrossRef]
  18. Lu, H.; Tian, P.; He, L. Evaluating the global potential of aquifer thermal energy storage and determining the potential worldwide hotspots driven by socio-economic, geo-hydrologic and climatic conditions. Renew. Sustain. Energy Rev. 2019, 112, 788–796. [Google Scholar] [CrossRef]
  19. Tian, P.; Lu, H.; Feng, W.; Guan, Y.; Xue, Y. Large decrease in streamflow and sediment load of Qinghai–Tibetan Plateau driven by future climate change: A case study in Lhasa River Basin. CATENA 2020, 187, 104340. [Google Scholar] [CrossRef]
  20. Zhang, K.; Ruben, G.B.; Li, X.; Li, Z.; Yu, Z.; Xia, J.; Dong, Z. A comprehensive assessment framework for quantifying climatic and anthropogenic contributions to streamflow changes: A case study in a typical semi-arid North China basin. Environ. Model. Softw. 2020, 128, 104704. [Google Scholar] [CrossRef]
  21. Chen, H.; Chen, A.; Xu, L.; Xie, H.; Qiao, H.; Lin, Q.; Cai, K. A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources. Agric. Water Manag. 2020, 240, 106303. [Google Scholar] [CrossRef]
  22. Feng, S.; Lu, H.; Tian, P.; Xue, Y.; Lu, J.; Tang, M.; Feng, W. Analysis of microplastics in a remote region of the Tibetan Plateau: Implications for natural environmental response to human activities. Sci. Total Environ. 2020, 739, 140087. [Google Scholar] [CrossRef] [PubMed]
  23. He, L.; Shao, F.; Ren, L. Sustainability appraisal of desired contaminated groundwater remediation strategies: An information-entropy-based stochastic multi-criteria preference model. Environ. Dev. Sustain. 2020, 23, 1759–1779. [Google Scholar] [CrossRef]
  24. Huang, Z.; Zheng, H.; Guo, L.; Mo, D. Influence of the Position of Artificial Boundary on Computation Accuracy of Conjugated Infinite Element for a Finite Length Cylindrical Shell. Acoust. Aust. 2020, 48, 287–294. [Google Scholar] [CrossRef]
  25. Jia, L.; Liu, B.; Zhao, Y.; Chen, W.; Mou, D.; Fu, J.; Wang, Y.; Xin, W.; Zhao, L. Structure design of MoS2@Mo2C on nitrogen-doped carbon for enhanced alkaline hydrogen evolution reaction. J. Mater. Sci. 2020, 55, 16197–16210. [Google Scholar] [CrossRef]
  26. Shirazi, M.G.; Rashid, A.S.B.A.; Nazir, R.B.; Rashid, A.H.B.A.; Moayedi, H.; Horpibulsuk, S.; Samingthong, W. Sustainable soil bearing capacity improvement using natural limited life geotextile reinforcement—A review. Minerals 2020, 10, 479. [Google Scholar] [CrossRef]
  27. Lv, Q.; Liu, H.; Yang, D.; Liu, H. Effects of urbanization on freight transport carbon emissions in China: Common characteristics and regional disparity. J. Clean. Prod. 2019, 211, 481–489. [Google Scholar] [CrossRef]
  28. Quan, Q.; Hao, Z.; Xifeng, H.; Jingchun, L. Research on water temperature prediction based on improved support vector regression. Neural Comput. Appl. 2020, 58, 458–465. [Google Scholar] [CrossRef]
  29. Yang, M.; Sowmya, A. An Underwater Color Image Quality Evaluation Metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
  30. Zhang, B.; Xu, D.; Liu, Y.; Li, F.; Cai, J.; Du, L. Multi-scale evapotranspiration of summer maize and the controlling meteorological factors in north China. Agric. For. Meteorol. 2016, 216, 1–12. [Google Scholar] [CrossRef]
  31. Zhang, D.; Han, X.; Wang, H.; Yang, Q.; Yan, J. Experimental study on transient heat/mass transfer characteristics during static flash of aqueous NaCl solution. Int. J. Heat Mass Transf. 2020, 152, 119543. [Google Scholar] [CrossRef]
  32. Liu, J.; Liu, Y.; Wang, X. An environmental assessment model of construction and demolition waste based on system dynamics: A case study in Guangzhou. Environ. Sci. Pollut. Res. 2020, 27, 37237–37259. [Google Scholar] [CrossRef]
  33. Zhang, W. Parameter Adjustment Strategy and Experimental Development of Hydraulic System for Wave Energy Power Generation. Symmetry 2020, 12, 711. [Google Scholar] [CrossRef]
  34. Bui, X.-N.; Moayedi, H.; Rashid, A.S.A. Developing a predictive method based on optimized M5Rules–GA predicting heating load of an energy-efficient building system. Eng. Comput. 2019, 36, 931–940. [Google Scholar] [CrossRef]
  35. Gao, W.; Alsarraf, J.; Moayedi, H.; Shahsavar, A.; Nguyen, H. Comprehensive preference learning and feature validity for designing energy-efficient residential buildings using machine learning paradigms. Appl. Soft Comput. 2019, 84, 105748. [Google Scholar] [CrossRef]
  36. Guo, Z.; Moayedi, H.; Foong, L.K.; Bahiraei, M. Optimal Modification of Heating, Ventilation, and Air Conditioning System Performances in Residential Buildings Using the Integration of Metaheuristic Optimization and Neural Computing. Energy Build. 2020, 214, 109866. [Google Scholar] [CrossRef]
  37. Jun, Z.; Kanyu, Z. A particle swarm optimization approach for optimal design of PID controller for temperature control in HVAC. In Proceedings of the 2011 Third International Conference on Measuring Technology and Mechatronics Automation, 6–7 January 2011; pp. 230–233. [Google Scholar]
  38. Moayedi, H.; Bui, T.D.; Dounis, A.; Lyu, Z.; Foong, K.L. Predicting Heating Load in Energy-Efficient Buildings Through Machine Learning Techniques. Appl. Sci. 2019, 9, 4338. [Google Scholar] [CrossRef] [Green Version]
  39. Moayedi, H.; Mu’azu, M.A.; Foong, L.K. Novel Swarm-based Approach for Predicting the Cooling Load of Residential Buildings Based on Social Behavior of Elephant Herds. Energy Build. 2019, 206, 109579. [Google Scholar] [CrossRef]
  40. Ghahramani, A.; Karvigh, S.A.; Becerik-Gerber, B. HVAC system energy optimization using an adaptive hybrid metaheuristic. Energy Build. 2017, 152, 149–161. [Google Scholar] [CrossRef] [Green Version]
  41. Tien Bui, D.; Moayedi, H.; Anastasios, D.; Kok Foong, L. Predicting Heating and Cooling Loads in Energy-Efficient Buildings Using Two Hybrid Intelligent Models. Appl. Sci. 2019, 9, 3543. [Google Scholar] [CrossRef] [Green Version]
  42. Zhou, G.; Moayedi, H.; Bahiraei, M.; Lyu, Z. Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings. J. Clean. Prod. 2020, 254, 120082. [Google Scholar] [CrossRef]
  43. Ghahramani, A.; Zhang, K.; Dutta, K.; Yang, Z.; Becerik-Gerber, B. Energy savings from temperature setpoints and deadband: Quantifying the influence of building and system properties on savings. Appl. Energy 2016, 165, 930–942. [Google Scholar] [CrossRef] [Green Version]
  44. Ferreira, P.; Ruano, A.; Silva, S.; Conceicao, E. Neural networks based predictive control for thermal comfort and energy savings in public buildings. Energy Build. 2012, 55, 238–251. [Google Scholar] [CrossRef] [Green Version]
  45. Ihara, T.; Gustavsen, A.; Jelle, B.P. Effect of facade components on energy efficiency in office buildings. Appl. Energy 2015, 158, 422–432. [Google Scholar] [CrossRef]
  46. Park, J.; Lee, S.J.; Kim, K.H.; Kwon, K.W.; Jeong, J.-W. Estimating thermal performance and energy saving potential of residential buildings using utility bills. Energy Build. 2016, 110, 23–30. [Google Scholar] [CrossRef]
  47. Wang, Z.; Hong, T.; Piette, M.A. Building thermal load prediction through shallow machine learning and deep learning. Appl. Energy 2020, 263, 114683. [Google Scholar] [CrossRef] [Green Version]
  48. Zhao, X.; Ye, Y.; Ma, J.; Shi, P.; Chen, H. Construction of electric vehicle driving cycle for studying electric vehicle energy consumption and equivalent emissions. Environ. Sci. Pollut. Res. 2020, 27, 37395–37409. [Google Scholar] [CrossRef]
  49. Zhu, L.; Kong, L.; Zhang, C. Numerical Study on Hysteretic Behaviour of Horizontal-Connection and Energy-Dissipation Structures Developed for Prefabricated Shear Walls. Appl. Sci. 2020, 10, 1240. [Google Scholar] [CrossRef] [Green Version]
  50. Chao, L.; Zhang, K.; Li, Z.; Zhu, Y.; Wang, J.; Yu, Z. Geographically weighted regression based methods for merging satellite and gauge precipitation. J. Hydrol. 2018, 558, 275–289. [Google Scholar] [CrossRef]
  51. Li, T.; Xu, M.; Zhu, C.; Yang, R.; Wang, Z.; Guan, Z. A Deep Learning Approach for Multi-Frame In-Loop Filter of HEVC. IEEE Trans. Image Process. 2019, 28, 5663–5678. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Xu, M.; Li, T.; Wang, Z.; Deng, X.; Yang, R.; Guan, Z. Reducing Complexity of HEVC: A Deep Learning Approach. IEEE Trans. Image Process. 2018, 27, 5044–5059. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Fu, X.; Pace, P.; Aloi, G.; Yang, L.; Fortino, G. Topology Optimization Against Cascading Failures on Wireless Sensor Networks Using a Memetic Algorithm. Comput. Netw. 2020, 177, 107327. [Google Scholar] [CrossRef]
  54. Fu, X.; Yang, Y. Modeling and analysis of cascading node-link failures in multi-sink wireless sensor networks. Reliab. Eng. Syst. Saf. 2020, 197, 106815. [Google Scholar] [CrossRef]
  55. Li, C.; Hou, L.; Sharma, B.Y.; Li, H.; Chen, C.; Li, Y.; Zhao, X.; Huang, H.; Cai, Z.; Chen, H. Developing a new intelligent system for the diagnosis of tuberculous pleural effusion. Comput. Methods Programs Biomed. 2018, 153, 211–225. [Google Scholar] [CrossRef]
  56. Cao, B.; Zhao, J.; Lv, Z.; Gu, Y.; Yang, P.; Halgamuge, S.K. Multiobjective Evolution of Fuzzy Rough Neural Network via Distributed Parallelism for Stock Prediction. IEEE Trans. Fuzzy Syst. 2020, 28, 939–952. [Google Scholar] [CrossRef]
  57. Shi, K.; Wang, J.; Tang, Y.; Zhong, S. Reliable asynchronous sampled-data filtering of T–S fuzzy uncertain delayed neural networks with stochastic switched topologies. Fuzzy Sets Syst. 2020, 381, 1–25. [Google Scholar] [CrossRef]
  58. Shi, K.; Wang, J.; Zhong, S.; Tang, Y.; Cheng, J. Non-fragile memory filtering of T-S fuzzy delayed neural networks based on switched fuzzy sampled-data control. Fuzzy Sets Syst. 2020, 394, 40–64. [Google Scholar] [CrossRef]
  59. Zhu, Q. Research on Road Traffic Situation Awareness System Based on Image Big Data. IEEE Intell. Syst. 2020, 35, 18–26. [Google Scholar] [CrossRef]
  60. Zhang, X.; Wang, Y.; Chen, X.; Su, C.-Y.; Li, Z.; Wang, C.; Peng, Y. Decentralized adaptive neural approximated inverse control for a class of large-scale nonlinear hysteretic systems with time delays. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 2424–2437. [Google Scholar] [CrossRef]
  61. Lv, Z.; Qiao, L. Deep belief network and linear perceptron based cognitive computing for collaborative robots. Appl. Soft Comput. 2020, 92, 106300. [Google Scholar] [CrossRef]
  62. Qian, J.; Feng, S.; Li, Y.; Tao, T.; Han, J.; Chen, Q.; Zuo, C. Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry. Opt. Lett. 2020, 45, 1842–1845. [Google Scholar] [CrossRef] [PubMed]
  63. Qian, J.; Feng, S.; Tao, T.; Hu, Y.; Li, Y.; Chen, Q.; Zuo, C. Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement. Apl Photonics 2020, 5, 046105. [Google Scholar] [CrossRef]
  64. Qiu, T.; Shi, X.; Wang, J.; Li, Y.; Qu, S.; Cheng, Q.; Cui, T.; Sui, S. Deep Learning: A Rapid and Efficient Route to Automatic Metasurface Design. Adv. Sci. 2019, 6, 1900128. [Google Scholar] [CrossRef]
  65. Hu, L.; Hong, G.; Ma, J.; Wang, X.; Chen, H. An efficient machine learning approach for diagnosis of paraquat-poisoned patients. Comput. Biol. Med. 2015, 59, 116–124. [Google Scholar] [CrossRef]
  66. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl. Based Syst. 2016, 96, 61–75. [Google Scholar] [CrossRef]
  67. Zhang, C.; Wang, H. Swing Vibration Control of Suspended Structure Using Active Rotary Inertia Driver System: Parametric Analysis and Experimental Verification. Appl. Sci. 2019, 9, 3144. [Google Scholar] [CrossRef] [Green Version]
  68. Zhang, X.; Fan, M.; Wang, D.; Zhou, P.; Tao, D. Top-k Feature Selection Framework Using Robust 0-1 Integer Programming. IEEE Trans. Neural Netw. Learn. Syst. 2020, 1–15. [Google Scholar] [CrossRef]
  69. Zhao, X.; Li, D.; Yang, B.; Chen, H.; Yang, X.; Yu, C.; Liu, S. A two-stage feature selection method with its application. Comput. Electr. Eng. 2015, 47, 114–125. [Google Scholar] [CrossRef]
  70. Liu, S.; Yu, W.; Chan, F.T.; Niu, B. A variable weight-based hybrid approach for multi-attribute group decision making under interval-valued intuitionistic fuzzy sets. Int. J. Intell. Syst. 2020, 36, 1015–1052. [Google Scholar] [CrossRef]
  71. Chen, H.-L.; Wang, G.; Ma, C.; Cai, Z.-N.; Liu, W.-B.; Wang, S.-J. An efficient hybrid kernel extreme learning machine approach for early diagnosis of Parkinson’ s disease. Neurocomputing 2016, 184, 131–144. [Google Scholar] [CrossRef] [Green Version]
  72. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84. [Google Scholar] [CrossRef]
  73. Wang, S.-J.; Chen, H.-L.; Yan, W.-J.; Chen, Y.-H.; Fu, X. Face recognition and micro-expression recognition based on discriminant tensor subspace analysis plus extreme learning machine. Neural Process. Lett. 2014, 39, 25–43. [Google Scholar] [CrossRef]
  74. Xia, J.; Chen, H.; Li, Q.; Zhou, M.; Chen, L.; Cai, Z.; Fang, Y.; Zhou, H. Ultrasound-based differentiation of malignant and benign thyroid Nodules: An extreme learning machine approach. Comput. Methods Programs Biomed. 2017, 147, 37–49. [Google Scholar] [CrossRef]
  75. Zhang, Y.; Liu, R.; Wang, X.; Chen, H.; Li, C. Boosted binary Harris hawks optimizer and feature selection. Eng. Comput. 2020, 1–30. [Google Scholar] [CrossRef]
  76. Wang, M.; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Appl. Soft Comput. J. 2020, 88, 105946. [Google Scholar] [CrossRef]
  77. Cao, Y.; Li, Y.; Zhang, G.; Jermsittiparsert, K.; Nasseri, M. An efficient terminal voltage control for PEMFC based on an improved version of whale optimization algorithm. Energy Rep. 2020, 6, 530–542. [Google Scholar] [CrossRef]
  78. Xu, X.; Chen, H.-L. Adaptive computational chemotaxis based on field in bacterial foraging optimization. Soft Comput. 2014, 18, 797–807. [Google Scholar] [CrossRef]
  79. Zhao, X.; Zhang, X.; Cai, Z.; Tian, X.; Wang, X.; Huang, Y.; Chen, H.; Hu, L. Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients. Comput. Biol. Chem. 2019, 78, 481–490. [Google Scholar] [CrossRef]
  80. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203. [Google Scholar] [CrossRef]
  81. Cao, B.; Dong, W.; Lv, Z.; Gu, Y.; Singh, S.; Kumar, P. Hybrid Microgrid Many-Objective Sizing Optimization With Fuzzy Decision. IEEE Trans. Fuzzy Syst. 2020, 28, 2702–2710. [Google Scholar] [CrossRef]
  82. Cao, B.; Fan, S.; Zhao, J.; Yang, P.; Muhammad, K.; Tanveer, M. Quantum-enhanced multiobjective large-scale optimization via parallelism. Swarm Evol. Comput. 2020, 57, 100697. [Google Scholar] [CrossRef]
  83. Cao, B.; Wang, X.; Zhang, W.; Song, H.; Lv, Z. A Many-Objective Optimization Model of Industrial Internet of Things Based on Private Blockchain. IEEE Netw. 2020, 34, 78–83. [Google Scholar] [CrossRef]
  84. Cao, B.; Zhao, J.; Gu, Y.; Fan, S.; Yang, P. Security-Aware Industrial Wireless Sensor Network Deployment Optimization. IEEE Trans. Ind. Inform. 2020, 16, 5309–5316. [Google Scholar] [CrossRef]
  85. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626. [Google Scholar] [CrossRef]
  86. Cao, B.; Zhao, J.; Yang, P.; Gu, Y.; Muhammad, K.; Rodrigues, J.J.P.C.; de Albuquerque, V.H.C. Multiobjective 3-D Topology Optimization of Next-Generation Wireless Data Center Network. IEEE Trans. Ind. Inform. 2020, 16, 3597–3605. [Google Scholar] [CrossRef]
  87. Qu, S.; Han, Y.; Wu, Z.; Raza, H. Consensus Modeling with Asymmetric Cost Based on Data-Driven Robust Optimization. Group Decis. Negot. 2020, 30, 1–38. [Google Scholar] [CrossRef]
  88. Zhao, X.; Li, D.; Yang, B.; Ma, C.; Zhu, Y.; Chen, H. Feature selection based on improved ant colony optimization for online detection of foreign fiber in cotton. Appl. Soft Comput. 2014, 24, 585–596. [Google Scholar] [CrossRef]
  89. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2019, 24, 1–20. [Google Scholar] [CrossRef]
  90. Abedini, M.; Zhang, C. Performance Assessment of Concrete and Steel Material Models in LS-DYNA for Enhanced Numerical Simulation, A State of the Art Review. Arch. Comput. Methods Eng. 2020, 28, 1–22. [Google Scholar] [CrossRef]
  91. Gholipour, G.; Zhang, C.; Mousavi, A.A. Numerical analysis of axially loaded RC columns subjected to the combination of impact and blast loads. Eng. Struct. 2020, 219, 110924. [Google Scholar] [CrossRef]
  92. Liu, J.; Wu, C.; Wu, G.; Wang, X. A novel differential search algorithm and applications for structure design. Appl. Math. Comput. 2015, 268, 246–269. [Google Scholar] [CrossRef]
  93. Mou, B.; Zhao, F.; Qiao, Q.; Wang, L.; Li, H.; He, B.; Hao, Z. Flexural behavior of beam to column joints with or without an overlying concrete slab. Eng. Struct. 2019, 199, 109616. [Google Scholar] [CrossRef]
  94. Wang, J.; Huang, Y.; Wang, T.; Zhang, C.; Liu, Y.-H. Fuzzy finite-time stable compensation control for a building structural vibration system with actuator failures. Appl. Soft Comput. 2020, 93, 106372. [Google Scholar] [CrossRef]
  95. Wu, C.; Wu, P.; Wang, J.; Jiang, R.; Chen, M.; Wang, X. Critical review of data-driven decision-making in bridge operation and maintenance. Struct. Infrastruct. Eng. 2020, 17, 1–24. [Google Scholar] [CrossRef]
  96. Zhang, C.; Abedini, M.; Mehrmashhadi, J. Development of pressure-impulse models and residual capacity assessment of RC columns using high fidelity Arbitrary Lagrangian-Eulerian simulation. Eng. Struct. 2020, 224, 111219. [Google Scholar] [CrossRef]
  97. Zhang, C.; Wang, H. Swing vibration control of suspended structures using the Active Rotary Inertia Driver system: Theoretical modeling and experimental verification. Struct. Control Health Monit. 2020, 27, e2543. [Google Scholar] [CrossRef]
  98. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic algorithms: A comprehensive review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  99. Chao, M.; Kai, C.; Zhiwei, Z. Research on tobacco foreign body detection device based on machine vision. Trans. Inst. Meas. Control 2020, 42, 2857–2871. [Google Scholar] [CrossRef]
  100. Liu, D.; Wang, S.; Huang, D.; Deng, G.; Zeng, F.; Chen, H. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns. Comput. Biol. Med. 2016, 72, 185–200. [Google Scholar] [CrossRef]
  101. Xu, M.; Li, C.; Zhang, S.; Callet, P.L. State-of-the-Art in 360° Video/Image Processing: Perception, Assessment and Compression. IEEE J. Sel. Top. Signal Process. 2020, 14, 5–26. [Google Scholar] [CrossRef] [Green Version]
  102. Yue, H.; Wang, H.; Chen, H.; Cai, K.; Jin, Y. Automatic detection of feather defects using lie group and fuzzy fisher criterion for shuttlecock production. Mech. Syst. Signal Process. 2020, 141, 106690. [Google Scholar] [CrossRef]
  103. Zenggang, X.; Zhiwen, T.; Xiaowen, C.; Xue-min, Z.; Kaibin, Z.; Conghuan, Y. Research on Image Retrieval Algorithm Based on Combination of Color and Shape Features. J. Signal Process. Syst. 2019, 93, 1–8. [Google Scholar] [CrossRef]
Figure 1. A three-layer multi-layer perceptron (MLP).
Figure 2. Building shapes with respect to the relative compactness (RC) values.
Figure 3. The obtained cooling load (CL) values vs. influential factors.
Figure 4. (a) Sensitivity analysis and (b–d) best optimization curves.
Figure 5. The measured CLs versus those predicted by (a) MLPNN, (b) GOA–ANN (GOA-optimized artificial neural network), (c) FA–ANN (FA-optimized artificial neural network), and (d) SFS–ANN (SFS-optimized artificial neural network) models.
Figure 6. The consistency of the measured CLs with those predicted by (a) MLPNN, (b) GOA–ANN, (c) FA–ANN, and (d) SFS–ANN models in the testing phase.
Figure 7. Direct errors and their frequency drawn for the testing results of (a,b) MLPNN, (c,d) GOA–ANN, (e,f) FA–ANN, and (g,h) SFS–ANN.
Table 1. Statistical indices describing the input/target variables.

Feature | Mean | Standard Error | Sample Variance | Minimum | Maximum
Relative compactness (-) | 0.76 | 0.00 | 0.01 | 0.62 | 0.98
Surface area (m2) | 671.71 | 3.18 | 7759.16 | 514.50 | 808.50
Wall area (m2) | 318.50 | 1.57 | 1903.27 | 245.00 | 416.50
Roof area (m2) | 176.60 | 1.63 | 2039.96 | 110.25 | 220.50
Height (m) | 5.25 | 0.06 | 3.07 | 3.50 | 7.00
Orientation (-) | 3.50 | 0.04 | 1.25 | 2.00 | 5.00
Glazing area (-) | 0.23 | 0.00 | 0.02 | 0.00 | 0.40
Glazing area distribution (-) | 2.81 | 0.06 | 2.41 | 0.00 | 5.00
Cooling load (kWh/m2) | 24.59 | 0.34 | 90.50 | 10.90 | 48.03
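For readers who wish to reproduce the descriptive indices in Table 1, the following minimal Python sketch computes the same statistics from the dataset; the file name energy_efficiency.csv and the DataFrame variable df are hypothetical placeholders for however the input/target data are stored.

```python
import pandas as pd

# Hypothetical loading step: the eight input features and the CL target are
# assumed to be stored column-wise in a CSV file with this illustrative name.
df = pd.read_csv("energy_efficiency.csv")

# Descriptive indices reported in Table 1 for every column:
# mean, standard error of the mean, sample variance, minimum, and maximum.
summary = pd.DataFrame({
    "Mean": df.mean(),
    "Standard Error": df.sem(),
    "Sample Variance": df.var(ddof=1),
    "Minimum": df.min(),
    "Maximum": df.max(),
})
print(summary.round(2))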
Table 2. Relative errors for predicting the five minimum/maximum CL values.

 | Measured | Predicted (MLPNN) | Predicted (GOA-ANN) | Predicted (FA-ANN) | Predicted (SFS-ANN) | Error % (MLPNN) | Error % (GOA-ANN) | Error % (FA-ANN) | Error % (SFS-ANN)
Minimum | 11.17 | 11.07 | 11.22 | 13.08 | 11.82 | 0.93 | 0.45 | 17.14 | 5.80
 | 11.27 | 10.73 | 11.82 | 12.96 | 11.07 | 4.77 | 4.92 | 14.97 | 1.78
 | 11.73 | 11.41 | 12.67 | 13.14 | 12.01 | 2.70 | 8.04 | 12.03 | 2.40
 | 12.04 | 17.11 | 15.12 | 14.60 | 12.65 | 42.15 | 25.58 | 21.28 | 5.09
 | 13.43 | 10.39 | 12.89 | 13.52 | 13.13 | 22.65 | 4.03 | 0.68 | 2.20
Maximum | 41.07 | 34.85 | 34.66 | 32.89 | 36.96 | 15.13 | 15.60 | 19.91 | 10.00
 | 42.86 | 34.04 | 34.51 | 32.83 | 36.32 | 20.59 | 19.49 | 23.39 | 15.26
 | 44.18 | 38.95 | 39.27 | 37.72 | 41.58 | 11.83 | 11.12 | 14.63 | 5.89
 | 45.52 | 36.61 | 36.89 | 35.61 | 39.45 | 19.57 | 18.96 | 21.77 | 13.32
 | 45.97 | 38.77 | 37.14 | 35.72 | 39.20 | 15.67 | 19.21 | 22.30 | 14.72
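The Error (%) columns in Table 2 are the relative deviations of the predicted CLs from the measured ones. A minimal sketch of this calculation is given below, using the five minimum measured CLs and the SFS-ANN predictions from the table; small differences from the tabulated errors arise from rounding of the listed predictions.

```python
import numpy as np

# Five minimum measured CLs from Table 2 and the corresponding SFS-ANN predictions.
measured = np.array([11.17, 11.27, 11.73, 12.04, 13.43])
predicted = np.array([11.82, 11.07, 12.01, 12.65, 13.13])

# Relative error in percent, as reported in the Error (%) columns of Table 2.
relative_error = np.abs(predicted - measured) / measured * 100
print(relative_error.round(2))  # approx. [5.82 1.77 2.39 5.07 2.23]
```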
Table 3. A summary of accuracy indices obtained in this study.

Model | Training RMSE | Training MAE | Training R2 | Testing RMSE | Testing MAE | Testing R2
MLPNN | 3.3916 | 2.6511 | 0.8804 | 3.1663 | 2.4575 | 0.8881
GOA-ANN | 2.7725 | 1.8972 | 0.9156 | 2.7628 | 1.8951 | 0.9123
FA-ANN | 3.0812 | 2.1468 | 0.8957 | 2.9411 | 2.0265 | 0.9006
SFS-ANN | 2.2824 | 1.6272 | 0.9428 | 2.2828 | 1.5836 | 0.9401
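Table 3 reports the root mean square error (RMSE), mean absolute error (MAE), and R2 for the training and testing subsets. The sketch below shows one common way these indices can be evaluated; the helper function accuracy_indices is illustrative, and R2 is computed here as the coefficient of determination, which may differ slightly from a correlation-based definition.

```python
import numpy as np

def accuracy_indices(measured: np.ndarray, predicted: np.ndarray) -> dict:
    """Return RMSE, MAE, and R2 (coefficient of determination) for one model."""
    residuals = measured - predicted
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    mae = float(np.mean(np.abs(residuals)))
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((measured - np.mean(measured)) ** 2))
    return {"RMSE": rmse, "MAE": mae, "R2": 1.0 - ss_res / ss_tot}

# Illustrative call with dummy arrays; in the study, `measured` would hold the
# testing-set CLs and `predicted` the outputs of MLPNN, GOA-ANN, FA-ANN, or SFS-ANN.
measured = np.array([11.17, 24.59, 41.07, 45.97])
predicted = np.array([11.82, 23.90, 36.96, 39.20])
print(accuracy_indices(measured, predicted))
```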
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
