**Energy Efficiency and Indoor Environment Quality**

Editor

**Roberto Alonso González Lezcano**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Roberto Alonso González Lezcano, Universidad CEU San Pablo, Spain

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Energies* (ISSN 1996-1073) (available at: https://www.mdpi.com/journal/energies/special_issues/indoor_environment).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-2506-8 (Hbk) ISBN 978-3-0365-2507-5 (PDF)**

© 2021 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **About the Editor**

**Roberto Alonso González Lezcano** is a Tenured Professor at the Department of Architecture and Design in the field of Building Systems, within the Institute of Technology at Universidad CEU San Pablo in Madrid. He is accredited by ANECA as a Senior Professor, and between 2003 and 2015, he undertook a six-year research period with CNEAI. He is the coordinator of the subject Mechanical Systems and of the Laboratory of Building Systems at Universidad CEU San Pablo, as well as the coordinator of the postgraduate course in Energy Efficiency and Mechanical Systems in Buildings. He is a member of the PhD Programs "Health Science and Technology" and "Composition, History and Techniques pertaining to Architecture and Urbanism", where he is the coordinator of the Construction, Innovation and Technology line and coordinator of the training course "Methodology of technical and statistical experimentation". He is the lecturer of the master's course in Quality in Construction and the postgraduate course in Overall Management of Buildings and Services at the Universidad Politécnica de Madrid. He is also coordinator of the Wind Energy section within the master's degree program in Renewable Energy at the Institute of Technology (Universidad CEU San Pablo), as well as a lecturer of Advanced Manufacturing Processes in the master's degree program in Industrial Engineering at the Universidad Europea de Madrid. In addition, he lectures in Mechanics of Continuous Media and Theory of Structures, Electromechanics and Materials at seven Spanish universities. He has published 29 books, 20 of them in the last five years, and is the coordinator and co-author of the collections *ABC of Building Systems* (Munilla-Lería Eds.), consisting of five volumes, and *Building Systems in Building Design* (Asimétricas Eds.), consisting of seven volumes.
He has published 58 articles in the last five years concerning energy, building systems, and mechanics of continuous media (41 JCR and 17 SJR), and he is the editor of several books on sustainability, energy efficiency and indoor environment quality, indexed in the Book Citation Index, WoS, Scopus, and SPI, among others. He directed the PhD theses "Efficiency of ventilation in residential environments for the promotion of health of its occupants: impact on architectural indoor design" and "Energy simulation as a prognostic tool in architecture: design of passive strategies in different climatic zones of Spain", defended in June 2018 and September 2020, respectively, Cum Laude with a mention of International PhD. He is also the Director/Co-director of three PhD theses currently in progress related to energy efficiency, comfort, and indoor air quality, entitled "Influence of occupants on energy consumption in multi-family dwellings in Madrid: Characterization of their habits", "Building Systems in efficient office buildings" and "Energy simulation as a forecast tool in architecture: The importance of the solar reflectance index in the design stage of residential buildings in Spain". He is a member of the Editorial Board and Review Board of several journals indexed in JCR, as well as a Guest Editor of Special Issues and Topic Editor for JCR-indexed MDPI journals.

## *Article* **Real-Time Reconstruction of Contaminant Dispersion from Sparse Sensor Observations with Gappy POD Method**

#### **Zheming Tong 1,2,\*,† and Yue Li 1,2,†**


Received: 25 March 2020; Accepted: 11 April 2020; Published: 15 April 2020

**Abstract:** Real-time estimation of three-dimensional field data for enclosed spaces is critical to HVAC control. This task is challenging, especially for large enclosed spaces with complex geometry, due to the nonuniform distribution and nonlinear variations of many environmental variables. Moreover, constructing and maintaining a network of sensors to fully cover the entire space is very costly, and insufficient sensor data might deteriorate system performance. Facing such a dilemma, gappy proper orthogonal decomposition (POD) offers a solution to provide three-dimensional field data with a limited number of sensor measurements. In this study, a gappy POD method for real-time reconstruction of contaminant distribution in an enclosed space is proposed by combining the POD method with a limited number of sensor measurements. To evaluate the gappy POD method, a computational fluid dynamics (CFD) model is utilized to perform a numerical simulation to validate the effectiveness of the gappy POD method in reconstructing contaminant distributions. In addition, the optimal sensor placement is given based on a quantitative metric to maximize the reconstruction accuracy, and the sensor placement constraints are also considered during the sensor design process. The gappy POD method is found to yield accurate reconstruction results. Further works will include the implementation of real-time control based on the POD method.

**Keywords:** gappy proper orthogonal decomposition; sparse sensor observations; contaminant distribution; reconstruction; CFD

#### **1. Introduction**

Automatic control of the HVAC system plays a significant role in improving indoor air quality [1–4] and reducing building energy consumption [5–8]. Real-time estimation of the contaminant distribution inside an enclosed space could provide immediate feedback to the control of ventilation systems and is, thus, of great significance. However, the temporal evolution of the contaminant distribution is characterized by complex nonlinear dynamics, which makes it challenging to reconstruct the spatiotemporal distribution of contaminants for real-time control of the ventilation system [9].

There are three main approaches to constructing indoor field data: spatial data interpolation, physics-based simulation, and the data-driven approach. Generally, ordinary Kriging is the most widely used spatial interpolation method [10] and could produce an effective estimate of an indoor thermal map [11] and pollutant distributions [11–13] based on sensor measurements. However, the accuracy of the interpolation is strongly dependent on the sensor placements and the number of sensors. Typically, a large number of sensors are required to achieve adequate spatial resolution [11]. On the other hand, physics-based models such as computational fluid dynamics (CFD) [14–16], fast fluid dynamics (FFD) [17–22], and zonal models [23–25] can provide three-dimensional field data for an indoor environment from only a small number of boundary conditions, which are, however, often difficult to obtain in real applications. Even though CFD has been widely employed for predicting the spatial variation of indoor environmental variables [26], its large computational time makes real-time prediction extremely difficult and real-time control thereby unlikely. To overcome the large computational cost, the FFD model was proposed, solving the continuity equation and the unsteady Navier–Stokes equations with a time-splitting approach [21]. In addition, the computational speed of FFD can be further accelerated by parallel and GPU computing [19]. Considering its reasonable accuracy and high computational efficiency, the FFD method has been extensively studied for fast simulation of transient flow [18,19] and for inverse design of ventilation parameters [20]. Although the reduction in computing time is significant, FFD is still computationally costly to implement, especially for real-time control. Unlike CFD and FFD models, zonal models are easily incorporated into control systems [25] and require very little computation time, because they only solve energy and mass balance equations for each thermal zone. However, based on the assumption that all quantities are uniform within each zone, zonal models are unable to provide detailed spatial information for a room space [23]. The third category is the data-driven approach. The artificial neural network (ANN) is one of the most widely used data-driven approaches for fast estimation of the indoor environment [27–30]. An ANN models the nonlinear relationships between boundary conditions and environmental variables by training on data obtained from CFD simulations. However, an ANN model is very prone to over-fitting [12]. Moreover, as a black-box model, the results predicted by an ANN are difficult to interpret.

Recently, the proper orthogonal decomposition (POD) has been extensively studied as a major class of data-driven approaches for dynamics reconstruction, since it provides a powerful tool to identify low-order modes of a system. Compared with ANN, POD provides a more interpretative reconstruction result by capturing the dominant physics-based coherent structures. At present, POD has wide application for dynamic reconstruction in the context of fluid mechanics [31,32], structural dynamics [33], ocean engineering [34,35], etc. As for the application in fluid dynamics, POD utilizes a few dominant POD bases for representation of the extracted flow features [36]. Moreover, to achieve active flow control in practice, sensor measurements must be introduced to the POD framework. Thus, gappy POD has been recently developed as an extension of the POD method by combining the POD model and sensor measurements [37]. Given a set of POD modes, gappy POD is able to reconstruct fluid dynamics with a limited number of sensor measurements through linear regression.

POD has recently been introduced to the field of the built environment for fast reconstruction of the indoor temperature field [38–43] and airflow pattern [38,44]. For example, Allery et al. estimated the airflow pattern in a two-dimensional ventilated cavity by a POD-Galerkin construction [44]. Sempey et al. obtained reduced-order models of the temperature distribution in an air-conditioned room under steady flow conditions based on the POD-Galerkin method [42]. Phan et al. applied the POD model for investigation of the temperature distribution in a data center by using the POD-Galerkin method and further investigated the effects of design parameters on the reconstruction accuracy [43]. Li et al. established a reduced-order model for fast estimation of the indoor thermal map by using Galerkin projection and further developed a real-time controller based on the reduced-order model [41]. Tallet et al. used a POD model for real-time reconstruction of the indoor temperature distribution based on sensor measurements [38]. Meyer et al. combined the POD method with the linear stochastic estimation method for reconstruction of the fluctuating thermal field in an experimental room based on real-time sensor measurements [40]. Jiang et al. fused the input parameters of a CFD simulation and sparse sensor observations for reconstruction of a steady indoor thermal field [39]. However, gappy POD, which has been confirmed to perform well in turbulent flow sensing [37,45], has seldom been used for reconstructing the transient variations of the indoor environment.

Sensor measurements are critical to the estimation and control of the indoor environment. Considering the nonuniform distribution of indoor contaminants, a large number of sensors are required to guarantee satisfactory HVAC performance, which is, however, usually costly. Thus, it is of great significance to extrapolate the three-dimensional distribution of indoor air contaminants from limited sensor measurements. In this paper, a gappy POD method for the real-time reconstruction of contaminant distribution in an enclosed space is proposed by combining the POD method with a limited number of sensor measurements. Even though gappy POD has been confirmed to perform well in the field of turbulent flow sensing, its effectiveness for the real-time reconstruction of contaminant distributions remains to be investigated. To evaluate gappy POD for the reconstruction of indoor environments, CFD models are utilized to perform a simulation study to validate the reconstruction results. The article is organized as follows. Section 2 briefly introduces the gappy POD method and the algorithm for optimal sensor placement. The reconstruction results of the contaminant distribution are presented in Section 3. Finally, the main conclusions are presented in Section 4.

#### **2. Materials and Methods**

#### *2.1. Standard POD*

The spatiotemporal distribution of contaminants is characterized by complex nonlinear dynamics. The POD method provides a powerful tool to decompose the high-dimensional dynamics to basic POD modes for representation of the coherent structures for contaminant distribution. The snapshot method was introduced by Sirovich et al. for determination of the basic POD modes [46]. The snapshots X are formed by collecting the time-series data.

$$\mathbf{X} = \begin{bmatrix} \mathbf{x}(t\_1) & \mathbf{x}(t\_2) & \cdots & \mathbf{x}(t\_k) & \cdots & \mathbf{x}(t\_m) \end{bmatrix} \tag{1}$$

where *x*(*tk*) is a vector of n elements representing the spatial distribution of the contaminants at a time *tk*. The snapshot *x*(*tk*) could be obtained from CFD simulation results. The correlation matrix R of size m × m is defined by

$$\mathbf{R} = \frac{1}{m} \mathbf{X}^T \mathbf{X} \tag{2}$$

Then, the eigenvectors $\varphi = \left[ \varphi^1, \varphi^2, \cdots, \varphi^m \right]$ of the correlation matrix $\mathbf{R}$ and their corresponding eigenvalues $\lambda = \left[ \lambda_1, \lambda_2, \cdots, \lambda_m \right]$ are computed. The basic POD modes $\Phi$ are given by a linear combination of snapshots

$$\Phi^{j} = \frac{\mathbf{X}\varphi^{j}}{\|\mathbf{X}\varphi^{j}\|_2} \tag{3}$$

where $\varphi^{j}$ denotes the *j*-th eigenvector. The eigenvalues are ordered by the importance of their corresponding POD modes ($\lambda_1 > \lambda_2 > \lambda_3 > \cdots > \lambda_m$). A larger eigenvalue means that the corresponding basic POD mode plays a more important role in describing the contaminant distribution. By using the first *p* eigenvectors, the contaminant concentration can be reconstructed as

$$\mathbf{x} \approx \sum_{i=1}^{p} b_i \Phi^i \tag{4}$$

where $b_i$ are the temporal coefficients of the *i*-th basic POD mode, and $\mathbf{x}$ is the vector representing the estimated contaminant distribution in the enclosed space. For POD applications, the most important issue is the determination of the coefficients of the basic POD modes.
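As an illustration, the snapshot procedure in Equations (1)–(3) amounts to a few lines of linear algebra. The following NumPy sketch is ours, not the authors' code; the toy snapshot matrix is random rather than CFD data:

```python
import numpy as np

def snapshot_pod(X):
    """Method of snapshots (Eqs. (1)-(3)): POD modes of X, shape (n, m)."""
    m = X.shape[1]
    R = X.T @ X / m                          # Eq. (2): m x m correlation matrix
    lam, phi = np.linalg.eigh(R)             # eigenpairs of the symmetric R
    order = np.argsort(lam)[::-1]            # sort eigenvalues in descending order
    lam, phi = lam[order], phi[:, order]
    modes = X @ phi                          # Eq. (3): linear combination of snapshots...
    modes /= np.linalg.norm(modes, axis=0)   # ...normalized to unit 2-norm per mode
    return modes, lam

# toy snapshot matrix: 100 grid points, 30 time steps
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
modes, lam = snapshot_pod(X)
```

Each column of `modes` is one basic POD mode; truncating to the first `p` columns gives the reduced basis used in Equation (4).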

#### *2.2. Gappy POD*

Gappy POD provides an effective method for calculating the POD coefficients from sparse sensor observations. The mask vector $H$ of $n$ elements records the sensor locations: if a sensor is located at the *i*-th grid point, the *i*-th element of $H$ equals one; otherwise, it equals zero. The incomplete vector $\widetilde{\mathbf{x}}$ of $n$ elements holds the contaminant concentrations at the sensor locations, with the remaining elements missing. The full vector $\mathbf{x}$ of $n$ elements represents the reconstructed contaminant concentration. Gappy POD reconstructs the full vector $\mathbf{x}$ from the incomplete vector $\widetilde{\mathbf{x}}$. The POD coefficients are obtained by solving the linear regression in Equation (5).

$$\mathbf{M}\,\mathbf{b} = \mathbf{f} \tag{5}$$

where

$$\mathbf{M}_{ij} = \left( \left( H, \Phi^i \right), \left( H, \Phi^j \right) \right) \tag{6}$$

and

$$f_i = \left( \widetilde{\mathbf{x}}, \left( H, \Phi^i \right) \right) \tag{7}$$

The coefficient *b* could be obtained by solving Equation (5), and the spatial distribution of contaminants could be immediately reconstructed through Equation (4).
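The gappy reconstruction of Equations (4)–(7) thus reduces to assembling **M** and **f** from the mode values at the sensor points and solving a small linear system. A minimal NumPy sketch, with a random orthonormal matrix standing in for real POD modes (all names are ours):

```python
import numpy as np

def gappy_pod_reconstruct(x_sensors, sensor_idx, modes, p):
    """Reconstruct the full field from sparse sensor readings.

    x_sensors : measured values at the sensor grid points (incomplete vector)
    sensor_idx: grid indices where sensors sit (nonzero entries of the mask H)
    modes     : POD modes as columns, shape (n, >= p); the first p are used
    """
    Phi = modes[:, :p]
    Phi_s = Phi[sensor_idx, :]     # mode values at the sensor locations
    M = Phi_s.T @ Phi_s            # Eq. (6): masked inner products of the modes
    f = Phi_s.T @ x_sensors        # Eq. (7): masked inner product with the data
    b = np.linalg.solve(M, f)      # Eq. (5): solve M b = f
    return Phi @ b                 # Eq. (4): linear combination of the modes

# demo: a field lying exactly in the span of 3 modes, observed at 6 sensors
rng = np.random.default_rng(1)
modes, _ = np.linalg.qr(rng.standard_normal((200, 8)))
x_true = modes[:, :3] @ np.array([2.0, -1.0, 0.5])
sensor_idx = [5, 40, 77, 101, 150, 188]
x_rec = gappy_pod_reconstruct(x_true[sensor_idx], sensor_idx, modes, p=3)
```

Because the demo field lies exactly in the span of the first three modes, the reconstruction is exact up to round-off; for real fields the residual grows with the energy left in the truncated modes.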

#### *2.3. Sensor Placement*

According to Equation (5), the POD coefficients are obtained through data regression. The condition number of the matrix **M** is a good proxy for evaluating the solution accuracy of Equation (5): a smaller condition number yields higher solution stability. The optimal placement of the N sensors can therefore be determined by the following optimization:

$$\begin{array}{c} \min \kappa(\mathbf{M})\\ \text{s.t. } H(i) \in \{0,1\}, \ i = 1,2,\ldots,n\\ \sum_{i=1}^{n} H(i) = N \end{array} \tag{8}$$

where κ(*M*) is the condition number of M.

The sensor placement can be obtained with a greedy algorithm that minimizes κ(*M*) [37]. First, loop over all candidate placement points, evaluate the condition number of M for each point, and place the first sensor at the point that minimizes κ(*M*). Then, determine the location of the next sensor that minimizes κ(*M*) given the sensors already placed. Repeat these steps until all N sensors are placed. For sensors with placement constraints, the number of candidate points is smaller, and the only change is to loop only over the admissible locations.
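The greedy procedure described above can be sketched as follows; this is a naive implementation with our own naming, not the authors' code:

```python
import numpy as np

def greedy_sensor_placement(modes, p, N, candidates):
    """Place N sensors among candidate grid points by greedily
    minimizing the condition number kappa(M) of Eq. (8)."""
    Phi = modes[:, :p]
    chosen = []
    for _ in range(N):
        best, best_kappa = None, np.inf
        for c in candidates:
            if c in chosen:
                continue
            rows = Phi[chosen + [c], :]
            kappa = np.linalg.cond(rows.T @ rows)   # kappa(M) for this trial set
            if best is None or kappa < best_kappa:
                best, best_kappa = c, kappa
        chosen.append(best)                          # fix the best point, then repeat
    return chosen

# demo: 5 sensors among 60 admissible grid points, 3 dominant modes
rng = np.random.default_rng(2)
Phi_full, _ = np.linalg.qr(rng.standard_normal((60, 6)))
placement = greedy_sensor_placement(Phi_full, p=3, N=5, candidates=list(range(60)))
```

Restricting `candidates` to the admissible wall grid points implements the placement constraints directly; until more sensors than modes are placed, M is rank-deficient and κ(M) is effectively infinite, which is why the sensor count should exceed the mode count.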

#### *2.4. Numerical Model*

In the present study, a turbulence model based on the Reynolds-Averaged Navier–Stokes (RANS) equations is adopted. According to previous studies focusing on gaseous contaminant dispersion in enclosed spaces, the Eulerian method is able to provide an effective solution for simulating indoor concentration distributions [26,47]. Therefore, the Eulerian method is used to predict the contaminant distributions in this study. Moreover, it is assumed that no chemical reaction occurs during the gaseous contaminant dispersion process.

The experimental results given by Yuan et al. were utilized to validate the reliability of our numerical model [48]. The experiments were conducted in a test chamber of 5.16 m × 3.65 m × 2.43 m (length × width × height) under displacement ventilation. The total flow rate was 0.05 m³/s. The supply inlet was placed at the middle of the side wall near the floor, and the exhaust device of size 0.43 m × 0.43 m was fixed at the center of the ceiling. Two heated manikins, two computers, and six lamps were used to simulate heat sources. SF6 was used to simulate contaminants from occupants, and the contaminant source was placed above the simulated occupants.

Figure 1 gives the comparisons of the velocity and dimensionless SF6 concentration profiles between the numerical results and the experimental data. It can be observed that the agreement between the CFD results and the experimental data is reasonably good. Discrepancies between the simulated results and the experimental values appear in the upper part of the space. Similar discrepancies were also found in the simulation study by Yuan et al. [49], who conducted this experiment and attributed them to recirculating flows in the upper part of the space. Moreover, it should be noted that the uncertainties of the experimental data are 0.01 m/s for the velocity and 10% for the SF6 concentration [49].

**Figure 1.** Validation of the numerical model. (**a**)–(**e**) Validation of the airflow model. (**f**)–(**j**) Validation of the contaminant transport model.

#### *2.5. Model Setup*

In this study, the case is set in an enclosed space with dimensions of 4 m (L) × 3 m (W) × 2.5 m (H). Both the airflow inlet and outlet are located on the left wall and measure 0.6 m × 0.3 m. The inlet is located in the upper part of the left-side wall, 0.3 m beneath the ceiling, while the outlet is located in the lower part of the left-side wall, 0.3 m above the floor. A gaseous contaminant source is located at the center of the room. Detailed configurations of the ventilation system are shown in Figure 2. Moreover, the sensor locations are assumed to be constrained to the left and the back wall.

The reconstruction of the contaminant distribution is decomposed into two steps: the offline stage and online stage (Figure 3). In the offline stage, the snapshots are decomposed into basic POD modes, and the optimal sensor placement is determined. In the online stage, the temporal coefficients for POD modes are obtained based on the sparse sensor measurements through linear regression and then are applied for reconstruction of the contaminant distribution through a linear combination of the dominant POD modes. The POD-based online-offline algorithm has already been applied to many cases for optimal control of the dynamic systems, including optimal control of the water flooding reservoir [50] and optimal control of the indoor temperature [38,41]. The detailed procedure is described as follows:

**Figure 3.** Flow diagram illustrating the control process for indoor air quality management. POD: proper orthogonal decomposition.

(1) Collect a set of snapshots with a combination of different inlet velocities and contaminant source strengths. In case I, we first obtain the steady contaminant distribution with the CFD model for an inlet velocity of 1 m/s, and the gaseous contaminant is steadily released with a source strength of 1 mL per cubic meter per second. Using the steady contaminant distribution as the initial condition, the inlet velocity experiences a step change from 1 m/s to 2 m/s, and the source strength remains unchanged. The simulation results are recorded as snapshots until the next steady state is reached. In summary, the snapshots are recorded every 1 s, and 180 snapshots are obtained during the response process in case I. Case II follows a similar procedure; the only difference is that the gaseous contaminant is steadily released at twice the source strength of case I. A total of 320 snapshots are recorded for case II. Finally, a snapshot matrix combining the snapshots of both case I and case II is formed.

(2) Decompose the snapshots to obtain the POD modes; the optimal sensor placement can then be determined by minimizing the condition number in Equation (8). Moreover, it should be noted that the number of sensors should exceed the number of dominant POD modes, so that the sensor placement optimization is not an underdetermined problem.

(3) In the online stage, the temporal coefficients of the POD modes are determined based on the real-time measurements from the sparse sensors. The CFD simulations in the test case are used to provide sensor observations for implementing the gappy POD reconstruction and to validate the reconstruction results. In particular, it is noteworthy that gappy POD in the online step does not rely on any CFD simulation; the CFD simulation here is only used to perform a simulation study evaluating the gappy POD model. The test case is set with the contaminant steadily released at 1.5 times the source strength of case I. We first obtain the steady contaminant distribution with the CFD model for an inlet velocity of 1.2 m/s, and then the inlet velocity is subject to a 0.4 m/s positive step to 1.6 m/s. The gappy POD method can be applied for reconstruction of the contaminant distribution in this test case, because the airflow pattern in our test case shares similar coherent structures with the snapshots. Moreover, Sempey et al. confirmed that the POD method performs well in reconstructing system dynamics with boundary conditions different from the conditions used for constructing snapshots [42].

(4) Reconstruct the spatiotemporal distribution of the contaminants based on the basic POD modes and the corresponding temporal coefficients.

#### **3. Results**

#### *3.1. POD Decomposition*

The snapshots of the contaminant distributions are of great importance for characterizing the system dynamics, and the dominant POD modes are built based on the snapshot ensemble. In detail, the contaminant is released from the center of the enclosed space, and the concentration is highest around the contaminant source. The contaminant distribution evolves with time as a response to the step increase of the inlet velocity from 1 m/s to 2 m/s.

A total of 500 POD modes and 500 corresponding eigenvalues are obtained from the POD decomposition process. The exponential decay of the normalized eigenvalues is demonstrated in Figure 4. The normalized eigenvalues are calculated by dividing each eigenvalue by the sum of all 500 eigenvalues; they measure the percentage of energy contained in the corresponding POD modes and thus the importance of those modes. The normalized eigenvalues in Figure 4 are ordered in terms of their ability to describe the spatiotemporal distribution of the contaminants. It can be observed that the first few modes account for most of the system's energy. For example, the first, second, and third POD modes account for 66.63%, 17.83%, and 4.26% of the total system energy, respectively. In addition, the first six POD modes contain about 95% of the total system energy, while the first 16 POD modes contain about 99% of the total system energy. As the normalized eigenvalue decreases, the corresponding POD mode exhibits less meaningful spatial structures and contributes less to reconstructing the contaminant dispersion process.

**Figure 4.** The exponential decay of the normalized eigenvalue. The eigenvalue is normalized by dividing the sum of the 500 eigenvalues. The first 64 POD modes contain more than 99.9% of the total system energy.
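The truncation rule above (keep the leading modes until a target fraction of the summed eigenvalues is reached) can be expressed compactly. A small sketch with an assumed synthetic spectrum, not the paper's eigenvalues:

```python
import numpy as np

def modes_for_energy(eigenvalues, target=0.99):
    """Smallest number of leading POD modes whose normalized eigenvalues
    (energy fractions) accumulate to at least `target`."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    cum_energy = np.cumsum(lam) / lam.sum()   # cumulative energy fractions
    return int(np.searchsorted(cum_energy, target) + 1)

# synthetic spectrum decaying by a factor of 2 per mode
lam = 0.5 ** np.arange(20)
p99 = modes_for_energy(lam, target=0.99)
```

Applied to the paper's spectrum, this rule would return 6 modes for a 95% target and 16 modes for a 99% target, matching the figures quoted in the text.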

#### *3.2. Sensor Placement*

It is challenging to optimize the sensor placement to maximize the reconstruction accuracy, since there are thousands of potential locations for sensor placement. A quantitative framework is used to determine the sensor locations by minimizing the condition number of the M matrix in Equation (8). The sensor locations are assumed to be constrained to the left and the back wall. For example, the optimal placement of 20 sensors with location constraints is demonstrated in Figure 5. A potential sensor location can be any of the approximately 3000 grid points on the left and the back wall. The optimization strategy is to select, via the greedy algorithm, the 20 points among the 3000 potential locations that minimize the condition number of M in Equation (8). It should be noted that increasing the number of POD modes or the number of sensors would both raise the computational cost of finding the optimal sensor locations. However, the computation of the optimal sensor placement occurs in the offline stage and does not affect the speed of the real-time contaminant reconstruction in the online stage. Moreover, it can be observed that the sensors with location constraints are always kept away from the contaminant source. This protects the sensors from long-term exposure to high-concentration contaminants and can extend their service lives.

**Figure 5.** Optimal sensor placements: (**a**): isometric view and (**b**): top view.

#### *3.3. Gappy Flow Reconstruction*

Quantifying the indoor air quality plays an important role in analyzing indoor occupant exposure [51]. Figure 6 demonstrates the estimation of the contaminant distribution based on 16 dominant POD modes with sensor placement constraints. It should be noted that the reconstruction accuracy is significantly limited by the number of dominant POD modes chosen for estimation of the contaminant distribution, and the first 16 modes contain more than 99% of the total system energy. As shown in Figure 6, the gappy POD method exhibits a high reconstruction accuracy and performs extremely well in estimating contaminant concentrations higher than 0.02 ppm. This is because 16 POD modes capture a sufficiently detailed structure for reconstruction of the contaminant dispersion process. Moreover, a difference between the gappy POD reconstruction and the CFD simulation can be observed at t = 20 s, because it is difficult to reconstruct the contaminant distribution during this period of dramatic variations in the airflow field.

**Figure 6.** Comparison of the estimated contaminant distributions (ppm) at *x* = 1.5 m by the POD method and by CFD simulation at time 1s, 20 s, 40 s, and 60 s. (**a**)–(**d**): POD reconstruction; (**e**)–(**h**): CFD simulations.

For further evaluation of the gappy POD method, a comparison of the estimated contaminant concentrations along the line *x* = 1.5 m, *y* = 1 m is conducted (Figure 7). The difference between the POD reconstruction and the CFD simulation can be observed at *t* = 20 s due to the airflow pattern differing slightly from the snapshots, which can also be observed in Figure 6. However, the gappy POD performs well in most conditions, because the airflow pattern in our test case shares similar coherent structures with the snapshots in most conditions. Moreover, Sempey et al. confirmed that the POD method performs well in reconstructing system dynamics with boundary conditions different from the conditions used for constructing snapshots [42].

**Figure 7.** Comparison of the estimated contaminant concentrations (ppm) along the line *x* = 1.5 m, *y* = 1 m at time (**a**): *t* = 1 s; (**b**): *t* = 20 s; (**c**): *t* = 40 s; (**d**): *t* = 60 s.

The reconstruction accuracy could be improved by increasing the number of dominant POD modes, because more modes contribute more detailed information for describing the pollutant distribution. In our study, we found that 16 dominant POD modes are sufficient for estimating the contaminant distribution in the enclosed space with sensor placement constraints.

#### **4. Discussion and Conclusions**

Real-time estimation of the contaminant distribution is essential for ventilation control and indoor air quality management. Moreover, it is known that automatic control is of great importance for improving a system's energy efficiency in engineering applications [52–55]. However, it is challenging to reconstruct the spatiotemporal distribution of contaminants in an enclosed space, due to the nonuniform distribution and nonlinear variations of indoor contaminants. As an essential part of active control, sensor measurements only provide limited information about contaminant concentrations near the sensor locations. Usually, a large number of sensors are required to guarantee satisfactory HVAC control performance, which might result in high costs for building and maintaining a sensor network in practice. Facing such a dilemma, gappy POD offers a solution that provides real-time contaminant distributions with a limited number of sensor measurements.

In this study, a gappy POD method for the real-time reconstruction of the pollutant distribution in an enclosed space is proposed by combining the POD method with a limited number of sensor measurements. In fact, the spatial distribution of indoor contaminants can often be represented by a combination of a few dominant patterns, and this inherent property enables reconstruction of the contaminant distribution from sparse sensor observations. Moreover, our study gives the optimal sensor placement based on a quantitative metric in order to maximize the reconstruction accuracy; sensor placement constraints are also considered during the design process. It should be noted that the reconstruction accuracy depends strongly on the number of dominant POD modes chosen for estimating the pollutant distribution. For example, the first six POD modes contain only about 95% of the total system energy, while the first 16 POD modes contain about 99%. According to our study, reconstruction based on 16 POD modes is sufficient for accurate reconstruction of the pollutant distribution in the enclosed space.
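The reconstruction step described above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic one-dimensional data, not the paper's CFD fields or its optimized sensor placement: POD modes are extracted from snapshots by SVD, and the mode coefficients of a new field are fitted by least squares to sparse sensor samples.

```python
# Minimal gappy-POD sketch on synthetic 1-D data (illustrative only;
# the paper uses CFD snapshots and optimized sensor locations).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)

# Snapshot matrix: 50 fields drawn from a smooth one-parameter family.
snapshots = np.array([np.sin(2 * np.pi * f * x) + 0.3 * np.cos(np.pi * f * x)
                      for f in rng.uniform(1, 3, 50)]).T   # shape (200, 50)

# Dominant POD modes via SVD of the snapshot matrix.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :12]                                 # first 12 spatial modes

# A new field from the same family, observed only at sparse "sensors".
field = np.sin(2 * np.pi * 1.7 * x) + 0.3 * np.cos(np.pi * 1.7 * x)
sensors = np.linspace(0, 199, 30).astype(int)   # 30 evenly spaced sensors

# Gappy step: least-squares fit of the mode coefficients to the sensor
# readings, then reconstruction of the full field from the modes.
coeffs, *_ = np.linalg.lstsq(Phi[sensors], field[sensors], rcond=None)
recon = Phi @ coeffs

rel_err = np.linalg.norm(recon - field) / np.linalg.norm(field)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because the new field shares the coherent structures of the snapshots, a handful of modes fitted at a few locations recovers the full field; this is the same mechanism that makes the method degrade when the flow pattern departs from the snapshot set.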

For feedback control of the HVAC system, gappy POD is able to provide a reliable estimation of three-dimensional indoor environment data with high fidelity and low computational cost. Considering the low-order nature of the POD model, closed-loop control might be achieved by controlling the coefficients of the POD modes, and the estimated POD coefficients could provide the information needed to drive the HVAC actuators. Further work will include real-time control of the dynamic system based on the gappy POD method.

**Author Contributions:** Y.L.: conceptualization, methodology, and writing—original draft preparation. Z.T.: methodology, project administration and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China (51708493); Zhejiang Provincial Natural Science Foundation (LR19E050002); the National Key R&D Program of China (grant no. 2019YFB2004604); the Zhejiang Province Key Science and Technology Project (2018C01020, 2018C01060, and 2019C01057); and the Youth Funds of State Key Laboratory of Fluid Power & Mechatronic Systems (SKLoFP\_QN\_1804).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Review* **Energy Efficiency Evaluation Based on Data Envelopment Analysis: A Literature Review**

#### **Tao Xu, Jianxin You, Hui Li and Luning Shao \***

School of Economics and Management, Tongji University, Shanghai 200092, China; xutao1007@tongji.edu.cn (T.X.); yjx2256@tongji.edu.cn (J.Y.); lihui@mail.tongji.edu.cn (H.L.)

**\*** Correspondence: shaoluning@tongji.edu.cn

Received: 7 June 2020; Accepted: 7 July 2020; Published: 10 July 2020

**Abstract:** The importance and urgency of energy efficiency in sustainable development are increasing, and its accurate assessment is of considerable significance and necessity. The data envelopment analysis (DEA) method has been widely used to study energy efficiency as a total-factor efficiency assessment method. In order to summarize the latest research on DEA in the field of energy efficiency, this article first analyzes the overall landscape of related literature published in 2011–2019. Subsequently, the definition and measurement of energy efficiency and the evaluation variables are introduced. After that, the article reviews current DEA models, their extensions, and their applications in different scenarios. Finally, considering the shortcomings of existing DEA models, possible future research topics are proposed.

**Keywords:** energy efficiency; data envelopment analysis; literature review; future research

#### **1. Introduction**

Energy efficiency is a major global issue that plays an essential role in achieving sustainable development. Although the use of clean energy is gradually increasing, about 80% of global energy consumption still comes from fossil fuels, such as oil and natural gas, and about 50% of power generation depends on coal resources [1]. As a result, the public, researchers and governments are paying more attention to this issue. Evaluating the energy efficiency of different regions and sectors is of considerable significance: it not only helps identify differences in energy efficiency but also provides a quantitative basis for improving it [2]. Patterson [3] first proposed the concept of energy efficiency, defining it as using fewer resources to produce the same output, and gave four indicators for measuring it. According to this definition, the indicators that measure energy efficiency can be divided into economic energy efficiency and physical energy efficiency.

To the best of our knowledge, the energy efficiency measured by different definitions and indicators varies widely. In order to measure energy efficiency more accurately, many scholars have studied its measurement. Among them, Hu and Wang [4] proposed the widely recognized concept of total-factor energy efficiency (TFEE). The TFEE framework holds that a single energy input cannot produce any output: energy must be combined with other factors (such as labor and capital) to produce output. Based on the TFEE framework, energy efficiency is defined as the ratio of the target energy input to the actual energy input required at a particular output level. The proposal of TFEE effectively makes up for the shortcomings of traditional single-factor energy efficiency evaluation and has strongly inspired subsequent research.

Early TFEE assessment methods consider that inputs of production factors such as labor, capital, and energy ultimately yield a single output, generally expressed as gross domestic product (GDP). As fossil fuels dominate the world's energy consumption structure, environmental pollution is becoming increasingly severe, and a large number of studies have therefore gradually incorporated ecological issues into energy efficiency evaluation [5]. This also means that energy efficiency is a significant issue related to the coordinated development of the economy, energy and the environment.

With regard to energy efficiency measurement methods, there are parametric and non-parametric methods. The parametric stochastic frontier analysis (SFA) requires an assumed production function [6] and may suffer from multicollinearity [7]. The non-parametric DEA method can better handle the efficiency evaluation of decision-making units in complicated situations with multiple inputs and outputs, and has been widely used to evaluate TFEE. DEA was first proposed in 1978 as a mathematical programming method for determining the relative effectiveness of homogeneous decision-making units (DMUs) [8]. Zhu, et al. [9] pointed out that DEA is a data-oriented method for evaluating the efficiency of a set of homogeneous DMUs. Compared with previous efficiency evaluation methods, DEA does not need a pre-specified production function, which allows it to better handle the efficiency of DMUs.

In the existing research, a large number of studies have been conducted from both theoretical and applied perspectives based on data from countries, regions, industries and enterprises. In order to sort out the latest research results of DEA in the field of energy efficiency, this article reviews the different DEA models used in energy efficiency evaluation, including basic DEA models and their extensions in different scenarios. Future research directions for DEA-based energy efficiency evaluation methods are also proposed.

In the rest of this article, Section 2 presents an overview of the published literature on DEA-based energy efficiency. Section 3 introduces the definition of energy efficiency and the input and output variables. Sections 4 and 5 review DEA-based energy efficiency assessment models and applications, respectively. Conclusions and future research directions are given in Section 6.

To keep the descriptions clear, all acronyms used in this paper are listed in the nomenclature.

#### **2. DEA-Based Energy Efficiency Publications**

This section first analyzes the publication years, journals and authors. Subsequently, through a visual analysis of the keywords and their evolution, the basic landscape of research on DEA-based energy efficiency evaluation is outlined.

#### *2.1. Number of Publications, Journal Distribution and Authors*

In order to analyze the latest research in the field of DEA energy efficiency evaluation, this article searches for relevant articles in the Web of Science core database. The search targeted literature with the topic "energy efficiency" and titles containing "DEA", published in 2011–2019. From the search results, the "article" document type was selected, yielding 281 articles.

Based on the retrieved papers, this article first analyzes the journal distribution of the literature. Journals with more than six published articles are shown in Figure 1. Among them, the journal with the most published articles is *Energy Economics*, with 26 articles. Besides, the numbers of articles published in *Sustainability* and *Energy* each exceeded 20, and *Energies* and *Journal of Cleaner Production* have also published more than 15 articles each.


**Figure 1.** Journal distribution and number of publications.

Figure 2 shows the number of papers published in the field of DEA energy efficiency from 2011 to 2019. As can be seen, the number of papers published has shown a clear upward trend since 2011, reaching 61 articles in 2019. This indicates that the issue of energy efficiency is receiving increasing attention from researchers.

**Figure 2.** Number of publications from 2011 to 2019.

#### *2.2. Keyword Evolution Analysis*

In order to further analyze the research progress of DEA in the field of energy efficiency, this article uses CiteSpace to visualize the keywords and their evolution between 2010 and 2019. The context of keyword evolution is shown in Figure 3.

It can be found from Figure 3 that related research is mainly carried out in terms of theory and applications. Theoretical analysis primarily improves DEA-based evaluation models so as to evaluate energy efficiency better.

As research has deepened alongside the development of the DEA model, the construction of DEA-based energy efficiency evaluation models has become more consistent with real production situations. These models have evolved from traditional radial models (CCR, BCC) to non-radial SBM models, from a single output to models considering undesirable output, and from simple static structures to dynamic models with complex network structures. Besides, as Figure 3 shows, the primary research objects of the existing literature are regional energy efficiency (country, province, and city), industrial energy efficiency (manufacturing and agriculture), and company energy efficiency (power plants). The theory and applications of DEA in energy efficiency are discussed in detail in the following sections.

*Energies* **2020**, *13*, 3548

**Figure 3.** The context of keyword evolution.

#### **3. Energy Efficiency Definition and Input–Output Variables**

#### *3.1. Energy Efficiency Definition*

As traditional energy efficiency measurement methods ignore other inputs, the concept of TFEE proposed by Hu and Wang [4] has been widely accepted. In the TFEE framework, energy itself cannot produce any output and must be combined with other factors to produce output. The TFEE index incorporates energy, labor, and capital into the input system to generate economic output. Energy efficiency is defined as the ratio of target energy input to actual energy input, as shown below.

$$0 \le \frac{Target\ Energy\ Input}{Actual\ Energy\ Input} \le 1$$

#### *3.2. Input–Output Variables*

Selecting appropriate input and output variables is an important step in evaluating TFEE with the DEA model. Although there have been many studies on energy efficiency analysis, there is still no unified standard for selecting input and output variables. In conventional energy efficiency measures, energy is used as a single input to generate GDP [3]. Hu and Wang [4] first introduced labor and capital as inputs into the energy efficiency evaluation system, to evaluate the energy efficiency of 29 provinces and cities in China. Adopting the same variables, Honma and Hu [10] calculated the energy efficiency of Japanese regions from 1993 to 2003, and Zhang, et al. [11] evaluated the energy efficiency of 27 developing countries.

In production activities, as the energy consumption structure is still dominated by fossil energy, the large quantities of carbon emissions, wastewater, and waste gas generated by traditional energy inputs have a serious impact on the environment. In this context, carbon emissions, waste gas, etc., are included as undesirable outputs when evaluating energy efficiency [12]. For example, Zhang, Sun and Huang [5] used carbon emissions as an undesirable output and GDP as a desirable output when evaluating the energy efficiency of CDM member states. Li and Lin [13], Zhang and Choi [14], and Wang, et al. [15] also used GDP and carbon emissions as output variables when evaluating the energy efficiency of 30 provinces and cities in China. Makridou, et al. [16] and Feng and Wang [17] likewise used these variables to evaluate the efficiency of energy-intensive industries in EU countries and of Chinese provincial-level industrial sectors.

Through the above analysis, it can be seen that a common framework is used in DEA-based TFEE evaluation. Without loss of generality, energy, capital and labor are used as inputs; the main reason for including labor and capital is that energy itself cannot produce output without other factors [4]. Regarding output variables, GDP is generally used as the expected output and, considering the impact of energy consumption on the environment, carbon emissions are typically used as the undesirable output.

The variables also vary with the subject being evaluated. For example, when evaluating the energy efficiency of industries or enterprises, the input–output variables should be consistent with their actual production processes. Wu, et al. [18] selected passenger seats, transportation energy consumption, fixed assets, and transportation mileage as inputs, and passenger turnover and carbon emissions as outputs, to evaluate the energy efficiency of China's transportation sector. Zhang and Choi [14] used the amount of power generation as the expected output when assessing the energy efficiency of 252 power plants in China.

#### **4. Construction of DEA-Based Models in Energy Efficiency Evaluation**

Since DEA was proposed in 1978, it has been widely used in the efficiency evaluation of multiple-input–multiple-output problems. Research has since continued to extend the DEA model to different theoretical and practical settings, and many models have been proposed, including radial and non-radial, static and dynamic, and single-structure and network-structure models [19]. In the field of energy efficiency assessment, because of undesirable carbon emissions, modeling energy efficiency with undesirable outputs has also attracted the attention of researchers. This section reviews energy efficiency methods from the perspective of basic and extended DEA models.

#### *4.1. The Theoretical Basis of DEA*

Data envelopment analysis (DEA) is a data-oriented method for estimating the relative efficiency of homogeneous decision-making units (DMUs) [19]. Based on the objective data of the evaluated objects, DEA obtains a set of optimal input and output weights through optimization and determines the efficiency level of a DMU as a ratio of outputs to inputs [8]. The basic logic of the DEA method is to construct an efficient production frontier from convex combinations of a set of homogeneous DMUs based on input–output data. By comparing the actual input–output data of a DMU with its projection onto the frontier, the DMU is evaluated by relative efficiency [20,21]. Since the DEA method was proposed, it has been widely used in different fields [9,19], and many scholars have proposed new concepts and models, such as the cross-efficiency model, the super-efficiency model, the SBM model, and network structure models [22–24].

Charnes, Cooper and Rhodes [8] first proposed the DEA method. In the later DEA literature, the first DEA model they created was named the CCR model, after the initials of their three surnames. The CCR model assumes constant returns to scale (CRS). In the DEA method, the evaluated objects are represented by DMUs, and each DMU produces R outputs from M inputs. In model (1), *xmo* and *yro* represent the *m*-th input and *r*-th output of the evaluated DMU, respectively, and *ur*, *vm* are the weights of the corresponding output and input variables. The input–output efficiency of the *o*-th DMU can be obtained from model (1). Through the weighting of input and output variables, the CCR model can be understood as turning a multi-input–multi-output problem into a virtual single-input–single-output one; for a specific DMU, efficiency is measured by the ratio of virtual output to virtual input. The CCR model can thus be expressed as maximizing the efficiency of a specific DMU under the condition that the efficiency of every DMU does not exceed 1.

$$\begin{array}{c} \max \dfrac{\sum\_{r=1}^{R} u\_r y\_{ro}}{\sum\_{m=1}^{M} v\_m x\_{mo}} \\ \text{s.t. } \dfrac{\sum\_{r=1}^{R} u\_r y\_{rn}}{\sum\_{m=1}^{M} v\_m x\_{mn}} \le 1, \quad n = 1, 2, \dots, N \\ u\_r \ge 0; \; v\_m \ge 0; \; m = 1, 2, \dots, M; \; r = 1, 2, \dots, R \end{array} \tag{1}$$

Since model (1) is a nonlinear (fractional) programming model, let $t = 1/\sum\_{m=1}^{M} v\_m x\_{mo}$, $\mu = tu$, $\nu = tv$; model (1) can then be transformed into an equivalent linear programming model, as shown in model (2). Model (2) is called the DEA multiplier form, and its dual, model (3), is known as the envelope form. The CCR model assumes that the returns to scale are unchanged, and the technical efficiency obtained includes the scale efficiency component, so it is called comprehensive technical efficiency [8]. According to the way efficiency is measured, DEA models can also be divided into input-oriented, output-oriented, and non-oriented models.

$$\begin{aligned} \max \; & \sum\_{r=1}^{R} \mu\_r y\_{ro} \\ \text{s.t. } & \sum\_{r=1}^{R} \mu\_r y\_{rn} - \sum\_{m=1}^{M} \nu\_m x\_{mn} \le 0, \quad n = 1, 2, \dots, N \\ & \sum\_{m=1}^{M} \nu\_m x\_{mo} = 1 \\ & \mu\_r \ge 0; \; \nu\_m \ge 0; \; m = 1, 2, \dots, M; \; r = 1, 2, \dots, R \end{aligned} \tag{2}$$

$$\begin{aligned} \min \; & \theta \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n x\_{mn} \le \theta x\_{mo}, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n y\_{rn} \ge y\_{ro}, \quad r = 1, 2, \dots, R \\ & \lambda\_n \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{3}$$
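As a concrete illustration, the input-oriented envelope form (3) is an ordinary linear program and can be solved with an off-the-shelf LP solver. The sketch below uses `scipy.optimize.linprog` on small made-up data (four DMUs, two inputs, one output); the data and the function name `ccr_efficiency` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (hypothetical): 4 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0],
              [4.0, 1.0],
              [3.0, 3.0],
              [5.0, 4.0]])   # x_mn: row n = DMU, column m = input
Y = np.array([[1.0],
              [1.0],
              [1.5],
              [2.0]])        # y_rn: row n = DMU, column r = output

def ccr_efficiency(o):
    """Input-oriented CCR score of DMU o; decision vector [theta, lam_1..lam_N]."""
    N, M = X.shape
    R = Y.shape[1]
    c = np.zeros(1 + N)
    c[0] = 1.0                                    # minimize theta
    # sum_n lam_n x_mn - theta x_mo <= 0   (one row per input m)
    A_in = np.hstack([-X[o].reshape(M, 1), X.T])
    # -sum_n lam_n y_rn <= -y_ro           (one row per output r)
    A_out = np.hstack([np.zeros((R, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(M), -Y[o]]),
                  bounds=[(0, None)] * (1 + N))
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: theta = {ccr_efficiency(o):.3f}")
```

An efficient DMU returns θ = 1, an inefficient one θ < 1; at least one DMU always lies on the CRS frontier.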

As mentioned above, many scholars have proposed new concepts and models within the DEA method, and many of these novel methods are used to evaluate energy efficiency. In this paper, basic and extended DEA models for energy efficiency assessment are summarized. For a clearer overview, the main features of these models and their application scenarios are presented in Table 1.

**Table 1.** The main features and application scenario of DEA.




#### *4.2. Energy Efficiency Evaluation Model Based on Basic DEA*

#### 4.2.1. CCR-Based Evaluation Model

Based on the input-oriented CCR model, Hu and Wang [4] proposed a TFEE evaluation model, as shown in model (4). It is again assumed that there are N DMUs, and each DMU uses energy and M other production factors as inputs to produce R outputs. The energy input of the *o*-th DMU is denoted by *eo*, while *xmo* and *yro* represent the non-energy inputs and expected outputs of the DMU, respectively.

$$\begin{aligned} \min \; & \theta \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n e\_n \le \theta e\_o \\ & \sum\_{n=1}^{N} \lambda\_n x\_{mn} \le \theta x\_{mo}, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n y\_{rn} \ge y\_{ro}, \quad r = 1, 2, \dots, R \\ & \lambda\_n \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{4}$$

The optimal solution of model (4) represents the efficiency of the *o*-th DMU. However, the model cannot directly yield the target energy input. Thus, Ali and Seiford [25] proposed a two-stage method by which the slack variables of the inputs and outputs can be obtained. The slack variable of the energy input is denoted by $s\_e^-$. The TFEE based on the CCR model can then be calculated by formula (5).

$$\text{CCR-based} \quad TFEE = \frac{\theta \times e\_o - s\_e^-}{e\_o} = 1 - \frac{(1 - \theta) \times e\_o + s\_e^-}{e\_o} \tag{5}$$
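As a quick numerical check with made-up values, the two forms of formula (5) agree:

```python
# Hypothetical values: efficiency theta, energy input e_o, energy slack s_e.
theta, e_o, s_e = 0.85, 100.0, 5.0

tfee = (theta * e_o - s_e) / e_o            # first form of formula (5)
alt = 1 - ((1 - theta) * e_o + s_e) / e_o   # second form of formula (5)

print(f"{tfee:.2f}")  # 0.80: target energy is 80% of the actual energy input
```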

#### 4.2.2. BCC-Based Evaluation Model

The CCR model assumes constant returns to scale, but in fact not all DMUs operate at the optimal production scale, so the efficiency calculated by the CCR model includes scale efficiency. Subsequently, Banker, Charnes and Cooper [21] proposed a DEA model that considers variable returns to scale, known as the BCC model. The technical efficiency derived under variable returns to scale excludes the effect of scale, so it is called pure technical efficiency. The BCC model adds the constraint $\sum\_{n=1}^{N} \lambda\_n = 1$ to the CCR model, where λ*n* is the linear combination coefficient of the input–output variables. The BCC-based energy efficiency evaluation model is shown in model (6); the meaning of the variables in model (6) is consistent with their counterparts in model (4).

$$\begin{aligned} \min \; & \theta \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n e\_n \le \theta e\_o \\ & \sum\_{n=1}^{N} \lambda\_n x\_{mn} \le \theta x\_{mo}, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n y\_{rn} \ge y\_{ro}, \quad r = 1, 2, \dots, R \\ & \sum\_{n=1}^{N} \lambda\_n = 1 \\ & \lambda\_n \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{6}$$

$$\text{BCC-based} \quad TFEE = \frac{\theta \times e\_o - s\_e^-}{e\_o} = 1 - \frac{(1 - \theta) \times e\_o + s\_e^-}{e\_o}$$
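To see the effect of the added convexity constraint, the following sketch (again with made-up single-input data and `scipy.optimize.linprog`; names and numbers are illustrative) computes both CCR and BCC efficiencies. Because the variable-returns-to-scale frontier lies closer to the data, BCC scores are never below the corresponding CCR scores.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (hypothetical): 4 DMUs, one input, one output.
x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 4.0, 4.5])

def efficiency(o, vrs=False):
    """Input-oriented score of DMU o; vrs=True adds the BCC convexity row."""
    N = len(x)
    c = np.zeros(1 + N)
    c[0] = 1.0                                   # minimize theta
    A_ub = np.vstack([np.r_[-x[o], x],           # sum lam_n x_n <= theta x_o
                      np.r_[0.0, -y]])           # sum lam_n y_n >= y_o
    b_ub = np.array([0.0, -y[o]])
    A_eq = b_eq = None
    if vrs:                                      # BCC: sum_n lam_n = 1
        A_eq = np.r_[0.0, np.ones(N)].reshape(1, -1)
        b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + N))
    return res.x[0]

for o in range(4):
    print(f"DMU {o}: CCR = {efficiency(o):.3f}, BCC = {efficiency(o, vrs=True):.3f}")
```

The gap between the two scores of a DMU is exactly the scale-efficiency component that the BCC model strips out.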

4.2.3. SBM-Based Evaluation Model

The CCR and BCC models improve an inefficient DMU by reducing all inputs (or increasing all outputs) proportionally, so they are called radial models. In fact, for an inefficient DMU, the gap between the current state and the effective target also includes slack improvements. Therefore, Tone [26] proposed the Slack-Based Model (SBM), a non-oriented efficiency evaluation model that handles the slack improvements of the inputs and outputs. The SBM has a higher discriminating ability than the CCR and BCC models. The energy efficiency evaluation model based on the non-oriented SBM is shown in model (7).

In model (7), the optimal value of the objective function is the efficiency value of the non-oriented SBM. $s\_e^-$ represents the slack variable of the energy input, while $s\_m^-$ and $s\_r^+$ represent the slack variables of the *m*-th input and the *r*-th output, respectively. The remaining variables are consistent with those in model (6). According to the Charnes–Cooper transformation, the above nonlinear SBM can be transformed into a linear model for solving.

$$\begin{aligned} \min \; & \frac{1 - \frac{1}{M+1} \left( \sum\_{m=1}^{M} \frac{s\_m^-}{x\_{mo}} + \frac{s\_e^-}{e\_o} \right)}{1 + \frac{1}{R} \sum\_{r=1}^{R} \frac{s\_r^+}{y\_{ro}}} \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n e\_n = e\_o - s\_e^- \\ & \sum\_{n=1}^{N} \lambda\_n x\_{mn} = x\_{mo} - s\_m^-, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n y\_{rn} = y\_{ro} + s\_r^+, \quad r = 1, 2, \dots, R \\ & \lambda\_n \ge 0, \; s\_e^- \ge 0, \; s\_m^- \ge 0, \; s\_r^+ \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{7}$$
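For completeness, the Charnes–Cooper linearization mentioned above can be written out explicitly (a standard transformation, stated here in the notation of model (7)). Introduce a scalar $t > 0$, rescale the variables as $\Lambda\_n = t\lambda\_n$, $S\_e^- = ts\_e^-$, $S\_m^- = ts\_m^-$, $S\_r^+ = ts\_r^+$, and normalize the denominator to one:

$$\begin{aligned} \min \; & t - \frac{1}{M+1} \left( \sum\_{m=1}^{M} \frac{S\_m^-}{x\_{mo}} + \frac{S\_e^-}{e\_o} \right) \\ \text{s.t. } & t + \frac{1}{R} \sum\_{r=1}^{R} \frac{S\_r^+}{y\_{ro}} = 1 \\ & \sum\_{n=1}^{N} \Lambda\_n e\_n = t e\_o - S\_e^- \\ & \sum\_{n=1}^{N} \Lambda\_n x\_{mn} = t x\_{mo} - S\_m^-, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \Lambda\_n y\_{rn} = t y\_{ro} + S\_r^+, \quad r = 1, 2, \dots, R \\ & \Lambda\_n \ge 0, \; S\_e^- \ge 0, \; S\_m^- \ge 0, \; S\_r^+ \ge 0, \; t > 0 \end{aligned}$$

The optimal objective value equals the SBM efficiency, and the original variables are recovered as $\lambda\_n = \Lambda\_n / t$, and likewise for the slacks.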

#### *4.3. Energy Efficiency Evaluation Model Based on Extended DEA*

#### 4.3.1. Evaluation Model Considering the Impact of Carbon Emissions

In recent years, the use of clean energy has gradually increased, but the energy consumption structure, dominated by fossil energy, has not changed in the short term. The large quantities of carbon emissions, wastewater and waste gas caused by the use of traditional energy sources have a serious impact on the environment. Therefore, it is necessary to consider the impact of carbon and wastewater emissions when evaluating energy efficiency [12]. For example, He, et al. [27] found that undesirable output has a great impact on energy efficiency when studying the energy efficiency of OECD countries; disregarding undesirable output often leads to an overestimation of energy efficiency.

Regarding the treatment of undesirable output in the DEA method, it can be modeled with strong or weak disposability [28]. Under strong disposability, undesirable output can be reduced without reducing the expected output [29]; in this case, the undesirable output can be treated as an input [30,31], or its data can be transformed, e.g., by linear, inverse or exponential transformations [32–34]. Under the weak disposability assumption, reducing undesirable output requires additional input or a reduction of the expected output [35,36]; that is, the reduction in undesirable output comes at the cost of expected output. There is also the null-jointness hypothesis for undesirable output, which states that as long as there is expected output in production, it must be accompanied by undesirable output; the only way to avoid undesirable output is to stop production.
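The data transforms used under the strong disposability treatment can be illustrated with made-up emission values (a sketch; the transforms are generic monotone-decreasing maps, not tied to any single cited study):

```python
import numpy as np

# An undesirable output vector, e.g., CO2 emissions of three DMUs.
b = np.array([10.0, 25.0, 40.0])

linear = b.max() + 1.0 - b           # linear (translation) transform
inverse = 1.0 / b                    # inverse transform
exponential = np.exp(-b / b.mean())  # one possible exponential transform

# All three are decreasing in b, so "less emission" maps to "more output",
# letting a standard DEA model treat the transformed values as desirable.
print(linear, inverse, exponential)
```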

In the existing literature, the weak disposability assumption is widely adopted, as the null-jointness setting is more in line with reality; that is, the input of energy, labor and capital produces greenhouse gas emissions while generating economic benefits. Following Apergis, et al. [37], in the SBM-DEA model considering undesirable output it is assumed that there are *N* DMUs, and each DMU uses energy and *M* other production factors to produce *R* expected outputs and *Q* undesirable outputs. *uqo* represents the *q*-th undesirable output of the *o*-th DMU, and the remaining variables are consistent with model (7). The SBM-DEA model based on weak disposability is shown in model (8). In the constraint of model (8) on the undesirable output, the factor $\left( 1 + \frac{1}{R} \sum\_{r=1}^{R} \frac{s\_r^+}{y\_{ro}} \right)$ indicates that the expected output and the undesirable output change in the same proportion.

$$\begin{aligned} \min \; & \frac{1 - \frac{1}{M+1} \left( \sum\_{m=1}^{M} \frac{s\_m^-}{x\_{mo}} + \frac{s\_e^-}{e\_o} \right)}{1 + \frac{1}{R+Q} \left( \sum\_{r=1}^{R} \frac{s\_r^+}{y\_{ro}} + \sum\_{q=1}^{Q} \frac{s\_q^-}{u\_{qo}} \right)} \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n e\_n = e\_o - s\_e^- \\ & \sum\_{n=1}^{N} \lambda\_n x\_{mn} = x\_{mo} - s\_m^-, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n y\_{rn} = y\_{ro} + s\_r^+, \quad r = 1, 2, \dots, R \\ & \sum\_{n=1}^{N} \lambda\_n u\_{qn} = \left( 1 + \frac{1}{R} \sum\_{r=1}^{R} \frac{s\_r^+}{y\_{ro}} \right) u\_{qo} - s\_q^-, \quad q = 1, 2, \dots, Q \\ & \lambda\_n \ge 0, \; s\_e^- \ge 0, \; s\_m^- \ge 0, \; s\_r^+ \ge 0, \; s\_q^- \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{8}$$

4.3.2. Evaluation Model Considering the Network Structure

The basic DEA method regards the production system as a "black box", meaning that only the initial inputs and final outputs of the system are considered when evaluating the efficiency of a decision-making unit [22]. However, the energy system is a complex process; in other words, the "black box" assumption ignores the process of energy conversion or transmission. Examples include the conversion of primary energy (such as coal, oil, and natural gas) to secondary energy (such as coke, liquefied petroleum gas, and thermal power) in industrial systems, as well as the chain of power generation, distribution and transmission, and end-user consumption for power generation companies. The energy efficiency obtained under the "black box" assumption cannot provide specific guidance for the energy production and energy utilization sectors to improve energy efficiency separately [38].

In order to open the "black box" and evaluate the process efficiency of DMUs, Färe and Grosskopf [22] first proposed a network DEA model considering intermediate output. Since Seiford and Zhu [39] first applied two-stage DEA to US commercial banks, network DEA has been widely used for efficiency evaluation in various industrial and commercial sectors. Later, Tone and Tsutsui [40] proposed the SBM network DEA model, and Fukuyama and Weber [41] extended it to evaluate the efficiency of systems with undesirable outputs.

In the two-stage DEA structure, the DMU is divided into two sub-DMUs, each of which consumes inputs to generate outputs, with the input of the second stage being the output of the first. When evaluating energy efficiency, Liu and Wang [38] divided the energy efficiency of the industrial sector into an energy production phase and an energy consumption phase and used a two-stage network DEA to evaluate it, as shown in model (9).

In the network structure of model (9), the first stage is the energy production sector, where the input is primary energy and the output is secondary energy. The second stage is the energy consumption sector, whose input is the output produced in the first stage. λ<sup>1</sup> and λ<sup>2</sup> represent the linear combination coefficients of the input and output in the first and second phases, respectively; *eo* is the primary energy input in the first phase; *xmo* is the *m*-th input index of the *o*-th DMU in the first phase; *zto* is the *t*-th secondary energy of the *o*-th DMU generated in the first stage, and it is also the input index of the second stage; *yro* is the *r*-th output index of the *o*-th DMU in the second stage.

$$\begin{aligned} \max \; & \frac{1}{2} \cdot \frac{1}{M+1} \left( \sum\_{m=1}^{M} \frac{s\_m^-}{x\_{mo}} + \frac{s\_e^-}{e\_o} \right) + \frac{1}{2} \cdot \frac{1}{R} \sum\_{r=1}^{R} \frac{s\_r^+}{y\_{ro}} \\ \text{s.t. } & \sum\_{n=1}^{N} \lambda\_n^1 e\_n = e\_o - s\_e^- \\ & \sum\_{n=1}^{N} \lambda\_n^1 x\_{mn} = x\_{mo} - s\_m^-, \quad m = 1, 2, \dots, M \\ & \sum\_{n=1}^{N} \lambda\_n^1 z\_{tn} = z\_{to} + s\_t^{1,+}, \quad t = 1, 2, \dots, K \\ & \sum\_{n=1}^{N} \lambda\_n^2 z\_{tn} = z\_{to} - s\_t^{2,-}, \quad t = 1, 2, \dots, K \\ & \sum\_{n=1}^{N} \lambda\_n^2 y\_{rn} \ge y\_{ro} + s\_r^+, \quad r = 1, 2, \dots, R \\ & \lambda\_n^1 \ge 0, \; \lambda\_n^2 \ge 0, \; s\_e^- \ge 0, \; s\_m^- \ge 0, \; s\_t^{1,+} \ge 0, \; s\_t^{2,-} \ge 0, \; s\_r^+ \ge 0, \quad n = 1, 2, \dots, N \end{aligned} \tag{9}$$

$$\text{Network-DEA-based} \quad TFEE = 1 - \frac{s\_e^-}{e\_o}$$

#### 4.3.3. Evaluation Model Considering the Dynamic Process

When the data of the evaluated DMUs are panel data spanning multiple periods, the assumption of the basic DEA method that production technology stays the same over time does not match the actual situation. Therefore, Charnes and Cooper [42] introduced DEA window analysis as an extension of the traditional DEA method. Window analysis can handle both cross-sectional and panel data, allowing the dynamic efficiency of the evaluated object to be analyzed. As a commonly used dynamic efficiency analysis method, window DEA is based on the moving-average principle: it treats each DMU in each period as a separate unit for efficiency measurement. Under the window analysis framework, the energy efficiency of a region in one period can be compared both with the efficiency of other regions and with its own efficiency in other periods [43]. In addition, window analysis can explore the energy efficiency of different regions in different years through a series of overlapping windows [44].
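The window bookkeeping described above can be sketched in a few lines; `window_units` is a hypothetical helper that pools each (DMU, period) pair inside every overlapping window, so each pooled pair can then be scored as a separate unit by any DEA model.

```python
from itertools import product

def window_units(dmus, periods, width):
    """Window DEA bookkeeping: each overlapping window of `width` periods
    pools every (DMU, period) pair inside it; the pairs are then evaluated
    as if they were separate DMUs."""
    windows = [tuple(periods[i:i + width])
               for i in range(len(periods) - width + 1)]
    return {w: [(d, t) for d, t in product(dmus, w)] for w in windows}

# Two regions over four years, three-year windows: two overlapping windows,
# each containing 2 DMUs x 3 years = 6 units.
units = window_units(["East", "West"], [2006, 2007, 2008, 2009], 3)
```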

In addition, the non-parametric Malmquist productivity index (MPI), a time-series analysis technique, has been incorporated into DEA models to characterize dynamic changes in efficiency. Färe, et al. [45] introduced the Malmquist index into the DEA model to evaluate the efficiency changes of different DMUs and the dynamic changes of production technology across two periods. In the field of energy efficiency evaluation, the Malmquist model is also widely used [17,46].
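For reference, the commonly cited geometric-mean form of the MPI between periods *t* and *t* + 1, written with distance functions *D* (a standard formulation, not quoted verbatim from [45]), is:

$$MPI = \left[ \frac{D^{t}\left(x^{t+1}, y^{t+1}\right)}{D^{t}\left(x^{t}, y^{t}\right)} \cdot \frac{D^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\left(x^{t}, y^{t}\right)} \right]^{1/2}$$

where each distance function is obtained from a DEA model with the technology of the indicated period. An MPI greater than 1 indicates productivity growth, and the index is conventionally decomposed into an efficiency-change term and a technical-change term.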

#### 4.3.4. Evaluation Model Considering Game Relations

The basic DEA method maximizes the efficiency of the evaluated object by selecting a set of optimal weights, which usually leads to an overestimation of its efficiency value [47]. Therefore, Sexton, et al. [48] proposed cross-efficiency DEA, in which mutual evaluation weights between DMUs are added to improve objectivity. In reality, there is often a direct or indirect competitive relationship between different DMUs; in energy efficiency evaluation in particular, competition may be intense due to the scarcity of energy. When such a competitive relationship exists, the cross-efficiency DEA method cannot evaluate the efficiency value of a DMU correctly. To this end, Liang, et al. [49] considered the game relationship between different DMUs and extended DEA cross-efficiency to DEA game cross-efficiency, maximizing the cross-efficiency of each DMU without reducing the efficiency of the other DMUs.
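A minimal sketch of plain (non-game) cross-efficiency, the method of [48] that [49] extends, can clarify the idea: each DMU *d* solves the CCR multiplier model for its own optimal weights, and those weights are then applied to every other DMU. `cross_efficiency` is an illustrative helper assuming `scipy`; real implementations also handle the non-uniqueness of optimal weights, which this sketch ignores.

```python
import numpy as np
from scipy.optimize import linprog

def cross_efficiency(X, Y):
    """Plain DEA cross-efficiency matrix E.

    E[d, j] is the efficiency of DMU j evaluated with the optimal CCR
    multiplier weights of DMU d; column means give cross-efficiency scores.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    E = np.zeros((n, n))
    for d in range(n):
        # variables: [v_1..v_m, u_1..u_s]; maximise u.y_d -> minimise -u.y_d
        c = np.concatenate((np.zeros(m), -Y[d]))
        A_ub = np.hstack((-X, Y))      # u.y_j - v.x_j <= 0 for all j
        b_ub = np.zeros(n)
        A_eq = np.concatenate((X[d], np.zeros(s)))[None, :]  # v.x_d = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (m + s))
        v, u = res.x[:m], res.x[m:]
        E[d] = (Y @ u) / (X @ v)       # score every DMU with d's weights
    return E
```

Column means of `E` give each DMU's cross-efficiency score, which is typically lower than its self-evaluated CCR score.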

In the field of energy efficiency evaluation, game cross-efficiency DEA is widely used. Chen, et al. [50] introduced game cross-efficiency DEA for the first time to measure the energy efficiency of the power industry in China's provincial regions under environmental constraints. Studies have shown that the energy efficiency of eastern China is much higher than that of central and western China. Xie, et al. [51] evaluated the environmental efficiency of China's power generation industry through game cross-efficiency DEA, and the results showed a significant efficiency gap between regions. Yang and Wei [52] also used this method to analyze the energy efficiency of 26 prefecture-level cities in China. Since the game cross-efficiency DEA model requires multiple steps, this article does not describe it in detail here; the detailed model can be found in the literature cited in this section.

#### 4.3.5. Evaluation Model Considering Technical Heterogeneity

Based on a unified reference technology assumption, the basic DEA method evaluates all DMUs against the same technical benchmark. This means that the heterogeneity of different evaluation objects is ignored in the efficiency evaluation. In fact, there are huge differences in natural resources, economic foundations, urbanization levels, and industrial structures across the regions of China. For example, some provinces are dominated by agriculture and services, while others are dominated by manufacturing. In this case, each region of China may have its own typical energy-use characteristics, and the energy structure and technological level differ considerably among the provinces [53,54].

In order to solve the problem of biased efficiency evaluation results due to heterogeneity between DMUs, Battese, et al. [55] introduced a meta-frontier model for groups with different technologies. It can calculate comparable technical efficiency for DMUs under different technologies. Since then, DEA and meta-frontier models have been widely used in energy efficiency evaluation. Yu, et al. [56] combined the meta-frontier method with super-SBM to study the energy efficiency of various regions in China during 2006–2016. Yu, You, Zhang and Ma [56] analyzed the energy efficiency of 277 cities in China from 2007 to 2014, taking into account the effects of technological heterogeneity. Wang, et al. [57] classified Guangdong enterprises based on geographic boundaries and industry classification systems and used group-frontier and meta-frontier directional distance functions to analyze their energy efficiency. The technical explanation and detailed settings of the meta-frontier model can be found in the related literature mentioned in this section.

#### **5. Application of DEA Model in Energy Efficiency Evaluation**

As DEA has become an important and commonly used analysis tool in the field of energy efficiency assessment, a large body of literature evaluates energy efficiency based on data from countries, regions, industries and companies. This section introduces these applications of DEA in energy efficiency evaluation.

#### *5.1. Energy Efficiency Evaluation of Regions*

After Hu and Wang [4] first proposed the total factor energy efficiency framework and evaluated the energy efficiency of various regions in China, the DEA method was widely used in national and regional energy efficiency evaluation. This section reviews the studies that have evaluated energy efficiency in different regions using DEA from 2015 to 2019 and the results are shown in Table 2.

Jebali, et al. [58] analyzed the energy efficiency of Mediterranean countries and its determinants during 2009–2012. The results indicate that energy efficiency levels in Mediterranean countries are high but declined over time, and that gross national income per capita, population density and the use of renewable energy affect energy efficiency. Zhao, et al. [59] measured the energy efficiency of 35 Belt and Road countries in 2015 based on a three-stage DEA model. The results show that South Korea, Singapore, Israel, and Turkey have a TFEE of 1, while Uzbekistan, Ukraine, South Africa and Bulgaria are less efficient. He, Sun, Shen, Jian and Yu [27] established a DEA-based energy efficiency evaluation model for measuring the energy efficiency of 32 OECD countries from 1995 to 2016; in addition, the effects of environmental factors on energy efficiency assessment were compared through efficiency analysis and predicted-value analysis. Wang, et al. [60] used the DEA-Malmquist method to measure the energy efficiency of 25 countries; the results show that, using the same inputs as developing countries, developed countries achieve a better balance between GDP growth and carbon dioxide emissions. In addition, India and China increased their energy intensity during 2010–2017.


**Table 2.** Energy efficiency evaluation of regions.

In addition to evaluating energy efficiency at the country level, the energy efficiency of provinces and cities has also attracted the attention of many researchers, especially in China. Wang, Yu and Zhang [43], Li and Lin [13], and Wu, Zhu, Yin and Song [63] adopted the DEA method to evaluate the energy efficiency of 30 provinces in China; their research shows that most provinces are less energy efficient, with eastern China the most energy efficient and western China the worst, and that efficiency improved in most regions during 2006–2010. Yu, You, Zhang and Ma [56] proposed an energy efficiency evaluation model that takes into account regional technological heterogeneity and carbon emissions. By evaluating the energy efficiency of 277 cities in China between 2007 and 2014, the study found large differences in the energy efficiency of Chinese cities. Sun, Wang and Li [66] considered the heterogeneity and technology gaps of energy management in different regions and measured the energy efficiency of 211 cities in the country. The results show that the overall efficiency of Chinese cities is low, with central China the lowest, and that there is a huge technological gap between regions. Yang and Wei [52] used the game cross-efficiency DEA method to analyze the urban total factor energy efficiency of 26 prefecture-level cities in China from 2005 to 2015. The results show that the energy efficiency of cities considering competition is lower than traditionally calculated energy efficiency, and the study concluded that urban energy efficiency did not improve during the study period. There are also studies evaluating regional energy efficiency in other countries; for example, Honma and Hu [10] used the DEA method to analyze total factor energy efficiency based on data from the 47 prefectures of Japan.

#### *5.2. Energy Efficiency Evaluation of Industries and Companies*

DEA is also widely used in assessing industry energy efficiency. A search of the related literature shows that research on industry energy efficiency is mainly concentrated in high-energy-consuming industries such as electricity, construction, and transportation. Table 3 shows the energy efficiency evaluation of industries.

Makridou, Andriosopoulos, Doumpos and Zopounidis [16] used the DEA method to assess the energy efficiency of five energy-intensive industries (building, power, manufacturing, mining, and transportation) in 23 EU countries between 2000 and 2009. The study found that overall efficiency improved across all sectors during this period. Lee and Choi [67] evaluated the energy and environmental efficiency of seven manufacturing sectors in South Korea from 2011 to 2017, and the results showed that energy efficiency improved by an average of 0.3% during the study period. Zhou, Xu, Wang and Wu [34] conducted an empirical study on the energy efficiency of China's industrial sector from 2010 to 2014, and the results showed that most sectors of Chinese industry performed poorly, especially those related to energy extraction. Lei, et al. [68] evaluated the energy efficiency of 30 provincial transport departments in China. The results show that the energy efficiency of the provincial transport departments varies widely, with efficiency in the east better than in the midwest of China. Djordjevic and Krmac [69] used a non-radial DEA to evaluate the energy efficiency of the transportation industry (road, railway and aviation sectors) in Europe. The study indicates that the energy efficiency of the road sector is improving, while the energy efficiency of the railway transport sector in many assessed countries is low.


**Table 3.** Energy efficiency evaluation of industries.

Compared with the regional and industry levels, research on energy efficiency at the enterprise level is relatively scarce. Among the existing research, Cui and Li [73] used DEA to analyze the energy efficiency of 11 airlines from 2008 to 2012. The results show that capital efficiency is an important factor in promoting energy efficiency, and that the US financial crisis had a significant impact on energy efficiency. Zhang and Choi [14] carried out an empirical analysis of the energy efficiency of fossil-fuel power generation in Korea using the DEA method. The results show that coal-fired power plants have higher total energy efficiency than oil-fired power plants, and that the technology gap of coal-fired power plants is smaller than that of oil-fired power plants; the study suggests that the Korean government should promote technological innovation to reduce the technology gap among coal-fired power plants. Bi, et al. [74] analyzed the energy efficiency of Chinese fossil-fuel power generation enterprises and pointed out that the energy and environmental efficiency of these enterprises is low, with large differences between provinces. In addition to power generation companies, Zhang, et al. [75] analyzed the energy efficiency of 62 power generation units.

#### **6. Findings and Future Research Discussions**

#### *6.1. Main Findings*

By analyzing the literature on energy efficiency evaluation using the DEA method, it can be seen that a large number of studies have been conducted, from both theoretical and applied perspectives, based on data from countries, regions, industries and enterprises. The topic has attracted growing attention, and the number of publications has gradually increased since 2011. From a methodological perspective, DEA-based energy efficiency evaluation models have become more consistent with actual conditions, for example by extending from single-output models to evaluation models that consider pollutant emissions. The scope of analysis has also expanded from a single stage to multi-stage energy conversion, and dynamic analysis of multi-year efficiency has become a further focus of research. In other words, the construction of DEA-based energy efficiency evaluation models has evolved from static models with simple structures to dynamic models with complex network structures, and the accuracy of efficiency evaluation has continuously improved.

Based on the above analysis of the related research on energy efficiency using DEA, this article discusses the overall situation of existing research and existing research deficiencies as follows:

(1) From the perspective of research objects, a large number of studies use data from countries, regions, industries and companies, and many research results have been obtained. In particular, since China is a large consumer of energy and emitter of carbon, a large number of studies have examined energy efficiency in China. To address the technical heterogeneity of energy efficiency and the competition and cooperation between different research objects, existing research proposes corresponding extended models for different scenarios to improve the accuracy of efficiency assessment. It is not difficult to find that most existing energy efficiency analysis is conducted at the regional level. Although energy efficiency at the company level has also attracted the attention of many scholars, compared with the regional and industry levels, energy efficiency analysis for enterprises remains relatively scarce.

(2) From a methodological point of view, many scholars have improved the model from different perspectives, and the accuracy of energy efficiency assessment has continuously improved. As research has expanded, the agreement between DEA-based energy efficiency evaluation models and the actual situation has continued to increase. However, as a data-oriented efficiency assessment method, DEA mainly relies on structured, well-defined data. Models that can deal with energy efficiency issues in complex data environments, such as heterogeneity, uncertainty, or big data, are still lacking. As the complexity of products and services continues to increase, the data describing energy efficiency assessment objects, especially micro-level data such as enterprise-level and production-line data, are often unstructured, and different data structures affect the accuracy of DEA assessment, increasing the errors in the efficiency evaluation. Therefore, with the increasing complexity of energy systems, building DEA models for complex data environments will enable a more effective evaluation of energy efficiency.

#### *6.2. Future Research Discussions*

In order to inspire subsequent research on energy efficiency assessment using DEA, this paper proposes possible future studies from the perspective of application areas and models.

#### 6.2.1. Further Research on Energy Efficiency Issues in Enterprises

This paper argues that, where data are available, research on the energy efficiency of enterprises will help to further improve energy efficiency. Specifically, the analysis of corporate data helps reveal the state of corporate energy-saving technologies. Moreover, with the continuous improvement of carbon trading markets and policies, analyzing the energy efficiency of enterprises will help companies manage carbon emission quotas and improve their competitiveness.

#### 6.2.2. Further Research on Energy Efficiency Based on Complex Data Environment

For energy efficiency assessment models based on complex data environment, as the complexity of energy systems continues to increase, it is particularly important to build evaluation models that can analyze complex data. In this article, complex data may include inaccurate or ambiguous observations of input and output data, large datasets for analysis, and heterogeneous data due to differences in input or output structure.

(1) The DEA energy efficiency evaluation model in the heterogeneous data environment.

Despite the continuous development of information technology and the continuous improvement of data retrieval and analysis capabilities, data heterogeneity will still arise in evaluation. Unlike data loss caused by data retrieval and storage, the data heterogeneity discussed here stems from differences in input or output variables caused by the complexity of the production system. For instance, Cook, et al. [76] pointed out that steel plants produce different types of steel even when they invest the same resource structure; when the traditional DEA method is used for evaluation, the efficiency will be biased. In fact, researchers have begun to consider the heterogeneity of output indicators: Wu, et al. [77] discussed the use of an improved DEA to evaluate the efficiency of DMUs with different input and output indicators. Under different energy consumption scenarios, especially at the micro level, it is particularly important to extend efficiency assessment methods to the case of heterogeneous input–output variables.

(2) The DEA energy efficiency evaluation model in the uncertain data environment.

In real energy efficiency assessments, the observations of input and output data may be inaccurate or ambiguous [78]. Efficiency evaluation in uncertain environments has attracted the attention of many researchers. Among the approaches proposed, the fuzzy set theory introduced by Zadeh [79] has been widely adopted, and on this basis some researchers have proposed Fuzzy DEA models [80,81]. In the field of energy efficiency assessment, the extension and application of Fuzzy DEA models will help to improve the accuracy of energy efficiency assessment.
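As a tiny illustration of one common building block of Fuzzy DEA, not the specific models of [80,81], uncertain inputs and outputs are often represented as triangular fuzzy numbers and the DEA model is then solved over α-cut intervals. The helper below is a generic, assumed sketch of that α-cut step.

```python
def alpha_cut(tri, alpha):
    """[lower, upper] interval of a triangular fuzzy number (a, b, c)
    at membership level alpha in [0, 1]. At alpha = 1 the interval
    collapses to the peak b; at alpha = 0 it spans the full support."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

# A fuzzy energy input "about 3" with support [2, 5]:
lo, hi = alpha_cut((2.0, 3.0, 5.0), 0.5)
```

In interval-based Fuzzy DEA, the crisp DEA model is solved at the optimistic and pessimistic ends of each α-cut, giving an efficiency interval per DMU and per α level.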

(3) The DEA energy efficiency evaluation model in the big data environment.

In the big data environment, the dataset used for analysis is usually very large, which makes the traditional DEA calculation process time consuming; analyzing big data therefore presents researchers with many difficulties [82]. Recently, scholars have begun to evaluate energy efficiency in large-scale data environments. For example, Zhu, et al. [83] proposed a DEA-based method for the allocation and utilization of natural resources in China, using big data technology to characterize the production technology of each region. Li, et al. [84] used big data theory to analyze and evaluate the efficiency of China's forest resources, taking into account the many evaluation indicators and large data volumes of the big data environment. With the continuous improvement of information technology and data retrieval capabilities, making full use of the big data environment in the energy field and extending DEA models and algorithms will further enhance the application space of DEA.

**Author Contributions:** Conceptualization, T.X. and J.Y.; data curation, L.S.; funding acquisition, J.Y.; methodology, T.X.; resources, H.L.; visualization, L.S.; writing—original draft, T.X.; writing—review and editing, L.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China grant number 71671125.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Nomenclatures**


#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Impact of Actual Weather Datasets for Calibrating White-Box Building Energy Models Based on Monitored Data**

**Vicente Gutiérrez González , Germán Ramos Ruiz and Carlos Fernández Bandera \***

School of Architecture, University of Navarra, 31009 Pamplona, Spain; vgutierrez@unav.es (V.G.G.); gramrui@unav.es (G.R.R.)

**\*** Correspondence: cfbandera@unav.es; Tel.: +34-948-425-600 (ext. 803189)

**Abstract:** The need to reduce energy consumption in buildings is an urgent task, and wider use of calibrated building energy models (BEMs) could accelerate progress toward it. The calibration process of these models is a highly under-determined problem that normally yields multiple solutions. Among the uncertainties of calibration, the weather file has a primary position. The objective of this paper is to provide a methodology for selecting the optimal weather file when an on-site weather station with local sensors is available, and to identify the alternative option when it is not and a mathematical evaluation has to be made with sensors from nearby stations (third-party providers). We provide a quality assessment of the models based on the Coefficient of Variation of the Root Mean Square Error (CV(RMSE)) and the Square Pearson Correlation Coefficient (*R*<sup>2</sup>). The research was developed on a control experiment conducted by Annex 58 and a previous calibration study, based on the data provided for its N2 house.


**Citation:** Gutiérrez González, V.; Ramos Ruiz, G.; Fernández Bandera, C. Impact of Actual Weather Datasets for Calibrating White-Box Building Energy Models Based on Monitored Data. *Energies* **2021**, *14*, 1187. https://doi.org/10.3390/en14041187

Academic Editor: Roberto Alonso González Lezcano

Received: 23 December 2020 Accepted: 6 February 2021 Published: 23 February 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

**Keywords:** weather data; calibration; sensors; energy simulation; sensors saving; methodology; Building Energy Models (BEMs)

#### **1. Introduction**

The building energy model (BEM) is a key element when speaking about building analytics and control applications, such as model predictive control (MPC) [1] and fault detection diagnosis (FDD) [2]. The smart grids to be built in the future will use high-quality BEMs as an important element. The European Union has funded SABINA [3], which is an innovation and research project that seeks to generate financial models and create new technologies to actively manage, connect and control storage and generation to exploit the connections between the thermal inertia of buildings and electrical flexibility. The European electricity system has the capacity to introduce an increasing amount of energy generation from renewable sources into its system. SABINA echoes this demand, as it focuses on one of the cheapest sources of green energy: thermal inertia inside buildings, also achieving the coupling between heat and power grids. Using thermal inertia as an energy store is referred to as a "power to heat" (P2H) solution [4–6].

Energy prediction relies entirely on models, and therefore one of the main pillars of SABINA is the production of high-quality (calibrated) models that can give reliability to P2H technology. These models are constructed on the basis of an initial methodology developed by Ramos et al. [7] and Bandera et al. [8], which has recently been improved and empirically validated by Gutiérrez et al. [9] based on the work carried out in Annex 58 of the International Energy Agency Energy in Buildings and Communities program (IEA-EBC), approved in 2011 and completed in 2016. The main objectives of Annex 58 were to develop common quality procedures for dynamic full-scale testing, leading to better performance analysis, and to develop models to characterize and predict the effective thermal performance of building components and whole buildings [10].

Whole-building energy simulation tools allow detailed calculation of building performance criteria, such as space temperature and electric energy consumption, under the influence of external inputs such as weather [11], occupancy, ground [12] and infiltration. These calculations are carried out on time-series data, and EnergyPlus [13] is among the main tools that perform them based on what is called a white-box model. Such models are founded on physical parameters rather than mathematical or statistical formulations. Their main challenge is reducing the gap between measured and simulated data, as these models are over-parameterized and under-determined. Despite the potential benefits and the software's continuous progress, a number of problems detract from more widespread use: the gap undermines confidence in model predictions and curtails the adoption of BEM tools during design, commissioning, and operation.

It is necessary that the BEM closely represent the actual behavior of the real building, and the calibration process can achieve this. However, calibrating a white-box model by aligning measured data to a simulation is a highly under-determined problem, which normally yields multiple solutions [14]. This under-determination leads to an uncertainty problem that should be analyzed properly [15,16]. De Wit [17] classified the various sources of uncertainty as follows:


In this paper, the main focus is on scenario uncertainty arising from outdoor weather conditions. As pointed out by some authors [18–20], one of the key parameters for producing a calibrated BEM is the weather file. According to Bhandari et al. [21], three types of weather files can be fed into an energy model: future [22–24], typical and actual [25]. Typical weather files synthesize a weather file from a set of years, usually covering the last 20–30 years; they are used to understand the building under standard conditions. For this reason, typical weather files, together with so-called future weather files, are not suitable for building calibration. The study presented in this paper focuses on actual weather files, which are constructed for a specific location and time. The data for generating the weather file can be obtained from an on-site weather station [26] or by processing data from several nearby stations [25,27,28]; the latter option is often used by external or third-party data providers [29].

The calibration of a BEM normally entails the installation of an on-site weather station. This installation usually implies an extra cost for the project because, on top of the expense of the station, data handling can be an additional issue. This is one of the main problems that restrains energy service companies (ESCOs) from promoting option D (calibrated BEM for measuring energy conservation measures) of the International Performance Measurement and Verification Protocol (IPMVP) [30]. For this reason, the goal of this study is to tackle the following research questions. First, is it possible to obtain a better weather file than that provided by the on-site weather station? Secondly, when there is no option of installing a dedicated weather station and data must be retrieved from nearby stations, is it still possible to obtain a calibrated BEM? The research presented in the following sections answers these two questions positively and provides a methodology for any modeler to quantify the impact of these decisions on their work.

To answer these questions, the paper is structured as follows. Section 2 describes the design of the method, explaining the study used for weather file selection based on two different techniques. First, the weather file selection is performed with a base model to produce a ranking of the weather files, and a selection of the best results is made based on two criteria: the best results in terms of uncertainty indexes, and the most cost-effective solution that does not reduce model quality. Secondly, a calibration process is performed based on the four selected weather files. This process confirms the ranking of the weather files and the initial hypothesis that it is not necessary to conduct the calibration, which is time consuming, in order to choose the most adequate weather file. In Section 3, we present our analysis of the results and explain how a model better than the on-site one was achieved and how a cost-effective solution was provided without greatly reducing model quality. The paper finishes with our conclusions, which are discussed in Section 4.
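The two uncertainty indexes used to rank the weather files can be computed directly. A minimal sketch, assuming the usual definitions (CV(RMSE) expressed in percent of the mean measured value, and *R*<sup>2</sup> as the squared Pearson correlation); the function names are illustrative:

```python
import numpy as np

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, in percent of the
    mean measured value."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / np.mean(m)

def r_squared(measured, simulated):
    """Square of the Pearson correlation coefficient between the
    measured and simulated series."""
    m = np.asarray(measured, float)
    s = np.asarray(simulated, float)
    return np.corrcoef(m, s)[0, 1] ** 2
```

Lower CV(RMSE) and higher *R*<sup>2</sup> indicate a better match between the simulated and monitored series; note that a constant offset leaves *R*<sup>2</sup> at 1 while still inflating CV(RMSE), which is why both indexes are reported together.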

#### **2. Design of the Method**

To apply the methodology proposed in this paper, the data provided by Annex 58 of the IEA-EBC [10,31] have been used both for generating the energy model and for obtaining the calibrated model.

We focused on the N2 house (N2 is the name of the house) in the German town of Holzkirchen, about 35 km south of Munich (47.874 N, 11.728 E). The house stands on a flat area with no nearby buildings that could cast shadows on it during the summer period, when the Annex test was carried out, and it shares the typical oceanic climate of Central Europe. The house has three floors (basement, ground floor and attic) with a clear height of 2.50 m. The test proposed by the Annex focuses on the ground-floor spaces: the living room, the kitchen, the entrance, the bathroom, the corridor and the two bedrooms (Figure 1). This dwelling is optimal for the proposed study because it provides all the data necessary to carry out the work. At the same time, these data are of high quality, which reduces the uncertainty they may introduce into the final results of the research.

**Figure 1.** Plan and external views of the N2 house. Holzkirchen, Germany.

The exercise developed in Annex 58 consisted of five periods of energization of the house, each with certain characteristics; the energy model must then reproduce reality, both in the energy consumed and in the temperature reached. The first period (initialization) lasted three days, during which a constant interior temperature of 30 °C was maintained in the house. In period 2 (set point 30 °C), seven days in length, a constant temperature of 30 °C was still maintained inside the house, and the calibrated energy model had to provide the real energy consumed. In the third period, which lasted 14 days (Period 3, ROLBS), energy was introduced through the living room radiator at random intervals following a randomly ordered logarithmic binary sequence (ROLBS).

This sequence was developed within the EC COMPASS project [32]. Its objective is to ensure that all relevant frequencies that can occur in a given time span have the same weight. To achieve this, the on and off periods are chosen at logarithmically equal intervals and mixed in a quasi-random order. This sequence ensures that there is no correlation between solar gains and heat input through the HVAC systems. The energy model must be able to reproduce the internal temperature that this energy produces. In period 4 (set point 25 °C), a constant temperature was again imposed in the house, this time at 25 °C, and the energy model had to reproduce the energy required by that indoor temperature. The last period is the fifth (free oscillation), where the house was left in free oscillation, i.e., without any energy input. The calibrated model then had to reproduce the interior temperatures.
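The scheduling idea can be sketched in a few lines of Python. This is an illustration of a ROLBS-like sequence, not the original COMPASS implementation; the duration bounds `t_min`, `t_max` and the number of levels `n_levels` are hypothetical parameters chosen for the example.

```python
import random

def rolbs_like_sequence(t_min, t_max, n_levels, seed=0):
    """Build a ROLBS-like heating schedule (illustrative sketch):
    pulse durations are spaced at logarithmically equal intervals
    between t_min and t_max hours, then the on/off pulses are
    shuffled into a quasi-random order."""
    ratio = (t_max / t_min) ** (1 / (n_levels - 1))
    durations = [t_min * ratio**k for k in range(n_levels)]
    # one 'on' (1) and one 'off' (0) pulse per duration level
    pulses = [(d, state) for d in durations for state in (1, 0)]
    random.Random(seed).shuffle(pulses)
    return pulses

schedule = rolbs_like_sequence(t_min=0.5, t_max=18.0, n_levels=8)
total_hours = sum(d for d, _ in schedule)
```

By construction, the total on time equals the total off time, so the heat input is decorrelated from any fixed daily pattern of solar gains.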

For the weather files used in this exercise, a climate file offered by a third party was selected, located about 440 m in a straight line from the selected house (47.87 N, 11.73 E). Together with the data from the weather station placed on the site, combinations of the sensors were composed (Table 1). Taking the sensors of the on-site station as a basis and replacing them with their third-party counterparts generated a total of 64 climate files. The sensors used were the outside temperature (T), global horizontal irradiation (GHI), diffuse horizontal irradiation (DHI), wind speed (WS), wind direction (WD), and relative humidity (RH).
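The figure of 64 climate files follows directly from choosing, for each of the six sensors, either the on-site or the third-party source (2^6 = 64). A minimal sketch of the enumeration:

```python
from itertools import product

SENSORS = ["T", "GHI", "DHI", "WS", "WD", "RH"]

def weather_file_combinations():
    """Enumerate every way of sourcing each of the six sensors from
    either the on-site station or the third-party provider.
    Returns 2**6 = 64 dictionaries, one per candidate weather file."""
    combos = []
    for sources in product(("on-site", "third-party"), repeat=len(SENSORS)):
        combos.append(dict(zip(SENSORS, sources)))
    return combos

files = weather_file_combinations()
```

The first combination is the pure on-site file and the last is the pure third-party file; every mixed file lies in between.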

Once the energy model of the house was created (baseline model), and having already generated the weather files to be used in the test, we simulated the energy model across all periods and with all the climate files in order to obtain a ranking of the behavior of the weather files and thereby determine which one best fits reality.

The next step was to subject the simulations performed with all the weather files and the base model to a fitting process, i.e., a calibration process. This process was conducted with 4 of the 64 climate files generated; the four considered most relevant were selected. These were:



**Table 1.** All possible combinations of weather files. The sensors used were the outside temperature (T), global horizontal irradiation (GHI), diffuse horizontal irradiation (DHI), wind speed (WS), wind direction (WD), and relative humidity (RH).

The research method used started with the selection of the weather files. Once they had been selected, the process of calibrating the base model began [9]. The objective of this procedure was to quantify the impact of the weather files on the calibrated model, thus checking which energy model, together with its weather file, best fit reality, and to verify that the ranking generated in the simulations of the base model with all the weather files was preserved (Figure 2).

**Figure 2.** Generic process diagram for achieving the calibrated model.

The calibration process, tested in previous studies with different buildings and with satisfactory results, generates the model that best fits the temperature and energy of all the periods proposed by the Annex 58 exercise (periods 2 to 5). To achieve this, several scripts were programmed in the EnergyPlus [33] run-time language; these commands transfer the measured temperature or energy to the model. The periods described in the previous paragraphs were also subjected to the calibration process shown in Figure 3. Although the methodology may resemble an optimization process, the objective function measures the fit between the values provided by the model and the real data. In this case, the coefficient of variation of the root mean square error (CV(RMSE)) and the squared Pearson correlation coefficient (*R*2) were used [16]. To find the best solution, the non-dominated sorting genetic algorithm (NSGA-II) [34] was chosen as the search engine. The possible combinations of the parameters determine the search space; these parameters are the thermal bridges, thermal mass, infiltrations, and capacitances.
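The ranking step at the heart of NSGA-II can be illustrated with a short sketch. Here each candidate model is scored by the pair (CV(RMSE), 1 − R²), both to be minimized; the sample values are hypothetical, and this is only the non-dominated filtering, not the full genetic algorithm.

```python
def pareto_front(points):
    """Return the non-dominated subset of candidate models, where each
    point is (CV(RMSE), 1 - R2) and both objectives are minimized.
    This is the core ranking step of NSGA-II, shown as a minimal
    O(n^2) sketch rather than the full algorithm."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate scores produced during a calibration run.
candidates = [(0.12, 0.08), (0.20, 0.02), (0.25, 0.10), (0.10, 0.15)]
front = pareto_front(candidates)
```

The third candidate is dominated (the first is at least as good in both objectives), so it is excluded from the front; the genetic algorithm would keep breeding new parameter combinations from the non-dominated set.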

**Figure 3.** Calibration environment with a genetic algorithm, using the coefficient of variation of the root mean square error (CV(RMSE)) and the non-dominated sorting genetic algorithm (NSGA-II) [9].

Once all the periods were calibrated according to this methodology, the model that best suited all of them was obtained. This calibration operation was then repeated with the proposed weather files.

When the calibrated models (*CU1*, ..., *CUn*) were obtained, the adjustment they achieved with respect to the real temperature and energy was studied, thus discovering which weather file, after a calibration process of the energy model, was the closest to reality. In addition, we checked whether the adjustment ranking of the base model with all the weather files was similar to the results obtained with the calibrated models. The models were evaluated with the same two types of indices that the genetic algorithm used in the calibration process. The first is the CV(RMSE) (Equation (1)), the coefficient of variation of the root mean square error, obtained by weighting the root mean square error (RMSE) by the mean of the measured data. This index treats the measured variability as error variance, and therefore the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Guideline 14, the Federal Energy Management Program (FEMP), and the International Performance Measurement and Verification Protocol (IPMVP) recommend its use [30,35–42]. The second is the coefficient of determination R<sup>2</sup> (Equation (2)), the percentage of variation of the response variable explained by its relationship with one or more predictor variables. Generally, the higher the R<sup>2</sup>, the better the fit of the model to its data; R<sup>2</sup> always lies between 0 and 100%.

$$\text{CV(RMSE)} = \frac{1}{\bar{y}} \left[ \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n - p} \right]^{\frac{1}{2}} \tag{1}$$

$$R^2 = \left( \frac{\sum_{i=1}^{n} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2 \, \sum_{i=1}^{n} (\hat{y}_i - \bar{\hat{y}})^2}} \right)^2 \tag{2}$$
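The two indices can be computed directly from Equations (1) and (2). In this sketch, `p` denotes the number of adjustable model parameters and defaults to 1, which is an assumption for illustration; the paper does not state the value it used.

```python
import math

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of the RMSE, Equation (1): the RMSE
    weighted by the mean of the measured data, expressed in %."""
    n = len(measured)
    sse = sum((y - yhat) ** 2 for y, yhat in zip(measured, simulated))
    rmse = math.sqrt(sse / (n - p))
    return 100 * rmse / (sum(measured) / n)

def r_squared(measured, simulated):
    """Squared Pearson correlation coefficient, Equation (2)."""
    n = len(measured)
    my = sum(measured) / n
    ms = sum(simulated) / n
    cov = sum((y - my) * (s - ms) for y, s in zip(measured, simulated))
    var_y = sum((y - my) ** 2 for y in measured)
    var_s = sum((s - ms) ** 2 for s in simulated)
    return cov ** 2 / (var_y * var_s)

# Hypothetical hourly zone temperatures: measured vs. simulated.
measured = [20.0, 22.0, 24.0, 26.0]
simulated = [19.5, 22.4, 23.8, 26.6]
cv = cv_rmse(measured, simulated)
r2 = r_squared(measured, simulated)
```

Note the complementary behavior the text relies on: R² is insensitive to a uniform scaling of the simulated curve (it only compares shapes), whereas CV(RMSE) penalizes any offset in magnitude.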

#### **3. Analysis of the Results and Discussion**

The first exercise performed to demonstrate the methodology was the simulation of the base model with all the proposed weather file combinations, in an attempt to discover which climate file, simulated with the baseline model, best fit reality. The results can be seen in the box plots of Figures 4 and 5, which show the results obtained in this phase of the methodology. Both figures highlight the results of:


**Figure 4.** The simulation result of the base model with all the proposed weather files. CV(RMSE) Index.

Figure 4 shows the sum of the CV(RMSE) index over the thermal zones involved in the simulation (living room, bedroom, kitchen, and children's room). This index ranges from 0 to ∞, where 0 represents the model with the best adjustment. It was obtained by comparing the temperature and energy data (depending on the period) generated by the base energy model and the different weather files with the real data. Each period has its own box plot. On the left of the figure are the periods where the model is asked for temperature, and on the right are the periods where the model is asked for energy:

The required temperature periods for the energy model:


The required energy periods for the energy model:

• Period 2: In this period, the house was kept at a constant temperature of 30 °C, and the energy model was required to reproduce the energy needed to reach that temperature. The weather file that produced the best fit with reality was the "weather file combination", followed very closely by the on-site weather file, both located in the first quartile. In the second quartile, but above average, was the "third-party weather file + on-site temp. sensor". The weather file worst suited to reality was the third-party one, which lay in the fourth quartile.

• Period 4: The house was heated to 25 °C and the energy model had to reproduce the energy necessary to obtain that temperature. The best weather file was again the "weather file combination", this time by a wider margin over the "on-site weather file", although both files were in the first quartile. In the third quartile, below the average, we find the "third-party weather file + on-site temp. sensor" and, as in the other periods, located at a great distance from the rest, the "third-party weather file" in the fourth quartile.

**Figure 5.** The simulation result of the base model with all the proposed weather files. R2 Index.

Figure 5 shows the sum of the index R<sup>2</sup> of the thermal zones that enter the simulation process: living room, bedroom, kitchen, and children's room. This index measures how much the shapes of the two curves resemble each other. In this case, the temperature and energy curves produced by the simulated model and the different weather files were compared with reality. The range was from 0 to 4 (4 because this is the sum of the four thermal zones that are analyzed in the energy model) with 4 being the model with the best adjustment. Each period is shown in a box plot. As in Figure 4, on the left are the periods where the model is asked for temperature, and on the right are the periods where the model is asked for energy.

The required temperature periods for the energy model:


The required energy periods for the energy model:


In summary, the weather file that best fit the real data was the "weather file combination", followed at a very short distance by the "on-site weather file". Third, the "third-party weather file + on-site temperature sensor" was positioned most of the time in the third quartile, but above average. The "third-party weather file" was without doubt the file that fit reality the worst, placing itself, in most periods, at the lower limit of the last quartile.

Once this check was performed, the combination of sensors from the on-site station and from the third-party weather file that produced the best adjustment results was known: the wind speed (WS), global (GHI) and diffuse (DHI) horizontal irradiation, and temperature (T) sensors of the weather station placed on the site, and the wind direction (WD) and relative humidity (RH) sensors of the third party.

The next step, as explained in the description of the methodology, was to justify the selection of the four weather files with which the calibration process was performed. They were:


The energy model with each type of weather file was subjected to a fitting process (explained in Section 2), with the aim of verifying whether the same results were obtained as in the ranking made with the simulations of the baseline model and all the weather files (Figure 6).

**Figure 6.** The process diagram for achieving the calibrated model.

The model with the four selected weather files was calibrated in each of the periods proposed in Annex 58. The four calibrated models obtained for each period were joined into one, creating the model that best fits reality in all periods. Thus, we obtained four calibrated models, one for each selected weather file:


• *CU4* model obtained with the "weather file combination".

To present the results obtained, Table 2 shows the uncertainty indices CV(RMSE) and R<sup>2</sup> attained by the models in the different calibration periods. For the periods where the temperature generated by the energy model was compared with the real temperature, the area-weighted average over the thermal zones analyzed was used to produce the index result. For the periods where the energy consumed was compared, the energy spent per thermal zone was summed to obtain the uncertainty index.

To combine the effects of the weather file across all calibration periods into a single result, the temperature and energy indices obtained were normalized so that they share the same basis (normalized indices) and can be added up to yield a unified value (sum of indices).
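The paper does not spell out the normalization formula, so the sketch below assumes a min-max normalization across the candidate weather files of error-type indices (e.g., CV(RMSE) and 1 − R², where lower is better); the table values are hypothetical.

```python
def normalized_index_sum(index_table):
    """Min-max normalize each column (one column per period/index) so
    all indices share a 0-1 basis, then sum per weather file.
    NOTE: the exact normalization used in the paper is not specified;
    min-max across the candidate files is an assumption here, and all
    columns are assumed to be error-type indices (lower is better)."""
    files = list(index_table)
    cols = len(next(iter(index_table.values())))
    sums = {f: 0.0 for f in files}
    for c in range(cols):
        col = [index_table[f][c] for f in files]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # guard against a constant column
        for f in files:
            sums[f] += (index_table[f][c] - lo) / span
    return sums

# Hypothetical per-period indices: [CV(RMSE) %, 1 - R2].
table = {
    "combination": [8.08, 0.05],
    "on-site": [14.46, 0.07],
    "third-party": [35.00, 0.40],
}
sums = normalized_index_sum(table)
```

With this convention, the file that is best in every column sums to exactly 0 and the file that is worst in every column sums to the number of columns, which makes rankings across files directly comparable.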

**Table 2.** The normalized results in the checking periods with the different weather files. Randomly ordered logarithmic binary sequence (ROLBS).




When studying the results obtained, the weather file with the best results was the "weather file combination", with an index sum value of 0.416, ahead by a small margin of the "on-site weather file" (0.544). In third position was the "third-party weather file + on-site temperature sensor", with a normalized index sum value of 0.985. The "third-party weather file" obtained a normalized index sum value of 6.245, well behind the other weather files. As can be seen in the table, we confirmed that the ranking the weather files produced with the base model was repeated in the calibration process.

When analyzed in more detail, we concluded that in the periods where the result sought was temperature (periods 3 and 5), the "weather file combination", the "on-site weather file", and the "third-party weather file + on-site temperature sensor" had very similar results, all obtaining very good values. The biggest differences occurred when we examined the energy periods (periods 2 and 4). Here, the "weather file combination" and the "on-site weather file" pulled clearly ahead of the other files.

In addition, in period 4, the "weather file combination" was much better adjusted to the expected result of 8.08% CV(RMSE) compared to the 14.46% obtained by the "on-site weather file".

The weather files were also analyzed to determine whether they complied with the objectives proposed by the international standards. The International Performance Measurement and Verification Protocol (IPMVP) states that a model achieves a good fit when it obtains a CV(RMSE) of less than ±20% on an hourly scale. ASHRAE and the Federal Energy Management Program (FEMP) state that a model can be considered fit when it obtains a CV(RMSE) index of less than ±30% on the same scale (Table 3). As for the R<sup>2</sup> index, ASHRAE recommends that models should have an index of at least 75% to be considered calibrated.
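These hourly-scale tolerances can be expressed as a simple compliance check. The threshold values follow the text above; the function and dictionary names are illustrative.

```python
# Hourly calibration tolerances as described in the text
# (CV(RMSE) in %, R2 in %).
CRITERIA = {
    "IPMVP": {"cv_rmse_max": 20.0},
    "FEMP": {"cv_rmse_max": 30.0},
    "ASHRAE": {"cv_rmse_max": 30.0, "r2_min": 75.0},
}

def standards_met(cv_rmse, r2):
    """Return the list of standards a model complies with on an
    hourly scale, given its CV(RMSE) and R2 values in percent."""
    met = []
    for name, limits in CRITERIA.items():
        ok = cv_rmse <= limits["cv_rmse_max"]
        if "r2_min" in limits:
            ok = ok and r2 >= limits["r2_min"]
        if ok:
            met.append(name)
    return met
```

For example, a period-4 result of CV(RMSE) = 20.60% with a good R² satisfies FEMP and ASHRAE but narrowly misses the stricter IPMVP limit, which matches the discussion of Table 2 below.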

**Table 3.** The calibration criteria of Federal Energy Management Program (FEMP), American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE), and international performance measurement and verification protocol (IPMVP).


In Table 2, the results that are not colored comply with all the international standards, the results that comply only with ASHRAE and FEMP are colored orange, and the periods that do not comply with any standard are marked in red. Both the "weather file combination" and the "on-site weather file" met the standards' expectations in all the periods analyzed. The "third-party weather file + on-site temperature sensor" achieved very good results, although, in period 4, it did not manage to comply with the IPMVP standard for the CV(RMSE): it obtained 20.60%, with 20.00% being the limit for compliance. The "third-party weather file" failed to respond satisfactorily in period 2, where it did not meet the R<sup>2</sup> threshold, and in period 4, where it did not reach a good value for either index.

#### **4. Conclusions**

Based on the results of the study conducted in this paper, a new method is proposed to establish the degree to which the sensors that generate the climate file affect the process of adjusting the energy models of a building. Thus, we were able to select the best composition of sensors depending on the needs of the project.

Different types of sensors come into play in the process of creating the weather file: wind speed and direction, global and diffuse horizontal irradiation, outdoor temperature, and relative humidity. Data from these sensors can be obtained from weather stations placed at the experiment site or acquired from third parties.

A weather station placed on the site requires a strong economic investment, not only for buying or renting the group of sensors needed, but also for processing and validating the data generated. Although weather data acquired from third parties are much cheaper, they have the disadvantage of being less accurate.

In this article, a new, fast methodology was developed to determine the degree of adjustment of the different weather files composed of on-site data and third-party data. To this end, the sensors of the on-site weather station and those of third parties were combined to create as many climate files as possible, resulting in a total of sixty-four files.

The results obtained in the experiment confirmed that there was a combination of sensors that offered a better degree of adjustment between the simulated temperatures and energies and reality than the one offered by the site's weather station alone. Using these sensors to compose the weather file allowed the energy model to represent reality more accurately. This combination used the wind speed (WS), global and diffuse horizontal irradiation (GHI, DHI), and temperature (T) sensors from the weather station placed at the site, and the wind direction (WD) and relative humidity (RH) sensors from the data provided by third parties (weather file combination).

Good results were also obtained by the climate file composed of the third-party data plus the temperature sensor of the site's weather station (third-party + on-site temp. sensor). With a very low economic cost, as it was simply necessary to place a temperature sensor outside the building, very positive adjustment results were achieved. The model performed excellently in all periods, complying with the parameters proposed by ASHRAE in both the CV(RMSE) and R<sup>2</sup> indices. This model obtained a normalized index sum of 0.985, compared to 0.544 obtained by the model calibrated with the site's climate file. This is a difference to be considered, but it is far from the one produced by the model adjusted with the climate file composed purely of third-party data, which obtained an index sum of 6.245. Depending on the degree of precision required of the energy model, this option can be a good way to approach the adjustment process; this weather file had the best cost-effectiveness ratio.

One of the most relevant conclusions obtained when developing the proposed methodology was that it was not necessary to go through a model adjustment process to be able to assess the effect of the weather file on the final result. The results obtained when simulating all the proposed climate files with the base model were the same as those obtained when executing an adjustment process: even the positions in which the files ranked, as well as the distances between them, were preserved.

With the methodology proposed here, in a swift and simple way, we opened up the option of choosing the composition of the meteorological file for use in the energy model adjustment process. This decision can be based on economic, precision, and effectiveness factors, among others.

**Author Contributions:** V.G.G. and C.F.B. supervised the methodology used in the article, performed the simulations and the analysis, and wrote the manuscript. G.R.R. developed the EnergyPlus model and participated in the data analysis. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was funded by the research and innovation program Horizon 2020 of the European Union under Grant No. 731211, project SABINA.

**Acknowledgments:** We would like to thank the promoters of Annex 58 for the possibility of accessing the data of the houses. Without these data, it would not have been possible to complete this work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:


#### **References**


## *Article* **Life Cycle Assessment of Dynamic Water Flow Glazing Envelopes: A Case Study with Real Test Facilities**

**Belen Moreno Santamaria 1, Fernando del Ama Gonzalo 2,\*, Matthew Griffin 2, Benito Lauret Aguirregabiria <sup>1</sup> and Juan A. Hernandez Ramos <sup>3</sup>**


**Abstract:** High initial costs hinder innovative technologies for building envelopes. Life Cycle Assessment (LCA) should consider energy savings to show relevant economic benefits and the potential to reduce energy consumption and CO2 emissions. Life Cycle Cost (LCC) and Life Cycle Energy (LCE) analyses should cover investment, operation, maintenance, dismantling, disposal, and/or recycling of the building. This study compares the LCC and LCE analysis of Water Flow Glazing (WFG) envelopes with traditional double- and triple-glazing facades. The assessment considers initial, operational, and disposal costs and energy consumption, as well as different energy systems for heating and cooling. Real prototypes were built in two different locations to record real-world data on yearly operational energy. WFG systems consistently showed a higher initial investment than traditional glazing. The final Life Cycle Cost analysis demonstrates that WFG systems are better over the operation phase only when compared with traditional double glazing. However, a Life Cycle Energy assessment over 50 years concluded that energy savings between 36% and 66% and CO2 emission reductions between 30% and 70% could be achieved.

**Keywords:** water flow glazing; dynamic building envelope; life cycle assessment

**Citation:** Santamaria, B.M.; Gonzalo, F.d.A.; Griffin, M.; Aguirregabiria, B.L.; Hernandez Ramos, J.A. Life Cycle Assessment of Dynamic Water Flow Glazing Envelopes: A Case Study with Real Test Facilities. *Energies* **2021**, *14*, 2195. https://doi.org/10.3390/en14082195

Academic Editor: Fitsum Tariku

Received: 14 March 2021; Accepted: 8 April 2021; Published: 14 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

In recent years, clean energy use has steadily grown. However, energy consumption has not altered its pattern, with fossil fuels remaining the primary source of energy consumption and generation. The carbon emissions and greenhouse gases caused by conventional energy sources have severe consequences for the environment. Therefore, building codes, regulations, and energy directives consider the impact of carbon emissions when evaluating energy efficiency [1]. New technologies emerge rapidly and force building designers to act without a thorough environmental impact analysis [2]. Controlling the construction sector's environmental impact is the most critical challenge that the architecture, engineering, and construction (AEC) industry must face in the near future [3,4]. Life Cycle Assessment (LCA) is a comprehensive and internationally standardized method that quantifies all relevant emissions, resources consumed, and the related environmental aspects associated with any goods or services [5,6].

#### *1.1. Literature Review*

Extensive investigation has been conducted to study the environmental impact of building materials [7]. A Life Cycle Assessment can significantly help improve a sustainable building design by presenting how different materials or processes contribute to the building's overall environmental impact [8,9]. According to some authors, Life Cycle Assessment considers four phases: the extraction of resources, the construction phase, the building operation, and the building's demolition [10]. The European (EN) standards for building Life Cycle Assessment maintain a list of 24 categories that describe potential environmental impact [11–13]. A broadly accepted measure of the environmental impact of buildings is energy consumption [14]. The Life Cycle Energy analysis (LCE) accounts only for energy inputs at the different life cycle stages, including the operational energy and embodied energy of buildings over their lifetime. Life Cycle Energy analysis evaluates the embodied energy of products, design modifications, and strategies used to optimize operational energy. The most common period for major renovations in the residential sector is, according to some authors, 30–40 years [15]. Other authors estimate the lifetime of buildings as 50 years in their LCC approach [16]. The optimization of building envelopes should examine both the energy-saving and the Life Cycle Cost goals [17]. Different authors have proposed mathematical models to calculate the Life Cycle Cost of each envelope material and of the heating and cooling system [18]. Construction costs, return on investment, increased market value, and maintenance and operation costs are the factors that determine consumers' response to sustainable buildings [19]. Since buildings have long lifespans, design decisions have long-term consequences, considering that upfront costs amount to less than 30% of the total Life Cycle Costs [20,21].

Global warming potential (GWP) is a measure of the amount of heat trapped in the atmosphere. It is measured in carbon dioxide equivalents (kgCO2eq), meaning that the total greenhouse potential of a specific emission type is given relative to CO2 [22]. Since the calculation of global warming potential includes the residence time of gases in the atmosphere, a total time range for an assessment can be defined at 100 years [23]. Emissions are substances released into the environment, which includes the air, water, and soil; these, in turn, negatively impact human and environmental health. Emissions typically enter the environment as waste products from different industrial processes. The most common (and therefore best-known) emissions are greenhouse gas (GHG) emissions [24]. Some articles have studied building envelope design from the perspectives of its energy-saving potential, environmental consequences, and social impacts. It is of the utmost importance to optimize the balance between cost and energy savings [25,26].

Dynamic or active buildings adapt their thermal performance according to different inputs, such as outdoor and indoor conditions, and can produce part of the building's energy over its operational life. Technical research and numerical simulation tools on active building envelopes have increased over the last decade. However, the share of dynamic facades in the building industry remains stable [27]. Some dynamic envelopes change their opacity or vary their transmission or reflection properties. Electrochromic glass, polymer-dispersed liquid crystal, and suspended particle devices are hindered by their high cost and non-standardized manufacturing processes [28,29]. Active envelopes can also produce renewable energy on-site. Photovoltaic panels (PV) are considered the most reliable on-site renewable energy generation technology due to the wide range of electricity uses and the flexibility of the cables that transport the energy [30]. Solar thermal collectors are considered a renewable and CO2-free energy source. However, the stiffness of the pipes transporting warm water limits their applicability as part of the building envelope [31].

This paper examines the Life Cycle Assessment (LCA), Life Cycle Energy (LCE), and Life Cycle Cost (LCC) calculations for dynamic Water Flow Glazing (WFG) envelopes. Coupled with a plug-and-play piping system, WFG can produce a high-performance building envelope and an innovative heating and cooling system [32,33]. Water Flow Glazing is a technology that can be integrated into transparent building envelopes, either in new buildings or as a retrofit for traditional glazing [34]. Water flowing through WFG panels captures a large percentage of the solar infrared radiation while keeping the glazing transparent [35]. WFG can also absorb solar energy to provide domestic hot water to plumbing fixtures when the solar irradiance is high enough [36,37].

#### *1.2. Objectives and Innovation*

Water Flow Glazing has proven its potential for saving energy and increasing occupant comfort in previous studies. The main contribution of this article is a thorough analysis of a 50-year life cycle from the energy and cost perspectives. To accomplish this task, the authors employed a tested methodology used in previous scientific articles. This study considered two prototypes with different locations, glazing compositions, and energy systems. Real-world data were collected from these prototypes and analyzed to determine the actual energy performance of WFG systems. The paper is structured as follows. Section 2 provides background information on Water Flow Glazing technology, a description of the WFG test facilities, and the methodology of the Life Cycle Energy (LCE) and Life Cycle Cost (LCC) calculations. Section 3 analyzes the year-long data collection that occurred at the WFG facilities, including the embodied energy, the operational energy, and the renewable energy production of the test facilities. Section 4 discusses LCE and LCC and their impact on global warming potential based on a multi-index evaluation, as well as the limitations of the methodology. Section 5 presents the conclusions.

#### **2. Materials and Methods**

Water flow glazing can be used as both a high-performance envelope as well as an element of the heating and cooling system. The WFG module presented in this paper is made up of three components: an extruded aluminum frame, the glazing, and a circulating device. The glazing is a compound of several layers of laminated glass and coatings, with thermal and spectral properties provided by the glass manufacturers. It combines coatings and Polyvinyl butyral layers with a variable water mass flow rate to absorb or reject incoming infrared solar radiation. The circulating device is defined by a water pump, an exchanger that can regulate heat, and different sensors (such as the water flow and water temperature) to regulate the different fluid variables involved. Finally, the aluminum assembly provides the frame with structure. Although WFG can absorb and transport thermal energy, alternative heating and cooling sources might be added to compensate for the heat losses and gains and maintain comfort conditions. The initial cost of the glazing exceeds the cost of a traditional double or triple glazing panel. However, its performance has to be evaluated over its life cycle to consider potential energy savings. In addition, the water flow captures the solar infrared radiation and increases its temperature through the window. This water is transported and eventually releases the energy in buffer tanks so that thermal energy can be used in hydronic heating systems. The renewable energy integrated into the building envelope might not be enough to meet the energy needs, so it is necessary to study different energy sources and systems to compare their final energy consumption and greenhouse gas emission potential. The Life Cycle Assessment of Water Flow Glazing includes four phases. Phase 1 is the extraction, production of construction materials, and transportation of materials from the extraction point to the construction site. 
Phase 2 is the construction phase, including required energy to run construction machinery, any additional materials for construction, and any waste disposal. Phase 3 includes the energy required for the building's actual operation (including all energy used during the building occupation over the total lifespan of the structure), general maintenance, repairs, and finally, any required material replacement for the building. Finally, phase 4 involves demolition and transport of waste to recycling plants or landfills.

#### *2.1. Water Flow Glazing Thermal Properties*

Heat flux through any glazing depends on the difference between the outdoor and indoor temperatures (*θe* − *θi*) and on the direct and diffuse solar radiation, *i*0. Equation (1) gives the heat flow, *q*, in glazing panels with gas chambers as a function of the thermal transmittance, *U*, and the *g*-factor. Equation (2) shows that the fluid's temperature and mass flow rate affect the heat flow through the glazing. Equations (1) and (2) are valid assuming steady conditions, constant convective and radiative coefficients, negligible thermal resistance and thermal mass of the glass panes and the water chamber, and uniform flow inside the water chamber.

$$q = U(\theta_e - \theta_i) + g \, i_0 \tag{1}$$

$$q = U(\theta_e - \theta_i) + U_w(\theta_{IN} - \theta_i) + g \, i_0, \tag{2}$$

where *Uw* is the thermal transmittance between the water chamber and the interior, *U* is the glazing thermal transmittance, *θe* is the outdoor temperature, *θi* is the indoor temperature, and *θIN* is the inlet temperature of the fluid into the WFG system. Equations (3)–(5), taken from a previous article [38], give the thermal transmittances of WFG along with the *g*-factor. All these parameters depend on the mass flow rate, which is assumed to be uniform inside the glass pane.

$$U = \frac{U_i U_e}{\dot{m}c + U_e + U_i}, \tag{3}$$

$$U_w = \frac{\dot{m}c \, U_i}{\dot{m}c + U_e + U_i}, \tag{4}$$

$$g = \frac{U_i}{\dot{m}c + U_e + U_i}\left( A_1 \frac{U_e}{h_e} + A_2 \left( \frac{1}{h_g} + \frac{1}{h_e} \right) U_e + A_3 \frac{U_e}{h_i} + A_w \right) + A_3 + T. \tag{5}$$

Water flow glazing can change its thermal performance by varying the mass flow rate per unit of surface, *ṁ*. *U* and *Uw* are two thermal transmittances that depend on the mass flow rate. The product *ṁc* denotes the potential of the water to transfer energy. The *Ui* and *Ue* thermal transmittances were obtained from the convective heat coefficients *he*, *hi*, *hg*, and *hw*. The thermal resistance (1/*Ue*) is the sum of the thermal resistances from the water chamber to the outdoors. Similarly, the thermal resistance (1/*Ui*) is the sum of the thermal resistances from the water chamber to the indoors. Hence, *Ue* represents the thermal transmittance between the water chamber and the outdoors, and *Ui* the thermal transmittance between the water chamber and the indoors. *Aw* is the water absorptance, whereas *A*1, *A*2, and *A*3 are the glass pane absorptances.
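The dependence of the heat flux on the mass flow rate can be sketched in code. The following is a minimal Python model of Equations (1)–(3), assuming the form *Uw* = *ṁc·Ui*/(*ṁc* + *Ue* + *Ui*), which is consistent with the limiting behavior described here (no water-chamber coupling at zero flow); all numeric inputs are hypothetical, not the prototypes' measured values.

```python
# Illustrative sketch of the WFG heat-flux model; hypothetical coefficients only.
C_WATER = 4186.0  # specific heat of water, J/(kg.K)

def transmittances(m_dot, u_i, u_e):
    """Return (U, U_w) in W/(m^2.K) for a mass flow rate m_dot in kg/(s.m^2).

    U follows Equation (3); U_w is assumed to be m_dot*c*U_i / (m_dot*c + U_e + U_i),
    so that U_w -> 0 with the flow stopped and U -> 0 at high flow rates.
    """
    mc = m_dot * C_WATER
    denom = mc + u_e + u_i
    return u_i * u_e / denom, mc * u_i / denom

def heat_flux(m_dot, u_i, u_e, g, theta_e, theta_i, theta_in, i0):
    """Heat flux q in W/m^2 through the glazing, Equation (2)."""
    u, u_w = transmittances(m_dot, u_i, u_e)
    return u * (theta_e - theta_i) + u_w * (theta_in - theta_i) + g * i0

# With the flow stopped, Equation (2) reduces to Equation (1):
q_static = heat_flux(0.0, 5.0, 5.0, 0.5, 30.0, 22.0, 22.0, 400.0)  # 220.0 W/m^2
```

Increasing the flow rate lowers *U* and raises *Uw*, which is how WFG shifts between an insulating envelope and an active heat-transfer element.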

This study included two separate prototypes. The first one was placed in Peralveche, Spain (latitude 40°36′42″ N, longitude 2°26′57″ W, altitude 1111 m a.s.l.). The second one was built and tested in Sofia, Bulgaria (latitude 42°39′1″ N, longitude 23°23′26″ E, elevation 590 m a.s.l.). The Peralveche WFG cabin was made of double glazing with a water chamber. Figure 1 shows the WFG cabin and energy system schematics.

**Figure 1.** Peralveche prototype: (**a**) schematics of energy system: 1—plate heat exchanger, 2—water circulation pump and precision flow meters, 3—wall water flow glazing panels, 4—plate heat exchanger, 5—water circulation pump and precision flow meters, 6—roof water flow glazing panels, 7—borehole heat exchangers; (**b**) Water Flow Glazing (WFG) cabin.

The glass panes used to build this glazing assembly were Planiclear 6 + 6 mm with Polyvinyl Butyral layers (1 × 0.38 mm), a 16 mm water chamber, and Planiclear 8 + 8 mm with Polyvinyl Butyral layers (1 × 0.38 mm). The cabin had six sides, each a square of 2 m × 2 m. The WFG panels were connected to four borehole heat exchangers drilled 50 m beneath the cabin. The WFG cabin managed two different closed-loop circulating systems. The first loop consisted of pipes that distributed refrigerant fluid from the borehole heat exchangers to the circulation pump. The second loop circulated water through the vertical facades and the horizontal roof, with the mass flow rate set to 0.9 L·min<sup>−1</sup>·m<sup>−2</sup>. Table 1 shows the glazing's thermal and spectral values from previous articles [38,39]. This WFG cabin was compared with another one, referred to as the Reference prototype, which had double glazing with an air cavity.

**Table 1.** Thermal properties of Peralveche glazing.


<sup>1</sup> Values taken from [39].

As for the Sofia prototype, the square plan dimensions were 7 m × 7 m. The walls were parallelograms of 7 m by 3.4 m composed of five WFG modules facing east, west, and south, respectively. Figure 2 shows the unitized facade module. The circulating system incorporated a solar water pump and a plate heat exchanger connected to the water distribution pipes. The unitized WFG panels measured 3000 mm high and 1300 mm wide, and the mass flow rate was set at 2 L·min<sup>−1</sup>·m<sup>−2</sup>. The southern glazing employed the following layers: a single 10 mm Diamant glass pane, a 16 mm argon chamber, a low-emissivity Planitherm XN coating, Planiclear (8 mm), 2 Saflex® Solar (SG41), Planiclear (8 mm), a water chamber (24 mm), Planiclear (8 mm), 4 Saflex® standard clear (RB11), and Planiclear (8 mm). The eastern and western glazing composition was meant to reject energy, so a highly reflective coating was included instead of the low-emissivity coating.

**Figure 2.** Sofia prototype description: (**a**) section; (**b**) detail of the unitized facade and circulating device; (**c**) elevation of the unitized module.

Table 2 shows the thermal properties of the glazing, taken from a previous article [40]. The total glass area was 60 m<sup>2</sup>. This WFG cabin was compared with another one, referred to as the Reference prototype, with triple glazing comprising an air cavity and an argon cavity. The glass layers and coatings were the same as in the WFG.


**Table 2.** Thermal properties of Sofia glazing.

<sup>1</sup> Values taken from [40].

#### *2.2. Life Cycle Energy (LCE) Analysis*

The total embodied energy of building elements involves the energy consumed directly in primary material extraction, manufacturing, and assembly. These amounts of energy constitute the element's initial embodied energy, whereas the operational energy includes the heating and cooling energy consumption. Primary energy also accounts for the energy required to produce the final energy consumed in the building, and it varies according to fuel type and transportation losses. Primary energy is proportional to energy-related CO2 emissions [41,42]. Life Cycle Energy comprises the building's operational energy, its initial and recurrent embodied energy over its lifetime, and, finally, the energy for demolition and disposal. Equation (6) was used to calculate the Life Cycle Energy.

$$E = E_i + E_{err} + E_o \cdot n + E_d, \tag{6}$$

where *E* is the total energy of the building element, *Ei* is the initial embodied energy, *Eerr* is the recurrent embodied energy for future maintenance and refurbishment (5% of the initial embodied energy), *Eo* is the total annual operational energy, *n* is the lifetime of the element in years, and *Ed* is the embodied energy required for demolition and disposal (3% of the initial embodied energy). Some LCE studies include energy requirements such as lighting, cooking, hot water, and appliances. However, this study includes only the heating and cooling energy through the envelope. For the purposes of this paper, the energy absorbed by the WFG panels was considered as renewable energy production over the operational time in the final energy balance. Therefore, the total annual operational energy was determined by subtracting the energy provided by the WFG panels from the total energy consumption.
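Equation (6) and the stated percentages can be sketched as follows; the 5% and 3% fractions come from the text, while the numeric inputs in the example are hypothetical.

```python
def life_cycle_energy(e_initial, e_annual_operational, n=50):
    """Equation (6): E = E_i + E_err + E_o*n + E_d, with the recurrent embodied
    energy E_err taken as 5% of E_i and the demolition/disposal energy E_d as
    3% of E_i, as stated in the text. Units follow whatever E_i is given in."""
    e_err = 0.05 * e_initial
    e_d = 0.03 * e_initial
    return e_initial + e_err + e_annual_operational * n + e_d

# Hypothetical example: 100 GJ embodied, 2 GJ/year net operational energy.
total = life_cycle_energy(100.0, 2.0)  # 100 + 5 + 100 + 3 = 208.0 GJ
```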

#### *2.3. Life Cycle Cost (LCC) Analysis*

In addition to improving thermal performance and reducing environmental impact, the design of an efficient building envelope needs to pay attention to reducing economic costs. Building owners demand cost-effective building envelope elements in a sustainable building design. Therefore, for effective decision-making, it is essential to have complete insight into the construction and running costs throughout the building's lifespan. The Life Cycle Cost (LCC) approach is based on optimizing design solutions by minimizing the sum of construction and operating expenses over the building lifetime. The building envelope's Life Cycle Cost includes its construction cost, *C*1, operation cost, *C*2, and demolition cost, *C*3. The present value interest factor of an annuity (PVIFA) was used to calculate the present value of the operation cost. The present value interest factor (PVIF) was used to estimate the present value of the demolition and disposal cost at the end of the envelope life cycle. *C* represents the total Life Cycle Cost of the building envelope and the heating and cooling system; *r* represents the interest rate; *n* represents the design operating life in years. The proposed Life Cycle Cost model for building facades is presented in Equation (7).

$$C = C_1 + C_2\left(\frac{1}{r} - \frac{1}{r(1+r)^n}\right) + C_3\left(\frac{1}{(1+r)^n}\right) \tag{7}$$
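A minimal sketch of Equation (7), using the study's 2% discount rate and 50-year lifetime as defaults; cost inputs are placeholders.

```python
def life_cycle_cost(c1, c2_annual, c3, r=0.02, n=50):
    """Equation (7): C = C1 + C2*PVIFA(r, n) + C3*PVIF(r, n).

    PVIFA discounts the recurring yearly operation cost; PVIF discounts the
    one-off demolition/disposal cost at the end of the life cycle.
    """
    pvifa = 1.0 / r - 1.0 / (r * (1.0 + r) ** n)  # annuity present-value factor
    pvif = 1.0 / (1.0 + r) ** n                   # single-payment present-value factor
    return c1 + c2_annual * pvifa + c3 * pvif
```

At *r* = 2% and *n* = 50, PVIFA ≈ 31.42, so each euro of yearly operation cost adds roughly €31.42 of present value to the total.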

#### **3. Results**

This section starts by defining the parameters used to quantify environmental performance: primary energy, equivalent CO2 emissions, and cost. The building reference time was defined as 50 years.

Depending on the local context, the embodied impact may surpass the operational impact. The indicators that measure energy performance can be split into economic factors and physical energy factors.

#### *3.1. Embodied Energy Calculation*

The building elements' embodied energy was taken from different sources, whereas the operational energy was calculated with experimental data from the prototypes. It was assumed that the structures were not renovated and that their usage mode did not change throughout their usable life. From the viewpoint of the various stages of a building's life cycle, in cold climates the operational stage can account for upwards of 80% of the energy use, whereas building materials and in-situ construction account for 10–20% [43]. For simplification purposes, the energy for demolition and disposal was taken as a percentage of the total primary energy. Embodied Energy (EE) is the total energy needed to produce goods and services, including processing, mining and extraction, manufacturing, and transport of products. Table 3 shows the EE and its associated emissions, quantified on the basis of the ITeC database [18]. The following assumptions were made: the embodied energy associated with the replacement, refurbishment, and substitution of materials and products was assumed to be 5% every ten years [44]; the embodied energy associated with demolition and disposal was assumed to be 3% of the total building Life Cycle Energy [45–47]; and the considered energy for replacement, demolition, and transportation to landfill does not exceed 10 kWh·m<sup>−2</sup>.


**Table 3.** Embodied Energy (EE) and Embodied Carbon (EC) of materials.

<sup>1</sup> A circulating device can serve 4 m2 of WFG.

The energy systems used for this study were borehole heat exchangers coupled with a ground-source heat pump for the Peralveche WFG prototype, an Air-to-Water heat pump for the Sofia WFG prototype, and Air-to-Air heat pumps for both reference prototypes. All the heat pumps were rated between 7 and 20 kW. The chosen system boundaries included the production of the components, starting from raw materials, the use stage, and the dismantling stage of both systems, over a time horizon of 50 years. The closed-loop borehole heat exchangers were made of high-density polyethylene vertical pipes carrying a water (78%) and ethylene glycol (22%) mixture. The installation process included drilling boreholes and trenches, inserting a vertical loop, and grouting the borehole with a concrete and bentonite mixture [48–50]. The dismantling process included the disposal of the glycol, the sealing of the borehole, and the disposal of the heat pump at its end of life. Table 4 shows the embodied energy, emissions, and cost associated with the energy systems used to operate the different prototypes.

**Table 4.** Embodied Energy (EE) and Embodied Carbon (EC) of energy systems.


<sup>1</sup> Four 50 m borehole heat exchangers, diameter 118 mm.

The compressor and structure were made of reinforced steel, and the evaporator and condenser of low-alloyed steel. The pipes, cables, and expansion valves were made of copper. The pipes were insulated with a polymer and the cables with PVC. The refrigerant was assumed to be the same (tetrafluoroethane) for all the heat pumps. The heat pumps were considered maintenance-free, and their lifetime was taken as 25 years.

#### *3.2. Renewable Energy Production*

Renewable primary energy (RPE, in kWh·m<sup>−2</sup>) was produced as the water flowed through the glazing. The data were measured over a year. The RPE can cover part of the winter heating needs and was subtracted from the non-renewable primary energy needed to keep the prototype's indoor temperature within a comfort range. The heat absorption rate in the southern facades is closely related to the glazing composition and orientation. When solar radiation hits the glazing units, the water chamber absorbs part of that energy. Heating was assumed to be delivered by a hydronic system using the energy absorbed by the WFG envelope. When needed, a heat pump operated to deliver the energy required to maintain comfortable indoor conditions. Figure 3 shows the water heat gain of the prototypes' envelopes. The horizontal panels in Peralveche showed the largest water heat gain in July, with 81.44 kWh·m<sup>−2</sup>. Among the southern facades, the Sofia prototype showed the largest heat gain: its mass flow rate was set higher than in any other prototype (2 L·min<sup>−1</sup>·m<sup>−2</sup>), and its southern glazing had the highest absorptance. The peak solar heat gain in the Sofia prototype was 97.39 kWh·m<sup>−2</sup> in July.

**Figure 3.** Experimental data. Water heat gain in WFG envelopes; (**a**) Peralveche prototype WFG; (**b**) Sofia WFG prototype in kWh·m<sup>−</sup>2.

#### *3.3. Operational Energy*

Heating and cooling energy loads (in kWh·m<sup>−2</sup>) were calculated using Equation (1) for the reference glazing and Equation (2) for WFG. The inputs were measured over one year. Real data, obtained from the prototypes throughout the year, allowed a direct comparison between the WFG cabin and the Reference cabin. Figure 4 shows the heating and cooling energy, in kWh·m<sup>−2</sup>, needed to keep the interior within the comfort temperature range over the year. The figure considers only the energy that goes through the transparent envelope. The cooling load of the Peralveche Reference cabin showed by far the highest energy consumption, with a peak of 118 kWh·m<sup>−2</sup> in July.

**Figure 4.** Experimental data. Heating and cooling energy through the transparent building envelopes in kWh·m<sup>−</sup>2.

Table 5 shows conversion factors between non-renewable primary energy, NRPE, and final energy, FE (kWhNRPE/kWhFE), as well as CO2 emissions conversion factors provided by the Spanish Regulation of Thermal Installations in Buildings [42]. The price per kWh of different energy carriers or fuels was taken from Eurostat documents [43].

**Table 5.** Conversion factor (f) from final energy (FE) to non-renewable primary energy (NRPE), CO2 emissions, and price of energy carriers.


<sup>1</sup> values taken from [43].

The non-renewable primary energy consumption of the existing building per unit of envelope area and year, *NRPE*, in kWh·m<sup>−2</sup> per year, was calculated using Equation (8).

$$NRPE = f_{NRPE} \cdot FE_{heat} + f_{NRPE} \cdot FE_{cool}, \tag{8}$$

The equivalent CO2 emissions of the existing building per unit of envelope area and year, in kgCO2.m−<sup>2</sup> per year, were calculated with Equation (9).

$$CO2_{eq} = f_{CO2} \cdot FE_{heat} + f_{CO2} \cdot FE_{cool} \tag{9}$$

Tables 6 and 7 illustrate the non-renewable primary energy consumption (NRPE) with different energy generators in Peralveche and Sofia, respectively. The conversion factor between final energy and non-renewable primary energy (kWh NRPE/kWh FE) and the CO2 emission factor for electricity were taken from Table 5 for mainland electricity: 1.954 for the conversion factor and 0.331 for the CO2 emissions. In the final energy balance, the renewable energy production was subtracted from the WFG prototype heating loads. The ground-source heat pumps' performance depends on the source inlet temperature at the heat pump and on the inlet temperature of the WFG (*θIN*). A typical value of the source inlet temperature ranges from 20 °C in ground-source heat pumps (GSHP) to 35 °C in other Water-to-Water (W-W) heat pumps. The parameters that influence Air-to-Air (A-A) heat pumps' performance are the dry-bulb exterior air temperature and the dry-bulb interior return air temperature [51].
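Equations (8) and (9) can be sketched with the mainland-electricity factors quoted above; the heating and cooling loads in the example are hypothetical.

```python
# Conversion from final energy to non-renewable primary energy and CO2
# emissions, using the mainland-electricity factors from the text:
# 1.954 kWh NRPE per kWh FE and 0.331 kg CO2 per kWh FE.
F_NRPE = 1.954
F_CO2 = 0.331

def nrpe(fe_heat, fe_cool):
    """Equation (8): NRPE in kWh per m^2 and year."""
    return F_NRPE * fe_heat + F_NRPE * fe_cool

def co2_eq(fe_heat, fe_cool):
    """Equation (9): equivalent emissions in kg CO2 per m^2 and year."""
    return F_CO2 * fe_heat + F_CO2 * fe_cool

# Hypothetical loads of 10 kWh/m^2 each for heating and cooling:
annual_nrpe = nrpe(10.0, 10.0)     # ~39.08 kWh/m^2 per year
annual_co2 = co2_eq(10.0, 10.0)    # ~6.62 kg CO2/m^2 per year
```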


**Table 6.** Peralveche prototype. Final energy, non-renewable primary energy, and CO2 emissions.

<sup>1</sup> COP values are taken from [51].



**Table 7.** Sofia prototype. Final energy, non-renewable primary energy, and CO2 emissions.

<sup>1</sup> COP values are taken from [51].

The yearly operating energy was assumed to remain steady during the entire building operation. The resource mix supplying electricity to the buildings was assumed to be unvarying. The HVAC systems' efficiency and the operation schedule were assumed to remain unchanged during the Life Cycle Assessment.

#### **4. Discussion**

WFG can be a part of hydronic heating and cooling systems, and it is compatible with ground-source heat pumps and boilers. In this study, the authors have considered only mainland electricity as the energy source, ground source heat pumps for the Peralveche WFG prototype, Air-to-Water heat pumps for the Sofia WFG prototype, and Air-to-Air heat pumps for the reference prototypes.

#### *4.1. Life Cycle Cost Evaluation*

The method used to evaluate the Life Cycle Cost (LCC) was presented in Section 2, and the parameters used to calculate the envelope's Life Cycle Cost and the energy system's initial cost were discussed in Section 3. The operation cost was calculated with the data from Table 5 and the energy prices for mainland electricity. Maintenance costs for the envelope over 50 years were calculated as a percentage of the initial cost (1% for the reference glass and 5% for WFG), whereas the heat pump's lifetime was 25 years. The cost of the studied WFG unitized facade was 985 €·m<sup>−2</sup>, whereas the price of the circulating water pump was 20 €·m<sup>−2</sup>. Replacing the water pumps after a 10-year cycle is included in the maintenance cost. Finally, the demolition and disposal costs were calculated as a percentage of the initial cost (3%). The total proposed Life Cycle Cost for the building envelope and energy system was therefore calculated according to Equation (7). The total construction costs of the envelopes and the energy systems were calculated by multiplying each envelope component's quantity by its unit price. The operation cost was converted to present value based on the annual heating and cooling loads through the envelope. The demolition and disposal values were taken as 3% of the total construction costs and converted to present value. This article considered a discount rate of 2%, calculated as an average of the harmonized consumer price index in Spain over the last 20 years. The price of energy was taken from Eurostat reports [43]. The material prices of other components come from the ITeC database [18]. Table 8 shows the cost analysis parameters for all cases.

**Table 8.** Cost analysis parameters.


Water flow glazing envelopes showed a higher coefficient *C*1, as the initial cost reflected the investment in borehole heat exchangers and circulating devices. However, the coefficient *C*2 showed that the reference yearly operation cost was twice the WFG cost in Peralveche and 1.5 times the WFG cost in Sofia. The total cost considered a 50-year life cycle by converting the operation and disposal costs to present value. In Peralveche, the reference prototype's cost surpassed the WFG one; in Sofia, the WFG cost was slightly higher than the reference one.
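The trade-off reflected in *C*1 and *C*2 — a higher initial cost against a lower yearly operation cost — can be illustrated with a discounted cumulative-cost comparison. The cost figures below are hypothetical placeholders, not the study's values.

```python
def cumulative_costs(c1, c2_annual, r=0.02, years=50):
    """Year-by-year cumulative present-value cost (construction + operation)."""
    totals, total = [], float(c1)
    for t in range(1, years + 1):
        total += c2_annual / (1.0 + r) ** t
        totals.append(total)
    return totals

def breakeven_year(option_a, option_b):
    """First year in which option_a becomes cheaper than option_b, else None."""
    for year, (a, b) in enumerate(zip(option_a, option_b), start=1):
        if a < b:
            return year
    return None

# Hypothetical: an expensive envelope with half the yearly operation cost.
wfg = cumulative_costs(c1=17_000, c2_annual=600)
ref = cumulative_costs(c1=7_000, c2_annual=1_200)
year = breakeven_year(wfg, ref)  # the year the higher upfront cost pays off
```

With these placeholder numbers the break-even falls around year 21; the actual crossover depends entirely on the real *C*1 and *C*2 values and the discount rate.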

#### *4.2. Life Cycle Energy Evaluation*

Life Cycle Energy included the materials' initial embodied energy, the operational energy for heating and cooling over 50 years, the recurrent embodied energy over the lifetime, and, finally, the energy for demolition and disposal. Equation (6) was used to calculate the Life Cycle Energy. Table 9 shows the initial embodied energy, *Ei*, the recurrent embodied energy for future maintenance, *Eerr*, calculated as a percentage of the initial embodied energy, the total annual operational energy, and the embodied energy required for demolition and disposal, *Ed*.



The largest initial embodied energy, *Ei*, was found in both WFG prototypes. It was larger in the Peralveche prototype than in Sofia because of the high embodied energy of the borehole heat exchangers. However, the lower operational energy over 50 years compensated for the higher initial energy consumption. Table 10 illustrates the CO2 emissions, the parameter used to assess the global warming potential (GWP) in kgCO2eq. The life cycle emissions included the same phases: manufacture and transport of construction materials, maintenance, operation of the building, and waste disposal.


**Table 10.** Life cycle Embodied Carbon in kgCO2eq.

Improving the thermal performance of the building envelope with WFG caused an increase in the initial cost. However, the amount of energy and CO2 emissions declined in both cases over the considered life cycle.

#### *4.3. Multi-Index Evaluation Model*

The optimum thermal performance of a building envelope is often accompanied by an increase in cost and environmental load. This study established a multi-index evaluation model, including non-renewable primary energy consumption (NRPE), global warming potential (GWP), and cost, as indicators to evaluate a building envelope. Figure 5 shows that the initial cost of the WFG prototype in Peralveche was double the reference one's cost because of the investment in borehole heat exchangers. Over a 50-year life cycle, however, the reference prototype's cumulative cost surpassed the WFG prototype's, due to the difference in operational cost. When it comes to Life Cycle Energy and global warming potential, the WFG showed a better performance: the reference prototype consumed three times as much energy as the WFG. The cumulative CO2 emissions for the WFG envelope were 25,452 kgCO2eq, whereas the reference glass envelope was responsible for 85,286 kgCO2eq. Although the initial cost increased by 100%, the total Life Cycle Energy decreased by 1145 GJ. The initial investment for the WFG system was estimated at €17,529, while the Reference system would require an initial investment of €7373. The WFG system would require €640.88 for maintenance per year, with an end-of-life demolition and removal cost of €7921. Summing these values and the initial investment, the final total Life Cycle Cost of the Peralveche WFG system is €40,611. Meanwhile, the Peralveche RC system would cost €1239 in maintenance per year, with an end-of-life demolition and removal cost of €3921. Combining these values with the initial investment cost, the final total Life Cycle Cost of the Peralveche RC system would be €47,767.
Therefore, in the Peralveche case, the construction and implementation of the WFG system would save the owner a total of €7156 over the total building life cycle. When it comes to Life Cycle Energy assessment and global warming potential, the WFG system in the Peralveche prototype used 571 GJ of energy during its lifetime and would be responsible for 25,452 kgCO2eq over its 50 years of use. Meanwhile, the Peralveche RC prototype was calculated to use 1716 GJ of energy and to be responsible for 85,286 kgCO2eq over its 50 years of use. Therefore, the Peralveche WFG system, compared to the RC, would save 1145 GJ of energy as well as 59,834 kgCO2eq of Embodied Carbon over its entire lifetime.
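As a sanity check, the Peralveche savings quoted in this paragraph follow directly from the per-system totals (values copied from the text):

```python
# Peralveche 50-year totals as reported in the text.
wfg = {"lcc_eur": 40_611, "lce_gj": 571, "gwp_kgco2eq": 25_452}
ref = {"lcc_eur": 47_767, "lce_gj": 1_716, "gwp_kgco2eq": 85_286}

# Difference per indicator: reference minus WFG.
savings = {key: ref[key] - wfg[key] for key in wfg}
# savings: 7156 EUR, 1145 GJ, and 59,834 kgCO2eq, matching the figures quoted.
```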

Figure 6 shows that the HVAC operating-cost savings over 50 years did not compensate for the higher initial cost of the WFG prototype. Over a 50-year life cycle, however, the WFG envelope showed a better performance in Life Cycle Energy and global warming potential. The cumulative Life Cycle Energy for the WFG envelope was 2377 GJ, whereas the Life Cycle Energy of the reference glass was 3608 GJ. A model integrating Life Cycle Cost with Life Cycle Energy was used to assess the envelope schemes and select the optimal one in the decision-making process. The Sofia WFG system was estimated to cost €65,626 initially, while the Sofia Reference system would cost €40,795.

**Figure 5.** Life Cycle Assessment outcomes of the Peralveche prototype.

**Figure 6.** Life Cycle Assessment outcomes of the Sofia prototype.

For the Sofia prototypes, the WFG system would require €1320 for maintenance per year, with an end-of-life demolition and removal cost of €10,046. Summing these values with the initial investment, the final total Life Cycle Cost of the Sofia WFG system is €110,865. Meanwhile, the Sofia RC system would cost €1923 in maintenance per year, with an end-of-life demolition and removal cost of €4923. Combining these values with the initial investment cost, the final total Life Cycle Cost of the Sofia RC system would be €103,066. The Sofia WFG system would thus cost an additional €7799 compared to Sofia RC over the structure's lifespan.

The WFG system in the Sofia prototype used 2377 GJ of energy and would be responsible for 136,250 kgCO2eq during its 50 years of use. Meanwhile, the Sofia RC prototype used 3608 GJ of energy and accounted for 197,625 kgCO2eq during its 50 years of service. In this case, the WFG system would save 1231 GJ of energy while emitting 61,375 kgCO2eq less Embodied Carbon over its entire lifetime compared to the Reference system.

#### **5. Conclusions and Limitations**

This paper aimed to develop a conceptual framework to assess a building envelope's energy consumption throughout its entire life cycle. By analyzing two case studies, the results can assist building designers during the decision-making process at early stages and support considering water flow glazing as an option to reduce energy consumption and CO2 emissions.

As has been demonstrated, WFG technology carries a higher initial construction cost than the reference prototypes, requiring a substantial early investment because of the additional equipment needed for the successful operation of WFG panels. The Peralveche reference prototype's initial cost is 42% of the Peralveche WFG cost, whereas the Sofia reference prototype's initial cost is 62% of the WFG initial cost. The steeper initial investment for WFG technology can deter building design professionals. However, when the total Life Cycle Costs of the WFG and RC systems are taken into account, WFG can potentially be a much more economical option.

It is not until each system's overall Life Cycle Costs are considered that the economic benefits of WFG systems become apparent. For this study, the building lifespan was set at 50 years. The total Life Cycle Cost (LCC) of the Peralveche WFG is 85% of the total Reference system LCC, whereas the Sofia Reference LCC is 92% of the total WFG LCC. The conclusion derived from these findings is that selecting high-performance triple glazing is better than WFG in Life Cycle Cost terms. A WFG system is better over the operation phase only when compared with a traditional double-glazing system, as demonstrated in Peralveche.

Another important factor in the comparison between a traditional glazing system and a WFG system is the Life Cycle Energy (LCE) and global warming potential (GWP). The Peralveche WFG system, compared to the Peralveche reference prototype, demonstrated savings of 66% in LCE with a 70% reduction in CO2 emissions. The Sofia results follow the same trend: the Sofia WFG demonstrated 36% savings in LCE with a 30% reduction in CO2 emissions. This analysis shows that a high-performance triple glazing system improves the Reference prototype's performance, but WFG performs better in LCE and GWP in both cases.

The WFG system, however, does have several limitations. Firstly, there is an apparent lack of interoperability with the rest of the building systems present in modern structures, especially the ventilation system. In addition, it is not always possible to retrieve all the detailed information needed as input for Water Flow Glazing operation. Maintaining the operation of the building systems and controlling the building's indoor environmental conditions according to user comfort is of the utmost importance, and smart meters can assist in this. However, these devices pose considerable limitations concerning the quality, frequency, and accuracy of data. Taking these limitations into account, several future research steps should be undertaken. Firstly, a testing method to evaluate the performance of the unitized module components should be developed. Secondly, more case studies in different climate regions should be analyzed. Thirdly, a management system to control the water pump in the circulating device should be developed; the life cycle of the water pump, another point of future research, depends on the operating hours and the on-off cycle. Finally, a whole evaluation protocol, including maintenance, environmental, and economic aspects, should be integrated, to be used by stakeholders involved in the design, maintenance, and monitoring process of future projects.

After monitoring the WFG systems for a year, several uncertainties, malfunctions, and system issues must be addressed. WFG systems are limited by a high initial investment cost coupled with the need for an energy management system integrated with the other required equipment, especially if the system is coupled with borehole heat exchangers combined with a ground-source heat pump. The heating and cooling devices must be adequately dimensioned to avoid malfunctions, especially the Air-to-Water heat pump. Further research must monitor energy performance more accurately by attaching sensors that measure the electricity powering the heat pump, in order to compare the actual thermal and electricity consumption. In addition, further standardization of the manufacturing and deployment process is required to bring down upfront investment costs and payback periods. Finally, another potential research avenue is the control of indoor relative humidity, which could be achieved by integrating WFG with efficient ventilation systems.

**Author Contributions:** Conceptualization, B.M.S., F.d.A.G., J.A.H.R.; methodology, B.M.S., F.d.A.G.; software, J.A.H.R.; formal analysis, M.G., F.d.A.G.; data curation, F.d.A.G., J.A.H.R.; writing—original draft preparation, B.M.S., F.d.A.G., J.A.H.R.; writing—review and editing, M.G., B.M.S.; visualization, M.G., B.M.S., B.L.A.; supervision, J.A.H.R., B.L.A.; project administration, F.d.A.G.; funding acquisition, F.d.A.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This article has been funded by KSC Faculty Development Grant (Keene State College, NH, USA).

**Acknowledgments:** This work was supported by program Horizon 2020-EU.3.3.1: Reducing energy consumption and carbon footprint by smart and sustainable use, project Ref. 680441 InDeWaG: Industrialized Development of Water Flow Glazing Systems. The authors wish to thank the municipality of Peralveche, Spain, for its generous support.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

#### **Abbreviations**


*Uw* Thermal transmittance (water chamber–interior) (W/(m2 K)).

#### **References**


## *Article* **Zero Energy Building Economic and Energetic Assessment with Simulated and Real Data Using Photovoltaics and Water Flow Glazing**

**Fernando del Ama Gonzalo 1,\*, Belen Moreno Santamaria 2, José Antonio Ferrándiz Gea 3, Matthew Griffin <sup>1</sup> and Juan A. Hernandez Ramos <sup>4</sup>**


**Abstract:** The new paradigm of Net Zero Energy Buildings is a challenge for architects and engineers, especially in buildings with large glazing areas. Water Flow Glazing (WFG) is a dynamic façade technology shown to significantly reduce heating and cooling loads in buildings. Photovoltaic panels placed on building roofs can generate electricity from solar energy without producing greenhouse gases in operation or taking up additional building footprint. This paper investigates the techno-economic viability of a grid-connected solar photovoltaic system combined with water flow glazing. An accurate assessment of the economic and energetic feasibility is carried out through simulation software and on-site tests on an actual prototype. The assessment also includes an analysis of global warming potential reduction. A prototype with a WFG envelope was tested. Measured data from the WFG prototype showed primary energy savings of 62% and a 60% reduction in CO2-equivalent emissions compared with a reference triple glazing. Finally, an economic report of the photovoltaic array presented the Yield Factor and the Levelized Cost of Energy of the system. Savings over the operating lifetime can compensate for the high initial investment that these two technologies require.

**Keywords:** water flow glazing; building integrated PV panels; levelized cost of energy

**Citation:** del Ama Gonzalo, F.; Moreno Santamaria, B.; Ferrándiz Gea, J.A.; Griffin, M.; Hernandez Ramos, J.A. Zero Energy Building Economic and Energetic Assessment with Simulated and Real Data Using Photovoltaics and Water Flow Glazing. *Energies* **2021**, *14*, 3272. https://doi.org/10.3390/en14113272

Academic Editor: Francesco Asdrubali

Received: 5 May 2021; Accepted: 28 May 2021; Published: 3 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

As the world economy develops, the building industry remains a major consumer of energy and source of carbon dioxide emissions. Consumers are central participants in this context, as demand flexibility is required to accommodate the intermittent nature of most renewable energy sources [1]. The poor performance of existing buildings offers significant opportunities for energy retrofits. Integrating renewable solar systems into the building envelope, together with thermal and electric energy storage, can address this challenge [2]. The Energy Performance of Buildings Directive (EPBD) promotes policies intended to produce highly energy-efficient and decarbonized buildings by 2050 [3]. From 31 December 2020, all new buildings must be Nearly Zero Energy Buildings [4]. Zero-energy buildings are required to produce enough energy from renewable technologies to balance their energy consumption [5]. Designing passive measures and implementing highly efficient building systems are the first steps toward Net Zero Energy objectives [6]. The second step is the integration of active surfaces for energy generation and management. Solar thermal collectors and photovoltaic panels are the most reliable ways to provide electrical and thermal energy from renewable sources [7]. Therefore, current and future architectural design philosophies must consider the floor, roof, walls, and any other available surface as potential locations for energy generation devices. The building's energy footprint is a new concept to be considered at the design stage [8]. In relation to the architectural design, the possible renewable supply options lie within the building's physical footprint [9,10]. Architects and building designers face the challenge of designing buildings together with their energy footprint and of considering how this energy footprint will influence the habitats, cities, and landscapes of tomorrow [11].

Photovoltaic panels placed on building roofs can generate enough electricity without producing greenhouse gases or occupying additional building footprint. The integration of Photovoltaic (PV) systems in the building envelope can reduce primary energy consumption and decrease global warming potential by cutting down on CO2 emissions [12,13]. Energy generated by PV panels can be used where it is consumed and can power any building demand, from lighting to electrical heating and cooling generators. The system can also reduce construction costs when used to substitute traditional building materials [14]. Some authors regard building-integrated PV (BIPV) panels as an optimal solution for zero energy building design due to their ability to generate energy [15,16]. PV modules are available at a price of about 1.5 €/W, although prices are lower in the European market, with a minimum of about 0.75 €/W [17]. A global annual growth rate of 40% is expected in the BIPV installed capacity from 2017 to 2021 [18,19]. Although policies promoting BIPV deployment have been introduced over the last few years, barriers remain to the massive use of this technology in the building industry [20]. The lack of national programs or building codes covering the technical specifications of the devices, together with long payback periods for solar installations, hinders the large-scale deployment of this technology [21]. Financial barriers continue to be the primary obstacle to a renewably powered community [22]. Battery packages for energy storage are required in areas without access to electric grids, although integrating batteries raises the cost of the energy produced [23,24]. Extensive research has been conducted to assess PV cell efficiency. The reported efficiencies of the polycrystalline silicon and amorphous silicon PV cells usually used in BIPV are 25.7% and 10.2%, respectively [25,26].

Researchers commonly recognize the impact of the envelope on the building energy balance. Many design features, such as insulation, structural stiffness, and aesthetics, pertain to fabrication, assembly, and deployment [27]. The use of curtain wall systems enables effective daylighting in high-grade institutional and commercial buildings [28]. A stick curtain wall system comprises mullions, transoms, and glass panels assembled on-site [29]. Unitized curtain walls are produced, constructed, and glazed in an off-site facility; these unitized units are transported to the construction site and attached to the building structure.

There are many current trends in energy efficiency for transparent building envelopes. These include, but are not limited to, addressing the solar heat gain coefficient (SHGC) with coatings on glazing [30] and dynamic glazing solutions such as electrochromic glazing (EC) [31], suspended particle device glazing (SPD) [32], and polymer dispersed liquid crystal (PDLC) [33]. In addition, one can improve the thermal resistance of the building envelope by using multi-layered glazing and reduce cooling demand with shading devices [34,35]. A fluid medium that cools the glass itself can preheat the air before it enters the interior space [36]. An alternative to circulating air is a circulating water chamber, which captures solar energy and turns an otherwise rejected heat load into a renewable energy source [37]. This idea is known as Water Flow Glazing (WFG), a disruptive active façade technology that aims to complement the HVAC system of a building. WFG has been studied as a curtain wall component. Energy savings depend on the water layer's absorption for energy management of the building envelope [38,39]. Building envelopes with high window-to-wall ratios significantly impact the built environment's ecological footprint due to their high operational energy demand. WFG panels absorb heat, which can be transported to a thermal storage unit [40]. Some authors successfully tested this technology and showed a saving potential between 52% and 72% compared with traditional double glazing and between 34% and 61% compared with triple glazing [41]. Other authors have assessed the WFG life cycle, showing that its cost payback time and energy payback time are less than three years [42]. Some experimental facilities called 'water houses' have been built to evaluate their viability and compare the water-filled envelope with existing construction techniques [43].
The present article continues this line of research, presenting a new water flow glazing unitized façade. This façade has been tested in a real-world facility built in Sofia, Bulgaria. The objective of this paper was to assess the energy performance and the economic feasibility of a WFG façade coupled with a PV array system and a heat pump to improve the energy resilience of buildings while allowing for desirable architectural features. The first section describes the proposed system and presents the data behind this research. The second section explains the performance indicators used to evaluate the system operation. This study includes an analysis of the influence of variable inputs on technical performance and economic feasibility. The article further presents a model for expanding the deployment of the PV energy source. The potential of renewable energy for buildings and the reduction in CO2 emissions can be influenced by government policies and strategic measures.

The main novelties of this work comprise several goals. Firstly, fulfilling the demand of an office building for space heating and cooling using a PV-driven heat pump. Secondly, comparing Building Information Modeling (BIM) and EnergyPlus simulations with measured data from a real facility. Thirdly, performing dynamic simulations to analyze the thermo-economic performance at different PV peak powers by means of two different software tools. Finally, taking into account and presenting technical and economic parameters, including the tilt angle, the peak power of the PV array, the electricity price, and the conversion factors for primary energy.

This paper is organized into five different sections. Section 2 contains a description of the tested facility, weather data from the site, and a description of the water flow glazing system and PV array. Section 3 presents actual data from the tested facility, calculates the heating and cooling loads, showcases the renewable energy production of WFG, and finally demonstrates simulation data from commercial software. Section 4 takes into consideration the energy and economic analysis. Lastly, Section 5 covers conclusions and limitations.

#### **2. Materials and Methods**

In this section, the authors explain the testing facility and provide an overview of the water flow glazing envelope, the energy system employed in the testing facility, and the PV array system. The testing facility was a structure built in Sofia, Bulgaria, from which data were collected over the span of one year. The structure was built using water flow glazing panels, a façade typology based on a triple-glazed panel: the chamber towards the exterior was a traditional gas-filled chamber, while the interior chamber was filled with a circulating fluid medium. The energy system used in this prototype was a water-to-water heat pump that collected heat from the water flow glazing panels and stored it in a buffer tank below ground level. When needed, this heat was recirculated into the building. Finally, a photovoltaic (PV) array provided the required electricity to the building and building peripherals, including heating, lighting, and computer systems.

#### *2.1. Description of Testing Facility and Energy Management System*

WFG modules were tested in an experimental facility built in Sofia, Bulgaria (42°39′1″ N, 23°23′26″ E, elevation: 590 m a.s.l.). The building is a stand-alone construction with a square geometry, deployed in a single storey with outer dimensions of 7 × 7 m and a ceiling height of 2.7 m, with an open office design and a central service core. Three of the facades are fully glazed from floor to ceiling, while the north façade is 100% opaque. The glass facades are composed of unitized WFG modules with dimensions of 2.7 × 1.3 m. The eastern, western, and southern facades were triple glazing with a 16 mm argon cavity and a 24 mm water cavity. The unitized WFG panels were assembled in a workshop and delivered for on-site installation with an aluminum enclosure that held the circulating device. A water pump, a plate heat exchanger, flow meters at the inlet and outlet, and temperature probes made up this circulating device. The design flow rate was 2 L/(min·m2). The southern glazing showed a high near-infrared (NIR) absorptance and a low far-infrared (FIR) absorptance, making it an ideal solution for large glazing facades in severely cold climates. The goal for the eastern and western glazing was to prevent heat from entering the internal area, especially in summer, through a high front reflectance and a low infrared absorptance. In addition, the water flow absorbed solar infrared radiation, increasing the water temperature within the window. The energy transported by the water is released in tanks so that thermal energy can be collected and utilized in radiant heating systems and domestic hot water. The renewable energy captured by the water flow glazing panels might be enough to meet the heating loads in winter; however, other renewable energy sources must be studied to address the electric loads over the year. Figure 1 illustrates the prototype, the WFG panels, and the building-integrated PV array. The southern WFG showed a low FIR absorptance, a high NIR absorptance, and a high front FIR reflectance; its ability to absorb heat made it the right solution for heating up water. The infrared absorptance was very low in the eastern and western WFG, whereas the infrared front reflectance was high; this glazing showed the best performance for preventing heat from entering the indoor space, but it was not relevant for heating water.

**Figure 1.** Description of Sofia's prototype glazing.

The space heating and cooling demand was met by an energy management system made of WFG panels, a cylindrical 370 L buffer tank with 2 cm of rigid insulation, and a water-to-water heat pump coupled to a building-integrated PV array. The tank's internal diameter was 0.6 m and its height 1.55 m. It was placed underneath the prototype, with the top accessible from a crawl space, and connected to the water-to-water heat pump. The PV array satisfied the electricity demand formed by: (i) the electrical appliances installed in the building, (ii) the lighting equipment, (iii) the space heating and cooling provided by the electric-driven heat pump, and (iv) the hydraulic auxiliary system for the WFG panels. The PV capacity was calculated to maximize the self-consumed renewable energy, and the PV array was sized to fulfill the electricity demand. When the PV output was lower than the demand, the remaining electricity was withdrawn from the power grid; when the PV output exceeded the demand, the excess was delivered to the grid. Therefore, no electricity storage system was assumed. Figure 2 shows the proposed layout, which consisted of two main thermal loops. The water flow glazing loop is based on water heated by the WFG modules and supplied to the buffer tank. The second loop connects the buffer tank with the water-to-water heat pump. In summer, the WFG panels absorbed solar radiation and released the heat into the buffer tank underneath the prototype, while the heat pump distributed cold water through the WFG façade and interior panels. In winter, the WFG façade panels absorbed solar radiation and heated up the buffer tank; hot water circulated through the interior WFG panels and released heat inside the building. The water-to-water heat pump worked as a backup system, operating when the buffer tank temperature was not high enough to supply the required heating load.
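The backup behaviour of the heat pump described above can be sketched in a few lines; the 35 °C supply setpoint below is a hypothetical threshold, not a value reported for the prototype.

```python
# Sketch of the buffer-tank/heat-pump backup logic; the supply setpoint
# is an illustrative assumption, not the prototype's actual control value.

def heating_supply_mode(tank_temp_c, supply_setpoint_c=35.0):
    """Serve the heating load from the buffer tank when it is warm enough;
    otherwise run the water-to-water heat pump as backup."""
    if tank_temp_c >= supply_setpoint_c:
        return "buffer_tank"
    return "heat_pump_backup"
```

In this scheme the heat pump only runs when solar gains stored in the tank cannot cover the required supply temperature, which matches the backup role described in the text.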

**Figure 2.** Description of Sofia's prototype energy management system. 1. Inverter; 2. Production meter; 3. Utility meter; 4. Customer panel; 5. Electric loads (heat pump, lighting, power); 6. Buffer tank; 7. Roof PV open rack; 8. WFG facade with building integrated PV panels; 9. WFG interior panels.

#### *2.2. Estimation of Solar Irradiances and Outdoor Temperature*

To properly calculate the total irradiance received on the test prototype, it must be broken down into the variables that, summed together, give the total irradiance. Firstly, the Global Horizontal Irradiance (Ghi) is the total irradiance received on a horizontal surface. This variable is calculated by summing the horizontal components of the diffuse and direct (beam) irradiance. Beam irradiance is the most relevant variable for technologies that rely on solar power. Secondly, the Diffuse Horizontal Irradiance (Dhi) is the horizontal component of the irradiance scattered by the atmosphere. In conjunction with beam irradiance, diffuse irradiance is a crucial component when calculating the performance of a solar module. Thirdly, the Direct Normal Irradiance (Dni) is the amount of solar irradiance delivered directly from the sun. The authors have estimated these solar irradiance components (direct beam, diffuse, and reflected radiation) to show the importance of each type.
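The decomposition described above (Ghi as the sum of the horizontal beam component and Dhi) can be sketched as follows; the sample irradiances and the solar zenith angle are illustrative assumptions, not measured site data.

```python
# Sketch of the Ghi decomposition; Dni, Dhi and the zenith angle below
# are illustrative values, not measurements from the Sofia prototype.
import math

def global_horizontal(dni, dhi, zenith_deg):
    """Ghi = horizontal beam component (Ebh) + diffuse horizontal irradiance."""
    ebh = dni * math.cos(math.radians(zenith_deg))  # horizontal projection of Dni
    return ebh + dhi

# Clear midday example: strong beam, modest diffuse
ghi = global_horizontal(dni=800.0, dhi=100.0, zenith_deg=65.0)
```

On an overcast day `dni` collapses toward zero and `ghi` is essentially the diffuse term alone, which is the behaviour described for the cloudy days in Figure 3.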

Figure 3 shows the solar irradiances received in January. Direct (Beam) Horizontal Irradiance (Ebh) is the horizontal component of the Direct Normal Irradiance. The figure shows that days one, two, and three were sunny, while days five and six were cloudy. On sunny days, both direct normal irradiance and diffuse radiation were present, whereas on cloudy days only diffuse radiation was present. Outdoor temperatures during these days ranged from −5 °C to 7 °C.

**Figure 3.** Solar irradiances and outdoor temperature in January.

Figure 4 shows the solar irradiances in July, where Direct (Beam) Horizontal Irradiance (Ebh) is the horizontal component of the Direct Normal Irradiance. During this month, the exterior temperature ranged from 15 °C to 30 °C.

**Figure 4.** Solar irradiances and outdoor temperature in July.

#### *2.3. Water Flow Glazing Thermal Properties*

Previous articles have presented the equations that describe the spectral and thermal behavior of water flow glazing under steady conditions [44,45]. The variable *m˙* denotes the mass flow rate per unit of surface; this variable can be manipulated to change the thermal performance of the water flow glazing (WFG) system. *c* is the specific heat capacity of the fluid medium, and the product *mc ˙* gives the capacity of the fluid medium to absorb heat. Equation (1) gives the g-factor of a triple-glazed WFG panel.

$$g = \left(\frac{U_i}{\dot{m}c + U_e + U_i}\right)\left(A_1\frac{U_e}{h_e} + A_2\left(\frac{1}{h_g} + \frac{1}{h_e}\right)U_e + A_3\frac{U_e}{h_i} + A_w\right) + A_3\left(1 - \frac{U_i}{h_i}\right) + T, \tag{1}$$

where:

*Ue* = thermal transmittance between water chamber and exterior environment,

*Ui* = thermal transmittance between water chamber and interior,

*Aw* = Water absorptance,

*A*1, *A*2, *A*<sup>3</sup> = Glazing panel absorptances of each respective pane in window assembly,

*he, hi, hg, hw* = Convective heat coefficients,

*T* = energy transmittance of glazing.

*Ui*, *Ue* are obtained through the manipulation of the convective heat coefficient variables, *he, hi, hg, hw*. Equations (2) and (3) are used to calculate the variable thermal transmittances of WFG. U is the glazing thermal transmittance; *Uw* is the total thermal transmittance between the indoor environment and the water chamber.

$$
U = \frac{U_i\, U_e}{\dot{m}c + U_e + U_i}, \tag{2}
$$

$$
U_w = \frac{U_i\, \dot{m}c}{\dot{m}c + U_e + U_i}. \tag{3}
$$

Equation (4) gives the expression of *q*, the heat flow through glazing with gas cavities. The heat flow depends on the difference between the outdoor and indoor temperatures (*θ<sup>e</sup>* − *θi*), the thermal transmittance *U*, the *g*-factor, and the direct and diffuse solar radiation, *i*0.

$$q = U(\theta_e - \theta_i) + g\, i_0. \tag{4}$$

Equation (5) illustrates the heat flow, *q*, in water flow glazing panels, where the fluid's temperature and mass flow rate are transient, which affects the heat flow. *θIN* is the variable that measures the inlet temperature of the fluid medium in the WFG assembly.

$$q = U(\theta_e - \theta_i) + U_w(\theta_{IN} - \theta_i) + g\, i_0, \tag{5}$$

where *θIN* is the inlet temperature of the fluid into the WFG system. Thermal transmittances and *g*-factors of the water flow glazing change at different flow rate conditions. When the fluid is flowing at a rate of *m˙* > 2 L/m2 min, *g* and *U* are denoted with the superscript *ON*. When there is no flow (*m˙* = 0) the *g*-factor and *U* values, denoted with the superscript *OFF,* depend only on the spectral properties of the glass.
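The relations in Equations (2), (3), and (5) can be sketched in a few lines of Python; the transmittances, g-factor, and temperatures below are illustrative assumptions, not the measured glazing properties from Table 1.

```python
# Sketch of Eqs. (2), (3) and (5); Ue, Ui, g and the temperatures are
# illustrative assumptions, not the measured prototype properties.

C_WATER = 4186.0  # specific heat of water, J/(kg K)

def wfg_transmittances(u_e, u_i, flow_l_min_m2):
    """Return (U, Uw) in W/(m2 K) for a given flow rate (Eqs. (2)-(3))."""
    m_dot_c = flow_l_min_m2 / 60.0 * C_WATER  # W/(m2 K); 1 L of water ~ 1 kg
    denom = m_dot_c + u_e + u_i
    u = u_i * u_e / denom        # Eq. (2): glazing thermal transmittance
    u_w = u_i * m_dot_c / denom  # Eq. (3): indoor-to-water-chamber transmittance
    return u, u_w

def heat_flow(u, u_w, g, theta_e, theta_i, theta_in, i0):
    """Eq. (5): heat flow through a WFG panel in W/m2."""
    return u * (theta_e - theta_i) + u_w * (theta_in - theta_i) + g * i0

# Illustrative winter operating point at the design flow rate of 2 L/(min m2)
u, u_w = wfg_transmittances(u_e=3.0, u_i=5.0, flow_l_min_m2=2.0)
q = heat_flow(u, u_w, g=0.35, theta_e=-5.0, theta_i=21.0, theta_in=18.0, i0=400.0)
```

Setting `flow_l_min_m2=0` reproduces the OFF state: `u_w` vanishes and Equation (5) reduces to Equation (4), depending only on the spectral properties of the glass.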

#### *2.4. Photovoltaic*

Photovoltaic (PV) systems have great potential thanks to their ability to produce electricity and to substantial recent cost reductions. In this project, a grid-connected PV arrangement supplied the grid when the produced energy exceeded the building demand. The viability of the private grid-connected PV system was established through a techno-economic feasibility study that included the initial costs and the energy savings over the facility's operating life cycle. The research was conducted using different PV simulation software packages and analyzing various techno-economic indicators. These indicators benchmarked the system's production at a particular site against its expected production under different operational patterns. The inputs were based on actual energy consumption results over a year. Sizing a photovoltaic array requires the highest daily consumption and the minimum number of peak solar hours. The grid-connected systems described in this article were integrated in the southern WFG modules and installed on the rooftop of the tested facility. The estimation of rooftop areas is the first step to fitting the design specifications. In terms of roof construction, the tested facility had a flat rooftop that facilitated the installation of a PV array. This PV system eliminated the expense of storage. The system was broken down into PV arrays, inverters, and transformers, as shown in Figure 2. The excess electricity, exceeding consumption by the connected equipment, was supplied to the grid. The monthly energy output of the PV system was calculated from the solar power absorbed per unit area of the PV system, determined using Equation (6).

$$E_{PV} = I_t\, A_m\, N_m, \tag{6}$$

where *It* is the monthly average daily total absorbed solar irradiance on a tilted surface, *Am* is the area of a PV panel, and *Nm* is the number of PV panels that make up the PV array. Equation (7) gives the AC energy production of the PV system.

$$E_{PV\text{-}AC} = E_{PV}\, \eta_c\, \eta_{inv}, \tag{7}$$

where *ηc* is the efficiency of the PV panel and *ηinv* is the inverter's efficiency. The monthly average hourly beam irradiance on a tilted surface (Wh/m2) is given by Equation (8).

$$I_B = I_{Bn} \cos\theta, \tag{8}$$

where *IBn* is the average beam irradiance for normal incidence (Wh/m2), and θ is the angle of incidence. Equation (9) shows the diffuse irradiance on a tilted surface (Wh/m2).

$$I_D = I_{Dn}\left(\frac{1+\cos\beta}{2}\right), \tag{9}$$

where *IDn* is the monthly average hourly diffuse irradiance for normal incidence (Wh/m2) and *β* is the tilt angle [46]. Table 1 shows data from the energy management system, including the PV array, the water flow glazing thermal and spectral properties, the buffer tank, the reference glazing, and the heat pump. The WFG prototype was connected to a buffer tank and a water-to-water heat pump, whereas the reference glazing prototype was simulated with an air-to-water heat pump. It has been assumed that the former heat pump had a Coefficient of Performance (COP) of 5.4 and an Energy Efficiency Ratio (EER) of 6.0, while the latter had a COP of 3.5 and an EER of 4.0 [47].
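Equations (6)-(9) can be chained into a short estimate of the array's AC output; the module count, areas, efficiencies, and sample irradiances below are illustrative assumptions, not the data in Table 1.

```python
# Sketch of Eqs. (6)-(9); module count, areas, efficiencies and the
# sample irradiances are illustrative assumptions, not the Table 1 data.
import math

def tilted_irradiance(i_bn, theta_deg, i_dn, beta_deg):
    """Beam (Eq. (8)) plus diffuse (Eq. (9)) irradiance on a tilted surface, Wh/m2."""
    i_b = i_bn * math.cos(math.radians(theta_deg))               # Eq. (8)
    i_d = i_dn * (1.0 + math.cos(math.radians(beta_deg))) / 2.0  # Eq. (9)
    return i_b + i_d

def pv_ac_output(i_t, area_m2, n_modules, eta_panel, eta_inverter):
    """AC output of the array: Eq. (6) for absorbed energy, Eq. (7) for conversion."""
    e_pv = i_t * area_m2 * n_modules        # Eq. (6)
    return e_pv * eta_panel * eta_inverter  # Eq. (7)

# Example: 20 modules of 1.6 m2 each, 18% panel and 96% inverter efficiency
i_t = tilted_irradiance(i_bn=700.0, theta_deg=30.0, i_dn=150.0, beta_deg=35.0)
e_ac = pv_ac_output(i_t, area_m2=1.6, n_modules=20, eta_panel=0.18, eta_inverter=0.96)
```

The same chain, evaluated with monthly average irradiances, yields the monthly energy figures that the economic indicators in Section 4 build on.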



<sup>1</sup> Values taken from [47].

#### **3. Results**

This section reviews several critical energy parameters, presenting both initial predictions and real-world data collected from the facility. The heating and cooling loads, both with and without WFG panels, are presented, along with the amount of PV energy produced and the heat gains from the internal water chamber. The tested prototype had an opening schedule from 8:00 to 20:00 (Monday to Friday). During scheduled operating hours, the considered internal loads were equipment loads of 7 W/m2, lighting loads of 10 W/m2, and occupation loads of 5 W/m2. The heat balance was broken down into the internal heat loads provided by the equipment in operation and the occupancy of the building, the heat transmitted between outside and inside through the transparent and opaque envelope, the solar gain through the glazing (considering the amount of direct and diffuse radiation that penetrates to the interior), and the ventilation loads. Domestic hot water consumption was considered zero.
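As a rough cross-check of these scheduled internal gains, the sketch below integrates them over one occupied day; treating the full 7 × 7 m footprint (49 m2) as conditioned area and assuming all loads run for the whole 12 h schedule are simplifications.

```python
# Back-of-the-envelope check of the scheduled internal gains quoted above;
# the 49 m2 area and the constant 12 h load profile are approximations.

LOADS_W_M2 = {"equipment": 7.0, "lighting": 10.0, "occupation": 5.0}
FLOOR_AREA_M2 = 49.0   # 7 x 7 m outer footprint, used as conditioned area
HOURS_PER_DAY = 12.0   # opening schedule 8:00-20:00

def daily_internal_gain_kwh(loads=LOADS_W_M2, area=FLOOR_AREA_M2, hours=HOURS_PER_DAY):
    """Total internal heat gain over one occupied day, in kWh."""
    total_w = sum(loads.values()) * area  # 22 W/m2 over the whole floor
    return total_w * hours / 1000.0       # Wh -> kWh

# 22 W/m2 * 49 m2 * 12 h = 12.936 kWh per occupied weekday
```

This order of magnitude is the internal-load contribution entering the heat balances of Tables 2 and 3.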

#### *3.1. Energy Plus Simulation Results*

This section evaluates how the different technologies implemented in the facility perform with respect to the construction of zero energy buildings, as set out in Directive 2010/31/EU, using renewable solar energy. The monthly electricity consumption of the lighting, the heat pump, and the equipment is reported in kWh. Domestic hot water consumption was considered zero. The EnergyPlus 8.4.1 software tool (https://energyplus.net accessed on 2 February 2021) was used to calculate the building's energy consumption, with the OpenStudio Application Suite (https://energyplus.net/extras accessed on 2 February 2021) as the graphical interface, a tool that supports whole-building energy modeling with EnergyPlus. Table 2 illustrates the estimated heating loads of the WFG prototype on a cloudy winter day. The number of occupants (*n*) at each hour was used to calculate the ventilation loads (Vent). Values for the area and thermal transmittance of the roof, opaque walls, and water flow glazing were taken from Table 1.


**Table 2.** Winter heating loads on 5 January 2020.

<sup>1</sup> Values are taken from Figure 3.

Table 3 shows the estimated cooling loads of the WFG prototype. Internal loads (*IL*) were determined with the metabolic rate of an office, the number of people, and 20 W/m2 for lighting. Solar radiation (*SR*) was calculated using the g-factor from Table 1 for WFG.


**Table 3.** Summer cooling loads on 2 July 2020.

<sup>1</sup> Values are taken from Figure 4.

#### *3.2. Operational Energy and Thermal Energy Production*

Heating and cooling energy loads were calculated by using Equation (4) for the reference glazing and Equation (5) for the WFG. The inputs were taken from real data over one year, and the thermal energy production was obtained from the prototype's real data throughout the year. Equation (10) gives the absorbed power per unit of area [44]; its output is the water heat gain. The daily water heat gain efficiency is defined as the ratio of the total water heat gain to the daily solar radiation. In this research, the daily water heat gain is accumulated when the inlet temperature is lower than the outlet temperature.

$$P = \frac{\dot{m}c}{\dot{m}c + U_e + U_i}\left(A_v i_0 + U_i(\theta_i - \theta_{IN}) + U_e(\theta_e - \theta_{IN})\right), \tag{10}$$

where:

*Ue* = thermal transmittance between water chamber and exterior environment,

*Ui* = thermal transmittance between water chamber and interior,

*Avi0* = energy absorbed by the water, plus the energy transferred by convection due to the glass panels heat absorption,

*θIN* = inlet temperature in the water flow glazing

*mc ˙* = capacity of the fluid medium to absorb heat.

Equation (11) defines *Av* as the absorptance of triple glazing with a water chamber facing indoors [44].

$$A_v = A_1\frac{U_e}{h_e} + A_2\left(\frac{1}{h_g} + \frac{1}{h_e}\right)U_e + A_3\frac{U_e}{h_i} + A_w, \tag{11}$$

where:

*Ue* = thermal transmittance between water chamber and exterior environment,

*Ui* = thermal transmittance between water chamber and interior,

*Aw* = Water absorptance,

*A*1, *A*2, *A*<sup>3</sup> = Glazing panel absorptances of each respective pane in window assembly,

*he, hi, hg, hw* = Convective heat coefficients.
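Equations (10) and (11) can be combined into a short estimate of the water heat gain; all absorptances, transmittances, heat-transfer coefficients, and temperatures below are illustrative assumptions rather than the prototype glazing data.

```python
# Sketch of Eqs. (10)-(11); every numeric input is an illustrative
# assumption, not a measured property of the Sofia prototype glazing.

C_WATER = 4186.0  # specific heat of water, J/(kg K)

def water_absorptance(a1, a2, a3, a_w, u_e, h_e, h_g, h_i):
    """Eq. (11): effective absorptance A_v of the water chamber."""
    return a1 * (u_e / h_e) + a2 * (1.0 / h_g + 1.0 / h_e) * u_e \
        + a3 * (u_e / h_i) + a_w

def water_heat_gain(a_v, i0, u_e, u_i, theta_e, theta_i, theta_in, flow_l_min_m2):
    """Eq. (10): power absorbed by the circulating water, W/m2."""
    m_dot_c = flow_l_min_m2 / 60.0 * C_WATER
    return m_dot_c / (m_dot_c + u_e + u_i) * (
        a_v * i0 + u_i * (theta_i - theta_in) + u_e * (theta_e - theta_in))

a_v = water_absorptance(a1=0.10, a2=0.05, a3=0.08, a_w=0.25,
                        u_e=3.0, h_e=25.0, h_g=6.0, h_i=7.7)
p = water_heat_gain(a_v, i0=800.0, u_e=3.0, u_i=5.0,
                    theta_e=5.0, theta_i=21.0, theta_in=18.0, flow_l_min_m2=2.0)
```

Integrating `p` over the sunlit hours of a day yields daily heat gains of the same order as those reported for the southern façade in Figures 5 and 6.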

In Figures 5 and 6, the south-facing wall was responsible for the majority of the water heat gain, while the walls in the eastern and western orientations provided much less. Two causes explain this phenomenon. The first is that, in winter, the southern orientation receives the most direct solar radiation, while the eastern and western orientations receive much less. The second involves the selection of the glazing types used for each orientation. In the eastern and western orientations, the glazing was selected to have a high near-infrared reflectance; this high reflectance, in turn, caused a low absorption of solar heat gains in the near-infrared range. The authors sought this effect for two reasons. First, the solar irradiance in the eastern and western orientations was predicted to be negligible, so it was not warranted to invest as much in these orientations as in the southern one. Second, it prevents overheating during summertime, when the solar radiation is perpendicular to the eastern and western facades. Meanwhile, in the southern orientation, the glazing was chosen to have a low reflectance and a high absorptance of near-infrared radiation. This absorptance causes a high heat gain, increasing the temperature of the fluid medium in the WFG panels; the heated water can then be used for other purposes.

**Figure 6.** Solar irradiances on the façade and water heat gain in July.

Figure 5 shows the measured solar irradiance and the water heat gains during the selected winter period in three different orientations. South-oriented panels had a daily energy absorption of 39 kWh on sunny winter days, when the peak solar irradiance on the southern façade was 800 W/m2. On cloudy winter days, the energy absorption in the eastern and western orientations was negligible. The higher gain in the southern glazing can also be explained by the solar angle and the hours of sun on the southern facade: both are lower on the eastern and western facades, which results in less heat absorption.

Figure 6 shows the measured solar irradiance and the water heat gains over the first week of July. The average water heat gain in the southern WFG panels was 47 kWh per day, whereas the average water heat gain in the eastern and western WFG panels was 33 kWh per day. As expected, most of the heat on the eastern and western facades was rejected, whereas the water heat gain was high on the southern facade. Solar radiation on the eastern and western facades showed low values in winter: the peak was slightly over 400 W/m2, whereas in summer the peak was over 700 W/m2. In addition, there were more sun hours in summer than in winter. The east–west gain differs from day to day. Figure 6 shows that on 2 July, solar heat gains were higher in the east than in the west because solar radiation showed high values there. On 4 July, the solar radiation on the western facade was higher, and so was the solar heat gain over that day.

The absorbed water energy can be counted as renewable primary energy production. Table 4 shows the total thermal energy production compared with the heating load needed to keep the indoor temperature within a comfortable range. A hydronic system made of indoor WFG panels delivered this energy inside the building. Table 4 shows that in December and January, the heating loads are slightly above what the water heat gains can provide; during these months, another energy source is required to maintain thermal comfort conditions. During the remainder of the year, especially the summer months, the exact opposite occurs: the WFG panels absorb an excessive amount of solar heat.


**Table 4.** Yearly water heat gain and heating loads for the WFG prototype.

An excess of water heat gains will lead to uncomfortable interior conditions for building occupants. Therefore, the heat gains should be applied to something that will be beneficial to building occupants in a way other than heating. Possible remedies could include domestic hot water for residential buildings, and seasonal heating storage by use of a buffer tank.

#### *3.3. PV Production and Electric Loads*

This subsection aimed to demonstrate that the remaining heating loads, cooling loads, artificial lighting, and power for building equipment were covered by energy produced from the PV array. Figure 7 shows the total loads per month, with lighting and power considered the same in both prototypes. For the Water Flow Glazing prototype, most of the heating loads were covered by heating the water in the WFG panels, storing heat in the buffer tank, and finally delivering heat through indoor WFG panels acting as traditional radiators. The coefficient of performance (COP) of the WFG water-to-water heat pump was 5.4, while its energy efficiency ratio (EER) was 6. Meanwhile, energy consumption in the traditional reference cabin was much higher than in the WFG facility: there, the COP of the air-to-water heat pump was 3.5, and its EER was 4. The significant reduction in electricity consumption becomes especially apparent when comparing the total electrical loads of the reference glazing, 1000 kWh, with the 575 kWh of the WFG prototype during August.
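As a hedged illustration of how the electric loads in Figure 7 follow from thermal demand and the quoted COP/EER values, the sketch below back-computes the August totals. The thermal loads used here are assumed examples chosen only to reproduce the quoted monthly figures; they are not measured values, and lighting and plug loads are ignored.

```python
# Sketch: electricity drawn by a heat pump to meet given thermal loads,
# using the COP/EER values quoted in the text. Thermal loads are assumptions.

def electricity_kwh(heating_load, cooling_load, cop, eer):
    """Final electric energy = thermal demand divided by the efficiency ratio."""
    return heating_load / cop + cooling_load / eer

# Assumed August cooling demands (hypothetical, chosen to match the text)
wfg = electricity_kwh(heating_load=0.0, cooling_load=3450.0, cop=5.4, eer=6.0)
ref = electricity_kwh(heating_load=0.0, cooling_load=4000.0, cop=3.5, eer=4.0)
```

A higher COP/EER divides the same thermal load by a larger number, which is why the water-to-water machine consumes markedly less electricity for comparable comfort.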

**Figure 7.** Estimation of final electric energy consumption of the prototypes: (**a**) Water Flow Glazing prototype with a water-to-water heat pump; (**b**) Reference glazing prototype with an air-to-air heat pump.

The National Renewable Energy Laboratory PVWatts Calculator [48] was used to estimate the power production of the building-integrated PV panels installed on the roof and the south facade. The goal was to maximize Alternating Current (AC) energy production, reducing the energy exchanged with the electrical grid. Table 5 summarizes the two options considered by the authors. Option 1 consisted of 8.3 m² of building-integrated PV panels on the south façade plus 15 m² of open-rack PV panels on the roof with a 60° tilt. Option 2 consisted of 15 m² of open-rack PV panels with a 30° tilt plus an additional 15 m² of panels with a 60° tilt on the roof. The yearly AC production of each option is higher than the electric demand shown in Figure 7, although not in every month: in January, the demand was 563 kWh, while the production was 276 kWh for option 1 and 245 kWh for option 2.

Figure 8 shows the hourly energy balance during seven days in January. The differences in PV production observed on different days were due to factors affecting PV system performance, such as temperature, wind, and the reduced sun hours in January. On sunny days the peak PV production surpassed 5 kW at noon, whereas the demand was almost steady at 1.3 kW every hour. Meanwhile, on cloudy days the system produced more than 1 kW every hour, with the demand remaining steady at 1.3 kW every hour.

Figure 9 shows the hourly energy balance during seven days in July. The PV yield is almost the same throughout the entire period. Peak production during the central hours of the day surpassed the demand.


**Table 5.** Monthly AC energy production of PV options 1 and 2 and electric demand of the WFG prototype.

<sup>1</sup> Values are taken from Figure 7.

**Figure 8.** Hourly energy balance. January.

**4. Discussion**

*4.1. Operational Energy Analysis and Environmental Assessment*

**Figure 9.** Hourly energy balance. July.

Data from the prototype were used to calculate the yearly operational energy. In this case study, it was assumed that the building underwent no renovations and no change in usage mode throughout its life cycle. It was also assumed that all of the building's envelope elements have the same service life, and that the lifetime of the PV array and heat pumps was 25 years. Non-renewable primary energy, equivalent CO2 emissions, and cost were the parameters used to assess the environmental performance. The electricity conversion factor for Bulgaria was 0.47, the conversion factor from final energy (FE) to non-renewable primary energy (NRPE) in Bulgaria was 2.17, and the cost of electricity in Bulgaria was 0.239 €/kWh [49,50]. Table 6 shows the primary energy consumption, global warming potential, and operational energy (OE) cost of the prototypes.


**Table 6.** Final energy, non-renewable primary energy, and CO2 emissions per year.

<sup>1</sup> COP and EER values are taken from [47].
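The conversions behind Table 6 can be sketched directly from the Bulgarian factors quoted in the text. The yearly final-energy figure below is a hypothetical assumption chosen only to show the arithmetic; it roughly reproduces the WFG row of the table but is not a value reported by the authors.

```python
# Sketch of the operational-energy conversions used in Table 6, with the
# Bulgarian factors quoted in the text. Final energy is an assumed example.

CO2_FACTOR = 0.47    # kgCO2eq per kWh of final electricity (Bulgaria)
NRPE_FACTOR = 2.17   # kWh non-renewable primary energy per kWh final energy
PRICE_EUR = 0.239    # electricity price, EUR per kWh

def operational_metrics(final_energy_kwh):
    """Derive NRPE, CO2eq emissions, and cost from final energy consumption."""
    return {
        "nrpe_kwh": final_energy_kwh * NRPE_FACTOR,
        "co2_kg": final_energy_kwh * CO2_FACTOR,
        "cost_eur": final_energy_kwh * PRICE_EUR,
    }

m = operational_metrics(4589.0)  # assumed yearly final energy, kWh
```

With this assumed input, the emissions come out near 2157 kgCO2eq/year and the cost near 1097 €, i.e., the same order as the WFG figures discussed below.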

The total operational cost of the WFG prototype was 1096 €, whereas that of the reference glazing prototype was 2881 €. The total CO2eq emissions from non-renewable energy sources were 2157 kgCO2eq/year for the WFG prototype and 5667 kgCO2eq/year for the reference glazing prototype. These results demonstrate the energy savings and the reduction in global warming potential of WFG compared with a high-performance triple glazing. As shown in Section 3.3, the PV arrays of options 1 and 2 produce more AC energy than the WFG prototype's electric demand, so it is necessary to compare the global warming potential in terms of total CO2eq.

The embodied energy of materials is highly correlated with the production of technology and components [51,52]. Table 7 shows the embodied carbon and energy of each material and the costs of a PV panel.


**Table 7.** Embodied Energy (EE), Embodied Carbon (EC), and Cost of PV panel materials.

<sup>1</sup> value taken from [52].

The global warming potential of the manufacturing process of PV panels showed that the CO2eq emissions were 208 kgCO2eq/m². The PV area of option 1 was 23.3 m², and that of option 2 was 30 m². The total CO2eq emissions during the PV array manufacturing process were therefore 4846 kgCO2eq for option 1 and 6240 kgCO2eq for option 2.

Several outcomes can be drawn from this data. First, using WFG can reduce emissions during the operating lifetime of glazed buildings. Second, by producing AC energy with PV arrays, the operating-lifetime emissions are reduced to zero. Third, manufacturing PV panels also has a global warming potential, due to the high emissions associated with processing silicon and lithium. This section has shown that the minimum global warming potential of manufacturing the PV array needed to turn the WFG prototype into a Zero Energy Building was 4846 kgCO2eq. In contrast, the yearly global warming potential due to the building's operation was 2157 kgCO2eq/year, so within three years the reduction in CO2 emissions compensated for the PV array's initial manufacturing emissions.
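The carbon-payback arithmetic above can be checked directly. The sketch below reuses only numbers quoted in this section (208 kgCO2eq/m², the two PV areas, and the yearly operational saving).

```python
# Carbon-payback check for the PV array, using figures quoted in the text.
import math

EC_PER_M2 = 208.0        # kgCO2eq per m2 of PV panel
YEARLY_SAVING = 2157.0   # kgCO2eq/year avoided once PV covers the demand

option1 = 23.3 * EC_PER_M2   # manufacturing emissions, option 1
option2 = 30.0 * EC_PER_M2   # manufacturing emissions, option 2

# Whole years needed for operational savings to offset manufacturing emissions
payback_years = math.ceil(option1 / YEARLY_SAVING)
```

The ratio 4846/2157 ≈ 2.25 rounds up to three years, which is the payback stated in the text.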

#### *4.2. Economic Analysis*

The highest cost of a solar PV system occurs at the installation stage; once a PV system is operating, it produces electricity at essentially no marginal cost. A financial analysis of the solar PV array determined its economic feasibility, considering the initial cost as well as operating and maintenance costs. The yield factor (YF) is the energy generated by a PV array over time, and it takes into account module efficiencies and array designs. Equation (12) expresses the yield factor, measured in kWh/kWp, as the energy yield (*Et*) divided by the PV array's nominal power at standard test conditions (*PSTC*).

$$\mathcal{Y}F = \frac{E\_{\rm t}}{P\_{\rm STC}}.\tag{12}$$

The initial cost included the PV modules, circuit breakers, cables, initial labor, and grid-connection cost. Operating costs included yearly scheduled expenses for screening, maintenance, repairs, panel cleaning, and insurance. To evaluate and compare the cost of energy production between the selected case studies and each PV system configuration, the concept of the Levelized Cost of Energy (LCOE) is introduced. The LCOE is the cost at which electricity must be produced from a source to break even over the project's life [53]. The LCOE expression shown in Equation (13) was taken from previous articles as the ratio between the total life-cycle cost and the total life-cycle energy output [54].

$$LCOE = \frac{I\_0 + \sum\_{t=1}^{n} \frac{M\_t + F\_t}{(1+r)^t}}{\sum\_{t=1}^{n} \frac{E\_t}{(1+r)^t}},\tag{13}$$

where *I*<sup>0</sup> is the initial investment cost, *Mt* represents the cost of operation and maintenance in year *t*, *Ft* is the fuel expense in year *t*, *Et* is the electrical energy generated in year *t*, and *r* is the discount rate. In this article, the fuel cost *Ft* was zero. It was assumed that the replacement rate for WFG would be the same as that of the reference triple glazing over the considered lifetime. Other components, such as the circulating water pumps, are certified to work for 25 years. The performance evaluation of the solar PV system was carried out using Equations (12) and (13). Table 8 presents the yearly yield factor and the levelized cost of energy of the two options. The yield factor (YF), which measured the productivity of the system, was found to be 1247 kWh/kWp per year for option 1 and 987 kWh/kWp per year for option 2.

**Table 8.** Yearly yield factor (YF) and levelized cost of energy (LCOE) of the two PV options.
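Equations (12) and (13) can be sketched as follows. The investment, O&M, and energy figures below are hypothetical placeholders; only the formulas follow the text, with the 0.8% annual degradation discussed in this section applied to the yield.

```python
# Sketch of Eq. (12) (yield factor) and Eq. (13) (LCOE).
# All numeric inputs in the example call are assumed, not from the prototype.

def yield_factor(energy_kwh, p_stc_kwp):
    """Eq. (12): yearly energy yield per installed kWp."""
    return energy_kwh / p_stc_kwp

def lcoe(i0, om_yearly, e_year1, r, n=25, degradation=0.008, fuel=0.0):
    """Eq. (13): discounted life-cycle cost over discounted life-cycle energy."""
    costs = i0          # initial investment, paid at year 0
    energy = 0.0
    for t in range(1, n + 1):
        disc = (1.0 + r) ** t
        costs += (om_yearly + fuel) / disc          # O&M and fuel, discounted
        # first-year yield degraded by 0.8% per year, then discounted
        energy += e_year1 * (1.0 - degradation) ** (t - 1) / disc
    return costs / energy   # cost per kWh produced

yf = yield_factor(energy_kwh=4400.0, p_stc_kwp=3.5)          # assumed figures
cost = lcoe(i0=5000.0, om_yearly=50.0, e_year1=4400.0, r=0.03)
```

Raising the discount rate `r` shrinks the discounted future energy faster than the discounted costs, which is why the LCOE grows with the discount rate, as Table 8 shows.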


The final LCOE results depended on the average temperatures and solar radiation at the selected site; the relationship between the results and the selected climate zone is addressed in the conclusions section. Another relevant factor was the discount rate: three different discount rates (2%, 3%, and 4%) were considered, with 3% in line with historical data for the Euro Area [55]. Equipment, labor, connection charges, and taxes make up the installation costs. The replacement of the inverter in the 12th year was considered the only exceptional maintenance cost; due to this prototype's small size, there were no operating costs, and the yearly maintenance cost was fixed at 1%. The investment costs differed for each country, according to the study performed by the International Renewable Energy Agency (IRENA) [55]. An annual degradation of 0.8% was estimated based on manufacturers' specifications and thorough analyses of this subject; this degradation reduces production, especially after 10 years [56]. The investment costs (*I*0) can be divided into installation costs, soft costs, and hardware costs. Installation costs are the expenses related to setting up the PV system, including mechanical and electrical fixtures. Soft costs include all relevant permits and administrative costs connected with the system. Hardware costs comprise the modules, inverter, racking, and electrical wiring. The total investment costs differed for each country; typical values for Bulgaria were taken from previous articles and databases [22]. A low discount rate reduces the overall levelized cost of the PV system, decreasing the price of generating one unit of energy. Option 1 showed a better performance in terms of both the yield factor and the levelized cost of energy at all discount rates.

#### **5. Conclusions**

This paper aimed to develop a comprehensive comparative analysis of two different technologies from the operating-energy, global-warming-potential, and economic points of view. The study used measured data from a small office building located in Sofia, Bulgaria, to understand the thermal behavior of a structure using WFG panels. The energy system also included a grid-connected PV system to provide electric energy for heating, cooling, lighting, and power supply. In addition to the water flow glazing panels and the PV array, the proposed building-energy system layout also included radiant panels that delivered the necessary heating and cooling loads. Electrical heat pumps were also modelled for backup heating and for balancing the system's cooling demands.

To validate the energy savings and the reduction in global warming potential, the water flow glazing facade was compared with a high-performance triple-glazed one. The analysis showed that the selected water flow glazing system coupled with a water-to-water heat pump outperforms highly efficient triple glazing. The coefficient of performance and energy efficiency ratio of water-to-water heat pumps are usually higher than those of air-to-water or air-to-air heat pumps. Compared with the reference prototype, the WFG system demonstrated savings of 60% in non-renewable primary energy consumption and a 62% reduction in CO2 emissions.

WFG panels can work as integrated solar thermal collectors. This article has shown that the water heat gains throughout the year compensate for 86% of the prototype's heating loads. To maximize water heat gains in the southern orientation, the water flow glazing was chosen to have a low reflectance and a high absorptance of near-infrared radiation; this absorptance causes a high heat gain, raising the temperature of the fluid in the WFG panels. In the eastern and western elevations, high-reflectance panels were selected to avoid overheating in summer.

The goal of zero-energy building was achieved first, by reducing the heating and cooling demand and second, by implementing a PV array.

Due to the high emission potential of some PV panel components, such as silicon and lithium, manufacturing PV panels also has a global warming potential. This article has compared the CO2 emissions of manufacturing the PV system with the emissions over the operating lifetime. It was found that within three years the reduction in CO2 emissions compensates for the initial manufacturing emissions of the PV array.

Two factors were used to assess the economic aspects of a PV system: the yield factor (YF) and the Levelized Cost of Energy (LCOE). This study did not include the prices of selling electricity to or buying it from the grid.

Option 1 included integrated PV panels on a vertical façade. This option has shown a yield factor of 1247 (kWh/kWp), which was 21% more than option 2.

The Levelized Cost of Energy (LCOE) was shown to depend on the discount rate. Option 1 showed a better LCOE, ranging from 4.35 €/kWh to 8.68 €/kWh with discount rates of 2% and 4%, respectively.

Combining water flow glazing facades and building-integrated PV panels can reduce the energy demand and produce renewable energy (thermal and electrical) within the building footprint. However, the proposed system has limitations and room for improvement. The primary disadvantage is the capital cost required for initial implementation. Seasonal thermal storage systems are necessary to use the excess heat produced by the WFG façade: office buildings have low hot-water consumption, and much thermal energy is wasted in summer. Increasing the volume of the buffer tanks may make seasonal storage achievable, although it will increase the energy system's cost. Building-Integrated Photovoltaic (BIPV) panels are negatively affected by several factors in the outdoor environment, such as temperature variations and variable solar irradiation. More prototypes in various climate zones should be analyzed.

Another future research line is the assessment of comfort. Traditional cooling systems focused on air movement have lacked comfort by blowing cold air instead of controlling the mean radiant temperature through radiant elements such as floors, ceilings, and interior partitions. WFG can control surface temperatures and provide an excellent thermal environment even with indoor air temperatures outside the comfort range.

Several difficulties and possible system malfunctions must also be discussed. WFG technology is hindered by a high initial expenditure, which includes other required equipment. A storage tank is needed to avoid overheating in summer, and dimensioning this tank is of paramount importance to the correct performance of the system. WFG technology has to be considered at the first stage of the design process to bring down the initial expense and the payback period. The lack of reliable simulation tools for this technology might slow down market adoption because of the many parameters that influence thermal performance in different orientations.

**Author Contributions:** Conceptualization, B.M.S., F.d.A.G., J.A.H.R.; methodology, B.M.S., F.d.A.G.; software, J.A.H.R.; formal analysis, M.G., F.d.A.G.; data curation, F.d.A.G., J.A.H.R., J.A.F.G.; writing original draft preparation, B.M.S., F.d.A.G., J.A.H.R.; writing—review and editing, M.G., B.M.S., J.A.F.G.; visualization, M.G., B.M.S., J.A.F.G.; supervision, J.A.H.R.; project administration, F.d.A.G.; funding acquisition, F.d.A.G., J.A.F.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** Special thanks to the Central Laboratory of Solar Energy and New Energy Sources, of the Bulgarian Academy of Science (CL SENES-BAS) for providing measured data of the solar radiation in the different facades of the test facility in Sofia, Bulgaria. This work was supported by Keene State College Faculty Development Grant program.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

#### **References**


## *Article* **Pandemic of Childhood Myopia. Could New Indoor LED Lighting Be Part of the Solution?**

**David Baeza Moyano 1,\* and Roberto Alonso González-Lezcano <sup>2</sup>**


**Abstract:** The existence of a growing myopia pandemic is an unquestionable fact for health authorities around the world. Different possible causes have been put forward over the years, such as a possible genetic origin, the current excess of children's close-up work compared to previous stages in history, insufficient natural light, or a multifactorial cause. Scientists are looking for different possible solutions to alleviate it, such as a reduction of time or a greater distance for children's work, the use of drugs, optometric correction methods, surgical procedures, and spending more time outdoors. There is a growing number of articles suggesting insufficient natural light as a possible cause of the increasing levels of childhood myopia around the globe. Technological progress in the world of lighting is making it possible to have more monochromatic LED emission peaks, and because of this, it is possible to create spectral distributions of visible light that increasingly resemble natural light in the visible range. The possibility of creating indoor luminaires that emit throughout the visible spectrum from purple to infrared can now be a reality that could offer a new avenue of research to fight this pandemic.

**Keywords:** daylighting; circadian lighting; indoor lighting; dopamine; myopia

#### **1. Introduction**

The development of the components of the visual system during infancy and early childhood appears to be biphasic, with emmetropization occurring within the first 2 years of infancy during a rapid exponential phase [1].

Myopia in humans consists of decompensation of the refractive power of the cornea and lens compared to the axial length, such that images lie in front of the retina [2,3]. The components of the ocular system (axial length and anterior and posterior chamber depths) during emmetropization of children aged 3 to 9 months are currently longer than those measured in previous studies [4]. Some authors define myopia as a multifactorial disorder controlled by genetic interactions and environmental risk factors [5].

Myopia is a growing major public health problem and is the world's largest refractive problem [6–9]. The richest countries in the Asia–Pacific region have the highest prevalence in the world. There are multiple studies on the incidence of myopia worldwide according to ethnic and geographic parameters, with myopia prevalence at 80–90% in young adults and high-myopia rates of 10–20% [10] in the urban populations of East Asia, especially in China and South Korea [11,12], Taiwan [13], and Singapore [14], while more than one-third of the people in the United States suffer from it [15–17]. Rates differ between people living in urban and rural communities that are similar in other respects [6].

How myopia and high myopia are defined depends on the prevalence studies selected. Some projections state that 50% of the population will have myopia and 10% will have high myopia within thirty years [2,6,17].

**Citation:** Baeza Moyano, D.; González-Lezcano, R.A. Pandemic of Childhood Myopia. Could New Indoor LED Lighting Be Part of the Solution? *Energies* **2021**, *14*, 3827. https://doi.org/10.3390/en14133827

Academic Editor: Alessandro Cannavale

Received: 23 May 2021 Accepted: 21 June 2021 Published: 25 June 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Myopia has been associated with an increased risk of retinal detachment, macular degeneration, early-onset glaucoma, and cataracts [6,8,12,18] and is the leading cause of blindness worldwide [6,19,20].

Data vary depending on the source; however, this social health problem is of gigantic proportions. In Europe, the above-mentioned levels have not yet been reached; however, the percentage of children and adolescents with myopia continues to increase [21,22].

The scientific community, to date, has not been able to obtain conclusive results regarding its cause, making it difficult to find a solution.

Because of this, there has been an abundance of studies from around the world. A representative example is the Cochrane Database of Systematic Reviews, which includes 41 studies (6772 participants) evaluating the effects acting on myopia progression in children with optometric corrections, and pharmaceutical procedures, such as muscarinic receptor antagonists, cycloplegic eye drops, and intraocular pressure-lowering medications [2].

The aim of this paper is to analyze the different possible causes of this pandemic, which scientists have highlighted in recent years, and to raise the possibility that something can be done to attenuate its progression based on new discoveries about the influence of light absorbed through our eyes on multiple biochemical mechanisms in our organisms, including the retina.

#### **2. Materials and Methods**

A bibliographic search was conducted in the biomedical databases of PubMed and ScienceDirect. Variations of the words "myopia", "nearsightedness", "short-sight", "progression", "refractive error", "dopamine", and "prevalence" were used as search criteria. These were used in combination with the inclusion criteria "risk factors", "outside", "outdoor", "atropine", "schoolchildren", "eye growth", "circadian rhythms", and "indoor LED lighting".

Figure 1 below shows the flow diagram of the process carried out for the development of this article.

**Figure 1.** PRISMA flow chart for literature search and selection of articles included in the literature review Pandemic of Childhood Myopia. Could new indoor LED lighting be part of the solution?

The following inclusion criteria were used for the selection of articles: scientific articles from the last two years, publications dealing with indoor lighting and new luminaires, indoor lighting standards, and journals and articles written in English.

Regarding the exclusion criteria, publications that did not contain the keyword "myopia" or "LED" were excluded.

#### **3. Results**

Heritability is one of the factors most strongly linked with youth myopia, together with increased near-distance work, increased school performance, and decreased sporting activity; however, there is no evidence that children inherit a myopiagenic environment or a susceptibility to the effects of the near-distance work performed by their parents [23]. Children with myopic parents who do less sport and spend less time receiving natural light have a greater risk of becoming myopic than children who are in the same situation but for whom only one or neither parent is myopic [24]. The existence of this pandemic may be due to the pressure of increased school performance and limited outdoor time rather than a heightened sensitivity to these factors [10]. The increasing prevalence in many parts of the world makes it unlikely that its cause is related to a genetic factor [25].

Multiple factors have been raised that could influence the development of myopia in children, such as access to education, access to outdoor activities, and lack of exposure to natural light [6,26,27].

#### *3.1. Myopia and Overwork at Close Distances*

Increased working hours at close distances could increase the prevalence of myopia. Near-distance work has been defined as the group of activities carried out at proximal to intermediate distances, such as watching TV [14,27].

A higher incidence has been found in Chinese children living in Singapore compared with others living in Xiamen, with positive relationships between the number of reading hours, time spent on tasks at close distances, use of electronic devices, and having high myopia; however, it is not certain that other factors do not play a role [14].

The dominant theory about the origin of myopia, supported by numerous epidemiological studies, associates it with near-distance work during eye development; however, there are doubts about this, as there are experimental animal studies in which no involvement of accommodation in the development of myopia is found, and atropine, which blocks myopia progression in humans, does not act on accommodation [3,10].

There are studies suggesting that the relationship between mass schooling and the onset and progression of myopia lies in the short focal distance required for reading and writing [22,28], whereby increased time spent in education may inadvertently increase the chances of suffering from myopia [29].

Others are of the opinion that, generally, no association has been found between working at close distances and myopia, except for the subgroup of people with high levels of nearby work and moderate levels of outdoor activity. A weak protective effect of outdoor activity on myopia was observed in children in rural China [30]. A high percentage of refractive problems was found in Jewish children attending orthodox schools. It has not been seen in other population subgroups, and it is very similar to that seen in East and Southeast Asia. Similar refractive distributions have also been reported for recent cohorts of young adults in Singapore [31] and South Korea [10,32].

#### *3.2. Medication to Treat the Progression of Myopia*

The use of atropine [16] and anticholinergic blockers has been shown to be suitable for controlling the progression of myopia; however, there is considerable debate about their effectiveness and possible long-term side effects [3].

Many researchers thought that excess accommodation was responsible for myopia and that atropine causes temporary paralysis of the smooth ciliary muscle [8].

Anticholinergics are blockers of the action of acetylcholine at muscarinic receptors (MRs). Acetylcholine is a neurotransmitter that plays an important role in retinal development and regulates eye growth. The most effective treatment to slow the progression of myopia is topical antimuscarinic medication, namely the use of low-dose atropine (0.01–0.05%) eye drops. However, the side effects of this medication include light sensitivity and blurring [2], so it is rarely prescribed [8].

The topical application of Pirenzepine does not block accommodation while decreasing the progression of childhood myopia [8,26].

#### *3.3. Light in the Lives of Children and Adolescents*

Outdoor illumination levels depend on weather, altitude, and latitude. The illuminance can be 150–130,000 lx on a sunny day, 50,000 lx on a hazy sunny day, and 15,000 lx on an overcast day. These levels are in stark contrast to typical indoor illumination levels, which range from around 1000 lx down to 100–500 lx [27,33]. The effects of illumination on human refractive development occur within the context of the changing refractive state in the years after birth [33]. Circadian responses to light are thought to be mediated primarily by melanopsin-containing retinal ganglion cells, not rods or cones. Melanopsin cells are intrinsically blue-light-sensitive but also receive input from visual photoreceptors [34,35]. The scientific literature contains a large number of studies that relate circadian, neuroendocrine, and neurobehavioral responses to calibrated light exposures [35].

The light that enters through the human eye not only has the function of image formation but also influences the health and well-being of human beings by producing non-imaging effects in the long and short term (acute) [36]. From a light spectrum point of view, melanopsin in ipRGCs has a maximum of between 460 and 500 nm, while the visual system is more sensitive to mid-wavelengths of the visible spectrum at around 555 nm [37,38]. The duration, timing, spatial distribution, intensity, and power of the spectrally distributed light reaching the eyes can influence circadian rhythms and thus health [38,39].

Light does not affect children and adolescents in the same way as it does adults. On many days throughout the year, when children and adolescents come to school in the morning, they do so in total or partial darkness. The intensity of natural light entering classrooms through windows varies throughout the day and across the seasons. Students work for many hours in classrooms with lighting that resembles natural light in neither composition nor intensity. When they leave school, they usually spend very little time in natural light. As a result, students live virtually every day with insufficient light, not receiving the proportion and amount of visible light for which humans are genetically adapted [40].

Teenagers usually go to sleep and get up several hours later than normal, and they have difficulty waking up in the morning for school; the reason could be the hormonal changes that occur at puberty. Schools do not have enough overall lighting (artificial light and daylight) to stimulate their circadian system, which matters most in the months with less natural light. As teenagers spend more time indoors, they may miss the morning light necessary for circadian resetting. To protect the health of adolescents, it would be advisable for them to receive higher levels of morning light (or daylight) at school and lower levels of light at night at home [39].

School building spaces are the most important non-residential indoor environments, which often have, among other things, inadequate lighting [41]. The structural characteristics of building installations have a profound influence on learning. Inadequate conditions in classrooms are mentioned as factors relevant to poor student progress [4]. Ensuring good quality lighting in educational environments is complicated [42]. Student performance is better in classrooms with higher light intensity. Children can differentiate their light needs according to the task at hand [43]. In rooms where lighting is not uniform, with more luminosity over the immediate task area than the surrounding area, discomfort effects may be important [44]. There is extensive evidence of damage being caused to children's vision due to poorly lit classrooms [45].

One study examined students fitted with a filter blocking the part of the light spectrum most important for the suppression of melatonin. Eleven students at a North Carolina school with unusually high levels of daylight wore orange filter glasses, which remove short-wavelength circadian light, for five consecutive days. The orange filter glasses allowed them to perform their activities; however, their dim light melatonin onset (DLMO) was delayed by half an hour compared to the previous week. Another study compared the behavior of students between two seasons with significant differences in the amount of natural light received during the day. In this study of 16 students in New York, their DLMO was found to be delayed by 20 min and their sleep onset by 16 min in the spring compared to the winter [39].

#### *3.4. Myopia and Light Insufficiency*

Both time spent outdoors and time spent in sports are related to incident myopia. Researchers believe that outdoor time has the largest effect, independent of near-work activity [46] and physical activity levels [21].

A comparison of myopia among children concluded that the effects are related to educational pressure and time spent outdoors. Hours of sports activity during the summer months, by both myopic and non-myopic children, can help delay excessive growth of the eye [47]. The progression of myopia has been observed to be much faster in months with less daylight (winter) than in months with more daylight (summer) [48]. These seasonal effects (the quantity of daylight) are more powerful than even the most potent pharmacological or optical treatments used to slow the progression of myopia [10].

Myopes have lower vitamin D levels than non-myopes (Donald Mutti et al., 2011). However, studies to date suggest that any relationship between vitamin D levels and the factors leading to the onset of myopia may be minimal [33].

A comparison of progression rates between Taiwanese school children in urban and rural areas showed that the average progression rate in urban areas was higher than in rural areas. Environmental factors such as urban development and academic level may be important factors contributing to myopic progression [13].

A decrease in the incidence of myopia has been observed in children whose ages were between one and three years [9] after increasing the time they were outdoors between 1 and 2 h per day [49,50].

Children living in countries such as Singapore, with a high prevalence of short-sightedness, spend less time outdoors than children in countries such as Australia, with a lower prevalence [27]. The prevalence of myopia in Chinese children aged 6–7 living in Sydney (3.3%) is significantly lower than in Singapore (29.1%); the reason for this difference is that children living in Sydney spend much more time outdoors each week. Children in Taiwanese schools are required to perform at least 11 h of outdoor activities per week, and as a result, Taiwanese health authorities claim that short-sightedness has been reduced among children. High solar intensity is not needed to prevent myopia; longer outdoor activities under less intense sunlight, for example in corridors or under trees, are preferable [12].

The approach of increasing time outdoors has been validated in school-based interventional research, with increases in outdoor time of 25 to 50%. The Myopia Control Programme in China specifies a period of one to two hours outdoors every day [9,50].

The spectral power distribution of the Sun is continuous, while that of the luminaires we use is discontinuous. Experimental tests with animals have found that particular wavelengths of light affect the refractive development of the eye and the growth of its axial length. Researchers report that when chicks receive intense light, whether daylight or artificial light, the development of experimental myopia is delayed [17]. Exposure to violet light (VL, 360–400 nm wavelength) in particular, the shortest wavelength range of visible light, has a protective effect against myopia by suppressing the elongation of the axial length (AL) [17,51]. As VL acts on the myopia-suppressive gene EGR1, it suppresses myopia progression and is therefore an important external environmental factor in controlling myopia; it induces a significantly higher up-regulation of EGR1 in chick chorioretinal tissues than blue light under the same conditions [51]. Violet light is abundant in sunlight but rarely detected in indoor ambient light. If the VL content of natural light at different times of the day is compared with that entering through glass into indoor environments, the absence of VL behind the glass becomes evident [17]. These results suggest that violet light is important not only for slowing the progression of myopia but also for preventing its onset [51]. Excessive ultraviolet (UV) protection and the absence of VL in artificial light may be possible contributors to the global myopia pandemic [17].
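The scale of the indoor VL deficit described above can be illustrated with a simple band-power calculation; the flat solar spectrum and the sharp 400 nm glass cut-off used here are simplifying assumptions for the sketch, not measured data:

```python
import numpy as np

# Illustrative sketch: how much violet-light (VL, 360-400 nm) power survives
# ordinary glazing. The flat solar spectrum and the sharp glass cut-off at
# 400 nm are simplifying assumptions, not measured data.
wl = np.arange(360, 781)                 # wavelength grid, nm
solar = np.ones_like(wl, dtype=float)    # toy flat solar SPD (relative units)
glass = np.where(wl < 400, 0.05, 0.90)   # toy transmittance: blocks below 400 nm

vl_band = (wl >= 360) & (wl <= 400)      # the VL band named in the text
outdoor_vl = solar[vl_band].sum()
indoor_vl = (solar * glass)[vl_band].sum()
print(f"VL power surviving glazing: {100 * indoor_vl / outdoor_vl:.0f}%")
```

Under these toy numbers, only a few percent of the VL band survives the glazing, which mirrors the observation that VL is essentially absent behind glass.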

A systematic review of articles on this subject revealed great heterogeneity in the results. Overall, increased outdoor time is effective in preventing the onset of myopia and in slowing the myopic shift in refractive error; paradoxically, however, it does not slow the progression of myopia in eyes that are already myopic [52]. Most, but not all, prospective studies and cross-sectional surveys find an antimyopic effect of increasing outdoor time [26].

Some studies, although without statistically significant results, have provided red lighting to people during the afternoon (around 15:00). Participants reported perceiving a significant reduction in drowsiness and a greater subjective sense of vitality. Luminaires that combine white LEDs with red light could therefore give a sufficient stimulus for working through the afternoon [53].

According to Mardaljevic's research, the 24 h of the day can be divided into three phases in relation to the dark–light cycle: 6:00–10:00 is the circadian resetting phase, 10:00–18:00 is the period of the alerting effects of daylight, and 18:00–6:00 is the time of bright-light avoidance and dim light only. The most important requirement for a circadian reset is to receive intense light during the morning, and it is desirable that people be exposed to bright light during 10:00–18:00 for its potential to increase alertness [54,55]. It should be noted that there are important differences in the intensity of each part of the daylight spectrum we receive depending on altitude, latitude, and the season of the year. The color of the furniture is also a factor to be taken into account, and the combination of interior lighting and daylight at each workplace will depend on the position of the worker in relation to the windows [41,56].
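As a minimal sketch, the three phases above map directly to a lookup function; the phase boundaries come from the text, while the function name and label strings are illustrative:

```python
def circadian_phase(hour: float) -> str:
    """Map a clock hour to one of the three phases named in the text.

    The boundaries (6:00-10:00, 10:00-18:00, 18:00-6:00) come from the
    text; the function name and label strings are illustrative.
    """
    if 6 <= hour < 10:
        return "circadian resetting"
    if 10 <= hour < 18:
        return "alerting effects of daylight"
    return "bright-light avoidance, dim light only"

# Example: classify a few moments of a school day
for h in (7, 12, 20):
    print(f"{h:02d}:00 -> {circadian_phase(h)}")
```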

Although the light reaching the retina may be the key, it is difficult to measure the W/cm<sup>2</sup> of each wavelength band entering through the pupil and reaching the retina [33]. Objective studies on the association between measured light exposure, rather than reported time spent outdoors, and myopia are needed to assess how much bright light is necessary for myopia prevention [57].

#### *3.5. The Non-Visual Effects of Light on Humans*

Light influences hormonal secretion, our behavior, and the regulation of circadian cycles [37]. The suprachiasmatic nucleus (SCN), which is responsible for generating circadian rhythms, receives inputs from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the activation of the photopigment melanopsin. Dopamine (DA) is a neurotransmitter that can modulate the refractive development of the human eye depending on its concentration, which varies with the amount of light received, especially in the blue range. DA also functions as a regulator of the circadian internal retinal clock [26].

Diurnal rhythms in ocular dimensions, rhythms in retinal signaling, and molecular biology are linked to refractive development, and DA acts as an inhibitor of axial elongation [9]. Retinal dopamine (DA) messages affect many intraretinal processes of the light–dark cycle, including the overall light–dark adaptive state of the retina. A possible role of circadian rhythms in the control of ocular refraction could explain the effects of outdoor and light exposures on refractive development. Researchers believe that the retinal endogenous clock could be connected with the light entering our visual system and with retinal Zeitgebers, and could therefore influence the development of the eyeball and produce ametropias if the appropriate amount of ambient light is not received at each moment of the day. This connection could give a biological explanation for the apparent positive effect of daylight exposure in curbing nearsightedness [25].

Dopamine may play an important role in the refractive development of the human eye [58]. Within the two families of dopamine receptors in the retina (D1 and D2), the D2-family receptors (D2R and D4R) are considered the most important myopia-related receptors [17].

A significant number of publications state that exposure to sunlight and being outdoors stimulate the release of dopamine in the retina [17,27,58]. The release of dopamine is increased by daylight or indoor lighting and declines without light [25,58]. Dopamine levels in the retina increase under intense or bright light, which could be an option for controlling the progression of myopia [11].

The dopamine metabolite dihydroxyphenylacetic acid (DOPAC) is generally accepted as an indicator of dopamine release. Midday retinal levels of this metabolite were reported to be 30% higher under an elevated light-level condition (15,000 lx for 7.75 h per day) [33].

Chronodisruption is a common problem in modern societies because of our lifestyle habits. The most developed societies have the highest prevalence of child myopia. The study of chronodisruption could improve knowledge of the mechanisms that produce myopia and open new fields of research and treatment to curb the growth of the childhood myopia pandemic [25].

Since the discovery, about 20 years ago, of non-visual pathways of light absorption, it has been known that apart from the image-forming (IF) effects of light, from which the criteria for correct lighting were developed, non-image-forming (NIF) effects of light also exist. The discovery of NIF effects has strengthened researchers' belief in the importance of daylighting [59] and has raised new criteria to be taken into account for proper interior lighting [60]. Because of all of the factors mentioned above, the parameters to be met by a luminaire and its environment for the proper lighting of a workstation have to be modified and expanded [46].

#### *3.6. Regulatory Lighting Framework*

The technical characteristics of luminaires and the lighting conditions at the workplaces where they are used are clearly defined in international standards [61,62]. The standards ISO 8995:2002 "Lighting in Workplaces" [61] and prEN 12464-1:2019 "Lighting in Workplaces" [62] are in the process of being reviewed. International regulations are beginning to take account of the NIF effects of light [62].

The recommended illuminances for classrooms range between 300 and 750 lx; blackboards should be maintained at an illuminance of 500 lx while avoiding specular reflections, and classrooms and tutorial rooms should be maintained at an illuminance of 300 lx.

Lighting requirements are determined by the satisfaction of three basic human needs: visual comfort, where workers have a sense of well-being, which also contributes indirectly to a high level of productivity; visual performance, where workers are able to perform their visual tasks even in difficult circumstances and over longer periods; and safety [62].

The Illuminating Engineering Society of North America (IESNA) [63] has published indoor lighting recommendations in which the target intensities vary depending on age, something not found in the ISO and European standards.

Table 1 shows the recommendations for target illuminances (lx) based on the visual ages of the observers (years) for certain indoor education situations.


**Table 1.** Recommended illuminance (lx) at indoor workplaces by age group.

The illumination must be controllable. Classrooms used for evening classes and adult education should be maintained at an illuminance of 500 lx [63].

The most important parameters to be taken into account for the manufacture and sale of luminaires can be taken from the CIE Technical Reports and from Commission Delegated Regulation (EU) No 874/2012 of 12 July 2012 [64].

The photometric and chromatic tests outlined in EN 13032-4 [65] cover some of the most important parameters to be taken into account, including the luminous flux emission, luminaire opening angle, correlated color temperature (CCT), color rendering index (CRI), and blue-light risk (according to EN 62471) [66]. Other factors are the energy consumption indicators and light source service life parameters. In a study of LED luminaires purchased by consumer inspectors in commercial establishments in Castilla-La Mancha, Spain, the luminaires were found to meet the requirements set out by international standards, and all of those analyzed fell into risk group 0 (no risk) [67,68].

#### *3.7. New Indoor LED Lighting Luminaires*

The differences between indoor and outdoor ambient lighting, such as intensity and wavelength, mean that modern electronic lighting equipment may offer a way to control myopia as an environmental factor [17].

Scientists around the world are proposing new luminaires, the spectral power distribution (SPD) of which is intended to resemble that of the sun in order to match our circadian cycles [69].

Several prototypes developed by researchers are shown below; these researchers claim that their different combinations of LED light improve performance and could avoid altering our circadian rhythm.

One proposal is a remote-controllable light source grouping, built around a quadruple-chip light-emitting diode driven by pulse-width-modulated currents to provide healthy lighting. RGB tricolor LEDs can emit pure white light, but their color rendering index worsens when the CCT is changed, which motivates combining a white LED with the red, green, and blue chips (RGBW). The authors of this research believe that RGBW LEDs are the best solution for good interior lighting (Figure 2) [70].

**Figure 2.** Parameters of the RGBW LED.

Other researchers think that the best option is tunable LED lighting with five individually controlled channels: red, green, cyan, warm white, and cool white (RGCWW). The researchers claim that, with this new source of indoor lighting, they managed to obtain a dynamic daylight spectrum and complete monochromatic mixing thanks to the width and shape of the LED spectra, the green gap, and the available computation capability. Such sources can be very useful for illuminating interiors where not enough daylight enters (Figure 3) [71].

Another approach involves a spectral energy distribution with qualities close to daylight and a high CRI: a continuous and balanced spectrum in the visible emission range, ideally harmonized with human sensitivity. The high-energy part of the blue range can be reduced to minimize the peaks and troughs of the spectral energy distribution. The spectral intensities in the visible range of three developed LED combinations are shown below (Figure 4) [72].

**Figure 4.** Intensity and spectral distribution of different combinations of LED visible-light emitters; (**a**) the emission spectrum of a 5000 K LED package under blue LED excitation [72].
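The multi-channel mixing idea behind these prototypes can be sketched as a least-squares fit of channel spectra to a daylight-like target; the Gaussian channel models (peak wavelengths, widths) and the flat target used here are illustrative assumptions, not data from the cited prototypes:

```python
import numpy as np

# Sketch of multi-channel LED spectral matching: approximate a daylight-like
# target SPD as a weighted sum of channel spectra. The Gaussian channel
# models (peak wavelength, width) and the flat target are illustrative
# assumptions, not data from the prototypes cited in the text.
wl = np.linspace(380, 780, 401)          # visible range, 1 nm steps

def channel(peak_nm, width_nm):
    """Idealized Gaussian emission spectrum for one LED channel."""
    return np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

# Five channels loosely following the RGCWW idea: red, green, cyan,
# plus warm and cool white modeled as broad Gaussians.
channels = np.stack([
    channel(630, 15),    # red
    channel(530, 20),    # green
    channel(490, 15),    # cyan
    channel(600, 60),    # warm white (broad)
    channel(470, 60),    # cool white (broad)
], axis=1)

target = np.ones_like(wl)                # idealized flat daylight-like target

# Ordinary least squares, then clip negatives (a crude stand-in for
# non-negative least squares; real drivers cannot emit negative light).
w, *_ = np.linalg.lstsq(channels, target, rcond=None)
w = np.clip(w, 0.0, None)
fit = channels @ w
rmse = np.sqrt(np.mean((fit - target) ** 2))
print("channel weights:", np.round(w, 2), " RMSE:", round(rmse, 3))
```

The residual error concentrates where no channel emits (the spectral extremes and the green gap), which is precisely why the cited prototypes add extra channels such as cyan.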

The CIE, in its Technical Report CIE 015:2018, showed different spectral distributions. The next figure shows the relative SPD of each of the nine LED illuminants proposed in CIE Publication 015:2018, indicating their corresponding CCTs and distances to the Planckian locus in the UV space [73].

Figure 5 shows the SPD Standard D65 and D50, together with their corresponding indoor illuminants, ID65 and ID50. Also shown in the figure is the spectral transmittance of the average glass used to define the two indoor CIE illuminants [74].

Researchers have tried to reproduce daylight by creating light sources with five different types of light-emitting diodes: red, green, and blue LEDs plus warm-white and cold-white LEDs (RGBWW) [71,75–77]. The circadian action factor (CAF), circadian stimulus (CS), and equivalent melanopic lux (EML) are some of the NIF parameters that researchers take into account in the design of new interior light sources [40,75,76,78].
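A metric in the spirit of EML can be sketched by weighting an SPD with approximate melanopic and photopic sensitivity curves; the Gaussian curves and the two example LED spectra below are rough assumptions (the real metrics use the tabulated CIE sensitivity functions):

```python
import numpy as np

# Sketch of a melanopic-vs-photopic weighting, in the spirit of metrics such
# as the equivalent melanopic lux (EML). The Gaussian approximations of the
# melanopic (peak ~490 nm) and photopic (peak ~555 nm) sensitivity curves,
# and the two example LED spectra, are rough assumptions; real metrics use
# the tabulated CIE sensitivity functions.
wl = np.arange(380, 781)                 # visible range, nm

def gaussian(peak_nm, width_nm):
    return np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

melanopic = gaussian(490, 40)            # ipRGC/melanopsin sensitivity (approx.)
photopic = gaussian(555, 45)             # V(lambda) visual sensitivity (approx.)

def melanopic_ratio(spd):
    """Ratio of melanopic- to photopic-weighted output of a light source."""
    return (spd * melanopic).sum() / (spd * photopic).sum()

cool_led = gaussian(450, 20) + 0.6 * gaussian(560, 60)   # blue-pumped cool white
warm_led = 0.3 * gaussian(450, 20) + gaussian(600, 60)   # warm white
print("cool white:", round(melanopic_ratio(cool_led), 2),
      " warm white:", round(melanopic_ratio(warm_led), 2))
```

As expected, the blue-rich cool-white source scores higher on the melanopic weighting than the warm-white one, mirroring the intuition behind CS/EML-style metrics.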

**Figure 5.** Spectral power distributions (SPDs) of (**a**) the LED illuminants, (**b**) Standard D65, and (**c**) Standard D50 [79].

Comparing daylight with new LED light sources, new indoor luminaires cannot exactly reproduce the NIF effects of light on the circadian cycle because of differences between people and the multiple environmental factors that cannot be controlled [68,77,79].

#### **4. Discussion and Conclusions**

This paper has presented the possible causes that the scientific community considers responsible for the current myopia pandemic. The authors found an extensive bibliography for each section of this review; as it is not possible to cite them all, the authors have tried to refer to articles from first-quartile journals.

There is increasing evidence implicating daylight and circadian cycles in the process of the emmetropization of the eye. The possible influence of light on myopia progression has been taken into consideration by the developed countries in Asia most affected by this pandemic [9].

These measures seem to us to be appropriate and novel in the face of this important child health problem; even so, we believe that they may not be sufficient, given the periods of the day that children inevitably spend in enclosed spaces, whether in class, studying at home under educational pressure, or using electronic devices of all kinds at close distances in their leisure time. These activities are generally carried out in poorly lit environments, and we therefore believe that it is in this area that solutions should be sought.

Some authors claim that, in addition to increasing outdoor time, the use of small doses of atropine, rigid contact lenses that flatten the corneal surface overnight, and prescription glasses and special contact lenses can reduce the progression of myopia by 50.0% or more [2,9]. Our aim is to propose possible solutions that make these treatments and optometric correctors unnecessary.

For more than a decade, studies have proposed the importance of the blue part of light as a regulator of DA synthesis in the retina as a key factor in the appearance or otherwise of myopia [11,26,35,64], while others have claimed that it is found in the VL part of the spectrum [17,51]. The blue part centered at 490 nm is key in the regulation of circadian cycles; however, as outlined in recent CIE publications, the five photoreceptors present in the retina influence this regulation in a way that is still largely unknown [37].

Numerous studies of the IF and NIF effects of the light absorbed through our eyes have found an important role for each part of the visible light spectrum [33]. Scientific experts have recommended the amount and composition of light we should receive in each part of the day [80]. The photoreceptors of the eye absorb light between the wavelengths of 380 and 780 nm [81].

We propose that education authorities consider having children spend at least one hour a day in the open air in the mornings, performing sports or educational activities, especially during autumn and winter.

By simple evolutionary logic, the entire range of the electromagnetic spectrum to which our retinal photoreceptors are sensitive must have a positive function, whether known to researchers or not. To help slow down the current myopia pandemic, we believe that we should neither increase the intensity of indoor light sources nor increase the blue part of the spectrum in a massive and uncontrolled way. Rather, we should try to replicate the spectral distribution and intensity of each part of the visible solar spectrum in the least imperfect way possible, analyze in each interior space the lack of intensity and spectral distribution, from violet to the limit between red and infrared, of the natural light that can enter through windows (daylighting), and complement it with new combined LED light sources for interiors. It would be interesting to study how this spectrum can be replicated continuously from 380 to 780 nm with existing LED light sources, and to consider replacing window panes with materials that allow light to pass from 380 nm upwards. Given the progress of LED technology and studies of the emission of existing LED luminaires, we believe that it is possible to develop luminaires with regulated irradiance whose emission is higher in the blue range during the morning, in compliance with current international regulations.

The use of luminaires that combine white LEDs with red light could justify reducing the amount of blue light in the luminaires.

On the basis of the research carried out for this study, together with that of other publications currently under review, the research group of the degree in Architecture of the University San Pablo CEU is developing a utility model with emission across the entire visible range, in which the irradiance of the blue and VL bands can be regulated at three or more levels.

An enormous number of variables influence the light that enters our eyes at every moment of the day, depending on so many factors that it is not possible to manufacture a luminaire that can replace daylight [82]. However, unless new LED indoor lighting covering this range of visible light is developed and its biological responses studied, we will not know whether it can be part of the solution to the myopia pandemic.

**Author Contributions:** Conceptualization, D.B.M.; methodology, D.B.M.; formal analysis, D.B.M.; investigation, D.B.M.; resources, D.B.M.; data curation, D.B.M. and R.A.G.-L.; writing—original draft preparation, D.B.M. and R.A.G.-L.; writing—review and editing, D.B.M. and R.A.G.-L.; visualization, D.B.M. and R.A.G.-L.; supervision, R.A.G.-L.; project administration, D.B.M.; funding acquisition, D.B.M. and R.A.G.-L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors wish to thank CEU San Pablo University Foundation for the funds dedicated to the Project Ref. USP CEU-CP20V12 provided by CEU San Pablo University.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **The Spirit of Time—The Art of Self-Renovation to Improve Indoor Environment in Cultural Heritage Buildings**

**Coline Senior <sup>1,</sup>\*, Alenka Temeljotov Salaj <sup>1</sup>, Milena Vukmirovic <sup>2</sup>, Mina Jowkar <sup>1</sup> and Živa Kristl <sup>3</sup>**


**Abstract:** The purpose of this paper is to explore the challenges of an old, low-standard urban district with a strong historical and cultural heritage and to propose more sustainable renovation solutions acceptable to the residents and the municipality. The challenges of physical renovation or refurbishment are complex due to the poor condition of the buildings, municipal ownership and governance, mixed management with the community, and low rents, which are insufficient to cover the costs. The paper discusses the proposed solutions for living standards, supported by research in two directions: (i) available resources and the reuse of materials, and (ii) developing renovation guidance for inhabitants from the building physics perspective, including indoor environment quality. Challenges related to energy efficiency are addressed from the decision-making perspective to overcome the lack of motivation to invest in energy-efficient measures at the individual and community level. The interdisciplinary approach complements engineering-focused studies with a focus on comfort conditions and the influence of occupant habits in sustainable buildings. The methods used were a literature review and case studies with observations and a survey, aiming to cover the technical, social, and historical aspects of the sustainable renovation of cultural heritage buildings with the same level of importance. The results show that, to keep a sustainable, low-cost urban living model, instructions for self-renovation are valuable guidance for non-professional actors in making more sustainable choices. In conclusion, we can emphasize that the inhabitants are accustomed to lower living standards, so the project aims to present proper solutions for improvement as a balance between new sustainable technical solutions, personal self-renovation skills, habits, and health.

**Keywords:** self-renovation; habits and comfort; sustainable building material; cultural heritage buildings

#### **1. Introduction**

An increasing body of scientific literature has put the focus on energy efficiency measures for cultural heritage buildings, but this approach tends to undermine cultural values, which are often described only as constraints [1]. In their extensive systematic literature review on the sustainable refurbishment of historical buildings, Loli and Bertolin (2018) [2] pointed out the "Scandinavian paradox": the Scandinavian countries are often shown as frontrunners when it comes to implementing and operationalizing sustainability goals [3], yet the scientific production on methods for the sustainable maintenance and renovation of cultural heritage buildings in Scandinavia is relatively scarce [2]. Norway is quite advanced in terms of energy-efficiency renovation, with a yearly rate of 2.5% of the existing building stock compared to the 13 other EU countries where data are available, which have rates between 0.5% and 2.0% [2]. However, when it comes to "deep renovation", defined as interventions that fundamentally affect a building's performance, the European rate is stagnating at 0.2% [4,5].

**Citation:** Senior, C.; Salaj, A.T.; Vukmirovic, M.; Jowkar, M.; Kristl, Ž. The Spirit of Time—The Art of Self-Renovation to Improve Indoor Environment in Cultural Heritage Buildings. *Energies* **2021**, *14*, 4056. https://doi.org/10.3390/en14134056

Academic Editors: Boris Igor Palella and Roberto Alonso González Lezcano

Received: 28 May 2021 Accepted: 28 June 2021 Published: 5 July 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The renovation of cultural heritage buildings can produce positive effects for the socio-economic regeneration of cities [6] and boost the application of contemporary living models, sustainable management strategies, and maintenance procedures that can balance the up-to-date requirements of energy efficiency, human comfort, and operating cost reduction [7], while also ensuring energy and economic benefits [6,8].

In this regard, different studies have indicated that the most critical factors for the deep renovation of existing housing concern non-technical barriers related to social, logistic, legislative, and financial constraints [9], while behavior and local cultural factors in particular can reduce the efficiency of renovation initiatives, impacting energy use, conservation state, management, and maintenance operations [6,9,10]. The importance of cultural heritage for the community and the specific character it has built over time are two sensitive elements that need to be combined with contemporary expectations and a responsible, sustainable lifestyle.

For policy makers, supporting self-renovation projects in residential buildings can contribute to lessening social exclusion and spatial segregation and encourage tenants to feel a sense of responsibility for their dwellings [11]. However, the lack of technical and legal knowledge among residents, combined with high costs, remains a major barrier to the implementation of energy-efficient renovation measures [12,13]. In addition, some researchers emphasize a lack of motivation as a barrier to more sustainable renovation solutions [14–16] and propose focusing on social sustainability aspects [17], more concretely: 'factors affecting participation' [18,19], 'relationship between participants' [20–22], 'engagement strategy' [23–25], and 'influence of participation' [26–28]. As stated by Esmaeilpoorarabi et al. (2020) [29], it is important to mobilize resources, improve relationships, promote cooperation, and ultimately achieve community engagement and trust [30,31].

Self-renovation in low-income urban communities is quite unique in that it transcends a purely technical approach and often constitutes an opportunity for residents to take greater ownership of their homes, promote social inclusion, and strengthen community bonds [32]. Financing and workforce training are also central challenges that need to be addressed [12]. Social aspects are widely underrepresented in energy research, and although technological engineering studies are undoubtedly important for improving the technical performance of buildings, energy efficiency ultimately relies on human actions and informed choices; more interdisciplinary research is therefore needed to include a softer approach and address the social barriers and drivers of a successful energy transition [33–35].

#### *1.1. National Regulations for Conservation of Cultural Heritage Buildings*

The purpose of building protection is to preserve our sources of cultural heritage and history. Building protection became a formal matter in Norway in 1913, when the first national antiquarian was hired. Since then, the Directorate for Cultural Heritage (the national antiquarian) has been responsible for cultural heritage policy [36]. In Norway, there is broad political will to preserve cultural heritage buildings [37]. The report further points out that the preservation of cultural heritage can contribute knowledge and input to sustainable resource management by providing insight into how environmental problems have affected our patrimony and a better understanding of how they can be solved. Regarding listed buildings, it states: "*From a long-term socio-economic perspective, preserving valuable parts of the building stock rather than demolishing and rebuilding can be significantly more profitable*" [37].

The Norwegian Planning and Building Act, § 31-1, states that during renovation or rehabilitation the municipality shall ensure that the historical, architectural, or other cultural value associated with a building is preserved as far as possible. Nevertheless, there is a challenge associated with the maintenance and preservation of the protected building stock due to ever-increasing demands for lower greenhouse gas emissions [38].

The existing building stock in Norway accounts for around 40% of the country's total energy consumption, with residential buildings accounting for 22% and non-residential buildings for 18% [39]. In addition, most buildings built before 1950 were built without insulation, which makes both protecting and improving this building stock challenging. Fulfilling the requirements related to energy savings could mean major structural interventions and may lead to changes that affect the cultural historical values of a building [36].

#### *1.2. Improving Energy Efficiency in Existing Buildings*

According to Ugarte et al. (2016), two of the main problems that low-standard households face as a consequence of low energy efficiency are extreme indoor temperatures and high humidity levels. Rehabilitation could address these problems through measures with three main outcomes: improvements in indoor air quality, humidity levels, and indoor temperature [40]. To this end, retrofit programs may focus on weatherization based on changes to the building shell, such as insulating the façade walls, ceiling, and basement, painting the façade, or replacing windows and doors [41,42].

Given its environmentally friendly nature, low maintenance cost, and broad availability, wood is one of the most renewable materials in construction [43]. However, when used in building façades, it needs careful treatment and maintenance to minimize thermal and humidity discomfort sources in buildings. Energy losses through the façade are often driven by humidity, which provokes expansion and contraction of the material and mould growth, degrading the timber. From an environmental point of view, one of the best ways to protect wood is with linseed oil [44]. It saturates the external layer of the wood so that water cannot penetrate the structure, while still allowing moisture trapped inside to evaporate [45]. It is important to design the building envelope for future, more demanding climate conditions; in northern Europe, climate adaptation has to take into account moisture resistance (due to increased precipitation) and a slight rise in temperatures [46,47].

When refurbishing the wooden structure, two main aspects need to be considered. First, if a timber member needs to be replaced, the new item must be of the same wood species and quality. Secondly, it is recommended to use the same (or similar) techniques that the original craftsmen used to convert and assemble the timber [45].

This way, the identity of the building can be preserved. Moreover, some materials should be avoided or at least reduced to a minimum, such as epoxy resins, steel reinforcements, and plastic paint for coatings [45].

Existing retrofit programs for low-standard buildings are based on the weatherization of households, which improves comfort and reduces costs [48]. Weatherization usually focuses on changes to the structure, such as insulation of ceilings and walls, air sealing, and duct sealing, and the resulting electricity savings can reach almost half of the original bill [41]. Some of the actions that can be taken to achieve good weatherization are [42,49]:


Regarding the refurbishment of doors and windows, these play an undeniable role in the ventilation and weatherization of a household, but their high prices sometimes keep owners from replacing them. Nevertheless, they are key elements to address: old doors can be leaky and have extremely poor insulation, and single-glazed windows are inefficient and should be replaced with double glazing or interior storm windows, which provide moderate insulation and prevent air infiltration [41].

Insulation is a key element for achieving a good level of energy efficiency. Some natural and renewable materials have been shown to perform well as insulating materials, offering important advantages in cost, ecology, and energy savings compared to traditional materials.

Natural fibers, such as technical hemp, jute, and flax, have very good mechanical, acoustic, and thermal insulation properties. Moreover, they can be combined in different proportions and be useful in façades, roofs, floors, partition walls, and external walls [50].
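To make the benefit of an added insulation layer concrete, the standard thermal transmittance formula U = 1/(Rsi + Σ d/λ + Rse) can be evaluated for a thin timber wall. The sketch below uses the 7 cm stacked-timber thickness reported for the case-study buildings, but the conductivities (solid timber λ ≈ 0.13 W/mK, hemp-fibre insulation λ ≈ 0.04 W/mK) and the 10 cm insulation thickness are assumed, typical handbook values, not measurements from this study:

```python
# Rough U-value sketch for a solid timber wall, before and after adding
# a natural-fibre insulation layer. Conductivities are assumed handbook
# figures, not measurements from the case study.

R_SI, R_SE = 0.13, 0.04  # standard internal/external surface resistances (m2K/W)

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_per_mK) tuples."""
    r_total = R_SI + R_SE + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# 7 cm stacked-timber wall, as observed at Strandveien 19/21
bare = u_value([(0.07, 0.13)])
# same wall with a hypothetical 10 cm layer of hemp-fibre insulation
insulated = u_value([(0.07, 0.13), (0.10, 0.04)])

print(f"bare wall:        U = {bare:.2f} W/m2K")       # ~1.41
print(f"with 10 cm hemp:  U = {insulated:.2f} W/m2K")  # ~0.31
```

Under these assumptions the insulation layer cuts the transmission losses through the wall by roughly three quarters, which is the order of magnitude motivating the weatherization measures discussed above.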

Some additional costs can derive from the insulation of the façade, since activities such as scaffolding and painting cannot be overlooked, and homeowners rehabilitating a building exterior must pay for them regardless [42].

Other non-structural improvements that could be applied are [42]:


That said, good weatherization methods can clearly be decisive for the inhabitants of a building, not only in economic terms but also health-wise, as inadequate heating, cooling, or ventilation of the spaces can be a life-or-death issue [51,52].

#### *1.3. The Scope of This Study*

The scope of this study, considering the transformation of Svartlamon's historical and cultural background, the poor housing conditions, the reuse of materials, insufficient funding, and the existing self-renovation model, aligns with the New European Bauhaus initiative, which advocates a softer approach to the challenges of reaching sustainability goals. This European Commission initiative aims to bridge the world of science and technology with the world of art and culture by developing affordable and accessible living spaces, striving to improve quality of life and highlight the simplicity, functionality, and circularity of materials while accounting for the need for comfort and attractiveness in citizens' daily life [53]. Building upon this initiative, this study focuses on three principles:


The paper aims to address some of the challenges linked to unsupervised self-renovation work, which can result in the alteration of cultural heritage buildings, the negative consequences of a maintenance backlog, and unsustainable technical solutions. At the same time, this study looks into approaches that could increase the sustainable renovation rate, which currently lags behind due to socio-cultural barriers [54]. The case study proposed in this paper focuses on a "soft approach" to energy renovation. This paper deals with energy efficiency from the decision-making perspective, addressing the challenge of a lack of motivation to invest in energy-efficient measures. The interdisciplinary approach complements engineering-focused studies with a focus on comfort conditions and the influence of occupant habits in sustainable buildings. It proposes practical self-renovation guidance for residents centered on sustainable technical and technological solutions, with a focus on indoor environment, health, and wellbeing. This is motivated by a gap identified in previous studies, which revealed communication challenges between public authorities and residents in the field of energy retrofitting of cultural heritage buildings that resulted in decaying buildings rather than improved sustainability [13]. This can also be seen as an urban experimentation case (Newton and Frantzeskaki 2021) that can be replicated in similar circumstances.

The main research question is *how to raise the self-renovation standards of a culturally protected though poorly maintained area*. The aim is to explore how the houses can be refurbished while keeping the existing aspects of the community, its history, its culture of re-using materials, and its low budget, and adding new ones such as thermal insulation and general improvements of the indoor environment. The physical properties of the buildings, such as ventilation, lighting, thermal weaknesses, and thermal insulation, are analyzed in order to propose appropriate measures to meet the growing demand for a pleasant indoor climate while safeguarding the cultural heritage values. In addition, a survey conducted among residents of a low-standard urban community is presented to understand the opportunities and challenges of sustainable self-renovation of cultural heritage buildings.

This paper is structured as follows: Section 2 presents an overview of the materials and methods used in the study; Section 3 shows the study results; Section 4 presents guidance for sustainable self-renovation in accordance with the theoretical and empirical background; and Section 5 concludes with the main findings.

#### **2. Materials and Methods**

To address the research question, we investigate in three directions: (i) improving the living standards and culture of a low-income intentional community, (ii) widening the reuse of materials as part of a circular-economy solution, and (iii) finding high-quality life-cycle cost (LCC) solutions for specific problems. A number of buildings in the intentional community of Svartlamon in Trondheim, Norway, were chosen as the case study, with observation as the focal point. This was combined with multiple on-site visits and a questionnaire survey. The case of Svartlamon was selected for its uniqueness and its self-organized maintenance and operation system. The economic, socio-cultural, and environmental challenges posed by its ownership structure and eventful history, as well as the subversive nature of Svartlamon, are presented in the following section. The present study is the result of several projects running at the Norwegian University of Science and Technology with master's students since 2018.

The data collection and observation were conducted in collaboration between master's students and the authors of this paper during the spring and fall semesters from 2018 to 2020 [55,56]. Initial data about Svartlamon's historical and cultural background were collected through informal meetings with the Svartlamon housing foundation (Svartlamon boligstiftelse) and desk research at the Trondheim Municipal Archive Center (Trondheim Byarkiv). Indoor observations were organized by the Svartlamon housing foundation several times per year for the students and the research team. Observations of the outdoor environment were taken and documented whenever needed.

A questionnaire survey was designed to investigate the living conditions in Svartlamon, the self-renovation work organized and carried out by residents, and their perception of sustainable technologies. The questionnaire included four sections. The first section collected demographic information such as age, employment status, composition of the household, period of residence in Svartlamon, affiliation to a sub-community, and types of facilities respondents shared with others (bathrooms, toilets, kitchen, living room, laundry room, etc.). The second section, on living conditions, consisted of questions about the social aspects of community life, motivational factors for living in Svartlamon, opinions on possible changes or improvements, and perceptions of the core values of the community. The third section collected data on self-renovation habits and culture, the respondents' involvement in such work, and their willingness to participate more in the future. The last section addressed the technologies available within the households as well as residents' perception of innovative sustainable technologies.

The questionnaire was distributed in digital and hard-copy versions to all inhabitants of Svartlamon (ca. 200 people), and answers from 24 people were collected. A qualitative analysis of the results was then conducted by the authors to serve as a basis for recommended renovation measures that respect and reflect the spirit of the Svartlamon community.

#### *The Overview of the Study Area*

Svartlamon is a neighborhood in the city of Trondheim, Norway, with a strong presence of listed buildings (Figure 1). It is inhabited by an alternative low-income community and regulated as the first urban ecological research area in Norway, which makes it a particularly relevant experimental case to explore the challenges of sustainable renovation of cultural heritage buildings.

**Figure 1.** Trondheim Historic Centre, Svartlamon, and National Railroad (©Google 2021, edited by authors).

The origin of Svartlamon dates back to 1860, when it was established as a settlement on the outskirts of Trondheim for dock workers, sailors, and workers from nearby factories. Due to railway construction in 1889, the area was cut off from the larger district of Lademoen and became its dirtiest and poorest part; thus, it was nicknamed Svartlamon, or "black" Lamon [57]. There was no water or sewage system, which made it a squalid residential area with precarious living conditions. During World War II, some houses were demolished to allow the construction of the Dora II bunker. Even though Svartlamon was later incorporated into the city, basic infrastructure was severely lacking (Figure 2) [58]; therefore, by 1980, the municipality planned to demolish the district and turn it into an industrial area. As the demolition operations started, a small area comprising wooden houses remained untouched and was squatted by outcasts and criminals.

**Figure 2.** Svartlamon in 1964 (Trondheim Municipal Archive Center).

However, in the 1980s the cultural heritage character of the houses was brought to the attention of a community of artists and activists, who occupied the houses and settled in, starting the first wave of basic renovation to repair damage caused by negligent tenants [59]. They made the area livable again and started their own alternative-living community. Svartlamon is also defined as an intentional community [60], a community '*which choose to live together with a common purpose, working cooperatively to create a lifestyle that reflects their shared core values*' [61].

In 1990, the municipality attempted once more to tear down the Svartlamon area, arguing that the houses were unfit for living and did not meet the regulations in place for residential buildings. This marked the beginning of a fight between the municipality and the residents, who organized themselves into the Svartlamon Residents' Association, '*Svartlamon Beboerforening*'.

After long years of discussions and conflicts, the municipality decided in 2006 (Figure 3) to designate Svartlamon an experimental urban ecological area, the first of its kind in Norway [62,63]. Since 2001, the Housing Foundation '*Svartlamon Boligstiftelse*' has acted as a steering body, collecting the rents and managing the everyday operations of the community. It also bridges communication between the community and its municipal landlord.

**Figure 3.** Timeline of Svartlamon's history (by the authors).

The challenges it faces today are caused by the long period of neglect of the area, a mixed governing and management structure, an unclear distribution of responsibilities, and insufficient maintenance funding, as income is based on low rents. All of this has led to poor indoor and outdoor housing conditions. Despite the very active and socially oriented community, the residents' self-renovation attitudes and habits, together with their lower-income demographic background and short-term tenancy contracts, did not provide enough security to pursue high-quality solutions and resulted in a multiplication of quick fixes rather than long-term solutions. Today, the contract period has been extended to twenty years, which provides a better planning foundation for more sustainable renovation solutions and future development.

In line with the above, and bearing in mind the presented history of its origin and basic function, the settlement can be characterized as industrial heritage with significant social value, as important evidence of the life of ordinary people and their identity [64–66]. In addition, it has technological and scientific value with regard to the history of manufacturing, engineering, and construction, as well as significant aesthetic value in terms of architecture, design, and planning. These values refer specifically to industrial heritage, its materials, components, equipment, and methods of installation in industrial environments, as well as written documentation and intangible records related to the memory of the people and their customs [67].

Svartlamon is covered by zoning plan R0219b, which came into effect on the 27th of June 2006. Based on the zoning plan, urban ecological efforts at Svartlamon include both physical and process-related efforts. The physical experiments involve testing new and affordable solutions in housing types, technology, and architecture, with the main focus on utilizing the physical resources in the area [68].

Process-related trials, on the other hand, involve testing new planning, management, rehabilitation, and collaboration processes, with the main focus on utilizing the human resources in the area. According to the zoning plan, the concept of urban ecology therefore implies a holistic view where development in the area is based on an interaction between the physical and human resources, where one cannot be seen separate from the other [68].

Further on, the zoning plan states regulations regarding refurbishment in the area. Section § 3–5 of the zoning plan specifies that: "*Existing buildings shall be preserved except for the buildings marked as demolition objects in the planning map of the area. For existing buildings, no major reconstruction, extensions or facade changes can be made without this being presented to the Cultural Heritage Management Office in advance.*" [68]

In this study, we give the technical, social, and cultural aspects equal weight. This combined approach to the sustainable renovation of cultural heritage buildings is seen as an added value for tackling the challenges in this domain.

#### **3. Results**

#### *3.1. Case Study Description—Observation*

At Svartlamon, there are a total of 25 residential buildings with 130 dwellings/tenancies. The total gross area of the residential buildings is approximately 7000 m². About 200 people live in the dwellings that Svartlamon boligstiftelse rents from the municipality. There are also four commercial buildings, and the gross area that the culture and industry foundation rents from the municipality constitutes ca. 2500 m² (Table 1).

#### **Table 1.** Characteristics of Svartlamon.


About 80 people work in varying positions in Svartlamon. In addition, the kindergarten is managed by the municipality. The area is regulated as a special conservation area and has about 35 antiquarian-classified buildings (residential buildings and outbuildings) [69].

Svartlamon is a district located to the east of Trondheim city center which, in addition to the residential buildings, hosts a reuse shop, a free shop, a cultural festival, a concert venue, a kindergarten, and a book café. Most residents share some facilities, such as bathrooms (toilets and showers) and kitchens. The old houses are wooden and brick buildings. Inhabitants have renovated parts of the buildings since their construction in the 19th century, but the buildings are now in generally bad condition. Lack of money and the complex organization between the municipality and the inhabitants have led to insufficient investment in the maintenance of the buildings and the regeneration of the area. The buildings are owned by the municipality, and since 2001 the housing foundation has been responsible for collecting rents and ensuring maintenance. This was formalized in the contract binding the foundation and the municipality, by which the municipal landlord transferred all its responsibilities for maintenance and preservation to the foundation. Today, the foundation manages 151 leases divided among 35 houses with about 200 people living there. The foundation has three full-time employees: a manager, a carpenter, and an electrician. The money collected by the foundation finances their salaries, renovations, and maintenance operations. In order to keep the rent as low as possible, residents are expected to do as much work as possible themselves. The Housing Foundation has a permanent office and meeting place on site. In addition, in 1990 the Svartlamon residents' association was founded with the purpose of preserving the houses and defending low-income people's right to live downtown. Everyone who has a lease with the Housing Foundation is automatically a member of the Residents' Association. They have monthly meetings and govern with a flat structure and consensus. The residents of Svartlamon have a leading role in the renovation. They manage to do almost everything themselves on a low budget, reusing materials from old houses in other parts of Trondheim and trying to keep the buildings in their original state. The focus of this study is mostly on the technical and technological aspects, while also reflecting the economic, organizational, social, cultural, and historical ones.

Svartlamon is located close to the city center of Trondheim, in a climate dominated by the winter season: a cold period with short daylight hours, relatively little precipitation (mostly in the form of snow), and low humidity. It lies north of the humid continental climate zone. There is an average of 272.0 days of precipitation per year. The average annual temperature in Trondheim is 4.8 °C. The warmest month is July, with an average temperature of 13 °C; the coolest is January, with an average temperature of −3 °C.
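As a back-of-the-envelope illustration of the heating demand implied by these temperatures, the monthly mean values quoted above can be converted into heating degree-days. The 17 °C base temperature used below is an assumed convention, not a figure from the study:

```python
# Rough heating-degree-day (HDD) estimate from the monthly mean
# temperatures quoted for Trondheim. The 17 degC base temperature is an
# assumed convention, not a value from the study.

BASE = 17.0  # assumed heating base temperature (degC)

def monthly_hdd(mean_temp, days):
    """Degree-days for one month; no heating is needed above the base."""
    return max(BASE - mean_temp, 0.0) * days

print(monthly_hdd(-3.0, 31))  # January, coldest month  -> 620.0
print(monthly_hdd(13.0, 31))  # July, warmest month     -> 124.0
```

Even the warmest month contributes a non-zero heating demand under this assumption, which underlines why envelope quality matters year-round in this climate.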

The wooden buildings (Figure 4) are houses with bedrooms and a kitchen; not every house has a bathroom, and the community shares a few bathrooms. Most of the houses were built at the end of the 19th century; they are now deteriorated, even though residents have tried to keep them in satisfactory condition. Most are two-story houses. The area features sloping terrain, some gardens, pebbled ground, and brick paving on some paths. There are no paved roads or sidewalks between the houses. The site lies near the Strandveien road, with car and bus traffic. There is some lighting along the main road but not on the smaller paths between houses, and the railway runs right next to the houses. Most of the houses have no basement, except for the storage room of the former milk shop.

**Figure 4.** Svartlamon wooden buildings (NTNU).

The foundations are in bad condition. The houses have many windows, but these are damaged, showing problems with condensation and humidity. All the façades are painted; however, the paint is very old and deteriorating (Figure 5). The roofs are not insulated, and all the houses have steep roofs; many have tile roofs. Clay tiles are a traditional roofing material that is both widely available and easy to shape into channels that direct the flow of water. Some houses have corrugated metal roofs, which are inexpensive and quick to install. The rainwater gutters and downpipes are mostly in good condition near the roof, but the lower parts are worse (leakages), causing constant humidity on the walls and allowing moisture to seep into the houses, so moulds, fungi, and mosses were spotted; these affect the health and well-being of the residents. The façades are complicated to insulate due to the cultural protection, since the design of the façades and windows must remain original. Walls, doors, windows, partitions, and finishes are wooden and old.

**Figure 5.** Deteriorating façade (NTNU).

Many walls are not straight and lean (Figure 6), likely because the houses were built on poorly compacted embankments or on heterogeneous soil. No drainage systems could be seen around them. Some of the load-bearing elements are rotten, and even some entry doors are not straight. Most of the houses have no bathroom, the kitchen sink being the only one in the house; the houses with bathrooms share them. The plumbing system is old and only partly maintained, not to the standard found in newer houses. The electrical system is at full capacity: old but safe.

**Figure 6.** Leaning house (NTNU).

A typical example of a housing block is presented: the two buildings Strandveien 19 and 21 (Figure 7), from 1893, with four flats each, shared facilities in a common basement, and a shop on the ground floor. The foundation material is stone; the basement walls are made of brick, and the upper floor levels have a wooden structure. Over time, only small interventions were made: the foundations were stabilized in 2001, a wind barrier was installed on the north façade in 2016, and the damaged wooden panels were replaced (Figure 8). Both houses need major refurbishment, as some foundations and walls are falling apart and the structure is in very bad condition, showing many cracks.

A comparison of the floor plans shows some smaller changes in the usability of the space on the floors and in the basement. A basement apartment was converted into a bathroom and storage space for the inhabitants, and some internal walls were removed to create a larger common space. In one of the flats on the second floor, an extra bedroom and a private bathroom were built.

An analysis of the technical condition, based on the observations, is presented (Figure 9). The foundations of the two buildings are connected and share the same structure, consisting of stones. They support brick walls with a thickness of about 0.40 m and a height equal to the basement height, up to 3.00 m. There is no drainage system on the outer side of the walls. From the basement upward, the inner wall structure is carried by wooden columns. The wooden façade is made of stacked timber with a thickness of 7 cm.

**Figure 7.** Strandveien 19 and 21 (Trondheim Municipality, 2016).

**Figure 8.** Renovation of the façade (Trondheim Municipality, 2016).

**Figure 9.** Illustrated section of the building in Strandveien 19 (Sekkal, 2019).

The structure of the slabs consists of primary beams resting on the columns and the brick wall. The wooden slab is made in the classic form: a secondary layer of beams spaced 50 cm apart, a layer of wooden floor panels, and clay or gravel in between as insulation. The frame of the buildings is classic, with wooden trusses supported by the columns in the walls. Above the trusses, purlins support perpendicular wooden boards. The distance between the trusses is more than 6 m.

The windows are not the original ones and have single or double glazing. However, some frames were observed to be damaged, mainly due to humidity.

Almost no insulation was found; the only insulation is in the basement, in front of the brick wall (15 cm of glass wool). Some of the residents have installed internal insulation.

The walls have many cracks, both inside and outside the buildings. A large crack in the cladding of the north façade, between the two houses, extends along the entire height of the ground floor and is 1 to 2 cm wide. According to documented material, it has been there since 2012 and has remained the same size (Figure 10). There are many other smaller cracks in the brick walls, with missing cement binder, and in the wooden walls as well. On the north façade, several humidity traces could be seen (Figure 11).

**Figure 10.** Crack between the buildings Strandveien 19 and 21 (NTNU).

**Figure 11.** Humidity damage (NTNU).

If this situation persists over a longer period, it could cause structural and health problems due to indoor humidity (Figure 12), which could freeze during the winter or lead to the formation of moulds and fungi. In addition, the cladding contact between the brick and the wooden wall is not tight enough (Figure 11, upper left corner). There is some damage on the south wall: some pieces of brick are falling, and there are cracks between the bricks with missing binder and traces of humidity coming from the outside, i.e., from the soil (the bedrock in Svartlamon is very shallow). The analysis shows various causes of humidity in the wall: outdoor cracks, poor cladding condition, no drainage, capillary rise, gutters in bad condition letting too much water stream down the façade, and a wooden board above the ground-floor wall that fails to 'drop' water away. The humidity traces are located at the top of the wall or near the ground, behind the cracks, in the contact area of the two cladding materials, and behind the wooden façade, whose panels are not tight enough and have no vapour barrier.

Ventilation in the buildings is not efficient. Air exchange occurs only through the doors and windows, in places through leaks in the walls, and via small extractors in the bathrooms (Figure 13). There is no ventilation system in the kitchen.
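For orientation, the arithmetic behind sizing a mechanical extract for such rooms is simple: the required airflow is the room volume times the target air-change rate (ACH). The room dimensions and the 0.5 ACH target below are illustrative assumptions, not requirements cited by the study:

```python
# Required extract airflow for a target air-change rate (ACH).
# Room dimensions and the 0.5 ACH target are illustrative assumptions.

def required_airflow_m3h(volume_m3, ach):
    """Airflow (m3/h) needed to exchange the room volume `ach` times per hour."""
    return volume_m3 * ach

room = 4.0 * 3.0 * 2.5                  # hypothetical 12 m2 room, 2.5 m ceiling
print(required_airflow_m3h(room, 0.5))  # -> 15.0 m3/h
```

Even this modest airflow cannot be guaranteed by window airing alone in winter, which is why the guidance in Section 4 considers dedicated extract ventilation for kitchens and bathrooms.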

**Figure 12.** Indoor wall in one of the rooms (NTNU).

**Figure 13.** Integrated humidity control, occupancy sensor, and downtime (NTNU).

The acoustics are problematic, with a lot of sound transmission. During a visit to one of the apartments, a conversation in another could be heard, as well as a washing machine in the common stairwell. The residents have made an agreement among themselves to respect a certain "making noise" schedule. Noise can also be heard through the windows of the north façade, which faces the railway.

The electrical system is old and safe but insufficient, so new equipment should be considered. The residents mostly use electrical heaters. The lighting conditions are quite poor.

The municipality conducted an evaluation of the maintenance backlog of the Svartlamon building stock in 2016. Their findings, which align with our on-site observations (Table 2), confirm that the available maintenance funds were very low compared to the required renovation work (examples for Strandveien 19 and 21 are given in Table 3).

**Table 2.** Problems of the building stock based on site observation.


**Table 3.** Estimated renovation costs compared to available funds (Trondheim Kommune, 2016).


In the case study of Svartlamon, different local resources were found. The observations revealed several workshops available at no cost to the residents; photos of the workshops are shown in Figures 14 and 15. Here, the renters have both materials and tools that make it easier to maintain their homes. There is also a dedicated group that manages the workshops.

**Figure 14.** Workshop session on window restoration (Svartlamon residents association, 2016).

**Figure 15.** Workshop at Svartlamon (NTNU).

The voluntary workshop group manages the workspaces (Figure 15) at Svartlamon, with associated tools, materials, and a recycling stock available both to the residents and to the residents' association.

They receive a lot of recycled materials from both outside and inside Svartlamon. From time to time, Svartlamon also receives old material and equipment from houses that the municipality is tearing down. The idea is that access to tools and materials should stimulate reuse and local refurbishment. A cautious estimate is that the voluntary group organizing the workshop contributes about 250–300 h of work per year [69].

Based on their own observations and this information, the students decided to investigate solutions that could improve the living standards in the buildings in a more sustainable way with respect for the low-budget and self-renovation culture deeply anchored in the Svartlamon community.

#### *3.2. Results of the Survey*

Based on the results of the questionnaire, relevant information regarding living conditions in Svartlamon is presented in this section. Of the respondents, 54.2% share bathrooms and toilets with other households. The respondents rated their relationship with their neighbors above average (on a scale from 0 to 6, 100% rated it above 3, with 58.3% rating it 6/6). The main factors that influenced their choice to move to Svartlamon were the social aspect/sense of belonging (83.3%), freedom (79.2%), budget/economic reasons (66.7%), and having friends/family already living in Svartlamon (66.7%). To the open question on what they liked most about living in Svartlamon, the social aspects came out on top, including the community feeling, local democracy, shared facilities and green areas, economic freedom (non-binding leases, the possibility to settle in a mobile home, no mortgage, cheap rent that allows residents to focus personal investments elsewhere, e.g., "pursuing their dreams"), and a general feeling of freedom (to set up art installations, start new activities and events, or pursue DIY projects). To the open question on whether they would like to change anything in Svartlamon, most respondents wished for even greater community engagement, i.e., that more people would actively participate in collective activities and decisions.

The interesting thing about the self-renovation culture in Svartlamon is that a majority of the respondents have experience in renovation work (Figure 16).

**Figure 16.** Participants' involvement in renovation/refurbishment projects.

Furthermore, results point towards a strong community aspect of these projects, with respondents being involved not only in their own house renovation but also in those of other residents (Figure 17), and a further engagement of the community at large beyond the household (Figure 18).

**Figure 17.** The type of buildings and the participants' involvement in renovation plans.

**Figure 18.** Involved groups in renovation plans.

However, most of them did not discuss the project with the community beforehand (Figure 20), and a vast majority expressed their willingness to be involved in future projects (Figure 19).

**Figure 19.** Participants' interest in involvement in future renovation plans.

**Figure 20.** Respondents' engagement with community before renovations.

Regarding the technologies available to their household, results show that the most basic needs are met, while more sustainable technologies are not (Figure 21). However, a majority of respondents expressed no need for new technologies (Figure 22).

From the open question about the use of new technologies in their daily life, some respondents expressed a wish for more sustainable energy sources such as solar panels combined with a thermoelectric generator, heat pumps, and bio toilets.

"*We are saving up to buy a thermoelectric generator which will help us utilize our solar energy system better during the dark months of winter, via the wood stove. We are also saving up for a bio/compost toilet*". (Survey respondent)

Others expressed a need for keeping up basic comfort, with examples such as kitchen fans and floor heating systems. Among the majority who expressed no need for new technologies, respondents wished to keep the simple living standards as they are, for cultural reasons or for lack of space in their dwellings.

"*I have what I need", "My house doesn't need anything else, nor is our house big enough", "The rooms are original 1880s, and I like the quiet atmosphere in the rooms*". (Survey respondents)

An analysis was designed to address challenges and find solutions from different perspectives that could be useful for inhabitants with a low budget and a 'do it yourself' mindset.

The students covered different areas, such as indoor temperature, air quality, noise disturbance, energy consumption, cracked walls, fire safety, and the lack of private bathrooms.

#### **4. Guidance for Sustainable Refurbishment for Improving Indoor Environment**

As a result of the observations and the survey, the students developed, under the supervision of academic and professional experts, a practical guide in the form of a small magazine for residents, in which they addressed each part of the sustainable renovation work (Figure 23, Trouillon et al. (2019)).

**Figure 23.** Practical guide developed by NTNU Students for Svartlamon residents.

The first part of this magazine recalled information about the cultural heritage background and communicated the results of the survey. Further on, each element of the sustainable renovation guide was addressed in a practical way that remained easily understandable for a non-expert audience. The guide provided background information on the buildings' symptoms, the importance of addressing them, the potential for reusing materials, and the possible renovation measures to be taken.

The following section goes into the details of each element covered more succinctly in the magazine.

#### *4.1. Reuse of Materials*

With global concerns about saving energy, protecting the environment, and the scarcity of resources, reuse and recycling are among the very first steps towards a promising future. The inhabitants and the entire community of Svartlamon have a good understanding of these issues, and many of them explain that they live in this neighborhood not only for the low rent, but rather to live a simple life with a low environmental impact. Furthermore, this practice is the preferred one in the community, for both economic and environmental reasons. Svartlamon could also benefit from a collaboration with Loopfront, a digital platform created in Trondheim that enables the reuse of materials in the construction industry, promoting a circular economy [70].

More specific information about the reuse of construction materials is presented below in order to provide a basis for technical guidance for non-professional residents who engage in self-renovation work.

#### **Bricks**

Brick is a robust material with a long service life; thus, recycled bricks have great potential for use in new constructions. Recycled bricks can be used in different contexts depending on their technical characteristics. Reclaimed bricks that are frost-proof can be used for cladding in façades, while those that are not frost-proof can be used in a plastered façade. Bricks used over several floors must have sufficient compressive strength. The reuse potential of bricks varies with their period of production. Bricks covered with paint or surface treatments that may contain PCBs, chlorinated paraffins, heavy metals, and other hazardous substances should be avoided [71,72].

A project at Lilleborg in Norway provides the following unit-price figures for the reuse of bricks:


Results of the Lilleborg project indicate that due to the costs associated with sample extraction and quality control, the reuse volume per demolition object should be at least 50,000 bricks [72].

#### **Wood**

Results from the project "Reuse House Trondheim" showed that wood is one of the most popular materials for reuse [73]. Reused material groups include all types of treated and untreated wood, glued-laminated timber, and wood fiber products; spruce and pine are the most commonly used types of wood in Norway. Wood is most often used in constructive elements, and it makes up about 30–40% of the total waste during a demolition. In the "Reuse House Trondheim", approximately 85% of the timber frame and cladding were made of reclaimed wood, and smaller wood elements like doors, window frames, and kitchen interiors were also reused [74]. It is important to keep a record of the quality assurance process when reusing or repurposing wood so that the material keeps its quality and usability. CCA- and creosote-impregnated wood should be avoided, as it belongs to the group of impregnated wood and is considered hazardous waste.

#### **Metal**

Metals such as steel, zinc, copper, and aluminum components are sorted for reuse. One should be aware that materials belonging to the hazardous waste category must not be reused. Surface treatments must be assessed against limits for hazardous substances and for any risk of leaching [74]. One must be careful with metal components whose surface treatments contain hazardous substances (asbestos, heavy metals, PCBs, chlorinated paraffins).

#### *4.2. Indoor Environment*

In the case of low-standard buildings, retrofit recommendations are based on weatherization of households, to improve indoor comfort and reduce energy-consumption costs. Weatherization is usually focused on changes in the structure, such as insulation of ceilings and walls, air sealing, and duct sealing.

#### **Ventilation**

To maintain a healthy indoor environment, fresh air is required in buildings to dilute and minimize odors, to improve the oxygen level for respiration, and to increase thermal comfort [75]. Natural ventilation could be a good, environmentally friendly solution to improve air quality in the Svartlamon buildings, as the budget is very tight and the cultural heritage must be preserved [76]. Such a system can not only bring in fresh air using the natural force of the wind and provide a high ventilation rate, which can be cost- and energy-efficient compared to mechanical systems, but can also (if applied properly) provide the opportunity of accessing daylight in buildings [77].

The proposed solution implies having straight pipes (reused when applicable) run through the roof and the ceilings of the apartments in order to exhaust the polluted air from the secondary rooms (kitchens in this case). Each apartment would have its own pipe, as shown in Figure 24, which presents a possible solution for the house at Strandveien 21. Putting vents through the walls is also a good solution for air change. However, the location and orientation of these vents are crucial in terms of controlling the airflow, the pollution release rate, and the amount of fresh air coming in [78]: vents should not be placed close to the exhaust pipe but on the opposite side to improve airflow, preferably facing the wind, and not close to the windows, which are a cold area.

**Figure 24.** Simplified drawing of the proposed ventilation system for Strandveien 21 (Practical guide by Gendarme, 2019).

The ventilation outlets would consist of a vertical duct (on the opposite side from the air inlets) running from the ceiling of the apartment to the roof of the house, complemented by air extractors. The vertical straight pipe evacuates hot air through the roof thanks to the low density of heated air.
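The driving force behind such a vertical exhaust duct can be illustrated with a rough stack-effect estimate. The sketch below uses the standard stack pressure relation and an orifice-flow approximation; the duct height, diameter, discharge coefficient, and temperatures are assumed illustrative values, not measurements from Strandveien 21.

```python
# Rough stack-effect estimate for a vertical exhaust duct (illustrative
# sketch only; all dimensions and temperatures are assumed values).
import math

RHO_OUT = 1.25   # kg/m^3, approximate density of cold outdoor air
G = 9.81         # m/s^2, gravitational acceleration

def stack_airflow(height_m, t_in_c, t_out_c, duct_area_m2, cd=0.6):
    """Volumetric airflow (m^3/s) driven by the stack effect.

    Uses dp = rho_out * g * h * (T_in - T_out) / T_in with absolute
    temperatures, then Q = Cd * A * sqrt(2 * dp / rho)."""
    t_in, t_out = t_in_c + 273.15, t_out_c + 273.15
    dp = RHO_OUT * G * height_m * (t_in - t_out) / t_in  # driving pressure, Pa
    return cd * duct_area_m2 * math.sqrt(2 * dp / RHO_OUT)

# Assumed example: 6 m duct, 21 degC inside, 0 degC outside, 125 mm radius-free
# circular duct of 0.0625 m radius (i.e., 125 mm diameter)
q = stack_airflow(6.0, 21.0, 0.0, math.pi * 0.0625**2)
print(f"{q * 3600:.0f} m3/h")  # ~77 m3/h under these assumptions
```

The exercise shows why even a modest temperature difference yields a useful extraction rate through a straight duct, and why the effect weakens in mild weather, when mechanical extractors remain necessary.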

#### **Sound insulation**

Materials used for sound insulation need to be selected carefully. The goal is to prevent, or at least reduce, the sound coming from other apartments, since it is the most annoying sound source. To keep the budget low, only the walls (separating the common stairs and the apartments) and the floors (separating the first and second floors) are addressed. Sound-absorbent materials act as springs in a mass–spring–mass system: the absorbent is a soft, porous material that dampens sound and is placed between two rigid walls considered as the masses [79]. These rigid walls stop part of the sound according to the mass law: the heavier and denser a material, the higher its acoustic insulation. Growing environmental awareness has triggered a shift towards more environmentally friendly materials from renewable resources, such as waste wool and recycled polyester fibers [80]. Wood fiber is good for acoustics since it is quite heavy, as is mineral wool, which has a good absorption factor at all frequencies [81,82].

Frames can be created in front of the existing walls separating the apartments from the common stairs/area, in which the insulation will be placed. For optimal insulation, the frame must be detached from the wall. That way, if one wall vibrates, it does not transmit its vibration to the second wall.
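The mass law mentioned above can be put into numbers. The sketch below uses the common empirical single-leaf approximation R ≈ 20·log10(m·f) − 47 dB; the surface masses are assumed example values for illustration, not measurements of the Svartlamon partitions.

```python
# Illustrative mass-law estimate of airborne sound reduction (single leaf).
# The empirical formula R = 20*log10(m*f) - 47 is an approximation valid
# well above the resonance and below the coincidence frequency.
import math

def mass_law_r(surface_mass_kg_m2, freq_hz):
    """Approximate sound reduction index R (dB) of a single leaf."""
    return 20 * math.log10(surface_mass_kg_m2 * freq_hz) - 47

# Assumed example: a light board of ~9 kg/m2 vs. double that mass, at 500 Hz
r_single = mass_law_r(9, 500)    # ~26 dB
r_double = mass_law_r(18, 500)   # doubling the mass adds ~6 dB
print(round(r_single), round(r_double))  # 26 32
```

This is why a detached second leaf with an absorbent cavity outperforms simply thickening one wall: doubling the mass only buys about 6 dB, whereas the decoupled mass–spring–mass system performs much better above its resonance frequency.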

#### **Lighting conditions**

Ensuring good natural lighting conditions in a building is important to improve the health and wellbeing of residents [83]. One solution to enhance natural lighting would have been to install larger windows, but the cultural heritage regulations make this impossible. The alternative is to choose colors that reflect natural light. The light reflectance value can vary from a very high percentage for a white glossy finish to almost zero for matte black. This affects daylight illuminance in the room and impacts the visual comfort of residents [84]. Finally, improving the artificial lighting equipment in the rooms can contribute to better conditions.

#### **Windows**

One solution would be to use windows with a lower U-value than those already installed in order to reduce heat loss through the windows, especially on the north façade where there is the least solar gain [85–87]. However, windows with a low U-value are expensive and have a lower transmittance, which deteriorates natural lighting [88]. A better-performing frame, such as an insulated frame, reduces heat loss through the frame and, if well installed, will greatly reduce air leakage [89]. However, some of these options can be expensive, and the design of the building does not allow just any window, as the external visual aspect must be kept. The cheapest approach is to stop air leakage around the windows with a seal and to use reclaimed material to replace damaged parts.
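The benefit of a lower U-value can be illustrated with the steady-state transmission formula Q = U·A·ΔT. The U-values, window area, and temperatures below are assumed example figures, not data from the Svartlamon buildings.

```python
# Illustrative comparison of transmission heat loss through a window
# before and after replacement. All figures are assumed examples.

def window_heat_loss_w(u_value, area_m2, t_in_c, t_out_c):
    """Steady-state transmission loss through a window: Q = U * A * dT (W)."""
    return u_value * area_m2 * (t_in_c - t_out_c)

# Assumed: a 1.2 m2 north-facing window, 21 degC inside, -5 degC outside
old = window_heat_loss_w(2.8, 1.2, 21, -5)   # assumed old double glazing
new = window_heat_loss_w(1.2, 1.2, 21, -5)   # assumed modern low-U window
print(f"{old:.0f} W vs {new:.0f} W")  # 87 W vs 37 W
```

Under these assumptions, the loss per window drops by more than half, which is also why simply sealing leaks around the frame, at near-zero cost, is the recommended first step before any expensive replacement.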

#### **Wind barrier**

An airtight building envelope in lightweight constructions in cold and moderate climates is normally realized with a continuous interior air- and vapor-tight barrier. However, a proper interior air barrier that fulfills the stricter environmental requirements is usually labor-intensive due to the many internal joints in buildings (e.g., interior walls, perforations necessary for electrical and plumbing devices) [90,91].

Given the high possibility of unwanted infiltration of cold air into buildings through forced convection, a wind barrier can help protect the outer insulation layer from such infiltration [92]. Wind barriers can also serve as a drainage plane to prevent water infiltration into the structure [92].

#### **Air leakage**

Uncontrolled infiltration of air through the building results from holes in the envelope (chimneys and ducts), gaps between building components (especially at the roof), joints around movable elements such as doors and windows, and the penetration of air through building components under wind pressure. This infiltration means a high heat loss (up to 30%). Reducing infiltration will therefore significantly reduce heat loss and overconsumption [93]. The following actions can be taken to reduce infiltration:


An old building 'breathes' through its air infiltration weaknesses; its ventilation is mainly a consequence of low airtightness. Therefore, by reducing air leaks, the building becomes more airtight and this type of natural ventilation becomes less efficient. It is important to consider ventilation in the renovation planning in order to avoid a reduction of indoor air quality causing moisture, material degradation, and, subsequently, health problems for the residents [94].
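The heat loss due to infiltration can be estimated with the ventilation loss formula Q = ρ·c_p·V·(n/3600)·ΔT, where n is the air change rate in air changes per hour (ACH). The apartment volume and leakage rates below are assumed illustrative values.

```python
# Illustrative infiltration heat-loss estimate. The volume, air change
# rates, and temperature difference are assumed example values.

AIR_RHO = 1.2   # kg/m^3, density of air
AIR_CP = 1005   # J/(kg*K), specific heat capacity of air

def infiltration_loss_w(volume_m3, ach, dt_k):
    """Heat loss (W) from uncontrolled air change:
    Q = rho * cp * V * (n / 3600) * dT."""
    return AIR_RHO * AIR_CP * volume_m3 * ach / 3600 * dt_k

# Assumed example: a 150 m3 apartment at 26 K indoor-outdoor difference,
# leakage reduced from 0.8 ACH to 0.3 ACH by sealing joints and cracks
before = infiltration_loss_w(150, 0.8, 26)
after = infiltration_loss_w(150, 0.3, 26)
print(f"{before:.0f} W -> {after:.0f} W")  # 1045 W -> 392 W
```

Under these assumptions, sealing alone saves well over half a kilowatt of continuous heating demand in cold weather, while the remaining 0.3 ACH preserves a minimum of air renewal, which is why controlled ventilation must accompany airtightening.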

#### **Insulation of exterior components**

Given the considerable influence of building envelope properties on energy performance, one of the most common methods to improve energy efficiency is to insulate the internal and external walls [95]. To minimize energy use in buildings, providing thermal and moisture insulation in the wall layers tends to be very efficient. Thermal insulation can be installed on the external or internal side of the building envelope. However, in the case of historical buildings, applying insulation to the external walls should be done more carefully due to the necessity of preserving their ancient and distinctive appearance [96].

However, the influence of moisture transfer and condensation inside the walls (which also affects the thermal performance of the materials) is usually overlooked. For instance, Barbosa and Mendes (2008) [97] showed that ignoring the influence of moisture in thermal insulation layers may lead to an underestimation of the yearly heat flux in buildings, resulting in a huge waste of energy. Hence, it is crucial to consider both the thermal and the humidity aspects when installing insulation in internal or external wall layers.
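The thermal effect of adding an insulation layer can be sketched with the standard series-resistance model, U = 1/(R_si + Σ(d/λ) + R_se). The wall build-up and conductivities below are assumed example values; note that moisture uptake raises the effective λ of the insulation, reducing the benefit, which is the point made above.

```python
# Illustrative U-value calculation for a wall build-up using the
# series-resistance model. Layer data are assumed example values.

def u_value(layers, rsi=0.13, rse=0.04):
    """U-value (W/m2K) of a wall build-up.

    layers: list of (thickness_m, conductivity_W_mK) tuples;
    rsi/rse are the standard internal/external surface resistances."""
    r_total = rsi + rse + sum(d / lam for d, lam in layers)
    return 1 / r_total

# Assumed build-up: 250 mm brick (lambda ~0.6 W/mK), with and without
# 100 mm of blown-in cellulose (lambda ~0.04 W/mK dry; a moisture-laden
# layer would have a noticeably higher effective conductivity)
bare = u_value([(0.25, 0.6)])
insulated = u_value([(0.25, 0.6), (0.10, 0.04)])
print(f"{bare:.2f} -> {insulated:.2f} W/m2K")  # 1.70 -> 0.32 W/m2K
```

Even a modest blown-in layer dominates the total resistance of an uninsulated masonry wall, which is why filling existing cavities is so cost-effective, provided the hygrothermal behavior is checked first.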

The given circumstances in Svartlamon offer both advantages and disadvantages in terms of insulating the external walls. Insulation measures are usually characterized by high costs, whereas Svartlamon has to deal with limited budgets. For this reason, as well as the historical and heritage character of the buildings in this area, it is not advisable to install insulation on the outer shell of the building as a composite thermal insulation system. Instead, techniques can be used that may differ slightly from the state of the art or that do not fully comply with the regulations. The loosened requirements for building standards in Svartlamon offer a great advantage in this respect: apart from fire protection, the energy standards for new buildings do not necessarily have to be fully met. In the following section, different possibilities are explained that can lead to an enhancement of the living situation and comfort.

Internal insulation is characterized by numerous disadvantages, as poor execution can result in extensive consequential damage [85]. In historical buildings, internal insulation can also lead to a reduction of floor area, changes in spatial room proportions, loss of historic fabric, and influence on the hygrothermal behavior of the insulated walls [98]. However, with careful execution, it also offers several advantages with regard to Svartlamon. For instance, single rooms such as the living room can be insulated as desired to reduce heating times and energy loss. Although this means that valuable living space is lost, it also saves costs, as no scaffolding is necessary and the wooden cladding of the façade does not have to be removed and replaced.

Due, among other things, to the widespread use of timber and timber-framed constructions in Norway, blown-in insulation should also be taken into consideration. With this technique, all cavities are filled with loose bulk material, and the insulating effect of the wall is significantly increased without any visible change to the external appearance of the building. However, the installation should be carried out, or at least supervised, by a specialized company, since a deficient execution leads to large consequential damages. Nevertheless, it is a very cost-effective and, above all, very fast method requiring few working hours. Numerous sustainable and natural materials are also available as insulation material; for example, old newspapers can be collected and recycled to provide a much cheaper alternative to manufactured raw materials.

The structural and physical properties of the exterior walls are influenced by the right choice of materials and therefore also by the choice of insulating materials. Nowadays, many different insulating materials are used in the building industry. Tables 4 and 5 were adapted from Gabriel and Ladener (2018) [99] and Kolb (2014) [100] and developed by one of the students [101] for incorporation into the practical guide; each gives an overview of the individual properties of the selected insulating materials and techniques.


**Table 4.** Insulation materials and their properties (Beck, 2019).


**Table 5.** Insulation techniques and their potential for self-renovation (Beck, 2019).

The information provided consists of approximate costs per square meter for the materials and their technical parameters, as well as indications of which insulation techniques could be more or less easily applied in self-renovation work. A point system is used to describe the insulating efficiency, summer heat protection, and moisture control (Table 4), whereby five filled points stand for very high (very good) and five empty points for low (very bad). The same point system is used to assess the insulation techniques (Table 5).

#### **5. Discussion and Conclusions**

The purpose of this study was to explore the challenges related to sustainable refurbishment of an experimental urban ecological area with a strong historical and cultural heritage. We proposed possible renovation solutions and instructions with respect to the self-renovation culture that dominates in the community, while also accounting for the complex ownership structure, governance, and insufficient funds for the maintenance backlog. Findings from this paper provide insight into two aspects of self-renovation:


The research, prepared together with master's students from NTNU, is based on on-site observation and close collaboration with the Svartlamon housing foundation, thus providing a solid basis for recommending practical, real-life solutions that reflect the needs and will of the community while improving the overall sustainability of the building stock.

Based on the research, this study presents answers to the research question *How to raise the self-renovation standards of a culturally protected although poorly maintained area*?

Results show that in order to keep a sustainable, low-cost urban living model, instructions for self-renovation, including information on their impact on sustainability, are valuable guidance for non-professionals to make informed choices. We can emphasize that the inhabitants are used to lower living standards, so the project aims to present proper solutions for improvement as a balance between new technical solutions, personal self-renovation skills, habits, and health. This study addresses the challenges related to lack of motivation, knowledge, and communication that constitute major barriers to the implementation of sustainable renovation measures by proposing concrete actions based on state-of-the-art energy-retrofit interventions that account for the pre-existing cultural and economic factors influencing community decisions and capabilities.

The community has a long history of fighting for the preservation of the area as a low-standard, independent neighborhood. Despite the maintenance backlog, the community's will to work together to renovate the buildings is an asset. With only a low budget and little specialist knowledge, they have constantly found creative ways to address their problems; for example, the infrastructure and network systems for reusing materials are already well implemented. However, local storage of materials is a limit to their ambitions, and both the community and the municipality could benefit from being part of a wider network such as that proposed by Loopfront. This pre-existing culture should be combined with more sustainable solutions such as improving the thermal insulation, upgrading the windows with better glazing and frames, etc. With an improved maintenance strategy and access to the proper tools and resources, the community could develop its self-renovation culture towards more efficient and environmentally friendly practices.

Recommendations for future self-renovation work should provide key knowledge to the inhabitants about insulation, structure, indoor climate, building physics, and energy efficiency. This would allow them to make better-informed choices when engaging in self-renovation projects. All aspects of a renovation plan are closely linked and should be considered simultaneously. For instance, repairs, the sealing of cracks and of air infiltration weaknesses, and the addition of new layers such as wind barriers and insulation make buildings more airtight, which means the ventilation system must also be reconsidered and improved. Regarding indoor comfort, it is relevant to point out that the buildings were not initially designed for modern lifestyles and hygiene habits; the inhabitants' activities and use of the kitchens and bathrooms have caused humidity damage that should be fixed in order to ensure acceptable indoor air quality.

Furthermore, when developing a practical maintenance guide, the effects and the actual cost-benefit factor for the residents of Svartlamon must also be considered, and a balance must be found between technical solutions and user behavior.

To improve the living standards, thermal comfort, and building physics in Svartlamon, the aforementioned solutions and recommendations might be a promising start to sustainably improving the houses in the long term.

**Author Contributions:** Conceptualization, C.S., A.T.S. and M.V.; formal analysis, C.S., A.T.S. and M.V.; investigation, C.S. and Ž.K.; methodology, C.S. and A.T.S.; resources, Ž.K.; supervision, A.T.S.; visualization, M.J.; writing—original draft, C.S., A.T.S. and M.V.; writing—review and editing, C.S., A.T.S., M.V., M.J. and Ž.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank all the master's students who have taken part in the course "Refurbishment Technology Specialized Course" at the NTNU Faculty of Civil and Environmental Engineering since 2018 for their work and their innovative thinking, which are the foundations of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Airflow Analysis of the Haida Plank House, a Breathing Envelope**

**Roberto Alonso González Lezcano <sup>1</sup> and María Jesús Montero Burgos 2,\***


**Abstract:** The Haida plank house is one of the most important models built by the Native American Indians. Built on the southwest coast of Canada, it adapts the tradition of the ancient pit houses to the requirements of the humid and cold climate characteristic of the Haida Gwaii Islands. The construction is composed of two main pieces: a central pit covered by a wooden envelope. Both protect its dwellers and their hearths. The ventilation system is based on two solutions: the gaps between the wall planks and a smoke hole in the roof that can be opened or closed at will. The aim of the present research is to analyze the way these two elements arrange the indoor airflow in order to ensure the comfort of the house. Four cases have been proposed, according to four different dimensions for the gaps: 1, 2, 3, and 4 cm. Each case has been doubled in order to determine how the state of the smoke hole affects the corresponding results. It has been concluded that if the gaps' width exceeds 4 cm, the airflow velocity comfort level would be exceeded. It has also been possible to observe how the state of the smoke hole influences the way the air moves around the dwelling.

**Keywords:** ventilation; CFD analysis; archaeology; architecture; native American Indians; traditional architecture; vernacular architecture

#### **1. Introduction**

The dwellings built by the Native American Indians are one of the most interesting examples of primitive architecture that can be found. When analyzing them, it can be seen that they were designed in the search for one main goal: to achieve a construction that takes advantage of its environment and of the available resources in the most efficient way [1]. In order to succeed, it was necessary to have a deep knowledge about the way that the environment worked and the advantages and disadvantages offered by those resources. Such was the case of these communities.

When the European explorers arrived in North America at the end of the 15th century, they found a range of cultures that represented a period of history already lost on their own continent [2–4]. Among the members of those expeditions there were not only sailors but also artists, historians, engineers, botanists, and anthropologists, whose work is extremely valuable nowadays [5]. Specifically in the field of architecture, they gathered a great amount of information about the dwellings they had the opportunity to visit. Dimensions, building materials, building processes, and indoor distribution are some of the features described in the documentation they created.

The present research focuses on one of these dwellings: the plank house built by the communities settled on the coast of southwestern Canada and the northwestern United States. The goal of this work is to analyze the passive ventilation strategies contained in its design.

**Citation:** Lezcano, R.A.G.; Burgos, M.J.M. Airflow Analysis of the Haida Plank House, a Breathing Envelope. *Energies* **2021**, *14*, 4871. https://doi.org/10.3390/en14164871

Academic Editor: Alessandro Cannavale

Received: 28 May 2021 Accepted: 3 August 2021 Published: 10 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The study case was built by the Haida, one of the most important groups of the zone. Two types of houses were built by this community, according to two different structural solutions. The use of six or four beams was the main decision to make (Figure 1).

**Figure 1.** (**a**) Four beams model based on [1] (p. 270); (**b**) Six beams model based on [1] (p. 270).

The first is probably the oldest [1] (p. 271). The house chosen for the present research, dwelling number 3 from the village of Ninstints, corresponds to this structural solution.

The Haida name for the village was "Red Cod Island Town", which clearly indicates the way of life of its inhabitants. Its first European visitors renamed it after the chief of the tribe, Nañ stîns, "He who is two". It contained twenty dwellings, although at present just seventeen of them can be located, as can be seen in Figure 2 [6] (p. 103).

**Figure 2.** Plan of Ninstints (dwelling number 3 in red) based on [6] (p. 10).

As is usual in this type of village, the dwellings were arranged in a line along the coast, both to ease access to the ocean and to take advantage of the airflow that arose between the mountains and the ocean. This airflow was a very useful feature of these locations, since it kept the wooden structures of the dwellings from rotting. The Haida chose red cedar wood (*Thuja plicata* Donn ex D. Don), since it is naturally covered by a special oil that also protects it from moisture [7] (p. 63).

Walls were composed of a line of wooden planks (Figure 3). It was necessary to leave a small gap between them so that the airflow could enter the house. As told by Underhill [8] (p. 84), one of these dwellings was rebuilt some time ago by modern Indians. When it was finished, they lit a hearth indoors, and smoke filled the house completely. They could not understand how their ancestors could have lived in such a smoky place. An Indian elder who had watched the whole process explained to them how important those gaps between the planks were: "Why, it was the cracks. With enough cracks, you have fine circulation of air".

**Figure 3.** Exploded axonometric of a Haida dwelling construction system based on [9] (p. 171).

The door consisted of a hole carved in a totem pole, and it was usually covered with another plank, hung from a cedar bark rope, and blocked with poles fitted into the structure [10] (p. 21).

Thus, the structure of the dwelling consisted of wooden beams and poles, and its envelope was composed of wooden planks. Roofs were built in two different ways: with wooden planks or with bark sheets. The first option was used by the richest families, while the rest of the community chose the second [10] (p. 21). Whichever solution was chosen, these pieces were overlapped, like the tiles of modern roofs, in order to keep rainwater away from the house; finally, they were weighted down with stones [11] (p. 72).

The indoor space was arranged around a central terraced pit [12] (p. 146B). It was the place where the hearth was lit. There could be several hearths, if there was more than one family living in the house. The roof planks could be removed in order to let the smoke out.

As explained before, the aim of the present research is to analyze the airflow system that drives indoor ventilation. There are three elements to focus on: the width of the gaps between the planks, the influence of the smoke hole on the indoor airflow, and the way the airflow behaves in the pit.

#### **2. Materials and Methods**

#### *2.1. Materials*

The dwelling analyzed was built by the Haida in the village of Ninstints, Haida Gwaii Islands, British Columbia. Specifically, it is dwelling number three. It was built according to the oldest type of structure, as explained before, and it had a central pit, which was the usual element of this type of dwelling [10] (p. 107). Its floor plan measured 14.1 × 14.85 m. The pit (9.45 × 9.9 m) was 1.8 m deep and was delimited by two steps, each 75 cm high. The height of the house has been obtained by taking the Haida dwelling drawn by Niblack in 1890 (Figure 4) [13] (p. 42) as a reference. In this way, its height has been estimated at 3.5 m for the side walls and 5.03 m for the ridge of the two main façades.

**Figure 4.** Cutaway section of Haida dwelling based on [13] (detail of plate XXXV).

#### *2.2. Methods*

First of all, the ventilation rate of the dwelling has been calculated in order to determine whether it was high enough. For this purpose, the second Florida Solar Energy Center method has been used [14] (pp. 101–103). This method is used to size the inlet and outlet areas of cross-ventilated spaces, and it is based on the pressure difference between the inlet and the outlet openings. The results must be read bearing in mind that they are not influenced by the ambient temperature and that the method assumes that every inlet opening has the same positive pressure value and every outlet the same negative pressure value.

According to this method, the ventilation rate is calculated according to the following formula:

$$\mathrm{Ach} = \frac{A \cdot W \cdot \sqrt{f_3 \cdot f_4 \cdot PD}}{4.3 \times 10^{-4} \cdot V}. \tag{1}$$

"A" is equal to the total effective area of the openings (m2) and is determined following the next expression:

$$A = \frac{A_o \cdot A_i}{\left(A_o^{\,2} + A_i^{\,2}\right)^{0.5}}. \tag{2}$$

The outlet area is represented by "Ao" (m²) and the inlet area by "Ai" (m²). "W" is the wind velocity (m/s), and "V" is the volume of the ventilated space (m³). There are two correction factors, "f3" and "f4". On the one hand, "f3" introduces the effect of nearby constructions and is equal to "g/h", where "h" is the height of the nearby obstacles and "g" is the distance between them. On the other hand, "f4" is determined by the floor on which the case study is located. "PD" is the difference between the windward pressure coefficient (WPC) and the leeward pressure coefficient (LPC) [14] (p. 102).
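As an illustration, Equations (1) and (2) can be implemented in a few lines of Python. This is a sketch with hypothetical opening areas, correction factors, and interior volume, not values taken from the article:

```python
from math import sqrt

def effective_area(a_out: float, a_in: float) -> float:
    """Total effective opening area A (m^2), Eq. (2): Ao*Ai / (Ao^2 + Ai^2)^0.5."""
    return (a_out * a_in) / sqrt(a_out**2 + a_in**2)

def ventilation_rate(a_out: float, a_in: float, wind: float,
                     f3: float, f4: float, pd: float, volume: float) -> float:
    """Air changes per hour, Eq. (1): A*W*sqrt(f3*f4*PD) / (4.3e-4 * V)."""
    a = effective_area(a_out, a_in)
    return (a * wind * sqrt(f3 * f4 * pd)) / (4.3e-4 * volume)

# Hypothetical example: 3 m^2 of outlets, 4 m^2 of inlets, the 8.24 m/s
# summer wind used in the article, neutral correction factors, and an
# assumed 1000 m^3 interior volume.
ach = ventilation_rate(3.0, 4.0, 8.24, 1.0, 1.0, 1.0, 1000.0)
```

Any configuration can then be compared against a minimum ventilation requirement such as the 0.63 ach threshold of the Spanish Building Technical Code.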

The dwelling has been modeled in Autodesk Revit 2020®. This model has been exported as a SAT file in order to carry out the corresponding calculations in Autodesk CFD 2021® [15]. These calculations have been developed taking into account the average conditions of the zone where these dwellings were built: in order to obtain representative results, ten locations have been taken, and their average wind velocity has been determined (Figure 5). Only the summer wind velocity has been taken into account, since that is the season when ventilation is most important.

As can be seen in Figure 6, the model has been placed in a box. Air material has been assigned to this element, as well as to the gaps between the planks, the smoke hole, and the air mass located inside the house. Wood material has been assigned to the planks, the floor, and the roof.

As boundary conditions, the wind velocity (8.24 m/s) (Figure 5) has been assigned to one of the sides of the box, normal to the longitudinal axis of the house. Zero pressure has been assigned to the opposite face, so that the latter absorbed the windflow emitted by the former. The top and longitudinal sides of the box have been characterized as "slip/symmetry". The free sides of the gaps, as well as the smoke hole, have also been assigned zero pressure (Figure 6).

The dimensions of the air box were established by the software Autodesk CFD®. Its width is five times the width of the dwelling, its length is five times the length of the dwelling, and its height is five times the height of the dwelling.

The solution mode was set to "steady state". The mesh resolution was defined as 1, the edge growth rate as 1.1, the minimum number of points on an edge as 2, and the number of points on the longest edge as 10. These meshes were generated automatically by the CFD software. Calculations have been carried out accepting the Autodesk CFD® criteria, since the software automatically limits the number of iterations to the optimum needed to achieve convergence: "Intelligent solution control" and "Automatic convergence assessment" have been selected and set to "tight". The solver is based on the SIMPLER algorithm, which belongs to one of the best-known families of CFD tools. Turbulence was simulated with the k-epsilon model, which has been widely developed and studied. The validity of the results obtained is backed by the use of Autodesk CFD®, a software package that has been tested since 2013. Its calculations have been compared in numerous research studies, such as the one developed at the Victoria University of Wellington by Dr. Jing Li [17]. Autodesk® provides an ample compilation of verification studies; some compare test models against experimental results, and others against empirical hand calculations [18].

**Figure 6.** Screenshot in which boundary conditions are indicated (Autodesk CFD®).

Four alternatives have been calculated according to the gap width: 1, 2, 3, and 4 cm. Each case has been calculated twice, since the smoke hole can be either open or closed. Five locations have been chosen inside each model, as shown in Figure 7. Each location has been divided into several levels, from level 0 to level 450, in such a way that one point was placed every 50 cm (Figure 7).
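The sampling scheme described above can be sketched as a simple probe grid. The plan coordinates of columns A–E below are hypothetical placeholders; the real positions are those shown in Figure 7:

```python
# Hypothetical (x, y) plan coordinates in metres for the five columns A-E.
columns = {
    "A": (2.0, 7.4),
    "B": (5.0, 2.0),
    "C": (7.05, 7.4),
    "D": (9.0, 12.8),
    "E": (12.0, 7.4),
}

levels_cm = range(50, 451, 50)  # nine sampling heights, one every 50 cm

# One probe per column and level: (column name, x, y, z) with z in metres.
probes = [(name, x, y, z / 100)
          for name, (x, y) in columns.items()
          for z in levels_cm]
```

Each of the 45 probes is then evaluated twice, once with the smoke hole open and once with it closed.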

**Figure 7.** (**a**) Axonometry, screenshot showing the location of the five points analyzed (Autodesk CFD®); (**b**) Floorplan of the same model (Autodesk CFD®); (**c**) Section of the same model (Autodesk CFD®).

#### **3. Results**

#### *3.1. Ventilation Rate*

As explained before, the ventilation rate of this dwelling has been calculated according to the second Florida Solar Energy Center method in order to determine whether the openings were wide enough to meet this minimum requirement. The Spanish Building Technical Code establishes that this value must be greater than or equal to 0.63 ach in dwellings.

Calculations have been carried out and, as can be seen in Table 1, the plank house whose gaps are 1 cm wide meets this requirement.


**Table 1.** Ventilation rate for 1 cm wide gaps.

Once it was determined that the ventilation rate was high enough when the gaps between the wall planks were equal to or wider than 1 cm, the research could focus on analyzing the way the airflow moved inside the dwelling while varying the width of these gaps.

#### *3.2. Airflow*

#### 3.2.1. Airflow above the Central Pit

The velocity magnitude has been calculated in five columns, each one composed of nine points from 50 to 450 cm high. Each column has been calculated for two cases, since the smoke hole could be either open or closed.

As can be seen in Figures 8 and 9, the velocity magnitude increases as the width of the gaps does. The thinner the gaps are, the more uniform the velocity is along the corresponding column.

**Figure 8.** Velocity values for the five columns of points analyzed when the smoke hole of the plank house is open (m/s) (Microsoft Excel®; Autodesk CFD®).

**Figure 9.** Velocity values for the five columns of points analyzed when the smoke hole of the plank house is closed (m/s) (Microsoft Excel®; Autodesk CFD®).

"A" is the column that gathers the higher velocity values (Figure 7). The highest values are not obtained by the thinnest gaps, in such a way that wind passes through the gaps that are 1 cm wide more slowly than through the 4 cm gaps. This difference is higher if compared with the closed-smoke hole model. Even though the velocity through the 1 cm gap is lower when the smoke hole is closed, it can be seen that it is almost invariable for the rest of the widths.

The greatest oscillations occur in columns "B" and "D", both located near the envelope. The velocity varies along them in an irregular way from the floor to the top of the house; their lowest values are reached close to the work plane (about 70 cm high), and the highest ones are found near the ceiling.

The most irregular case is that of column "C". Its values are similar to those of "B" in the lowest levels, and they reach velocities as high as those of column "A" when approaching the ceiling. Thus, the airflow behavior at this point combines that of "A" in the highest part of the dwelling with that of "B" in the lowest one. The result is the same whether the smoke hole is open or closed.

Column "D" is near the outlet gaps. Column "C" is close as well, but the velocity is higher inside the gaps located besides "D". The difference between the results obtained for "C" and "D" is based on the proximity of the former to the smoke hole: that is, the element that triggers the velocity increase in "C" at 300 cm high. Actually, the velocity magnitude is almost null near the ceiling in column "D", which means that the smoke of the hearths could be trapped in this zone. The same velocity level takes place in column "C" below 300 cm high.

The column clearly influenced by the state of the smoke hole is "E". As can be seen, the airflow is much more regular when this element is closed than when it is open. The most remarkable fact is that the pit stays almost the same: the airflow velocity in this zone does not vary with the state of the smoke hole. These graphs also reflect that, as expected, the velocity near the smoke hole is higher when it is open.

In the following diagrams (Figures 10 and 11), the way the wind flows along the longitudinal axis of the house, together with the wind vectors, can be observed. They show how the open smoke hole creates a quiet zone at the leeward elevation. This zone becomes smaller when the smoke hole is closed, and it becomes softer and larger as the gap between the planks widens. This reflects the way the Haida transformed these dwellings throughout the year by moving the boards in order to let the air in. In this way, there was no sharp limit between indoors and outdoors, as also happened in tipis.

**Figure 10.** Airflow velocity and direction inside and near the plank house with the smoke hole open: (**a**) 1 cm gap; (**b**) 2 cm gap; (**c**) 3 cm gap; (**d**) 4 cm gap (Autodesk CFD®).

The airflow velocity inside the dwelling is lower when the smoke hole is closed, but it is higher on the part of the roof surface touched by the airflow from indoors. Thus, roof erosion was more intense when the smoke hole was closed.

**Figure 11.** Airflow velocity and direction inside and near the plank house with the smoke hole closed: (**a**) 1 cm gap; (**b**) 2 cm gap; (**c**) 3 cm gap; (**d**) 4 cm gap (Autodesk CFD®).

#### 3.2.2. Airflow in the Central Pit

The following diagrams (Figures 12 and 13) make it possible to observe how the airflow behaves inside the central pit, where the hearths were lit. As they show, the wider the gaps between the planks are, the greater the turbulence inside the pit is. The flow patterns are even more complex if the smoke hole is closed.

As explained before, the airflow velocity was very low in this zone. This turbulence helped to keep its air clean, but at the same time, it allowed the smoke to spread over the place more easily.

When the smoke hole is closed, the airflow in the pit stays there and cannot exit the house directly. The only case where this becomes possible is that of the 4 cm gaps (Figure 12f). If the gaps are thinner, the air in the pit has to be dragged by other flows that are able to exit.

**Figure 12.** Airflow traces in the pit when the smoke hole is opened. (**a**) 1 cm gap; (**b**) 2 cm gap; (**c**) 3 cm gap; (**d**) 4 cm gap; (**e**) Axonometry corresponding to (**a**); (**f**) Axonometry corresponding to (**d**) (Autodesk CFD®).

**Figure 13.** Airflow traces in the pit when the smoke hole is closed. (**a**) 1 cm gap; (**b**) 2 cm gap; (**c**) 3 cm gap; (**d**) 4 cm gap; (**e**) Axonometry corresponding to (**a**); (**f**) Axonometry corresponding to (**d**) (Autodesk CFD®).

Instead, when the smoke hole is open, the air in the pit usually exits by itself. In addition to the turbulence, it is important to take into account the velocity of the air particles: Figures 8 and 9 show that the airflow is almost static in the pit if the gaps are 1 cm wide.

#### **4. Discussion**

According to the results, it can be asserted that the plank house was a pit covered by a breathing envelope. The pit was the main area of the house, where the hearths were lit and where daily life took place. The origin of this space is lost in time, but it can be found in several Indian dwellings. In different ways, it is the main element of the Navajo hogans [1] (pp. 325–336), of the pit-houses built by the Thompson groups [1] (pp. 176–179) and even of the sod-houses built by the Yup'ik communities [19] (pp. 119–156). Thermal stability was the aim in most of these cases but, as shown in previous studies [20], there are some examples in which this function is not so clear. The plank houses are one of them: the depth of the pit is not enough to influence the indoor temperature of the house, so it must have had another purpose. As can be seen in the previous graphs, it is a calm zone inside the house, where the hearths could be lit and where the cold air from outdoors decelerated.

The idea of versatility is as important in the plank house as it was in the tipi, the most important nomad dwelling. The planks of Haida houses could be removed or displaced, just as the hide envelope of the tipis could be opened or closed. In this way, both models seek continuity between indoors and outdoors, as nomad tribes do in their daily life [21]. However, the most important point is that these communities analyzed their environment conscientiously and took advantage of the opportunities it offered them [22]. As Atta [23] asserts, affordable and sustainable housing design requires integrating economic and environmental solutions from a social point of view. Traditional architecture offers a unique opportunity to investigate how these three factors can coexist successfully.

As the research developed by Knowles and Easton established [1,24], Native American architecture is much more complex than it appears at first sight. Their research focused on Acoma dwellings, which are adobe pueblos from New Mexico. Knowles determined that the orientation of Acoma dwellings lets the houses take advantage of solar radiation as much as possible, while Easton analyzed the ascending air currents that occur along the main elevation of the Pueblo houses.

The tipi can be taken as another example. Its conical design was based on the shape of a cottonwood tree leaf [25] (p. 7). This species of tree is well known for being able to resist strong winds, as tipis do; its leaves move, adapting to the wind direction in order to survive. The ventilation system of the tipis works in the same way, taking advantage of the prevailing wind so that it draws the hearth smoke out of the interior.

The present research is rooted in the field of virtual archaeology, a discipline that offers a wealth of opportunities for other areas. The virtual reconstruction of disappeared buildings can reveal forgotten solutions that are particularly useful nowadays. In this case, the reconstruction of the Haida plank house has been possible thanks to the archaeological records about Ninstints, as well as to the journals written by the first European and Russian explorers. Research studies such as those developed by Trebeleva and Glazov [26] are undoubtedly essential to understand how ancient buildings were designed and the technical solutions they contained.

The plank house analyzed here is the one built by the Haida community in the Haida Gwaii Islands. However, the "plank house" model can be found, in different forms, all along the Pacific coast of North America. From California, where the Hoopa built their own models, to Vancouver Island and Puget Sound, where the coastal Salish dwelled in their shed houses, there are enough models to carry out a deeper comparative study.

#### **5. Conclusions**

By analyzing the results shown previously, it can be concluded that if the gaps between the planks of the Haida dwelling were wider than 4 cm, the summer airflow comfort limit (1.1 m/s) would be exceeded inside the building [27] (p. 305). However, it should be taken into account that the present research has assigned the same width to every gap in each case; this is a variable that could be studied in further research.

Thus, it can be seen that the envelope of the Haida plank house lets the indoor space breathe. This dwelling is composed of two elements: this envelope and a central pit. Native American dwellings were usually designed around a central hearth protected by an envelope [1]. This envelope can establish a solid limit between indoors and outdoors, as happens, for instance, in the New Mexican adobe pueblos, but there is another group of dwellings that uses the envelope to let the indoor space breathe. Among them are the Wichita grass houses. As in the Haida plank houses, their grass envelope is designed to let the smoke out; that is to say, the whole envelope breathes. This type of solution can also be found in thatched British cottages, confirming the idea that it is common in high-humidity locations. What turns the Haida plank house into a special case is the combination of this breathing envelope and a central pit. As can be seen in Figures 10 and 11, if the smoke hole is closed, this zone is protected from the air currents that take the smoke out and keep the cedar planks dry. Thereby, the smoke hole works as a ventilation trigger and can be used to renew the air of the dwelling at will.

By means of modern technology, this type of envelope is currently being developed by many architecture research centers and offices. For instance, the design developed by the engineer Tobias Becker from Stuttgart can be cited [28]. His envelope is composed of multiple cells controlled by changes in air pressure; in this way, they regulate the incident light and the airflow as skin pores and pigments regulate a living being. In addition, the idea behind the Al Bahar towers should be highlighted [29]. Built in Abu Dhabi by Abdulmajid Karanouh and Aedas, their envelopes are composed of multiple cells inspired by the Arabic lattice. These cells close or open according to the weather, since they can let the air and the solar radiation in or shield the buildings from them. The prototype developed in Los Angeles by professor Doris Kim Sung also offers an interesting option [30]. Called Bloom, this experiment proposes the use of metal sheets that curl when heated, in such a way that the whole structure casts shadows and allows ventilation in specific areas when the temperature rises. Obviously, all these projects are technically much more complex than the plank house built by the Haida several centuries ago, but the concept hidden behind all of them is the same. The point was not just to design a breathing envelope but to get the best out of every element that composes a building, by means of a deep knowledge of the possibilities offered by the environment, in order to save the available resources.

**Author Contributions:** Conceptualization, R.A.G.L. and M.J.M.B.; methodology, R.A.G.L. and M.J.M.B.; software, R.A.G.L. and M.J.M.B.; investigation, M.J.M.B.; resources, R.A.G.L.; writing original draft preparation, review and editing, R.A.G.L. and M.J.M.B.; supervision, R.A.G.L.; project administration, R.A.G.L.; funding acquisition, R.A.G.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors wish to thank CEU San Pablo University Foundation for the funds dedicated to the Project Ref. USP CEU-CP20V12 provided by CEU San Pablo University.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Implementation of a Ventilation Protocol for SARS-CoV-2 in a Higher Educational Centre**

**Alberto Meiss \*, Irene Poza-Casado \*, Alfredo Llorente-Álvarez, Héctor Jimeno-Merino and Miguel Ángel Padilla-Marcos**

> RG Architecture & Energy, Universidad de Valladolid, Av. Salamanca 18, 47014 Valladolid, Spain; llorente@arq.uva.es (A.L.-Á.); hector.jimeno@uva.es (H.J.-M.); miguelangel.padilla@uva.es (M.Á.P.-M.) **\*** Correspondence: alberto.meiss@uva.es (A.M.); irene.poza@uva.es (I.P.-C.)

**Abstract:** The most recent research confirms that airborne transmission may be the dominant mode of SARS-CoV-2 virus spread in the interior spaces of buildings. Consequently, based on some prescriptions that implemented natural ventilation during face-to-face lessons in a university centre, an experimental characterization of several complementary options aimed at reinforcing the prevention and safety of the occupants was carried out. The action protocol adopted was based on the combination of mandatory natural ventilation, a maximum contribution of outdoor air supply in the air conditioning system, and the use of filtering devices located inside the classroom. All the strategies were incorporated concomitantly with necessary compliance with the basic conditions of social distance, occupation, use of masks and guidelines for use and cleaning within educational buildings. The suitability of this protocol was further evaluated throughout the teaching day with students and teachers by measuring the CO2 concentration. The results showed that the measures implemented successfully removed the possible pollutants generated inside.

**Keywords:** indoor air quality; COVID-19; educational buildings; air purifier; airborne transmission

#### **1. Introduction**

Throughout the current global pandemic caused by the SARS-CoV-2 virus, knowledge of the contagion mechanisms has evolved significantly. In the early stages of preventive action, the recognized contagion mechanisms were [1]:


At the beginning of February 2020, it was still not considered necessary for the population to wear face masks; in this way, mandating the population to adopt protection mechanisms that might not make sense was avoided [2]. It was in May 2020 that the mandatory use of masks was decreed in Spain [3], following the pattern of the rest of the developed countries, with the predominant aim of blocking the emission of "large droplets" infected with the virus.

Only recently, in April 2021, did the World Health Organization (WHO) begin to assess airborne transmission, although emphasis continues to be placed on the fomite path and large droplets [4]. However, a growing number of scientists do not agree with this limited view [5].

**Citation:** Meiss, A.; Poza-Casado, I.; Llorente-Álvarez, A.; Jimeno-Merino, H.; Padilla-Marcos, M.Á. Implementation of a Ventilation Protocol for SARS-CoV-2 in a Higher Educational Centre. *Energies* **2021**, *14*, 6172. https://doi.org/10.3390/en14196172

Academic Editor: Christopher Micallef

Received: 1 September 2021 Accepted: 23 September 2021 Published: 27 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In the field of interior spaces in buildings, the term bioaerosol includes the droplets exhaled by people, which can measure between 0.4 and 100 μm and remain suspended in the air for a long time [6]. In other contexts, the emission sources can be any living beings in any terrestrial or marine ecosystem.

At an early date (July 2020), the Directorate of the Architecture Technical School in Valladolid developed a supplementary strategy to face a possible airborne transmission method in addition to the strict measures adopted regarding social distance, occupancy, the use of masks and guidelines for the use and cleaning of the building.

It is important to inform society that bioaerosols may be the dominant mode of transmission of the SARS-CoV-2 virus, with some contribution from the fomite pathway and an even smaller contribution from large droplets.

#### **2. State of the Art**

*2.1. Bioaerosols and SARS-CoV-2*

Bioaerosols can contain in their structure:


The size of an individual SARS-CoV-2 virus is very small (around 0.10 μm), which enables it to lodge in the bioaerosols generated by an infected person (Figure 1).

The virus uses the bioaerosol as a means of transport and, depending on the bioaerosol's size, can affect different organs of the recipient body, which in turn influences the viral load that will determine the contagion.

All bioaerosols smaller than 100 μm can be inhaled. There are three main mechanisms for the deposition of bioaerosols in the respiratory tract (Figure 2):


#### *2.2. Classification of the Droplets Emitted by People*

In the case of sedentary or light activity (1.0 to 1.2 met), human emissions through the respiratory tract are limited to events such as breathing, speaking, coughing or sneezing. In each of these events, a mixture of air and droplets of different sizes and at a variable speed is exhaled (Table 1).

**Figure 2.** Mathematical model of particle deposition of nasal breathing [7].

**Table 1.** Average emission per person and event [8].


Their distribution by size also varies according to the type of event, which influences the way in which contagion from the infected person could occur (Figure 3, [9–12]). A usual classification establishes three size ranges:


**Figure 3.** Droplets emitted according to the type of event.

#### *2.3. Evaporation*

Evaporation processes affect the bioaerosols emitted by people. Thus, the distribution according to droplet size depends not only on the event but also on the transformation that droplets undergo through evaporation (Figure 4). This significantly affects the transition droplets (between 100 and 300 μm).

Thus, the evaporation rate of the droplets depends on the humidity conditions of the environment (the lower the humidity, the higher the evaporation) and on the emission speed (which depends on the type of event). Based on these parameters, some droplets precipitate while others undergo an evaporation process, which allows them to diffuse as bioaerosols (Figure 5). Large droplets are those that could cause infection if they hit the eyes, nose, or mouth of a healthy person. In these cases, a social distance greater than 1 m serves as effective protection, as does the mandatory use of masks.

**Figure 5.** Behaviour of emissions according to their diameter [14].

It is the particles smaller than 100 μm that normally acquire the properties of an air-diffused bioaerosol and, when inhaled, transmit the disease through the respiratory tract.
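The three size ranges discussed in Sections 2.2 and 2.3 can be summarized in a small helper function. The range boundaries are taken from the text; the labels themselves are informal:

```python
def classify_droplet(diameter_um: float) -> str:
    """Classify an exhaled droplet by diameter (micrometres).

    Boundaries follow the text: particles below 100 um behave as
    bioaerosols, droplets between 100 and 300 um are transition
    droplets, and larger ones precipitate as large droplets.
    """
    if diameter_um < 100:
        return "bioaerosol"      # diffuses in the air and can be inhaled
    if diameter_um <= 300:
        return "transition"      # may evaporate into a bioaerosol or precipitate
    return "large droplet"       # precipitates or impacts within a short distance
```

For example, a 0.4 μm exhaled particle falls in the bioaerosol range, while a 500 μm droplet is a large droplet that precipitates quickly.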

A fundamental question is to know how long the SARS-CoV-2 virus remains infectious in bioaerosols emitted by a patient. The life expectancy of the virus is conditioned by the loss of viral infectivity.

However, there is still no consensus on this matter, since the reported time varies between 1 and 3 h under typical temperature and humidity conditions (20 °C and 50% RH) [15–17]. This time range is important in order to confirm that any virus present in the air of a room has lost its contagiousness after a few hours. On the other hand, it has been demonstrated that the virus remains infectious over much longer periods of time on surfaces (Table 2).


**Table 2.** SARS-CoV-2 viability in different materials [15].

*2.4. Dynamic Behaviour of Bioaerosols*

Once the droplet is emitted by the infected person, its dynamic behaviour is determined by the simultaneous and combined action of a system of forces. The acting forces are the gravity force, the buoyancy force and the drag force (Figure 6). The preponderance of one over the others basically depends on the size of the droplet, which determines its diffusion and permanence in the air of the room.

**Figure 6.** System of forces affecting the bioaerosol.

In the case of the larger droplets, the drag force (from the impulse speed) and the gravity force are dominant, so their impact or their precipitation is favoured after a few seconds. In the case of bioaerosols, buoyancy is the main force, favouring their diffusion and increasing their permanence in the air.
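As an illustration of this force balance, the terminal settling velocity under Stokes drag (a simplification not taken from the article, valid only at low Reynolds numbers) shows why droplet size decides which force dominates:

```python
G = 9.81           # gravitational acceleration (m/s^2)
MU_AIR = 1.8e-5    # dynamic viscosity of air at about 20 C (Pa*s)
RHO_AIR = 1.2      # density of air (kg/m^3)
RHO_DROP = 1000.0  # assumed water-like droplet density (kg/m^3)

def settling_velocity(d: float) -> float:
    """Stokes terminal velocity (m/s) for a droplet of diameter d (m):
    gravity minus buoyancy balanced against viscous drag."""
    return (RHO_DROP - RHO_AIR) * G * d**2 / (18 * MU_AIR)

# With these assumed constants, a 1 um bioaerosol settles at roughly
# 3e-5 m/s (it effectively stays suspended), while a 100 um droplet
# settles at roughly 0.3 m/s (it precipitates within seconds),
# consistent with the behaviour described above.
```

Because the velocity scales with the square of the diameter, a hundredfold increase in size makes the droplet settle ten thousand times faster.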

#### *2.5. Standard Protocol*

Therefore, it is verified that bioaerosols serve as a vehicle for transporting the SARS-CoV-2 virus when they are emitted by an infected person. Their diffusion capacity is very wide, and they can follow a highly variable pathway from emission until the bioaerosol leaves the room or the virus loses its infectivity. For this reason, it is necessary to adequately ventilate interior spaces.

The instructions received from the government authorities [18] motivated the development of a first strategy in the classrooms throughout the 2020/2021 school year:


Although the measures were correct as a basic prevention mechanism, a series of drawbacks that could affect the desired level of prevention were identified:


Accordingly, in order to supplement the above measures and increase the prevention of possible contagion, an experimental study was carried out [19]. The indoor air quality (IAQ) was assessed assuming strict compliance with the aforementioned provisions, and alternative proposals based on the use of air purifiers were evaluated.

#### **3. Methods**

#### *3.1. Experimental Study in a Typical Classroom*

Successive tests were carried out in a representative classroom of the building of the Architecture Technical School of the University of Valladolid (Figure 7) under different ventilation conditions.

**Figure 7.** Typical classroom of the Architecture Technical School in Valladolid.

The classrooms have an air conditioner that vertically blows heat-treated air into the room through three inlets in the ceiling and recirculates the indoor air, which is mixed with fresh outside air (50% of the airflow, as a preventive measure against SARS-CoV-2). The windows were modified to allow total opening of their openable surface.

Tests were carried out according to the concentration decay method. Aerosols were released in a controlled way up to a certain concentration, and a homogeneous distribution was achieved by means of fans located inside the classroom. Once the concentration had stabilized, measurements were taken with six sensors (S1 to S6) distributed throughout the room.

To simulate the behaviour of bioaerosols, smoke emission was used, since smoke behaves in an analogous way: it contains aerosols in a similar size range (the visible part of the smoke) as well as gases (recognizable by their smell), which do not constitute a vehicle for virus transport.

The behaviour of smoke emission is analogous to that of bioaerosols released in an environment. When a person smokes, the smoke does not fall to the ground quickly, but rather concentrates in front of the smoker and then mixes with the airflows. If this occurs in a poorly ventilated room, the smoke accumulates. In these situations, only a small fraction, less than 10%, is deposited on the interior surfaces, and the rest remains floating in the environment until the interior air is renewed.

The concept of smoke also serves to illustrate a situation between two people [18] where one person is exhaling smoke and the other wishes to inhale as little of it as possible: this is how the chances of contagion are reduced. The analogy carries over to the use of masks: masks filter the aerosols, but not the gases. The fact that the odour penetrates the mask does not mean that the mask is not retaining the aerosols.

A smoke machine (1.2 kW, model Fz-1200) was used to carry out the tests. It works by combusting oils that emit fumes containing aerosols analogous to bioaerosols, allowing them to be traced along the airflow path and their concentration to be measured by means of multipurpose sensors.

#### *3.2. Filtering Device*

As a significant improvement to the initial protocol, it was proposed to incorporate an air filtration device into the classroom. The mechanical capture of particles and aerosols was done by means of a HEPA H14 filtering surface with an efficiency greater than 99.995%. The efficiency represents the percentage of particles that each type of filter can retain with respect to the most penetrating particle size (MPPS), which is normally between 0.15 and 0.25 μm. Thus, a HEPA H14 filter with an efficiency of 99.995% at an MPPS of 0.15 μm implies that for every 100,000 particles of that size, the filter only allows 5 of them to pass through. For particles of other sizes, the filter has an even higher retention capacity.
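The filter arithmetic in this paragraph can be checked directly. The sketch below treats the rated efficiency as the single-pass retained fraction at the MPPS; the H13 value is included only for comparison and is not taken from this study:

```python
def particles_passing(n_in: int, efficiency: float) -> float:
    """Expected number of particles penetrating a single filter pass,
    given the retained fraction (efficiency) at the MPPS."""
    return n_in * (1.0 - efficiency)

h14 = 0.99995   # H14 efficiency, as cited in the text
h13 = 0.9995    # H13 efficiency, shown for comparison (assumed class value)

print(f"H14: {particles_passing(100_000, h14):.0f} of 100,000 pass")  # 5
print(f"H13: {particles_passing(100_000, h13):.0f} of 100,000 pass")  # 50
```

The order-of-magnitude gap between filter classes is why an H14 stage was specified for the classroom purifier.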

The flow rates treated depend on the adjustable flow rate (SETPOINT), according to the data supplied by the manufacturer (Figure 8):

**Figure 8.** Adjustable flow rate of air filtration device.

A parameter that requires further study is the flow pattern of treated and contaminated air in order to determine the recommended position and height of the air purifiers within the classroom [20]. For the tests conducted for this study, the device was placed in a central position in the space. Since a mixture flow pattern was achieved inside the studied classroom, it was possible to determine that adequate purification of the polluted air was guaranteed [21–25].

#### **4. Results**

The experimental study consisted of 12 cases with different configurations regarding the opening of windows and doors, the operation of the air conditioning system with the supply of outdoor air, and the use of a filtration device:


Throughout all the tests, the outdoor temperature was in the range of 8 to 10 °C, which represents a typical winter situation in Valladolid (Spain).

The results obtained in the tests are summarised in Table 3 and Figure 9, which reflect the average values of the sensors distributed within the room.

**Table 3.** Aerosol concentration decay time to 10 μm/mm3.


**Figure 9.** Evolution of the concentration drop of aerosols emitted in the classroom.

#### **5. Discussion**

*5.1. Conclusions Derived from the Results*


#### *5.2. Verification of the Ventilation Conditions*

Finally, after the previous comparative analysis, the classroom under study was subjected to a test under typical conditions, that is, with 29 students and a teacher during the usual course of a class. During the test, the outside temperature was 3 °C on average, so the openings were partially closed (50% of their surface), in a similar way to Case 11.

In this case, CO2 concentration was used as the tracer gas. Although CO2 itself is not a pollutant at the concentrations measured, which were far below levels dangerous to health, it is nevertheless a good indicator of the emission of human bioaerosols, because both are released by the same exhalation process.

The classroom was intermittently occupied (from 9:00 to 11:00 on Tuesday and Wednesday, and from 9:00 to 12:00 on Thursday). However, the total CO2 concentration remained very low (maximum values of 515 ppm with an outdoor CO2 concentration of 400 ppm), as shown in Figure 10.

**Figure 10.** Verification of the ventilation conditions through CO2 concentration measurement.
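The CO2 readings above can be turned into an estimate of the outdoor air supply via a steady-state mass balance, Q = G/(C_in − C_out). The per-person CO2 generation rate used below is a typical assumed value for seated adults, not a measurement from this study:

```python
# Assumed CO2 generation rate per seated adult (L/s); a common textbook value
G_PER_PERSON = 0.005

def outdoor_airflow_ls(occupants: int, c_in_ppm: float,
                       c_out_ppm: float) -> float:
    """Steady-state outdoor air supply (L/s) from a CO2 mass balance:
    Q = G / (C_in - C_out), with concentrations converted from ppm."""
    g = occupants * G_PER_PERSON            # total CO2 source, L/s
    delta = (c_in_ppm - c_out_ppm) * 1e-6   # ppm -> volume fraction
    return g / delta

# Values from the verification test: 30 occupants, 515 ppm indoors,
# 400 ppm outdoors
q = outdoor_airflow_ls(30, 515, 400)
print(f"~{q:.0f} L/s of outdoor air (~{q / 30:.0f} L/s per person)")
```

Under these assumptions, the small 115 ppm rise implies an outdoor air supply of well over 40 L/s per person, far above typical per-person ventilation recommendations, consistent with the very low CO2 levels recorded.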

Under these circumstances, the combination of these three ventilation mechanisms can be considered as a means to strengthen the prevention measures adopted in the studied classroom.

#### **6. Conclusions**

In conclusion, it can be affirmed that it is necessary to strengthen prevention mechanisms against the risk of contagion of SARS-CoV-2 in buildings lacking centralized controlled mechanical ventilation systems, where the main strategy consists of window opening to promote natural ventilation.

The most appropriate methodology needs to address the characterization of each of the possible strategies in the building. In the case studied, the protocol adopted consisted of the combination of natural ventilation, the maximization of the outdoor air supply through the air conditioning system, and the use of filtering devices. Simultaneously, the basic conditions of social distance, occupation, use of masks and guidelines for the use and cleaning of the building must be met.

As a final note, it is necessary to mention that throughout the 2020/2021 school year with 100% face-to-face teaching, there was no case of contagion among students and teachers inside the classrooms of the centre.

**Author Contributions:** Conceptualization, A.M. and I.P.-C.; methodology, A.M. and I.P.-C.; formal analysis, A.M., A.L.-Á. and M.Á.P.-M.; investigation, A.M.; resources, A.L.-Á.; data curation, A.M. and H.J.-M.; writing—original draft preparation, A.M.; writing—review and editing, A.M. and I.P.-C.; visualization, A.M.; supervision, I.P.-C.; project administration, A.L.-Á.; funding acquisition, M.Á.P.-M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research "Metodologías de estudio y estrategias de mejora de eficiencia energética, confort y salubridad de centros educativos en Castilla y León. Investigación básica" was funded by Junta de Castilla y León, Consejería de Educación, grant number VA026G19 (2019–2021).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **The Influence of Outdoor Particulate Matter PM2.5 on Indoor Air Quality: The Implementation of a New Assessment Method**

**Dominik Bekierski \* and Krystyna Barbara Kostyrko**

Thermal Physics, Acoustics and Environment Department, Building Research Institute (ITB), 1 Filtrowa Str., 00-611 Warsaw, Poland; k.kostyrko@itb.pl

**\*** Correspondence: d.bekierski@itb.pl

**Abstract:** Epidemiological research has shown that there is a positive correlation between the incidence of disease and mortality in humans and the mass concentration of particulate matter. An average 1 g of suspended dust emitted in a room results in the same exposure as 1 kg emitted to the outside air. In this study, the authors described the state of knowledge on dust pollution inside and outside buildings (I/O ratios) and methods of testing the PM infiltration process parameters. According to the law of indoor–outdoor particle mass balance and the physical basis of aerosol penetration theory, a relatively simple but new method for estimating the penetration factor *P* was tested. On the basis of the dynamic decay curve of the indoor dust concentration, followed by the dynamic rebound curve of the particle concentration, the authors measured the penetration factor of ambient PM2.5 through the building envelope. The authors' modification of the method determines the particle deposition rate *k* not from the transient-state characteristics (the so-called particle concentration decay curves) but from the concentration rebound curve, driven by the natural particle infiltration process. Preliminary measurements of the mass concentration of suspended PM2.5 and PM10 particles inside the rooms were carried out. In this study, the choice of the method for calculating the particle penetration factor *P* was supported by an exemplary calculation of the *P* value for a room polluted by PM2.5. The preliminary result obtained by this method, *P* = 0.61, is consistent with the *P* values reported in the literature for this size group of dusts.

**Keywords:** particulate matter; dust pollution; IAQ; indoor–outdoor concentration ratio; penetration factor; air quality control

#### **1. Introduction**

The aim of the article is to present the authors' proposal for a new method of assessing the influence of outdoor particulate matter PM2.5 on indoor air quality, indirectly, by determining the value of the parameter that describes the intensity of dust penetration into the interior of the building. This parameter is the penetration factor, i.e., the ratio of the amount of dust that enters from the outside through the infiltration process to the amount of dust retained outside.

In this study, the authors described the state of knowledge concerning dust pollution inside and outside buildings (I/O ratios) and methods of testing the PM infiltration process parameters. Methods known in the literature are based on observing the process of dynamically pumping air from a dusty external environment into a room with the use of an exhaust blower or an air cleaner. However, this process depends not only on the difference in PM concentration levels but even more on the O/I air pressure difference produced. The authors' proposal is instead to observe the rebound curve of the indoor PM concentration as it increases through the natural process of dust infiltration from the outside, until dust balance is reached between the exterior and the interior of a building protected by a dust-permeable envelope.

**Citation:** Bekierski, D.; Kostyrko, K.B. The Influence of Outdoor Particulate Matter PM2.5 on Indoor Air Quality: The Implementation of a New Assessment Method. *Energies* **2021**, *14*, 6230. https://doi.org/ 10.3390/en14196230

Academic Editors: Roberto Alonso González Lezcano and Christian Inard

Received: 5 August 2021 Accepted: 22 September 2021 Published: 30 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The authors' field study aimed to estimate indoor air quality, especially the particulate matter concentration in relation to the infiltration process through the building envelope. Several analyses based on air quality measurements and calculation procedures were undertaken. On the basis of the dynamic decay curves of the indoor dust concentration, followed by the dynamic rebound curves of the particle concentration, the authors propose a method for estimating the penetration factor of ambient PM2.5 through a building envelope. The value of the *P* factor can be a reliable parameter for classifying buildings in terms of their resistance to dust infiltration, which partly characterizes the condition of the envelope structure and its resistance to dust penetration into the building. According to research in residential buildings, a large proportion of both groups of particles and of their mass concentration is attributed to human activities such as cooking, smoking, vacuuming, the use of gas stoves, the burning of solid fuels and candles, electrical appliances, cleaning, washing and walking. Both indoor and outdoor sources influence the characteristics of indoor particulate matter, and their concentrations are increased by the resuspension into the air of particles deposited on internal surfaces.
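The mass-balance reasoning behind the method can be sketched as follows. With no indoor sources, dC_in/dt = P·a·C_out − (a + k)·C_in, where a is the air exchange rate and k the deposition rate, so at the rebound equilibrium C_in/C_out = P·a/(a + k) and P follows once a and k are known. All numerical values in this Python sketch are illustrative assumptions (chosen so that P lands near the magnitude reported later in the paper); they are not the authors' measurements:

```python
def penetration_factor(c_in_eq: float, c_out: float,
                       a: float, k: float) -> float:
    """Recover the envelope penetration factor P from the rebound
    steady state of the indoor/outdoor mass balance:
      Finf = C_in/C_out = P * a / (a + k)  =>  P = Finf * (a + k) / a."""
    finf = c_in_eq / c_out
    return finf * (a + k) / a

# Illustrative assumed rates (1/h) and equilibrium concentrations (ug/m3)
a, k = 0.5, 0.3
p_est = penetration_factor(9.5, 25.0, a, k)
print(f"P = {p_est:.2f}")
```

Note that P is always at least as large as the infiltration factor Finf, since deposition (k > 0) removes some of the particles that penetrated the envelope before they are counted indoors.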

The term PM10 is used for particles with an aerodynamic diameter <10 μm. The term PM2.5 defines aerodynamic particles with a diameter <2.5 μm. The coarse fraction contains particles with an aerodynamic diameter between 2.5 μm and 10 μm. The "ultrafine particles" fraction has an aerodynamic diameter <0.1 μm. A commonly used indicator describing particulate pollution is the mass concentration of PM10.

Thatcher and Layton's [1] experiments concluded that PM > 5 μm may be resuspended, particles < 5 μm are not easily resuspended and particles < 1 μm show almost no tendency to re-suspend at high human activity.

In the project INDEX PM [2], it was demonstrated that indoor air PM dust pollution mainly consists of salt (ammonium sulphate, ammonium nitrate, sodium and potassium chloride), soot (elemental carbon, EC), minerals (silicon oxides, aluminium, calcium, iron, manganese, titanium, zinc, etc.), organic substances (shredded organic matter, also called organic carbon, OC) and materials of biological origin (bacteria and fungi, dandruff, pollen, fragments of plants and insects). All of these components have different particle size distributions, and the mechanisms involved in removing particles from the indoor air are strongly dependent on particle size. Sedimentation is the most important removal mechanism for coarse particles (>10 μm), while diffusion and agglomeration/coagulation dominate for ultrafine particles (<0.1 μm).

Particles (0.1–1.0 μm) are most stable in air. A typical intake fraction (the fraction of the total PM mass emitted from a source that will be inhaled by the whole population) for an indoor PM source is 10<sup>−2</sup> to 10<sup>−3</sup>, compared to 10<sup>−5</sup> to 10<sup>−6</sup> for outdoor PM sources: an average 1 g of suspended dust emitted in a room results in the same exposure as 1 kg emitted to the outside air. Consequently, while internal sources are less efficient and their total emission loads are smaller than the PM of external sources, each of these sources, if present, may dominate both the sum of individual exposures and the inhalation risks of PM.

Between 1999 and 2016, the Canadian Census Health and Environment Cohort (CanCHEC) conducted panel studies involving approximately 2.5 million Canadians [3]. The relationship between non-accidental mortality and the ambient concentration of fine particulate matter (≤2.5 μm; PM2.5), ozone (O3) and nitrogen dioxide (NO2) was investigated. Models with combinations of these contaminants were also tested, and PM2.5 was found to be associated with an increased risk of non-accidental mortality, e.g., from lung cancer, diabetes and ischemic heart disease. The results obtained provide evidence that long-term exposure to these three key components of ambient air pollution is associated with an increased risk of non-accidental mortality. Cumulative risk models suggest that exposure to PM2.5 alone does not fully characterise the toxicity of a mixed atmosphere and cannot fully explain the mortality risks associated with exposure to environmental contamination. The strongest cumulative risk estimate is for mortality from diabetes (HR = 1.180; 95% CI: 1.125, 1.236). Assuming additive exposure to the individual pollutants, the HR (hazard ratio) with 95% CI (confidence interval), as estimated using three-pollutant models, showed that a change in exposure by an average of 5% for all three pollutants together resulted in a combined effect of 1.075. The effect of PM2.5 and O3 was weaker than the effect of NO2 alone, but the HR increased with the combination of all three pollutants, i.e., when NO2 was added.

It has also been shown that mortality increased by 1.5% when the daily PM2.5 concentration increased by 10 μg/m3 [3], because particles smaller than 2.5 μm can enter the bloodstream.

The health effects of PM2.5 were also studied by Hvelplund [4]. The Brownian motion of particles in the respiratory system is only significant for particles below 0.05 μm. The same tendency was found by Wang et al. [5], who investigated the deposition efficiency of particles with aerodynamic diameters between 0.001 and 300 μm moving through the nasal cavity with an airflow of 0.6 m3/h. Studies in this area have shown that the lowest deposition efficiency occurs in the range of 0.015 to 10.0 μm, due to the weak influence of diffusion and of the effects of impaction and gravitational settling.

The number of exhaled particles is debatable, e.g., most particles below 1.0 μm were predicted to escape through the bronchiolar orifices. It is not the case, however, that all particles escape because some of them are absorbed directly into the bloodstream through the bronchioles, which in human airways make up most of the levels of the tracheobronchial tree. Particles larger than 5–10 μm are usually removed from the upper respiratory tract, indicating that the mucosa and nasal hair contribute greatly to the filtration of particles.

According to [4], inhalation exposure to particulate pollutants is one of the main threats to public health. Most existing airway morphometry models are theoretical or semi-empirical in nature; they were designed to predict the deposition fraction for an averaged subgroup of the general population. It is difficult to tailor a quick and accurate prediction to suit individual needs.

Hvelplund et al. [4] aimed to analyse the local particle deposition along an anatomically reconstructed model of the airways, which was developed from computed tomography images of healthy subjects. Computational simulations of airway fluid dynamics show that most particles are deposited in the bronchi. The accumulation of particles (0.1–2.0 μm) is the smallest fraction, about 11%, of sediment in the lower respiratory tract. Increasing the aerodynamic diameter >2.0 μm of the particles increased the fraction of deposited particles. This study determined the size related to the site of particle deposition in the airways. It turned out that most of the particles in the studied bands are deposited in a narrow region of the airway model, such as the BC (bronchi) and larynx.

Already in the 1980s, many epidemiological studies had shown that PM2.5 has obvious adverse effects on human health (Lin et al. [6]). Research in China on the relationship between PM2.5 and human health has fully demonstrated that PM2.5 can increase the incidence of heart and lung disease, respiratory disease, cardiovascular disease, cancer and other diseases, and even the risk of death. Long-term exposure to environmental PM2.5 may be an important risk factor for hypertension and is responsible for the significant burden of hypertension in adults in China, as it leads to a reduction in lung function. PM2.5 is a risk factor for childhood asthma, attributed to decreased immunoregulation and deterioration in ventilation function. Exposure to PM2.5 can also affect reproductive health.

The I/O indicator describing the temporary steady state of dust pollution in a building is related to the process of infiltration of suspended dust from the outside into the building's interior and can be described with the assumption that there are no internal sources generating dust particles:

$$\frac{\text{I}}{\text{O}}\ \text{ratio} = \frac{C_{in}}{C_{out}} \tag{1}$$

Lee et al. [7] found that room temperature and floor level (expressed by the story where the room is located in the building) are powerful I/O predictors and gave the I/O equation for PM2.5, which is:

$$\frac{\text{I}}{\text{O}} = 0.629 + 0.0102 \cdot T - 0.00654 \cdot FL \tag{2}$$

where *T* and *FL* are temperature and floor level, respectively.
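Equations (1) and (2) are straightforward to apply; the following sketch evaluates both for a hypothetical room (the temperature, floor level and concentrations are invented inputs, not data from [7]):

```python
def io_ratio(c_in: float, c_out: float) -> float:
    """Equation (1): measured indoor/outdoor concentration ratio."""
    return c_in / c_out

def io_pm25_lee(temp_c: float, floor_level: int) -> float:
    """Equation (2): Lee et al. regression predicting the PM2.5 I/O ratio
    from room temperature T (deg C) and floor level FL."""
    return 0.629 + 0.0102 * temp_c - 0.00654 * floor_level

# Hypothetical example: a 3rd-floor room at 22 deg C,
# with 18 ug/m3 indoors against 25 ug/m3 outdoors
print(f"predicted I/O: {io_pm25_lee(22, 3):.3f}")
print(f"measured  I/O: {io_ratio(18.0, 25.0):.2f}")
```

Comparing the regression's prediction with a measured ratio is one quick plausibility check on whether indoor sources are contributing beyond infiltration.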

The indoor/outdoor (I/O) index is a measure for assessing the difference between indoor concentrations of solids and the current outdoor concentrations, but it may also be recognised as an indicator of the persistence of particulate matter sources inside buildings. PM concentrations are affected by dust infiltration from the outside into buildings and by internal sources. I/O ratios can vary considerably due to the building's design, location and the different activities of the occupants. The I/O ratio is also calculated to compare the dynamics of flows between indoor and outdoor PM in different apartments and buildings. As far as health protection is concerned, it is clearly best when the I/O ratio for each dimensional fraction of dust is less than 1, as this shows that the structural properties of a residential building reduce the penetration of PM from outside and, accordingly, reduce the residents' exposure to PM.

In epidemiological studies of external contamination, the concentration of external contaminants was previously used as an indicator of the stressor. Thus, the appropriateness of using suspended dust concentrations from the building surroundings as a proxy in personal exposure studies was questioned, and both outdoor and indoor concentrations began to be measured to investigate their correlation with external conditions, as well as the correlations among other pollutants. Positive correlations were found for PM, O3 and NO2.

While exposure to indoor air pollution changes under the influence of many factors, such as the type of microenvironment and the source of internal pollution, the building's characteristics and location, ventilation parameters and assumed comfort conditions (Branco et al. [8]), as well as individual occupant activity, studies have shown that only some of these factors are important. Therefore, several parameters of I/O interaction (Kalimeri et al. [9]) were selected in order to minimise the error resulting from the use of concentrations in the external environment as a surrogate for estimating human exposure to PM indoors. These parameters are the infiltration coefficient (*Finf*) (i.e., the equilibrium fraction of particles from the environment that have penetrated into the interior and remain suspended), the *P* coefficient of penetration efficiency of particles through the leaks of the building envelope (i.e., the fraction of particles that entered the interior via the infiltration path through the external walls of the building, *P* < *Finf*) and the I/O ratio (the ratio of internal to external concentration of similar PM particles). Lv et al. [10] argued that regression analyses of the ratio of indoor to outdoor particle concentration values confirm that the degree of correlation of changes in these concentrations expresses the infiltration coefficient.

Nadali et al. [11] found that coarse particles, e.g., PM10, have higher falling velocities than fine PM, which leads to lower levels in rooms, because coarse particles (PM) settle under the influence of gravity or settle on doors, window frames and furniture as a result of the effects of electric charges and turbulent diffusion.

In their research, the concentration of solid particles, expressed by the average I/O ratio, also differed significantly in apartments, and did not solely depend on the location of the buildings. This was because, although the air with PM outside can have a significant impact on the concentrations inside, the composition of the air in the room is mostly influenced by the sources (activities and materials) in the rooms, which can be identified as contributing significantly to the internal mass concentration of PM.

Bai et al. [12] addressed the question of why the I/O ratio is so important in practice. Everyday health detriment is characterised, for each individual, by the following equation, based on an environmental risk assessment from the "receptor" perspective:

$$HHD_{i,j}^{\text{inhalation}} = C_{i,j} \cdot \left(\frac{IR \times t_j}{m}\right) \cdot EF \tag{3}$$

where *C* is the concentration of particulate matter (μg/m3), with the indices *i* = spring, summer, autumn or winter (1:1:1:1 annually) and *j* = inside or outside (ratio 80:20); *HHD* is the damage to human health; *t* is the daily exposure time (h); *m* is the I/O mixing ratio, equal to 1; *EF* is an influence factor of 78 DALY/kg (or disease cases/kg of chemical intake); and *IR* is the daily inhalation rate of air for each person, which is 13/24 m3/h.

In questionnaire studies [12], the daily ratio of the students' exposure time to air polluted with dust inside and outside was calculated, and the mixing coefficient m was determined (the same as the I/O ratio). Due to different daily PM2.5 concentrations, health effects at different concentrations were calculated from Equation (3). The daily damage to the health of each person multiplied by the period of a year is the annual damage to the health of one person.
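Equation (3) can be implemented directly once the μg → kg conversion implied by the units of *EF* is made explicit. The exposure scenario below is hypothetical and only illustrates the bookkeeping:

```python
# Parameters as given in the text for Equation (3)
IR = 13 / 24          # daily inhalation rate, m3/h
EF = 78.0             # influence factor, DALY per kg of intake

def hhd_daily(c_ugm3: float, t_hours: float, m: float = 1.0) -> float:
    """Equation (3): daily health damage (DALY) from inhaled PM.
    The 1e-9 factor converts the inhaled mass from ug to kg."""
    intake_kg = c_ugm3 * IR * t_hours / m * 1e-9
    return intake_kg * EF

# Hypothetical day: 20 h indoors at 25 ug/m3 plus 4 h outdoors at 60 ug/m3
daily = hhd_daily(25, 20) + hhd_daily(60, 4)
print(f"daily damage: {daily:.2e} DALY; annual: {daily * 365:.2e} DALY")
```

As the text notes, multiplying the daily damage by 365 gives the annual health damage per person, which is how the questionnaire data in [12] were aggregated.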

#### **2. The State of Knowledge on Dust Pollution Inside and Outside Buildings**

*2.1. Characteristics of Actual Air Pollutants in the Internal Environment*

According to the European programme INDEX-PM [2], the indoor air sources of PM are classified as:


Zhang and Duan [13] showed that burning mosquito repellent on a heating coil can release 626 μg/m3 of PM2.5, which is 8.3 times the permissible concentration for a residential environment. Bai et al. [12] examined the concentration of PM2.5 in households using coal for cooking and found that it was significantly higher than in households using gas or electricity; if coal were switched to gas or electricity, the concentration of PM2.5 in the kitchen would decrease by 40–70%. Zhang et al. [14] investigated different culinary habits, cooking methods, ingredients and even spices and found that they strongly influenced the composition of particulates. Culinary habits were also the subject of research by another group of researchers, Xue, Zhou et al. [15], who warn against the residential combustion of coal with a high sulphur content, as it can cause the total concentration of air pollutants to contain as much as 11.6% PM10 and even 27.5% SO2 in the winter heating season. Zhou and Liu [16] indicated that human activities such as walking, dressing and cleaning may increase the concentration of PM2.5 indoors by 33%. Chinese researchers conducted experiments on the influence of wet sweeping and dry sweeping on the air in an office. The average levels of PM2.5 concentrations in the rooms before cleaning were 47.3 μg/m3, 40.6 μg/m3 and 39.4 μg/m3, respectively, and after cleaning they were 109.7 μg/m3, 97.5 μg/m3 and 43.3 μg/m3, respectively; the mean concentrations of PM2.5 thus increased 2.3 times, 2.3 times and 1.1 times. Therefore, it is recommended to sweep wet as often as possible and under ventilation conditions. Printing also plays a role in increasing indoor dust concentration, and the release of PM2.5 from printers with different performance varied.

Some systematisation of the main sources of indoor air pollutants in our living environment is a necessary step towards facing and reducing the associated health risks. Three different environments are described: home, school and office, and the pollutants for the home, including particulate matter, are listed in Table 1. These data were published at the Healthy Buildings conference by Simeone et al. [17], supplemented by data from the literature.


**Table 1.** Indoor air pollution list based on [16–24].

\* Isaxon et al. [19] report the peak number concentration of ultrafine particles for these activities: cooking 1.80 × 10<sup>5</sup> p/cm<sup>3</sup>, boiling 5.6 × 10<sup>4</sup> p/cm<sup>3</sup>, frying 1.4 × 10<sup>4</sup> p/cm<sup>3</sup>, oven 2.3 × 10<sup>5</sup> p/cm<sup>3</sup>, toaster 1.6 × 10<sup>5</sup> p/cm<sup>3</sup>.

> According to Wallace and Ott [20], cooking on gas or electric stoves and electric toaster ovens was a major source of UFP, with peak personal exposures often exceeding 100,000 particles/cm<sup>3</sup> and estimated emission rates in the neighbourhood of 10<sup>12</sup> particles/min.

> He, Morawska and Gilbert [25] investigated submicrometric particle number concentration measured during the cooking test and found that it was in the size range from 0.015 to 0.685 μm.

> Jantunen [18], in a study with multiple regression models using data collected from EXPOLIS in six European cities, determined the internal concentrations of PM2.5, starting from the concentration generated by smoking (16%), a gas cooker (1.4%), and construction dust (all < 4%). Smoking and cooking also generated NO2 emissions. The research focused on the short-term effects of internal sources of PM, e.g., different cooking methods generating concentrations of various solid particles in rooms, with peak concentrations of 30–60 μg/m3 for PM0.02–0.5 and 10–300 μg/m3 for PM0.7–10. Cleaning activities (8 μg/m3 for PM0.02–0.5 and 30 μg/m3 for PM0.7–10) and those related to the mobility of residents (4 μg/m3 for PM0.02–0.5 and 20 μg/m3 for PM0.7–10) contributed much less. Oven roasting was the most intense indoor source of PM0.5, as well as frying (PM10).

Morawska et al. [26] report that some types of occupant activity result in particularly high concentrations of PM2.5 in rooms. These include frying (median peak value: 745 μg/m3), grilling (718 μg/m3), a candle evaporating eucalyptus oil (132 μg/m3) and smoking (79 μg/m3). The high peak concentrations caused by these activities may result in the 24 h US EPA PM2.5 standard of 65 μg/m<sup>3</sup> being exceeded in homes where such activities take place for a sufficiently long time.

The reported smoking emission rate of 0.99 mg/min is comparable with results reported in the literature. For example, Klepeis et al. [27] measured the emission factor of respirable particles (PM3.5) in an apartment where smoking took place. The average PM3.5 emission rate ranged from 0.98 mg/min (cigar) to 1.9 mg/min (Marlboro cigarette). Brauer et al. [28] measured cigarette smoking using a nephelometer positioned in an environmental chamber and found that the emission rate of PM2.5 particles was 1.67 mg/min.

In a US indoor/outdoor air study, smoking and cooking were determined to be the predominant activities associated with elevated concentrations of "fine particles". Smoking can add 20 μg/m<sup>3</sup> (24-h average) of particles per household smoker [29], with short-term peaks of 300 μg/m<sup>3</sup> that may persist for up to 30 min after the cigarette is extinguished. Particles generated by home cooking (0.1 μm) accounted for 30% of the particle volume [30]. The cooking method, particularly frying, variably increased particle concentration, and the consensus is that I/O ratios were somewhat higher in houses with gas cookers than in houses without such cookers and heating sources. Large particles (>2.5 μm in diameter) are generated in homes by activities such as cleaning (vacuuming and sweeping) [30,31], which can re-suspend particles embedded in horizontal surfaces such as floors, carpets and furniture. The ratio between indoor and outdoor particle concentrations gives an indication of whether particles found indoors are the result of indoor generation.

#### *2.2. Limit Values for Indoor PM Concentrations*

The data taken from the Health-Based Ventilation Guidelines: Principles and Framework, published in [32,33], can be considered as setting the latest limit values for human exposure to indoor pollutants; see Table 2. The guidelines adopted the data from the WHO Air Quality Guidelines, Global Update 2005, published in 2006 by the Regional Office for Europe (Copenhagen) [34], concerning the most life-threatening pollutants NO2, PM10, PM2.5 and O3, but also SO2, considered by some researchers to be an equally hazardous pollutant.

**Table 2.** Current air quality guidelines (figures in brackets indicate the average time for which the guideline values apply) [32,33].


#### *2.3. Characteristics of Actual Air Pollutants in the Outdoor Environment in Statistical Terms*

Primary particles from industry and agriculture are usually larger than 10 μm, and their share in primary PM emissions is usually lower than that of PM10 and PM2.5. The same applies to non-exhaust emissions from traffic sources (road surface, tyre and brake wear), which are a secondary source of particles smaller than 2.5 μm. Soot (BC, black carbon) emissions mainly come from combustion processes in the transport sector (diesel vehicles) and from small domestic boilers. The projected share in primary PM10 and PM2.5 emissions in the EU (Poland and 14 other countries) was included in the baseline scenario of the European CAFE programme (European Commission's Clean Air for Europe) for 2020, as illustrated in Figure 1.

**Figure 1.** Sectoral emissions of PM2.5 projected in the years 2000 to 2020 in the CAFE programme [35].

Overall, the CAFE baseline estimated that PM10 emissions would decrease by around 40% in the new Member States between 2000 and 2010. For 2020, it is projected that the share of PM2.5 emissions from diesel exhaust gases will drop from 12% to 6% and that the largest source of primary PM2.5 emissions will reduce its emissions by 40%. On 21 November 2011, the European Commission took Poland to the European Court of Justice, accusing the country of a lack of progress in the implementation of Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe (CAFE).

According to the WHO recommendation, the standard for the average daily concentration of PM10 may not be exceeded more than 35 times a year. Unfortunately, in Poland during the winter months, the days on which the concentration of suspended dust is within the norm can be counted on the fingers of one hand (Samek [36]). In China, statistics on PM2.5 and PM10 pollution levels for 2015–2017 show that the annual average PM2.5 mass concentration decreased (Wang [37]). In Shenyang, PM2.5 decreased from 72 to 51 μg/m<sup>3</sup> and the PM10 mass concentration decreased from 115 to 88 μg/m<sup>3</sup> (Yu [38]).

This has disastrous consequences for the health of the inhabitants of large cities (Figures 2 and 3).

**Figure 2.** Average concentrations of PM2.5 in μg/m<sup>3</sup> measured by Samek et al. [36] in Krakow (2014) and average concentrations of PM2.5 in μg/m<sup>3</sup> measured in Changchun [12].

**Figure 3.** Average concentrations of suspended dust PM10 (red) in μg/m<sup>3</sup> measured in European cities in 2004 [39] and PM2.5 (blue) in μg/m<sup>3</sup> observed in the same cities in 2016 [40].

The World Health Organization (WHO) has established specific standards for particulate matter concentrations in the outside air. Currently these are:


#### *2.4. Research on Actual Dust Pollution of the External Environment in Polish Cities*

The study was conducted in Warsaw as part of a project led by Juda-Rezler et al. [40] from the Warsaw University of Technology. The average concentration of PM2.5 in Warsaw in 2016 was 18.8 μg/m<sup>3</sup> (standard deviation ±11.9 μg/m<sup>3</sup>); while this was below the EU annual limit value for PM2.5 of 25 μg/m<sup>3</sup>, the WHO annual air quality guideline (10 μg/m<sup>3</sup>) was not met. PM pollutants in the atmospheric air differ in size and composition and, according to [40], are a mixture of primary particles (emitted from anthropogenic and natural sources) and secondary particles (compounds formed in the atmosphere as a result of reactions of primary pollutants). The combustion of fossil fuels in the energy and smelting industries and in the housing and road transport sectors is the most significant anthropogenic source of both solid particles and the gaseous precursors of secondary particles, including sulphur dioxide (SO2), nitrogen oxides (NOx) and volatile organic compounds (VOCs). Most of the particulate mixture in the atmospheric air consists of mineral dust, organic matter, secondary inorganic aerosols (including nitrates, sulphates and ammonium) and water. Many other components, however, are associated with PM, including micronutrients such as silicon (Si), magnesium (Mg), aluminium (Al), calcium (Ca), potassium (K) and titanium (Ti), and trace metals including detrimental heavy metals such as copper (Cu), arsenic (As), cadmium (Cd), chromium (Cr), lead (Pb) and zinc (Zn). The chemical composition of particulate matter, as well as other PM characteristics, may vary widely between different areas.

According to Samek et al. [36] from the AGH University of Science and Technology in Kraków, in industrial areas of Poland such as Silesia, and in urban areas such as Kraków and some other cities, pollution levels often exceed air quality standards, with particulate matter (PM) the most important component of atmospheric pollution. The research was carried out in 2014, with an extension into the 2015 heating season. During this period, approximately 200 samples were collected, showing the daily variability of the PM2.5 concentration in the air. The AGH University of Science and Technology team determined the lowest monthly concentration value for August 2014, amounting to about 10 μg/m<sup>3</sup>, and the highest for February 2014 (70 μg/m<sup>3</sup>), while the annual mean value was approximately 31 μg/m<sup>3</sup>. Additionally, the samples were analysed for composition and particle size distribution. Research by the AGH team also included determination of the black carbon (BC) content.

The team of Ścibor et al. [41] from the Jagiellonian University in Kraków conducted a study in 2014–2015 of the share of PM2.5 dust in the mass of PM10 dust indoors, with open and closed windows, and found that this share reached about 70% for both types of weather conditions: Good (high wind speed) and bad (low wind speed). The total share of PM10 and PM2.5 dust masses penetrating into the interior of the rooms from outside was higher by about 10% in good weather (high winds) than in bad weather; opening the window had no significant effect here. Due to the lower degree of dilution and chemical transformation, and also the larger number of occupants indoors, the exposure impact per unit mass of PM2.5 emitted in a room is two to three orders of magnitude greater than that of the same emissions in the outdoor environment.

The results obtained in Kraków by the Samek and Ścibor teams can be compared, for example, with the work of Morawska et al. [26], who studied the relationship between indoor and outdoor airborne particles in 16 residential homes in a suburb of Brisbane, Australia. By measuring the mass concentration of particles smaller than 2.5 μm, the Australian team showed that, while the periodic values of the I/O index varied widely from PM0.2 to PM2.5 with both less and more effective ventilation, the average I/O ranged from 1.01 to 1.08, so the I/O ratios were much higher than in Kraków. Ścibor et al. [41] showed that the mean I/O values for PM10 and PM2.5 were higher in the rooms when the windows were closed under bad weather conditions. In good weather conditions, the relationship was the same for PM10, while for PM2.5, opening the window increased the value of the I/O ratio.

The results for Kraków show that I/O values were significantly below 1, but the fact that indoor concentrations are much lower than outdoor ones should be set against the pollution situation in this city, where outdoor and indoor dust concentrations are much higher than in many other cities. It can therefore be assumed that in Kraków much larger amounts of PM dust migrate into buildings.

In the case of Kraków, I/O values in good weather conditions were much higher (0.92 for cases with open windows and 0.79 for closed windows) than values in bad weather conditions (0.46 and 0.47, respectively), which was consistent with the results of the regression analysis. Indoor PM concentrations were then significantly lower than outdoors. High I/O values for PM2.5 occurred in good weather, when the concentrations of PM2.5 indoors were two to four times higher than outdoors. The t-test showed that weather conditions and window opening are factors with a statistically significant impact on I/O.
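Analyses of this kind reduce to computing paired I/O ratios and testing whether a grouping factor (weather, window state) shifts them significantly. A minimal sketch in Python, using entirely hypothetical concentration values rather than the Kraków data:

```python
import numpy as np
from scipy import stats

def io_ratio(indoor, outdoor):
    """Indoor/outdoor (I/O) concentration ratio for paired PM measurements."""
    return np.asarray(indoor, dtype=float) / np.asarray(outdoor, dtype=float)

# Hypothetical paired PM2.5 samples (ug/m3) under two weather regimes.
good_in, good_out = np.array([22.0, 25.0, 30.0, 28.0]), np.array([26.0, 28.0, 31.0, 33.0])
bad_in,  bad_out  = np.array([18.0, 20.0, 16.0, 21.0]), np.array([40.0, 42.0, 38.0, 45.0])

io_good = io_ratio(good_in, good_out)   # I/O close to 1 in good weather
io_bad  = io_ratio(bad_in, bad_out)     # I/O well below 1 in bad weather

# Welch two-sample t-test: does weather significantly affect the I/O ratio?
t_stat, p_value = stats.ttest_ind(io_good, io_bad, equal_var=False)
print(f"mean I/O good: {io_good.mean():.2f}, bad: {io_bad.mean():.2f}, p = {p_value:.4f}")
```

With paired same-day indoor and outdoor readings, a paired test (`stats.ttest_rel`) on the ratios per room would be the natural variant.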

#### *2.5. Influence of Meteorological and Building Parameters on the Level of Indoor Dust Pollution*

Weather conditions are considered to be the main factor influencing the quality of atmospheric air because of their role in dispersing, transforming and removing pollutants from the atmosphere. In urban environments, episodes of severe pollution are mainly caused by unfavourable weather conditions. Precipitation has a large impact on the concentration of particulate matter and washes away mainly coarse particles, but has little effect on changes in the concentration of fine particles. Therefore, atmospheric conditions are important for the level of indoor air pollution.

In a study on Hong Kong, Chan [42] researched how constantly changing outdoor conditions (temperature, humidity, pressure, atmospheric stability, wind, etc.), together with building insulation, can modify the infiltration of outdoor air into a building in a built environment, regardless of the airflow system considered.

Chan [42] investigated the I/O ratios for respirable suspended particles (PM) and nitrogen oxides (NOx) under various meteorological conditions. At higher outside temperatures, pollutants in the air are pushed inside through the doors and the gaps around the windows, while the reverse is true at lower outside temperatures. This also explains the small slope of the relationship with the windows closed. It has also been found that this effect is slightly more significant for the gaseous NOx pollutant than for particulate matter. This is because, when the room is relatively well sealed, most of the air comes from the air conditioning vent, which effectively filters a large proportion of particulate matter but likely does not filter NOx. Chan also noticed that in most cases the I/O for particulate matter was slightly lower than for NOx, especially at higher temperatures. Overall, the I/O ratio for PM increases with increasing temperature (although at a slightly slower rate than for NOx).

Chan [42] also investigated the dependence of the I/O index on relative air humidity. In general, the I/O ratio increased with increasing outdoor humidity, which is explained by the fact that both PM and NOx pollutants are readily absorbed or washed out by water vapour in the atmosphere; presumably, this was of greater importance for the particulate form. The I/O ratios increased slightly more for PM than for NOx, which may confirm that the washout effect outside the building was more significant for PM than for NOx.

The results of Chan's research were partially confirmed by Klaic et al. [43], who studied winter correlations between the spread of PM1.0 dust indoors and outdoor weather conditions in Zagreb. Xu et al. [44] showed a decrease in indoor PM1.0 with a simultaneous increase in outdoor temperature, rainfall and horizontal wind speed, as well as an increase in indoor PM1.0 concentration with increasing relative humidity outside the building.

The correlation of indoor PM concentration levels with humidity and rainfall outside buildings raised the most doubts, as shown by further research. Zheng et al. [45], in Beijing, addressed this concern by demonstrating the clear impact of precipitation on the removal of solid particles from atmospheric air. The researchers observed that the mean concentration of PM2.5 decreased by 56.3% as a result of precipitation, and the mass concentration of PM2.5 fell below 60 μg/m<sup>3</sup> within 72 h after precipitation. Within one hour following rain, the PM2.5 concentration level remained almost unchanged, but during the next 12 h it decreased. A thick mixed layer and an unstable structure of the atmospheric layers help to reduce the mass concentration of PM2.5.

A different look at the existing correlations between the PM concentration in interiors and meteorological parameters is presented by a team of Iranian researchers in the article by Nadali et al. [11]. They built, like Klaic et al. [43], a symmetric linear correlation matrix in which they analysed the possibilities of correlation of Cin concentration with air temperature and relative humidity, as well as wind speed. Since these newer results were based on a large number of buildings and because they partially contradict earlier results, it is worth mentioning that the correlation coefficients (r) between the PM concentration in the interior and the parameters were as follows: With temperature the correlation is negative; with relative air humidity, however, the correlation is r = 0.07, which means that no significant relationship was observed between the concentrations of solid particles and air humidity. This problem merits further investigation.

The positive correlation between the concentration of solid particles and temperature may be caused by thermal diffusion. At higher temperatures, external PM is forced into buildings through windows and gaps around doors, while the opposite is true at lower ambient temperatures. The results of Nadali et al. [11] also confirmed positive correlations between I/O ratios and wind speed.

The correlations between the indoor concentrations of PM10, PM2.5 and PM1.0 and the parameters of a building were also presented by Nadali et al. [11]. A symmetric linear correlation matrix was built, which examined the correlations of the indoor concentration Cin for each particle size fraction (PM10, PM2.5 and PM1.0) with building parameters: Age of building, building type, number of windows, ventilation, indoor smoking and particles of other dimensions. Less significant correlations were found with the age of the building, the number of windows, ventilation and cigarette smoke in rooms, and more significant correlations (*p* < 0.05) between the concentrations of PM particles of different sizes.
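A symmetric linear (Pearson) correlation matrix of the kind used in such analyses can be sketched as follows; the data here are synthetic and the variable names illustrative, not the actual dataset of Nadali et al. [11]:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 60

# Hypothetical survey: indoor PM2.5 alongside meteorological variables.
temp = rng.uniform(-5, 25, n)     # outdoor temperature, deg C
rh = rng.uniform(30, 90, n)       # relative humidity, %
wind = rng.uniform(0, 8, n)       # wind speed, m/s
# Synthetic indoor response: decreases with temperature, increases with wind.
pm25_in = 40 - 0.8 * temp + 2.0 * wind + rng.normal(0, 5, n)

df = pd.DataFrame({"PM2.5_in": pm25_in, "temperature": temp,
                   "humidity": rh, "wind_speed": wind})

# Symmetric Pearson correlation matrix; each off-diagonal entry is r.
corr = df.corr(method="pearson")
print(corr.round(2))
```

With the synthetic coefficients above, the matrix shows a clearly negative r for temperature, a positive r for wind speed, and an r near zero for humidity, mirroring the pattern reported in the text.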

#### *2.6. Sources of Air Pollution by Particulate Matter (PMF Numerical Methods of Outdoor Sources Apportionment)*

The first databases on the separate sources of (outdoor) air pollution of buildings with PM1 and PM2.5 dusts were created in 2014 by the World Health Organization. These data were organised and collected by Karagulian et al. [46].

In order to reduce the health impacts of air pollution, it is important to understand the sources of the pollutants contributing to environmental exposure. That study systematically reviewed and analysed the available source apportionment surveys for particulate matter (PM10 and PM2.5, i.e., particles 10 and 2.5 μm in diameter) performed in cities, in order to estimate typical source contributions by country and region.

The percentage of separate sources of PM2.5 and PM10 dust in the urban environment for the Central and Eastern Europe region per source category according to the World Bank's list of economies (2012) for Central and Eastern Europe, provided by [46], is shown in Figure 4.

**Figure 4.** Source apportionment of urban ambient particulate matter pollution by source category for Central and Eastern Europe according to [46].

The available records regarding the distribution of sources, however, show significant heterogeneity in the assessed source categories and incompleteness in some countries/regions.

In the years 2014–2017, receptor models using the PMF (Positive Matrix Factorisation) method were increasingly used to isolate the sources of pollution in the outside air. The basis for the application of such a model by Samek et al. [36] in urban areas such as Kraków, where pollution levels often violate air quality standards, was the collection of measurement data on air pollution, especially PM2.5, considered the most hazardous to health. In 2014, approximately 200 samples were collected.

The lowest monthly concentration value was found for August 2014 (around 10 μg/m<sup>3</sup>), the highest for February 2014 (70 μg/m<sup>3</sup>), while the annual average value was around 31 μg/m<sup>3</sup>. Using the X-ray fluorescence method, 15 elements were determined for each sample, and eight inorganic ions were analysed by ion chromatography. Additionally, the samples were analysed for soot. The average concentration of PM2.5 in Kraków (31 μg/m<sup>3</sup>) was twice as high as in 2009–2013 in Genoa, Barcelona and Florence, and similar to Milan.

A receptor model adapted to the research with the EPA PMF 5.0 program was developed and used to identify the sources of air pollution with suspended dusts, based on the chemical components (elements and ions) generated by the separate sources. As a result of the modelling, six sources were identified and their quantitative shares in the total mass of PM2.5 were determined: (1) Combustion, (2) secondary nitrates and sulphates, (3) biomass burning, (4) industry, (5) soil and (6) road traffic. Monthly variations in the contributions of the PM2.5 pollution sources were also presented.
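The cited studies use the EPA PMF 5.0 program; as a rough illustration of the underlying idea, the sketch below applies scikit-learn's generic non-negative matrix factorisation (NMF) to synthetic receptor data. Real PMF additionally weights the residuals by per-measurement uncertainties, which plain NMF does not:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic receptor-model data: 200 samples x 10 chemical species,
# generated from 3 hidden non-negative source profiles.
true_profiles = rng.random((3, 10))        # source chemical fingerprints
true_contrib = rng.random((200, 3)) * 50   # per-sample source contributions
X = true_contrib @ true_profiles + rng.random((200, 10)) * 0.5  # plus noise

# Non-negative factorisation X ~ G @ F, the same decomposition PMF solves.
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)   # estimated source contributions per sample
F = model.components_        # estimated source profiles (species fingerprints)

print("reconstruction error:", model.reconstruction_err_)
```

Choosing the number of components (here fixed at 3, since the synthetic data were built that way) is the step that in practice requires the physical interpretability checks described in the text, such as annual and weekly source profiles.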

In the Warsaw area, the team of Juda-Rezler et al. [40] also applied the positive matrix factorisation (PMF) method, supported by the analysis of enrichment factors, to identify the six main sources of PM2.5 dust: Residential combustion (fresh and aged aerosols) (46% by weight of PM2.5), traffic exhaust (21%) and non-exhaust (10%) emissions, mineral dust/construction works (12%), high-temperature processes (8%) and steel processing (3%). In this analysis, primary organic carbon (POC) and secondary organic carbon (SOC) were treated as two separate components that helped distinguish between primary and secondary aerosol sources. Identification of the sources was also supported by the study of their annual and weekly profiles, as well as by the analysis of the correlation of particulate components with meteorological conditions. A comparison of the attributed sources of air pollution with dust in Warsaw and Kraków is shown in Table 3.

The most distinctive markers of PM sources in Warsaw are SOC, Cl<sup>−</sup> and As for residential combustion, NH4<sup>+</sup>, Sb and POC for road transport, Ca and Mg for construction works, and SO4<sup>2−</sup> for long-range PM transport.

The analysis of the collected data, including carbon species, eight main water-soluble ions, 21 minor and trace elements and local meteorological conditions, was used to assess the nature and seasonal variability of PM aerosols in Warsaw, as well as to identify the contribution of natural and anthropogenic sources to the aerosol levels.

The active sources of the dust pollution affecting school buildings located near Athens, Greece, were investigated by [47]. The tests, using the PMF method, were carried out in 2017 and covered the mass concentrations (the highest being 72.02 μg/m<sup>3</sup>) and the chemical analysis of PM10. Seasonal fluctuations were also examined.

Lv et al. [10] studied the outdoor concentration of solid particles, Cout, in Daqing, China, and the contribution rates in buildings of various uses, as well as the correlation of indoor pollution with the locations of buildings exposed to external dust sources of various origins (office: Cout = 22 μg/m<sup>3</sup> and ρ = 79.4%, where ρ is the contribution rate of outdoor particle sources to the concentration of indoor particulate matter; classroom: 20 μg/m<sup>3</sup> and 87.6%; city residence: 30 μg/m<sup>3</sup> and 75.0%; rural house: 34 μg/m<sup>3</sup> and 90.2%). The researchers found that the share of external sources in the indoor PM concentration exceeds 70%, and in classrooms and rural houses approaches or exceeds 90%.


**Table 3.** Comparison of the attributed sources of air pollution with dust in Warsaw and Kraków.

#### **3. Materials and Methods**

*3.1. Preliminary Measurements of Mass Concentrations of Dust Pollutants Inside Rooms*

There are two types of PM measurement instruments: Those that provide average concentrations over the sampling period and those that provide real-time instantaneous concentration monitoring. Instruments based on the gravimetric method are considered as reference methods and must meet the requirements of PN-EN 12341:2014 [48]. The principle of the reference method is to collect the particulates on a circular filter. This must be a glass fibre filter (GFF), a quartz fibre filter (QFF), polytetrafluoroethylene (PTFE) or PTFE-coated glass fibre, and must be thoroughly conditioned before and after collection. The filter itself should be weighed before and after collection.
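The gravimetric reference method reduces to dividing the filter's mass gain by the volume of air sampled. A minimal sketch (the 2.3 m³/h flow is the nominal low-volume sampler rate; the weighings are hypothetical):

```python
def gravimetric_pm(m_before_mg, m_after_mg, flow_m3_per_h, hours):
    """PM mass concentration (ug/m3) from conditioned-filter weighings,
    per the gravimetric reference method: mass gain / sampled air volume."""
    mass_gain_ug = (m_after_mg - m_before_mg) * 1000.0  # mg -> ug
    volume_m3 = flow_m3_per_h * hours
    return mass_gain_ug / volume_m3

# Hypothetical 24-h sample at a nominal low-volume flow of 2.3 m3/h:
# a 1.5 mg mass gain over 55.2 m3 of air gives about 27.2 ug/m3.
c = gravimetric_pm(120.0, 121.5, 2.3, 24.0)
print(f"{c:.1f} ug/m3")
```

The conditioning of the filter before and after collection, required by PN-EN 12341:2014, is precisely what makes the two weighings comparable.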

Optical instruments based on light scattering, absorption or occultation of light by particles are used to measure dust concentration in real time. They must have certificates confirming their equivalence with the reference method.

A detailed description of the measurement strategy for airborne particles and the applicable measuring equipment can be found in ISO 16000-34:2018 [49] and ISO 16000-37:2019 [50]; the latter describes the strategies and procedures for measuring the mass concentration of PM2.5 indoors.

Tests of dust air pollutants in selected rooms were carried out from 27 January 2020 to 10 March 2020 in a low office building (two storeys high), located in the centre of Warsaw in an urban development. Three office rooms with different areas and different proportions of external and internal walls were selected for the study of air pollution. The rooms were empty, and the leaks had been identified and taped over, leaving only airflow through the window vents and the undercut of the front door. The locations of the measurement devices in the rooms are shown in Figure 5. The cubic volumes and numbers of windows of the analysed rooms were as follows:

Room 1: 23.48 m<sup>3</sup>, 1 window

Room 2: 48.75 m<sup>3</sup>, 2 windows

Room 3: 93.14 m<sup>3</sup>, 3 windows

**Figure 5.** Rooms 1, 2 and 3 where measurements were performed, with the locations of the measurement devices indicated.

The rooms were situated on the ground floor of the building (approximately 2 m above ground level and 104 m above sea level). The exhaust ventilation was turned off one hour before the start of the measurements and remained off during the tests. In the immediate vicinity of the building in question, there are other office buildings and multi-occupancy residential buildings connected to the municipal central heating. In terms of height, the adjacent buildings can be characterised as low (up to 12 m above ground level), medium (up to 25 m) and tall (up to 55 m). There are local access roads with low traffic around the office building in question. At a distance of approximately 120 m from the building, there is a main road with heavy traffic.

#### *3.2. Measurement Methods (Measuring Instruments and Procedures of Particle Concentration Measurements Inside and Outside the Building)*

Measurements of the mass concentration of dust inside were carried out with the AEROCET 831 Handheld Particle Counter (Met One Instruments Inc., Grants Pass, OR, USA), operating on the optical principle with a laser diode, and with the TSI QUEST EVM-7 optical-gravimetric environment monitor, indicating the mass concentration of particles as well as their size fractions PM2.5, PM4 and PM10 (in terms of measuring the particle size distribution, the optical device is classified as a laser light scattering aerosol spectrometer, LSAS). The TSI QUEST EVM-7 environment monitor (TSI Incorporated, Shoreview, MN, USA) also measures the CO2 concentration inside the rooms. The equipment used is shown in Figure 6. The devices are under the supervision of the environmental laboratory and are regularly checked and periodically calibrated; each device was checked and calibrated before the experiments. The dust and environmental condition measuring devices listed in Table 4 (the AEROCET and the TSI) were set to zero prior to the measurements using the "filter 0" in order to reset the measurement path.

**Figure 6.** Set of measuring devices for indoor environment testing.


**Table 4.** Equipment for measuring the thermal parameters of the internal environment and the mass concentration of PM10 and PM2.5 particles in selected rooms and outside the building.

> The operating principle of an LSAS relies on particles being guided individually through an intensely illuminated volume. Commercially available equipment enables the particle mass concentrations to be estimated and displayed by the monitor using evaluation software. Normally, these programs assume an ideal spherical form for the particles and convert the counts to mass concentration. Calibration of the TSI QUEST EVM-7 is required and is usually performed with latex particles of a defined diameter as the test aerosol. A comparison with other gauges should, however, always be made carefully, because ultimately what an LSAS determines is the particle equivalent optical diameter established by calibration with monodisperse spherical latex particles.

> The lower detection limit depends very much on particle size: The larger the particles, the lower the limit of quantification. The characteristics of the devices used for measuring the internal environment, including PM10 and PM2.5 dust, are shown in Table 4 below.

> Outdoor concentrations of PM10 and PM2.5 on both measurement days are based on archival data from CIEP (the Chief Inspectorate of Environmental Protection; Polish: GIOŚ) and on the results of measurements made with the AEROCET 831.

> At CIEP, dust concentrations were measured using an automatic method, by a CIEP environmental monitoring station located closest to the building selected for the study. CIEP provides access to hourly monitoring data on air quality in Poland, produced as part of the State Environmental Monitoring and collected in the JPOAT2.0 database.

> The CIEP pollution measurement stations taken into account during the tests differ from each other in their locations, particularly in relation to the traffic routes affecting the measured pollution levels.

> At the same time, measurements of the mass concentration of PM10 and PM2.5 were performed using the AEROCET 831 described in Table 4, placed outside, right next to the window opening of the monitored room. Due to the use of different devices at the Building Research Institute and CIEP, the not fully established "outside the window" measurement procedure (a critical increase in the AEROCET 831 readings was noticed when the cleaner was turned on), and the different distances and orientations of the monitored rooms relative to the official air quality control points, the results of the present study's measurements are not fully comparable with those reported by CIEP. The authors' own readings of outdoor concentrations should be more reliable for comparison because they were taken at the same time points as the indoor pollutant concentrations. The outdoor environmental parameters were observed and recorded during the experimental campaign. In these preliminary tests, the air velocity in the immediate external environment was not measured.

During the measurements of the mass concentration of dusts, the CO2 concentration was also measured with the TSI QUEST EVM-7 in each test room.

The measurements of the transient curves, i.e., the particle concentration decay curve and the particle concentration rebound curve in the interiors, were carried out in time cycles of 150 min. The air cleaner operated in the time interval from −50 to 0 min; at point 0 (on the x-axis), when the cleaner was switched off, the measured indoor dust concentration had its minimum value. The concentration of suspended dust was then rebuilt, as a result of natural dust penetration through the external wall of the building and the gaps near the windows and doors, in the time interval from 0 to 150 min. This second part of the transient curve is the concentration rebound curve.

#### *3.3. Procedures for Measurements of PM2.5 Mass Concentration in the Indoor Mode Performed at ITB*

During the tests, which were carried out inside each room on three days with different outdoor PM2.5 pollution levels, the weather conditions shown in Table 5 were recorded.

**Table 5.** Weather conditions during air quality and dust concentration measurements (source: www.weather.com and www.meteomodel.pl, accessed on 27 January 2020, 28 January 2020, 10 March 2020).


Indoor air pollution tests in three selected rooms were carried out according to the following scheme:


Thus, this study measured the transient concentration decay curves (the decrease in dust concentration in the room from when the air cleaner was turned on until it was turned off, at point Cin,0) and the transient PM concentration rebound curves (the recovery of dust concentration) over specific time intervals recorded in real time. The particle concentration rebound curves, not the particle concentration decay curves, were used in our research to determine the penetration factor. In this modification of the method, the particle deposition rate k is determined not from the transient-state characteristic (the so-called particle decay curve) but from the subsequent, infiltration-driven part of the time course (the so-called particle concentration rebound curve). Determining the value of k and then P in this way was recognised as correct by He [25] and Diapouli [51], but it has not been used so far.
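Under the standard single-zone mass-balance model dCin/dt = P·a·Cout − (a + k)·Cin, the rebound after the cleaner is switched off approaches a plateau Css = P·a·Cout/(a + k), so fitting the rebound curve yields the total loss rate (a + k) and, with a known air-exchange rate a, the deposition rate k and the penetration factor P. A sketch with synthetic data (this is the textbook model, not necessarily the exact fitting procedure used in the study):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-zone mass balance: dCin/dt = P*a*Cout - (a + k)*Cin, so the
# rebound after switching the cleaner off follows
#   Cin(t) = Css + (C0 - Css) * exp(-(a + k) * t),  Css = P*a*Cout/(a + k)
def rebound(t, Css, rate, C0=2.0):
    return Css + (C0 - Css) * np.exp(-rate * t)

Cout, a = 50.0, 0.5          # outdoor PM2.5 (ug/m3) and air-exchange rate (1/h),
                             # both assumed known from separate measurements
t = np.linspace(0, 2.5, 40)  # 150-min rebound, expressed in hours

# Synthetic noisy rebound data for true P = 0.8, k = 0.3 1/h.
P_true, k_true = 0.8, 0.3
Css_true = P_true * a * Cout / (a + k_true)
data = rebound(t, Css_true, a + k_true) + np.random.default_rng(2).normal(0, 0.3, t.size)

(Css_fit, rate_fit), _ = curve_fit(rebound, t, data, p0=[20.0, 1.0])
k_fit = rate_fit - a                        # deposition rate from the fitted loss rate
P_fit = Css_fit * (a + k_fit) / (a * Cout)  # penetration factor from the plateau
print(f"k = {k_fit:.2f} 1/h, P = {P_fit:.2f}")
```

Fitting the rebound rather than the decay has the practical advantage noted in the text: the decay is dominated by the cleaner's removal rate, whereas the rebound is driven purely by infiltration and deposition.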

One person stayed in the room during the measurements, hence the constant increase in the concentration of carbon dioxide, which was also measured with the TSI QUEST EVM-7 meter.

The CO2 concentrations marked as c0 and ct were measured on each measurement day at the beginning and at the end of the measurements of PM2.5 mass concentrations.

The results of CO2 concentration measurements in room 1 (with one window) and in room 3 (with three windows), in ppm, are given in Table 6 (the CO2 concentration outside was assumed equal to 500 ppm). According to the statement of the Minnesota Department of Health, 'The outdoor concentration of carbon dioxide is about 400 parts per million (ppm) or higher in areas with high traffic or industrial activity.' The present authors' assumption reflects the location of the measurement points and their own experience in this area.

**Table 6.** The results of indoor CO2 concentration measurements (ppm).


Dust concentration tests in individual rooms were carried out sequentially (not simultaneously); therefore, the external environment for rooms 1–3 may differ slightly.

#### **4. Results and Discussion**

*4.1. Results and Discussion of Preliminary Measurements of Indoor PM2.5 and PM10 Concentrations*

The concentrations of PM10 and PM2.5 outside were taken from published data based on measurements made by the CIEP environmental protection stations closest to the studied rooms (Warsaw, Wokalna 7 and Al. Niepodległości 227/233). The stations report air quality measurements at hourly intervals; the data are shown in Table 7.

**Table 7.** Outdoor air quality conditions during air quality and dust concentration measurements (source: GIOŚ).


Based on the data on dust concentrations obtained from the hourly monitoring of the Chief Inspectorate of Environmental Protection for PM10 and PM2.5 dusts on Day 1 (27/01/2020), Day 2 (28/01/2020) and Day 3 (10/03/2020), the curves of dust concentration changes outside the building were determined. The curves of dust concentration inside the building in the indicated rooms were drawn on the basis of measurements made with the AEROCET 831 meter. Figure 7a–c show changes in the mass concentration of PM10 and PM2.5 on the first, second and third days of measurements in rooms 1 and 3.

**Figure 7.** Forced changes in the concentration of PM10 and PM2.5 dust inside the tested rooms (points) and outside the building (solid line) on three working days of dust concentration rebound curve measurements.

Comparing the measurements of PM2.5 dust on day 1 (Figure 7a) and day 3 (Figure 7b) in room 1, it can be concluded that the rate of increase in the dust concentration level was higher on day 1, and the indoor pollution equilibrium state was reached by approximately 100 min. The rate of increase was much lower on day 3, and the indoor pollution equilibrium state was reached by approximately 150 min. This may be due to the smaller and continually decreasing PM concentration gradient (Cout–Cin) (although at the same time during the measuring cycle, the temperature and wind speed were slightly higher). On day 3, the ambient dust concentration Cout measured by CIEP decreased during the measuring time from 38 to 28 μg/m3.

By analysing the course of the PM10 measurement on day 2 (Figure 7c) in room 3, a uniform decrease in outdoor pollution caused by high relative air humidity can also be noticed; therefore, the concentration gradient (Cout–Cin) also decreased. In accordance with theoretical expectations, however, the equilibrium level of PM10 concentration inside the room settled about 40% higher than the level of PM2.5 concentration (Figure 7a). The measurement of the PM10 concentration rebound curve (Figure 7c) was completed when rainfall occurred. The authors found that the agreement between the results of the CIEP (Chief Inspectorate of Environmental Protection) and our PM2.5 measurements outside the building is within the measurement uncertainty of the field instruments used for indoor PM concentrations, in accordance with Table 4.

The results of the preliminary experiments showed that a system for measuring the PM2.5 concentration directly outside the window should be designed, protected against disturbances caused by the operation of the air cleaner in the interior and against changes in the meteorological parameters.

#### *4.2. Theoretical Basis and Possibilities of Predicting Dust Penetration through the Building Envelope. A Dynamic Model of the Mass Balance Equation of Indoor Particle Concentration Levels*

The penetration of dust from the outside into the interior of a building and the generation of PM by internal sources together form a dynamic process [46]; all findings therefore emphasise the importance, but also the difficulty, of determining separately the time courses of changes in the concentration of suspended dust in the interior, along with the shares of its mass contributed by particles from outdoors and by particles generated indoors. Such data are necessary for the interpretation of the results of epidemiological studies. In this framework, recent scientific research has focused on the factors influencing the penetration of particles through the building envelope and on the quantification of the relative indoor proportion of particles coming from outside and remaining suspended inside the room. The basic form of the dynamic mass balance equation is given in [25,51–56]; it describes the temporal profiles of PM concentration during the particle decay curve, or during the reconstruction of this concentration (the particle rebound curve), in the room with Equation (4)

$$\frac{d\mathbf{C}\_{\rm in}(t)}{dt} = a \cdot \mathbf{P} \cdot \mathbf{C}\_{\rm out}(t) - (a+k) \cdot \mathbf{C}\_{\rm in}(t) + \frac{Q\_{\rm is}}{V} \tag{4}$$

where *Cin(t)* and *Cout(t)* are the concentrations of particles inside and outside at time *t*, respectively (mg/m3); *a* is the multiplicity of air changes (h−1); *P* is the dimensionless coefficient of particle penetration efficiency; *k* is the particle deposition rate (h−1); *V* is the volume of the interior (m3); and *Qis* is the rate of generation of particles by the indoor sources (mg/h). Equation (4) assumes perfect mixing of indoor air. It also ignores particle mass losses or gains due to differences in the gas-phase concentrations of condensable substances and changes in temperature/relative humidity conditions between indoor and outdoor spaces. The penetration efficiency factor (*P*) and particle deposition loss rate (*k*) are related to building characteristics and indoor/outdoor conditions, but these parameters also depend on particle size, composition and electric charge.
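The behaviour described by Equation (4) can be illustrated with a short numerical sketch (a simple forward-Euler integration; the parameter values are illustrative only, chosen near those determined later in the paper):

```python
def simulate_indoor_pm(c_in0, c_out, a, P, k, q_is, V, dt_h, steps):
    """Forward-Euler integration of Equation (4):
    dCin/dt = a*P*Cout - (a + k)*Cin + Qis/V, with rates in h^-1."""
    c_in = c_in0
    series = [c_in]
    for _ in range(steps):
        dc_dt = a * P * c_out - (a + k) * c_in + q_is / V
        c_in += dc_dt * dt_h
        series.append(c_in)
    return series

# Illustrative run: a = 0.28 h^-1, P = 0.61, k = 1.62 h^-1, no indoor sources.
curve = simulate_indoor_pm(c_in0=5.0, c_out=50.0, a=0.28, P=0.61, k=1.62,
                           q_is=0.0, V=50.0, dt_h=0.01, steps=1500)
steady_state = 0.61 * 0.28 * 50.0 / (0.28 + 1.62)  # the limit given by Equation (9)
```

With no indoor sources, the simulated concentration converges to *P*·*a*·*Cout*/(*a+k*), the steady-state value of Equation (9).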

The particle penetration factor (*P*) is defined as the mass fraction of the particles in the infiltrated air passing through the building envelope, which depends on particle diameter (*dp*). This simple definition is

$$P = N\_{\text{escape}} / N\_{\text{total}} \tag{5}$$

where *Nescape* is the number of particles escaping through the leak outlet and *Ntotal* is the number of particles collected at the entrance of the leak in the building. The penetration factor is the most appropriate parameter for describing the mechanism of particle penetration through cracks and leaks in the building envelope, and at the same time a parameter describing the functionality of the building and, thus, its tightness.

The main difficulty in the solution to this equation lies in the separate calculation of the penetration efficiency factor (*P*) and deposition rate (*k*). The values for these two parameters that are reported in the literature vary significantly. Deposition rate *k* presents the widest range of values in terms of size fractions in the literature. The penetration efficiency factor *P* seems to be more accurately calculated through the application of dynamic models.

Bennett and Koutrakis [57] developed a method for calculating the unknowns *P* and *k* using time-dependent indoor and outdoor particle concentrations and the air exchange rate (*a*). Assuming there are no indoor particle sources and using discrete time steps Δ*t*, Equation (4) can be rewritten as Equation (6)

$$\mathbf{C}\_{in,t} = \frac{a\_i P\_i \mathbf{C}\_{out, t-\Delta t}}{(k\_i + a\_i)} \times \left(1 - e^{-(k\_i + a\_i)\Delta t} \right) + \mathbf{C}\_{in, t-\Delta t} \times e^{-(k\_i + a\_i)\Delta t} \tag{6}$$
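A minimal sketch of one step of the recursion in Equation (6) (constant parameters are assumed here; in the cited studies *ai*, *Pi* and *ki* vary from step to step):

```python
import math

def step_eq6(c_in_prev, c_out_prev, a, P, k, dt_h=1.0):
    """One discrete step of Equation (6): the new indoor concentration is a
    weighted mix of the steady-state term and the previous indoor value."""
    decay = math.exp(-(k + a) * dt_h)
    return (a * P * c_out_prev / (k + a)) * (1.0 - decay) + c_in_prev * decay

# Iterating with a constant outdoor level drives Cin toward a*P*Cout/(a+k).
c_in = 5.0
for _ in range(20):
    c_in = step_eq6(c_in, c_out_prev=50.0, a=0.28, P=0.61, k=1.62)
```

As Δ*t* grows, the exponential weight vanishes and the step reproduces the steady-state concentration of Equation (9).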

Assuming steady-state conditions inside the building, no internal sources of suspended dust generation and no mechanical ventilation with a filter, Equation (6) can be transformed into an equation defining the dust infiltration coefficient *Finf*. The infiltration of air with suspended particles is also described by the infiltration coefficient (*Finf*), which determines the fraction of external particles that enters the internal microenvironment and remains suspended. The particulate matter infiltration process is defined by the following equation

$$F\_{\inf} = \frac{\mathbb{C}\_{\text{in}}}{\mathbb{C}\_{\text{out}}} = \frac{P \cdot a}{a + k} \tag{7}$$

where *Finf* is the PM infiltration factor (-).
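Equation (7) amounts to a one-line calculation; the sample values below are those determined later in the paper (*a* = 0.28 h−1, *k* = 1.62 h−1, *P* = 0.61) and are used here purely for illustration:

```python
def infiltration_factor(P, a, k):
    """Equation (7): Finf = P*a / (a + k), the fraction of outdoor particles
    that enters the interior and remains suspended."""
    return P * a / (a + k)

f_inf = infiltration_factor(P=0.61, a=0.28, k=1.62)  # ~0.09 for these inputs
```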

"When indoor particle emissions cannot be avoided, the method developed by Long et al. [58] which is based on the linear regression approach can be used for *Finf* determination".

The analytical solution of Equation (6), based on field measurements of PM2.5 mass concentrations inside and outside the building, was used by Chao Chen et al. [59] in their study of particle penetration through window gaps. They used Equation (6) with a time step Δ*t* = 1 h, where *ai*, *Pi* and *ki* are the hourly air exchange rate, penetration factor and deposition rate, respectively. Equation (6) can be resolved when *Cin,i* and *Cout,i* (*i* = 1, 2 ... *n*) are known over the measurement period, with hourly indoor and outdoor PM2.5 mass concentrations determined from field data; the unknowns are then the air exchange rate (*ai*), penetration factor (*Pi*) and deposition rate (*ki*). As such, the number of equations is (*n* − 1), while the number of unknowns is 3(*n* − 1) (*ai*, *Pi* and *ki* for each step).

The experimental solution of the dynamic mass balance equation of indoor PM concentration is recommended by [25,52,53]. They argued that a dynamic variation of the PM concentration level in the sample room should be performed while simultaneously measuring the PM particle decay curve in real buildings.

When the test time is much longer than the time constant, the indoor PM2.5 concentration can be assumed to have reached a steady-state condition; then

$$\frac{d\mathbf{C}\_{\rm in}(t)}{dt} = 0\tag{8}$$

Therefore, Equation (4) reduces to Equation (9)

$$\mathbf{C}\_{\rm in,t} = \mathbf{C}\_{\rm in(0)} = \frac{P \cdot a \cdot \mathbf{C}\_{\rm out}}{a+k} \tag{9}$$

where *Cin,t* is the steady-state indoor PM2.5 concentration.

Upon determining the "decay term" (*a+k*) and the air exchange rate *a*, the penetration efficiency factor *P* is described by the following equation, which assumes the absence of internal PM sources.

$$P = \frac{(a+k)}{a} \cdot \frac{\mathbb{C}\_{in,0}}{\mathbb{C}\_{out}} \tag{10}$$

where *Cin,0* is the concentration *Cin* of the PM particles once it has reached a steady state in time.

When estimating the coefficient of particle infiltration from outside into the room, *Finf*, from the dynamic mass balance equation, it is difficult to determine the values of *P* and *k* independently. Over the last two decades, many methods have been used to estimate *P* and *k* and to calculate the relative contributions of particles from indoor and outdoor sources to the measured mass concentrations of indoor particles. Diapouli et al. [51] give a broad overview of the methods for determining the values of these parameters, presenting different approaches, which they grouped into four categories according to the principles of their determination: (1) Steady-state assumption using the steady state of the mass balance equation; (2) dynamic solution of the mass balance equation using complex statistical techniques; (3) experimental studies using conditions that simplify model calculations (e.g., decreasing the number of unknowns); and (4) infiltration surrogates using a particulate matter (PM) constituent with no indoor sources to act as a surrogate of indoor PM of outdoor origin. As they note, however, the analysis of the various methodologies and results shows that the estimation of particle infiltration parameters remains difficult.

The penetration factor *P* measurement principle proposed by the authors covers the realisation of the dynamic mass balance equation of indoor PM concentration, as presented in Figure 8. The air exchange rate *a* is measured first, using a well-documented procedure, and is therefore not described in Figure 8.

**Figure 8.** Schematic diagram of the P factor test procedure (with reference to Dong et al. [24]).

#### *4.3. Method of Determination of the Penetration Efficiency Factor P*

*Air exchange rate in the internal microenvironment (a)*

The rate of air exchange in the room (a) due to infiltration of air from the outside is a measurable parameter, and is mainly influenced by the construction and technical condition of the external wall of the building and the ventilation system, the activity of residents and meteorological conditions.

The value of *a* in a room where people are present can be estimated from the time curve of the decay of CO2 concentration in the room at night. The air exchange rate can be estimated from the exponential decay curve of CO2 concentration over time, using the equation

$$a = \frac{1}{t - t\_0} \ln \left( \frac{c\_0 - c\_{out}}{c - c\_{out}} \right) \tag{11}$$

where *a* is the air change rate (h−1), the times *t* and *t0* (h) are read at the end and the beginning of the concentration decay curve, respectively, the values of *c* and *c0* are the CO2 concentrations (ppm) measured at times *t* and *t0*, respectively, and *cout* (ppm) is the outdoor CO2 concentration. The air change rate estimated for different measurement cycles shows seasonal fluctuations in the multiplicity of air changes. In naturally ventilated buildings, air flow is caused by differences in temperature or pressure inside and outside the building.
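The calculation can be sketched as follows (the readings are hypothetical; the ratio is written with the initial excess concentration in the numerator so that *a* is positive for a decaying curve):

```python
import math

def air_change_rate(c0, ct, c_out, dt_h):
    """Air change rate (h^-1) from two points on a CO2 decay curve:
    a = ln((c0 - cout) / (ct - cout)) / (t - t0), cf. Equation (11)."""
    return math.log((c0 - c_out) / (ct - c_out)) / dt_h

# Hypothetical night-time decay: 1200 ppm falling to 800 ppm over 2 h,
# with an assumed outdoor level of 500 ppm.
a = air_change_rate(c0=1200.0, ct=800.0, c_out=500.0, dt_h=2.0)
```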

Equation (11) was used by the Czech–Norwegian team of Chatoutsidou et al. [54] in a study of the infiltration process in the Baroque Library Hall building in Prague, naturally ventilated by infiltration, in order to calculate the air change rate. In this study, a dynamic mass balance model was used, taking into account the penetration of particles from outside and the internal losses (deposition, ventilation). The model was used to determine the particle deposition rate *k* and the penetration efficiency *P*. As a result of the three-year research, the values of *a* were *a* = 0.13 for spring, *a* = 0.11 for summer and *a* = 0.15 for winter. Equation (11) was also used in a simplified form by [25,60].

$$a = \frac{1}{t} \ln \frac{c\_0}{c\_t} \tag{12}$$

*Deposition rate of airborne particles (k)*

One of the simpler methods of determining the particle deposition coefficient is recommended by [25,51], who determined the factor (*a+k*) of Equation (6), the "decay term", in the time interval Δ*t* using regression analysis of the experimentally determined particle concentration decay curve. The transient decay of the indoor dust concentration from the highest concentration *Cint* to the concentration *Cin,0* (e.g., the low *Cin* level caused by the operation of the air cleaner), in accordance with the equation

$$
\ln \left( \frac{\mathbb{C}\_{in,0}}{\mathbb{C}\_{int}} \right) = -(a+k)\Delta t \tag{13}
$$

is then used to derive the value of *k* from the value (*a+k*). An increasing trend of *k* was found for increasing particle sizes. For the largest particles, deposition rates are of the same order as, or even higher than, their removal rates due to air exchange. According to the results of [1], higher deposition rates were found for larger particles, so the deposition coefficient *k* of PM10 particles is greater than *k* for PM2.5 particles. They also found large differences in deposition rates among the six residences studied, as a result of discrepancies in surface material texture and roughness, as confirmed by Abadie et al. [61], who also demonstrated that the deposition effect is temperature dependent.
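This step can be sketched under the stated assumptions (two points on the decay curve and a known air change rate *a*; the numbers are hypothetical):

```python
import math

def decay_term(c_in_start, c_in_end, dt_h):
    """(a + k) from Equation (13): ln(Cin,end / Cin,start) = -(a + k)*dt."""
    return math.log(c_in_start / c_in_end) / dt_h

def deposition_rate(c_in_start, c_in_end, dt_h, a):
    """k is obtained by subtracting the measured air change rate a."""
    return decay_term(c_in_start, c_in_end, dt_h) - a

# Hypothetical decay from 30 to 5 (x10^3 p/cm^3) over 1 h with a = 0.28 h^-1.
k = deposition_rate(c_in_start=30.0, c_in_end=5.0, dt_h=1.0, a=0.28)
```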

*Particle penetration efficiency through the external walls of the building (P)*

There are several different models in the literature for predicting the penetration efficiency *P* of particles through the gaps in the building envelope. The equation resulting from the Lagrangian model (Chen, Zhao [52]) is particularly interesting. It calculates the trajectory of each particle by integrating the equilibrium forces acting on individual particles.

The practical significance of the research conducted by Chao et al. [62] from the Hong Kong Technical University concerned the determination of the *P* and *k* coefficients in residences with natural ventilation. Both parameters depended on particle size but showed a different up-and-down inversion profile with respect to particle size. The main causes of losses by deposition and penetration effects are diffusion, inertial impaction/interception and deposition.

The efficiency of *P* is related to particle size, as demonstrated by many researchers who compared the penetration of particles of different sizes [1,25].

Chao et al. [62] studied six buildings and argued that 'the penetration coefficient showed a hill-shape with respect to particle sizes and there was a peak (0.79) at the size range of 0.853–1.382 μm'.

It follows that the deposition coefficient *k* of PM10 particles is greater than *k* of PM2.5 particles, while the penetration coefficient *P* of PM10 particles is less than *P* of PM2.5 particles. The particle penetration factor *P* is, therefore, another important factor determining the value of the I/O ratio. If the building is ventilated by infiltration, however, the particle penetration coefficient is a strong function of the air exchange coefficient *a*, particle size and crack geometry in the building envelope.

Chao et al. [62] also described a simplified way to analyse the model of indoor particle behaviour.

This simplified method has been developed for users of particle counters, who need to express concentrations as particle number concentrations, i.e., (×103) p/cm3. There is therefore a need to convert the units in which the particle concentrations are recorded, i.e., to convert the particle mass concentration units (μg/m3) into particle number concentration units ((×103) p/cm3), assuming a spherical shape and uniform particle density and packing. Such conversions are performed for each particle size range.

Returning to the dynamic mass balance Equation (6), it can be said that all parameters in this equation are known values, except for the penetration efficiency coefficient *P* and the deposition rate *k*. Chao et al. [62] transformed Equation (6) into a transient form. For each particle size range, the concentration of particles in the interior during a forced change in the indoor dust concentration is given by Equation (14)

$$\mathbf{C}\_{in} = \frac{\mathrm{Pa}\mathbf{C}\_{out}}{(a+k)} + \left(\mathbf{C}\_{int} - \frac{\mathrm{Pa}\mathbf{C}\_{out}}{a+k}\right) e^{-(a+k)t} \tag{14}$$

where *Cint* is the initial dust concentration on the transient concentration decay curve.

Equation (14) consists of two parts: *P*·*a*·*Cout*/(*a+k*), the steady-state particle concentration, and (*a+k*), the decay rate of the transient term (the reciprocal of its time constant).

Chao et al. [62] performed an experiment on the process of particle concentration decay of *Cin* over time (*t–t0*) caused by intensive cleaning of the room air with an air cleaner. A typical profile curve of such a process is given by [62] and is also cited by [63]. The experiment consisted of artificially increasing the PM concentration in the room to the highest value *Cint*, in order to obtain a "drive" for the increase in the deposition rate, and then quickly reducing the concentration of solid particles to the steady state *Cin,0* using the ventilation present in the building. Substituting all measured particle concentrations and the air exchange rate *a* obtained in the tracer gas (CO2) decay test, the deposition rate can be expressed from the particle decay curve as follows:

$$k = -\left(\frac{1}{t}\right) \ln\left(\frac{\mathbb{C}\_{in} - \mathbb{C}\_{in0}}{\mathbb{C}\_{int} - \mathbb{C}\_{in0}}\right) - a \tag{15}$$

The difficulty in applying this equation, however, lies in the correct choice of the time point on the particle concentration decay curve at which the *k* value is to be calculated, since the chosen value of *t* determines the value of *Cin(t)* read from the decay curve or from the particle rebound curve. Upon determining the deposition rate *k*, the penetration factor *P* can be found from the steady-state particle concentration, as follows:

$$P = \left(1 + \frac{k}{a}\right) \frac{\mathbb{C}\_{\text{in}0}}{\mathbb{C}\_{\text{out}}} \tag{16}$$

Equation (16) expresses the penetration factor as the product of the ratio of the indoor to the outdoor particle concentration (the I/O ratio) and the term containing the ratio of the particle deposition rate to the air exchange rate. In the case where the deposition rate *k* is much smaller than the air exchange rate *a*, the penetration coefficient is equivalent to the I/O ratio.
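Equations (15) and (16) can be sketched together; note that the logarithm in the deposition-rate expression is negated so that *k* comes out positive for a decaying curve (the sample numbers are chosen to be consistent with the worked example later in the paper, *a* = 0.28 h−1 and *k* = 1.62 h−1):

```python
import math

def deposition_rate_from_decay(c_in_t, c_in0, c_int, t_h, a):
    """k from one point of the decay curve (cf. Equation (15)); the log is
    negated so that k > 0 while Cin decays from Cint toward Cin,0."""
    return -math.log((c_in_t - c_in0) / (c_int - c_in0)) / t_h - a

def penetration_factor(c_in0, c_out, a, k):
    """P from the steady-state concentration, Equation (16)."""
    return (1.0 + k / a) * c_in0 / c_out

# A decay-curve point consistent with a + k = 1.9 h^-1:
k = deposition_rate_from_decay(c_in_t=8.4187, c_in0=4.5, c_int=30.7,
                               t_h=1.0, a=0.28)
P = penetration_factor(c_in0=4.5, c_out=52.0, a=0.28, k=1.62)
```

With the rounded inputs *Cin,0* = 4.5 and *Cout* = 52.0, the last call evaluates to *P* ≈ 0.59; the paper reports *P* = 0.61, the difference presumably coming from rounding of the intermediate values.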

The particle decay method can be replaced by the particle concentration rebound method, as mentioned by [25,51,52]. The reversed process yields a transition curve that reflects more closely the ease of particle penetration, because the experiment takes place under conditions of a natural I/O pressure difference and thus better reflects the efficiency of particle penetration through the building envelope.

#### *4.4. Exemplary Determination of the Penetration Efficiency Factor Based on the Results of Tests of the Profile of Forced Changes in the Concentration of PM Particles in a Selected Room*

As an example of a measurement cycle, the results of which were used to calculate the value of the penetration efficiency factor *P* of PM2.5 particles through the external walls of the building, we chose the mass concentration *Cin* and *Cout* measurement cycle of 27 January 2020, made in room 1 (with one window), and presented in Figure 7a.

Given that the calculation of the penetration coefficient requires an experiment that forces the concentration decay process, followed by the process of restoring the PM2.5 concentration level in the selected room (as in the experiments in [26,62]), the authors present the transient curve of this dynamic process of PM2.5 penetration through the building envelope in Figure 7.

The first step was to determine the value of the air change rate *a* in the selected room under the thermal conditions of the measurement cycle. The CO2 concentrations marked as *c0* and *ct* were measured on each measurement day at the beginning and at the end of the measurements of PM2.5 mass concentrations. The value of the CO2 concentration outside the building, *cout* = 500 ppm, was assumed in accordance with ISO 16000-26:2012 [64]. The *a* value calculated from Equation (11) for the selected room and measurement date was *a* = 0.28.

The value *a* (air change rate, ACH, h−1) is calculated from the equation

$$a = \frac{1}{(t - t\_0)} \cdot \ln\left(\frac{c\_0 - c\_{out}}{(c\_t - c\_{out})}\right).$$

In order to use Equations (14)–(16), the authors had to convert the results of the particle concentration measurements in the selected measurement cycle (Figure 9) from mass concentration units (μg/m3) to number concentration units ((×103) p/cm3, thousands of particles per cubic centimetre). The method of conversion was taken from the article by Morawska et al. [26], assuming (for the size range of PM2.5 particles) a conversion factor of 21.3/18 from mass concentration units to number concentration units.
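The conversion described above amounts to a single multiplicative factor (the value 21.3/18 is the one quoted from Morawska et al. [26] for the PM2.5 size range; its validity rests on the stated shape and density assumptions):

```python
# Factor converting mass concentration (ug/m^3) into number concentration
# ((x10^3) p/cm^3) for the PM2.5 size range, as quoted from [26].
MASS_TO_NUMBER = 21.3 / 18.0

def mass_to_number(c_mass_ug_m3):
    """Apply the PM2.5 mass-to-number conversion factor from the text."""
    return c_mass_ug_m3 * MASS_TO_NUMBER
```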

**Figure 9.** Coupled dynamic processes: Particle concentration decay curve combined with particle concentration rebound curve. Decay curve starts at the concentration point *Cint*.

The second step is to determine the value of the particle deposition rate *k* (from the concentration decay curve of *Cin* of PM2.5 particles, or from the particle concentration rebound curve (see Figure 7) in the selected time interval Δ*t*, in which case it is a reverse cycle) in the room under the conditions of the determined air change rate *a* (see Table 8). The *k* factor is calculated for the measurement cycle shown in Figure 7, using Equation (6) simplified to the form (13), with the data from Table 9.

**Table 8.** Data for calculating the a value (air changing rate h<sup>−</sup>1).



**Table 9.** Data for calculating the *k* value (deposition rate h<sup>−</sup>1).

From the transient PM2.5 particle concentration curve measured in room 1 on 27 January 2020 (Figure 9), the curve segment Δ*t = t − t0* of the particle concentration rebound curve was selected. *Cin,0* = 4.5 × 103 p/cm3 was determined for time *t0*, and *Cin* = 30.7 × 103 p/cm3 was determined after 60 min at the break point *t* of the particle level restoration curve. Based on Equation (13), the value (*a+k*) = 1.90 was determined, which gives, after subtracting *a* = 0.28, *k* = 1.62.
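The arithmetic of this step can be checked in a few lines; with the rounded concentrations quoted above, Equation (13) gives a decay term of about 1.92 h−1, close to the reported 1.90 h−1 (the small difference presumably reflects rounding of the quoted concentrations):

```python
import math

# Values quoted in the text (x10^3 p/cm^3; dt = 60 min = 1 h):
c_in0, c_in, dt_h, a = 4.5, 30.7, 1.0, 0.28

a_plus_k = math.log(c_in / c_in0) / dt_h  # Equation (13), rearranged
k = a_plus_k - a
```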

Calculation of the value of *k* (deposition rate h−1) from Equation (13)

$$\ln\left(\frac{\mathbf{C}\_{\text{in,0}}}{\mathbf{C}\_{\text{in},t}}\right) = -(a+k)\Delta t$$

In the literature, however, there is a large discrepancy in the values of the deposition rate factors for the size range of PM2.5 particles, which are reported with the values of the air exchange rate.

In the range of *a* from 0.45 to 0.61 h−1, i.e., in conditions comparable to our air change rate *a* = 0.28 h−1, the value of *k* = 2.01 ± 1.11 h−1 was given by He, Morawska and Gilbert [25]. The authors decided to compare the obtained value with others, e.g., Chao et al. [62] (*k* = 0.53 h−1 to *k* = 1 h−1). We chose our value of *k* = 1.62 h−1 for further calculations because it is close to that given by [25] and because a similar value of *k* = 1.3 h−1 for PM2.5 had previously been obtained by the team of Thatcher et al. [65] from the Lawrence Berkeley National Laboratory.

The third step is to calculate penetration *P* from Equation (16) with the data from Table 10.

$$P = \left(1 + \frac{k}{a}\right) \frac{\mathcal{C}\_{\rm in0}}{\mathcal{C}\_{\rm out}}$$

**Table 10.** Data for calculating penetration factor *P*.


Thus, after converting the values of *Cin,0* = 4.5 × 103 p/cm3 and *Cout* = 52.0 × 103 p/cm3, and substituting all parameter values into Equation (16), the value *P* = 0.61 is obtained. From the course of the measurement cycle (Figure 7a), it can be seen that the actual value of *P* will be higher, because the concentration level of *Cout* was constantly decreasing during the particle concentration rebound curve measurement.

The value is verified by the data collected by Chao et al. [62] who, based on their research, compiled the values of *P* factors for particles in wide particle size ranges.

For PM2.5, it is possible to determine the *P* value by interpolating the *P* factor values from the chart prepared by Chao et al. [62] for the adjacent particle size ranges; the interpolated value *P* = 0.63 is roughly consistent with our value *P* = 0.61.

According to a recent study by Yu et al. [38], under various pressure differences the penetration factor *P* of 0.25–2.5 μm particles is close to 1 and increases with the pressure difference between the two sides of the gap. Chen and Zhao [52] compared four penetration factor models, which give much lower *P* values for PM2.5.

#### **5. Conclusions**

Predicted infiltration factors *Finf* in urban residences were collected, tested and classified by Baxter et al. [66] from the Harvard School of Public Health in Boston. They studied various factors related to occupant behaviours and home characteristics, which might influence ventilation patterns or infiltration factors. These include age of construction, housing type (multi- vs. single-occupancy homes), floor levels, opening windows, seasonal differences in *Finf* and air conditioning or HVAC system use.

The transport of outdoor particles across the building envelope (i.e., penetration) is an important physical factor that contributes to the particle concentration and size distribution inside buildings. The penetration factor can be taken as unity when doors and windows are open, but under other conditions the *P* values ought to be measured.

Outdoor pollution sources, rather than indoor pollution sources, are the principal contributors to indoor particulate pollution in modern office buildings. When there are no substantial indoor particle pollution sources, 60–70% of the indoor PM2.5 mass concentration comes from outdoor particulate pollution, according to the cited studies.

This paper presents a modified method of determining the parameters of the process of penetration of PM2.5 particles through leaks in the external walls of a building, using the processes of natural recovery of the particle concentration levels. So far, dynamic tests of PM infiltration have been performed using the blower-door depressurization procedure, introducing a power-law pressure disturbance in the air flow (flow proportional to the *n*-th power of the pressure difference) by increasing the pressure gradient in the I/O system. The modified method instead relies on measuring the *Cin* concentration during the process of equalizing the PM2.5 concentration levels inside and outside the building.

The preliminary results of the penetration factors determined by this method are consistent with the *P* factor values obtained so far in the literature for this size group of dusts. The *P* factor value can serve as a reliable parameter classifying buildings in terms of their resistance to dust infiltration into the interior, which may have a significant impact on indoor air quality. The results of the preliminary experiments have also led to the conclusion that a system for measuring the PM2.5 concentration directly outside the window should be designed, protected against disturbances caused by the operation of the air cleaner in the interior and against changes in the meteorological parameters.

The authors have tried to describe the current state of knowledge in the field of research on interior dust pollution and to establish a roadmap for launching and improving measurement and test methods so that, in the future, a dynamic indoor particle model can describe indoor air quality where exposure to PM2.5 takes place. This model will be further tested using the Monte Carlo method before being included in the combined IAQ model (Piasecki, Kostyrko [67,68]).

**Author Contributions:** Conceptualization, K.B.K. and D.B.; methodology, K.B.K. and D.B.; validation, K.B.K. and D.B.; formal analysis, K.B.K. and D.B.; investigation, K.B.K. and D.B.; resources, K.B.K. and D.B.; data curation, K.B.K. and D.B.; writing—original draft preparation, K.B.K. and D.B.; writing—review and editing, K.B.K. and D.B.; visualization, K.B.K. and D.B.; supervision, K.B.K. and D.B.; project administration, K.B.K. and D.B.; funding acquisition, K.B.K. and D.B. Both authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data supporting reported results can be found in the Thermal Physics, Acoustics and Environmental Protection Department, Instytut Techniki Budowlanej, Warsaw, Poland.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

