**Sustainable Development of Energy, Water and Environment Systems (SDEWES 2022)**

Editors

**Oz Sahin • Russell Richards**

Basel • Beijing • Wuhan • Barcelona • Belgrade • Novi Sad • Cluj • Manchester

*Editors*

Oz Sahin, Griffith University, Southport, Australia

Russell Richards, The University of Queensland, Brisbane, Australia

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Sustainability* (ISSN 2071-1050) (available at: https://www.mdpi.com/journal/sustainability/special_issues/4G5TJ52Q70).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

Lastname, A.A.; Lastname, B.B. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-9676-1 (Hbk); ISBN 978-3-0365-9677-8 (PDF); https://doi.org/10.3390/books978-3-0365-9677-8**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license. The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND) license.

## **Contents**



## **About the Editors**

### **Oz Sahin**

Dr. Oz Sahin is a systems modeller with specialised expertise in a range of modelling approaches to issues in natural resource management, risk assessment, and climate change adaptation. His research interests include sustainable built environment; climate change adaptation risk assessment; ecosystem-based approaches; integrated water, energy, and climate modelling; integrated decision support systems using coupled system dynamics and GIS modelling; Bayesian network modelling; multiple criteria decision analysis; and operational research methods and models. He is currently working as a senior modeller at the University of Queensland (School of Public Health, Faculty of Medicine) and Griffith University (Griffith Climate Change Response Program and Cities Research Institute).

### **Russell Richards**

Dr. Russell Richards is a Senior Lecturer at University of Queensland Business School. He is a systems modeller with 16 years of experience in developing and applying models to social–ecological settings in Australia and internationally. His expertise includes applying systems thinking and system dynamics modelling for socioecological systems, health modelling (including one-health), risk assessment using Bayesian network models, and developing apps for research and teaching. He has an extensive research portfolio that includes working with decision makers and stakeholders from a range of industries and disciplines including government, health, NPOs, tourism, fisheries, and recreation. Russell currently lectures in system dynamics at the University of Queensland and has previously lectured in risk assessment and decision analysis using Bayesian networks.

### *Editorial* **Sustainable Development of Energy, Water and Environment Systems (SDEWES 2022)**

**Oz Sahin 1,2,3,4,\*, Russell Richards 5,6 and Ioana C. Giurgiu 7**


The findings of the most recent Intergovernmental Panel on Climate Change (IPCC) report highlighted significant gaps in the targeted global reductions in greenhouse gas (GHG) emissions [1]. Implementation of sustainable development is at the core of achieving these targets and mitigating anthropogenic environmental impacts. At the same time, sustainable development, as defined by the UN 2030 Agenda [2] and its associated sustainable development goals (SDGs), requires a holistic, interdisciplinary approach, which is not without its challenges. Indeed, studies tracking global progress towards meeting the 2030 SDG targets [3] show similar gaps in the implementation of sustainable solutions, with several SDG targets described as 'off track'. There is therefore an urgent need for a unified, practical, interdisciplinary approach to implementing sustainable development.

In this context, since 2002, the Sustainable Development of Energy, Water and Environment Systems (SDEWES) Conferences have provided a platform for interdisciplinary discussion and advancement of sustainable solutions. Covering a broad range of topics such as food, water, energy and waste cycles, the SDEWES Conferences Special Issue (SI) series provides an outlook on the key contributions and scholarly work presented at the SDEWES Conferences each year. The 2021 SI showcased seven selected papers, providing insights into the topics of biorefineries, sustainable use of organic materials for various applications and waste management in industrial processes (Contributions A1–5), energy-positive buildings (Contribution A6) and methods for evaluating economic aspects of energy markets (Contribution A7).

This SI provides further depth and broadening of the topics addressed, bringing together fifteen selected papers presented during the 2022 SDEWES Conference series at the 5th South East European, 3rd Latin American and 17th Conferences on SDEWES in Vlorë (Albania), São Paulo (Brazil) and Paphos (Cyprus). The selected papers tackle a broad range of topics from circular governance models to education in relation to sustainable practices, impacts of current legislative frameworks targeting energy efficiency, built environment energy performance, bio-based industrial applications, transport and sustainable energy services, production, supply and infrastructure maintenance strategies. This SI, therefore, provides an interdisciplinary and holistic perspective on sustainable development solutions, with several of these papers highlighting interrelations between various fields and aspects of sustainability.

Combining both top-down and bottom-up approaches, this SI opens with a proposal for a flexible circular governance model assessed through a territorial circularity index

**Citation:** Sahin, O.; Richards, R.; Giurgiu, I.C. Sustainable Development of Energy, Water and Environment Systems (SDEWES 2022). *Sustainability* **2023**, *15*, 15805. https://doi.org/10.3390/ su152215805

Received: 1 November 2023 Accepted: 7 November 2023 Published: 10 November 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).


and based on the analysis of current approaches and challenges for implementing circular economy principles at the urban scale (Contribution B1). Based on a review of the relevant literature surrounding the assessment and implementation of circularity indices, Rangoni Gargano et al. (Contribution B1) highlight the key challenges (flexibility and coherent policies) and enablers (use of synergies and circular approaches to create financial incentives) of transitioning from the typical linear urban production and waste models to circular, sustainable urban models. The proposed governance model and circularity index provide a step-by-step practical approach for assessing barriers and enablers through four identified key areas: material flows, loops, sharing and competitiveness. The theoretical governance model was tested via questionnaires aimed at a range of stakeholders in South Tyrol, Italy, and showed excellent potential to capture the complexities of circular model implementation and inform territorial-level decision making to facilitate transitioning to circular models.

The relationship between governance, legislation and sustainable implementation is further explored through a series of five papers focusing on the built environment, adding detail to the discussion prompted by Rangoni Gargano et al. (Contribution B1) around the significant impact and potential of urban areas in enabling sustainable development.

The first of the five papers (Contribution B2) explores the impacts of the European Energy Performance of Buildings Directive (EPBD), focusing on Lithuanian multi-apartment blocks in relation to the implementation and effects of local and European legislation aiming to reduce energy consumption and CO2 emissions. Monstvilas et al. perform a statistical analysis of registered Energy Performance Certificates (EPCs) to provide insight into the effectiveness of the directives, as well as their impact on building features and energy performance, tracing the historical evolution of these parameters alongside increasingly stringent legislative requirements. The results indicate that in Lithuania, such policies are successful in improving the energy performance of buildings. Several improvement areas (e.g., hot water production, lighting, and electric appliances) that could further increase energy savings are also identified. While the study focuses on the Lithuanian context, the building typologies discussed, as well as the process of EU directive adoption at the national level, are similar across a number of European countries. The results are thus relevant to a broader range of contexts and provide insight into the performance of the EU approach.

Next, Caruso et al. (Contribution B3) propose a new design for a concrete masonry unit with embedded insulation. The thermal performance of the proposed masonry module is assessed through theoretical U-value calculations (the ISO simplified and detailed methods) and in situ monitoring of actual performance. Although the module exceeds the performance of conventional construction systems and Maltese building regulation requirements, testing shows a significant gap between the theoretical and actual thermal performance of the module. This highlights the broader issue of differences between theory and practice, where built applications often under-perform due to a multitude of factors such as installation quality, and it identifies an area of improvement for legislation and the associated calculation methods that affect the energy performance of the built environment.
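The simplified ISO method referenced above derives a construction's U-value from the sum of its layer thermal resistances, U = 1/(R_si + Σ d_i/λ_i + R_se). A minimal sketch of that calculation follows; the wall build-up and material conductivities are illustrative assumptions, not values from the paper:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """Thermal transmittance (W/m^2.K) from summed layer resistances:
    U = 1 / (R_si + sum(d_i / lambda_i) + R_se)."""
    r_total = r_si + sum(d / lam for d, lam in layers) + r_se
    return 1.0 / r_total

# Illustrative wall build-up as (thickness m, conductivity W/m.K):
# 20 mm render, 200 mm masonry, 50 mm insulation -- assumed values
wall = [(0.02, 0.8), (0.20, 1.0), (0.05, 0.035)]
print(round(u_value(wall), 3))  # W/m^2.K
```

In-situ monitoring, by contrast, estimates U from measured heat flux and surface temperatures, which is one reason theoretical and measured values can diverge.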

Delving deeper into the links between urban fabric thermal performance and its effect on occupants and energy consumption, Liaw et al. (Contribution B4) explore the relationship between thermal comfort, energy consumption and energy poverty in the context of Brazil's social housing initiatives, which strive to provide shelter and cover housing shortages for low-income families. This paper presents a case study of a social housing block in Brazil which, as a typical low-cost project, was built to lower energy performance standards, leading to a lack of thermal comfort in the building. As a result, residents rely on mechanical ventilation and other appliances to achieve some level of thermal comfort. The implicit impact is that some low-income families cannot afford either the equipment or the resulting increase in energy bills, therefore becoming exposed to health risks due to the hot and humid local climate. At the same time, the use of additional mechanical cooling and ventilation solutions increases both energy consumption and GHG emissions for these projects. Using a system dynamics model, the authors explore various passive ventilation solutions and other low-cost design changes (e.g., use of insulating materials), showing that increased insulation and passive ventilation solutions (increased window size and opening typology) can drastically increase thermal comfort. In a broader context, this paper highlights the efficiency and potential of low-cost design solutions and draws attention to social housing as a key urban typology that may aid in reducing overall urban energy consumption and GHG emissions and increase thermal comfort.

Similarly, Mangan (Contribution B5) draws attention to retrofitting as a necessary step in improving the performance of the built environment. While energy performance legislation setting minimum targets can have positive impacts on the overall performance of the building stock, given that most urban contexts are dominated by existing, often historical stock, retrofitting plays a key role in holistically achieving future energy performance targets. This paper proposes a combined parametric and multiple-criteria decision analysis workflow for retrofit, which aims to increase the implementation and feasibility of retrofit solutions by providing key decision makers (designers and homeowners) with insight oriented towards performance rather than minimum building code requirements. On the basis of a case study in Istanbul, Turkey, the author proposes a three-step workflow: identification of key design parameters and performance indicators; performance analysis and generation of candidate retrofit combinations; and a multi-objective analysis selecting an optimal solution that takes into account the benefits and trade-offs between primary energy consumption savings, life-cycle cost savings and payback periods.
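The final selection step can be pictured as a weighted-sum multi-criteria ranking over candidate retrofit packages. The sketch below is a generic illustration of that idea, not the author's actual method; the option names, performance figures and weights are invented:

```python
# Hypothetical retrofit packages:
# (name, energy saving %, life-cycle cost saving %, payback years)
options = [
    ("wall insulation",      28, 12,  9.0),
    ("window replacement",   15,  5, 14.0),
    ("insulation + windows", 40, 14, 11.5),
]

def normalize(values, higher_is_better=True):
    """Min-max normalize one criterion to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scale = [(v - lo) / (hi - lo) for v in values]
    return scale if higher_is_better else [1.0 - s for s in scale]

weights = (0.4, 0.3, 0.3)  # assumed stakeholder priorities
energy = normalize([o[1] for o in options])
cost = normalize([o[2] for o in options])
payback = normalize([o[3] for o in options], higher_is_better=False)

totals = {
    o[0]: weights[0] * e + weights[1] * c + weights[2] * p
    for o, e, c, p in zip(options, energy, cost, payback)
}
best = max(totals, key=totals.get)
print(best)
```

Because each criterion is normalized before weighting, the ranking reflects relative trade-offs rather than raw magnitudes, which is the essence of a multi-objective comparison.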

The last built environment paper provides further detail on how improved energy efficiency can be achieved using Combined Heat and Power (CHP) and Combined Cooling, Heating, and Power (CCHP) systems in the hospitality sector in Malta. Magro and Borg (Contribution B6) examine a range of Maltese hotel typologies to determine the technical and economic feasibility of improving energy efficiency through CHP and CCHP. Their holistic perspective takes into account not only the technical and energy demand features of the different hotel typologies in relation to the implementation of CHP and CCHP but also the impact of broader legislative and market trends (e.g., feed-in tariffs, available grants and other incentives). Based on the results of a series of simulations, the authors highlight that the feasibility of both co- and trigeneration solutions is highly dependent on financial factors, with CHP providing feasible solutions, especially for four-star hotels, and CCHP not feasible in the current legislative and market context despite its high potential and technical performance. The authors discuss how implications and conclusions may differ between hot and cold climates, highlighting the correlations between feasibility and policy, incentives and market approach (e.g., feed-in tariffs). Although this paper focuses on the Maltese context, it provides a clear example of the impact of policy and market approach on the adoption, feasibility and development of sustainable energy generation technologies.

While the built environment is highly representative in terms of establishing primary demands, larger-scale service provision, including production, supply and waste-cycling infrastructure, plays a key role in achieving sustainable future development. The following six papers therefore focus on industrial production processes, services and infrastructure.

Linking with the discussion around the built environment and domestic energy consumption in the previous section, Aranda et al. (Contribution B7) explore the possible application of energy service provision and performance contracts in a domestic context. This paper proposes several innovative methods of applying pay-for-performance (P4P) approaches for domestic customers to improve benefits for both service providers and users. The authors highlight low-cost smart home retrofits as a means of achieving the required levels of monitoring and data collection, as well as the implementation of artificial intelligence algorithms to optimize service provision, management of peak demands through flexible schemes and end-user behaviours. While such approaches have been used in the past for commercial applications, variability of usage and challenges in data collection and the installation of monitoring infrastructure have been barriers to the emergence of similar initiatives in the domestic sector. The authors suggest that the application of P4P schemes in this context can improve overall energy performance while creating a new market sector and thus an incentive for energy service providers.

Aside from improvements in service provision through monitoring and optimization of energy consumption and resource allocation, the exploration and improvement of technologies harnessing waste and using alternative sources for energy production could provide key alternatives to fossil fuels. Based on the case of the German–Jordanian Water-Hydrogen-Dialogue project, Adisorn et al. (Contribution B8) explore the relations between the water, wastewater, energy and hydrogen production sectors to identify opportunities for hydrogen production in water-scarce contexts like Jordan. Although hydrogen has a variety of applications (e.g., mobile fuel cells, excess energy storage, steel manufacturing), its production process is energy intensive, with 'green hydrogen' production depending on water availability. Using a combination of desk research and expert workshops, this paper provides a systematic analysis of water–hydrogen relationships, highlighting key water feedstocks deriving from water and wastewater treatment processes and their potential use and reuse in the production of hydrogen. The findings are transferred to the specific context of Jordan, aiming to inform policy and decision making by highlighting key risks and opportunities. In a broader context, however, the research is relevant for both resource use management in water-scarce contexts and process optimization in general.

The following two papers focus on the optimization of lignin processing for bio-based refineries, which have numerous applications including alternative (biofuel) energy production and other industrial uses. Adamcyk et al. (Contribution B9) explore optimization through high-temperature lignin separation during ethanol organosolv pre-treatment in biorefineries, demonstrating improved lignin production and overall yields. A series of experiments was used to analyze the separation of extract and residual biomass at different temperatures after pre-treatment. The results showed that the higher lignin concentrations at high temperatures led to a 46% improvement in the yield of solid lignin without impacting lignin purity. Optimizing lignin production through this method thus has a high potential to improve the efficiency and economic viability of lignocellulose biorefineries.

Providing insight into potential applications and optimization of industrial lignin processes for the chemical and cosmetic industries, Tomasich et al. (Contribution B10) explore the use of colloidal lignin particles (CLPs) as a sustainable alternative to fossil-based and synthetic ingredients. Several experiments were conducted to produce and characterize CLPs from different bulk lignins and to assess the potential of CLPs as emulsifiers in Pickering emulsions. The production process succeeded both in obtaining CLPs from a variety of bulk feedstocks and in stabilizing Pickering emulsions through the use of CLPs, highlighting an opportunity to advance the transition from a fossil-based to a bio-based economy, an essential step towards future sustainable development.

Given that sustainable economies and their associated industrial processes, supply strategies and services are still at an emergent stage, the improvement and optimization of existing infrastructure, and especially the mitigation of the risks and lingering effects of the fossil fuel economy, are key to achieving sustainable futures. The following two papers provide pertinent examples of how such approaches can be managed, focusing on the case of in-line inspection robots for gas and oil pipelines. A key implication of developing more efficient in-line inspection protocols is the early warning and prevention of potential environmental hazards due to pipeline malfunctions.

The first paper presents a comprehensive literature review and analysis of existing in-line inspection (ILI) tools and technologies for steel oil and gas pipelines. Parlak et al. (Contribution B11) review the key types of ILI tools providing a comparison of their associated sensor types, capabilities and limitations. ILI tools are classified according to pipeline structure and context, capability and application areas and assessed through comparison of advantages and disadvantages of various combinations of sensors. Findings suggest that although other tools are still more prevalent in today's market, due to their numerous advantages, electromagnetic acoustic transducer technologies are likely to dominate the smart ILI tool market in the future. Additionally, the authors discuss the positive environmental impact of ILI tools, noting significant reductions in hazardous incidents correlated with the gradual introduction of ILI tools.

The second paper (Contribution B12) presents a new approach towards wirelessly controlled ILI robots, which can aid in the early warning and prevention of ecological damage due to malfunctions along oil and gas transmission and distribution lines. The authors add to existing research, which used the transmission pipe as a conduit to transmit a low-frequency signal over 100 m [4], by proposing and testing an improved system that can be used for complex transmission and distribution networks with various bends and specialized transitions. Based on laboratory and real-world tests, this study demonstrates significant advantages of the proposed system, including the possibility of long-range video transmission and communication, improved communication through the use of low-attenuation-frequency windows and the feasibility of early diagnosis that can reduce incidents with detrimental environmental impact.

The final three papers included in the SI offer an additional human-centric dimension to the previous discussions around optimizations, policy and practical implementation of sustainable solutions, focusing on the impact of cultural factors and education on the adoption and successful implementation of sustainable solutions.

Linking with the previous infrastructure- and industry-focused papers, Iancu et al. (Contribution B13) discuss the impact of cultural factors on the adoption of low-emission passenger cars in EU countries. Through a mix of literature review and data collection, the authors first analyze distribution of high-, medium- and low-emission passenger vehicles across EU countries, followed by a characterization of each country based on six cultural dimensions according to Hofstede [5]. Using multiple regression analysis, the various degrees of low-emission vehicle adoption are correlated with the intensity of the six cultural dimensions, providing insight into the relationship between cultural traits and adoption of more sustainable transportation options. Based on this analysis, specific marketing strategies and considerations for decision and policy makers are discussed for each identified correlation, adding to the available tools that may help to encourage adoption of sustainable solutions.
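The regression step described above fits adoption levels against the six cultural dimension scores. A minimal sketch of such a fit using ordinary least squares follows; the data are entirely synthetic (random dimension scores and a constructed adoption variable), standing in for the paper's actual country-level dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 27  # roughly the number of EU countries

# Synthetic Hofstede-style scores (0-100) for six cultural dimensions
X = rng.uniform(0, 100, size=(n, 6))

# Synthetic low-emission-vehicle adoption share, driven here by
# dimension 0 (positively) and dimension 3 (negatively), plus noise
y = 30 + 0.2 * X[:, 0] - 0.1 * X[:, 3] + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef.round(2))  # intercept, then one coefficient per dimension
```

The signs and magnitudes of the fitted coefficients are what link each cultural dimension to adoption intensity; in practice one would also examine significance levels before drawing conclusions.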

Complementing cultural factors, Rosi et al. (Contribution B14) focus on the possible impacts of education systems on the adoption and implementation of sustainable solutions, exploring the relationships between logistics industry education, growth and sustainability in the context of the Middle East (ME). This study developed a novel conceptual framework to help analyze data and identify keywords relating to the integration of sustainability into logistics and supply chain management-related study programs. Correlation analysis was conducted to identify links between sustainability integration and the sustainability and logistics performance indices. The results of the study indicate varied conditions across the 15 ME countries analyzed, with some areas focused on efficiency more than sustainability and an overall lack of integration of topics such as the circular economy and corporate social responsibility. Although no correlation was established between sustainability curriculum integration and country-wide sustainability indices, given the high impact of education curricula through their influence on graduates and the future workforce, the conceptual framework and analysis method presented provide a systematic and practical example for identifying curriculum areas in need of improvement.

This SI concludes with a paper by Abina et al. (Contribution B15) presenting the design and testing results of a computer-based decision support system for monitoring sustainability-related competencies in higher education institutions. The result-oriented engagement system for performance optimization (RESPO), which collects data on required competencies and available educational programs, is adapted from a business-oriented application to higher education institution contexts. Through practical trials in higher education institutions, the interface successfully underwent initial testing and validation, with further testing and development underway. A key finding was the identification of the need to integrate key competencies in relation to educational programs, employer needs and international and national strategies. Overall, the presented RESPO system shows excellent potential as a tool that uses learning analytics and competency monitoring to inform decision making in higher education, as well as to improve competency levels and the adoption of sustainable approaches.

This SI of *Sustainability* presents an overview of selected papers submitted to the SDEWES Conference series in 2022. These papers address various facets and scales of sustainable development, providing insight into the linkages between different sectors, disciplines and policies which are vital to ensuring sustainable futures. Practical examples of new methods and approaches for the future development of circular and bio-based economies, energy-efficient buildings, and the optimization of services, supply chains, infrastructure, and cultural and educational aspects are included alongside strategies for retrofitting, improved management and mitigation of the potentially hazardous effects of existing infrastructure. This SI and its selected papers respond to the urgent need for the interdisciplinary knowledge development, understanding of interlinkages and adoption strategies required to address the growing climate crisis and accelerate the implementation of sustainable alternatives.

Future SDEWES conferences will continue to provide a platform for interdisciplinary discussion and dissemination of new research methods and findings on sustainability. Readers are referred to the International Centre for Sustainable Development of Energy, Water, and Environment Systems (SDEWES) for information on the upcoming events [6].

**Author Contributions:** Conceptualization, O.S., R.R. and I.C.G.; writing—original draft preparation, I.C.G.; writing—review and editing, O.S. and R.R.; supervision, O.S. and R.R.; project administration, O.S. and R.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **List of Contributions A: SDEWES 2021**


#### **List of Contributions B: SDEWES 2022**


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **A Feasibility Study on CHP Systems for Hotels in the Maltese Islands: A Comparative Analysis Based on Hotels' Star Rating**

**Bernice Magro and Simon Paul Borg \***

Department of Environmental Design, Faculty for the Built Environment, University of Malta, MSD 2080 Msida, Malta

**\*** Correspondence: simon.p.borg@um.edu.mt; Tel.: +356-23402870

**Abstract:** In Europe, the energy consumed for heating and cooling purposes by the hospitality sector is significant. In island economies such as that of the Mediterranean island of Malta, where tourism is considered essential to the local economy, energy consumption is perhaps even more significant, and energy-efficient systems, or the use of renewable energy, are often listed as possible solutions to counter this. Based on this premise, the research contained in this paper presents an investigation into the technical and financial feasibility of using Combined Heat and Power (CHP) and Combined Cooling, Heating, and Power (CCHP) systems for the hospitality sector in Malta. Using a supply–demand design methodology, the research made use of the software package RETScreen to model the electrical and thermal demand of a number of hotels ranging from 3- to 5-star hotels. Based on these modelled hotels, different scenarios were simulated to analyze the technical and financial implications of installing a CHP system in these modelled hotels. A number of parameters, including thermal size matching, the presence of financial grants, electricity tariffs, feed-in tariffs, and fuel prices, were tested for a total of 144 scenarios. Results showed that the parameters having the highest impact were those of a financial nature. Specifically, the study showed that the 4-star hotels considered were the hotels which would benefit the most from having such systems installed.

**Keywords:** combined heat and power; hospitality; feasibility; sensitivity analysis; simple payback

**1. Introduction**

The Energy Performance of Buildings Directive (2012/27/EU) [1], and subsequent revision (2018/844/EU) [2], highlight the fact that around 40% of the EU's final energy consumption is used for building space heating and cooling.

Hotels are no different from any other building in so far as energy requirements go; they require energy commodities such as space heating and cooling, domestic hot water, and electricity, and often this energy requirement is significant. In fact, given the building typology and the type of activity going on in hotels, various authors have highlighted how, compared to other buildings, hotels are significant energy consumers. Pérez-Lombard et al. [3], for example, highlighted how in 2003 the energy consumed by hotels as a percentage of the total energy consumed by the commercial sector varied between a minimum of 14% in the USA and a maximum of 30% in Spain. Likewise, Smitt et al. [4] describe hotels as very energy-intensive buildings whose market, at least until the onset of COVID-19 in 2020, experienced significant annual growth of between 7% and 13%.

In response to this, on a national level, various countries have placed emphasis on reducing energy dependency, especially where heating and cooling are involved, and a number of legislative, technical, and academic documents encourage the introduction of measures that promote energy-efficient systems, such as cogeneration installations as an alternative to traditional heating and cooling.

The Mediterranean Islands of the Maltese Archipelago are no different from any other developed country, and the fact that the country heavily relies on tourism [5] makes this

**Citation:** Magro, B.; Borg, S.P. A Feasibility Study on CHP Systems for Hotels in the Maltese Islands: A Comparative Analysis Based on Hotels' Star Rating. *Sustainability* **2023**, *15*, 1337. https://doi.org/ 10.3390/su15021337

Academic Editors: Oz Sahin and Russell Richards

Received: 17 December 2022 Revised: 2 January 2023 Accepted: 9 January 2023 Published: 10 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

aspect even more important. The National Energy and Climate Plan of Malta, for example, states that Malta needs to achieve an energy saving of 0.24% of the annual final energy consumption each year until 2030 [6]. As is to be expected, in the hospitality sector heating and cooling are the predominant energy loads, with the plan highlighting that the main consumer of fuel-based space heating is the hospitality sector.

Based on this context, the scope of this paper is to present a holistic feasibility study on the use of Combined Heat and Power (CHP) and Combined Cooling, Heat and Power (CCHP) systems in hotel buildings in Malta. Contrary to most studies, however, rather than focusing on one type of hotel and its specific energy demands and commodity costs, this study takes a more high-level approach to cover a wider spectrum of hotels with their energy demands and boundary conditions. The objective of this study is therefore to observe the potential energetic and financial impacts that a CHP and a CCHP system have on a hotel, based on its star rating and, ultimately, on its characteristics.

### **2. The Maltese Hotel Industry, Its Energy Consumption, and the Use of Cogeneration** *2.1. Tourism in Malta—The Hotel Industry and Its Classification*

According to Maltese legislation, Legal Notice 351 of 2012 (Tourism Accommodation Establishments) [7], hotels in Malta are classified into five classes (or stars), with the classification being based on several criteria which licensed hotels must abide by in order to be classified. Such criteria include aspects such as cleanliness, general impression, reception, room equipment, noise control and air conditioning, sleeping comfort, public areas, and facilities. Based on these criteria, each hotel is awarded points and ranked according to the standard achieved. Focusing particularly on the three- to five-star hotel range, for which this research is intended (given that this industry segment accounts for almost 83% of the total beds in serviced accommodation), a 3-star hotel has to attain 250 points, a 4-star hotel 380 points, and a 5-star hotel has to attain and maintain 570 points.
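The points thresholds above can be condensed into a small sketch. The helper function is hypothetical and based solely on the 250/380/570-point thresholds quoted from Legal Notice 351 of 2012; the actual classification also depends on the qualitative criteria listed above.

```python
def star_rating(points):
    """Return the star rating (3-5) a hotel qualifies for based on its
    points score, or None if it falls below the 3-star threshold.

    Thresholds follow Legal Notice 351 of 2012 as quoted in the text;
    only the 3- to 5-star range considered in this study is mapped."""
    if points >= 570:
        return 5
    if points >= 380:
        return 4
    if points >= 250:
        return 3
    return None  # outside the 3- to 5-star range studied here
```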

Based on this classification, and considering only the three- to five-star hotel range, the market size is as shown in Figure 1 [5].

**Figure 1.** Number of Hotels (**left**), Number of Beds (**right**) in the three- to five-star hotel range.

#### *2.2. Energy Consumption in the Hotel Industry in Malta*

Although it is often thought that the energy consumption of a hotel is directly related to its classification, the situation is in fact more complex, and several factors impact the energy consumption of hotels. Such factors often include the occupancy rate, the building typology, and facilities within the hotel such as laundries and spas. Whereas the energy consumption of certain specific hotels can be found, comprehensive studies covering the energy consumption of a range of hotels are rare, often because such studies are expensive and cumbersome to conduct given the huge variety of hotels. In fact, it has only been recently that large hotels in Malta started being required to produce energy audits in line with Legal Notice 196 of 2014 (Energy Efficiency and Cogeneration Regulations) [8]. Since then, a benchmarking scheme has been established for the Maltese hospitality sector, and this has provided good insight into the energy consumption of hotels in Malta. This will be discussed in detail in Section 3.1.

#### *2.3. CHP—Combined Heat and Power*

One of the measures often brought forward to address energy efficiency is the use of Combined Heat and Power. CHP, or cogeneration, is the simultaneous production of thermal and electrical energy, with the former being a utilized form of the waste energy stream of the latter [9]. The system takes the form of a prime mover which produces electricity, from which waste heat can be recovered and utilized for multiple purposes, including water and space heating. With the addition of thermally activated chilling systems, the waste stream from cogeneration systems can also provide space cooling; such systems are then known as trigeneration or Combined Cooling, Heat, and Power (CCHP) systems [10,11].

There are different prime-mover options available on the market for CHP systems, including fuel cells, reciprocating internal combustion engines, microturbines, gas turbines, and steam turbines. Fuel cells are not yet commercially competitive at the scale required, whereas gas and steam turbines are typically only commercially competitive at energy demands above those of a typical hotel [12]. Reciprocating internal combustion engines, on the other hand, are very flexible systems offering a wide range of power capacities, the possibility of recovering heat at different locations and temperatures, and the possibility of using different types of fuel [12]. Based on this premise, and as will be discussed later, the prime mover considered for the purpose of this study was the reciprocating internal combustion engine.

Other than the energetic and environmental performance, the principal benefit of using a CHP system is economic, either through saved fuel costs [13] or through fiscal incentive schemes such as electricity feed-in tariffs (FITs) [14,15]. Various case studies have shown that the savings generated over the lifetime of such systems are enough to offset the total lifetime cost, including capital, running, and maintenance costs, and still leave a sufficient profit margin to justify the investment [16–19]. Generally, most systems are used to reduce electricity costs, either as a base-load system or as an electricity peak-shaving device, whichever makes most economic sense [16]. CHP systems also offer increased power reliability, since power interruptions have less of an impact [11]. Since CHP systems are more energy-efficient, a reduction in emissions such as CO2, NOX, and SO2 per kWh generated is also to be expected compared to systems which generate heat and power separately. Additionally, grid-related benefits can also be obtained, including reduced grid congestion, reduced peak power requirements, and lower transmission and distribution losses [12]. It is therefore no surprise that cogeneration accounts for more than 8% of total electricity generation [20].
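The fuel-cost saving mechanism described above can be sketched as a simple annual balance: avoided grid electricity plus avoided boiler fuel, less the CHP's own fuel and maintenance costs. The function and all figures below are illustrative assumptions for this sketch, not data or a method from the study.

```python
def annual_chp_savings(elec_gen_kwh, heat_rec_kwh, grid_tariff, boiler_eff,
                       fuel_price_per_kwh, chp_fuel_kwh, om_per_kwh):
    """Illustrative annual saving of a CHP system versus separate heat
    and power supply.

    All prices are per kWh; boiler_eff is the efficiency of the boiler
    whose fuel the recovered heat displaces. A single fuel price is
    assumed for both the boiler and the CHP."""
    avoided_electricity = elec_gen_kwh * grid_tariff
    avoided_boiler_fuel = (heat_rec_kwh / boiler_eff) * fuel_price_per_kwh
    chp_costs = chp_fuel_kwh * fuel_price_per_kwh + elec_gen_kwh * om_per_kwh
    return avoided_electricity + avoided_boiler_fuel - chp_costs

# Example with assumed round numbers (100 MWh of electricity, 120 MWh of
# recovered heat, a 0.15 EUR/kWh tariff, an 85%-efficient boiler):
saving = annual_chp_savings(100_000, 120_000, 0.15, 0.85,
                            0.05, 250_000, 0.015)
```

A positive result indicates that the avoided purchases outweigh the CHP's running costs; in the feasibility analysis this annual saving is what the capital cost is paid back against.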

Notwithstanding this, many challenges remain that keep cogeneration and trigeneration units from reaching high market penetration levels. Such challenges are often associated with the financial risk inherent to what is a significant upfront investment, coupled with the requirement for specialized technical know-how to guarantee an optimized design for the specific circumstances in which the investment is being made [21].

#### *2.4. Cogeneration in the Hotel Industry*

The literature is full of examples of cogeneration and trigeneration systems integrated in hotels, either through simulation or as field studies of real case scenarios. Some, such as Salem et al. [13] and Rotimi et al. [22], use an existing hotel for the optimum sizing of a CHP system using software. Others, such as Galvao et al. [23] and Smith et al. [24], are more specific in their analysis of cogeneration in the hotel industry, focusing on bioenergy and thermal storage, respectively.

The focus, however, is always on individual case scenarios, targeting one hotel with its specific energy characteristics. The reason for this is that, as pointed out earlier, apart from the climatic conditions, several factors impact the amount of energy a hotel uses. These include physical parameters, such as the building footprint, the construction typology, and the efficiency of the installed systems, and operational parameters, such as the presence of catering facilities, laundries, swimming pools and spas, seasonality, and occupancy levels. All of these contribute differently to the heating and cooling demand of a hotel.

Given this diversity, matching the CHP to the energy demand of a specific hotel is often a laborious task. Depending on whether they are thermally or electrically matched, CHP systems can perform at a loss if they are oversized, as there would be significant periods during which they would be idle, unless of course heat is dumped or the system is made to work at part-load for a significant portion of the time. Likewise, undersizing a system typically, though not always, does not allow the full potential of a CHP design to be obtained [25]. A financial analysis carried out on a number of UK hotels showed that the payback period for differently sized systems was typically in the order of 2.5 years when the CHP system was appropriately sized, whereas for smaller systems, although less expensive than the larger ones, the payback period was longer due to lower savings [26]. It is, however, only fair to point out that such results have to be seen in the context of the fiscal environment in which the CHP is operating.

Having a well-matched supply–demand CHP system is nonetheless important for the success of any project, and typically detailed studies are required for each specific hotel before any system is financed or given the go-ahead. In this context, research [24] investigating the use of trigeneration systems for the hotel industry in North Cyprus found that sizing the trigeneration system for the minimum base load was more feasible than sizing it for the maximum value of the base electrical load. In the cases investigated where the minimum base load was met, the simple payback period was between one and four years. For the trigeneration systems sized for the maximum electrical load, the simple payback period was between one and six years for systems connected to the grid [26].

#### **3. Methodology**

The paper makes use of a supply–demand design methodology: the thermal and electrical characteristics of a number of representative hotels were first modelled using the software package RETScreen [27], and the results were subsequently used in a series of parametric sensitivity studies conducted to understand the effect of a number of parameters on the feasibility of a CHP or CCHP system deployed in a particular hotel typology.

Given that, as discussed in the introduction, the purpose of the analysis was a holistic, high-level assessment of the feasibility of CHP and CCHP systems in different hotel typologies in Malta, the software RETScreen was chosen, as it is a purposely designed tool aimed specifically at assessing the feasibility (both technical and financial) of clean or alternative energy technologies, such as cogeneration [28], under a variety of scenarios.

#### *3.1. Modelling the Demand*

The methodology made use of data measured and collected from six different hotels operational in the Maltese Islands. The data set was derived from surveys conducted for the BEST (Benchmarking Energy and Sustainability Targets) program [29], a benchmarking program initiated in 2016 aimed at monitoring energy consumption in hotels and putting forward simplified calculations for energy savings based on the use of energy-efficiency technologies. In this regard, Figure 2 shows a part-screenshot of some of the technologies which can be studied within the BEST platform, whilst Figure 3 shows the portal where the data can be inserted by the user.


**Figure 3.** BEST platform showing the energy input portal.

The data utilized in this study primarily consist of monthly electrical energy, water, and fuel consumption data, together with the relative guest occupancy. The monthly energy data are subdivided into energy consumed, renewable energy generated, and heat produced. Other operational data, such as footprint and conditioned area, were also collected through this program. To create a sufficiently diverse spectrum of hotels, the data of six different hotels were chosen: two with a 5-star rating, two with a 4-star rating, and two with a 3-star rating. Since the study is related to CHP systems, the heat demand was considered based on the fuel consumed for heating. Therefore, the two 5-star hotels chosen had disparate fuel consumptions, with one higher than the other. The same reasoning was applied for the choice of the 4- and 3-star rated hotels. Although the physical size of the hotel was not the only factor impacting the electrical and/or fuel consumption, as the facilities offered also played a role, it was noted that the hotel with the higher energy consumption in each category was always practically double in size in terms of heated floor area (m²). The data collected and used for the simulations are shown in Table 1 below. Given the financial nature of the data, only a limited amount of data is disclosed in this paper.


**Table 1.** Raw data used for the simulations.

#### *3.2. Parameters Investigated*

Based on their technical and financial importance in conducting such a feasibility study, five parameters were chosen for this study, namely:


It needs to be noted that although the prices and tariffs quoted for fuel and electricity were the ones in force in 2020, these have remained largely unaltered over the last two years, as the Government of Malta decided to financially absorb and cap much of the international increases in energy prices. Therefore, notwithstanding some slight increases, prices remain well within the ranges investigated in the study.

All simulations were carried out for the 6 hotels to gauge the feasibility of using such systems. Figure 4 shows the parameters and variables considered, and the different simulation combinations created.

**Figure 4.** Parameters and variables investigated in each simulation.

#### *3.3. Assumptions and Modelling Considerations*

As part of the study, a number of assumptions had to be made. Below are some of the assumptions considered in the modelling and in the simulations carried out:


heating peak loads were assumed to be met using power from the grid, and an additional boiler, respectively.

When modelling the reciprocating engine, four factors were taken into consideration, namely the power capacity (kW), the minimum capacity (%), the engine's heat rate (kJ/kWh), and the heat recovery efficiency (%). The latter three were kept constant, while the power capacity, being one of the investigated parameters, was varied between oversized, matched, and undersized.

The minimum capacity is the lowest power at which the engine can operate. A value of 25% was chosen, this being the typical minimum capacity for reciprocating engines. This value is important since, if the system cannot be turned down in line with the heating needs, the electricity would either have to be sold to the grid or the CHP turned off and intermediate systems used. The heat rate specifies the fuel consumed per unit of power output and was set at 9500 kJ/kWh, a typical value across different reciprocating engine sizes [34]. The heat recovery efficiency specifies the amount of available heat which can be recovered by the proposed system. Not all the heat produced can be recovered, as the recovery temperature is sometimes too low; the heat recovery efficiency was therefore set to 75%. The system availability was set to 95% to account for annual maintenance [35].
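The monthly heat-load-following logic implied by these parameters (engine output capped at capacity, engine assumed 'Off' below the minimum capacity) can be sketched as follows. The engine parameters match the text; the assumption that all non-electrical fuel energy is available for recovery before the 75% recovery efficiency is applied, and the constant average operation over a 730-hour month, are simplifications of this sketch, not features of the RETScreen model.

```python
# Engine parameters as stated in the text.
HEAT_RATE_KJ_PER_KWH = 9500   # fuel input per kWh of electricity generated
HEAT_RECOVERY_EFF = 0.75      # share of available waste heat recovered
MIN_CAPACITY_FRACTION = 0.25  # engine cannot operate below this load
KJ_PER_KWH = 3600

def dispatch_month(heat_demand_kwh, engine_capacity_kw, hours=730):
    """Return (electricity_kwh, heat_recovered_kwh, fuel_kwh) for one month
    under a heat-load-following strategy.

    The engine runs at whatever output meets the heat demand, capped at
    capacity; if the implied output falls below the minimum capacity,
    the engine is treated as 'Off' for the month."""
    # Recoverable heat per kWh of electricity: fuel input minus the
    # electrical output, scaled by the recovery efficiency (simplified).
    waste_heat_per_kwh_e = (HEAT_RATE_KJ_PER_KWH - KJ_PER_KWH) / KJ_PER_KWH
    heat_per_kwh_e = waste_heat_per_kwh_e * HEAT_RECOVERY_EFF

    elec_needed = heat_demand_kwh / heat_per_kwh_e  # output implied by heat load
    elec_max = engine_capacity_kw * hours
    elec = min(elec_needed, elec_max)
    if elec < MIN_CAPACITY_FRACTION * elec_max:
        return 0.0, 0.0, 0.0                         # below turndown: engine off
    fuel = elec * HEAT_RATE_KJ_PER_KWH / KJ_PER_KWH
    return elec, elec * heat_per_kwh_e, fuel
```

This also illustrates the oversizing effect discussed in the results: enlarging `engine_capacity_kw` raises the 25% turndown threshold, so months with modest heat demand can flip the engine from running to 'Off'.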

#### **4. Results**

#### *4.1. Energy Results*

For each simulation carried out, the energy and financial performance of the proposed system was evaluated. Table 2 shows the energy analysis for the CHP systems of all the representative hotels considered.

From the values shown in Table 2, it can be observed that the results obtained are indeed affected by the size of the engine chosen. In all cases the efficiency of the CHP system was constant at 81.4%, since the operating strategy was kept constant as heat-load following. However, it can be noted that for *hotel 1*, *hotel 2*, *hotel 5*, and *hotel 6* the matched-size and oversized reciprocating engines deliver the same amount of electricity to the load and consume the same amount of fuel, while for *hotel 3* and *hotel 4*, when the system is oversized, less electricity is supplied to the load and less fuel is consumed. The reason for this can be attributed to the fact that the minimum capacity was set to 25% of the power capacity of the reciprocating engine: if the monthly load is less than the minimum capacity, the model assumes that the system is 'Off' during that period. For the former hotels, this shows that even when the system is oversized, the engine still operates above the specified minimum capacity. The results obtained for the latter two hotels indicate that these hotels are more sensitive to the number of operating hours, and in these cases less heat is recovered.

When the CHP system is undersized, it can be observed for *hotel 1* that more electricity is produced and more heat is recovered than when the system was matched in size. This is likely because a smaller system has a better-matching power-to-heat demand ratio, thus operating for a higher number of hours and eventually producing more energy. The excess heat recovered is used by the hotel, and some heat is still required, so in some time periods the boiler is still needed to reach the peak load. This also applies to *hotel 2*, *hotel 4*, and *hotel 6*. In *hotel 3* and *hotel 5*, when the system is undersized, slightly less electricity is produced and less fuel is consumed compared to the matched-size system.

Finally, it is interesting to observe how in all scenarios there is no electricity exported, meaning that the power-to-heat demand ratio of the hotels is in all cases significantly higher than that which can be provided by the CHP systems considered in the analysis. For warm climates this could be solved either by adding the cooling load to the total thermal load, as will be discussed for the CCHP case, or by adding thermal storage to the system.


**Table 2.** Energy analysis of CHP systems.

#### *4.2. Financial Results*

In terms of financial results, this study utilized the Simple Payback as the assessment method, as it quickly shows when a CHP system would be most feasible to operate and therefore best summarizes the financial performance that can be obtained.

In the analysis carried out in this study, a payback period higher than 30 years was considered not to be feasible, since the lifetime of most CHP systems typically does not exceed that range [36]. To this effect, Table 3 illustrates a heat map showing the results obtained in terms of the payback period in years for the different scenarios investigated.
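The assessment method above can be expressed as a small sketch: a grant-adjusted simple payback, and the feasibility labelling used in the heat map ('NP' for not profitable, and the 30-year lifetime cut-off). The function names and the example figures are illustrative assumptions, not values from the study.

```python
def simple_payback_years(capital_cost, grant_fraction, annual_savings):
    """Simple payback on the grant-adjusted capital cost, or None if the
    system never pays back (annual costs exceed annual savings)."""
    if annual_savings <= 0:
        return None  # 'NP' — not profitable
    return capital_cost * (1 - grant_fraction) / annual_savings

def classify(payback):
    """Feasibility label as used in the heat map: 'NP' when the system
    is not profitable, 'not feasible' beyond the 30-year lifetime
    threshold, 'feasible' otherwise."""
    if payback is None:
        return "NP"
    return "feasible" if payback < 30 else "not feasible"

# Example: a 100,000 EUR system with a 50% capital grant and
# 10,000 EUR/year of savings pays back in 5 years.
label = classify(simple_payback_years(100_000, 0.50, 10_000))
```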



Given the large number of combinations carried out, for reasons of space only the extreme values for each variable are shown. These are presented under three headings: thermally matched CHP system (heading M), oversized CHP system (heading O), and undersized CHP system (heading U). The feed-in tariff is not included, given that for the CHP-only analysis no electricity was exported to the grid. A color-coding system was used to better show the results obtained. The scenarios marked in red indicate that the payback period is too high for the system to be considered feasible (>30 years). These results are marked as (NP), meaning that the system is not profitable: the annual costs incurred are higher than the annual savings generated, and the system would therefore not be feasible. The result boxes marked in green show that the system is feasible (<30 years), with different shades of green indicating for which scenarios the system would be most feasible: the darker the shade of green, the lower the payback period.

Although the study was carried out on only six different hotels, certain trends when laid out in tabular form become quite clear. For this reason, rather than analyzing the numbers for each individual hotel, the general trends which have been observed through the study will be presented. Amongst these general trends, the following observations are perhaps the most important:


Comparing the results on a star-rating basis, it can be observed that of the two 5-star rated hotels, although *hotel 2* has a much smaller CHP system due to a lower heating demand, it shows a higher payback period than *hotel 1* for all CHP sizes investigated. It can therefore be concluded that even if the initial investment is lower, the project may still not be as feasible due to a lower return.

For the 4-star hotels, a low payback period can be observed even at the mid-point electricity tariff of €0.15/kWh, at around 4.7 years for a matched CHP system size, a grant of 65%, and the highest fuel rate. The table shows that the payback period of a CHP system installed in the 4-star rated *hotel 4* remains quite feasible even if the CHP system is oversized. However, similarly to the 5-star hotels, when the grid electricity tariff is at its lowest point, that is €0.09/kWh, the system becomes less feasible, except for the undersized system.

Regarding the 3-star hotels, it can be observed that *hotel 6*, which has a CHP system half the size of that of *hotel 5*, was still less feasible than *hotel 5*. Even when the system was oversized, *hotel 5* was still deemed feasible; however, when no grant was considered in the analysis, the system had a much higher payback period, especially at the higher fuel rates.

#### *4.3. Combined Cooling, Heat, and Power Results*

Based on the fact that the simple payback period results obtained for *hotel 4* were the most promising, *hotel 4* was chosen to be modelled with a complete CCHP system.

When modelling a CCHP system, some minor changes had to be made to the base model to obtain a more realistic CCHP model. Apart from selecting the combined cooling, heating, and power model and setting up the cooling load and the cooling equipment, as explained in the assumptions listed in Section 3.3, the size of the reciprocating engine had to be increased from 40 kW to 301 kW to account for the additional thermal load required to drive the absorption chiller. If an electrical chiller had been chosen instead of an absorption system, the required heat load would have remained the same.
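As a rough illustration of why the engine had to be enlarged, the thermal input an absorption chiller requires can be estimated from the cooling load and the chiller's coefficient of performance. The COP of 0.7 below is an assumed typical value for single-effect absorption chillers, not a figure taken from the study.

```python
ABSORPTION_COP = 0.7  # assumed thermal COP of a single-effect absorption chiller

def chiller_heat_load_kw(cooling_load_kw, cop=ABSORPTION_COP):
    """Thermal input (kW) required to deliver a given cooling load (kW)
    with an absorption chiller of the given COP."""
    return cooling_load_kw / cop
```

Because a thermal COP below 1 means every kW of cooling demands more than a kW of heat, adding the cooling load substantially increases the heat the engine must supply, which is consistent with the large step up in engine and boiler sizes described here.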

Additionally, as indicated in previous sections, the minimum capacity of the reciprocating engine had to be specified. For the CHP systems, the minimum capacity was kept at 25% throughout the simulations; for the CCHP simulations, however, this had to be reduced to 10%. Since the engine was enlarged to cover the thermally driven cooling load, a minimum capacity higher than 10% would have rendered the system non-operational during the shoulder months. In fact, due to the higher power capacity in these modelled scenarios, electricity was exported to the grid, and the feed-in tariff therefore also played a role in these simulations.

In order to satisfy the additional heating load imposed by the absorption chiller, the size of the boiler also had to be increased from 45 kWth to 345 kWth.

#### 4.3.1. CCHP—Energy Results

Table 4 below shows the energy results obtained when adding an absorption system to the originally modelled CHP system. It can be observed that even when the reciprocating engine is undersized by 20%, it still has a higher capacity than that required to satisfy the electricity load (after removing the cooling load).


**Table 4.** Energy analysis for a CCHP system following the heating load feeding *hotel 4*.

#### 4.3.2. CCHP—Financial Results

Table 5 shows the financial results obtained for *hotel 4* equipped with a CCHP system. In this case, only the simulations assuming a 65% grant on the capital cost are considered for the matched-size CCHP system, as shown.


**Table 5.** Financial analysis for a CCHP system following the heating load feeding *hotel 4*.

A general observation from the results obtained is that a CCHP system for the hotel investigated, *hotel 4*, under the proposed financial conditions (and heat-demand matching) is not feasible. In fact, in terms of the simple payback period the system is never profitable, since none of the scenarios investigated returned a payback period shorter than 25 years. Likewise, the Net Present Value is always negative for all the different system sizes. Analyzing the simulations further, for the matched-size system to be feasible, the electricity tariff would have to be in the region of €0.30/kWh and the propane cost at around €0.40/kg. Alternatively, increasing the feed-in tariff beyond the current €0.11/kWh would also make the system feasible.

#### **5. Conclusions**

Typically, a detailed feasibility study needs to be carried out whenever a CHP or CCHP system is being considered as a possible energy-efficiency measure for an energy-demanding and complex building such as a hotel. The scope of this study was to diverge from that basic idea by providing a holistic, high-level approach aimed at identifying general trends in the performance of CHP and CCHP systems in the hospitality sector in Malta. Instead of focusing on just one hotel, this study therefore analyzed a variety of hotels, ranging from 3-star to 5-star, to investigate the performance of an installed CHP system and how a number of technical and fiscal parameters would affect that performance.

Using RETScreen, an energy management software program, six hotels were modelled using real-life measured data from existing hotels, chosen to cover a wide range of hospitality energy consumption trends. Once the six hotels were modelled, five parameters were varied to study their impact on the feasibility of a CHP system installed in these hotels, and simulations were run to determine which variable had the most influence on that feasibility. The parameters considered were the size of the CHP system, the availability of a capital grant, the grid-supplied electricity tariff, fuel prices, and the feed-in tariff. A total of 144 simulations were carried out. For the hotel with the most promising results, a detailed CCHP analysis was then added to include the provision of space cooling. Notwithstanding the fact that the study was conducted on a sample size of only six hotels, interesting trends and results were obtained, which will be augmented and consolidated in the future with the inclusion of more hotels in the study.

From an energy point of view, the main results and trends observed mainly revolved around three aspects:


From a financial point of view, the main results and trends observed mainly revolved around the sensitivity of the system performance to the fiscal parameters being analyzed. Specifically, that:


For CCHP systems considering the additional cost of the chiller, the same results obtained for the CHP-only case largely apply, with the FIT playing an important part in terms of the financial soundness of such systems.

**Author Contributions:** Conceptualization, B.M. and S.P.B.; methodology, B.M. and S.P.B.; software, B.M. and S.P.B.; validation, B.M. and S.P.B.; formal analysis, B.M. and S.P.B.; investigation, B.M. and S.P.B.; resources, B.M. and S.P.B.; data curation, B.M. and S.P.B.; writing—original draft preparation, B.M.; writing—review and editing, S.P.B.; visualization, B.M. and S.P.B.; supervision, S.P.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Given the financial nature of the data, only a limited amount of data is disclosed in this paper.

**Acknowledgments:** The manuscript is a revised version of an original scientific contribution presented at the 17th Sustainable Development of Energy, Water and Environmental Systems (SDEWES) Conference, held between the 6th and 10th of November 2022 in Paphos, Cyprus, and subsequently invited for review for inclusion in a Special Issue of *Sustainability* dedicated to the Conference. Compared to the original conference paper, the introduction, the literature review, and the conclusion were completely re-written and expanded to better highlight the approach taken in this study and the novelty addressed within the research article. Additionally, the results were re-organized to improve their presentation. Specifically, Table 3, the '*Simple payback heat map analysis for all scenarios*', was added to better show the results obtained.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Edmundas Monstvilas <sup>1</sup>, Simon Paul Borg <sup>2</sup>, Rosita Norvaišienė <sup>1,\*</sup>, Karolis Banionis <sup>1</sup> and Juozas Ramanauskas <sup>1</sup>**

	- MSD 2080 Msida, Malta

**Abstract:** As per the general provisions of European Directive 2010/31/EU on the energy efficiency of buildings (recast), the Lithuanian government transposed the Directive into Lithuanian national law. In the process, the Lithuanian government prepared strategic documents in the field of energy performance and renewable energy that were integrated through the National Energy and Climate Plan for 2021–2030 (NECP). To better understand the current situation vis-à-vis energy performance, the main characteristics of buildings pertaining to the Lithuanian multi-apartment building stock, classified according to their energy performance class, are analysed and discussed in this paper. Through the exploitation of data from the national Energy Performance Certificate (EPC) register, an overview of the energy performance of the existing Lithuanian residential building stock is presented, along with an analysis of the unused potential energy savings pertinent to this building category. The results obtained from the analysed energy consumption data show that the policies adopted over the years were successful in improving the building stock, promoting the move towards the specifications required for Class A++ (nearly zero-energy buildings—NZEB) by 2021. The results show that this was primarily achieved through a significant reduction in the thermal energy used for space heating.

**Keywords:** energy performance certificate; CO2; multi-apartment buildings; heating; energy consumption; cooling

#### **1. Introduction**

Building energy consumption is a significant area of research, often leading to direct policy action aimed at improving energy efficiency. This is often performed using various strategies and often accompanied by the requirements set through voluntary and sometimes binding international agreements, aimed at reducing greenhouse gas emissions.

Since 2010, energy-related Carbon Dioxide (CO2) emissions from buildings had been steadily increasing by around 1% per year, until 2020, when they dropped to 9 Gt due to decreased activity in the services sector [1]. Notwithstanding the fact that minimum energy performance standards are becoming stricter, and the installation of heat pumps and renewable equipment is growing, the energy sector continues to increase the amount of greenhouse gases emitted. In fact, in 2019, the direct and indirect greenhouse gas emissions from buildings reached an all-time high of 10 Gt [2]. This was the highest level of CO2 ever recorded and was mainly due to the growing demand for energy-related space heating and cooling. The continued consumption of fossil fuels, the lack of clear policies, and insufficient investment in sustainable buildings are also often blamed for the limited achievements in the area of energy efficiency in buildings, irrespective of the huge potential energy savings available.

**Citation:** Monstvilas, E.; Borg, S.P.; Norvaišienė, R.; Banionis, K.; Ramanauskas, J. Impact of the EPBD on Changes in the Energy Performance of Multi-Apartment Buildings in Lithuania. *Sustainability* **2023**, *15*, 2032. https://doi.org/10.3390/su15032032

Academic Editors: Oz Sahin and Russell Richards

Received: 19 December 2022 Revised: 12 January 2023 Accepted: 17 January 2023 Published: 20 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### *1.1. Directives as a Tool towards Achieving Energy Efficiency in Buildings*

Over the years, to counter this inertia in improving energy efficiency in buildings, the main tool adopted by the European Union has been European Directives, typically requiring Member States initially to legislate minimum building energy efficiency requirements, and eventually to reach a situation where all new buildings have to be nearly Zero Energy Buildings. Starting in 2002 with the Energy Performance of Buildings Directive (EPBD) (2002/91/EC) [3], up until the latest revised EPBD in 2018 (2018/844/EU) [4], the EU has used these Directives to promote energy performance in buildings through a harmonised European approach. The EU Directives on the Energy Performance of Buildings of 2010 and, more recently, of 2018 specifically obliged EU Member States to comply with minimum requirements for energy performance in new and existing buildings, to drive a reduction in energy consumption and greenhouse gas emissions, and indirectly to create an increased demand for energy produced from renewable sources [1–3].

The latest version of the Directive aims to minimise even further the use of fossil fuels for the energy provision of buildings, specifically by making the energy performance requirements even stricter and moving towards the construction of zero-emission buildings [5]. To implement this goal, the following indicators are set to be achieved by 2050 (compared to 2020):


#### *1.2. Building Certification*

Throughout the years, and generally since the first version of the Directive, the energy performance of buildings, and hence their environmental impact, has been assessed through certification. Although certification in EU Member States may take slightly different forms, not so much in the scientific methodology but rather in how it is practically carried out, building energy performance certification is typically a procedure regulated by legal acts, covering the calculation of the energy consumption of the building, the evaluation of the energy performance of the building, and the assignment of its performance to a specific energy performance class. The original idea behind Energy Performance Certificates (EPCs) was to create a document that would provide information to the participants in the construction sector (landlords, tenants, real estate agents, etc.) about the energy performance of buildings [4–6], and hence allow an informed decision on the 'value' of a particular building.

EPCs were initially introduced more than 20 years ago; however, very little research has been conducted to analyse the results and impact of the certification policy on the construction market [7–13]. There is a lack of a comprehensive overview of the available data in the EPCs used in each country across the EU, and although numerous studies on EPCs have been carried out in Northern and Southern Europe [14–21], detailed analysis of energy consumption by individual building engineering systems in the context of total energy consumption and CO2 emissions is severely lacking.

#### *1.3. Energy Efficiency in Buildings: Progress in Lithuania and Scope of Research*

In the process of implementing the provisions of a number of EU directives, the Lithuanian government has also prepared many strategic documents in the field of energy performance and renewable energy that were integrated together through the National Energy and Climate Plan for 2021–2030 (NECP) [22].

Specifically, the Lithuanian government is making great efforts to improve the energy performance of buildings. In this regard, being an essential element of the Lithuanian Energy Strategy for 2030, the decrease in the current energy consumption of buildings needs to start by defining the Lithuanian building stock, especially the residential sector, distinguishing the various specific characteristics and elements of buildings, with the aim of determining the unused potential for energy savings. To this end, a thorough analysis of the EPC data could help to disclose the response of the construction market to increasingly stricter energy performance regulations.

To this effect, following a thorough review of the literature on the development of energy performance regulations in Lithuania, presented in Section 2, this article discusses and analyses the main characteristics of Lithuanian multi-apartment buildings that determine their energy performance class. Based on this, Sections 3 and 4 present the main purpose of this article, that is, to determine the differences between buildings that satisfy the requirements of the EPBD of 2006 (buildings of class C) and those that satisfy the requirements of the EPBD of 2021 (buildings of class A++); in other words, to determine the impact that the increasingly stricter requirements set by subsequent revisions of the EPBD have had on the various energy performance indicators of buildings in Lithuania. Such indicators include the heated area, building compactness index, envelope average U-value, and energy consumed for hot water production. Results are presented in Section 5.

#### **2. Development of Energy Performance Regulations in Lithuania**

The inception of environmental protection and energy-saving policies in Lithuania dates back to 1992, when the first energy performance specifications for buildings were approved. Due to this newly approved regulation, the normative thermal transmittance coefficients of building envelopes were reduced by approximately a factor of three.

The energy performance requirements of buildings were updated for the second time in 1999 and for the third time in 2005. The requirements approved in 2005 were based on the EPBD of 2002; they introduced provisions for the certification and classification of energy performance, and a plan was made to improve overall building energy performance before 2020.

The certification of energy performance of buildings in Lithuania first began in 2006, with the transposition of the first European Directive (2002/91/EC) [3]. Certification is carried out using an approved calculation methodology that takes into consideration aspects such as the building envelope, heating and cooling systems, lighting, and other energy systems present. On the other hand, the impact of user behaviour on the energy performance indicators of buildings is not taken into account. The certification calculation methodology has been regularly updated since it came into use. Before 2012, energy performance was calculated according to the annual calculation method [3]; later, the monthly method was applied. Primary energy consumption was included in the calculation method in 2014. Currently, the Technical Building Regulations STR 2.01.02: 2016 '*Design and certification of energy performance of buildings*' [6] provide the background for the certification methodology, according to which the following aspects of building energy consumption must be evaluated:


The energy efficiency of buildings in Lithuania is not related to a particular numerical value of energy consumption but rather is defined according to the energy performance characteristics satisfied by a particular building. Buildings are divided into nine classes, namely: A++, A+, A, B, C, D, E, F, and G, according to their energy performance characteristics. Based on this principle, the legal acts set normative minimum requirements for the thermal characteristics of building envelopes, such as the U-value for roofs, external walls, windows, doors, etc., which must be satisfied for a building to be included in a particular energy performance class [23]. Whereas this applies to all energy performance classes, from energy performance class C upwards, the performance of the building's engineering systems and other building characteristics, such as the calculated air tightness of the building, the efficiency of the ventilation heat recovery system, and the presence of renewable energy sources, is also taken into consideration in awarding a building with a particular energy performance class.

Based on this premise, the research presented in this paper analyses data collected from multi-apartment residential buildings certified during the period of 2014–2020, stored in a central repository held at the Centre of Certification of Construction Production, as will be discussed later on. Building certification is compulsory when buildings are sold, purchased, rented, or inherited, and before renovation aimed at improving energy performance [4]. An EPC is also compulsory in order to complete the construction procedures of new buildings, and the same applies to buildings after major renovations. In the case of the purchase, sale, or rental of a building, an individual apartment may be certified if the whole building has not yet been certified. A typical EPC without detailed measurements and calculations may also be issued for building units (apartments) in old buildings. Such a procedure was established by the Lithuanian government in order to reduce residents' certification expenses when apartments are sold or rented, and is valid only for individual apartments, not entire buildings.

From a preliminary analysis, it results that most of the renovated buildings belong to class C. This occurred because, when buildings were being renovated, in most cases the aim was not simply to satisfy the minimum requirements but rather to achieve economically substantiated energy savings, that is, to improve the energy performance of buildings up to class C or even to class B.

The requirements for classes B, A, and A+ were established during the period of 2014–2020, so these in most cases reflect the level of energy performance of newly built and certified buildings. Starting in 2016, buildings had to have a minimum energy performance class of A, followed by a minimum class of A+ for buildings built after 2018. The requirements for class A++ were not yet mandated in the analysed period, as these became obligatory only in 2021; however, as real estate developers saw the benefit of building energy-efficient buildings, they started designing and constructing buildings of this energy performance class. This means that the analysis carried out (covering the certification period of 2014–2020) also included a number of EPCs (albeit small) of class A++ buildings complying with such energy characteristics.

As part of this analysis, it is important to note the requirements applicable to buildings in Lithuania over the years. In 1992, when the first minimum energy performance requirements for buildings entered into force, the requirement was for class G or better. Later, the requirements were made stricter, leading to the current energy class requirement of A++ for new buildings, set in 2021. When analysing the shift of indicators from class G to A++, it is therefore important to read this in conjunction with the year the class for that particular building was established. The changes in building energy performance requirements over the years are presented in Table 1. In 2006, when Lithuania started adopting the provisions of the EU energy performance directive, the requirements valid at the time in Lithuania matched those required by class C of the EU energy performance directive. From then onwards, the shift from class C to A++ may be seen as a result of stricter EU directives and other international agreements.

**Table 1.** The requirements for Energy Performance of Buildings over the years.


#### **3. Multi-Apartment Buildings in Lithuania: Certification**

According to the data available from the Lithuanian Real Estate Property Register, the total building stock in Lithuania consists of around 661 thousand buildings with a total area of 201.7 million m<sup>2</sup> [24], of which 41 thousand buildings are multi-apartment residential buildings, having a total area of around 59.5 million m<sup>2</sup>. In terms of their year of construction, it can be seen in Table 2 that the majority of these multi-apartment residential buildings were built between 1961 and 1992.

**Table 2.** Multi-apartment residential buildings sub-divided according to their year of construction.


Source: Real Estate Property Register (RPR) (31 December 2019).

Based on the figure of 41,021 multi-apartment buildings at the beginning of 2020 in Lithuania, the total floor space of these multi-apartment residential buildings amounted to 29% of the total area of the Lithuanian building stock. Table 3 shows the multi-apartment building stock area in Lithuania, sub-divided by size of individual property.


**Table 3.** Multi-apartment residential building stock in Lithuania.

Source: Real Estate Property Register (RPR) (31 December 2019).

The typical design energy consumption of these houses is between 160 and 180 kWh/m2 year. In terms of heating systems, according to data published by the Lithuanian Heating Association [24], 46.97% of multi-apartment buildings are supplied by a district heating system, with the remaining share of multi-apartment residential buildings using either a centralised boiler supplying the entire building or an individual boiler module placed inside each apartment. A smaller percentage of multi-storey residential buildings are heated using electric radiators [25]. Notwithstanding the high percentage of multi-apartment buildings connected to a central district heating system, on an area basis this share is still small, given that only 26% of the total area of the Lithuanian multi-apartment building stock is currently connected to a centralised heating system. To address this, the Lithuanian long-term strategy is to transform the current building stock in a way that would lead to a much more efficient use of energy (with conditions mature enough to transform these buildings into almost zero energy buildings) and make the country independent of fossil fuels by 2050.

In all this, the energy certification of buildings plays an important role, positioning itself as one of the most important tools of the building energy policy in Lithuania. Over the period between 2007 and 2021, 257,196 EPCs were issued and registered in Lithuania. For the purpose of this research, the data comprised the certification calculations of 5558 multi-apartment buildings registered between 2014 and 2020 [26]. Figure 1, to this effect, shows the distribution of energy performance certificates issued for these 5558 multi-apartment buildings. More energy performance certificates are present in central registries, but these were unfortunately not available.

**Figure 1.** The number of multi-apartment building certificates analysed from December 2014 to April 2020.

#### **4. Methodology**

In order to better understand the properties of the various energy performance classes, specific detailed data from the 5558 multi-apartment buildings were checked using the NRG6 certification software, prepared according to the energy efficiency specifications set by the building evaluation methodology presented in STR 2.01.02–2016 [26], and then analysed using Microsoft Excel. Data from the 5558 EPCs were then divided into nine sections according to the classes of building energy performance, that is, A++, A+, A, B, C, D, E, F, and G. The data categories analysed included heated area (m2); thermal transmittance coefficient (W/m2K); energy consumption (kWh/m2) for space heating, space cooling, and hot water production; CO2 emissions; and the distribution of predominant heating systems in buildings.

The indicators in each section were calculated using a weighted-average methodology, which the authors feel is most suited for this type of analysis. Using this methodology, the weighted average *Kavg* of each indicator was found using the heated area of the building as the weighting factor, as shown in Equation (1):

$$K\_{\text{avg}} = \frac{\sum \left( K\_{\text{x}} \cdot A\_{p.x} \right)}{\sum \ A\_{p.x}} \tag{1}$$

where *Kx* is the value of the respective analysed index of the individual building '*x*', for example, length, width, area of windows, energy consumption, etc.; *Ap.x* is the heated area of the individual building '*x*' in m2.
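As an illustrative sketch of Equation (1), the area-weighted average can be computed as follows (the function and the sample values below are hypothetical and are not drawn from the EPC register):

```python
def weighted_average(values, heated_areas):
    """Area-weighted average per Equation (1):
    K_avg = sum(K_x * A_p.x) / sum(A_p.x),
    where the heated areas A_p.x act as the weighting factors."""
    if len(values) != len(heated_areas):
        raise ValueError("each building needs one value and one heated area")
    total_area = sum(heated_areas)
    return sum(k * a for k, a in zip(values, heated_areas)) / total_area

# Hypothetical example: envelope U-values of three buildings, in W/(m2K),
# weighted by their heated areas in m2
u_values = [0.35, 0.25, 0.20]
areas = [1500.0, 2000.0, 2500.0]
print(round(weighted_average(u_values, areas), 3))  # 0.254
```

Weighting by heated area, rather than taking a simple mean over buildings, prevents small buildings from having a disproportionate influence on the class-level indicator.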

The average value of the thermal transmittance coefficient, U-value in W/(m2K), of the entire building of an individual building '*x*', including windows, *Ux*.*avg* was calculated as follows:

$$U\_{\text{x.avg}} = \frac{H\_{\text{x}}}{A\_{\text{env.x}}} \tag{2}$$

where *Hx* is the specific heat losses of the individual building '*x*', including losses in linear thermal bridges, W/K; *Aenv.x* is the total envelope area of the individual building '*x*' in m2.

The value of the building compactness index of the individual building '*x*', *Lc.x*, (m<sup>−</sup>1) was calculated using Equation (3):

$$L\_{\text{c.x}} = \frac{A\_{\text{env.x}}}{V\_{\text{x}}} \tag{3}$$

where *Aenv.x* is the total area of envelopes of the individual building '*x*', m2; *Vx*, is the volume of heated premises of the individual building '*x*' in m3.

Although the Lithuanian energy performance calculation methodology sets no requirements for the compactness index of a building, the compactness index is an important parameter when assessing the effect that architectural shape and form [27] have on heating energy consumption, as it is the ratio of the area of the outer building envelope to the heated volume. In cold climates, compact forms should be used to minimise heat loss; therefore, a reduction in the compactness index is a desirable energy-efficiency strategy [28].
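Equations (2) and (3) are simple ratios and can be sketched directly in code (the building dimensions below are hypothetical, chosen only to illustrate the units involved):

```python
def u_avg(specific_heat_loss, envelope_area):
    """Average envelope U-value, Equation (2): U_x.avg = H_x / A_env.x.
    specific_heat_loss H_x in W/K, envelope_area A_env.x in m2;
    result in W/(m2K)."""
    return specific_heat_loss / envelope_area

def compactness_index(envelope_area, heated_volume):
    """Building compactness index, Equation (3): L_c.x = A_env.x / V_x.
    envelope_area in m2, heated_volume in m3; result in 1/m
    (a lower value means a more compact, less heat-losing form)."""
    return envelope_area / heated_volume

# Hypothetical building: 2400 m2 of envelope area, 840 W/K of specific
# heat loss (thermal bridges included), 5800 m3 of heated volume
print(round(u_avg(840.0, 2400.0), 3))               # 0.35
print(round(compactness_index(2400.0, 5800.0), 3))  # 0.414
```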

Apart from this, other aspects were also analysed, including the energy consumption for hot water production, space heating and cooling, and the electrical energy consumed for electrical appliances and lighting.

In terms of climatic zone, hence the outdoor conditions to which the analysed building stock is exposed, Lithuania belongs to a cool temperate climate zone, where summers are moderately warm and winters are moderately cold. The average temperature in July is around +17 °C and, in winter, this goes down to −5 °C. The difference between the average temperatures is therefore approximately 20 °C. Although the Lithuanian territory is in a cool, temperate climate zone, the western part of the country falls under the influence of the Baltic Sea, where higher annual precipitation, faster wind speeds, and a higher average yearly temperature are recorded. The building stock in the western territory amounts to approximately 11% of the total area of the building stock. All residential buildings that fall under this investigation are in the same climatic zone, which, according to the Köppen–Geiger climate classification, is defined as Dfb [29]. Lithuania has an annual heating season of between 5 and 6 months, when the outside temperature is lower than +10 °C.

#### **5. Results and Discussion**

#### *5.1. Analysis of the Heated Area and Compactness Index of Multi-Apartment Buildings*

Looking at the overall heated area, it can be observed in Figure 2a that, as the minimum energy performance class of buildings increased from a minimum of G to A++, an increasing trend with respect to the heated area within multi-apartment buildings was experienced.

On the contrary, during the same period, a decreasing trend with respect to the building compactness index was observed, as shown in Figure 2b. Using the compactness index of a building as a tool to assess the impact the shape of a building has on its energy efficiency, it can be deduced that, as the energy performance class improved, a better compactness index was obtained through the increase in the volume of buildings and the use of better targeted design solutions. As discussed, the compactness index has a substantial influence on the need for heating; therefore, it is necessary to optimise the energy concept in the initial building design. According to the data presented, it can be stated that the stricter directives on energy performance and their implementation (data from class C to A++) are reflected in the 36% increase in heated area (from 1630 m<sup>2</sup> to 2227 m<sup>2</sup>) and the 23% reduction in the compactness index (from 0.53 m<sup>−1</sup> to 0.41 m<sup>−1</sup>).

In line with most modern architectural trends, the design of glazed envelopes also evolved. The trend shown in Figure 3 indicates that, over the analysed period, there was an overall increase in the use of glazing in building envelopes of almost 25% (from 21.8% to 27.4%). Although this is beneficial in that it increases the amount of natural light available, the increased glazed area within the building envelope lets in more sunlight, which has resulted in an increase in the thermal and primary energy consumed to cool buildings.

**Figure 3.** The average of window area in building facades, %.

#### *5.2. Analysis of the Level of Thermal Insulation and Energy Consumption for Heating*

According to the data inspected and the analysis performed, the thermal insulation level of buildings improved almost fivefold over the period analysed, that is, between a minimum requirement of class G and a minimum requirement of class A++. In fact, the average envelope U-value decreased from 1.04 W/(m2K) for class G to 0.215 W/(m2K) for class A++, as shown in Figure 4. Observing the specific impact of the EU directives from 2006 (class C), that is, from the transposition year of the first EPBD into Lithuanian law up until class A++ became the minimum mandated requirement, the thermal insulation level improved by approximately 1.6 times, with the average envelope U-value decreasing from 0.35 W/(m2K) for class C to 0.215 W/(m2K) for class A++.

Due to the increased level of thermal insulation of the building envelope, the average final energy consumption used for heating decreased from 262 kWh/m<sup>2</sup> year for buildings of energy performance class G to 13 kWh/m<sup>2</sup> year for buildings of energy performance class A++; as shown in Figure 5a, this is a decrease of almost 95%. Likewise, primary energy consumption decreased from 440 kWh/m2 year for buildings of energy performance class G to 19 kWh/m2 year for buildings of energy performance class A++, as shown in Figure 5b. Truth be told, the marked decrease can also be partly attributed to the building air permeability requirements introduced in Lithuania in 2014 [30].

**Figure 4.** Average thermal transmittance coefficient of the building envelope, in W/(m2K), in buildings of various energy performance classes.

**Figure 5.** Average of (**a**) thermal and (**b**) primary energy consumption for heating in buildings, in kWh/m2 year, in buildings of various energy performance classes.

Similarly to the analysis performed for the building envelope, considering exclusively the period when the EU directives were in effect, that is, from 2006 onwards, it can be observed that the reduction over that period was modest compared to that over the entire duration of the analysis. In fact, the reduction in annual heating energy consumption between class C and class A++ is only 43 kWh/m2 year (56 kWh/m<sup>2</sup> year for class C compared to 13 kWh/m2 year for class A++), whereas over the entire range of energy performance classes, that is, from class G to class A++, the reduction was 249 kWh/m<sup>2</sup> year (262 kWh/m<sup>2</sup> year for class G compared to 13 kWh/m<sup>2</sup> year for class A++). This means that significant work in terms of promoting and legislating in favour of energy efficiency in buildings had already been carried out even before the transposition of the Energy Performance of Buildings Directive, and that, once the law was enacted in 2006, the requirements imposed by the directive most likely found an already receptive and favourable environment.

The analysis between the thermal energy consumption for heating (Figure 5a) and cooling (Figure 6a) of buildings revealed an important trend, that is, that thermal energy consumption for cooling has increased significantly in importance with the increase in the energy performance of buildings.

**Figure 6.** Average of thermal (**a**) and primary (**b**) energy consumption for cooling in buildings, in kWh/m2 year.

Thermal energy consumption to cool buildings of class C accounts for only 14% of the energy consumption for heating (8 kWh/m2 year used for cooling against 56 kWh/m2 year used for heating), while thermal energy consumption to cool buildings of class A++ accounts for 55% of the energy consumption for heating (7.1 kWh/m<sup>2</sup> year used for cooling against 13 kWh/m<sup>2</sup> year used for heating). Whereas this may be attributed to an absolute increase in cooling demand, that is only partially true. The reality is, in fact, that there has been such a concerted effort at decreasing heating demand that the relative weight of the two within the overall energy consumption has shifted towards each other. A very similar trend is seen in the primary energy consumption for the cooling and heating of buildings.

Although this is positive in terms of absolute energy consumption, as it shows that the policies which have been enacted to reduce the heating demand have been successful, it also means that in the future, especially during summers, the means of protecting the buildings against overheating and the energy efficiency performance of cooling systems will become increasingly important. It is also possible to foresee that the increased construction of buildings of class A++ will cause a growth in the need for cooling devices in buildings and, hence, the associated electricity energy required.

#### *5.3. Analysis of Energy Consumption for Hot Water Production*

The need for thermal energy for the production of domestic hot water covers the following:


As has already been discussed, the growing level of thermal insulation of the envelope stemming from stricter directives and legislation has created the preconditions for the decrease of thermal energy consumption used for heating buildings. However, the same cannot be said for the reduction in thermal and primary energy consumption for the production of domestic hot water. The need for domestic hot water in buildings has not had a significant change over time. Typically, reductions in final energy consumption for domestic hot water in buildings can be achieved through a number of actions, some of which are technology-based, while others are driven by human behaviour. For the former, these actions include the shortening of the length of the system pipes, insulating hot water supply pipes, and increasing the performance of the equipment used to heat water. The latter actions typically relate to educating building occupants on aspects such as the duration and use of hot water, etc. These are not always easy to implement because often technology advancement is slow to respond or because of reluctant behaviour from the consumer side.

Figure 7 shows the energy consumption of the aforementioned systems in buildings based on the energy performance class. Compared to the energy consumption in buildings of class C, the thermal energy consumption in hot water production systems in buildings of class A++ decreased only by 25% (from 52 kWh/m2 year to 39 kWh/m<sup>2</sup> year), with a corresponding reduction in primary energy consumption (from 71 kWh/m2 year to 49 kWh/m2 year).

**Figure 7.** Average of (**a**) thermal and (**b**) primary energy consumption for hot water production, in kWh/m2 year, in buildings of various energy performance classes.

#### *5.4. Analysis of Energy Consumption for Electrical Consumption*

Figure 8 shows how the average primary energy consumption due to electricity also diminished with increasing energy performance class. As is to be expected, however, the decrease is not as pronounced as in the case of, for example, space heating. In part, this is because, similarly to hot water production, energy efficiency improvements are not only technology-driven but are also dependent on human behaviour. Additionally, in certain cases, there has been an increase in the electrification of certain activities, leading to an overall increase in electricity consumption [31].

#### *5.5. Overall Share of Energy Consumption in Lithuanian Multi-Apartment Buildings*

Whereas the overall primary energy consumed by multi-apartment residential buildings has gone down, it is also interesting to note how the overall share of primary energy consumption in buildings has changed over the years. In this regard, Figure 9 shows the primary energy consumed by use for a class C building and a class A++ building.

Taking as an example the primary energy used to heat buildings, buildings having an energy performance class of class C utilise a share of around 35% of their total primary energy consumption for heating. This is significant compared to class A++ buildings which consume only 15% of the total. This is by far the most marked difference, also showing how effective policies were in reducing energy consumption used for heating in buildings.

On the other hand, the other main categories, such as the energy used for lighting and electrical appliances, space cooling, and domestic hot water, have all shown increases in their share of primary energy consumed. Again, this is not to say that the overall final energy consumption in these three categories has increased, but rather that their share within the overall balance of final energy consumption has increased.

In terms of domestic hot water production, despite a significant reduction in the amount of primary energy necessary for the production of hot water in buildings of class A++ compared to buildings of class C, this use has increased its share from 33.8% (class C) to 40.1% (class A++); the same applies to the second largest individual primary energy consumer, that is, energy used for lighting and electrical appliances. In terms of space cooling, although an increase has been observed in terms of share, this is still small compared to the former two uses.

These overall results indicate that, whereas significant work has been performed on the aspect of producing better building envelopes, much still needs to be done in order to reduce primary energy consumption in buildings even further. Moreover, future research, policies, and legislation will need to start looking more actively at reducing energy from uses other than merely heating and cooling and focus much more on other energy uses in buildings.

Summarizing the results of the primary energy consumption analysis allows one to conclude that the requirements demanded by the energy performance directives to move to the construction of buildings of class A++ (NZEB) in Lithuania from 2021 were successfully implemented. This led to a significant reduction in primary energy consumption for heating and, to a lesser extent, in primary energy consumption for the production of hot water, lighting, and electrical appliances. In fact, relatively high primary energy consumption remains in class A++ buildings with regard to hot water production, where the primary energy consumption (48.6 kWh/m2 year) is almost three times higher than the primary energy consumption to heat buildings (18.8 kWh/m2 year).

#### **6. Conclusions**

The purpose of the directives for the energy performance of buildings in the residential sector is to gradually reduce the consumption of non-renewable primary energy used in buildings and thus reduce CO2 emissions. Similarly, one of the key priorities of the Lithuanian National Energy Independence Strategy is to increase the energy performance of buildings. To better understand the success or failure achieved so far, it is essential to adequately assess energy performance indicators and to select appropriate energy saving tools to determine unused energy savings potential and reduce energy consumption in buildings.

The research presented in this paper describes the differences between buildings that comply with the requirements of the first transposition of the EPBD Directive of 2006 (buildings of class C) and buildings that comply with those set by the latest EPBD Directive of 2021 (buildings of class A++). Specifically, it describes the impact that the increasing requirements of the EPBD directives had on various energy performance indicators of buildings. To provide a better perspective, results are also compared to earlier requirements dating back to the first energy efficiency legislation enacted in Lithuania. The analysis was performed using Energy Performance Certificates (EPCs) as the source of data.

The statistical analysis of certificates revealed that, when the energy performance requirements became stricter following the transposition of the first EPBD in 2006, the average heated area of multi-apartment buildings increased by 36%. This was, however, countered by an increase in the building compactness index, which resulted in an increase in the volumetric efficiency of buildings, and by the use of better targeted solutions, increasing the quality of the designed buildings.

The primary energy consumption analysis shows that the requirements set by the energy performance directives to move to NZEB constructions were successfully implemented with a significant reduction in the primary energy used, particularly for space heating. Relatively high primary energy consumption remains present in class A++ buildings with regard to lighting, electric appliances, and hot water production, where primary energy consumption is 2.6 times higher than the energy consumption for space heating. Primary energy consumption could therefore be further reduced if the aforementioned energy consumption for the production of hot water, lighting, and electric appliances could become more energy-efficient.

In relation to cooling, results show that, with the higher level of insulation and increased percentage of glazing in building envelopes, space cooling in NZEB buildings will most likely be a significant energy consumer, at least regarding the overall share of energy consumption.

Finally, this paper should not be seen on its own but rather as a first step towards understanding energy performance trends in buildings in Lithuania. Specifically, future work analysing other building typologies and making use of the results obtained for the production of guidelines can be considered a natural follow-up to this study.

**Author Contributions:** Conceptualisation, E.M., R.N. and K.B.; methodology, E.M. and K.B.; validation, J.R., R.N. and S.P.B.; formal analysis, R.N. and S.P.B.; investigation, E.M. and R.N.; resources, E.M., K.B., J.R. and R.N.; data curation, E.M.; writing—original draft preparation, R.N. and S.P.B.; writing—review and editing, S.P.B.; visualisation, R.N. and S.P.B.; supervision, E.M. and S.P.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The manuscript is a revised version of an original scientific contribution that was presented at the 17th Sustainable Development of Energy, Water and Environmental Systems (SDEWES) conference held between the 6th and 10th of November 2022 in Paphos, Cyprus and that was subsequently invited to be submitted for review for inclusion in a special issue of Sustainability dedicated to said Conference. Compared to the original Conference paper, the manuscript was extensively revised and re-written.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **A Performance-Based Decision Support Workflow for Retrofitting Residential Buildings**

**Suzi Dilara Mangan 1,2**


5600 MB Eindhoven, The Netherlands

**Abstract:** The trend towards high-performance residential buildings with new building regulations necessitates fundamental changes in the residential market, which is currently driven by low initial investment costs and dominated by weak innovative cycles. This change involves a difficult decision-making process that must consider the multiple and generally conflicting objectives regarding optimal retrofitting for residential buildings. This study aimed to develop an approach that would provide feedback about a building's energy and economic performance in relation to the decision-making process to ensure that the complex residence retrofitting process is more efficient. For this purpose, a performance-oriented decision support workflow is recommended for a typical multifamily apartment block within a hypothetical settlement context in Istanbul Province, which includes (i) an automated parametric energy simulation through the coupling of EnergyPlus and MATLAB® to determine differences between retrofit alternatives in relation to the building envelope, energy systems and renewable energy systems, and (ii) a multiple-criteria decision analysis to determine the retrofit alternatives by which the optimal performance can be achieved, taking into account the conflicting nature of key performance indicators (primary energy saving and life-cycle cost saving). Architects and residence owners—who are the main decision makers—can use this proposed workflow to explore effective retrofit alternatives and to make informed decisions about performance-based retrofitting by comparing the energy and economic performance of these alternatives.

**Keywords:** performance-based retrofit; decision support; building performance simulation; parametric analysis; multiple-criteria decision analysis; residential buildings

**Citation:** Mangan, S.D. A Performance-Based Decision Support Workflow for Retrofitting Residential Buildings. *Sustainability* **2023**, *15*, 2567. https://doi.org/10.3390/su15032567

Academic Editors: Oz Sahin and Russell Richards

Received: 13 January 2023 Revised: 25 January 2023 Accepted: 27 January 2023 Published: 31 January 2023

**Copyright:** © 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

### **1. Introduction**

In the future, residential buildings with high levels of energy consumption will play a major role in increasing the energy demand throughout the world and will exert significant pressure on the primary energy supply. The IEA report [1] states that the share of electricity in energy use in buildings will increase from 33% in 2017 to 55% in 2050. Residential buildings, which are responsible for approximately 70% of the energy consumption in buildings, are the main source of energy demand in buildings. On the other hand, it is predicted that with the major improvements to be made, the electricity demand will be approximately 300 million tonnes of oil equivalent (Mtoe) lower than it would normally be in 2050 [1]. Therefore, various energy policies have been developed in the last few years in order to support the economic, environmental and social gains that can be obtained by retrofitting residential buildings. For example, it has become mandatory to change the ongoing dominant dwelling-production paradigm in the residential sector in a way that prioritises the improvement of residential building performance in order to ensure long-term global energy security. However, the production of high-performance residential buildings is neither simple nor straightforward [2,3]. Performance goals are shaped by many factors, such as the current legal restrictions and the effects of the built environment; hence, no prototype is available that provides high performance at a low cost [2]. Moreover, in the residential building production process that is based on the iterative trial-and-error method [4,5], the number and complexity of possible solutions increase as the performance targets for residential buildings become more ambitious, and therefore, the evaluation of various retrofitting alternatives becomes very challenging [6,7].

A wide range of decision support tools is used to address these challenges. Parametric analysis and multiple-criteria decision analysis are the methods that are commonly considered in the development of decision support tools. Parametric analysis is used to obtain wide-ranging solutions to improve residential building performance and evaluate the effect of various design variables on these solutions. Multiple-criteria decision analysis is used to determine the alternatives that best meet the multiple and conflicting objectives, i.e., those that give the optimal performance within this wide range of solutions. Many studies emphasise that integrating these methods to improve residential building performance will facilitate the movement away from conservative approaches, in which only the minimum requirements specified in the code are met to keep initial investment costs low and a weak innovative cycle is dominant, towards performance-based approaches [8–11]. Consequently, this study proposes a computational performance-based decision support workflow that enables adaptation to the normative frameworks and performance rating systems while providing iterative feedback on retrofit decisions that have an important effect on residential building performance, which will make a positive contribution to this process of change in the residential sector. A comprehensive solution space search approach based on the integration of a parametric residential energy simulation with a multiple-criteria decision analysis was selected as the basis for this recommended workflow. This workflow was used to analyse how early-stage retrofit decisions affect residential energy consumption and economic burden based on the contextualised simulation framework. The potential of the recommended workflow to support the retrofit decision-making processes of the target decision makers (residence owners and architects) is presented through a case study conducted in Istanbul Province, Türkiye.

#### *1.1. Background*

The building sector, which lies at the focal point of significant challenges such as producing sustainable built environments, combating climate change and increasing energy security, also has serious potential to address these challenges. In this respect, the building sector, which is responsible for 36% of global final energy consumption and 39% of energy-related CO2 emissions [12], is seen by many countries as a key component in developing cost-effective energy and climate change policies and achieving the targets that have been set. However, the rate of energy intensity reduction in the building sector has been declining in recent years and has not kept pace with the 2.5% increase in building floor area that occurred from 2017 to 2018 [13]. These findings reveal that the vast majority of the energy efficiency potential envisaged for buildings will not be exploited unless current policies are changed [14]. At this point, the issue of the efficiency gap comes to the fore, defined as the difference between the level of energy efficiency investment actually made and the higher level that is technically and economically feasible. Although the barriers that cause the efficiency gap differ from country to country and city to city in terms of importance [15], it is emphasised that the most basic barrier is the knowledge gap [16–19].

Awareness-raising and catch-up work activities, which are of key importance in reducing this knowledge gap, are handled within the scope of many policies, and efforts are made to establish a balance between supply and demand in the building sector. This effort is crucial to avoid urban areas where inefficient building stocks with low adaptability are accumulated, which may result in high costs in the coming years due to the long lifespans and high energy consumption levels of buildings. Notably, for developing countries where there is rapid urbanisation, high population growth, and a mismatch between energy supply and energy demand, the current limited knowledge and the lack of expertise and awareness on the supply (architect, engineer, contractor, investor and other stakeholders) and demand (building owner and tenant) sides deepens this efficiency gap. In this context, a key obstacle is that decision makers on both the supply and demand sides of the currently fragmented building sector are unaware of cost-effective applications and technologies that provide energy savings, or do not find the potential positive impacts of these applications and technologies on a building's energy, economic and environmental performance convincing [20,21]. Performance-based housing production [22,23], which is seen as the primary solution to this obstacle, necessitates radical changes to a housing production market currently based on conservative approaches, in which only the minimum requirements specified in the legislation are met, the focus is on low initial investment costs, and a weak innovative cycle is dominant. Although the realisation of this change is challenging, developing the design process in the best way can have a wide impact and help achieve new sustainability goals. In particular, the potential to determine the design solution that best meets the conflicting objectives of building performance (e.g., minimum energy consumption and minimum life-cycle cost) based on the design's ultimate goal is highest in the early design process [24–26]. At this point, although it is difficult to determine the level of impact of early design decisions on building performance, the use of appropriate computer-based software aimed at shedding light on this problem is widely accepted. In particular, building performance simulations are an integral part of sustainable design, facilitating the examination of the impact of design decisions on solutions that provide the required life-cycle performance at a reasonable cost, thereby helping the architect (decision maker) develop an overall understanding about the quantitative performance indicators [27–29].
The evolution of the normative structure of building regulations towards performance-based criteria has dominated the workflows found in building performance simulation tools that support the final building design stage [30,31]. On the other hand, the integration of building performance simulations into the decision-making process that supports the iterative nature of the early design process has been limited [30]. However, in recent years, the development of decision support tools with design-oriented (multivariate) workflows in the field of building performance simulation, which predominantly supports analysis-oriented (on a single solution) workflows, has also gained momentum [30,32]. The transition from the phenomenon of simulation to the design decision-making process ranges from the use of fairly simple pre-decision evaluation and analysis tools to utilizing parametric and optimisation decision tools aimed at integrating building performance simulations and informing design in the early design stage [32,33]. Parametric simulation tools offer great opportunities for informative support in the early design phase as they help to examine alternative design solutions for improving building performance and determining the effectiveness of different design parameters for these solutions as well as simulation-based optimisation tools that enable decision makers to find the most appropriate solution for a specific purpose among these alternative solutions. It is possible to say that the design solutions based on many studies conducted in this context concentrate on geometry (settlement scale [34–38], building scale [37–43]), building envelope [42,44–48] and energy systems [42,46,49,50] and renewable energy systems [50–54]). Prioritising these design solutions by considering quantitative or qualitative criteria for energy-efficient building design or building retrofitting is the focus of these research studies. 
In this context, criteria that can be considered in support of the decision-making process, according to the analysis by Kolokotsa et al. [55], can be listed as: (i) primary or final energy consumption, heating and cooling loads, annual electrical energy consumption, embedded energy and energy savings based on building retrofitting with regard to energy; (ii) initial investment costs, life-cycle cost, net present value and replacement cost in relation to cost; (iii) annual CO2 emissions and global warming potential, life-cycle environmental potential and CO2 emission reduction potential in relation to the environment; (iv) thermal comfort, noise level and availability of daylight for indoor quality and comfort; and (v) durability, safety and functionality within the framework of other criteria. Depending on the number of criteria evaluated, optimisation methods can be classified as single-objective or multi-objective. Although the single-objective optimisation method is used in most of the studies (about 60%), the necessity of evaluating mostly conflicting design criteria within the scope of energy-efficient building design is shifting the orientation of studies in this field towards multi-objective optimisation [2]. In this context, the objectives commonly used in the studies are listed as energy, cost, thermal comfort and CO2 emissions, according to the rate of consideration [2,56]. This makes it easier to analyse quantitative performance data for multiple criteria and develop an understanding of the results of different design solutions to meet the stringent requirements of high-performance building design.
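The multi-objective selection described above can be illustrated with a small sketch: given (primary energy saving, life-cycle cost saving) pairs for a few retrofit packages, a Pareto-dominance filter keeps only the non-dominated alternatives. The package names and values below are invented for illustration and do not come from the studies cited.

```python
def pareto_front(alternatives):
    """Return the names of alternatives not dominated by any other.

    An alternative is dominated if another alternative is at least as good
    on every objective and strictly better on at least one (both
    objectives are maximised here).
    """
    front = []
    for name, objs in alternatives:
        dominated = any(
            all(o >= s for s, o in zip(objs, other))
            and any(o > s for s, o in zip(objs, other))
            for other_name, other in alternatives
            if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical (PES %, LCC saving %) pairs for four retrofit packages
alternatives = [
    ("envelope_only",      (35.0, 10.0)),
    ("envelope_plus_hvac", (55.0,  8.0)),
    ("hvac_only",          (25.0,  9.0)),
    ("full_package_pv",    (70.0,  5.0)),
]

print(pareto_front(alternatives))
```

Here "hvac_only" is dominated by "envelope_only" (worse on both objectives) and is filtered out; the remaining three packages form the trade-off front a decision maker would choose from.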

However, parametric simulation and simulation-based optimisation tools still remain only research tools, despite having the potential to provide crucial support in the decision-making process at the early design stage, and their integration into building production practices is still very limited [7,57]. The reasons for this limited level can be listed as follows: (i) formulating the problem correctly and choosing the appropriate tool for solving the problem requires knowledge and experience; (ii) the complex structure of the simulation and optimisation tools; (iii) detailed data entries exceed the expertise level of decision makers (e.g., architect and building owner) in the building design process; (iv) long computation processes; (v) lack of integration into the design workflow; and (vi) the absence of a graphical user interface to facilitate the analysis of large numbers of simulation outputs [7,56,58,59]. Although it is possible to overcome these obstacles by using graphical user interfaces and algorithms that take advantage of modern computer architectures and imaging features, the last listed reason especially highlights the lack of suitable and understandable visualisation techniques [7,58,60,61]. Some studies on the development of tools aimed at supporting the decisions that can be made in the early design stage to minimise this deficiency have been carried out by Ochoa and Capeluto [62], Petersen and Svendsen [28], Attia et al. [32], Naboni et al. [63], Elbeltagi et al. [64], Nault et al. [10], and Nik-Bakht et al. [65]. These decision support tools, which share common goals such as reducing design inputs and shortening simulation times, are mostly based on the parametric simulation method, and the analysis results are presented with different graphical user interfaces that have been developed. 
In these studies, visual feedback based on the comparison of design alternatives from visualisation techniques was considered to facilitate the analysis of the cause–effect relationship based on different values for the design parameters or the selection of different design parameters. This visual feedback ranges from static images (graphical displays) [10,28,32,62], which have an analytical and meaningful value, to multivariate interactive visualisation techniques (e.g., parallel coordinate graphs), where the entire solution space is visualised [63–65].

#### *1.2. Problem Statement and Research Objective*

Although investments in the energy efficiency of buildings are insufficient, many obstacles continue to be encountered in practice. In this respect, some critical gaps that hinder the development of the housing market can be listed, such as the insufficient coverage of current energy efficiency legislation, an insufficient level of compliance with current legislation, the complexity of production of sustainable, energy-efficient buildings, and insufficient information about alternative solutions [66–68]. This situation is more serious in developing countries where a high energy demand based on a rapid urbanisation rate is experienced. Considering the long lifespan of buildings, facilitating steps must be taken to ensure the successful and widespread improvement in energy-efficient buildings in the housing market. It is very important that these steps are structured specifically to fill the current knowledge gap.

Considering the restrictions listed above, it does not seem possible to establish a qualified supply–demand balance in the housing sector unless the inconsistency between the need for information and access to information in the current housing production and retrofitting processes is eliminated. This situation raises awareness of the need to provide an adequate and constructive information flow to consumers (building owners, etc.) and producers (architects, etc.) who play an active role in the early design phase, but have limited awareness or knowledge.

The aim of this study is to create data that enable decision makers to make informed decisions regarding residential building retrofitting and to present these data in a way that facilitates the decision-making process. Within the framework of this purpose, the target decision makers are the architects and residence owners who represent the supply and demand sides in the housing sector. Therefore, questions such as ". . . (e.g., low-e coated glass) what happens?", "what is the optimal energy and cost-effective design option or options?" and "which design parameters should I prioritize in the retrofitting/design of the building with high energy efficiency (e.g., passive building)?", which architects and residence owners constantly face in their decision-making processes, were considered in developing the methodological framework of the study. These sub-questions shaped the key query of the study, such as "how can architects and residence owners consider residential energy and economic performance, and even environmental performance, when making informed design decisions for sustainable, energy-efficient building retrofitting?" In this context, a computational, performance-based design-support workflow has been taken as the basis to systematically and exhaustively search and analyse technically applicable and accessible design solutions and to determine optimal solutions. 
Contributions made within the framework of this approach can be listed as: (i) integrating parametric and optimisation analyses of early-stage design decisions with regard to energy and cost-oriented performance evaluations in a single platform; (ii) considering the shading effect of neighbouring buildings based on the configurations of urban forms in residential energy simulations to establish a consistent analysis framework using the performance-based design-support approach; (iii) considering the entire building system holistically by optimising the building's subsystems (building envelope, energy systems and renewable energy systems) through a multi-objective performance evaluation; and (iv) visualising the obtained data to support the integrated performance view.

#### **2. Methods**

This study focused on creating a contextualised computational framework that will contribute to reducing the barrier between building design/retrofitting and building performance and bridging the gap between theory and practice to realise a transition towards energy-efficient housing production in the built environment. In this context, the performance-based decision support workflow can be applied both in new residential building design and in residential building retrofitting, and it consists of three main steps. The steps regarding the workflow are given in Figure 1 and are explained in detail below through the case study.

**Figure 1.** Framework of the proposed decision support workflow.

#### *2.1. Step One: Preprocessing and Model Abstraction*

#### 2.1.1. Definition of the Key Performance Indicators

One of the factors that play an essential role in the success of the residential building performance improvement process is the definition of the key performance indicators based on the priorities of the target decision makers. Thus, the current study focused on defining the performance indicators: (i) to facilitate communication between the architect and the residence owner, who constitute the primary target audience for the developed decision support workflow; (ii) to enable these two important decision makers to focus on the design variables to improve the residential building performance and compare the various retrofit alternatives; and (iii) to increase the level of awareness about the data obtained regarding the energy identity certificate that each building is required to obtain in Türkiye that is outlined within the scope of Energy Efficiency Law No. 5627 [69] and the Energy Performance Regulation in Buildings [70]. Within this context, an evaluation of the studies [2,3,55] previously conducted to improve residential building performance indicates that the following key performance indicators are commonly used: (i) primary energy consumption, including residential energy consumption (heating, cooling, lighting, etc.) and the energy savings provided by renewable energy systems; and (ii) life-cycle cost, including long-term expenditures in defining the economic viability of the retrofitting alternatives. This study uses these two key performance indicators, with their conflicting structures of primary energy savings (PESs) and life-cycle cost (LCC) savings, in order to determine the performance levels of the alternatives for improving residential buildings in comparison to those of the reference situation, which will ultimately inform the retrofit decision-making process.
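As a minimal illustration of the two key performance indicators, the sketch below computes PES and LCC saving as percentage reductions relative to the reference case. The helper names and numeric values are placeholders for illustration, not results from this study.

```python
def primary_energy_saving(pe_ref, pe_alt):
    """PES: percentage reduction in primary energy vs. the reference case."""
    return 100.0 * (pe_ref - pe_alt) / pe_ref

def lcc_saving(lcc_ref, lcc_alt):
    """LCC saving: percentage reduction in life-cycle cost vs. the reference."""
    return 100.0 * (lcc_ref - lcc_alt) / lcc_ref

# Hypothetical example: reference 180 kWh/m2 per year vs. retrofit 95,
# and life-cycle costs of 250,000 vs. 215,000 monetary units
pes = primary_energy_saving(180.0, 95.0)
lccs = lcc_saving(250_000.0, 215_000.0)
print(f"PES = {pes:.1f}%, LCC saving = {lccs:.1f}%")
```

Because a retrofit that raises PES (e.g., thicker insulation, photovoltaics) usually also raises investment cost, the two indicators generally pull in opposite directions, which is what motivates the multiple-criteria analysis in the workflow.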

#### 2.1.2. Identification of the Target Residential Building and Settlement Form

In this study, the analyses focused on multifamily residential buildings (apartments) that constitute the majority of housing in the total residential area in Istanbul Province. The residential buildings constructed by the Housing Development Administration of Türkiye, one of the main actors in apartment block-based dwelling production and urban transformation, have also been analysed. Within this context of defining the target residential building geometry, the following factors were considered: (i) design parameters (plan type, building height, roof type and window-to-wall ratio (WWR), i.e., the total window area/total facade area) on a building scale; (ii) design parameters (settlement form, ratio of building height to street width (H/W) and orientation) on a settlement scale. A four-module residential building with a floor area of 100 m2 was used to define the typical apartment block with a square footprint and a form factor (the building length/building depth in the plan) of 1.00. The height of the building was 15 m, and the height from floor to floor was 3 m. The roof type has been defined as a pitched roof and the WWR for all facades was 30%. The settlement form was based on a 3-by-3 matrix according to a uniform configuration of a 9-point block with the same features on a hypothetical site. The block at the centre of the matrix arrangement was the target residential building for which the relevant analyses were performed. For the H/W ratio and orientation, the comprehensive analysis results of Mangan et al. [37] were taken as the basis.
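Under the stated geometry (square plan, 15 m height, 30% WWR), the facade and window areas implied by the window-to-wall ratio follow directly. The short sketch below assumes the 100 m2 figure refers to the square floor footprint; that reading and the variable names are assumptions for illustration.

```python
import math

floor_area = 100.0   # m2, square footprint (assumed per-floor plan area)
height = 15.0        # m (5 storeys at 3 m floor-to-floor)
wwr = 0.30           # total window area / total facade area

side = math.sqrt(floor_area)      # side length of the square plan
facade_area = 4 * side * height   # total facade area (opaque + glazed)
window_area = wwr * facade_area   # glazed area implied by the WWR
print(facade_area, window_area)
```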

Regarding energy consumption and economic impact, the applicability of the proposed solutions was evaluated against a reference residential model, which served as the comparative framework for improving the target residential building. Within this context, Istanbul Province, where the case study was conducted, has over 5.4 million dwellings [71] and is the area most affected by the urban transformation process initiated for the renewal of the residential housing stock following the 1999 Marmara earthquake [72]. The primary goal of this urban transformation effort has been the accelerated demolition and reconstruction of the pre-2000 residential housing stock; thus, the pre-2000 building configurations were disregarded in the present study. Accordingly, the residential buildings constructed from 2000 onwards were chosen as the basis for defining the configurations that are suitable for the reference residential (RefR) model. The stratification details of the opaque and transparent components of the building envelope concerning the optical and thermophysical features were determined based on the limit values of overall heat transfer coefficients (U values) of the building envelope specified for Istanbul in the Thermal Insulation Requirements for Buildings standard, TS 825 [73]. The U values of the building envelope were as follows: exterior walls—0.55 W/m<sup>2</sup> K (Uwall\_limit: 0.60 W/m<sup>2</sup> K); roof—0.40 W/m<sup>2</sup> K (Uroof\_limit: 0.40 W/m<sup>2</sup> K); ground floor—0.53 W/m<sup>2</sup> K (Ufloor\_limit: 0.60 W/m<sup>2</sup> K); windows—2.60 W/m<sup>2</sup> K (Uwindow\_limit: 2.60 W/m<sup>2</sup> K). In terms of the building's energy systems, it was presumed that the heating energy demand would be met by a central hot-water boiler and that a radiator system would be present in the residential modules. The type of energy used was natural gas, and the boiler's nominal heat efficiency was 80%. Another presumption was that a split air-conditioning system with a seasonal energy efficiency ratio (SEER) of 4.20 would be used for cooling. The hot-water system was accepted to comprise stand-alone electrical water heaters with an 80% heat efficiency. A building occupancy schedule was developed based on the official survey of the Turkish family structure, and the user density used was 0.04 person/m<sup>2</sup> within the context of building usage [74,75]. The user activity level was defined as 110 W/person [76]. The occupancy schedule-based operation of the energy systems was presumed to ensure an indoor temperature of 20 °C between 07:00 and 23:00 and 13 °C for the rest of the time when heating was desired; when cooling was desired, the temperature was assumed to be 26 °C between 07:00 and 23:00 and 32 °C at other times. The natural air change rate used was 0.5 h<sup>−1</sup> [77].
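The RefR envelope U values and their TS 825 limits listed above can be collected into a single structure and checked for compliance (a lower U value means better insulation). This sketch only restates the figures given in the text, with illustrative Python names.

```python
# component: (design U value, TS 825 limit), both in W/(m2 K)
u_values = {
    "exterior_wall": (0.55, 0.60),
    "roof":          (0.40, 0.40),
    "ground_floor":  (0.53, 0.60),
    "window":        (2.60, 2.60),
}

# A component complies when its design U value does not exceed the limit
compliant = {name: u <= limit for name, (u, limit) in u_values.items()}
print(all(compliant.values()))  # the RefR envelope meets every limit
```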

#### 2.1.3. Definition of the Solution Space Design Variables

Defining the solution space in a way that meets the decision maker's requirements and the present and future building regulation criteria is important to ensure that the developed decision support workflow achieves a high level of efficiency [78]. The study was limited to five-storey, square-plan residential buildings within a detached settlement form (fixed H/W and orientation), where the settlement and building geometry-related variables were kept constant. This limitation reflects the building and settlement geometry most widely used in practice, given the high rate of production of residential settlement areas consisting of square-planned apartment blocks, and it also facilitates the parametric analysis process that constitutes the basis of the proposed workflow. However, the study did consider the various design variables of the building envelope, energy systems and renewable energy systems to cover the retrofit alternatives, ranging from conformance to the present national building standard TS 825 [79] requirements to combinations ensuring the development of nearly zero-energy buildings (e.g., passive house U values [80] and photovoltaic system usage). National and international standards and current residential market analysis studies were used to define the relevant range and distribution of the 13 different design variables defined in this context. The characteristics of the relevant design variables are presented in Tables 1 and 2.

#### *2.2. Step Two: Performance Analysis*

Performance analyses were conducted, and the relevant performance indicators were calculated for each retrofit alternative in the solution space using building performance simulation within the context of the second step of the workflow. Parametric energy simulations were conducted for this purpose so that improvements could be made to the residential building performance on an iterative basis in the early design stage. The parametric energy simulations aimed to determine the extent of the effect of the design variables on the performance indicators. Accordingly, the target residential building and settlement form identified in Section 2.1.2 were created using the DesignBuilder program (DesignBuilder Software Limited, Gloucestershire, UK) [81], a comprehensive interface for the EnergyPlus v8.7.0 software [82].
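Such a parametric sweep amounts to a loop over all design-variable combinations, each of which overwrites placeholder fields in a text-based input file. The following Python fragment is an illustrative sketch only (the study itself coupled MATLAB with EnergyPlus); the template text, the placeholder tokens such as `@WALL_INS@`, and the candidate values are all hypothetical, and a real workflow would pass each generated IDF to an EnergyPlus run:

```python
from itertools import product

# Hypothetical template: placeholder tokens stand in for the design-variable
# fields of a text-based EnergyPlus IDF.
IDF_TEMPLATE = """Material,
  WallInsulation, Rough, @WALL_INS@, 0.035, 40, 1400;
WindowMaterial:SimpleGlazingSystem,
  Glazing, @U_WINDOW@, @SHGC@;
"""

def generate_idfs(template, design_space):
    """Yield (combination, idf_text) for every design-variable combination.

    design_space maps placeholder token -> list of candidate values.
    """
    tokens = list(design_space)
    for combo in product(*(design_space[t] for t in tokens)):
        idf = template
        for token, value in zip(tokens, combo):
            idf = idf.replace(token, str(value))
        yield dict(zip(tokens, combo)), idf

# Hypothetical design variables and values (cf. Tables 1 and 2)
design_space = {
    "@WALL_INS@": [0.04, 0.06, 0.08],  # insulation thickness (m)
    "@U_WINDOW@": [2.60, 1.10],        # window U value (W/m2 K)
    "@SHGC@": [0.40, 0.60],            # solar heat gain coefficient
}

idfs = list(generate_idfs(IDF_TEMPLATE, design_space))  # 3 x 2 x 2 = 12 IDFs
```

Each yielded pair keeps the design-variable combination alongside the generated input text, so the postprocessing step can later relate simulation results back to the configuration that produced them.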


#### **Table 1.** Characteristics of solution space design variables.

EW—exterior wall; R—roof; GF—ground floor; W—window; SCE—solar control element; HS—heating system; CS—cooling system; HCS—heating-cooling system; DHW—domestic hot water; PV—photovoltaic; HCB—horizontal coring brick; AAC—autoclaved aerated concrete; XPS—extruded polystyrene; SW—stone wool; FSCE—fixed solar control element; EVB—external venetian blind; ASHP—air source heat pump; VRF—variable refrigerant flow. 1 Euro = 6.5250 Turkish lira = 1.1245 US dollars.

**Table 2.** Characteristics of the defined glazings.


The main input data file (IDF) created during preprocessing was manipulated based on the design variables presented in Tables 1 and 2; thus, new IDFs defining each retrofit alternative were created. A coupling function to provide a connection between EnergyPlus and MATLAB® [83] was written to make it possible to iteratively define all the different combinations of the solution space in the relevant design-variable fields of the text-based IDFs. This made it possible to automatically perform the dynamic energy simulations based on the EnergyPlus calculations of each retrofit alternative with the written MATLAB code using the climate data of Istanbul Province [84]. The comma-separated values (CSV) files of the conducted parametric energy simulations were processed in the MATLAB environment, and the performance indicators explained in Section 2.1.1 were calculated for each retrofit alternative. The PESs and LCC savings, which were used as the key performance indicators in the retrofitting of residential buildings, were calculated using the following equations:

$$PES = \left(1 - \frac{PEC_{alt}}{PEC_{RefR}}\right) \times 100 \tag{1}$$

*PEC<sub>alt</sub>* is the annual primary energy consumption of the retrofit alternative (kWh/m<sup>2</sup>-year) and *PEC<sub>RefR</sub>* is the annual primary energy consumption of the reference residential model (kWh/m<sup>2</sup>-year). The primary energy consumption (*PEC*) values of both the reference residential model and the retrofit alternatives were calculated with the following equation [85]:

$$PEC = \sum \left( E_{cons,fuel} \times f_{p,fuel} \right) - \sum \left( E_{PV} \times f_{p,PV} \right) \tag{2}$$

*E<sub>cons,fuel</sub>* is the annual energy consumption based on the type of fuel (kWh/m<sup>2</sup>-year); *E<sub>PV</sub>* is the annual amount of energy generated by the photovoltaic (PV) system (kWh/m<sup>2</sup>-year); *f<sub>p,fuel</sub>* is the primary energy conversion coefficient by fuel type; and *f<sub>p,PV</sub>* is the primary energy conversion coefficient related to the electricity generated by the PV system. In Türkiye, the primary energy conversion coefficients within the equation based on the type of fuel consumed are 1.00 for natural gas and 2.36 for electricity [77]. The primary energy conversion coefficient used for the electricity generated with the PV system was accepted to be the same as the primary energy conversion coefficient of electricity defined for Türkiye. The annual degradation in the power output of the PV modules was taken as 0.5% per year [86].
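As a minimal illustration of Equations (1) and (2), the following Python sketch computes *PEC* and *PES* using the Turkish primary energy conversion coefficients quoted above; the end-use consumption figures are hypothetical, chosen only so that the reference value lands near the 108.20 kWh/m<sup>2</sup>-year reported for the RefR model:

```python
# Primary energy conversion coefficients quoted in the text for Türkiye
F_PRIMARY = {"natural_gas": 1.00, "electricity": 2.36}
F_PV = 2.36  # PV electricity uses the national electricity coefficient

def pec(consumption_by_fuel, e_pv=0.0):
    """Annual primary energy consumption (kWh/m2-year), Equation (2)."""
    delivered = sum(F_PRIMARY[fuel] * e for fuel, e in consumption_by_fuel.items())
    return delivered - F_PV * e_pv

def pes(pec_alt, pec_ref):
    """Primary energy saving (%) relative to the reference model, Equation (1)."""
    return (1 - pec_alt / pec_ref) * 100

# Hypothetical end-use figures (kWh/m2-year)
pec_ref = pec({"natural_gas": 60.0, "electricity": 20.42})  # ~108.19
pec_alt = pec({"natural_gas": 30.0, "electricity": 15.0}, e_pv=5.0)
saving = pes(pec_alt, pec_ref)  # roughly 50% primary energy saving
```

Note how PV generation enters Equation (2) as a credit: it is subtracted after conversion with the same coefficient as grid electricity.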

$$LCC\ saving = \left(1 - \frac{LCC_{alt}}{LCC_{RefR}}\right) \times 100 \tag{3}$$

*LCC<sub>alt</sub>* refers to the life-cycle cost (EUR/m<sup>2</sup>) of the retrofit alternative, and *LCC<sub>RefR</sub>* refers to the life-cycle cost (EUR/m<sup>2</sup>) of the reference residential model. The life-cycle costs (*LCC*s) of both the reference residential model and the retrofit alternatives were calculated according to the following equation [87]:

$$LCC = I + Repl - Res + E + OM\&R \tag{4}$$

*I* is the initial investment cost (EUR/m<sup>2</sup>); *Repl* is the present value of the replacement cost (EUR/m<sup>2</sup>); *Res* is the present residual value (EUR/m<sup>2</sup>); *E* is the present value of the energy cost (EUR/m<sup>2</sup>); and *OM&R* is the present value of the non-fuel operating, maintenance and repair cost (EUR/m<sup>2</sup>).

The two important components in the LCC calculations are the calculation period and the costs. In the present study, the calculation period was accepted as 30 years. The costs of the building components that have no effect on the building energy performance, as well as the costs that are the same within the context of the alternatives, were not considered during the cost calculations [88]. The current market unit costs, based on the price proposals from the suppliers, were used to determine the initial investment costs of the alternatives in the solution space, and these costs are presented in Table 1. The unit costs only contain the material prices. The timing and number of the building system replacements depend on the estimated lifespan of the system and the length of the calculation period. Within this context, the calculation period used in this study encompasses the lifespan of the variables related to the building envelope [89], and no replacement is foreseen. The lifespan of the energy systems' components was obtained from Annex A of the EN15459 standard [90], and the annual maintenance and repair costs of these systems were also calculated by taking this annex into account. The maintenance and repair costs related to the PV system components (PV module + balance of system) were considered to be within the scope of renewable energy systems and were taken into account in the calculations [86,91]. The energy costs were calculated based on the local energy prices [92,93] in combination with the energy consumption according to the calculated fuel types and the energy generated from the PV systems. From the life-cycle perspective, the residual values were calculated for the components that have a lifespan longer than the specified calculation period. To determine the present values, the considered costs, other than the initial investment costs, were discounted in comparison to the calculation start year of 2019, and were based on a discount rate of 3% [88]. 
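The discounting logic behind Equation (4) can be sketched as follows. The 30-year period and 3% discount rate follow the assumptions stated above, but the cost figures and the single system replacement in year 20 are hypothetical, and the sketch treats energy and OM&R as constant annual costs for simplicity:

```python
def present_value(cost, year, rate=0.03):
    """Discount a cost incurred in a given year back to the start year."""
    return cost / (1 + rate) ** year

def lcc(investment, annual_energy, annual_omr, replacements=(),
        residual=0.0, period=30, rate=0.03):
    """Life-cycle cost per Equation (4): LCC = I + Repl - Res + E + OM&R."""
    energy = sum(present_value(annual_energy, n, rate)
                 for n in range(1, period + 1))
    omr = sum(present_value(annual_omr, n, rate)
              for n in range(1, period + 1))
    repl = sum(present_value(cost, year, rate) for cost, year in replacements)
    res = present_value(residual, period, rate)
    return investment + repl - res + energy + omr

# Hypothetical example: 40 EUR/m2 investment, 5 EUR/m2-year energy cost,
# 0.5 EUR/m2-year OM&R, one 8 EUR/m2 system replacement in year 20
total = lcc(40.0, 5.0, 0.5, replacements=[(8.0, 20)])
```

The initial investment is not discounted because it is paid in the calculation start year, whereas every later cash flow is brought back to present value before summation.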
In addition to the *LCC* calculations, the discounted payback periods (DPPs) within the framework of the same data and assumptions were calculated with the following equation [87]:

$$\sum_{n=1}^{t} \frac{\Delta C_{op}}{(1+i)^n} \ge I \tag{5}$$

Δ*C<sub>op</sub>* signifies the operational cost (*E* + *OM&R*) savings (EUR/m<sup>2</sup>), *I* is the initial investment cost (EUR/m<sup>2</sup>), *i* is the discount rate and *t* is the calculation period.
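A direct reading of Equation (5) yields a simple search for the discounted payback period: accumulate the discounted annual savings year by year until the initial investment is covered. The investment and saving values below are hypothetical:

```python
def discounted_payback(investment, annual_saving, rate=0.03, period=30):
    """Smallest year t satisfying Equation (5), or None if never recouped
    within the calculation period."""
    cumulative = 0.0
    for n in range(1, period + 1):
        cumulative += annual_saving / (1 + rate) ** n
        if cumulative >= investment:
            return n
    return None

# Hypothetical retrofit: 40 EUR/m2 investment, 4 EUR/m2-year operational saving
dpp = discounted_payback(investment=40.0, annual_saving=4.0)  # 13 years
```

Returning `None` when the loop completes mirrors the situation reported later for the energy-optimal solution, which cannot recoup its initial investment within the 30-year period.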

A solution space including the 147,456 possible alternatives for improving residential building performance was evaluated within the scope of the energy and economic performance. Parallel calculations were performed in the MATLAB environment to shorten the long duration needed for the simulations and the postprocessing of all the retrofit alternatives. An Intel® Core™ i7-9750H 2.60 GHz processor was used for the calculations.

#### *2.3. Step Three: Multiple-Criteria Decision Analysis*

Because the present study considered conflicting key performance indicators, it was not easy to identify the retrofit alternatives that could ensure the optimum performance in the wide solution space that was produced. Therefore, within this context, a multiple-criteria decision analysis method was used to investigate the retrofit alternatives that best met the conflicting objectives to solve the multi-objective optimisation problem. The analysis conducted in the MATLAB environment was based on Pareto optimisation of the total solution space. The objective functions used to determine the Pareto solutions (trade-off solutions) that best met the preferences of the target decision makers were the PESs and LCC savings, which are defined as the key performance indicators at the highest level and are provided below:

$$\text{Max}\{f_1(\overline{x}),\ f_2(\overline{x})\}, \quad \overline{x} = [x_1, x_2, \dots, x_m] \tag{6}$$

*f*<sub>1</sub> indicates the primary energy savings (%), *f*<sub>2</sub> is the life-cycle cost savings (%), *x* refers to the combinations of the design variables and *m* is the number of design variables.

None of the solutions can optimise all the objective functions at the same time in multi-objective optimisation problems. Consequently, a single optimal solution, as found in single-objective optimisation problems, was not obtained. However, as many solutions as possible were obtained with the Pareto optimisation that was performed. This can provide decision makers with choices from among the retrofit alternatives in the solution space based on their own preferences (design alternatives with a high energy performance for architects and those with a low life-cycle cost for residence owners).
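The Pareto screening described above (implemented by the authors in MATLAB) can be sketched in a few lines of Python: a retrofit alternative survives if no other alternative is at least as good in both objectives and strictly better in one. The sample (PES %, LCC saving %) pairs below are hypothetical:

```python
def pareto_front(points):
    """Return the non-dominated points when maximising both objectives.

    A point is dominated if some other (distinct) point is at least as good
    in both objectives, i.e. >= in both coordinates.
    """
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (PES %, LCC saving %) pairs for six retrofit alternatives
alternatives = [(80, -184), (38, 82), (49, 55), (45, 60), (30, 40), (45, 55)]
front = pareto_front(alternatives)  # keeps the four non-dominated trade-offs
```

The quadratic pairwise comparison is adequate here because the objectives are only two and the solution space, although large, is enumerated once; for much larger sets a sort-based sweep would be preferable.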

#### **3. Results**

#### *3.1. Performance Analysis Results*

An evaluation of the reference situation with regard to the performance analysis findings resulted in the calculation of the existing PEC of the target residential building (108.20 kWh/m<sup>2</sup>-year) and the LCC (183.62 EUR/m<sup>2</sup>). Next, 6144 automated parametric energy simulations based on the 13 different design variables were performed for the target residential building. The key performance indicators were calculated for each retrofit alternative within the solution space containing more than 1 × 10<sup>5</sup> design-variable combinations (Figure 2).

Each grey point in Figure 2 represents an original solution/retrofit alternative. The defined retrofit alternatives based on the different design-variable combinations with discrete values are concentrated in four main clusters related to the key performance indicators. We found that the design variables defined for domestic hot water (DHW) and the PV system, which are sub-categories of the energy systems and the renewable energy systems, respectively, had a noticeable effect on the concentration of the retrofit alternatives within four main clusters: I, II, III and IV. Within this context, the ranges for these design variables with a high sensitivity index are: (i) in main cluster I, the DHW η is 2.41 and a rooftop PV system is available; (ii) in main cluster II, the DHW η is 2.41 and a rooftop PV system is not available; (iii) in main cluster III, the DHW η is 0.86 and a rooftop PV system is not available; and (iv) in main cluster IV, the DHW η is 0.86 and a rooftop PV system is available. Furthermore, Figure 2 indicates that the key performance indicators of the retrofit alternatives are concentrated in six separate subclusters within each cluster. The design variables defined for the solar control element (SCE), which is a sub-category of the building envelope, and the heating and cooling systems, which are sub-categories of the energy systems, were found to have an effect on the concentration of the retrofit alternatives within these six subclusters.

**Figure 2.** The key performance indicators calculated for each retrofit alternative in the solution space.

#### *3.2. Multiple-Criteria Decision Analysis Results*

Pareto optimisation was performed to determine the best trade-off between the defined objective functions (PESs and LCC savings) within the scope of the multiple-criteria decision analysis. The Pareto solutions are presented within a scatter plot by using the calculated PESs, LCC savings and DPPs on the axes (Figure 3). This visualisation technique enables decision makers to choose the appropriate design solution and provides a deep awareness of the energy consumption and economic impact of each retrofit alternative in comparison to the reference situation. Within this context, the Pareto solutions are presented in a parallel coordinates plot to enable decision makers to visualise them with the key performance indicators of PESs, LCC savings and DPP data, and with the design variables that are components of the retrofit alternatives (Figure 4).

As seen in Figure 3, the solution of the multi-objective optimisation problem based on the conflicting PES (horizontal axis) and LCC saving (vertical axis) values was characterised by the Pareto solutions that best meet all the objectives that are of interest to the decision makers. Moreover, the DPP data calculated for the Pareto solutions were defined with various colours, and each point expressing the Pareto solutions on the scatter plot is coloured based on the relevant DPP data. These trade-off Pareto solutions can be classified into two subclusters according to whether or not renewable energy systems are present in the retrofit alternative. The calculations showed that Pareto solutions with a PES value of 38–49% and an LCC saving value of 55–82% would be able to recoup their initial investment within a calculation period of 30 years. An analysis of these Pareto solutions in relation to the configurations of the design variables revealed that the relevant retrofit alternatives do not contain the design variables with a higher initial investment cost than the others, such as external venetian blinds (EVBs) from the building envelope category, air-source heat pumps (ASHPs) and variable refrigerant flow (VRF) from the energy systems category and a PV system from the renewable energy systems category.
Furthermore, an analysis of all the retrofit alternatives within the Pareto solutions revealed that: (i) the energy optimal solution that maximised the PESs (PES: 80%; LCC saving: −184%) had a PEC value of 21.55 kWh/m<sup>2</sup>-year and an LCC value of 521.99 EUR/m<sup>2</sup>, and this retrofit alternative would be unable to recoup the initial investment cost within 30 years; and (ii) the cost optimal solution that maximised the LCC savings (PES: 38%; LCC saving: 82%) had a PEC value of 66.91 kWh/m<sup>2</sup>-year and an LCC value of 32.65 EUR/m<sup>2</sup>, and this retrofit alternative would be able to recoup the initial investment cost within one year. A summary of the design variables of the Pareto solutions was visualised using the parallel coordinates plot provided in Figure 4 in order to make the design-variable configurations related to all the Pareto solutions easy to understand and to enable the decision makers to make an informed decision regarding these variables. In this way, it is straightforward to determine which values can be recommended to the decision makers for the various design variables within the context of the Pareto solutions. For example, a thermal insulation thickness of 0.04 m was found to be appropriate for the ground floor (p6), as this design variable has a very low sensitivity index and thus a negligible effect on the key performance indicators. Moreover, in terms of energy systems, an analysis of the patterns in the multivariate data on the parallel coordinates plot shows that systems with high efficiency values are recommended for both key performance indicators. In this respect, the Pareto-optimal configurations contain the highest efficiency values defined for the boiler used for heating (p9), the air-conditioning systems used for cooling (p10) and the hot-water systems (p12), and these systems provide high LCC savings. In terms of primary energy savings, the higher efficiency levels of the VRF systems (p11) achieve a greater saving value than the other energy systems.

**Figure 3.** The Pareto solutions defined within the context of multiple-criteria decision analysis.

**Figure 4.** Visualisation of the configurations of the design variables related to the Pareto solutions and the relevant performance indicators within the parallel coordinate plots.

#### **4. Conclusions**

The study presented in this paper proposed a computational performance-based decision support workflow that can be used for retrofitting residential buildings and designing new residential buildings in the context of Istanbul Province (a temperate humid climate region) in Türkiye. The discussed decision support workflow used a systematic, comprehensive solution space search approach, starting with the modelling of the reference situation of a target residential building within a settlement form with a uniform configuration, and concluding with the production of a wide solution space based on various design variables related to the building envelope, energy systems and renewable energy systems. Performance analyses were conducted, and the Pareto solutions that fit the conflicting objectives were determined.

The suggested decision support workflow has significant potential to facilitate the target decision makers' (architects and residence owners) decision-making processes related to the multi-objective retrofitting of residential buildings in Istanbul, a city undergoing rapid urban development. However, the output may differ depending on variations in the defined input; thus, it is possible that the residential building performance findings estimated from a series of assumptions may differ from the values measured during the actual application. External factors, such as climate change and user behaviour, can also play a role in these varying results. Within the context of data uncertainty, one must also consider that the LCC calculation results may vary according to the economic data (discount rate and energy price escalation rate) that were mainly taken into account, as well as the different calculation periods. Furthermore, although outside the scope of the current study, it would be beneficial to present the obtained solution space using a simple, internet-based graphical interface; this would enable rapid feedback to the target decision makers and provide effective guidance in the early stage of the decision-making process. Notably, the clutter caused by many overlapping lines in parallel coordinate plots makes it difficult for decision makers to draw meaningful inferences; interactive visualisation techniques, such as filtering, brushing and zooming, can reduce this clutter and increase the plots' effectiveness. Nevertheless, this study is a good starting point for effectively guiding the target decision makers during that stage of the process. It is important to note that the development of the recommended performance-based decision support workflow based on the abovementioned issues is ongoing.

**Funding:** This study was supported within the scope of the International Postdoctoral Research Program of Scientific and Technological Research Council of Turkey (TUBITAK 2219).

**Institutional Review Board Statement:** Not applicable.

**Acknowledgments:** This research was largely conducted while Suzi D. Mangan (corresponding author) worked as a guest researcher at the Building Physics and Services Unit, Department of the Built Environment, Eindhoven University of Technology, from 2019 to 2020. The author would like to thank the members of the Building Performance Chair of Eindhoven University of Technology, led by Jan L.M. Hensen, for their valuable contributions throughout the study. This paper was also presented at the 5th Southeast European Conference on Sustainable Development of Energy, Water and Environmental Systems, held in Vlore on 22–26 May 2022.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Review* **A Comprehensive Analysis of In-Line Inspection Tools and Technologies for Steel Oil and Gas Pipelines**

**Berke Ogulcan Parlak \* and Huseyin Ayhan Yavasoglu**

Department of Mechatronics Engineering, Yildiz Technical University, Istanbul 34349, Turkey **\*** Correspondence: boparlak@yildiz.edu.tr

**Abstract:** The transportation of oil and gas through pipelines is an integral aspect of the global energy infrastructure. It is crucial to ensure the safety and integrity of these pipelines, and one way to do so is by utilizing an inspection tool called a smart pig. This paper reviews various smart pigs used in steel oil and gas pipelines and classifies them according to pipeline structure, anomaly-detection capability, working principles, and application areas. The advantages and limitations of each sensor technology that can be used with the smart pig for in-line inspection (ILI) are discussed. In this context, ultrasonic testing (UT), electromagnetic acoustic transducer (EMAT), eddy current (EC), magnetic flux leakage (MFL), and mechanical contact (MC) sensors are investigated. This paper also provides a comprehensive analysis of the development chronology of these sensors in the literature. Additionally, combinations of relevant sensor technologies are compared for their accuracy in sizing anomaly depth, length, and width. In addition to their importance in maintaining the safety and reliability of pipelines, the use of ILI can also have environmental benefits. This study aims to further our understanding of the relationship between ILI and the environment.

**Keywords:** in-line inspection; smart pig; sensor; oil; natural gas; pipeline; environment

#### **1. Introduction**

Today's main energy sources are petroleum and natural gas. If the primary energy consumption in the United States of America (USA) is analyzed by source, oil accounts for 35% of the total and natural gas accounts for 23% [1]. The transportation of these energy sources is important in terms of accessing energy. Pipelines are considered the most efficient way to transport oil and gas resources [2]. Furthermore, pipelines surpass other modes of transportation, such as road, rail, air, and sea, over long distances in terms of convenience, cost, safety, and environmental friendliness. As a result, almost all natural gas is transported via pipelines. For instance, about 97% of the natural gas and oil transported in Canada is carried through pipelines. As of 2017, the total length of gas and oil pipelines in the world is estimated to be approximately 3,550,000 km, with gas pipelines measuring 2,965,600 km in length and oil pipelines measuring 584,000 km [3].

Oil and natural gas pipelines spanning kilometers are susceptible to accidents for a variety of reasons. The main causes of pipeline accidents are corrosion, gouges, plain and kinked dents, smooth dents on welds, smooth dents with other types of anomalies, manufacturing defects in the pipe body, girth and seam weld defects, and cracking [4]. If these anomalies are not addressed, they may result in pipeline failures such as leaks or ruptures, leading to increased costs, environmental risks, and even catastrophic accidents. According to the data of the US Department of Transportation Pipeline and Hazardous Materials Safety Administration (PHMSA), there were 681 accidents labeled as serious in the USA between 2002 and 2021. In these accidents, 260 people lost their lives, 1112 people were injured, and financial losses of USD 11,043,742,158 were incurred. Apart from that, as a result of the rupture of a pipeline in Michigan in July 2010, 1,000,000 barrels of oil leaked and caused great damage to the environment [5].

**Citation:** Parlak, B.O.; Yavasoglu, H.A. A Comprehensive Analysis of In-Line Inspection Tools and Technologies for Steel Oil and Gas Pipelines. *Sustainability* **2023**, *15*, 2783. https://doi.org/10.3390/su15032783

Academic Editors: Oz Sahin and Russell Richards

Received: 9 January 2023 Revised: 31 January 2023 Accepted: 1 February 2023 Published: 3 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Pipeline integrity (PI) programs must be used to maintain the gas supply and prevent accidents. PI is threatened by many factors. These factors can be listed as mechanical [6], operational [7], natural [8], and third-party [9]. Periodic assessment of PI is needed to prevent pipeline accidents and thus higher costs, environmental risks, and fatal accidents. Methods used for the assessment of PI are hydrostatic pressure testing [10], direct assessment [11,12], and in-line inspection (ILI) [13].

Hydrostatic pressure testing is used to locate leaks and confirm the performance and durability of pipes, tubing, and coils [14]. During testing, the pipeline is typically filled with water, and the pressure is maintained above the maximum operating pressure for a period of time. Meanwhile, critical anomalies in the pipeline cause leakage. This proves that anomalies that do not leak in the pipeline are not critical and that it is safe to operate the pipeline at maximum operating pressure. Testing can damage the pipeline, especially when performed at levels higher than 100 percent of the specified minimum yield strength of the pipe material, so ILI is often preferred over hydrostatic testing [15].

Direct assessment is the inspection of pipelines by operators. Operators should combine their knowledge of the pipeline section's physical properties and operating records with the results of the examination, inspection, and evaluation [16]. Direct assessment is effective on limited anomalies such as internal corrosion, external corrosion, and stress corrosion cracking (SCC). Therefore, ILI offers a more comprehensive assessment than direct assessment.

The only inspection technology that offers extensive information regarding anomalies that do not pose an immediate threat to the PI is ILI [17]. The ILI tools can classify anomaly types and specify their orientation, size, and location [18]. There are numerous ILI robots that specialize in detecting metal loss, cracks, geometry deformations, leaks, and wax deposition [19]. These robots were classified as pig type [20], wheel type [21–23], caterpillar type [24,25], wall-press type [26], walking type [27], inchworm type [28], and screw type [29] by Choi and Roh [30].

Pigs are the most-preferred robots in the ILI industry due to their features such as longer operation time, less downtime, being driven by product flow, and compatibility with developing sensor technologies. Pigs are essentially cylindrical electronic tools equipped with a traction system that completely covers the inner wall of the pipeline, a battery system that provides energy for the pig, and sensors that detect anomalies in the pipeline. These electronic tools can be used as geometry tools, mapping tools, metal-loss tools, and crack-detection tools [31]. Furthermore, smart pigs can be classified according to many factors. Figure 1 summarizes these factors.

**Figure 1.** Classification of smart pigs [32].

Pipelines are classified as piggable or un-piggable. Pig operations can be conducted without difficulty in pipelines that are piggable. In un-piggable pipelines, the pig cannot be operated due to factors limiting the mechanical characteristics of the pig such as sharp turns, consecutive bends, and tee transitions in the pipeline. In addition, piggable lines may become un-piggable over time for various reasons [33]. These reasons can be listed as debris accumulation, configuration changes, and corrosion formation in the pipeline. Technologies such as closed-circuit television cameras (CCTV) [34] and smart ball [35] are used in un-piggable pipelines [36]. For piggable pipelines, the smart pigs that have been summarized in the literature by Song et al. [37] are the most suitable tool for the ILI of pipelines because of their speed, the sensor technology they can carry, and their ability to survey very long distances in one go. In the existing literature, the smart pig types and their sensor technologies have not been comprehensively evaluated in terms of anomaly-detection capabilities, historical developments, and hybrid sensor topologies.

This study aims to address the lack of understanding regarding the causal relationship between pipeline anomalies, pig-adaptive sensor technologies, and the environment in the current literature. The core of this study is based on a conference paper [38] presented at the 3rd Latin American Conference on the Sustainable Development of Energy, Water, and Environmental Systems, which includes literature reviews on basic ILI sensors. The main contributions of this paper are as follows:


Following this introductory information, Section 2 provides the causes, hazard potentials, and management methods of pipeline anomalies. Section 3 includes the current status of ILI sensor technologies, as well as the advantages and limitations of basic pig-adaptive sensors, their working principles, and development chronologies in the literature. The anomaly-detection capabilities of ILI sensors and their hybrid structures are extensively evaluated in Section 4. The relationship of ILI sensors to the environment is given in Section 5 and finally, Section 6 presents the conclusion.

#### **2. Types of Pipeline Anomalies**

Anomalies occur in oil and natural gas pipelines for various reasons. The formation process of these anomalies can be examined in three classes. While corrosion anomalies in the pipeline develop over time, anomalies caused by mechanical damage or disasters occur independently of time. Anomalies originating from fabrication, such as bends, buckles, and wrinkles, are classified as stable. The management process of anomalies is associated with these formation processes. Therefore, the formation process of anomalies is important for pipeline integrity management (PIM).

PIM is a systematic approach to ensuring the safe operation of pipelines. This process involves identifying and mitigating potential hazards that could lead to pipeline accidents. PIM can be divided into three main categories: assessment, planning, and management. Assessment involves close observation of the internal and external sections of pipelines to identify any anomalies and determine the overall condition of the pipelines. Planning encompasses all the activities aimed at maintaining or repairing pipelines, such as defining operations and procedures, conducting inspections, and performing maintenance and monitoring. Management includes tasks such as data management audits, fit-for-service evaluations, burst pressure assessments, and third-party verification. Through this comprehensive approach, PIM plays a vital role in protecting the environment and communities.

Pipeline anomalies can cause structural stress on the pipeline and increase the risk of pipeline failure. As such, it is important to examine the causes of occurrence, hazard potentials, and management methods of pipeline anomalies as part of PIM. Walker [39] grouped the most important pipeline anomalies as geometric deformation, metal loss, and cracking. This paper provides comprehensive information regarding the formation and management process of these anomaly groups.

#### *2.1. Geometric Deformation*

The pipeline may be subject to operational distortions such as pressure fluctuations, excessive mechanical forces, poor workmanship, or third-party damage. These distortions are the main source of geometric deformations in the pipeline, which include metal movement, denting, metal removal, cold working of the underlying metal, and puncturing [40]. Among them, the dent is one of the three most common anomalies (i.e., corrosion, dents, and cracks) encountered in oil and gas pipelines [41]. Dents form on the surface of pipes due to external loads, such as those from excavation activities during pipeline construction [42]. Dents can be classified as plain dents [43] and composite dents [44]. Plain dents pose no major threat to pipeline integrity (PI), whereas composite dents pose a greater threat [45].

Dents cause stress and strain concentration [46,47]. Therefore, dent management is an important issue for PIM. Different approaches can be used for dent management. Warman et al. [48] presented an approach that Duke Energy Gas Transmission has implemented for dent management. This approach involved characterizing dents and mechanical damage in the pipeline system by integrating data collected from high-resolution ILI tools operated over 2000 miles. Torres and Piazza [49] developed a new engineering tool for the integrity management of dents using finite element analysis (FEA). This tool was used to develop fatigue life trends. There are other studies on the management and characterization of geometric deformations in the literature [50–52]. However, the most common use for detecting geometric deformations is pigs with mechanical contact (MC) probes.

#### *2.2. Metal Loss*

The pipeline may be exposed to rusting, cavitation, and corrosive substances over time. These factors are the main sources of metal loss in the pipeline, which can manifest as gouging, corrosion, and erosion. Among them, corrosion is one of the most important safety problems in oil and gas pipelines, as shown in Figure 2, and accounts for approximately 30% of all equipment failures [53]. Corrosion continually reduces the pipe-wall thickness and can significantly accelerate the formation of leaks [54].

**Figure 2.** Percentages of oil pipeline failure types [55].

Accurate estimation of the corrosion rate is of primary importance for integrity management planning of a pipeline [56]. Machine-learning-based approaches are suitable approaches to predict the behavior of corrosion occurring in pipelines [57]. Hamed et al. [58] proposed a nonparametric calibration model based on k-nearest neighbor interpolation to improve field data collected from ultrasonic testing (UT) and magnetic flux leakage (MFL). The model improved the accuracy of pipeline wall-thickness measurements. In this way, the life of the critical part of the pipeline can be better predicted.
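The k-nearest-neighbor idea behind such a calibration can be sketched in a few lines. The function name, calibration pairs, and choice of k below are illustrative assumptions for this sketch, not details from the model of Hamed et al. [58]:

```python
def knn_calibrate(reading, calib, k=3):
    """Correct a tool wall-thickness reading using the k calibration pairs
    (tool reading, field-verified thickness) closest to the reading."""
    nearest = sorted(calib, key=lambda pair: abs(pair[0] - reading))[:k]
    return sum(true for _, true in nearest) / k

# Hypothetical (tool reading, field-verified thickness) pairs in mm,
# e.g. from dig-up verifications
calib = [(6.1, 6.4), (6.8, 7.0), (7.5, 7.6), (8.2, 8.5), (9.0, 9.1)]
corrected = knn_calibrate(7.0, calib)  # averages the 3 closest verified values
```

In this toy case, the three calibration readings closest to 7.0 mm are 6.8, 7.5, and 6.1, so the corrected value is the mean of their verified thicknesses.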

ILI tools are widely used for corrosion detection in oil and gas pipelines [59]. ILI is accepted as the optimum approach to detect and characterize the anomalies as well as reveal the growth rate information of the active anomalies in the pipeline [60]. Low and Selman [61] outlined the capabilities and limitations of ILI methods used to inspect corroded pipelines. Huyse et al. [62] presented a study that tested the performance of various ILI methods on top of the line (TOL) corrosion. According to the study, MFL technology showed the best success in detecting TOL corrosion. Palmer and Schneider [63] revealed that hybrid sensor technology, which combines different ILI methods such as UT, electromagnetic acoustic transducer (EMAT), MFL, and eddy current (EC), will be effective in sizing complex metal losses.

#### *2.3. Cracking*

Pipelines are constantly exposed to environmental effects, external loads, and ground movements due to their nature. These effects are the main source of cracking in pipelines. In addition, cracks often occur in a hybrid form in oil and natural gas pipelines. Examples of these hybrid anomalies are crack in corrosion (CIC), SCC, and crack in dent (CID).

Crack-like anomalies may occur simultaneously with corrosion anomalies and represent a new hybrid form of an anomaly called CIC [64]. Bedairi et al. [65] presented a study for predicting the failure pressures of CIC anomalies to determine the applicability of PI assessment methods. The predicted failure pressures were conservative when compared to the experimental results, with a mean difference of 17.4% for five different CIC anomalies with various depths.

SCC is defined as the growth of cracks in a corrosive environment. These cracks often have a high aspect ratio and pose a major threat to the PI [66]. As a result, estimating the crack growth rate (CGR) of SCCs is critical. Song [67] developed a mathematical model for this purpose. The model was used to predict CGRs with two methods, the potentiodynamic polarization curve and the Butler–Volmer equation. The potentiodynamic polarization curve was good at predicting high CGRs, while the Butler–Volmer equation was good at predicting low CGRs. Ryakhovskikh and Bogdanov [68] determined the conditions for operating a pipeline with SCC cracks by relating the temporal variation of the CGR to accident statistics. The findings showed that pipes with crack depths between 0.1 and 0.25 δ (where δ is the pipe-wall thickness) could be left operating until a scheduled inspection if the CGR is estimated. Palmer et al. [69] presented a case study on CGR estimation based on repeated EMAT data. The method simply involved passing the EMAT sensor over the relevant crack periodically. Estimated CGRs determined during the presented case study showed reasonable results in line with the available literature.
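In its simplest form, a CGR estimate from repeated inspections divides the change in measured crack depth between two ILI runs by the elapsed time; a remaining-life figure then follows from a critical depth. The depths, interval, and critical depth below are invented for illustration, and real assessments use more sophisticated growth models:

```python
def crack_growth_rate(depth1_mm, depth2_mm, years):
    """Linear CGR estimate from two successive ILI depth calls (mm/year)."""
    return (depth2_mm - depth1_mm) / years

def years_to_critical(depth_now_mm, depth_crit_mm, cgr_mm_per_year):
    """Years until the crack reaches a critical depth at a constant CGR."""
    return (depth_crit_mm - depth_now_mm) / cgr_mm_per_year

cgr = crack_growth_rate(1.0, 1.6, 3.0)    # 0.2 mm/year between two runs
lead = years_to_critical(1.6, 2.5, cgr)   # 4.5 years to a 2.5 mm critical depth
```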

Dents adjacent to welds can cause cracks to develop in the welds and cause a combined anomaly referred to as a CID or dent–crack anomaly [70]. These anomalies can lead to major accidents such as bursts in the pipeline. The effect of the location of the CID anomaly on burst pressure has been discussed in the literature [71–73].

ILI is frequently used in the management of oil and gas pipeline cracks. UT has proven to be the most suitable and reliable technology for crack detection in pipelines [74]. However, the application of UT technology requires a liquid medium. Therefore, EMAT is used as a substitute for UT technology in gas pipelines.

#### **3. In-Line Inspection Sensor Technologies**

The ILI system collects data from inside pipelines and is a key component of the pipeline industry's integrity management system that promotes safe, efficient, and cost-effective pipeline operations [75]. ILI technologies are constantly evolving alongside developments in sensor technologies. INGU Solutions has developed an inspection ball called Pipers [76], which includes a built-in three-axis accelerometer, gyroscope, magnetometer, and pressure and temperature sensors. The working principle of Pipers is similar to that of MFL, but the inspection ball uses the Earth's magnetic field instead of the magnetic field created by a permanent magnet. While the inspection ball is low-cost and easy to operate, its measurement accuracy is low compared to conventional pig-adapted ILI sensors.

Pig-adapted novel ILI sensors are frequently encountered in the literature. Sampath et al. [77] introduced a new pig-adapted non-contact optical sensor array method for real-time inspection and non-destructive evaluation (NDE) of pipelines. The proposed sensor array included simple light-emitting diodes to send light to the pipeline's inner wall and light-dependent resistors to receive the reflected light. The new array was successful in detecting deposits, cavities, and uniform corrosion. Feng et al. [78] developed a novel alternating current field measurement probe that can be integrated into pigs and used in the inspection of natural gas and oil pipelines. The proposed probe consisted of sensors, supports, and inductive components (core and coils) and showed good results on corrosion and cracks. Sampath et al. [79] developed a smart pig that detects metal loss and geometric deformations using an optical sensor and bimorph sensor arrays. The proposed bimorph sensor array is based on the piezoelectric principle and consists of a bimorph sensor, probe tip, and cantilever beam components. When the probe tip passes over a geometric deformation, the cantilever beam is bent, the bending strain is measured by the bimorph sensor, and anomaly characterization is performed. The results showed that the proposed ILI method could accurately identify the anomaly size and location.

Despite extensive research on novel pig-adapted ILI sensors, these technologies tend to be inferior in practical application compared to the longstanding and preferred basic ILI sensor technologies in the industry. Basic ILI sensor technologies can be classified as UT [80], EMAT [81], EC [82], MFL [83], and MC [84]. This paper presents the methodology, limitations, and recent developments of basic ILI sensor technologies.

#### *3.1. Ultrasonic Testing*

UT is typically performed using a handheld probe that is passed over the surface of the material being inspected. The probe emits sound waves that travel through the material and are reflected back to the probe. The time it takes for the sound waves to travel through the material and be reflected back to the probe is measured, and this information is used to detect anomalies in the material. Figure 3 illustrates the functioning principle of a UT sensor.
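In pulse-echo mode, the time-of-flight principle described above reduces to a one-line computation: the wall thickness is half the round-trip distance travelled by the pulse. The velocity value below is a typical longitudinal sound speed for steel, and the function name is illustrative:

```python
def wall_thickness_mm(tof_us, velocity_m_s=5900.0):
    """Wall thickness from a pulse-echo round-trip time of flight.

    tof_us: round-trip echo time in microseconds.
    velocity_m_s: longitudinal sound velocity (~5900 m/s in steel).
    """
    # Divide by 2 because the pulse travels to the back wall and returns,
    # then convert metres to millimetres.
    return velocity_m_s * (tof_us * 1e-6) / 2 * 1000

t = wall_thickness_mm(3.39)  # ~10.0 mm for a 3.39 microsecond round trip
```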

In comparison to other technologies, UT is currently the most reliable ILI technique [85]. It is more feature-sensitive than MFL and gives better results on thick-walled pipes. UT waves can detect discontinuities both above and below the surface in materials such as metals and polymers. However, because its methodology requires a liquid couplant, UT can only be used in liquid media. In addition, the pig should not accelerate to high speeds because of the difficulty of coupling the sensor to the pipe wall. The anomalies and features that UT can detect include internal and external metal loss, flanges, cracking, and welds.

**Figure 3.** The working principle of a UT sensor.

The development of UT technology on different anomalies has occurred over time. The first ILI tool using ultrasonic technology for crack detection was introduced in the early 1990s [86]. This tool allowed inline detection of both internal and external cracks. Reber et al. [87] presented a design that made it possible to use the same tool for metal-loss or crack inspection. In this design, the same basic instrument assembly and electronic module could be used for both configurations by changing the sensor carrier. Significant savings in mobilization and demobilization costs have been achieved as a single tool can be used for both inspection tasks. However, two physically separate operations were still necessary. Reber et al. [88] later introduced another design that eliminated this problem and offered both metal-loss and crack detection in one operation. Thanks to this new array, higher inspection speeds, improved resolution, and accuracy have been achieved compared to the previous design. Dobmann et al. [89] examined the performance of UT at pipe welding points. Ultrasonic sensors arrayed in different structures according to axial, spiral, and girth weld types successfully detected welds in an investigation mission to detect transverse anomalies in an offshore pipeline. The internal rotary inspection system (IRIS), an ultrasonic pulse/echo immersion technique, first introduced by MatEval [90] in the early 1980s, was used by Birchall et al. [91] for anomaly detection in pigs. As a result of this study, fundamental pipeline anomalies such as external erosion, internal dent, and internal corrosion have been successfully detected. Slaughter et al. [92] presented a case study on the new-generation UT ILI crack tool called Combo WM CD. This tool showed improvements in high sensitivity, reduction in signal losses, higher resolution, probability of detection (POD), and anomaly sizing. 

UT technology can also be used to detect anomalies in unreachable locations in the pipeline. There are ultrasonic techniques such as higher-order mode cluster [93], multi-skip [94], and S0 mode Lamb wave [95] in the literature. Khalili and Cawley [96] used these techniques to detect corrosion at unreachable points in the pipeline and classified them according to their success. Of these techniques, the higher-order mode cluster was least affected, owing to its low surface motion, and showed the best overall performance.

UT tools generally have three resolution components. These are axial resolution, circumferential resolution, and depth resolution. Willems [97] examined the developments in UT technology developed in the second decade of the 21st century in terms of hardware, data digitization, data processing, and data storage, and attributed the increase in these three basic resolutions to these concepts. Of these concepts, data storage has always been a problem because the UT tool contains too many transducer systems [98]. In this case, reducing the ultrasound data may be the solution. In the literature, there are techniques such as entropy coding [99], transformation techniques [100], techniques based on behavioral information of ultrasound signals [101], and FPGA-based architecture techniques [102]. The most efficient of these methods is the FPGA-based architecture technique, with an average data reduction of 96.5%.
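To make the notion of a data-reduction ratio concrete, the sketch below compresses a synthetic 8-bit A-scan with DEFLATE (which includes Huffman entropy coding). This only illustrates how the metric is computed; it is not the FPGA-based architecture of [102], and the synthetic decaying-echo signal is an assumption:

```python
import math
import zlib

# Synthetic 8-bit A-scan: a decaying oscillation around mid-scale,
# standing in for a digitized ultrasonic echo trace.
samples = bytes(
    int(127 + 120 * math.exp(-i / 200) * math.cos(i / 5)) % 256
    for i in range(4096)
)

# DEFLATE = LZ77 dictionary coding + Huffman entropy coding.
compressed = zlib.compress(samples, level=9)

# Data-reduction ratio: fraction of the raw bytes that were eliminated.
reduction = 1 - len(compressed) / len(samples)
```

The long near-constant tail of the decaying signal compresses well, which is why reduction ratios for real ultrasound data depend heavily on signal content.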

#### *3.2. Electromagnetic Acoustic Transducer*

EMAT is a UT method that generates ultrasonic waves directly in the material being inspected, rather than transmitting them into the material from a piezoelectric transducer through a couplant. The EMAT sensor employs an electromagnetic field to produce ultrasonic waves. This field is generated by an electrical current that flows through a coil. The electromagnetic field excites the surface of the material being tested, causing it to oscillate. As the surface oscillates, it creates ultrasonic waves that travel along the surface of the material. The receiver circuit in the EMAT sensor detects these ultrasonic waves, and the anomaly is then characterized using the frequency, amplitude, and other properties of the wave. The working principle of an EMAT sensor is given in Figure 4.

**Figure 4.** The working principle of an EMAT sensor.

EMAT technology is relatively new compared to other basic ILI technologies. EMAT does not require any coupling fluid. Hence, it can be used in both liquid and gas pipelines. Scanning is reliable since there is no requirement for coupling between the probe and the pipe wall, but the probe and the wall must be separated by a specific distance. The anomalies that EMAT can detect are blisters, laminations, cracking, wall thickness, etc. In addition, EMAT technology is known to be a reliable and accurate method for the detection, identification, and sizing of hybrid anomalies such as SCC [103].

EMAT and UT technologies have significant differences in methodology. While UT concentrates on classical wave modes generated by piezoelectric probes, EMAT is the most advanced technology concentrating on shear horizontal (SH) and guided waves [104]. Parameters such as the amplitude and phase shift of these wave modes are important in the classification and sizing of anomalies. Hirao and Ogi [105] developed an EMAT technique to detect corrosion anomalies on the outer surfaces of steel pipelines and determined that the amplitude and phase shift of the SH1 mode are more sensitive to the presence of anomalies than those of the SH0 mode. In the study, round-trip signals of the SH0 and SH1 modes proved to be uniquely responsive to surface anomalies. Gauthier et al. [106] tested the probe's success on notches on a pipe, using multi-mode SH waves generated by EMAT. Zhao et al. [107] demonstrated that the n1 mode SH wave generated by EMATs can successfully detect and classify mechanical dents on the outer surface of the pipe with a depth of 25% or more of the wall thickness. Klann and Beuker [108] presented a study on the detection of cracks in steel pipelines using SHn waves produced by EMAT. The detection tool using SHn waves produced by EMAT was compared with the traditional MFL method. In anomaly classification, EMAT performed better than MFL. Cong et al. [109] proposed a new EMAT design based on a magnetostriction mechanism to generate and receive longitudinal guided waves. The proposed design has advantages such as small volume and light weight, which helps to increase inspection efficiency in anomaly detection in pipes.

Apart from wave modes, there are different parameters such as configuration that affect the anomaly-sizing success of EMAT. Tu et al. [110] introduced a different EMAT configuration called a ring array to enlarge the detection range. Thanks to this configuration, the entire cross-section of the pipe can be scanned with a single solenoid coil and enough permanent magnets. The new configuration yielded successful results in detecting pipe-wall thickness and anomalies in the pipeline. In addition to configuration and wave mode, the lift-off distance of the EMAT is an important criterion. For the EMAT to work effectively, there must be a constant lift-off distance between the pipe's inner wall and the sensor. This distance causes the EMAT to be sensitive to noise [111]. Noise reduction is a critical step to increase the reliability of the EMAT system [112] and noise can be reduced using signal processing methods [113–118].

#### *3.3. Eddy Current*

The EC method is only applicable to conductive materials. When an EC sensor is used to test a gas pipeline or other conductive structure, the sensor generates an electromagnetic field that penetrates the surface of the material. When this magnetic field intersects with the conductive material, ECs are induced in it. If there are any defects or discontinuities in the material, such as cracks or corrosion, they disrupt the flow of the ECs. The sensor can detect these disruptions and use them to identify the location and size of the anomalies. Figure 5 demonstrates the operation of an EC sensor.

The EC technique is frequently used for crack detection [119,120] due to the advantages arising from its methodology. EC also offers non-contact testing and no residual effects. However, as with EMAT technology, the lift-off effect is exhibited here due to the non-contact nature of the tool. The anomalies that EC can detect are cracks, laminar anomalies, etc.
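A practical consequence of the EC methodology is the skin effect: the induced currents concentrate near the surface, with a standard penetration (skin) depth of δ = 1/√(πfμσ), which is why EC struggles to size deep or far-side metal loss. The material values below are rough illustrative figures for carbon steel; real permeability and conductivity vary widely:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def skin_depth_mm(freq_hz, mu_r, sigma_s_per_m):
    """Standard eddy-current skin depth: delta = 1 / sqrt(pi * f * mu * sigma)."""
    delta_m = 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma_s_per_m)
    return delta_m * 1000  # metres -> millimetres

# Illustrative carbon-steel values: mu_r ~ 100, sigma ~ 5e6 S/m, 10 kHz excitation
d = skin_depth_mm(10e3, mu_r=100, sigma_s_per_m=5e6)  # ~0.23 mm
```

At these values the currents penetrate only a fraction of a millimetre, motivating the lower-frequency and pulsed techniques discussed below.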

Different EC imaging methods can be used to detect anomalies in metal structures. These methods are low-frequency EC imaging [121], multi-frequency EC imaging [122], transient EC imaging [123], and pulsed EC (PEC) imaging [124,125]. Among these imaging methods, PEC is frequently seen in the literature. By developing a feature-extraction algorithm based on principal component analysis, Tian et al. [126] demonstrated that PEC provides more anomaly information than the traditional peak value and time. The new feature-extraction algorithm eliminated the lift-off effect and proved effective in detecting cracks without scanning. Safizadeh and Hasanian [127] presented an advanced PEC technique for the detection of corrosion anomalies. The optimum test parameters were obtained by simulating the PEC test on a pipe with Maxwell software. The results obtained from the artificial anomalies produced on the inner surface of a gas pipe revealed that the lift-off effect was eliminated with the PEC technique and the corrosion was successfully detected. Arjun et al. [128] presented a study on optimizing the PEC probe configuration for deeper penetration of the magnetic field into the material. The detection sensitivity of the optimized probe was investigated using notches machined at different depths, and the detection sensitivity of the PEC probe was increased. Tian et al. [129] proposed a new method for thickness measurement by analyzing the PEC detection system. The method does not need to evaluate electromagnetic parameters, which greatly simplifies the machine-learning process, improves measurement accuracy, and can make the system more stable than traditional approaches. Yu et al. [130] proposed an approach to reduce lift-off noise for detecting anomaly depth or width, based on the investigation of the relationship between the peak value of the difference signal and lift-off. The proposed approach was validated by experiment, and the results showed that lift-off noise can be greatly reduced in the PEC technique. Park et al. [131] developed PEC technology to detect the amount of thinning of a carbon steel pipe wall covered with insulation. The results showed that the PEC system could detect wall thinning in an insulated pipeline. Piao et al. [132] proposed a new high-speed PEC-detection method to detect internal/external anomalies using conductivity-dependent and permeability-dependent distribution models of the ECs induced in the inner surface of steel pipes. The method showed high inspection speed, good linearity, and superior sensitivity.

**Figure 5.** The working principle of an EC sensor.

EC can gain expertise in different types of anomalies over time because of its replaceable structure. For example, pipelines manufactured with a corrosion-resistant alloy (CRA) cannot be inspected with UT or MFL: UT sensors cannot transmit the sound wave through the CRA liner, and MFL cannot inspect the pipe because its magnetic field cannot penetrate the CRA liner. Asher and Boenisch [133,134] introduced a new ILI sensor technology based on the magnetic EC and multi-differential EC principles developed by Innospection Ltd. and ExxonMobil. This sensor was tested on anomalies artificially added to a CRA pipe, such as metal loss, erosion, internal girth weld, and crack-like defects. All anomalies were detected with the sensors, including very small ones. Remote field EC (RFEC) has advantages such as being unaffected by the skin effect and material properties in anomaly detection of metal pipelines. However, the RFEC probe is large, and the signal received by the sensing coil is weak. She et al. [135] proposed a new configuration for RFEC that reduced the probe size and strengthened the signal received by the sensing coil.

#### *3.4. Magnetic Flux Leakage*

MFL sensors typically consist of a permanent magnet, which magnetizes the material being tested, and a sensor coil positioned between the magnet's poles. If there are any imperfections or anomalies in the material, they disrupt the magnetic field and cause a leakage of magnetic flux. This leakage is then detected by the sensor coil, allowing the presence and location of anomalies in the material to be determined. Figure 6 illustrates the operation of an MFL sensor.

One of the oldest methods for detecting metal loss is the MFL method [136]. MFL is also the most common ILI technology. MFL sensors are a highly effective NDE tool for the inspection of ferromagnetic materials, such as steel pipes. MFL testing is relatively fast and inexpensive compared to other NDE methods, such as UT. It can conveniently determine the location and orientation of the anomaly and whether it is inside or outside the pipe. The anomalies that MFL can detect are metal losses, metallurgical changes, dents, etc.

The success of the MFL technique depends on many parameters. One of these parameters is a sensitive magnetic sensor. Pham et al. [137] developed a planar Hall magnetoresistance magnetic sensor for use in MFL. The results showed improvements in the bipolar and linear responses to the magnetic field, high sensitivity, and low thermal drift. To find out which parameters affect the MFL signal, FEA is often used [138]. Chen et al. [139] studied MFL signals on four different corrosion anomalies with three-dimensional FEA. The findings showed that the relative position of the corrosion affects the amplitude of the MFL signal, as does the morphology of the anomaly. Conventional MFL technology creates a magnetic field aligned with the axis of the pipe being inspected. Therefore, while MFL can easily detect anomalies perpendicular to the field, it has difficulty detecting long, thin anomalies. In contrast to the traditional MFL structure, Kim et al. [140] presented a method called circumferential MFL (CMFL) or transverse field inspection (TFI) for the detection and characterization of signals produced by long, narrow cracks formed by the external–internal pressure difference. The findings showed that the circumferential magnetic fields maximized the leakage of magnetic flux in cracks. Technology developed by T.D. Williamson Inc. [141], called spiral MFL (SMFL), possesses both MFL and CMFL detection capabilities and can therefore detect both anomalies perpendicular to the field and long, thin anomalies. Okolo and Meydan [142] presented a quantitative approach based on the pulsed MFL (PMFL) method for the detection and characterization of signals produced by hairline cracks. The findings show that the proposed technique can be used to classify hairline cracks. Another parameter affecting the success of the MFL technique is the processing of MFL inspection signals. Mao et al. [143] presented a detailed study on the preprocessing and processing of MFL inspection signals. Carvalho et al. [144] used an artificial neural network (ANN) to classify signals collected along the weld bead as defective or non-defective; a second ANN was then used to classify the defective signals as external corrosion, internal corrosion, or lack of penetration. The study showed a success rate of 94.2% for the first classification and 71.7% for the second. Ma and Liu [145] used an immune radial basis function neural network to process MFL signals; the location and size of the corrosion were successfully determined in the tests. Layouni et al. [146] presented a study to detect, locate, and estimate the size of metal-loss anomalies from MFL scans of oil and gas pipelines. Pattern-adapted wavelets were used for the anomaly length and an ANN for the anomaly depth. The proposed technique is computationally efficient, provides a high level of accuracy, and works for a wide variety of anomaly shapes. In the processing of MFL signals, noise elimination is also a crucial step. Ji et al. [147] presented a noise-elimination algorithm called the adaptive fuzzy lifting wavelet transform to solve the noise-reduction problem in MFL signals.
The findings show that this method achieves a better noise reduction than that obtained with conventional wavelet transform. Mukherjee et al. [148] suggested a new scheme of channel equalization algorithm to correct misalignments of MFL sensors, resulting in excellent signal recovery and noise elimination.
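As a minimal illustration of the kind of preprocessing these studies build on, leakage-field samples that exceed a baseline by some threshold can be grouped into candidate anomaly spans before any sizing or classification. The signal values, baseline, threshold, and function name below are invented for this sketch:

```python
def find_anomaly_spans(signal, baseline, threshold):
    """Group consecutive samples where |signal - baseline| > threshold
    into (start_index, end_index) candidate anomaly spans."""
    spans, start = [], None
    for i, value in enumerate(signal):
        if abs(value - baseline) > threshold:
            if start is None:
                start = i  # entering an above-threshold region
        elif start is not None:
            spans.append((start, i - 1))  # leaving the region
            start = None
    if start is not None:  # signal ended inside a region
        spans.append((start, len(signal) - 1))
    return spans

# Synthetic leakage signal with two excursions above the baseline
sig = [0.0, 0.1, 0.0, 1.2, 1.5, 1.1, 0.1, 0.0, 0.9, 0.0]
spans = find_anomaly_spans(sig, baseline=0.0, threshold=0.5)  # [(3, 5), (8, 8)]
```

Real tools replace the fixed threshold with adaptive baselines and the denoising and neural-network methods cited above.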

#### *3.5. Mechanical Contact*

MC tools contain a mechanical arm, a mechanical or magnetic encoder, and a spring system. The spring system pushes the mechanical arm against the pipeline's inner wall. The displacement of the mechanical arm in contact with any anomaly in the pipeline is read by the encoder as angular displacement and converted into depth information by processing. Figure 7 depicts the working principle of an MC sensor.

**Figure 7.** The working principle of an MC sensor.
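Under a simplified pivoted-arm geometry (an assumption for illustration; real tools use calibrated arm kinematics), the encoder's angular reading maps to a radial depth as follows, where the radial standoff of an arm of length L at angle θ is L·sin θ:

```python
import math

def radial_depth_mm(arm_len_mm, theta_ref_deg, theta_deg):
    """Dent depth relative to the nominal bore for a pivoted caliper arm.

    theta_ref_deg: encoder angle when the arm rides on the nominal wall.
    theta_deg: encoder angle over the feature. A positive result means the
    wall has moved inward (a dent); negative means outward (e.g., expansion).
    """
    ref = arm_len_mm * math.sin(math.radians(theta_ref_deg))
    now = arm_len_mm * math.sin(math.radians(theta_deg))
    return ref - now

# Illustrative numbers: a 100 mm arm dropping from 45 deg to 42 deg -> ~3.8 mm dent
d = radial_depth_mm(100.0, theta_ref_deg=45.0, theta_deg=42.0)
```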

The majority of MC tools are used to identify pipeline geometry. Besides their ability to measure the diameter and roundness of a pipeline, MC sensors can detect any geometric anomalies in the pipe wall. MC sensors can also measure the distance between the pipeline wall and any objects that may be present within the pipe, such as deposits or debris. This can help identify potential blockages or obstructions that may cause problems in the pipeline. The anomalies that MC can detect are dents, welds, buckles, wrinkles, ovality, and pits, among other irregularities.

In MC tools, choosing an encoder is crucial. Mechanical encoders can have several issues in terms of sensitivity to collision and vibration and excessive power consumption. For this reason, magnetic encoders are generally preferred in MC tools. Kim et al. [149] developed a high-resolution, low-power-consumption, fast-response MC tool using a non-contact rotary sensor produced with an annular anisotropic magnet and a Hall effect sensor. Later, Kim et al. [150] designed a 30" geometry pig using this MC tool. The pig included an inertial measurement unit (IMU), an odometer, and the MC tool. The MC tool was used to measure ovality, dent size, and pipeline inside diameter; the IMU for trajectory measurement and three-dimensional coordination in space; and the odometer for distance traveled and instantaneous velocity measurement. Xiaolong et al. [151] gave general information about the design principles of the MC tool (mechanical structure, mathematical model, sensor circuit design, methodology, etc.). In addition, the basic components of the sensing arm were optimized by analyzing the magnetic field strength and structural strength. Canavese et al. [152] designed a new low-cost and low-risk foam-made pig that can detect, position, and size internal diameter changes and anomalies. In this design, a strain sensor is used to detect the anomaly depth. Laboratory results showed successful sizing of internal diameter changes and corrosion. The results obtained in the field test [153] were compared with the results provided by a commercial conventional MC tool launched into the same pipeline under the same conditions. The developed pig provided more information about the pipeline structure than the commercial one.

One of the disadvantages of the traditional MC tool is the dynamic behavior of the mechanical arms under operating conditions. There are three different types of mechanical arms [154], called a wheel, arm, or probe, and they all have different dynamic behavior. Li et al. [155] tested the dynamic behavior of the probe-type mechanical arm. Analytical and experimental results show that pigging speed and spring force are closely related to mechanical arm sensitivity. Zhu et al. [156] tested the dynamic behavior of the wheel-type mechanical arm. According to the experimental results, high pigging speed increases the measurement error, while low pigging speed reduces the measurement error. Increasing the spring force can reduce or even eliminate the measurement error. Li et al. [157] proposed a bouncing model that can be used to calculate the bouncing height and sliding length of the wheel-type mechanical arm along the convex defect. According to the model, it was observed that the bouncing was caused by the sudden angular acceleration, and the sudden angular acceleration was caused by the anomaly shape and rigid collision. Paeper et al. [158] designed a non-contact measurement technology as a solution to the dynamic disadvantage of the traditional mechanical arm. This technology was a mechatronic arm with a conventional mechanical arm and a touchless operating proximity sensor. As a result, geometric anomalies in the pipe, such as dents or wrinkles, are detected by the sensor, which offers comprehensive shape information.

#### **4. Anomaly-Detection Capabilities of In-Line Inspection Sensors**

ILI sensors are used to detect and size anomalies such as metal loss, cracking, and geometric deformation. Basic ILI sensors do not achieve the same resolution and accuracy for every anomaly type and may fail to detect some anomalies altogether; this is known as the anomaly-detection capability of the sensor. In this section, the anomaly-detection capabilities of UT, EMAT, EC, MFL, and MC sensors are given, and basic sensors used in the market and some hybrid models are compared in terms of resolution and accuracy.

Table 1 summarizes the basic ILI sensors' capability for detecting anomalies. These sensors do not provide the same level of resolution and accuracy for all types of anomalies. Comparing two commercially available UT [159] and EMAT [160] tools for crack detection, the minimum crack length that the UT tool can detect is 25 mm, while the minimum length that the EMAT tool can detect is 40 mm. Although both tools can detect a minimum depth of 2 mm in a long seam, the length-sizing accuracy of the UT tool is ±10 mm, versus ±20 mm for the EMAT tool. These figures show that the UT tool performs better at detecting and sizing cracks. In metal-loss detection, the UT [161] tool's length-sizing accuracy is ±7 mm, the MFL [162] tool's is ±15 mm, and the EC [163] tool's is ±6 mm. Width-sizing accuracies are ±8 mm for the UT tool, ±15 mm for the MFL tool, and ±5 mm for the EC tool. These figures show that the EC tool performs best at detecting and sizing metal losses. However, MFL tools are generally preferred for detecting and sizing metal loss, as the UT tool is limited to liquid mediums and the EC tool cannot measure metal loss on the pipe's outer surface or the wall thickness. ILI technologies used in the inspection of natural gas and oil pipelines are sometimes combined into hybrids to pool their advantages or offset their disadvantages. In metal-loss detection, the MFL-UT [164] hybrid tool's depth-, length-, and width-sizing accuracies are ±0.4 mm, ±7 mm, and ±8 mm, respectively, while the MFL-EC [165] tool's are ±1.3 mm, ±6 mm, and ±5 mm, respectively. These figures show that the MFL-UT hybrid tool is better at depth sizing and the MFL-EC hybrid tool is better at length and width sizing.

**Table 1.** Detection capabilities of the basic ILI sensors.


\* indicates capability.

Apart from the measurement sensitivity of the smart pig, hybrid pigs are sometimes preferred for anomaly classification. For instance, while the MFL sensor can detect metal losses on both the inner and outer surfaces of the pipe, it cannot determine which surface the damage is on. With a hybrid structure such as MFL + EC, it can be deduced that EC-detected metal losses are internal, since the EC sensor senses only the inner surface. Dual MFLs can be used for the same purpose: while the first MFL saturates the entire pipe wall, a second MFL with a lower magnetic field strength saturates only half of the wall. Anomalies detected by both MFLs are therefore labeled as internal. In addition, MC tools are inefficient at detecting metal loss and cracks, but they are uniquely suited to detecting dents and ovality, as they are in direct contact with the pipe's inner wall. Thus, MC tools are frequently used in hybrid pigs.
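The dual-MFL labeling logic described above can be sketched as a minimal decision function (an illustrative sketch; the function and flag names are ours, not taken from any vendor tool):

```python
def classify_metal_loss(full_mfl_detects: bool, half_mfl_detects: bool) -> str:
    """Label a metal-loss indication using a dual-MFL configuration.

    The first magnetizer saturates the full pipe wall and therefore responds
    to anomalies on either surface; the second, weaker magnetizer saturates
    only the inner half of the wall and responds to internal anomalies only.
    """
    if full_mfl_detects and half_mfl_detects:
        return "internal"      # seen by both magnetizers
    if full_mfl_detects:
        return "external"      # seen only by the full-wall magnetizer
    return "no indication"     # neither magnetizer responded
```

The same two-pass idea underlies the MFL + EC combination: any indication also seen by the inner-surface-only sensor is labeled internal.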

#### **5. Impact of In-Line Inspection Sensors on Environmental Issues**

Anomalies such as geometric deformation, metal loss, and cracking occur in pipelines for various reasons. These anomalies can rupture pipelines and thus cause leakage. Sometimes one anomaly triggers others. Hafez [217] studied the case of an oil spill that occurred in an undersea crude oil pipeline. According to the case findings, a plain dent formed in the pipeline, presumably by external forces, led first to corrosion and then to cracking, causing the pipeline to rupture. Pipeline leaks have occurred in Russia [218], Peru [219], Canada [220], and the USA [221] in recent years. These leaks released thousands of barrels of oil and polluted the environment.

Pipeline accidents have considerably decreased as a result of the development of ILI technology. In fact, most pipeline accidents in the last decade have occurred not on ILI-applied liquid or gas transmission pipelines, but on non-ILI-applied local gas distribution systems [222]. This has resulted in a major reduction in the volume of hazardous liquid leaks.

Siler-Evans et al. [223] analyzed data on pipeline incidents that took place in the USA from 1968 to 2009 and found that the number of hazardous liquid pipeline accidents dropped significantly over those four decades, with a fourfold reduction in the annual volume of hazardous liquid leaks. PHMSA's statistics on oil pipeline accidents affecting people or the environment in the USA between 2010 and 2020 also support this. The volume spilled per billion barrel-miles transported is given in Figure 8; although this rate has fluctuated since 2010, the trend is clearly downward.

**Figure 8.** Barrels spilled per billion barrel-miles between 2010 and 2020.
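A downward trend line like the one in Figure 8 is typically an ordinary least-squares fit of spill rate against year. A minimal sketch with made-up rates (the actual PHMSA values are not reproduced here):

```python
def linear_trend(years, rates):
    """Ordinary least-squares slope and intercept of rate vs. year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(rates) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, rates))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

# Hypothetical rates (barrels spilled per billion barrel-miles), NOT PHMSA data
years = list(range(2010, 2021))
rates = [3.1, 2.6, 3.4, 2.2, 2.9, 2.0, 1.7, 2.3, 1.6, 1.9, 1.4]
slope, intercept = linear_trend(years, rates)
# a negative slope corresponds to the downward trend line in Figure 8
```

Year-to-year fluctuation (as in the sample rates above) does not change the sign of the fitted slope as long as the overall decline dominates.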

#### **6. Conclusions**

ILI is crucial to the sustainability of the oil and gas pipeline industry. Advances in ILI technology will lead to more accurate detection of anomalies in the pipeline and better prediction of anomaly growth rates and pipeline life, preventing pipeline ruptures and eliminating the potential human, environmental, and financial damages of rupture. This paper discusses the potential of ILI technology used in natural gas and oil pipelines. The most common types of anomalies in oil and natural gas pipelines are described, and the causes of anomalies and anomaly management are emphasized. For the basic ILI sensors, the methodologies, historical developments, limitations, and relevant studies in the literature are presented. The sensors were compared according to their anomaly-detection capabilities, and the accuracy and resolution of the basic and hybrid sensors used in the sector were given. The effect of ILI tools on pipeline integrity was examined, together with the trends in the number of accidents and the amount of harmful liquid leaking into nature.

The growing demand for energy has resulted in the construction of new pipelines all over the world. According to PHMSA's data, a total of 107,835 miles of gas pipelines were constructed in the USA between 2010 and 2021. Although significant lengths of pipeline have been constructed worldwide, the rate of serious accidents has been reported to be relatively low [224], and ILI tools undeniably account for a significant portion of this decline. For this reason, ILI tools appear to have a bright future in PIM: the smart-pig market, which was worth USD 544.7 million in 2017, is expected to be worth USD 717.9 million in 2023 and USD 900 million in 2027 [225]. As worldwide industrialization continues, the world's energy needs will increase, and with them the extension of pipelines will become inevitable. Because longer pipeline networks raise the likelihood of the human, environmental, and financial consequences of pipeline rupture, ILI technology must continue to advance.

The most in-demand ILI sensor technologies in the industry are still the MFL for metal losses, the UT for cracks, and the MC for geometric deformations. However, due to the advantages it brings, EMAT technology is becoming more popular among ILI methods and is likely to dominate the smart-pig market in the future. EMAT has the potential to provide more detailed information about pipeline conditions than other inspection methods and does not require a couplant, as it uses electromagnetic waves to generate the ultrasound signal. Additionally, EMAT is effective at detecting metal losses and is capable of detecting other types of anomalies, such as cracks and geometric deformations. Its non-contact nature allows inspections to be performed without pipeline cleaning or shutdown, saving time and resources. Even so, smart pigs in general remain costly in time and resources due to pipeline preparation, the complexity of data analysis, and the risk of becoming stuck in the pipeline. For this reason, the ILI market is likely to be shared with new and inexpensive products, such as inspection balls, in the future.

**Author Contributions:** Conceptualization, H.A.Y.; methodology, H.A.Y.; evaluating literature, B.O.P.; validation, B.O.P.; formal analysis, B.O.P.; visualization, B.O.P.; resources, B.O.P.; writing—original draft preparation, B.O.P.; writing—review and editing, H.A.Y.; supervision, H.A.Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **High Temperature Lignin Separation for Improved Yields in Ethanol Organosolv Pre-Treatment**

**Johannes Adamcyk 1,\*, Stefan Beisl <sup>2</sup> and Anton Friedl <sup>1</sup>**

<sup>2</sup> Lignovations GmbH, 3400 Klosterneuburg, Austria

**\*** Correspondence: johannes.adamcyk@tuwien.ac.at

**Abstract:** The full utilization of renewable raw materials is necessary for a sustainable economy. Lignin is an abundant biopolymer but is currently used mainly for energy production. Ethanol organosolv pre-treatment produces high-quality lignin but still faces substantial economic challenges. Lignin solubility increases with temperature, and previous studies have shown that lignin reprecipitates during cooling after the pre-treatment. Separating the extract from the residual biomass at high temperature is therefore a possible route to optimizing lignin production with this process. In this work, lignin was extracted from wheat straw at 180 °C, and the extract was separated from the remaining solids at several temperatures after the pre-treatment. The results show that 10.1 g/kg of lignin and 2.2 g/kg of carbohydrates are dissolved at the pre-treatment temperature of 180 °C, which is reduced to 8.6 g/kg of lignin and 1.2 g/kg of carbohydrates after cooling. The precipitation of lignin separated from the extracts at 180 °C showed that the higher lignin concentration at high temperatures results in a 46% improvement in the yield of solid lignin, with no significant impact on lignin purity.

**Keywords:** biorefinery; organosolv; lignin; pre-treatment

#### **1. Introduction**

The efficient utilization of renewable resources, such as lignocellulosic biomass, is a highly relevant issue nowadays. Biorefinery processes have been heavily investigated as a method to separate and valorize the major components of lignocellulose, showing both great potential and economic obstacles [1,2]. Lignin is one of the three major compounds of lignocellulose and should play a major role, since it is the most abundant renewable polymer with an aromatic skeleton [3]. Lignin is already produced and available in large quantities as a side-product of pulping processes. However, lignin from conventional pulping processes, such as Kraft and sulfite pulping, is scarcely used as a material, but is mainly used for energy production, despite being produced in larger amounts than necessary to cover the internal energy demand of the pulping process [4]. In comparison, ethanol organosolv pre-treatment results in a high-quality lignin that is suitable for various value-added material applications, such as carbon fibers [5,6], food packaging [7], or sunscreens [8], while employing a completely renewable solvent in a sulfur-free process. Such applications could both improve the economic viability of a lignocellulose biorefinery and offer a more sustainable raw material for these products. On the other hand, organosolv pre-treatment has some economic drawbacks, such as the necessity to recover the solvent by distillation [9] and the greater demands on equipment due to high pressures during pre-treatment [10]. To be economically competitive, this technology needs to be optimized towards the production of high-quality lignin at simultaneously high yields, and the sufficient delignification of the residual biomass.

**Citation:** Adamcyk, J.; Beisl, S.; Friedl, A. High Temperature Lignin Separation for Improved Yields in Ethanol Organosolv Pre-Treatment. *Sustainability* **2023**, *15*, 3006. https://doi.org/10.3390/su15043006

Academic Editors: Oz Sahin and Russell Richards

Received: 11 January 2023 Revised: 3 February 2023 Accepted: 6 February 2023 Published: 7 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In most studies investigating organosolv pre-treatment, biomass and solvent are mixed in an autoclave, heated to a specified temperature for a certain time, and cooled to room temperature, followed by separation of solids and liquids and an analysis of the fractions [11–15]. However, since lignin solubility increases with temperature [16], it is likely that some lignin is solubilized at high temperatures but reprecipitates during cooling. In fact, several works have reported this lignin reprecipitation during the cooling process. Guo et al. [17] washed the biomass after pre-treatment and compared the lignin in the wash with the lignin dissolved in the liquor. They found very similar structural features in both lignins; however, the molecular weight of the reprecipitated lignin was slightly higher than that in the liquor. Rossberg et al. [18] reported the precipitation of lignin during cooling in a pilot-scale operation, which formed deposits in the tanks but had properties that still made it suitable for material applications. From a process perspective, these deposits are relevant for several reasons: they indicate increased delignification, but they also necessitate frequent removal. Studies by Weinwurm et al. [19] and Xu et al. [20] showed that some of the reprecipitating lignin also forms deposits on the pre-treated fibers. Reprecipitation during cooling therefore has to be considered in process design, as it potentially reduces both the yield of the resulting lignin and the quality of the residual biomass. On the other hand, it indicates a promising route for optimization by lowering solvent demand and increasing lignin yield and delignification, as also suggested by Schulze [21].

While the liquor and residual biomass can be separated at elevated temperatures in pilot-scale organosolv pulping [21], this is generally not performed in lab-scale experiments. As mentioned, previous studies have found that lignin reprecipitates during cooling after the extraction, but a systematic investigation of the temperature dependency of this effect is still missing. In this work, we investigated the influence of temperature on the solubilization of lignin and other compounds after ethanol organosolv extraction. Samples were taken at different temperatures during cooling after extraction and were analyzed for their composition, with a focus on the lignin content. The lignin in the obtained samples was also analyzed for its molar mass distribution, yield, and purity after precipitation by mixing with water. The results suggest that significantly more lignin is dissolved at high temperatures and provide information about the solubility of lignin at elevated temperatures. This could lead to a more efficient process for the production of lignin, which is suitable as a sustainable and renewable raw material in value-added applications.

#### **2. Materials and Methods**

#### *2.1. Materials*

The wheat straw used in this work was harvested in Lower Austria in 2019. The particle size was reduced in a cutting mill equipped with a 2 mm round-hole sieve, after which the fine particles were removed with a 0.209 mm vibrating screen and the coarse particles with a 0.606 mm vibrating screen. The wheat straw was characterized in a previous work [22] using methods from the National Renewable Energy Laboratory (NREL) for the characterization of lignocellulosic biomass [23–25]. It consisted of 35.31 wt% glucan, 21.94 wt% xylan, 2.13 wt% arabinan, 0.72 wt% mannan, 0.67 wt% galactan, 17.35 wt% lignin, 1.09 wt% ash, and 20.45 wt% extractives. Ethanol (ChemLab 100%, Zedelgem, Belgium) and ultra-pure water (18 MΩ·cm) were used as solvents in the extraction; 2-furaldehyde (furfural, 99%), hydroxymethylfurfural (HMF, 99%), acetic acid (99.7%), arabinose, galactose, glucose, xylose, and mannose were purchased from Merck (Darmstadt, Germany) and used for the analytics.

#### *2.2. Extraction*

All extractions were carried out in a 1 L Zirbus autoclave (Bad Grund, Germany). The autoclave was filled with 40 g of wheat straw dry matter and 440 g of 60 wt% aqueous ethanol. The mixture was pre-treated by heating it to 180 °C, which was reached after 50 min, and holding it at that temperature until 60 min of combined heating and holding time had elapsed. The autoclave temperature was then set to the respective sampling temperature and held there during sampling by manual control of cooling and heating.

#### *2.3. Sampling and Lignin Precipitation*

The samples of the extract were taken through a submerged metal tube equipped with a 0.5 μm metal sinter filter at set temperatures after pre-treatment, specifically 180, 160, 140, 80, and 20 °C. Samples acquired at 20 °C were centrifuged at 24,104× *g* for 20 min (according to the methods used in previous works [26]) to verify that all the solids were removed by the filter. No sediment was found after the centrifugation, and no significant differences between the dry matter contents of the filtered extracts and the filtered and centrifuged samples were found. From the sampling tube, the sample flowed through tubing into a sampling bottle. For samples taken at temperatures above 80 °C, approximately 750 mL of ultra-pure water (18 MΩ·cm) at ambient temperature was placed in the sampling bottle beforehand to avoid solvent evaporation. The exact mass of the water was recorded to calculate the amount of sample in the mixture of water and extract. A schematic of the autoclave, including the sampling tube, filter, and sampling bottle, is depicted in Figure 1. To avoid clogging of the filter and carry-over between samples, the sampling tube and filter were disassembled and cleaned between experiments. Three separate experiments were carried out for each sampling temperature, and the reported results are triplicate averages with standard errors.

**Figure 1.** Schematic of autoclave and sampling unit.

After sampling, the autoclave was cooled to room temperature. The remaining extract and straw were collected in a nylon cloth and the extract was pressed from the residual straw using a hydraulic press (Hapa, HPH 2.5, Achern, Germany) at 200 bar. The extract was centrifuged at 24,104× *g* for 20 min and decanted to remove all solids.

To determine the comparability of pre-treatment conditions and the impact of the increased time due to sampling, the severity factor (*R*0) of the pre-treatments was calculated:

$$R_0 = \int_0^t e^{\frac{T-100}{14.75}} \, dt \tag{1}$$

where *t* is the time in minutes, *T* is the temperature in °C, 14.75 is an empirical constant, and 100 is the reference temperature of 100 °C [27].
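In practice, Equation (1) is evaluated numerically from the recorded temperature profile. A sketch using trapezoidal integration over an idealized profile (linear heat-up to 180 °C in 50 min, then a hold until 60 min of combined heating and holding time, as in Section 2.2; the profile is ours, not the measured one):

```python
import math

def severity_factor(profile):
    """Trapezoidal evaluation of R0 = integral of exp((T-100)/14.75) dt.

    profile: list of (time_min, temp_C) points in ascending time order.
    """
    f = lambda T: math.exp((T - 100.0) / 14.75)
    r0 = 0.0
    for (t0, T0), (t1, T1) in zip(profile, profile[1:]):
        r0 += 0.5 * (f(T0) + f(T1)) * (t1 - t0)  # trapezoid for each interval
    return r0

# Idealized profile: 20 -> 180 degC linearly over 50 min, then hold at 180 degC
heatup = [(t, 20.0 + (180.0 - 20.0) * t / 50.0) for t in range(0, 51)]
hold = [(t, 180.0) for t in range(51, 61)]
R0 = severity_factor(heatup + hold)
log_R0 = math.log10(R0)  # severity is often reported as log10(R0)
```

The hold at 180 °C dominates the integral, which is why the extra ~10 min of sampling time changes the severity only modestly, as Table 1 of this article shows.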

#### *2.4. Lignin Precipitation*

All samples were diluted to the liquor/water ratio of the most dilute sample (1:4.68 *wt*/*wt*) with ultra-pure water at ambient temperature to precipitate the lignin. The resulting suspensions were filtered at ambient temperature with a cellulose nitrate filter (Whatman, Maidstone, UK) with a pore size of 0.1 μm. The filtrate was dried at 105 °C to constant weight in order to determine the dry matter and calculate the precipitation yield. The solids were removed from the filter, freeze-dried, and analyzed for their lignin and carbohydrate content.
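The dilution step can be made concrete: given the liquor mass in a sample and any quench water already present (Section 2.3), the additional ultra-pure water needed to reach the common liquor/water ratio of 1:4.68 (wt/wt) follows from a simple mass balance (a sketch; the masses in the example are illustrative):

```python
def water_to_add(liquor_mass_g: float, water_already_g: float,
                 target_ratio: float = 4.68) -> float:
    """Mass of water to add so that water : liquor = target_ratio : 1 (wt/wt)."""
    needed = target_ratio * liquor_mass_g - water_already_g
    return max(needed, 0.0)  # already more dilute than target -> add nothing

# e.g. a 250 g liquor sample collected into ~750 g of quench water:
extra_water = water_to_add(250.0, 750.0)  # approx. 420 g
```

Samples collected hot into pre-filled water bottles thus need less additional water than the cold-sampled extract.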

#### *2.5. Analytics*

The dry matter content of the liquid samples was determined by drying them in a drying oven at 105 °C to constant weight. The liquid samples were characterized for their lignin and carbohydrate content, degradation products formed during extraction, and ash content according to laboratory analytical procedures from the National Renewable Energy Laboratory (NREL) [23,25,28]. A Thermo Scientific ICS-5000 HPAEC-PAD system with deionized water as eluent was used for the sugar determination. The degradation products were determined with a Shimadzu LC-20A "prominence" HPLC system using 5 mM H2SO4 as eluent.

The molecular weight distribution was determined by high-performance size exclusion chromatography (HP-SEC) using three TSK-Gel columns in series (PW5000, PW4000, PW3000; TOSOH Bioscience, Darmstadt, Germany) at 40 °C in an Agilent 1200 HPLC system (Agilent, Santa Clara, CA, USA). The samples were freeze-dried and dissolved at 1 mg/mL in 10 mM NaOH (which was also used as eluent in the HP-SEC). Polystyrene sulfonate reference standards (PSS GmbH, Mainz, Germany), with molar mass peak maxima at 78,400, 33,500, 15,800, 6430, 1670, 891 and 208 Da, were used for calibration.

#### **3. Results and Discussion**

Wheat straw was extracted at 180 °C using ethanol organosolv extraction, and samples were taken at different temperatures during cooling. The samples obtained with this method are named hot-sampled extract (HSE), followed by the sampling temperature in °C. Figure 2 shows examples of temperature profiles in the autoclave during the extractions. The temperature during sampling was maintained by manual control of the mantle heating and cooling, resulting in slight temperature fluctuations in some cases (e.g., sampling at 80 °C in Figure 2). However, these fluctuations can be assumed to be negligible compared to the differences between the sampling temperatures.

**Figure 2.** Temperature curves in the autoclave during experiments.

It is evident from Figure 2 that the sampling required a certain amount of time (approximately 10 min), which varied slightly depending on pressure and filter performance. The severity factor *R*0 was calculated for all experiments (Table 1) according to Equation (1) in order to estimate the impact of the sampling time on pre-treatment severity. Around 250 mL of the extract was taken as a sample in each experiment, and the rest was separated from the straw after complete cooling (termed "extract after cooling", EAC); this was also analyzed for its composition. The comparison of these extracts from the experiments with samples taken at 180 °C and the experiments at 20 °C (Figure 3) shows that the difference between the sampling temperatures has a far greater impact on the amount and composition of the extracted compounds than the increased severity caused by the time needed for sampling. This is in agreement with findings from other groups [11,29,30] showing that time has a relatively small impact on delignification compared to other factors. It can be concluded that the differences between the samples are mainly caused by differences in solubility rather than increased severity.


**Table 1.** Severity factor of the experiments.

The samples were also analyzed for their content of HMF, furfural, and acetic acid (Figure 4). While there is an increase in the concentrations of HMF and furfural, the standard errors for the higher concentrations are comparatively large. These higher concentrations in some samples could be explained by the slightly higher severities at higher sampling temperatures. However, the results do not indicate a significant correlation of the concentrations with the sampling temperatures for any of the compounds. This further supports the assumption that the changes in severity are negligible and that the changes in extract composition are due to the temperature-dependent solubility.

Figure 5 depicts the average dry matter contents and compositions from all experiments. The dry matter content decreases from 17.5 ± 0.4 to 13.6 ± 0.2 g/kg during the cooling process from 180 to 20 °C, while the total lignin content decreases from 10.1 ± 0.3 to 8.6 ± 0.2 g/kg. However, the ratio of lignin in the dry matter decreases with increasing sampling temperature, from 62.3% at 20 °C to 54.3% at 160 °C, increasing again to 57.8% at 180 °C. There are no significant changes in the extracts' ash content with the sampling temperature, and thus a slight decrease in the ratio of ash in the dry matter is observed. Consequently, the ratio of carbohydrates and unidentified compounds increases with the sampling temperature. This suggests that the solubility of these compounds is increased more by higher temperatures than that of lignin. Interestingly, the absolute content of lignin in the extract shows its largest decrease from 180 to 160 °C, while the ratio of lignin in the dry matter shows its largest decrease from 80 to 20 °C. The former indicates that the solubility of some lignin fractions at first rapidly decreases during cooling, while the decrease in solubility below 160 °C is comparatively small. On the other hand, the significant decrease in the ratio of lignin in the dry matter from 80 to 20 °C must be due to the different solubility behavior of the other compounds.

**Figure 4.** Content of acetic acid, HMF, and furfural at different sampling temperatures.

**Figure 5.** Composition of the dry matter in samples obtained at different temperatures.

A more detailed analysis of the carbohydrates in the samples shows that not all of the carbohydrates present in the extract behave in the same way. Figure 6 shows that there is only a small increase in the concentration of glucan, which is assumed to be mostly derived from cellulose. On the other hand, the sugars derived exclusively from hemicellulose (xylose, galactose, arabinose, and mannose) have significantly increased concentrations at higher sampling temperatures. This seems logical, since hemicellulose is usually subject to more depolymerization than cellulose in organosolv pre-treatment [10], which is corroborated by the significantly lower concentration of glucan in the samples compared to the hemicellulose-derived carbohydrates. These results partially explain the change in the lignin ratio of the dry matter with temperature, since lignin solubility shows a modest increase at low temperatures and a sharp increase at high temperatures, while the solubility of carbohydrates increases nearly linearly over the whole temperature range investigated. A comparison of the monomeric and oligomeric carbohydrates in the extracts further reveals that this solubility behavior is mostly caused by the oligomeric carbohydrates. This suggests that the correlation of sugar concentration and temperature is mostly caused by the solubility limit of oligomeric carbohydrates derived from hemicellulose.

**Figure 6.** Content of total cellulose- and hemicellulose-based carbohydrates, and sum of monomeric carbohydrates in the extracts at different sampling temperatures.

The trend in lignin solubility could be explained by the polydisperse nature of the solubilized lignin polymers. The molecular weight distribution of the lignin in the samples was measured to determine differences between lignins collected at different temperatures. Both the mass (Mw) and number averaged molecular weights (Mn) showed only slight changes with the sampling temperature (see Table 2). In comparison, Rossberg et al. [18] found that the molecular weight of organosolv lignin spontaneously precipitating during cooling was much higher than that of lignin precipitated by the dilution of the extract. In contrast to this work, they analyzed the re-precipitating lignin separately from the rest. Results more comparable to this work were obtained by Guo et al. [17], who found that the molecular weight of lignin in washing liquor was only slightly higher than that in the original liquor.
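For reference, the number- and mass-averaged molecular weights reported in Table 2 follow the standard definitions Mn = Σ(nᵢMᵢ)/Σnᵢ and Mw = Σ(nᵢMᵢ²)/Σ(nᵢMᵢ). A sketch with made-up SEC slice data (not the measured distribution):

```python
def molar_mass_averages(fractions):
    """Number- (Mn) and mass-averaged (Mw) molar mass.

    fractions: list of (n_i, M_i) pairs, where n_i is the molar amount of the
    species with molar mass M_i (e.g. derived from SEC slices).
    """
    sum_n = sum(n for n, M in fractions)
    sum_nM = sum(n * M for n, M in fractions)
    sum_nM2 = sum(n * M * M for n, M in fractions)
    return sum_nM / sum_n, sum_nM2 / sum_nM  # (Mn, Mw)

# Illustrative bimodal distribution (made-up values, Da):
mn, mw = molar_mass_averages([(0.8, 1000.0), (0.2, 10000.0)])
```

Because Mw weights large molecules more strongly, a high-mass tail (such as the one visible in Figure 7b) shifts Mw far more than Mn.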



Several previous studies [18,26,31] have shown that lignin solubility is connected with molecular weight, and specifically that smaller lignin fragments have a higher solubility than larger molecules. Figure 7 depicts the molecular weight distributions of lignin samples obtained at different temperatures. While there are only small differences in the absorbance-based distributions, the mass-weighted distributions show that there might be a lignin fraction at the upper limit of the measuring range. These high-mass lignin fractions are significantly smaller when the extracts are separated at 20 °C compared to all other temperatures (see Figure 7b). This corroborates that, during cooling, predominantly large lignin fractions become insoluble and precipitate.

**Figure 7.** (**a**) Number- and (**b**) mass-based molecular weight distributions of lignin from different sampling temperatures.

From a process perspective, the yield of solid lignin product is highly relevant. Lignin is commonly precipitated from organosolv liquors by the addition of water as an antisolvent [32–34], which was also performed in this work. Three different yields were calculated and are shown in Figure 8: the extraction yield is the mass of lignin in the extract relative to the lignin present in the untreated wheat straw; the precipitation yield is the mass of solid lignin after precipitation relative to the lignin present in the extract; and the total lignin yield is the mass of solid lignin after precipitation relative to the lignin present in the original wheat straw. All yields were calculated under the assumption of complete extract separation. Figure 8 shows that there is a significant increase in the lignin yield with the sampling temperature, from 33.6 ± 1.7% at 20 °C to 48.9 ± 0.9% at 180 °C. This means that the overall lignin yield can be increased by up to 45.5% if solids and liquids are separated at higher temperatures. In previous works, the solvent-shifting precipitation of lignin from organosolv extracts resulted in an average yield of 48.2% for the precipitation step [35], which is of the same order of magnitude as the yield for extraction and precipitation combined at the highest sampling temperature.

**Figure 8.** Yield of precipitated lignin for different sampling temperatures.
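The three yield definitions can be written out explicitly; the total yield is simply the product of the two stage yields (a sketch; the masses in the example are illustrative, not measured values):

```python
def lignin_yields(m_lignin_straw: float, m_lignin_extract: float,
                  m_lignin_precipitated: float):
    """Extraction, precipitation, and total lignin yields (as fractions)."""
    extraction = m_lignin_extract / m_lignin_straw          # extract vs. raw straw
    precipitation = m_lignin_precipitated / m_lignin_extract  # solid vs. extract
    total = m_lignin_precipitated / m_lignin_straw          # solid vs. raw straw
    return extraction, precipitation, total

# Illustrative masses (g of lignin): 100 in straw, 50 in extract, 30 precipitated
e, p, t = lignin_yields(100.0, 50.0, 30.0)
```

With the reported values, e.g. sampling at 20 °C, an extraction yield of about 54% and a precipitation yield of about 62% multiply to the total yield of about 34%.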

As can be seen in Figure 8, the increase in the overall yield with the sampling temperature is relatively steady. On the one hand, this increase can be explained by the generally higher lignin content of the extracts separated at higher temperatures, and, on the other hand, by the lower solubility of the lignin separated at high temperatures. Interestingly, there are slight differences between how the extraction yield and the precipitation yield correlate with temperature. While the extraction yield changes most significantly from 160 to 180 °C, the precipitation yield increases strongly from 20 to 80 °C, flattens from 80 to 140 °C, and then reaches a maximum or plateau around 160 °C. While the trend in the extraction yield is likely caused by a decreasing lignin solubility at lower temperatures, the trend in the precipitation yield suggests structural differences in the solubilized lignin, which affect its solubility at lower ethanol concentrations. For example, the average extraction yields for the 20 and 80 °C sampling temperatures are 54.3 ± 1.0% and 55.1 ± 1.1%, while the precipitation yield significantly increases from 61.8 ± 2.0% to 72.7 ± 2.2%. This suggests structural changes in the solubilized lignin during cooling, resulting in more lignin remaining soluble after the addition of water when sampling at 20 °C compared to 80 °C; this agrees with the much higher amount of smaller molecular weight lignin found at the 20 °C sampling temperature (see Figure 7b).

Since the increase in yield with the sampling temperature coincides with an increase in oligomeric carbohydrates in the extracts (see Figure 6), it is possible that the increased total lignin yields at higher sampling temperatures are caused by carbohydrates covalently bound to the precipitating lignin; this would lower the purity, and thus the product quality. However, an analysis of the lignin and carbohydrate content of the precipitated lignin (Figure 9) shows that both the purity and the concentration of carbohydrate contamination have no significant correlation with the sampling temperature. This suggests that the additional oligomeric carbohydrates found at higher sampling temperatures are not bound to the lignin, and even though they become insoluble during cooling, they become soluble again at lower ethanol concentrations when lignin is precipitated. Therefore, the lignin yield and delignification can be increased by the separation of residual biomass and extract at high temperatures, without a negative impact on the purity of the resulting lignin. At the same time, more oligomeric, hemicellulose-derived sugars are solubilized in the liquor and stay dissolved after lignin precipitation, making them available for further use, and increasing the cellulose content of the residual biomass.

**Figure 9.** Purity of precipitated lignin for different sampling temperatures.

#### **4. Conclusions**

In this work, the composition of the organosolv extract was investigated at several temperatures after pre-treatment. The results show that the content of solubilized lignin and of oligomeric carbohydrates derived from hemicellulose decreases during the cooling of the extract, with the largest decrease in lignin occurring while cooling from 180 to 160 °C, whereas the carbohydrate content decreased almost linearly. This behavior was explained by the correlation of temperature with the solubility of lignin and oligomeric carbohydrates. The higher lignin content at elevated sampling temperatures resulted in a significantly higher yield of precipitated lignin. Based on the lignin present in the untreated wheat straw, a total lignin yield of 49% was reached when sampling at 180 °C, compared to 34% at 20 °C. Concerning the lignin properties, lignin in extracts separated at temperatures above 20 °C had a higher molecular weight, indicating a lower solubility of larger lignin molecules. There was no significant trend in the extracts' content of ash, degradation products, or monomeric carbohydrates, suggesting that these compounds do not reach their solubility limit during extraction.

These results indicate that the separation temperature has a significant influence on the process efficiency of organosolv pre-treatment. While the highest lignin contents in the extracts and total lignin yields were achieved by separation at 180 °C, even the second-lowest separation temperature of 80 °C resulted in a significant increase in the yield of precipitated lignin (40%), compared to separation at 20 °C (34%). Separation at an elevated temperature should also improve the quality of the resulting residual biomass, since higher amounts of lignin and hemicellulose are removed. Lastly, the purity of the lignin was not affected by the separation temperature. This means that the relatively simple measure of separating extract and residual biomass at elevated temperatures results in increased delignification and the removal of hemicellulose-derived carbohydrates from the biomass, with no significant influence on the purity of the produced lignin. All of these factors combined would improve the efficiency and, in turn, the economic viability of a lignocellulose biorefinery, and enable lignin to be used as a sustainable raw material in value-added applications.

**Author Contributions:** Conceptualization, J.A.; methodology, J.A.; formal analysis, J.A. and S.B.; investigation, J.A.; writing—original draft preparation, J.A.; writing—review and editing, S.B. and A.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** Open Access Funding by TU Wien.

**Data Availability Statement:** Data supporting the results are included within the paper.

**Acknowledgments:** The authors acknowledge the TU Wien University Library for financial support through its Open Access Funding Program. This paper builds on a paper on the same topic that was presented by the first author at the 2022 Sustainable Development of Energy, Water, and Environment Systems (SDEWES) Conference (6–10 November, Paphos).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Transformation of the RESPO Decision Support System to Higher Education for Monitoring Sustainability-Related Competencies**

**Andreja Abina 1, Bojan Cestnik 1,2,3, Rebeka Kovačič Lukman 4,5, Sara Zavernik 6, Matevž Ogrinc 1,3 and Aleksander Zidanšek 1,3,5,\***


**Abstract:** A result-oriented engagement system for performance optimisation (RESPO) has been developed to systematically monitor and improve the competencies of individuals in business, lifelong learning and secondary schools. The RESPO expert system was transferred for use in higher education institutions (HEIs) based on successful practical application trials. The architecture and functionality of the original RESPO expert system have been transformed into a new format that will collect information on the required competencies and the available educational programmes to help students effectively develop competencies through formal and non-formal education. First, the initial version of the RESPO system and its functionality were tested on a selected group of students and higher education staff to validate and improve its effectiveness for the needs of HEIs. This paper summarises the key findings and recommendations of the validators for transforming the RESPO application into an application for HEIs. In addition, the selection of competencies in the RESPO application database has been adapted to align with selected study programmes and the need to develop sustainability-related competencies. These findings can support professionals working in higher education institutions in developing students' future competencies and fostering the targeted use of learning analytics tools.

**Keywords:** higher education; competencies development; decision support; STEM education; sustainability

#### **1. Introduction**

As the current crises, from epidemics to wars to energy crises, drive up the prices of raw materials, final products and food, motivated and well-educated employees are becoming a key factor for any successful organisation. Formal educational programmes do not always meet the needs of employers. Therefore, acquiring knowledge and skills through non-formal forms of education is essential to fill the gaps in developing competencies in vocational education and training (VET) and universities. Employers are becoming more demanding about both the technical and soft skills required to work effectively. There is therefore an urgent need in education to help individuals identify, capitalise on, and manage their learning. Integrating formal and non-formal learning in higher education is crucial in this respect, as it improves learning methodologies that can better serve self-directed learning and self-management skills.

**Citation:** Abina, A.; Cestnik, B.; Kovačič Lukman, R.; Zavernik, S.; Ogrinc, M.; Zidanšek, A. Transformation of the RESPO Decision Support System to Higher Education for Monitoring Sustainability-Related Competencies. *Sustainability* **2023**, *15*, 3477. https://doi.org/10.3390/su15043477

Academic Editors: Oz Sahin and Russell Richards

Received: 16 January 2023 Revised: 5 February 2023 Accepted: 9 February 2023 Published: 14 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

This challenge is even more significant if the specificities, needs and expectations of the new generation of students currently experiencing the transition from education to employment, i.e., the millennial generation, are considered [1]. This fact is highlighted in several recent documents published by the European Commission [2–8] and other international organisations, such as the Organisation for Economic Co-operation and Development (OECD) [9] and the World Economic Forum (WEF) [10]. Successfully bridging formal education and non-formal training requires tools that enable individuals to identify competency gaps and find the most appropriate training to fill them. RESPO is a tool that will collect information on the required competencies and the available educational programmes to help students effectively develop competencies through formal and non-formal education.

Although the concept of sustainable development has its origins in the 1987 report of the World Commission on Environment and Development (WCED) [11] and the 1992 Rio Conference [12], its implementation has been significantly advanced by the United Nations 2030 Agenda for Sustainable Development [13]. To achieve these ambitious goals, widespread lifelong learning has become a necessity. Unsustainable patterns of human behaviour and the shift of human activities towards sustainability require the strengthening of sustainability competencies, as they enable a critical reflection on prevailing values, policies and practices and help to make difficult decisions towards a better environment for all people [14]. The European Commission has, among other things, prepared a list of key competencies for lifelong learning [3] that can contribute to systematic learning. RESPO was initially designed around these competencies, but through practical application of the system and feedback from users (HE professors and students), it became apparent that the list of competencies should be adapted to assess, monitor and develop individuals' competencies more systematically. The competencies of study programmes, individual subjects and qualifications, the jobs graduates can take up after their studies and the future needs of employers should all be considered when designing the competencies database for the RESPO application. Furthermore, Ferreras-Garcia et al. [15] found that the gender of students also plays an important role in developing competencies for innovation-oriented action. Gender should therefore be considered when organising training for students, so that students of both genders can acquire the same competencies and gender differences in the later transition to the labour market can be reduced.

Decision Support Systems (DSSs) are usually understood as computer systems that mimic the decision-making ability of relevant human experts. They have been used in education for decades and have proven valuable in assessing competencies and helping to select the training most conducive to improving competencies [16]. Sánchez et al. presented an interesting system for the competency-based evaluation of student curricula [17], and the European ComProFITS project assessed competencies using an expert system for human resource management (HRM) [18]. Achcaoucaou et al. used the Tricuspoid online competency assessment tool to determine students' level of skills for course organisation, course content selection and teaching/learning procedures [19]. They focused on the development of soft skills, namely entrepreneurial competencies. The tool allowed students to identify their strengths and weaknesses and develop personal strategies to improve their competencies, and it provided teachers with additional information on the impact of their contribution on students' competency development. Kleimola and Leppisaari [20] conducted a case study to determine which competencies future higher education students should acquire during their studies and how learning analytics can support the development of these competencies. The results of this study showed the great potential of learning analytics to support competency development, as it provides a tool for reflection on learning and competency development and increases self-awareness of strengths and weaknesses. In addition, learning analytics promoted goal orientation, metacognition and learning to learn, active participation and learning self-confidence.

As conventional education migrates to online training, especially during the crises we have faced in recent years, the self-regulated and self-managed learning of individuals is an important factor influencing the success of the learning/training process [21]. On the one hand, DSSs supporting career advisors, supervisors and professors are therefore becoming crucial for monitoring the careers and competency development of individual students as lifelong learners. On the other hand, they enable learning analytics on larger social groups, e.g., the analysis and visualisation of social interactions and the design of dedicated learning groups and community development practices. Furthermore, DSSs have also been applied to higher education management. For instance, Teixeira et al. proposed a DSS to select students for Erasmus+ short-term mobility based on students' enrolment and grades as well as their hard and soft skills [22]. Alisan and Serin [23] proposed a DSS that can maintain the quality and competitiveness of HEIs' departments and their course portfolios. The proposed DSS provides valuable information on which departments should be established or closed and on the appropriate course offer and course content design. In this way, students are better informed when choosing universities, departments, courses or further career paths.

In this paper, first, the structure of the original RESPO application is defined, tested and validated by end users, i.e., a small group of students and higher education staff. The key findings and recommendations of the validators for transforming the RESPO application into an application suitable for higher education institutions are summarised. In addition, the selection of competencies in the RESPO application database has been adapted to align with selected study programmes and courses in nanoscience and nanotechnology. This study provides valuable insights into the relationship between the important elements of learning analytics in monitoring competencies in higher education and decision making based on advanced decision support tools. At the same time, the study's findings extend our previous research and development and contribute to the conceptualisation and application of decision support systems in higher education.

The developed web-based decision support tool will be further tested through transdisciplinary training activities in higher education institutions in four EU countries (Slovenia, Spain, the Netherlands and Belgium). Based on the evaluation of the progress in the development of students' competencies, a handbook of recommendations will be produced, with the main aim of encouraging other higher education institutions to integrate the results of the RESPO X application into their institutional educational strategies. The RESPO X decision support tool will contribute to filling the skills gaps of students as future employees and researchers who can overcome barriers to the adoption and deployment of new technologies in companies and research institutes, increasing the productivity and sustainability of future factories and the scientific performance of universities and research institutes. The RESPO X key findings will also offer recommendations for other higher education institutions, employers and policymakers to invest financial and human capital in additional training to upskill the future workforce.

#### **2. Decision Support System Transformation for Competencies Monitoring in Higher Education**

RESPO was first designed to monitor employees' competencies in the companies participating in the Competence Centre for Factories of the Future (KOC-TOP), established at the Jožef Stefan International Postgraduate School (IPS), Ljubljana, Slovenia. Later, the system was extended to monitor the competency development of secondary school students in the RESPO project under the Slovenian national programme "Students Innovative Projects for the Benefit of Society". Based on successful practical application trials, we wanted to transfer the developed system to higher education institutions. The architecture and functionality of the original RESPO system were transformed into a new format that will collect information on the required competencies and the available educational programmes to help students effectively develop competencies through formal and non-formal education.

#### *2.1. Basic Conceptual Design of RESPO Decision Support System*

The basic structure of the RESPO system is shown in Figure 1. The multi-criteria system to support decision-making in developing competencies is presented in more detail in our previous contribution [24]. RESPO first assesses the current level of each of an individual's competencies. Ideally, this assessment is done through an objective test or exam. If this is not possible, the assessment is carried out by a supervisor who works closely with the candidate and is, therefore, able to make a relatively good assessment. Each competency level is reassessed after the individual attends one of the available training programmes. In this way, an assessment is made of how much each training programme improves each competency. Although these assessments of the effectiveness of training programmes are somewhat arbitrary and subjective and vary considerably between different learners, this method provides valuable information on effective training programmes.
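The scheme above can be sketched in code; the data layout and the simple mean-improvement estimate below are our own illustrative assumptions, not the actual RESPO implementation:

```python
from collections import defaultdict

def training_effectiveness(records):
    """Estimate how much each training programme improves each competency.

    records: iterable of (training, competency, level_before, level_after)
    tuples, one per learner and competency. Returns a nested dict
    {training: {competency: mean improvement}}, averaging over learners
    to smooth out the subjectivity of individual assessments.
    """
    gains = defaultdict(lambda: defaultdict(list))
    for training, competency, before, after in records:
        gains[training][competency].append(after - before)
    return {t: {c: sum(g) / len(g) for c, g in comps.items()}
            for t, comps in gains.items()}

# Hypothetical before/after assessments for one training programme:
records = [
    ("Python course", "digital", 2, 4),
    ("Python course", "digital", 3, 4),
    ("Python course", "problem solving", 2, 3),
]
effects = training_effectiveness(records)
# effects == {"Python course": {"digital": 1.5, "problem solving": 1.0}}
```

Averaging over many learners is what makes the otherwise subjective per-learner assessments usable as a signal of training effectiveness.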

**Figure 1.** The basic structure of the RESPO system [25].

#### *2.2. Structure of RESPO Application*

The first version of the application was adapted from the previous projects RESPO and KOC-TOP (coordinated by IPS), and the transformation from a company to a higher education institution was performed during the four-day training in Greece in the RESPO X project. The RESPO application was presented to a small group of validators (three undergraduate information and communication technology (ICT) students, one postgraduate ICT student, four higher education (HE) professors and two persons from non-governmental organisations (NGOs) related to training) from two perspectives: that of a student and that of an administrator. In the following paragraphs, the structure of the RESPO application is presented.

#### 2.2.1. Main Menu

The main menu shows the username and the user's function/role in the organisation. Below the user's info section, the submenus are divided into:


The current version of the RESPO X application is still derived from the RESPO application, which was intended for use in companies for their employees. During the training, these category names were translated into the systemic nomenclature of higher education.

#### 2.2.2. Upload

The "Upload" submenu allows users to import data from Excel files (Figure 2). Currently, competencies and trainings can be uploaded, and templates are available for the data import. In the RESPO X project, the ESCO (European Skills, Competences, Qualifications and Occupations) platform (https://esco.ec.europa.eu/en (accessed on 30 November 2022)) was recognised as a very useful tool whose competencies can be extracted and included in the RESPO X application. The "Import ESCO" function allows users to upload competencies from the ESCO database for the selected occupation type, e.g., managers, professionals, etc.


**Figure 2.** Upload and import functions in the RESPO X application.

#### 2.2.3. Employees = Students

Under the "Employees" submenu, users can enter an employee's details, such as name, phone, city and country, e-mail address and username. A special selection field here is "Workplace", which can be added under the submenu "Workplaces" together with the competencies needed to perform work at that workplace. In the HEI nomenclature, the workplaces represent study programmes. The administrator (i.e., supervisor, professor, career centre staff) can also see the list of all users and edit or delete each entry. Under the section "History", the administrator can follow each user's progress in competency development over a defined period. The progress is also presented in a graph.

#### 2.2.4. Competencies

The submenu "Competencies" allows users to add each specific competency. The competency type, Hogan ID, name and description for each competency must be defined. All added competencies can be found under the section "All competencies", where the competency type is first given and the list of added competencies can be seen by clicking on each type (Figure 3). Competency types and competencies can also be edited or deleted by the administrator.
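An import of such competency records can be sketched as follows. For a self-contained example, this parses CSV with the standard library rather than an actual Excel file, and the column names mirror the fields just described rather than the real RESPO template:

```python
import csv
import io

def import_competencies(text):
    """Parse a competency import template into a list of record dicts.

    Expected columns (illustrative): type, hogan_id, name, description.
    Rejects rows with an empty competency name, since the name is mandatory.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        if not row["name"]:
            raise ValueError("competency name is required")
    return rows

# Hypothetical two-row template:
template = """type,hogan_id,name,description
Professional,H01,Data analysis,Analyse and interpret experimental data
Social,H02,Teamwork,Work effectively in a team
"""
competencies = import_competencies(template)
# competencies[0]["name"] == "Data analysis"
```

Validating mandatory fields at import time keeps the competency database consistent before records reach the "All competencies" listing.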


**Figure 3.** Defining competencies in the RESPO X application (competencies are given in Slovenian, the translation for the competencies type is as follows: 1. Professional, 2. Social, 3. Leadership (including in a team), 4. Business, 5. Change management, 6. Intercultural competencies).

#### 2.2.5. Workplaces = Study Programmes

This submenu allows users to add workplaces according to the job systematisation in each company. The user must add a short description and the required competencies for each workplace. The relevance of each competency needed at a specific workplace is estimated and scored. Under the section "Workplaces", all added workplaces can be found. By clicking on a specific workplace, a list of the needed competencies is provided with their relevance and the minimum scores required. Here, the application also allows a new competency to be entered. As mentioned, in the HEI nomenclature the workplaces represent study programmes.

#### 2.2.6. Trainings

This submenu is dedicated to trainings. Users can add appropriate trainings and specify their name and description as well as the duration of the training with start and end dates. For each training, the relevant competencies must be selected from the list; these indicate which competencies users improve by attending that training. The administrator can also see all added trainings with their specifications and targeted competencies.

#### 2.2.7. Analytics

Under this submenu, the administrator can select among four different algorithms that assess a user's lack of competencies and suggest trainings from the database that the user could attend to improve their competencies and minimise this lack. The lack of competencies is estimated from the user's current level of each competency, the relevance of that competency for the employee's workplace and the relevance of the trainings targeting it. If RESPO finds an appropriate training in its database, a suggestion is offered to the administrator, who can send an invitation to the employee. The employee still has the option to decline the invitation.

The RESPO expert system uses different algorithms to find the best-suited training course for each user/learner. It uses information on the individual's competencies, the competencies required in the education programme, and the potential of each training course to improve each competency. The "Analytics" section is important for supervisors, who must select an appropriate algorithm and understand what each algorithm considers when estimating a student's lack of competencies.

The RESPO system uses four different algorithms to select the optimal training for the student. It considers the current level of development of each competency the student has before the training and the level required to pass each course and study programme in which the student is enrolled. In the future, we intend to incorporate advanced technologies such as machine learning into the algorithms, so that the system would also consider employers' needs for each competency at the workplaces where students can be employed.
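As an illustration of the kind of scoring such an algorithm can use (a sketch under our own assumptions, not RESPO's actual algorithms), the lack of competencies can be computed as a relevance-weighted shortfall, and each training scored by how much of that shortfall it targets:

```python
def competency_gap(current, required, relevance):
    """Relevance-weighted lack of competencies for one student.

    current, required, relevance: {competency: value} dicts. The gap for a
    competency is its shortfall (required minus current, floored at zero)
    weighted by the competency's relevance for the study programme.
    """
    return {c: max(0, required[c] - current.get(c, 0)) * relevance.get(c, 1)
            for c in required}

def rank_trainings(gap, trainings):
    """Rank trainings by the total weighted gap of the competencies they target.

    trainings: {training name: {competency: relevance of the training for
    that competency}}. Returns (name, score) pairs, highest score first.
    """
    scores = {name: sum(gap.get(c, 0) * w for c, w in targets.items())
              for name, targets in trainings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical student profile and training catalogue:
gap = competency_gap(current={"digital": 2, "STEM": 4},
                     required={"digital": 4, "STEM": 4, "sustainability": 3},
                     relevance={"digital": 2, "sustainability": 1})
ranked = rank_trainings(gap, {"AI basics": {"digital": 1.0},
                              "Green materials": {"sustainability": 0.8}})
# ranked[0][0] == "AI basics"
```

The top-ranked training is the one the administrator would forward as an invitation; different weightings of the same three inputs would yield different variants of this scoring, in the spirit of RESPO's four algorithms.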

#### 2.2.8. Status

The "Status" submenu shows which users were invited to each training and which accepted or declined the invitation. The administrator has the option to resend the invitation or even cancel the training. The second tab under this submenu shows the list of trainings for each user, whether accepted or declined.

#### 2.2.9. Options

This submenu is currently devoted only to users' password recovery by the administrator.

#### *2.3. Validation of RESPO Application*

As part of the RESPO X project, training for RESPO validators was carried out in Larissa, Greece, in September 2022. The training was intended for HE staff (professors, technicians, researchers, career centre staff, etc.) and a small group of ICT students, who validated the developed RESPO application. The training directly supported the main objective of the project, i.e., the development and optimisation of the online RESPO X application, which offers a systematic solution for students when selecting the optimal training to enhance their professional and personal competencies and skills for future jobs. The participants validated the functionality and architecture of the main features of the online application. Special attention was also given to the elements that make the tool user-friendly and accessible, allowing it to be used by people with disabilities. Based on the participants' feedback, a comprehensive report for further decision support system optimisation was prepared and is summarised in this contribution. Following the RESPO structure described above, the validators' findings and suggestions are summarised in Table 1.

#### *2.4. Translation of Nomenclature from RESPO to RESPO X Application*

The most important first step in transforming the RESPO application for HE is translating the nomenclature from a company to higher education institutions. During the validation training, the translation of the main menu presented in Figure 4 was suggested. The corresponding database will also be updated in line with the application structure.


**Table 1.** Validators' suggestions and feedback on the RESPO application testing.



#### *2.5. Elements Ensuring the User-Friendly and Disability-Friendly RESPO Application*

When developing and optimising the RESPO application, the needs of users with different disabilities (visual, auditory, physical and cognitive) will be considered. Our goal is to create a helpful decision support tool and to support people in need. Since the EU is discussing regulation to enforce the accessibility of services and products, making our app accessible to everyone is not only ethical but can also create a significant learning advantage. The validators gave several important recommendations for further application development and optimisation:


**Figure 4.** Translation from business to HE nomenclature.

#### **3. Key Competencies for Science, Technology, Engineering and Mathematics (STEM) Education and Learning**

The upgrade of the RESPO system for higher education in the nanofields was initiated under the RESPO X project of the Erasmus+ programme. The first step was to identify the most relevant competencies for students by comparing the competencies and outputs of the study courses with the European Commission's list of key competencies. The ESCO classification of competencies was used [26]. We found that most of the competencies related to nanoscience and nanotechnology were missing from the ESCO system. The first version of the RESPO X application includes datasets from the ESCO platform, i.e., occupations with skills. Currently, the user can upload the ESCO occupation pillar, which distinguishes between skill/competency concepts and knowledge concepts by indicating the skill type. There is, however, no distinction between skills and competencies.

It was also found that the competency base in the RESPO expert system needed to be adapted to the curricula and employers' needs, considering the recommendations of the European Commission, UNESCO, the OECD and the WEF. In the framework of the Norway-grant-funded RESPO-VI project, a comprehensive report on 21st-century competencies will be prepared, focusing on those that are or will be needed by science, technology, engineering, and mathematics (STEM) students. We will shortly produce a list of key competencies aligned with employers' needs, selected study programmes and international and EU policy strategies and guidelines. Thus far, the RESPO X project has identified the first relevant competencies for the participating STEM students, which are listed in Table 2 in Section 3.2.


**Table 2.** Identified competencies in the RESPO X project.





#### *3.1. EU and Other Relevant Strategies as a Basis for the Selection of Competencies for RESPO Application*

The European Council has repeatedly stressed the key role of education and training for the EU's future growth, long-term competitiveness and social cohesion. To achieve this, it is crucial to strengthen the education element of the knowledge triangle "research-innovation-education", starting at an early age, in schools. The competencies and learning habits acquired at school are essential for developing new skills for new jobs later in life. A more flexible learning environment is required to help students develop different competencies while maintaining basic knowledge. Suggested approaches include new pedagogical and cross-curricular approaches that complement curricula and involve learners more in their design. Literacy and numeracy are essential components of key competencies, as they are fundamental for further learning. Numeracy, mathematical and digital competencies and an understanding of science are also key to full participation and inclusion in the knowledge society and the competitiveness of modern economies. Today's job seekers need to be able to work collaboratively, communicate and solve problems, skills that are developed primarily through social and emotional learning. Combined with traditional skills, these social and emotional skills will equip learners to succeed in the evolving digital and green economy.

KeyCoNet (http://keyconet.eun.org/ (accessed on 30 November 2022)) is a growing network of more than 100 organisations funded by the European Commission under the Lifelong Learning Programme to improve the delivery of key competencies in school education. The European Commission uses the word "competence" instead of "competency", so we use their spelling when we refer to the documents of the European Commission. The KeyCoNet network uses the European framework on "Key Competences for Lifelong Learning" as a reference point, which defines the following eight key competencies:


These key competencies are all interdependent and closely related to seven transversal skills:


The WEF listed the following 21st-century skills for students according to three categories [10]:


According to WEF reports, the top 10 work skills will change over the next decade. Workers and job seekers will have to be more analytical, critical, systematic, innovative and creative. They will need to become active lifelong learners who can handle stress and adapt to rapid changes. Figure 5 shows how the list of the top 10 skills that employers will look for in employees, including students entering the labour market, has changed over the past decade and will change over the next few years. Some skills, such as complex problem-solving, remain on the list throughout the period. Creativity slowly gives way to originality and ideas. Skills that emphasise the individual's ability to learn actively through various learning strategies, while remaining analytical, decisive, judicious, rational and systematic, are becoming more and more dominant.

**Figure 5.** The changing top 10 skills between 2015 and 2030, according to WEF reports.

#### *3.2. Selection of Competencies from Study Programmes in Nanoscience and Nanotechnology*

As part of the validation training, the list of relevant courses offered by the higher education institutions participating in the RESPO X project was reviewed, and a short selection was made from the courses in the selected study programmes. This selection will be updated with additional courses from the partner HEIs and from lecturers outside the partnership. The courses were chosen to give students an opportunity to develop and enhance a core set of research skills that will enable them to make a more substantial contribution to sustainable development. Barth et al. found that developing key competencies for sustainable development by combining formal and informal learning can enhance the skills required for sustainability [27].

At the training, the participants also compared the selected core set of skills for students with those included in transdisciplinary training courses at the participating HE institutions. Three categories of skills were identified as contributing the most valuable sustainability-related skills to engineering students: **digital** skills, i.e., information technology with a focus on artificial intelligence; professional **STEM** skills, with a focus on materials science and advanced technologies for developing new, more environmentally friendly materials; and **sustainability** skills, including environmental responsibility, which will be more valuable in future jobs. The participants selected the most appropriate courses and lecturers for the student training in each category. The selected courses are therefore divided into three modules, to which new courses can still be added:


The specific competencies identified in the RESPO X project as important to the participating students are listed in Table 2. The selected competencies are grouped into the eight categories of key competencies defined by the European Commission.

After completing the list of RESPO X competencies, the existing RESPO database will be updated, and the algorithms for selecting the most effective education programmes will be reviewed and improved so that learners can improve their competencies in less time and at lower cost.

#### **4. Discussion on the Drawbacks of RESPO X**

Some obstacles arose in transferring the RESPO application from the business environment to the higher education system. The first difficulties concerned terminology, namely the use of study programmes and the more detailed fields of study that exist at some HEIs. Some study programmes are also organised into modules that divide subjects into compulsory and optional ones, thus changing the relevance of the competencies to the specific study programme.

The other major obstacle relates to the assessment method. The method chosen in this study is rather subjective, as the assessment is made by the training provider or lecturer. This problem will be addressed by introducing standardised multiple-choice questionnaires to be completed by students before and after the training. There is also the question of whether competencies should be assessed only by the lecturer or course provider, by the student's supervisor, or by the person who manages the student's career progress at the HEI. One possibility is to introduce 360-degree assessment as it is known in companies. As different countries use different assessment systems, it is also necessary to introduce a grade converter from one system to another. European Union rules already exist and need to be incorporated into the RESPO X system.
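The grade conversion mentioned above could, in the simplest case, map each national scale onto a common interval between its pass threshold and its best grade. The sketch below illustrates this idea only; the scale definitions are assumptions for illustration, not the official EU conversion rules the RESPO X system would implement.

```python
# Hypothetical grade-converter sketch: each scale is mapped onto a common
# 0-1 interval (0 = pass threshold, 1 = best grade). The scale definitions
# are illustrative assumptions, not official EU conversion tables.

SCALES = {
    "SI_5_10": (5.0, 10.0),   # assumed Slovene HE scale: 5 = fail threshold, 10 = best
    "ES_0_10": (5.0, 10.0),   # assumed Spanish scale: 5 = pass, 10 = best
    "PCT":     (50.0, 100.0), # generic percentage scale with a 50% pass mark
}

def to_common(grade: float, scale: str) -> float:
    """Map a passing grade onto [0, 1]."""
    lo, hi = SCALES[scale]
    if not lo <= grade <= hi:
        raise ValueError(f"{grade} outside {scale} range {lo}-{hi}")
    return (grade - lo) / (hi - lo)

def convert(grade: float, src: str, dst: str) -> float:
    """Convert a grade between two registered scales via the common interval."""
    lo, hi = SCALES[dst]
    return lo + to_common(grade, src) * (hi - lo)

print(convert(8.0, "SI_5_10", "PCT"))  # 8/10 maps to the 80% mark
```

A real converter would replace the linear mapping with the official per-country tables and handle non-numeric (letter) grades.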

A third drawback is that the current system does not consider that the same competency may have varying relevance for different subjects. With the introduction of advanced machine learning algorithms, the RESPO X system could look for similarities between subjects and weight the importance of a competency according to its relevance to a particular subject.

The fourth obstacle, and probably not the last, concerns the protection of personal data: compliance with GDPR rules must be ensured, and layers of data protection must be introduced both at the level of the institution using the application and at the level of the application's owner.

#### **5. Conclusions and Future Work**

European Union policies and guidelines highlight skills as key to sustainable competitiveness, resilience and social inclusion. This realisation is also at the core of the European Skills Agenda, which focuses on investing in lifelong learning (upskilling and reskilling) to sustain recovery from the COVID-19 pandemic and to meet the challenges of a digitalising world and a greener economy. As these changes are already under way and accelerating, Europeans will need to acquire new skill sets or improve their existing skills to better adapt to the rapid changes ahead and to succeed and be satisfied in the future labour market. Indeed, knowledge and skills have become key factors for individual well-being and economic success in the 21st century. Without investing in people's knowledge and skills, a high quality of life in society, technological progress, economic competitiveness and innovation cannot be expected. Countries need to focus on creating the right mix of skills and ensuring that these skills are fully exploited in the labour market.

During the first practical trials of the RESPO application, it was realised that the RESPO database of key competencies needs to be updated and adapted to educational programmes, employers' needs, and international as well as EU strategies and recommendations. Therefore, in another project, RESPO-VI, funded by Norwegian grants, a selection of competencies for the RESPO database will be prepared and adapted to EU policies and strategies and other relevant recommendations (OECD, WEF, UNESCO) as well as employers' needs. The selected RESPO X competencies represent a starting point for a comprehensive list of the most relevant competencies for students and researchers in the field of nanotechnologies, which will evolve throughout the project and become a standard reference for nanotechnology professionals.

The RESPO X application, when fully developed, will be tested among the students of four HEIs during a training to be held in an international environment at the Universitat Politecnica de Catalunya (UPC) in Spain in spring 2023. The planned training will cover one or more of the skills for the smart green transition defined by the partnership, which will be included in the RESPO X expert system and online application. Lecturers from all participating HEIs will collaborate to prepare a joint training composed of several courses, delivered by lecturers or experts from these HEIs. They will focus on content that provides participants with a set of skills for the smart green transition, mainly digital competencies, STEM skills (materials science including plasma science and gas discharges, advanced research and technology), environmental responsibility and sustainability skills. Each participating HEI will select several students, according to selection criteria known in advance, who will attend the lectures and monitor the development of their competencies through the RESPO X application with support from HE staff. The students from all participating HEIs will attend the courses to develop or enhance their competencies and skills and to become effective professionals in their future jobs after completing their studies. The evaluation of the effect of the RESPO X decision support system on competency development will then be compiled into a handbook of policy recommendations based on the validation of the RESPO X application and the students' transdisciplinary training.

The presented transformation of the RESPO expert system into the RESPO X application provides excellent opportunities for significant improvement of the learning process, both in higher education and in lifelong learning. The findings can help professionals working in higher education institutions create appropriate conditions for developing students' future competencies and foster the targeted use of learning analytics tools.

**Author Contributions:** Conceptualisation, A.A., S.Z. and A.Z.; methodology, A.A. and A.Z.; software, M.O. and B.C.; validation, R.K.L. and A.Z.; investigation, A.A. and S.Z.; data curation, B.C.; writing—original draft preparation, A.A. and S.Z.; writing—review and editing, A.A., R.K.L. and A.Z.; visualisation, A.A.; supervision, B.C. and A.Z.; project administration, A.Z.; funding acquisition, A.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work includes results from the RESPO X project within the Erasmus+ programme of the European Union and from the RESPO 2 project, which was co-financed by the Republic of Slovenia and the European Union under the European Social Fund within the Students Innovative Projects for the Benefit of Society programme of the Slovene Public Scholarship, Development, Disability and Maintenance Fund. Part of this work was financed by the competence centre KOC-TOP project, which is co-financed by the Republic of Slovenia and the European Union under the European Social Fund within the Competence Centres programme of the Slovene Public Scholarship, Development, Disability and Maintenance Fund.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All presented data are available at IPS, which has a coordinator role in the RESPO X project.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Production and Characterisation of Pickering Emulsions Stabilised by Colloidal Lignin Particles Produced from Various Bulk Lignins**

**Julia Tomasich 1,2,\*, Stefan Beisl <sup>2</sup> and Michael Harasek <sup>1</sup>**


**\*** Correspondence: julia.tomasich@tuwien.ac.at

**Abstract:** The use of lignin, an abundant phenolic bio-polymer, allows us to transform our fossil-based economy into a sustainable and bio-based one. The transformation of bulk lignin into colloidal lignin particles (CLPs) with well-defined surface chemistry and morphology is a possible way to cope with the heterogeneity of lignin and use it for material applications. These CLPs can be used as emulsifiers in so-called Pickering emulsions, where solid particles stabilise the emulsion instead of environmentally harmful synthetic surfactants. This work investigates the application of CLPs produced from various bulk lignins as a stabiliser in o/w Pickering emulsions with two different oil phases (solid and liquid state). The CLPs had a primary particle size of 28 to 55 nm. They successfully stabilised oil-in-water Pickering emulsions with high resistance to coalescence and a strong gel-like network. This enables novel applications for CLPs in the chemical and cosmetic industries, where they can replace fossil-based and synthetic ingredients.

**Keywords:** lignin; colloidal particles; Pickering emulsion; rheological behaviour

#### **1. Introduction**

Driven by concerns about climate change, environmental pollution, and the exhaustion of fossil fuels, interest in biobased products has risen dramatically in recent years. Since biobased products cost more than those produced from non-renewable resources [1], lignocellulosic biorefineries are designed to meet the challenges of combining renewable feedstock and industrial production technologies [2]. The raw materials for these biorefinery concepts are straw, wood or grass, consisting of cellulose, hemicellulose, and lignin [2]. Currently, the polysaccharides are utilised to produce biofuel or other chemical products [3], whereas lignin is underutilised, and around 40% of it is traditionally burned to generate energy for the biorefinery process [3,4].

Lignin accounts for around 15–30 wt% of the total dry matter of woody and non-woody plants and, thus, is the second-most abundant renewable biopolymer [4]. Depending on the plant source, the complex molecular structure is given by the different amounts of its primary monolignols, p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, which result in p-hydroxyphenyl (H), guaiacyl (G), and syringyl (S) lignin subunits linked by both carbon–carbon and ether bonds [3,4]. According to the presence of the three monolignols in lignin from different plants, it is roughly classified into softwood lignin (mainly coniferyl alcohol), hardwood lignin (coniferyl and sinapyl alcohol), and grass lignin (all three alcohols) [5]. Furthermore, the extraction or pre-treatment processes lead to additional structural changes and modifications of functional groups, making lignin valorisation even more challenging [3,6,7]. Lignin produced by commercial pulp and paper processes, such as kraft lignin or lignosulfonates, contains high amounts of sulphur. In contrast, organosolv lignin, steam explosion lignin, soda/alkali lignin, and enzymatic hydrolysis lignin (EH) are isolated without sulphur in pilot-scale processes [6,8].

**Citation:** Tomasich, J.; Beisl, S.; Harasek, M. Production and Characterisation of Pickering Emulsions Stabilised by Colloidal Lignin Particles Produced from Various Bulk Lignins. *Sustainability* **2023**, *15*, 3693. https://doi.org/ 10.3390/su15043693

Academic Editors: Oz Sahin and Russell Richards

Received: 3 January 2023 Revised: 10 February 2023 Accepted: 14 February 2023 Published: 16 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Despite this challenging heterogeneity caused by its natural and technical origin, the outstanding physicochemical properties of lignin make it worthwhile to investigate methods to valorise the obtained technical lignins. These properties include biodegradability; UV absorbance; good mechanical properties, such as high stiffness; antioxidation characteristics; and resistance to microbial and fungal attacks. Furthermore, lignin contains both hydrophobic and hydrophilic groups [4,6,8,9].

Owing to the physicochemical properties and the availability of lignin, its technical, economic, environmental, and socio-economic dependencies have been investigated in the literature. Chauhan et al. [10] reviewed the application of lignin in various sectors and showed that the usage of this aromatic biopolymer is essential for the bioeconomy and for sustainability. To achieve further value addition from this feedstock, lignin's usage and valorisation must be investigated.

One possible way to valorise technical lignins and to cope with the heterogeneity of lignin is their transformation into colloidal lignin particles (CLPs) with well-defined surface chemistry and morphology [11–13]. In the literature, several production methods for CLPs are described [4,8,12,14,15]. The CLPs utilised in this work were produced by solvent shifting, during which the lignin solubility was decreased by the continuous addition of an anti-solvent (water) [11,16]. The particles thus formed are obtained as highly concentrated aqueous dispersions, ready for numerous applications. Österberg et al. [12] and Beisl et al. [17] summarised the various fields of application for these spherical particles produced by the solvent shift method.

One of the described applications is the use of CLPs as emulsifiers in so-called Pickering emulsions: emulsions of any type that are stabilised by solid particles instead of surfactants [18]. Owing to their stability against coalescence and better biocompatibility, Pickering emulsions can be applied in a broad range of fields [19]. Yu et al. [20] and Saberi Riseh et al. [21] showed that the encapsulation of pesticides and the biocontrol of bacteria are of interest. Taking advantage of the properties of Pickering emulsions, green materials can be used to stabilise agrochemicals or drugs and, furthermore, to control their release [20,22]. Another field where Pickering emulsions stabilised by lignin particles are relevant is the cosmetic industry: the properties of lignin mentioned above make it worthwhile to use CLPs as a natural ingredient in sunscreens to protect human skin from UV irradiation [23,24]. The present study demonstrates the possibility of valorising different technical or bulk lignins produced from various feedstocks. The heterogeneity of the different bulk lignins is decreased by transforming them into CLPs with similar properties. This valorisation step enables lignin to be used, instead of synthetic surfactants, to stabilise droplets in emulsions.

This work investigates the properties of CLPs from different bulk lignins and their ability to stabilise oil-in-water (o/w) Pickering emulsions. Five bulk lignins (organosolv, alkali, and three enzymatically hydrolysed) were dissolved, precipitated into CLPs, and concentrated. The resulting particles were analysed for their particle size, molecular weight, and water contact angle. The Pickering emulsion was prepared by mixing the CLP suspensions with two different oil phases under a high shear rate. The produced emulsions were investigated in terms of rheological behaviour and stability. Furthermore, the emulsions were analysed by fluorescence microscopy to identify the CLPs as emulsifiers in the formed emulsions.

#### **2. Materials and Methods**

In this work, CLP suspensions were produced from different bulk lignins, cleaned, and concentrated. The suspensions were analysed by scanning electron microscopy (SEM), dynamic light scattering (DLS), high-performance size exclusion chromatography (HPSEC), and contact angle measurement. Pickering emulsions were produced with two types of oils and aqueous CLP suspensions. Stability tests, rheological investigation, and fluorescence microscopy were performed to analyse the different lignins in terms of their performance as an emulsifier.

#### *2.1. Materials*

To produce the CLP suspensions, the following materials were used: organosolv lignin (OS) (beech wood, Fraunhofer CBP, Leuna, Germany), alkali lignin (AL) (grass, Protobind 1000, PlT Innovations, Rüschlikon, Switzerland), three different enzymatic hydrolysis lignins originating from birch (EH1) and beech wood (EH2 and EH3), ethanol (EtOH) (96 wt%, AustrAlco GmbH, Spillern, Austria), and ultra-pure water (18 MΩ/cm). The emulsions were prepared with shea butter (refined and deodorised), babassu oil (cold-pressed and refined) (oil phase 1), and MCT oil (oil phase 2) purchased from Naturkosmetik Werkstatt OG, Linz, Austria.

#### *2.2. Preparation of CLP Suspensions*

The bulk lignins were dissolved in aqueous ethanol (60 wt%) and filtered. From these solutions, CLPs were formed by decreasing the solubility of lignin in the medium through the addition of water in a static mixer, as described by Beisl et al. [11]. In order to increase the concentration of the precipitated particles, lower the remaining ethanol concentration, and remove the dissolved lignin and impurities in the suspension, ultrafiltration in diafiltration mode was performed according to the method described in a previous work by Miltner et al. [25]. To eliminate the influence of the lignin particle concentration on the emulsification properties, the particle concentration was set to 4.85 wt% for all suspensions. The production of the CLP suspensions was conducted in cooperation with Lignovations GmbH, Tulln/Donau, Austria.

#### *2.3. CLP Characterisation*

#### 2.3.1. Particle Size and Morphology

The hydrodynamic diameter and size distribution in the aqueous suspension were determined with a Litesizer 500 (Anton Paar GmbH, Graz, Austria) using dynamic light scattering (DLS). The refractive index of the CLPs was set to 1.53 and the imaginary refractive index to 0.1; the given values are averages and standard errors of three separate measurements.

The primary particle size and the morphology of the CLPs were analysed with a scanning electron microscope (SEM) (Quanta 200 FEG-SEM, Fei, Hillsboro, OR, USA) at an acceleration voltage of 5 kV. The samples were sputter-coated twice with 4 nm Au/Pd (60:40 wt%) before measurement. The primary particle size was manually evaluated with the ImageJ software, and the given values are averages and standard errors from 150 counts.
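The "average and standard error from 150 counts" reported above follows the usual definition, where the standard error is the sample standard deviation divided by the square root of the count. A minimal sketch, using made-up diameters rather than the paper's data:

```python
# Minimal sketch of the "average and standard error" statistic reported for
# the manually measured primary particle sizes. The diameters below are
# invented for illustration, not the paper's ImageJ measurements.
import math
import statistics

diameters_nm = [28, 31, 25, 30, 27, 29, 33, 26, 28, 32]  # hypothetical counts

mean = statistics.mean(diameters_nm)
# Standard error of the mean: sample standard deviation / sqrt(N).
sem = statistics.stdev(diameters_nm) / math.sqrt(len(diameters_nm))
print(f"{mean:.1f} nm +/- {sem:.1f} nm")
```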

#### 2.3.2. Contact Angle Measurement

Microscope glass slides were treated with piranha solution (96% H2SO4:30% H2O2, 3:1) for 30 min, cleaned with ultra-pure water followed by EtOH (100%, Chem-Lab NV, Zedelgem, Belgium), and left to dry. These slides were dip-coated three times with CLP suspension to achieve a uniform coating. The wettability of the CLPs was determined by sessile-drop contact angle measurement with distilled water in a tensiometer (Attension Theta Flex Auto 4, Gothenburg, Sweden). The dosing volume of each drop was 5 μL, controlled by both a precise dispenser and drop image analysis.

#### 2.3.3. Molecular Weight

The molecular mass distribution of the CLPs was investigated by alkaline high-performance size exclusion chromatography (HPSEC) using 10 mM NaOH as the eluent. Three columns in series at 40 °C (PW5000, PW4000, PW3000; TOSOH Bioscience, Darmstadt, Germany) were used with an Agilent 1200 HPLC system (flow rate: 1 mL/min, DAD detection at 280 nm; Santa Clara, CA, USA). The particle suspensions were diluted with ultra-pure water, and the pH was adjusted with NaOH to reach the same concentration as the eluent. The columns were calibrated using polystyrene sulfonate reference standards (PSS GmbH, Mainz, Germany) with molar masses at peak maximum of 78,400 Da, 33,500 Da, 15,800 Da, 6430 Da, 1670 Da, 891 Da, and 208 Da.
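A conventional SEC calibration of this kind fits log10(molar mass) of the standards against their elution volume and then uses the fit to assign masses to sample peaks. The sketch below illustrates this under assumptions: only the standard masses come from the text; the elution volumes are invented for illustration.

```python
# Sketch of a conventional SEC calibration: fit log10(M) of the polystyrene
# sulfonate standards vs. elution volume, then predict molar mass at a given
# volume. Standard masses are from the text; elution volumes are hypothetical.
import math

masses_da = [78400, 33500, 15800, 6430, 1670, 891, 208]
elution_ml = [6.0, 6.9, 7.7, 8.6, 10.0, 10.7, 12.2]  # assumed values

# Least-squares straight line log10(M) = a * V + b (stdlib only).
n = len(masses_da)
y = [math.log10(m) for m in masses_da]
x = elution_ml
xm, ym = sum(x) / n, sum(y) / n
a = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
    sum((xi - xm) ** 2 for xi in x)
b = ym - a * xm

def mass_at(volume_ml: float) -> float:
    """Molar mass (Da) predicted by the calibration at an elution volume."""
    return 10 ** (a * volume_ml + b)

print(f"slope {a:.3f} per mL; peak at 9.5 mL -> {mass_at(9.5):.0f} Da")
```

Larger molecules elute first, so the fitted slope is negative; real calibrations often use a third-order polynomial rather than a straight line.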

#### *2.4. Preparation of o/w Pickering Emulsions*

Pickering emulsions were produced by mixing two different oil phases with the aqueous phase containing the CLPs (4.85 wt%) as an emulsifier, using a 1:1 weight ratio, for a total amount of 100 g. The first oil phase (1), containing shea butter and babassu oil (weight ratio 1:1), was heated to ~50 °C until melted, whereas the MCT oil was simply weighed at room temperature. Emulsification was performed with an Ultra Turrax T25 equipped with the dispersing tool S 25 KD—25 G (IKA-Werke GmbH & Co. KG, Staufen, Germany) by continuously adding the oil phase to the CLP suspension while mixing at 12,000 rpm for 2 min. The emulsions containing shea/babassu were mixed at 37 °C because of the melting point of the oil phase, whereas the emulsions containing MCT oil were mixed at room temperature. Parts of the prepared Pickering emulsions were poured into glass centrifuge tubes and left covered, protected from sunlight, for 24 h to stabilise. The remaining emulsions were stored in closed plastic bottles.

#### *2.5. Pickering Emulsion Characterisation*

#### 2.5.1. Stability Test

The Pickering emulsions were centrifuged to investigate their stability to coalescence using a benchtop centrifuge equipped with a swinging bucket rotor (Sigma centrifuge 4–16KS, rotor 11660; Sigma Laborzentrifugen GmbH, Osterode am Harz, Germany) at a relative centrifugal force (RCF) of 500 g for 10 min, and directly thereafter, at RCF 1000 g for 10 min. The volume of the separated phase was measured to determine and compare the relative stability of the emulsions.
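The centrifuge settings above are given as relative centrifugal force (RCF, in multiples of g). A sketch of the standard conversion between RCF and rotor speed, RCF = 1.118e-5 × r(cm) × rpm², is shown below; the 16.5 cm rotor radius is an assumed value for illustration, not taken from the rotor's datasheet.

```python
# Standard RCF <-> rpm conversion: RCF = 1.118e-5 * r_cm * rpm^2.
# The 16.5 cm radius is an assumption, not the rotor 11660 specification.
import math

def rpm_for_rcf(rcf: float, radius_cm: float) -> float:
    """Rotor speed (rpm) needed to reach a target RCF at a given radius."""
    return math.sqrt(rcf / (1.118e-5 * radius_cm))

for rcf in (500, 1000):  # the two steps used in the stability test
    print(f"RCF {rcf} g -> {rpm_for_rcf(rcf, 16.5):.0f} rpm")
```

Because RCF scales with rpm squared, doubling the force from 500 g to 1000 g requires only about a 41% increase in rotor speed.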

#### 2.5.2. Fluorescence Microscopy

The Pickering emulsions were imaged using a polarisation microscope (Nikon Upright Eclipse Ci, objective Plan Apo λ 40x/0.95 equipped with a Nikon LV-UEPI2 Universal Epi Illuminator 2, Long Island, NY, USA) using a FITC filter with a range from 465–495 nm for excitation and 515–555 nm for emission. The emulsion was placed on a microscope glass covered by a coverslip with a spacer in between. The droplet size of the emulsions was manually evaluated with ImageJ software, where at least 100 droplets were measured for each droplet size distribution.

#### 2.5.3. Rheological Tests

All rheological analyses were performed using a plate–plate rheometer (MCR300 SN621304, measuring system PP25, Anton Paar GmbH, Germany; gap size 1 mm; Rheoplus/32 Multi3 V3.40). The temperature was set to 25 °C by a Peltier lower plate. The linear viscoelastic (LVE) region of each emulsion was determined with an amplitude sweep test at constant angular frequency (strain γ = 0.01–10%, angular frequency ω = 10 1/s). Based on the LVE region, a frequency sweep test was performed to investigate the trend of the storage and loss moduli of the emulsions at constant deformation (strain amplitude = 1%). Additionally, a dynamic viscosity measurement was carried out within a specific shear rate range (0.0001–0.02 1/s).

#### **3. Results and Discussion**

#### *3.1. CLP Suspensions*

Solutions of five different bulk lignins in aqueous ethanol (60 wt%) were prepared, from which CLP suspensions were obtained by the addition of an anti-solvent in a static mixer. The suspensions were purified and concentrated in membrane filtration steps. The produced CLPs were chemically and physically characterised by dynamic light scattering, SEM imaging, HPLC, and contact angle measurements with water.

The hydrodynamic diameter (HD) resulting from DLS measurements showed significant differences between the CLPs produced from the different bulk lignins (Figure 1). EH1 CLPs showed the minimum HD of 80 nm (±0.5 nm), followed by EH2 and EH3 CLPs with 108 nm (±4 nm) and 110 nm (±0.4 nm), whereas particles produced from OS lignin and AL showed distinctly higher HD values of around 300 nm.

**Figure 1.** Hydrodynamic diameter and polydispersity (DLS) of CLPs and primary particle size average (SEM).

For comparison with these results, the primary particles were also analysed. A primary particle was assumed to be a constituent particle of aggregates or agglomerates, identifiable by its spherical shape. The size of the primary particles, evaluated by manual measurement of the CLPs in the SEM images (Figure 2), differed significantly from the HD results. However, the order of magnitude of the primary particle size was similar for the different CLPs, ranging from 28 nm (±8 nm) for AL to 55 nm (±20 nm) for EH3. The difference between the hydrodynamic diameters obtained by DLS and the primary particle sizes obtained from the SEM images can be explained by agglomeration or aggregation of the primary particles. CLPs produced from AL, comprising the smallest primary particles, form agglomerates with HDs of 320 nm (±9 nm), while CLPs with primary particle sizes of 55 nm (EH3) form smaller agglomerates with HDs of 110 nm. This result coincides with the theory of Zhang et al. [26] that agglomeration occurs because of the greater surface energy of smaller particles and, consequently, a stronger hydrogen bonding effect. However, the data in Figure 1 further show that agglomerate formation depends on the bulk lignin's origin, since particles produced from EH lignin show a lower tendency to agglomerate than CLPs from OS and AL.

In Figure 3, the mass-averaged molecular weights (Mw) and polydispersity indices (PDIs) of the CLPs are presented. Comparing the CLPs' molecular weights, only small differences are noticeable: the Mw values range from 1691 Da (EH3) to 1914 Da (OS). The varying Mw values for the three CLPs produced by EH might be explained by different conditions during the enzymatic hydrolysis processes.

Colloidal lignin particles tend to stabilise oil-in-water (o/w) or water-in-oil Pickering emulsions, depending on their hydrophilicity or hydrophobicity [27,28]. CLPs with contact angles θH2O < 90° are preferentially wetted by the aqueous phase and build an adsorbent layer around the oil droplets due to their hydrophilicity [29]. The results shown in Table 1 indicate that the produced CLPs are hydrophilic, with contact angles ranging from 31.4° to 45.1°. The contact angle for AL CLPs is slightly higher than that for OS CLPs, suggesting that this difference in hydrophilicity contributed to the generation of smaller primary particles for AL [26]. The three CLPs produced from EH show strongly varying contact angles, from 31.4° (EH3) and 38.1° (EH1) to 45.1° (EH2). This connects to the molecular weight distribution results, where the highest Mw among the EH CLPs was found for EH2 and the lowest for EH1 (Figure 3), suggesting that, for the enzymatically hydrolysed lignins, the Mw distribution slightly influences the contact angle.

**Figure 2.** SEM images of OS (**a**), AL (**b**), EH1 (**c**), EH2 (**d**), and EH3 (**e**) CLPs.

**Table 1.** Contact angle of water on CLPs.


#### *3.2. Pickering Emulsions*

The Pickering emulsions were prepared by mixing CLP suspensions with an oil phase (mass ratio 1:1) and applying high shear rates. The two oil phases were a 1:1 mixture of shea butter and babassu oil (solid state) and MCT oil (liquid state), respectively, at room temperature. The mixture of babassu oil and shea butter was chosen to represent the oil/fat phase of cosmetic creams. To investigate the ability of the produced CLPs to stabilise o/w emulsions, centrifugation at different RCFs (500 g, 1000 g) was conducted.

**Figure 3.** Mw with the PDI of the different CLP suspensions.

The molecular structure of lignin, with its mainly hydrophilic and minor hydrophobic parts, results in water contact angles of 35–45°, as mentioned in Section 3.1. This property accounts for the stabilisation of the o/w Pickering emulsions: the water phase wets the CLPs, while the minor hydrophobic part of each particle allows it to attach to the oil phase, thereby stabilising the oil drop in the continuous water phase [19,28]. The particle size and morphology of the primary particles also influence the ability to stabilise emulsions [19].

Directly after emulsification, all samples were homogeneous and showed no segregation. For stability monitoring, a 7 mL sample of each emulsion was transferred into a centrifugation tube. Segregation of a water phase below the homogeneous emulsion layer was first observed after 24 h for the three emulsions prepared with EH and MCT oil. The segregation volume was monitored after each centrifugation step and is shown in Figure 4. Interestingly, only segregation of the water phase was observed, with the emulsion resting on top of the segregated aqueous phase (Figure 5). This could indicate a stable dispersion of the oil droplets by the CLPs and good resistance to coalescence.

**Figure 4.** Segregation of the water phase of prepared Pickering emulsions with different CLP suspensions and shea/babassu oil (**a**) and MCT oil (**b**), before and after centrifugation at 500 g and 1000 g.

**Figure 5.** Emulsions with shea/babassu and AL after 24 h (**a**) and after centrifugation at 1000 g (**b**); emulsions prepared with MCT and EH1 after 24 h (**c**) and centrifugation at 1000 g (**d**).

Despite the strong particle agglomeration in the OS and AL suspensions, these two CLP suspensions formed the most stable emulsions according to the stability tests. The highest coalescence and segregation were observed for the emulsions stabilised by CLP suspension EH1. Comparing the two oil phases, the emulsions prepared with MCT oil already showed a higher tendency towards segregation before the centrifugation step. Nevertheless, the remaining emulsion phases showed segregation behaviour comparable to that of the emulsions prepared with shea/babassu oil. In Figure 5, AL CLP-stabilised and EH1 CLP-stabilised emulsions are shown before and after centrifugation. Overall, the stability tests suggest that all emulsions contain a phase with a strong network of dispersed droplets, which remains stable at 1000× g.

Moreover, the droplet size of the o/w emulsions was evaluated by measuring the droplets in images taken by fluorescence microscopy (Figure 6). Due to the fluorescence of lignin molecules [30], the fluorescence microscopy images show oil droplets surrounded by CLPs. The average droplet sizes can be found in Table 2. However, no general correlation between droplet size and emulsion stability was identified.

**Figure 6.** Images taken by fluorescence microscopy: OS shea/babassu (**a**), EH1 MCT (**b**).



The rheological behaviour of emulsions provides information about their structure, the properties of the components, and their stability [31]. Therefore, the prepared emulsions were investigated concerning their viscosity over a specific shear rate and analysed by oscillatory tests.

The amplitude sweep test was performed to find the linear viscoelastic (LVE) region of the emulsions, where the structure of the sample remains intact. The tests were performed by applying oscillatory strain with a continuously rising amplitude (0.01–10%) at a fixed frequency (10 1/s) while recording the storage (*G′*) and loss (*G″*) moduli. The maximum amplitude allowed for the emulsions can be derived by plotting the complex modulus *G\**, calculated from the storage and loss moduli (Equation (1)), over the deformation amplitude (γ).

$$|G^*| = \sqrt{(G')^2 + (G'')^2} \tag{1}$$

The maximum value of the deformation amplitude γ was ascertained from the data plotted in Figure 7 in order to stay within the linear viscoelastic region, according to the evaluation process explained in the literature [31,32]. The critical deformation amplitude is marked in both plots (Figure 7) at γcrit = 1.2%.
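As an illustration, the evaluation of Equation (1) and the selection of a critical amplitude can be sketched numerically. The moduli below are synthetic, illustrative values (not the measured data of this study), and the 95% plateau threshold is an arbitrary choice for the sketch rather than the criterion used here:

```python
import numpy as np

# Hypothetical amplitude-sweep data: deformation amplitude gamma (%) and the
# storage (G') and loss (G'') moduli in Pa (illustrative values only).
gamma = np.logspace(-2, 1, 50)                   # 0.01 % ... 10 %
g_prime = 500.0 / (1.0 + (gamma / 1.2) ** 2)     # plateau followed by decay
g_loss = 100.0 / (1.0 + (gamma / 1.2) ** 2)

# Complex modulus |G*| according to Equation (1).
g_star = np.sqrt(g_prime**2 + g_loss**2)

# Estimate the critical amplitude as the point where |G*| falls below
# 95 % of its low-amplitude plateau value (illustrative threshold).
plateau = g_star[:5].mean()
gamma_crit = gamma[np.argmax(g_star < 0.95 * plateau)]
print(f"estimated critical amplitude: {gamma_crit:.2f} %")
```

In practice, the plateau value and the deviation threshold would be read from the measured amplitude sweep, as done graphically in Figure 7.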

**Figure 7.** Amplitude sweep test of CLP emulsions with shea/babassu (**a**) and MCT oil (**b**) to determine the LVE region.

The frequency sweep tests are dynamic oscillatory measurements that give information about the viscous and elastic behaviour of a viscoelastic system. Based on the result of the amplitude sweep test, the frequency sweep tests were carried out at a deformation amplitude γ of 1% to obtain the storage (*G′*) and loss (*G″*) moduli of the emulsions. The moduli indicate a strong or weak network of dispersed droplets in the emulsions and, therewith, elastic or viscous behaviour. A strong network is indicated by fulfilment of the criterion *G′* > *G″* with both moduli being independent of frequency. The ratio of the moduli is shown in Figure 8a,b by plotting the loss factor tan δ (Equation (2)) [33]. A low and constant loss factor implies an elastic response of the emulsions under shear stress and, therefore, a more developed gel-like colloidal network [34].

$$\tan\delta = \frac{G''}{G'} \tag{2}$$
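The gel criterion described above (Equation (2) together with *G′* > *G″* and frequency independence) can be sketched as a simple check on frequency-sweep data. The moduli below are synthetic placeholders, not the measured values from this study:

```python
import numpy as np

# Hypothetical frequency-sweep data (angular frequency in rad/s) for one
# emulsion; the moduli are illustrative constants, not measured data.
omega = np.logspace(-1, 2, 30)
g_prime = np.full_like(omega, 800.0)   # storage modulus G' in Pa
g_loss = np.full_like(omega, 120.0)    # loss modulus G'' in Pa

# Loss factor according to Equation (2).
tan_delta = g_loss / g_prime

# A gel-like, stable network is indicated by G' > G'' at every frequency
# and by a loss factor that stays low and roughly constant over frequency.
elastic = bool(np.all(g_prime > g_loss))
flat = float(tan_delta.max() - tan_delta.min()) < 0.1
print(elastic, flat, tan_delta.mean())
```

For the EH1 shea/babassu emulsion discussed below, both checks would fail: the moduli are close to each other and frequency-dependent.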

The emulsions prepared with shea/babassu showed similar behaviour during the frequency sweep, and the loss factor remained relatively constant, except for that of the emulsion prepared with CLP suspension EH1 (Figure 8a). The low and frequency-independent values of the loss factor for the OS, AL, EH2, and EH3 formulations indicate an elastic response of the emulsion and, thus, the presence of a gel-like colloidal network. In Figure 8c, the two moduli are plotted for the emulsions prepared with AL (green) and EH1 (purple) with shea/babassu. The AL emulsion shows an almost frequency-independent trend of the moduli and fulfils the criterion for stable emulsions, *G′* > *G″*, over the applied angular frequency range. Hence, an elastic response to the applied shear stress is given due to the existence of a strong network of dispersed droplets stabilised by particles. This result coincides with the stability tests of the emulsions (Figure 4a), where the segregation volume for the AL emulsion was very low. The frequency sweep tests performed on the emulsions prepared with CLP suspensions OS, EH2, and EH3 (Figure 8e) showed similar behaviour. On the other hand, the trend of the moduli (Figure 8c), as well as the trend of the loss factor (Figure 8a), for the shea/babassu emulsion prepared with CLP suspension EH1 strongly depends on the frequency, and the values of the moduli are close to each other (*G′* ≈ *G″*). This result also agrees with the stability tests mentioned above.

**Figure 8.** Loss factor resulting from frequency sweep tests with shea/babassu emulsions (**a**) and MCT emulsions (**b**); storage and loss modulus of shea/babassu with AL and EH1 (**c**); storage and loss modulus MCT with AL and EH2 (**d**); storage and loss modulus of shea/babassu with EH2, EH3, and OS (**e**); storage and loss modulus of MCT with EH1, EH3, and OS (**f**).

The loss factor results for the emulsions prepared with MCT oil show frequency independence and low values for the CLP suspensions EH1, EH3, and OS (Figure 8b). The loss factor of the emulsion prepared with AL showed strong dependence at low frequencies and weaker dependence at higher frequencies. The loss factor of the emulsion prepared with CLP suspension EH2 depended strongly on the frequency (Figure 8b). In Figure 8d, the frequency dependence of the moduli for EH2 is highlighted even more, as is the failure to fulfil the criterion *G′* > *G″* for this emulsion. The trend of the moduli for the AL emulsion also shows frequency dependence, but fulfils the criterion *G′* > *G″* above a certain frequency. The frequency sweep tests for the emulsions prepared with MCT oil and CLP suspensions OS, EH1, and EH3 yield frequency-independent moduli, and the data fulfil the criterion *G′* > *G″*, indicating strong networks and an elastic response to shear stress. These rheological data do not agree entirely with the results of the stability tests carried out in the centrifuge (Figure 4b), which might be due to the different physical properties of MCT oil compared to the other oil phase, which consisted of shea butter and babassu oil.

Furthermore, the emulsions stabilised by the OS CLP suspension demonstrated the best long-term stability with shea/babassu as well as with MCT oil. This coincides well with the very low loss factor, which shows almost no frequency dependence for both emulsions (Figure 8a,b), as well as with frequency-independent moduli fulfilling the criterion *G′* > *G″*.

Emulsions commonly exhibit non-Newtonian properties, meaning that the viscosity is a function of the shear rate. Typically, such non-Newtonian fluids show shear-thinning behaviour, in which the viscosity decreases as the shear rate increases [35].

The viscosity measurements were performed at very low shear rates (0.0002–0.02 1/s). In Figure 9, the viscosity trends of the shea/babassu emulsions (a) and the MCT emulsions (b) are shown. Non-Newtonian, shear-thinning behaviour can be observed for all emulsions. Furthermore, according to Derkach [36], the viscosity of Pickering emulsions is strongly dependent on the size of the stabilising particles: high viscosity values are achieved with small stabilising particles. The high viscosity of the shea/babassu and MCT emulsions stabilised by the OS CLP suspension therefore indicates that some of the agglomerates measured by DLS (OS and AL CLPs, Figure 1) might disperse during emulsion formation. The high viscosity is a further explanation of the observed long-term stability of the OS emulsions.
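Shear-thinning flow curves of this kind are often summarised by the Ostwald-de Waele power law, η = K·γ̇^(n−1), where a flow index n < 1 indicates shear thinning. The sketch below (with synthetic data, not the measured curves of Figure 9) shows how n and K can be recovered by linear regression in log-log space:

```python
import numpy as np

# Synthetic shear-thinning data following the Ostwald-de Waele power law
# eta = K * shear_rate**(n - 1); K and n are illustrative values, not
# parameters fitted to the measurements in this study.
K_true, n_true = 50.0, 0.3
shear_rate = np.logspace(-3.7, -1.7, 20)          # roughly 0.0002 ... 0.02 1/s
eta = K_true * shear_rate ** (n_true - 1.0)       # viscosity in Pa*s

# Fit the flow index n by linear regression in log-log space:
# log(eta) = log(K) + (n - 1) * log(shear_rate)
slope, intercept = np.polyfit(np.log(shear_rate), np.log(eta), 1)
n_fit, K_fit = slope + 1.0, np.exp(intercept)
print(f"n = {n_fit:.2f} (n < 1 indicates shear thinning), K = {K_fit:.1f}")
```

Comparing fitted n values between formulations would give a compact, quantitative way to rank how strongly each emulsion shear-thins.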

**Figure 9.** Viscosity of the emulsions prepared with shea/babassu (**a**) and MCT (**b**).

The comparison of the results from the frequency sweep and viscosity measurements shows good agreement. For the emulsion with shea/babassu stabilised by CLP suspension EH1, the low viscosity (Figure 9a) coincides with the results from the frequency sweep test (Figure 8a,c). The results for the emulsion with MCT stabilised by EH2 likewise agree. Furthermore, the viscosity values, as well as the results from the frequency sweep tests for the emulsions stabilised by EH3, OS, and AL, indicate a strong network of dispersed droplets stabilised by small particles.

#### **4. Conclusions**

The production and characterisation of colloidal lignin particles from different bulk lignins, followed by the preparation of o/w Pickering emulsions stabilised by those CLPs, were performed in the present work. The transformation of the bulk lignins into CLPs resulted in particles with similar primary particle sizes (27–54 nm) and molecular weights in the same range (1691–1914 Da) with relatively low PDIs. The measurement of the contact angle with water gave a first indication of the ability of the CLPs to stabilise o/w Pickering emulsions. All CLPs showed hydrophilic properties, with contact angles from 31.4° to 45.1°. The agglomeration of the OS and AL CLPs did not affect their ability to stabilise o/w Pickering emulsions.

Pickering emulsions containing a shea butter/babassu oil mixture, as well as MCT oil, were characterised through stability tests, fluorescence microscopy, and rheological analyses. Emulsions containing shea/babassu showed slightly higher stability than those containing MCT oil, suggesting an effect of their different aggregate states at room temperature. Moreover, according to the stability tests, the CLPs produced from OS and AL lignin performed better as emulsifiers than the EH CLPs. Furthermore, only segregation of water was observed, indicating a stable droplet network and a high resistance to coalescence of the oil phase.

The linear viscoelastic region was ascertained by performing an amplitude sweep test on all prepared emulsions. The critical deformation amplitude was found to be 1.2%. Hence, the oscillatory frequency sweep tests were performed at an amplitude of 1%. Physical stability and a gel-like colloidal network were observed for the emulsions with shea/babassu stabilised by AL, OS, EH2, and EH3, and for those with MCT oil stabilised by OS, EH1, and EH3. Non-Newtonian, shear-thinning behaviour of all emulsions was confirmed by the viscosity measurements.

Overall, the use of several different bulk lignins to produce CLPs, as well as the subsequent preparation of o/w emulsions, was successful. The transformation of bulk lignins into colloidal particles made it possible to obtain particles with similar properties. The valorisation of various bulk lignins from different feedstocks by producing CLPs thus offers an opportunity to overcome the heterogeneity of available lignins, an essential step towards replacing synthetic or fossil-based materials on the way to a bio-based economy. The ability of the produced CLPs to stabilise the prepared o/w Pickering emulsions shows their potential to serve as emulsifiers in different application fields. Replacing synthetic surfactants in the cosmetic industry would be an improvement in terms of human health and environmental friendliness.

**Author Contributions:** Conceptualization, J.T.; methodology, J.T.; formal analysis, J.T.; investigation, J.T.; writing—original draft preparation, J.T.; writing—review and editing, S.B. and M.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Austrian Research Promotion Agency (FFG), grant number FO999887928. Open Access Funding by TU Wien.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The production of the CLP suspension was carried out in cooperation with Lignovations GmbH, Austria. The contact angle measurement was carried out in cooperation with Oihana Gordobil, InnoRenew CoE, Slovenia. The fluorescence microscopy was carried out using the facilities of the Research Group for Physical Chemistry of Aerosol Particles, TU Wien, Austria. The rheological investigations were carried out using facilities of the Research Group for Polymer Chemistry and Technology, TU Wien, Austria. Scanning electron microscopy image acquisition of the lignin particles was carried out using facilities at the University Service Centre for Transmission Electron Microscopy (USTEM), TU Wien, Austria. The authors acknowledge the TU Wien University Library for financial support through its Open Access Funding Program.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Juan Aranda 1,\*, Tasos Tsitsanis 2, Giannis Georgopoulos 3 and Jose Manuel Longares 1**


**Abstract:** The market for energy services in the residential sector in Europe is currently very limited. Several reasons can be cited, such as the high transaction costs in a highly fragmented market and the low energy consumption per dwelling. The rather long payback time for investments renders Energy Service Companies' (ESCOs) services financially unattractive for many ESCOs and building residents, thus leaving untapped a large potential for energy savings in a sector that is responsible for almost half of Europe's energy consumption. If the ambitious 2030 and 2050 decarbonisation targets are to be met, the EU's residential sector must be part of the solution. This paper offers insights into novel ESCO business models based on data-intensive Artificial Intelligence algorithms and analytics that enable the deployment of smart energy services in the domestic sector under a Pay-for-Performance (P4P) approach. The combination of different sources of energy efficiency services and the optimal participation of domestic consumers in aggregated demand response (DR) schemes open the door to new revenue streams for energy service providers and building residents and reduce the hitherto long payback periods of ESCO services in the sector. Innovative business models for ESCOs and demand flexibility Aggregators are thoroughly described. Specially customised Performance Measurement and Verification protocols enable fair and transparent P4P ESCO contracts. The new human-centric energy and non-energy services increase the energy consumption awareness of building users and deploy behavioural and automated responses to both environmental and market signals to maximise the economic benefit for both energy service providers and consumers, always respecting data protection rules and the consumers' comfort preferences. The new hybrid business models of P4P energy services make traditional EPC more attractive to energy service providers, with low-cost data collection and treatment systems bringing payback periods below 10 years in the residential building sector.

**Keywords:** ESCO business models; aggregator business models; energy services for buildings; energy performance contract; Pay-for-Performance contract; energy efficiency in buildings; demand response; demand side flexibility; residential sector

#### **1. Introduction**

Despite the large economic energy saving potential in the EU [1], the Energy Service Companies (ESCOs) market is less developed in the domestic building sector than in other sectors, such as the industrial or service building sectors [2,3]. Around 80% of the ESCO market is focused on public buildings, mainly educational and healthcare facilities and municipal and regional buildings. The residential building sector is, to a large extent, outside the scope of ESCOs [4]. Energy Performance Contracting (EPC) providers have been most active in the services and public building sectors, since they mainly target energy contracting offerings at large customers, which is partly explained by the large transaction costs of energy performance contracts [5].

**Citation:** Aranda, J.; Tsitsanis, T.; Georgopoulos, G.; Longares, J.M. Innovative Data-Driven Energy Services and Business Models in the Domestic Building Sector. *Sustainability* **2023**, *15*, 3742. https:// doi.org/10.3390/su15043742

Academic Editors: Oz Sahin and Russell Richards

Received: 29 December 2022 Revised: 10 February 2023 Accepted: 15 February 2023 Published: 17 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

There are, indeed, specific barriers that make a large-scale application of the conventional ESCO model to residential buildings particularly difficult [6], apart from sector-related barriers (low per-unit consumption, few identifiable homogeneous units, lack of the energy intensity necessary to justify investment under today's EPC business models [7,8]). In addition, the decentralised structure of the residential sector hinders the uptake of EPC [9].

However, several developments point the way towards the definition and deployment of innovative energy services that can transform small residential consumers into active energy actors and equal participants in progressively opening energy markets [10,11]: the increasing penetration of smart solutions for residential dwellings (smart home technologies) and the generation of huge data streams that facilitate better knowledge of the demand side; the drastic cost reduction for on-site generation and storage; the proliferation of self-consumption models and energy communities; and the growing decentralisation of the energy system, which intensifies the need to introduce small residential consumers into smart grid management strategies.

Such favourable conditions are further enhanced by the political commitment at the EU [12] and national levels to empowering small consumers to become active elements of the future energy system and an integral part of the integrated EU Energy Market, thus necessitating new business models and services that can facilitate this transition of the EU energy landscape. Moreover, with the drastic reduction in technology costs and the opportunity to create significant revenue streams through energy markets, it becomes evident that a new era is arising for the residential buildings market, associated with very attractive payback periods for targeted investments in energy efficiency, self-consumption optimization, and the provision of services to energy grids through demand response and flexibility [13].

In this sense, an analysis of the framework conditions reveals important factors that can impact the technology deployment in the market in the future:


- Energy prices have a significant impact on the feasibility of many energy efficiency measures. In particular, low energy prices make it difficult to guarantee short-term returns on energy efficiency investments. This can be intensified in some Member States, where energy prices are highly subsidised and, hence, consumers have no incentive to reduce their energy consumption.


The novel services presented in this paper use enhanced Performance Measurement and Verification (PMV) protocols to construct continuous consumption and generation baselines from real-time data (metering and sensing). This PMV is the foundation of the Pay-for-Performance (P4P) principle of the new energy services, which goes beyond the traditional EPC currently used by ESCOs for energy retrofitting solutions in buildings, as depicted in Figure 1.

**Figure 1.** Factors hindering the penetration of EPC in the residential building sector.

The paper begins by describing the Pay-for-Performance (P4P) approach applied to the provision of energy services in the residential sector in Section 2. A proposal for innovative energy service bundles is made in Section 3, targeting energy efficiency services, demand response services to the grid, and non-energy services under a P4P perspective. The paper then describes how the new services can be exploited in two new business models, for ESCOs and demand side Aggregators, in Section 4. Finally, a comprehensive development of a tailored PMV methodology is presented in Section 5 to allow a fair and accurate verification of energy and non-energy service performance.

#### **2. The P4P Approach**

Pay-for-Performance (P4P) programmes have existed for more than 20 years in different forms, primarily targeting the commercial and industrial building sectors, mainly due to the wide smart metering penetration in these sectors [25]. Such programmes have mostly been utility-driven, focusing on building retrofitting performed on behalf of the utility by third parties, which, in turn, receive incentive payments for the actual savings achieved over time and at specified milestones.

On the other hand, historically and to date, only a few programmes have been oriented towards the residential building sector. The non-availability of fine-grained metering data from residential buildings and loads has been the main factor hindering the penetration of P4P into this sector. Moreover, the available programmes target individual energy efficiency measures focused on achieving energy savings from the replacement of lighting devices and systems, which can be verified and estimated a priori, but they thereby miss the opportunity to unleash the energy savings potential of residential buildings from a whole-building perspective. Another inefficiency of available P4P programmes is that they mostly focus on and remunerate energy savings achieved through specific measures, missing the system perspective and the new role buildings can assume in the smart energy system through their transformation into active flexibility nodes and their introduction as equal participants in flexibility markets.

With the increasing availability of household smart metering, sub-metering, and IoT data, the residential sector faces new opportunities that can be realized through the design and launch of innovative P4P programmes that effectively combine energy efficiency and flexibility-triggering measures. Increased data accessibility and granularity can facilitate the measurement and verification (M&V) of achieved savings and provided flexibility in a transparent and objective manner [26], thus enhancing customers' trust in P4P programmes and services. Advancements in data analysis and AI need to be adopted for the definition of innovative data-driven methods for the dynamic and accurate baselining of energy performance in residential buildings, with a particular view to the objective verification of achieved savings and the remuneration of flexibility services. Moreover, new business models need to emerge that can effectively combine energy efficiency and demand response (flexibility) services and allow both Energy Service Companies (ESCOs) and Aggregators to engage in new business, improve the attractiveness of their traditional offerings and enhance them with hybrid services, de-risking the relevant retrofitting investments and ensuring the viability of their business on the basis of standardized hybrid P4P service contracts.

This novel and enhanced P4P approach lies at the heart of the work presented in this paper and intends to deliver the next generation of Energy Performance Contracting (EPC) on the basis of (a) hybrid innovative energy services properly combining energy efficiency and demand response; (b) objective, data-driven, and AI-enabled measurement and verification schemes to ensure the objective verification of savings as well as fair and transparent remuneration of flexibility under the principles of Pay for Performance; (c) synergetic business models between Aggregators and ESCOs; and (d) clear and well-specified legal/contractual provisions.

The ultimate target of the P4P approach presented hereinafter is to extract significant value from the huge energy efficiency and flexibility potential of the (currently overlooked) residential sector by leveraging a single point of contact for both EE and DR services, increasing the attractiveness and transparency of the relevant investments and services, while preparing the ground for significant uptake and market penetration through the validation of significantly reduced payback times (in comparison with current P4P programmes addressing the residential building sector), achieved through the creation of new revenue streams from residential flexibility transactions in energy markets.

Our P4P approach introduces a variety of service bundles to be provided by ESCOs/Aggregators, under the Energy-as-a-Service (EaaS) model, to residential consumers in the frame of hybrid energy efficiency and flexibility offerings under the principle of Pay for Performance. These bundles combine (i) building retrofitting and investments in the installation of smart equipment (metering, sensing, actuating), together with extended offerings for the installation of distributed generation (PV) and storage (battery) units; (ii) energy efficiency services, spanning behavioural transformation and targeted guidance towards energy savings along with more advanced concepts for net metering/self-consumption maximization through smart automation; (iii) flexibility services (with the introduction of storage and electric vehicles as means of enhancing flexibility); and (iv) non-energy services (comfort preservation, indoor air quality, security, well-being, emergency notification services, etc.). Figure 2 summarises the main elements of the P4P contracts and their associated advantages.

**Figure 2.** The frESCO P4P Approach—Main elements and associated advantages and benefits.

Energy efficiency and flexibility services leverage the latest advancements in Big Data management and AI analytics to safeguard the data-driven nature of the P4P approach introduced, while promoting truly personalized and human-centric services and ensuring the transparency and objectivity of measurement and verification through a dynamic baselining and normalization approach that relies on accurate long- and short-term forecasting of building energy performance (demand, generation, flexibility). To this end, a standards-based end-to-end interoperability framework has been established, facilitated through an advanced Big Data Management Platform that effectively addresses data collection, management, processing, and analysis for the variety of services involved in our Pay-for-Performance approach. The Big Data Management Platform enables interoperable bi-directional communication and data exchange between building amenities and the software artefacts used to provide the P4P services, while preserving data privacy and security through the user-centric definition and assessment of custom rules for data anonymization and accessibility.

The innovative services bundles introduced in our P4P approach are complemented by appropriately drafted business models (for ESCOs and Aggregators) that focus on the establishment of highly profitable business cases for all involved actors by properly extending traditional P4P business offerings. Such new business models aim at allowing ESCOs and Aggregators to individually or jointly provide energy efficiency services combined with flexibility services to the energy system under hybrid service contracts and legal arrangements that ensure attractive payback periods for any investments associated with the services (from IoT device and smart meter/sub-meter subsidies, to generation, storage, and EV incentives).

Settlement of P4P service contracts is performed on the basis of a data-driven Measurement and Verification Methodology that leverages real-time data streams from building amenities to ensure transparent verification of energy savings and flexibility provision and, correspondingly, objective remuneration of all actors involved in the realization of the P4P services. Our Measurement and Verification Method is intended to offer fairness, simplicity, accuracy, and replicability in order to foster end users' trust in the remuneration mechanism, by properly enhancing existing methodologies and overcoming current barriers (such as the selection of representative days as a basis for estimation, the setting of exclusion rules to avoid considering non-representative consumption, the definition of adjustment types and windows, and manipulation attempts by users). It provides a more objective measurement of the achieved performance for each specific building and an AI-enabled, evidence-based definition of the relevant energy performance baselines (in the long and short term), thus ensuring accurate calculation and transparent verification of the provided flexibility and achieved energy savings.
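The core of any such measurement-and-verification settlement is the comparison of metered consumption against a dynamic, weather-adjusted baseline of what consumption would have been without the intervention. The sketch below illustrates this principle with a deliberately simple temperature-regression baseline and synthetic data; the paper's actual approach uses far richer AI-based forecasting, so every figure and model choice here is a hypothetical stand-in:

```python
import numpy as np

# Minimal M&V sketch: fit a baseline model (here, ordinary least squares on
# outdoor temperature) to the pre-retrofit period, then predict what
# consumption *would have been* in the reporting period. Avoided energy =
# predicted baseline - metered consumption. All figures are illustrative.
rng = np.random.default_rng(0)

temp_pre = rng.uniform(0, 20, 100)                           # daily mean temp, degC
load_pre = 30.0 - 0.8 * temp_pre + rng.normal(0, 0.5, 100)   # metered kWh/day

# Baseline model: load = a + b * temperature.
b, a = np.polyfit(temp_pre, load_pre, 1)

# Reporting period after a (hypothetical) retrofit saving ~5 kWh/day.
temp_post = rng.uniform(0, 20, 30)
load_post = 25.0 - 0.8 * temp_post + rng.normal(0, 0.5, 30)

baseline = a + b * temp_post                  # weather-adjusted baseline, kWh/day
savings = float(np.sum(baseline - load_post)) # verified kWh over the period
print(f"verified savings over reporting period: {savings:.0f} kWh")
```

Under a P4P contract, the service provider's remuneration would then be computed as an agreed share of this verified quantity rather than of an a priori engineering estimate.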

#### **3. Innovative Advanced Energy Services**

ESCOs and demand side Aggregators are the business stakeholders of the new generation of energy services addressed to the residential sector. ESCOs deliver energy efficiency services, while demand flexibility Aggregators valorise aggregated demand response as a service to the grid for balancing and congestion management. In order to maximise the benefits for building users and facility managers, the two roles may be played by the same energy service provider, utilising a common data collection and management infrastructure. A proposal for the new data-driven innovative energy services for building users is shown in Table 1.


**Table 1.** Proposal of new data driven ESCO and Aggregator innovative energy services for residential consumers.



#### *3.1. New Data-Driven Energy Efficiency Services*

The potential of building data usage in building energy management systems is increasingly recognised [27,28]. Energy Efficiency (EE) services focus on obtaining energy savings in different ways, from powerful energy analytics to strategies for the optimal use of self-produced energy. Two main efficiency strategies are proposed: (a) giving recommendations to users to implicitly trigger EE actions, and (b) smart control and scheduling to trigger automatic actions on controllable Distributed Energy Resources (DER). One service, specifically addressed to prosumers, aims at optimising self-consumption from distributed generation assets.

In this regard, the following four energy services are proposed for energy efficiency:


These efficiency services, along with the additional retrofitting measures encompassed in traditional EPC contracts, require a means of forecasting long-term demand against a reference period prior to service deployment, in order to assess the impact of the service deployment holistically.

#### *3.2. New Data-Driven Demand Response Services*

Aggregated demand response will play a key role in the future energy market [29]. Flexibility (FL) services are devoted to the provision of demand flexibility from domestic users for grid management in two ways: (a) balancing services for grid managers, Distribution System Operators (DSOs), Transmission System Operators (TSOs), and Balance Responsible Parties (BRPs); and (b) grid congestion management, to alleviate transmission and distribution congestion at the local and global levels and to avoid costly investments in grid expansion and storage capacity made to accommodate an increasing amount of renewable energy sources with high generation uncertainty [30].

In this sense, the following three flexibility services are proposed for this matter:


#### **4. New Business Models for Energy Service Providers in the Residential Sector**

The business actors involved in providing these services are ESCOs and Aggregators. These are called the service providers, and often a single service provider can assume both roles. The main phases in the new energy service value chain are:


The new cost, revenue, and saving distribution before and during the contract timeframe is shown in Figure 3. Traditional EPC services require a substantial amount of upfront costs, with an estimated payback period in the order of 5–15 years, depending on the intervention package. To shorten this period, as many services as possible must be deployed simultaneously (efficiency, optimisation, flexibility, and non-energy services), since the sunk costs associated with the infrastructure are common, and thus savings or revenues can be maximised.

**Figure 3.** Costs, revenues, and savings before and during the contract timeframe.

The maximum exploitation of the infrastructure and the corresponding optimisation of service deployment greatly impact the payback time of the business itself.
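A simple, undiscounted calculation illustrates why bundling services over the same infrastructure shortens payback; all figures below are invented for illustration only.

```python
def payback_years(upfront_cost, annual_streams):
    """Undiscounted payback: the common infrastructure cost divided by
    the sum of all annual revenue/saving streams it enables."""
    total = sum(annual_streams.values())
    if total <= 0:
        raise ValueError("no positive revenue stream")
    return upfront_cost / total

platform_cost = 12000.0  # EUR, illustrative sunk cost of the data platform
streams = {
    "efficiency_savings": 900.0,   # EUR/year
    "flexibility_market": 600.0,   # EUR/year
    "non_energy_services": 500.0,  # EUR/year
}

single = payback_years(platform_cost, {"efficiency_savings": 900.0})
bundled = payback_years(platform_cost, streams)  # shorter payback
```

Because the platform cost is a one-off shared across services, every additional revenue stream divides the same numerator, which is the mechanism behind the shortened payback period described above.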

It is, hence, extremely important for the service provider to maintain close to real-time monitoring of both energy consumption and the availability of demand flexibility assets. This capability, combined with direct communication and control, is paramount to the effectiveness of the solutions and the maximisation of revenue streams. On the other hand, end-users (usually homeowners) require control, or an opt-out option, regarding any installed component, in order to maintain their comfort level. This is a conflict-of-interest issue that new business models aim to address by providing warranties for the capital deployer and defining the contract duration.

#### *4.1. New P4P Business Models for ESCOs*

ESCOs primarily provide energy efficiency services. Typically, they operate under an EPC arrangement and are expected to bear the upfront costs of the investment, either directly or indirectly, for example, in combination with a financier. The energy services under the P4P concept provide for:


The above services function as components of the revenue streams, which, in turn, are essentially savings measured and verified by a robust, fair, and transparent Performance Measurement and Verification (PMV) methodology. The ESCO business model actors and relationships are depicted in Figure 4.

In contrast with a traditional EPC contract, the P4P energy efficiency services provide centralised information and control for the end-user, the optimisation of self-consumption, storage, or grid consumption according to market signals, awareness of potential energy savings and informative billing, transparent valorisation of the energy savings, and cash flow monitoring and control.

**Figure 4.** ESCO actors and BM.

According to the Business Model Canvas methodology [31], the new ESCO Business Model is summarised in the following points:

Core Values


Pains Experienced


Jobs to complete


#### *4.2. New P4P Business Models for Demand Response Aggregators*

Demand-side Aggregators, on the other hand, are primarily interested in executing flexibility services on demand by grid operators (explicit DR) or according to price signals from the markets (implicit DR). Explicit DR implies direct participation in the balancing, capacity, or even wholesale markets; the participants receive direct compensation for providing the energy flexibility required. Implicit DR implies exposure to market or network charges according to the time of use of the electricity. Consumer behaviour is driven by real-time market signals. In contrast to explicit schemes, the actor is not committed to act but, instead, receives the benefits in the form of a reduced energy bill. Although both demand response schemes are compatible and interesting [32], the business models proposed in this paper refer to explicit schemes that respond to the market needs of distributors, network operators, and BRPs.

The Aggregator's objective is to perform peak-load management in active energy markets (flexibility, capacity, energy, etc.) and receive the corresponding market compensation, while maintaining users' comfort levels. Services for Aggregators thus imply hybrid models that effectively combine energy efficiency and demand flexibility services, maximising revenues through the control of loads that are individually small but significant when aggregated. These services include:


**Figure 5.** Aggregator actors and BM.

The value proposition of the new Aggregator Business Models can be summarised as follows:

Core Values


Pains Experienced

• Load micromanagement.


Jobs to complete


In the optimum scenario, building residents contract both types of services simultaneously from a service provider that plays both the ESCO and demand flexibility Aggregator roles. This hybrid business model adds the savings derived from data-driven efficiency services and the revenues obtained from participation in open energy flexibility markets to traditional EPC models, thus decreasing the expected payback period of the data platform investment.

Among others, the main factors affecting the economic viability of the new P4P energy services are:


#### *4.3. Challenges Faced for the Implementation of the Novel Energy Services*

Energy (and non-energy related) services offered under the P4P approach face a series of risks and challenges for their successful market deployment. Technical risks involve cybersecurity and data privacy issues, integration with legacy equipment, the lack of proper maintenance plans, and lack of equipment standardization with different communication protocols and data formats [34].

Economic risks may include the lack of a commercial fit between energy savings services and consumer profiles and habits, meaning that the energy services would not reach their full potential in the long run, impacting the payback period accordingly. From a social point of view, the greatest risk lies in the potential lack of interest of consumers in deploying energy services due to the initial cost and performance uncertainty. While periods of high energy prices usually trigger energy efficiency investments, the lack of disposable income due to high energy expenditure creates significant barriers; price volatility as such can produce both effects. This risk can be mitigated through the use of a relatively low-cost system of commercially available equipment handling large amounts of data. It enables the provision of a variety of services combining different revenue streams (savings, market remuneration) to fit a vast diversity of buildings with a standard solution.

There are other important challenges the novel business models face:

• Scarcity of valuable data in residential buildings. A high proportion of the current building stock is not prepared to capture and store real time data for AI to deliver valuable services to the building residents. This data includes indoor and outdoor temperatures, humidity, presence, air quality, and metering [35]. The new business models must cater for the necessary data collection and transfer systems such as sensors, meters, and gateways. Smart plugs and actuators for automated services are also required. The abovementioned equipment is commercially available, standard, and affordable.


Finally, the most important challenge for the deployment of the proposed services is the maturity level of the markets from both an infrastructure point of view as well as a competition perspective between market participants. Smart P4P services require smart meters and smart grid technologies fully deployed by the TSO/DSOs and, at the same time, the existence of smart tariffs and incentives from market participation and engagement. Energy retailers have not fully exploited these approaches, thus the public remains partially unaware of their potential. Local regulation should strive towards the smart grid and dynamic tariff schemes while ensuring consumer protection from critical prices. While this is only a matter of time, a quicker pace towards sustainability may be required.

#### **5. PMV Methodologies for Successful P4P Contracts**

The P4P contracts require a direct relation between service payment and energy performance. Hence, this performance must be accurately measured and verified. Indeed, P4P energy services are based on a specific PMV Methodology that uses real-time data streams to ensure (a) objective validation and assessment of the feasibility and effectiveness of the new business models, and (b) transparent and fair remuneration of the involved actors for the achievement of energy savings and the provision of flexibility to the grid. The PMV methodology focuses on the establishment of a robust and transparent method based on data streams from local resources and blockchain technology. The new PMV method is based on existing methodologies [34,35], extending the measurement and verification protocols to demand response on the basis of fairness, simplicity, accuracy, and replicability, in order to foster end users' trust in the remuneration mechanism.

Two clear scenarios should be depicted where different methodological approaches are followed: (a) energy efficiency measurement and verification and (b) demand flexibility measurement and verification.

#### *5.1. PMV Methodologies for P4P Energy Efficiency Services*

The EE PMV aims at measuring the energy savings of smart retrofitting and of the new energy efficiency services. Savings are measured holistically at a dwelling or building level over specific periods of medium and long time horizons (e.g., billing periods). Savings are directly enjoyed by the end users, who pay the ESCO a share proportional to the savings obtained in the period for the service delivery. Savings are achieved from various implicit (behavioural changes) and explicit (automation) strategies, as well as from building and equipment retrofitting. Baselines are built on historical data comprising a full year. If no historical data are available, baselines are constructed upon pre-selected models and then calibrated for a given period using the data flow of energy consumption and its driving parameters in the ex-ante period. Baselines are seasonal, to provide a better fit to the actual building energy profile for every season of the year. Baselines are fixed in the reporting period, but they are subject to continuous accuracy checks that may reveal the need for non-routine baseline adjustments, calculated periodically at every billing or reporting period. In this sense, EE PMV methodologies are similar to existing protocols in use [36]. Figure 6 shows the ex-ante and ex-post scenarios for energy efficiency measurement and verification.

**Figure 6.** Ex-ante and ex-post scenarios for energy efficiency measurement and verification.
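The core of the EE PMV logic (baseline minus metered consumption, with the ESCO remunerated on a contractual share of the monetised savings) can be sketched as follows. The figures, the flat tariff, the 40% share, and the zero-flooring of per-period savings are illustrative assumptions, not values prescribed by the methodology.

```python
def verified_savings(baseline_kwh, metered_kwh):
    """Savings per billing period: baseline minus metered consumption,
    floored at zero here (an illustrative convention)."""
    return [max(b - m, 0.0) for b, m in zip(baseline_kwh, metered_kwh)]

def esco_remuneration(savings_kwh, tariff_eur_per_kwh, esco_share):
    """The ESCO receives a contractual share of the monetised savings."""
    return sum(savings_kwh) * tariff_eur_per_kwh * esco_share

baseline = [420.0, 380.0, 300.0]  # seasonal baseline, kWh per billing period
metered = [350.0, 340.0, 310.0]   # actual metered consumption, kWh
sav = verified_savings(baseline, metered)
fee = esco_remuneration(sav, tariff_eur_per_kwh=0.25, esco_share=0.4)
```

Note that the third period shows no remunerable savings because metered consumption exceeded the baseline; continuous accuracy checks, as described above, would flag whether such cases call for a baseline adjustment.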

#### *5.2. PMV Methodologies for P4P Demand Response Services*

DR services offer aggregated domestic consumer demand flexibility to grid operators for congestion management and grid balancing services. These services are demanded on a short-event basis, and performance is measured via the energy shift achieved by the automated operation of available Distributed Energy Resources (DER) by the Aggregator. The Aggregator delivers this demand flexibility to the grid and shares the market remuneration with the flexibility providers or the building users. Flexibility-related events need a short-term forecast, so the shortest possible reference period for baseline training is selected while guaranteeing an accurate prediction. Thus, the baseline is load-based and dynamic, as it is recalculated on a continuous basis as new data come into the moving reference (training) period. This way, the baseline is updated to any change in external weather conditions or user behaviour, avoiding the need for continuous manual adjustments and fitting the latest user energy profile. Finally, the baseline is calculated using the values of the independent variables just prior to the event, thus reflecting the actual conditions closest to the event. Figure 7 shows the ex-ante and ex-post scenarios for energy flexibility measurement and verification.

**Figure 7.** Ex-ante and ex-post scenarios for energy flexibility measurement and verification.
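The dynamic, load-based baseline described above can be sketched as a moving training window that is continuously updated with new load data; during an event, the delivered flexibility is the gap between this baseline and the actual load. The window length, the 15-minute interval, and the simple averaging model are simplifying assumptions for illustration only.

```python
from collections import deque

class MovingBaseline:
    """Load-based dynamic baseline over a moving training window:
    recalculated continuously as new observations arrive."""
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)

    def update(self, load_kw):
        self.window.append(load_kw)

    def baseline(self):
        return sum(self.window) / len(self.window)

def delivered_flexibility(baseline_kw, event_loads_kw, interval_h=0.25):
    """Energy shifted during an event: (baseline - actual) summed over
    the 15-minute event intervals."""
    return sum((baseline_kw - load) * interval_h for load in event_loads_kw)

mb = MovingBaseline(window_size=4)
for load in [4.0, 5.0, 4.5, 4.5]:  # pre-event 15-min loads, kW
    mb.update(load)

base = mb.baseline()  # reflects conditions just prior to the event
flex_kwh = delivered_flexibility(base, [2.0, 2.5, 3.0])
```

Because the window holds only the most recent observations, the baseline automatically tracks weather and behavioural changes without manual adjustment, which is the property the text highlights.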

#### *5.3. Performance Assessment of Non-Energy Services*

Non-energy services are an additional value-add of data-driven services, using the available user and building data to deliver optional benefits to the end users such as comfort, noise control, or air quality, all under P4P contracts. The performance of these optional services is not based on energy measurements but on compliance with the contractual service levels. Measurements of the involved service parameters are compared to the target values, and service payments are derived from the degree of compliance with the set targets.
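Since non-energy services are remunerated on compliance with contractual service levels rather than on energy measurements, the payment logic reduces to comparing measured parameters with target values. The sketch below, with invented CO2 readings and a proportional payment rule, is one simple way such a contract could be settled.

```python
def compliance_payment(measurements, targets, full_fee):
    """Payment scales with the share of readings that meet the
    contractual target (a proportional rule, chosen for illustration)."""
    met = sum(1 for m, t in zip(measurements, targets) if m <= t)
    return full_fee * met / len(measurements)

# Hypothetical indoor air quality service: CO2 readings (ppm)
# checked against a contractual ceiling of 900 ppm
co2_ppm = [750, 820, 980, 640]
ceilings = [900, 900, 900, 900]
fee = compliance_payment(co2_ppm, ceilings, full_fee=40.0)
```

Real contracts could instead use tiered penalties or time-weighted compliance; the point is only that the settlement input is a service parameter, not an energy quantity.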

#### **6. Conclusions**

The new energy services described in this paper constitute an innovative portfolio expansion for ESCOs in the residential sector, where these companies hardly have any presence today. Among other reasons, the low EPC penetration in the residential sector is explained by the low benefits derived from a scarce per capita consumption rate and by the high transaction costs involved due to the high fragmentation of energy usage. These factors, and the difficulty of aggregating significant amounts of energy to justify an investment in an energy management system, have kept ESCOs away from a sector that represents 40% of the EU's final energy consumption.

Today's digital technology can drive energy service costs down and make them available to many domestic consumers. The main contribution of this research is to show how ESCOs can extend the conventional concept of EPC models, traditionally suitable for retrofitting services, to a new set of attractive, standard, data-driven services powered by AI algorithms. These services can be delivered on a continuous basis in an unsupervised manner and can be extended simultaneously to a large number of building residents, thus reducing per-capita operation costs. These offerings are now expanded by novel services to the grid in the form of demand flexibility services for congestion management, grid balancing, and ancillary services triggered by network operators in local energy markets. In addition, non-energy services can also be designed to suit consumer needs such as automation and comfort, air quality, noise control, or safety and surveillance, among others, as value-adding alternatives delivered with the same infrastructure.

The main problem with EPC services in the current domestic sector is the difficulty of establishing the right performance measurement and verification procedures to ensure fair remuneration for the services and to adapt the verification methodology to non-monitored and changeable independent variables such as outdoor and indoor weather conditions, PV generation, or user comfort preferences. In this sense, the new generation of P4P energy services solves this problem by creating a Pay-for-Performance model, where performance is continuously monitored by means of powerful and accurate near real-time forecast algorithms fed by the digital platform data captured from metering and system behaviour parameters. These P4P models enable a fair distribution of energy savings and flexibility market remuneration that shares the benefits proportionally among all market actors: the service providers (ESCOs and/or Aggregators) on one side and the consumers/prosumers on the other.

The new set of P4P hybrid energy services combines the traditional EPC retrofitting services with the savings obtained from several implicit and explicit energy efficiency strategies and with the potential remuneration of demand response from flexibility markets triggered by network managers. The combination of these revenue sources, derived from the use of the same digital platform, can reduce payback times greatly, to below 10 years, thus making the services more attractive to residential building managers and residents.

These energy service sets can be provided by either ESCOs or Aggregators, but the ideal scenario for service providers would be to play both roles, along with possible non-energy services enabled by the digital platform.

**Author Contributions:** Conceptualization, T.T.; Formal analysis, J.A. and G.G.; Investigation, J.M.L.; Methodology, T.T.; Writing—original draft, J.A.; Writing—review and editing, J.M.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This contribution has been developed in the framework of the frESCO project 'New business models for innovative energy services bundles for residential consumers', funded by the European Commission under the H2020 Innovation Framework Programme, project number 893857.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are openly available at www.frescoproject.eu.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Governance Model for a Territory Circularity Index**

**Elena Rangoni Gargano 1,\*,†, Alessia Cornella 1,2,† and Pasqualina Sacco <sup>1</sup>**

<sup>1</sup> Fraunhofer Italia—IEC, Via Alessandro Volta 13/A, 39100 Bolzano, Italy

<sup>2</sup> EURAC Research—Institute for Renewable Energy, 39100 Bolzano, Italy

**\*** Correspondence: elena.gargano.r@gmail.com; Tel.: +39-34-6625-2759

† These authors contributed equally to this work.

**Abstract:** In a world that seeks to reduce the environmental impact of urban areas and implement the Circular Economy, governance is seen as a key to the ecological transition and the achievement of the Sustainable Development Goals. How can we use the data, knowledge, and resources at our disposal to put into practice a governance model that implements the Circular Economy of territories? This study devised such a model. The comparative assessment of enablers and barriers presented in the literature review allowed for the categorisation of indicators related to the literature sample, leading to the creation of a "Territory Circularity Index" composed of four thematic areas. The index was then incorporated into an innovative governance model intended to serve as a practical tool for local governments and policy makers. In the context of the Circular Economy and Sustainable Development, a "Flexible Governance Model" tailored to the territory could effectively contribute to the creation of coherent policies, an open and transparent process, and facilitated consultation with local stakeholders. The evaluation of the results indicates the potential of the "Flexible Governance Model for a Territory Circularity Index" in promoting effective mechanisms for implementing the circular economy, based on the dual quantitative and qualitative approach from which the model originated. The research could be particularly important for various stakeholders: researchers, policy makers, entrepreneurs, and governments.

**Keywords:** governance; circular economy; circular territories; index; circularity

#### **1. Introduction**

Discourses around circular territories and their governance models are gaining traction both in academia and daily practice. According to a definition by [1], a circular city is "a city that practices circular economy principles to close resource loops, in partnership with the city's stakeholders to realize its vision of a future-proof city". The application of the Circular Economy (CE) principles through governance practices in urban contexts is worth investigating for several reasons. First, urban areas represent the leading drivers of linear production and waste models, as well as constituting the primary centres of economic activity [2]. Because of their increased social and economic weight, cities are putting unprecedented strain on the environment: urban areas are responsible for the consumption of over 70% of globally produced resources and energy, 70% of overall GHG emissions, and for the generation of over 70% of waste [3]. It is, therefore, difficult to imagine a CE-driven remodelling of the extant socio-economic order without specific attention placed on urban settings and how they are managed. Furthermore, from a more research-driven perspective, urban contexts are a useful unit of analysis because they can be taken to represent a scaled-down representation of higher-level macro trends [4].

**Citation:** Rangoni Gargano, E.; Cornella, A.; Sacco, P. Governance Model for a Territory Circularity Index. *Sustainability* **2023**, *15*, 4069. https://doi.org/10.3390/su15054069

Academic Editors: Oz Sahin and Russell Richards

Received: 17 January 2023 Revised: 17 February 2023 Accepted: 20 February 2023 Published: 23 February 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

CE stands as an interdisciplinary approach to redesigning the fundamental structures supporting our linear production, consumption, and disposal mechanisms in favour of more responsible and sustainable socio-economic systems [5]. Despite numerous diverging definitions, CE is generally understood as a paradigmatic shift away from the "take-make-dispose" consumption patterns central to modern consumerism [6]. CE theories envisage the slowing down of the linear consumption model to encourage the reuse of materials, so that the output of one consumption process may constitute the input of another production process, thus looping consumption patterns and, ultimately, designing out waste [7]. Because of the entrenched nature of the "take-make-dispose" model in all aspects of capitalism, transitioning away from this pattern entails a restructuring of the economic, social, ecological, technical, and cultural/ethical components [8] informing governance actions. As a side effect, the implementation of circular models helps foster material efficiency, promotes the mitigation of resource depletion, and improves the minimisation of waste [6].

Increased attention placed on circular models has given rise to a vast body of literature aimed at analysing trends and CE modelling [9]. An overview of the literature highlights how EU-level standards for CE assessment constitute a common point of departure in the academic literature for the development of a variety of circularity indices [10]. Several authors work EU standards into circularity models that can be applied to specific national, regional, or urban contexts [9,11–14]. However, many of the developed models contain blind spots that require addressing: their flexibility and adaptability are often limited, they frequently provide narrow opportunities for customisation, and they present difficulties in reproducibility. Additionally, many authors report a lack of data availability as a hurdle in the advancement of circularity [9,15]. A supplemental set of studies moves beyond circularity assessment and shifts towards models that can help pinpoint barriers and opportunities in CE plans implemented by EU cities and regions [16]. In parallel, an additional line of academic work carries out an analysis of macro-level circularity by focusing on components such as material recycling [17], resource longevity [18], material efficiency [19,20], policy design [21], supply networks [22], and decision-making [23]. Published work in the field of policymaking presents further commonalities: grey literature places less emphasis on quantifiable metrics and chooses to carry out observation-based assessments aimed at enhancing CE implementation. Documents such as OECD reports [24–27] focus on assessing social, normative, and political projects as well as civil society initiatives without specifically delving into indices; when indices are present, this type of study relies on indicators that are then applied to geographical contexts.

The main objective of this research paper is to create a flexible governance model that territories (whether individual cities or agglomerations of municipalities) can use to implement circular economy initiatives. To this end, the paper is structured as follows. The Introduction lays the foundation for the quantitative and qualitative analysis and broadly contextualises the research in question with clear references. The Literature Review focuses on a quantitative collection of data related to the subject matter, focusing on barriers and drivers to CE implementation in territories and on the existence of indices and indicators for measuring urban circularity. The Methodology explains the process the authors followed for the creation of the governance model presented here. The Discussion summarises the findings of the study and highlights its usefulness in the context of local governance. An index derived from the extrapolation of the literature results is proposed here, as well as its interconnection with the results of the survey conducted by the authors. Finally, with an innovative approach, the authors propose a new model of urban governance for the achievement of circular economy goals. This model makes use of the authors' multidisciplinary approach, the results of the literature review and the surveys conducted on Italian cities, as well as a timely analysis of the barriers and drivers of the CE. The Conclusions provide some concluding remarks and introduce opportunities for the empirical application of the index.

#### **2. Literature Review**

The authors, aware of the difficulty in practically applying CE to a specific territory, intended to analyse the state of the art in this regard, approaching it with the following research questions:

What is the state of the art in evaluating circular economy in territories?

What are the barriers and drivers for implementing a circular economy in territories?

To answer the research questions, the study has followed the process explained below (Table 1).

**Table 1.** Number and type of literature sources collected that propose indicators to evaluate circularity in territories.


#### *2.1. Collection of Relevant Sources to Evaluate Circular Economy in Territories*

To answer the research question "*What is the state of the art to evaluate circular economy in cities?*", the study focused on keywords and started by combining terms. The first literature collection was executed by utilising databases such as Scopus and Web of Science to identify relevant academic literature. Later, the source collection was extended through Google Scholar for academic sources and the Google search engine for grey literature. Documents in English, Italian, and French (1) were selected. The collection of grey literature includes policy papers, articles, and institutional websites. A diverse set of document types was deliberately included to support greater applicability. The keywords for the literature collection were: "Circular Economy", "Circular Economy index", "Circular City", "Circular Economy [name of city]", "Circularity index [name of city]", "Circolarità [name of city]", "Indici circolarità [name of city]". Documents were also selected based on the date of publication to ensure an up-to-date account. Because of the interdisciplinary nature of CE, the authors chose not to carry out further screening to exclude literature addressing other topics associated with CE; intersectionality between CE topics and other sustainability-related areas was not viewed as having any negative impact on the outcome of the study.

#### 2.1.1. Categorisation of Indices from the Literature Review

Having collected the sources, the research classified data across categories. This process enabled the production of a customised methodology [28]. The review is conducted by categorisation across 21 variables: *Year of publication*; *Type of source*; *Geographical area*, *Geographical level of application*; *Barriers*; *Drivers*; *Circularity index*; *Territory circular index*; *Sustainable Circular Index*; *Category of impact*; *Metrics*; *Grouping*; *SDGs considered*; *SDG connection*; *Sectors involved*; *Actors involved*; *Initiatives*; *Existing tools*; *Circular services*; *Circular laws and policies*; *Flexible or Customized metrics*.

After the categorisation, the authors analysed barriers and drivers, the presence of indices and indicators, and their flexibility. The collected data derive from the existing literature on the European landscape and from semi-structured interviews detailing the experience of Italian cities, conducted as part of research by ICESP, the Italian Platform of Stakeholders of the Circular Economy [29], with the involvement of Fraunhofer Italia—IEC. The study benefits from the participation of public entities representing 28 Italian cities, thus providing an essential contribution to the state of CE adoption in Italy.

#### 2.1.2. Flexible Indicators from the Literature Review

The review process resulted in the identification of 15 circularity indices, which include 30 flexible indicators applicable in territories: *Produced Waste*; *Recycled/Recovered Waste*; *Hazardous Waste*; *Economic Performance*; *Certifications*; *Material Productivity*; *Material Consumption*; *Material Inputs and Outputs*; *Material Sourcing*; *Research & Development (R&D) Investments*; *Job Creation*; *Energy Consumption*; *Energy Intensity*; *Energy Efficiency*; *Procurement Practices*; *Symbiosis*; *Land Occupation*; *Land Consumption*; *Water Intensity*; *Water Demand*; *Water Consumption*; *End of Life Recycling Rate*; *Environmental Footprint*; *Distributed Value*; *Emission of Particles*; *Air Emissions*; *Emission Reduction*; *Fundings*; *Local Communities*; *Public Policy*; *Building Management*.

The selected indicators are both qualitative and quantitative. On the one hand, quantitative indicators serve to collect statistical and structured data that are easily measured and compared. On the other hand, qualitative indicators collect information that describes a subject rather than measuring it.
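To illustrate how quantitative indicators of this kind can feed a composite circularity index, the sketch below min-max normalises raw indicator values and aggregates thematic-area scores with weights. The four area names, the equal weights, and the normalisation choice are hypothetical; the actual composition of a territory's index would be defined by its own thematic areas and indicators.

```python
def minmax(value, lo, hi):
    """Normalise a quantitative indicator to the [0, 1] range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def territory_index(area_scores, weights):
    """Weighted aggregation of thematic-area scores into a single index."""
    total_w = sum(weights.values())
    return sum(area_scores[a] * w for a, w in weights.items()) / total_w

# Hypothetical thematic areas with already-normalised indicator scores
areas = {"waste": 0.6, "energy": 0.8, "water": 0.5, "governance": 0.7}
weights = {"waste": 1.0, "energy": 1.0, "water": 1.0, "governance": 1.0}
index = territory_index(areas, weights)
```

The flexibility discussed in the text corresponds to swapping indicators, bounds, or weights per territory without changing the aggregation machinery.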

Many of the identified indicators are cross-sectional and vastly applicable, while others are indicative of a willingness to customise assessments in relation to the specificity of settings: some territories wished to focus on sectors (e.g., mineral waste) or account for their geographical specificities (e.g., forest biomass consumption). The collected metrics came from grey literature and academic papers. Where grey literature presented a good rate of publication in 2021, papers showed an earlier thematic investigation in 2019 and 2020. As is clearly visible from the graph reporting total results (Figure 1), there was an increase in the overall volume of literature addressing CE topics starting in 2018, with a maximum level reached in 2021.

**Figure 1.** Number and type of literature sources collected that propose indicators to measure circularity in territories.

The scaling contexts most considered in the literature illustrate how metrics applicable at the city level prevail over metrics applicable at the regional and national levels. The literature shows that fewer than half of the documents contain an index. Within these, only a few expressly use specific indicators in an aggregate form, while the remainder disaggregate a set of parameters that are normally assessed through a scoring system. The absence of an index was more common in the grey literature promoted by local institutions, while documents produced at higher levels of governance were more likely to contain an econometric model.

#### *2.2. Collection of Barriers and Drivers to Implement Circular Territories*

To answer the research question "*What are the barriers and the drivers for implementing circular economy in cities?*", the study provides a systematic review of CE barriers and drivers in Italian cities. The Italian case study benefits from the direct contribution of local governments and public actors and provides insight into implementation dynamics. Findings suggest that Italy is characterised by significant disparities in CE performance, with some cities ranking quite high and adopting innovative approaches to circularity while others perform poorly [29]. The review resulted in an assessment of barriers and enablers to CE implementation, derived from a combination of the results of the literature review and the surveys.

#### 2.2.1. Nominations of Types of Barriers from Surveys

For this study, barriers to CE implementation were grouped to assess the frequency of their occurrence: Lack of collaboration and awareness; Absence of regulation and standardisation; Undersupplies of funds; Deficits of expertise; Low flexibility; Shortage of political support; Lack of consumer interest; Insufficiency of instruments; Other. According to the findings, a deficit of regulations and standardisation is the most common barrier to the effective implementation of CE practices. The second most common one is the absence of collaboration across different sectors and actors, while the third entry is a lack of awareness. Low flexibility and insufficient political support are the fourth most occurring, indicating the inability to create social, normative, and economic infrastructures. The absence of funds and instruments constitutes other significant obstacles along with lack of consumer interest and market rigidities.
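The frequency assessment described above amounts to a simple tally of barrier nominations per category. A minimal sketch, using category names from the text but purely illustrative nomination counts (not the study's actual survey data):

```python
from collections import Counter

# Hypothetical survey nominations; counts are illustrative only.
nominations = (
    ["Absence of regulation and standardisation"] * 9
    + ["Lack of collaboration and awareness"] * 7
    + ["Deficits of expertise"] * 4
    + ["Low flexibility"] * 3
    + ["Shortage of political support"] * 3
    + ["Undersupplies of funds"] * 2
)

# Rank barrier categories by how often they were nominated.
ranking = Counter(nominations).most_common()
for category, count in ranking:
    print(f"{count:2d}  {category}")
```

The same tally can of course be run over any coding of free-text survey answers once they are mapped onto the nine categories above.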

According to the ICESP surveys [29], Italian urban contexts are witnessing conflicting forces: on one hand, Italy is experiencing growing interest in CE, while, on the other hand, concrete barriers hinder its implementation. Larger urban contexts present several commonalities regarding barriers: insufficiency of resources and instruments is reported as being one of the primary ones, along with lack of funds, absence of integration, and of normative support. Similarly, cities report experiencing failure in collaboration networks and information sharing, as well as hindrances in CE-oriented education. Additionally, difficulties in reducing landfilled waste are highlighted. Mid-sized and small Italian cities report scarcities of funds and investments, inadequacies of entities that promote citizen engagement, lacunas in normative support, and the need for effective communication. Further, barriers linked to sharing initiatives, failure in a structured form of collaboration, lack of awareness, and integration also constitute obstacles. When it comes to the political view, an absence of adequate programming, political barriers and missing incentives are identifiable at most levels. Overall, the Italian context presents many commonalities when it comes to hindrances in CE implementation at the city level.

#### 2.2.2. Drivers from Surveys

In analysing surveys of Italian cities, economic factors are the most discussed drivers enabling territories to manage material flows and reduce costs [7]. In this sense, CE-oriented business models that enable value addition through resource and knowledge sharing can also be considered [11,15]. Some interviewed cities have policies aimed at producing direct and indirect positive economic effects by leveraging taxation and financial support, or are instead trying to develop customised tools for better coordination. These policies aim to improve waste collection systems, limit water use, and monitor environmental factors. Another set of initiatives involves land redevelopment and building conversion. Some territories prefer to focus on material innovation to reduce the environmental impact of urban areas, while others are implementing substantial changes to promote electrification or resilient green spaces [29].

The research highlights the importance of a shared understanding of synergies between sectors, the promotion of relationships among stakeholders, and the potential of a multisectoral system. Cooperation provides an opportunity for cities to facilitate CE through a collective approach among stakeholders and levels of government [27]. Knowledge sharing and training point to the need for awareness [30–32]; thus, education is another driver in Italian cities [7,31,33–36]. Also important is the principle of the "product as service" [11,29,33,34], which refers to services as an alternative to physical goods that require the use of resources and production processes; by relying on nonphysical services, production and consumption are reduced.

Many of the results assess the potential of looping actions to address water scarcity and promote energy recovery from organic waste to sustain urban metabolism [35], as well as to mitigate the negative effects of urban population growth and help avoid waste redundancies [35]. Environmental goals are also a primary set of drivers, such as moving away from fossil fuels, reducing pollution, preventing biodiversity loss, and pursuing a zero-waste economy, which have long been staples of circular economy discourses [7,9,13,14,36–38].

#### **3. Methodology**

#### *3.1. Research Approach*

The vision underlying this research process mirrors the concept of Circular Economy itself: by its very nature, CE deals with complex systems and must therefore be approached with strategies having the following characteristics:


#### *3.2. Methodology's Structure*

The previous literature review led the authors to understand both stakeholders' difficulties in implementing CE and the drivers for overcoming barriers. In addition, it showed how circularity is evaluated in territories and how Italian cities govern it.

The study intends to overcome the gaps that territories are having in governing and assessing circularity by proposing a flexible governance model to guide them in the transition towards CE through stakeholder engagement, the analysis of enabling factors and barriers related to their territory, and the use of a flexible circular index. To do so, the methodology was inspired by different approaches [28] and divided into stages.

#### *3.3. Development of the Flexible Governance Model and its Territory Circularity Index*

The development of the flexible governance model to guide cities in the transition followed two steps. First, the research focused on analysing the barriers and drivers of the circular economy in territories. Second, it considered the path to follow once a territory decides to embark on a sustainable and circular transition. The process is explained below (Table 2).

**Table 2.** Description of the development of the Territory Circularity Index and Flexible Governance model.


The governance model proposed by the authors draws from a Quadruple Helix governance model [39]. For this reason, cities wishing to embark on a path toward circularity are led to define a quadruple governance strategy that simultaneously involves actors from the spheres of Government, Research, Industry and, finally, Civil Society. To measure the CE of cities once the proposed governance model is applied, the authors considered it essential to aggregate the results obtained from the literature review (drivers, barriers, and indices) to develop an index (named the Territory Circularity Index by the authors) for the analysis of the circular economy that fits the territory and the needs of its stakeholders. For a better understanding, the in-depth methodology of the governance strategy is described in Section 4.3, "Flexible Governance Model for a Territory Circularity Index".

#### **4. Discussion**

#### *4.1. Barriers and Drivers for a Territory Circularity Index*

The image below (Figure 2) represents the crossing of the drivers and barriers identified in the literature review and surveys. The analysis brought about the creation of four action groups for developing circularity in a territory; these were later treated as sub-indices of a final index that can measure complete circularity.

**Figure 2.** The figure describes the barriers and drivers collected and summarised in the literature and surveys. The data were grouped into four areas, which were further converted into four sub-indices.

The outer rectangle contains barriers deriving from the literature review (green dots) and from surveys (blue dots). Barriers are then organised within the four quadrants corresponding to the four sub-indices: the material flow index, the loops index, the competitive index, and the sharing index based on their pertinence to each area. Repetition of certain barriers across multiple quadrants is not uncommon, as the same element can constitute a barrier towards the implementation of multiple processes. The inner rounded rectangle contains the identified drivers and organises them within the four quadrants corresponding to the four sub-indices. Green triangles indicate *drivers* identified in the literature, and blue triangles represent drivers as derived from the surveys. The crossing of drivers/barriers with the index itself allows for enhanced customisation, a more focalised approach to reaching objectives, and a more evident correlation between available means and expected results. In fact, decision-makers may choose to start with an assessment of drivers and barriers in their own urban socio-economic order and make decisions based on where their drivers/barriers fall in the scheme.

The sub-indices defined incorporate the 30 flexible indicators from the literature review, selecting the most appropriate for each area; not all of the indicators found were included in the four sub-indices. The Territory Circularity Index aims to maintain and re-elaborate some of the key concepts inferred from the literature and surveys without forgoing the introduction of some original features. The final index was devised with the intent of ensuring simplicity of use, primarily thanks to its division into four thematic areas, which are analysed below.

#### *4.2. The Structure of the Territory Circularity Index*

The Territory Circularity Index (Figure 3) was defined from the data analysed throughout the research phase. The four final sub-indices are intended to represent the indicators most frequently considered in the literature review, divided according to the four categories of barriers identified as the hardest to overcome in reaching territorial circularity. The final index aims to express a simple route for implementing circularity, sustainability, positive economic growth, and social inclusion, especially when crossed with empirical data derived from drivers and barriers.

**Figure 3.** The Territory Circularity Index is divided into four sub-indexes. Each sub-index can be described with connections to possible actions and indicators.

The four sub-indices can be described as follows:


• The "Sharing Index"—describe a city's ability to engage in the exchange of goods, services, and information and is underpinned by indicators assessing circular maturity, platforms for sharing, sharing spaces, circular cooperation, and educational programs.

When considering the Territory Circularity Index, the authors highlight how CE-driven goals at the local and urban levels can and should be rooted in overarching sustainability goals. To this end, connections can be established between the most common indicators and the Sustainable Development Goals (SDGs), so that the Territory Circularity Index may be anchored to the UN's Agenda 2030. The SDGs most extensively connected are SDG 12 (Responsible consumption and production), linked to the "*Material flow index*", "*Competitiveness Index*", "*Loops Index*", and "*Sharing Index*"; SDG 11 (Sustainable cities and communities), linked to the "*Competitiveness Index*" and "*Sharing Index*"; SDG 6 (Clean water and sanitation), linked to the "*Material flow index*", "*Loops Index*" and "*Sharing Index*"; and SDG 7 (Affordable and clean energy), linked to the "*Material flow index*", "*Loops Index*" and "*Sharing Index*".
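To make the aggregation concrete, the sketch below shows one plausible way such an index could be computed: indicators are min-max normalised, averaged into the four sub-indices, and then averaged (with assumed equal weights) into a single score. The indicator values, normalisation bounds, and weights are illustrative assumptions, not the authors' specification:

```python
# Hypothetical sketch of a Territory Circularity Index:
# indicators (assumed values) -> four sub-indices -> aggregate score.

def minmax(value, worst, best):
    """Normalise an indicator to [0, 1]; bounds are illustrative."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

# Example indicator readings for a fictitious territory.
sub_indices = {
    "material_flow": [
        minmax(520, worst=800, best=200),   # produced waste, kg/capita
        minmax(0.55, worst=0.0, best=1.0),  # recycled/recovered share
    ],
    "loops": [
        minmax(0.40, worst=0.0, best=1.0),  # end-of-life recycling rate
    ],
    "competitiveness": [
        minmax(2.1, worst=0.0, best=5.0),   # R&D investment, % of GDP
    ],
    "sharing": [
        minmax(12, worst=0, best=30),       # sharing platforms/initiatives
    ],
}

# Each sub-index is the mean of its normalised indicators;
# the aggregate index is the (equally weighted) mean of the four.
scores = {name: sum(vals) / len(vals) for name, vals in sub_indices.items()}
tci = sum(scores.values()) / len(scores)
print(scores, round(tci, 3))
```

In practice a territory would swap in the indicators chosen for its own context, which is precisely the flexibility the index is designed around.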

#### *4.3. Flexible Governance Model for a Territory Circularity Index*

The transition to CE is not without its obstacles. Public entities at the city level often find themselves having to sift through large volumes of information, something which impedes effective decision making. Direct interaction with local governments for the present study highlights the need for a model that can help governance actors in implementation practices. These results suggested the necessity to contextualise the Territory Circularity Index into the urban governance landscape.

In designing a circular territorial strategy, systemic change and new forms of governance are required, as well as shared strategies, common territorial plans, funding, and partnerships [29]. Good territorial management requires an inclusive, collaborative approach to establish an effective circular strategy. According to the surveys, Italian cities have been looking to adopt an integrated approach to CE by involving different sectors from a variety of municipal spheres, such as environment, economic development, urban waste, heritage, public works, social policies, education, energy, IT services, infrastructure, mobility and green areas, public services, agriculture, production activities, trade, district heating, water services, public lighting, land planning and management, community policies, forestry activities [29]. The outcome of surveys across Italian cities highlights the importance of defining specific circular laws and policies to support this transition. Urban contexts are shown to rely on multi-level governance for regulations that support their efforts in implementing circular practices and to structure the basis for targeted funding. In general, circular laws and policies analysed fall into the category of 'softer' norms aimed at incentivising CE practices through voluntary efforts and 'hard' laws aimed at discouraging transgressions of these regulations [10]. The difficulties identified in the literature confirm the need to consider a variety of stakeholders when defining the circularity of a city. In this sense, it is considered relevant to underline the significance of moving from a Triple Helix to a Quadruple Helix Model [39]. 
Thus, a territory interested in embarking on a path towards circularity will define a quadruple governance strategy that simultaneously involves actors from Government (policymaking and finance), Academia (research and development), Industry (entrepreneurs, services, and places to test new circular models) and Civil Society (citizens).

The model below (Figure 4) is expected to help local entities and decision makers effectively. The figure is organised into three macro areas, represented by three different colour shades.

**Figure 4.** Graphic representation of the innovative circular governance strategy called "Flexible Governance Model" by the authors.

The white band at the bottom of the figure crosses the governance model and guides the operations and actions to be implemented at the different levels of the three macro areas: territory characteristics, enabling environment, and assessing circularity. These correspond to different levels of granularity based on the phases of implementation, going from the attainment of a context overview through listening and engagement practices, and ending with data collection, implementation, and monitoring activities. The band and the consequent succession of operations are to be understood as bidirectional: a succession from outer to inner segments indicates the implementation of top-down governance actions, while a succession from inner to outer segments is indicative of bottom-up tendencies. The outer rectangle serves to outline the *territory characteristics* that will be the base of CE implementation. It examines:


The rounded rectangle in the middle represents the *enabling environment*. In this phase it is necessary to:


• Learn, engage, and implement, i.e., supporting territorial awareness and engagement through conferences or events to inform and train stakeholders.

The inner circle guides territories in assessing circularity. Cognizant of the different opportunities and challenges that the area has identified for itself, a flexible and customised index is defined. In doing so, indicators are selected not only on the basis of their flexibility and applicability but also according to the sectors and objectives of the territory. This assessment phase includes the collection of data for the indicators chosen for each sub-index, the long-term monitoring of these data, and the implementation of the circular economy. Beyond the necessary intermediate results, the primary outcome of the study exceeds its original objective: the Territory Circularity Index produced is both internally and externally flexible and customisable. The variety of indicators underpinning the index can be adapted to the conditions of the urban context, and the index can be crossed with a variety of external factors conditioning its applicability and success, such as specific barriers and drivers or the priorities of global governance guidelines.

#### *4.4. Testing the Governance Model in South Tyrol*

After designing the governance model on a theoretical basis, the authors deemed it necessary to test it with territorial actors in the Italian province of South Tyrol through the creation and dissemination of a questionnaire. This investigation followed the structure of the governance model. First, it sought to identify the context, the maturity of the territory regarding CE, and the stakeholders. After that, it sought to understand the importance of various aspects of CE, as well as the specificity of the barriers and drivers for the various stakeholders. Finally, to assess the feasibility of using this governance model in South Tyrol, the authors tested the willingness of the involved stakeholders to collect and disclose data. This survey also served to confirm the results obtained in the literature review.

The research involved a group of approximately 40 stakeholders of several kinds: from the primary, secondary, and tertiary sectors, the public sector, several multi-utilities, civil society, and research institutes. The authors' aim was to test the functionality of the model by involving as many territorial actors as possible. First, the existing stakeholders in the area were divided into three macro-categories: public institutions (local territorial authorities, functional agencies, partly publicly owned companies, etc.); organised groups (pressure groups, territorial associations, etc.); and unorganised groups (citizens and the community). After this initial mapping phase, the stakeholders to be involved in the area under consideration were identified. Stakeholders can be identified through different methodologies; the authors considered it appropriate to select them according to their capacity to influence the area considered and the interest they hold in it [40]. Once the factors of influence and the level of interest of each identified stakeholder were defined, they were cross-referenced, and the stakeholders were finally selected and contacted.
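The cross-referencing of influence and interest described above resembles a classic power-interest grid. A minimal sketch, with hypothetical stakeholder names and illustrative 1–5 scores (not the study's actual survey data):

```python
# Hypothetical power-interest grid for selecting stakeholders to involve.
# Names and scores are illustrative only.

stakeholders = {
    "Provincial environment agency": (5, 5),  # (influence, interest)
    "Multi-utility company":         (4, 2),
    "Territorial trade association": (2, 5),
    "Individual citizens":           (1, 2),
}

def engagement_priority(influence, interest, threshold=3):
    """Map an (influence, interest) pair to an engagement strategy."""
    if influence >= threshold and interest >= threshold:
        return "engage closely"
    if influence >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "monitor"

for name, (inf, intr) in stakeholders.items():
    print(f"{name}: {engagement_priority(inf, intr)}")
```

High-influence, high-interest actors would be the first contacted; low scorers on both axes would simply be monitored, consistent with the mapping logic in [40].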

For this research, the analysis of the results of the questionnaire focused mainly on the importance of the different aspects of CE for the different types of stakeholders identified, considering it appropriate to understand where it is best to leverage in the implementation of CE. Moreover, since the study aimed to understand how CE can be effectively implemented in the territories, the results of the questionnaire focused on the analysis of barriers and drivers to CE implementation for the stakeholders involved.

The results (Figure 5) show that most of the stakeholders consider the aspect of material recovery and reuse essential or very important, while another large percentage of respondents also consider the aspects of prolonging the life of resources and support through investments essential. Only a smaller percentage also indicated the factor of sharing information and resources as very important.

**Figure 5.** Importance of the different aspects of CE for the identified South Tyrolean stakeholders' categories to whom the questionnaire was submitted.

The results clearly show that there is good awareness of the topic of circularity in practically all stakeholder categories, even though this awareness is often linked to the reuse of materials and to funds for possible transition projects towards circularity, whereas there appears to be little understanding of the importance of sharing information and resources. According to the authors, therefore, the effective implementation of a circular governance model will have to be based on active engagement and learning approaches among stakeholders. Sharing, in fact, would save significant economic resources through the shared use of equipment, waste materials, data, and expertise, allowing for faster evolution through collaboration.

As for barriers and drivers (Figure 6), the results show that the greatest facilitators of the transition to the CE are related to economic aspects and defiscalisation. Cited less frequently are facilitators such as bureaucratic simplification and circularity training. On the contrary, the most important factors preventing the realisation of a circular economy are the scarcity of training courses on the topic and the absence of information, regulations and standards, the absence of tools to facilitate circularity (of services or goods), the scarcity of funds to invest in circularity projects, and the lack of experts. Cited less frequently as barriers are insufficient political support, cultural barriers, lack of stakeholder flexibility, and poor collaboration.

**Figure 6.** Analysis of barriers and drivers to the implementation of CE for the identified South Tyrolean stakeholders' categories to whom the questionnaire was submitted.

#### **5. Conclusions**

Governance is seen as the system governing a country's economy and politics at all levels [41]. To ensure good governance, this system must be able to include in the planning process the interests of all stakeholders in the area, recognising their legal rights and assuring them of their obligations and needs.

Over the past few years, governance related to the concepts of Circular Economy and Sustainability has been defined and used in many ways in different contexts. Despite suggestions and insights in the literature, there is still no flexible governance model that can be used independently by local governments to implement circular economy initiatives and best practices at the territorial level.

Despite commitments expressed at the national and European levels, traditional models of governance toward circularity are dominated by the regulatory control of formal state institutions and, therefore, not well equipped to respond to the complex nature of the circular economy and the challenges that individual territories face in implementing it.

To tackle the complex issues related to the environmental impact that territories have on sustainable development, a strategic and systematic approach was needed to provide an appropriate framework for an integrated governance vision of all components engaged in the CE implementation process. The answer was provided by the authors through the creation of a qualitative and original governance model associated with a territorial circularity index.

The results of the present study indicate the success of a flexible governance model in capturing the multifaceted nature of the CE and the distinctiveness of local needs. Starting with a literature review to identify common traits and patterns for measuring circularity in territories, the authors proceeded to develop a model. The presented governance model incorporates a circularity index at its core, thus becoming a practical tool for administrations and policy makers, who can be guided step by step toward a circularity that is well suited to the type of territory (mountainous, coastal, urban), the needs analysed during the transition, and the barriers encountered. The aggregate index is reached by analysing the territorial context, the market, and the needs expressed by stakeholders. The index is composed of four sub-indexes that facilitate the collection and focusing of data, highlighting the flow of materials, the lifespan of resources, the economic resources used, and sharing actions.

One of the major limitations of the present research is the presence of a cultural "conditioning" that narrows the resolution of a global problem to a European viewpoint.

This study contributes qualitative and original research of value to researchers, policy makers, and governments. The results provide a replicable and flexible governance model and give a coherent picture of the topic, highlighting some weaknesses regarding CE implementation that should be addressed. Future research in the field, a natural extension of this study, is now underway: the authors are testing the Territory Circularity Index within the governance model across the Italian landscape, capturing the different levels of sustainability and circularity of the Italian regions. This will make it possible to identify territories' barriers and highlight areas where certain policies and investments need to be implemented.

**Author Contributions:** Conceptualization, E.R.G., A.C. and P.S.; Methodology, E.R.G., A.C. and P.S.; Investigation, E.R.G. and A.C.; Resources, A.C.; Data curation, E.R.G.; Writing—original draft, E.R.G. and A.C.; Writing—review & editing, E.R.G., A.C. and P.S.; Supervision, P.S.; Project administration, P.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **The Double C Block Project: Thermal Performance of an Innovative Concrete Masonry Unit with Embedded Insulation**

**Luca Caruso \*, Vincent M. Buhagiar and Simon P. Borg**

Department of Environmental Design, Faculty for the Built Environment, University of Malta, MSD 2080 Msida, Malta

**\*** Correspondence: luca.caruso@um.edu.mt; Tel.: +356-2340-2471

**Abstract:** The Double C Block (DCB) is an innovative composite Concrete Masonry Unit (CMU) developed to offer enhanced thermal performance over standard hollow core blocks (HCBs). The DCB features an original design consisting of a polyurethane (PUR) foam inserted between two concrete c-shaped layers, thus acting simultaneously as the insulating layer and the binding agent of the two concrete elements. The purpose of this research is to describe the results obtained when assessing the thermal transmittance of these blocks (U_DCB and U_HCB) using three different methodologies: theoretical steady-state U-value calculations, numerical simulation using a Finite Element Method (FEM), and in situ monitoring of the U-value by means of the Heat Flow Method (HFM). The results obtained show that the three methodologies corroborated each other within their inherent limitations. The DCB showed a performance gap of 52.1% between the FEM-predicted value (U_DCB = 0.71 W/(m²K)) and the values measured via HFM, which converged at 1.47 W/(m²K). Similarly, a gap of 19.9% was observed when assessing the HCB: the theoretical FEM value of U_HCB was 1.93 W/(m²K), while the measured one converged at 2.41 W/(m²K). Notwithstanding this, the DCB showed superior thermal performance over the traditional block thanks to a lower U-value, and it complies with the Maltese building energy code. Further improvements are envisaged.

**Keywords:** thermal transmittance; thermal resistance; finite element method; heat flux sensor; in situ monitoring; concrete masonry unit

#### **1. Introduction**

A recent study from the International Energy Agency (IEA) has shown that a technical solution for improving the energy efficiency of buildings [1], and hence the energy-related CO2 emissions of the building industry [2], is the use of efficient building envelopes. From a thermodynamic point of view, the building envelope plays a significant role in determining the heating, cooling, ventilation, and lighting demands of a building [3]. However, to date, these solutions are not meeting the desired goals. Some of the reasons for this are that a large number of countries still lack mandatory building energy codes for new buildings [1], often have an inactive building retrofitting market (only circa 1% in the EU) [4], and have low market readiness for industry-friendly, energy-efficient building products [5]. Other reasons can be associated with the so-called "building fabric performance gap" [6], meaning that a substantial deviation from the theoretical design is measured when real performance is assessed. One of the metrics most often misaligned from the design stage is the thermal transmittance (U-value) or its inverse, the thermal resistance (R-value).
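Since the U-value is the reciprocal of the total thermal resistance, a theoretical steady-state value for a layered wall can be sketched as a series-resistance sum (in the spirit of ISO 6946). The layer build-up, thicknesses, conductivities, and surface resistances below are illustrative assumptions, not the test walls of this study:

```python
# Illustrative steady-state U-value for a layered wall:
# U = 1 / (R_si + sum(thickness / conductivity) + R_se)

R_SI, R_SE = 0.13, 0.04  # internal/external surface resistances, m2K/W

# (thickness in m, thermal conductivity in W/(mK)) -- assumed values
layers = [
    (0.015, 0.80),   # plaster
    (0.040, 1.00),   # concrete skin
    (0.050, 0.030),  # PUR foam core
    (0.040, 1.00),   # concrete skin
]

# Series sum of resistances, then take the reciprocal.
r_total = R_SI + R_SE + sum(d / lam for d, lam in layers)
u_value = 1.0 / r_total
print(f"R_total = {r_total:.3f} m2K/W, U = {u_value:.2f} W/(m2K)")
```

Even in this toy build-up, the PUR layer dominates the resistance, which is the rationale behind embedding insulation inside the block rather than relying on air cavities alone.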

In this context, an innovative concrete masonry unit (CMU) called a Double C Block (DCB) was developed. The block features an original design wherein a polyurethane (PUR) foam is inserted between two concrete c-shaped layers, thus acting as the insulating layer as well as a binding for the two concrete skins simultaneously. The idea behind this design is to enhance the thermal performance of CMUs by completely eliminating the point of

**Citation:** Caruso, L.; Buhagiar, V.M.; Borg, S.P. The Double C Block Project: Thermal Performance of an Innovative Concrete Masonry Unit with Embedded Insulation. *Sustainability* **2023**, *15*, 5262. https:// doi.org/10.3390/su15065262

Academic Editors: Oz Sahin and Russell Richards

Received: 13 January 2023 Revised: 10 March 2023 Accepted: 13 March 2023 Published: 16 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

contact between the concrete elements and by filling the unvented cavities with insulation. This approach is different from traditional geometrical optimization accomplished via a concrete web and an array of unvented air cavities.

This research is a further development of a previous study carried out between 2013 and 2014 [7]. It presents the results of a thorough monitoring campaign carried out between June and July 2022 at the University of Malta, in which three modes of testing were used: (i) a full-scale, real-life measurement of the U-value; (ii) a calculation using analytical methods; and (iii) a numerical simulation approach using the Finite Element Method (FEM). Specifically, with regard to the full-scale measurement tests, this paper also aims to enrich the set of case studies using the Heat Flow Method (HFM), which, to date, has mostly been applied to single- or multi-layered walls and less frequently to single-leaf walls made of composite CMUs.
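The "performance gap" figures quoted in the abstract are consistent with expressing the deviation relative to the measured value; a minimal sketch of that convention (the formula choice is an inference from the HCB numbers, not an equation stated by the authors):

```python
def performance_gap(u_simulated, u_measured):
    """Relative gap between predicted and measured U-value, in %.

    Expressed relative to the measured value; this convention is an
    assumption that matches the HCB figures quoted in the abstract.
    """
    return 100.0 * (u_measured - u_simulated) / u_measured

# HCB: FEM-predicted 1.93 W/(m2K) vs HFM-measured 2.41 W/(m2K)
print(round(performance_gap(1.93, 2.41), 1))  # -> 19.9
```

Applying the same formula to the DCB values (0.71 vs 1.47 W/(m²K)) gives a figure close to, though not exactly, the 52.1% reported, so the paper may round or define the gap slightly differently there.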

#### **2. Literature Review**

Conventional CMUs, also known as Hollow Concrete Blocks (HCBs), are, in most countries, produced merely to satisfy the structural requirements of load-bearing walls. They consist of a rectangular block with two cores, i.e., two unvented air cavities. This reduces the overall weight of the block while ensuring sufficient compressive strength and, where necessary, accommodating the passage of concealed building services. Compared to a wall made of solid blocks, this typology can, thanks to the presence of the unvented cavities, reduce the wall's overall thermal transmittance.

Looking at the technology itself, CMUs can be thermally improved by exploiting the following strategies, as presented in this literature review: the use of concrete and insulation mixes designed with high-performance thermophysical properties, geometry optimization, and the filling of air cavities with materials characterized by high R-values.

#### *2.1. The Role of Thermophysical Material Properties in CMUs*

The typical declared thermal conductivity (λd) of PUR foams reported in the technical and academic literature is 0.025 ≤ λd ≤ 0.035 W/(mK) [8]. Concrete thermal conductivity, on the other hand, can vary, with densities up to a hundred times higher than those of PUR foams [7]; its λd ranges from 0.69 up to 1.72 W/(mK) for densities between 1600 and 2400 kg/m3 [9,10]. It is important to note that these declared values are obtained by analyzing conditioned specimens at 23 °C and 50% relative humidity. The "design" λ value allows the designer to factor in the effect of the actual range of temperature and relative humidity influencing the behavior of the material, as described in ISO 10456 [11].

Within a dense material such as concrete, heat is propagated mainly by conduction at the atomic level. Al-Hadhrami et al. [12] measured the heat flow under steady-state conditions using a guarded hot plate to obtain the equivalent thermal conductivity (which includes the overall impact of the air cavities) of conventional concrete blocks used in Saudi Arabia. When using ordinary concrete mortar, the thermal conductivity was 0.976 W/(mK). The introduction of lightweight perlite aggregate in the concrete mix design reduced the thermal conductivity to 0.489 W/(mK), around 50% lower.

Air has a very low thermal conductivity, as long as it is still. However, within cavities (or air spaces), heat transfer is driven mainly by convection and radiation (governed by the emissivity "e" of the cavity surfaces), and to a much lesser extent by conduction [13]. Indeed, before the widespread use of plastic materials in building construction, air cavities were initially introduced in northern Europe to reduce the amount of water seepage (e.g., due to driving rain) absorbed by the external brick veneers and hence to keep the internal load-bearing wall dry. As a result, the low thermal conductivity of the unvented air cavity between the internal load-bearing wall and the external layer improved the U-value of the whole multilayered assembly. In general, the purpose of any good insulating material is therefore to encapsulate air with as little material as possible. Insulation material in the form of pores or fibers fulfills the role of reducing convective heat transfer due to air movement.

Polyurethane foams, due to their density and hence their porosity, exhibit a particular behavior, as described by de Luca Bossa et al. [14]. The thermal conductivity measured in laboratory experiments is the sum of several mechanisms: conduction through the polymeric material, conduction through the fluid (blowing agent or air, depending on the aging conditions), convection inside the fluid, and radiation between pore surfaces. Due to this complex combination of different modes of heat transfer, the overall thermal conductivity is "apparent" (ISO 22007-1), as opposed to the "effective" measured value of other types of homogeneous materials, where heat transfer is driven mainly by conduction [15].

Insulation can also have disadvantages, especially in warm or hot climates [16] where cooling needs are relevant. Over-insulation may lead to a risk of overheating, even in winter, and it is therefore important to strike the right balance in the choice of applied insulation. Feist and other authors [17,18], for example, proposed that wall insulation thicknesses between 40 and 100 mm in residential buildings are reasonably effective whenever applied in conjunction with other energy-efficient design strategies.

Urban et al. [19] obtained results in good agreement with the previously mentioned strategies. Their 3D finite difference (FDM) simulations concluded that the best design for the selected types of CMUs (two-core, multicore, serpentine, and interlocking) in terms of thermal resistance had to implement a serpentine-like insulation layer or multicore insulation able to fill all the air cavities. These insulation options were also evaluated against raising the concrete resistivity (the inverse of thermal conductivity) via lower-density mix designs.

The ASHRAE Fundamentals Handbook [20] also emphasizes the effect of mortars on the measured wall R-value by reviewing several empirical studies using the hot box apparatus on insulated and uninsulated masonry walls. Neglecting the horizontal mortar joint could lead to a difference in the actual wall R-value of up to 16% (depending on thermal properties and density of the masonry). When multicore insulation is considered, the measured thermal resistance of the wall is 1–6% lower than the value measured including the mortar joints.

#### *2.2. To Increase the Thermal Transfer Path Length via Geometrical Optimization*

Another strategy to decrease the thermal transmittance of a CMU is the optimization of the design of the block. This involves investigating the effects of complex patterns of vertical cavities with known aspect ratios (height/width) in order to minimize the heat transfer inside them, reduce the overall block weight, and increase the length of the thermal path through the concrete web. Lacarrière et al. [21] numerically calculated the equivalent thermal conductivity of the air inside the cavities of vertically perforated blocks using the finite volume method (FVM). Inside these cavities, with an aspect ratio of 23.3, heat transfer by convection is negligible. Diaz et al. [22] proved that topological optimization can successfully lead to new block geometries with the added value of reduced overall weight without losing load-bearing capabilities. A 3D FEM was used to test the compressive strength; no thermal studies of the blocks were performed.

Although applied to clay bricks, other interesting studies could serve as references for CMUs as well. Li et al. [23] found, via finite volume method (FVM) simulation of a set of 72 different patterns of air cavities, that a reduction of 41% compared to the highest equivalent thermal conductivity could be achieved. The ideal pattern consisted of vertical cavities (with a rectangular or square horizontal cross-section) numbering eight lengthwise and four widthwise. With a similar methodology, Bustamante et al. [24] introduced a diagonal path in the web matrix of clay bricks and then studied the heat flow path via FEM simulation. Although an evident reduction of the thermal transmittance was achieved compared to the traditional Chilean block, the thermal improvements tended to weaken the overall compressive strength.

#### *2.3. Exploiting Full-Scale Tests and Complementing HFM with Other Methodologies*

Several researchers insist that, in order to provide tangible evidence and to find reliable solutions to this performance gap (i.e., to achieve closer agreement between theoretical and actual performance), full-scale test facilities, laboratory tests, and material characterization studies [25] are required. When combined, these methods complement each other and reduce the inherent uncertainties embedded in the assessment of the theoretical energy performance of building components. Indeed, the latter is often assessed by practitioners through standard calculation methods implemented in computer software or via other analytical methods. Bridging the performance gap cannot therefore be considered a purely scholarly activity. For architecture, engineering, and construction (AEC) professionals, it provides tangible evidence of the energy-related environmental impact of building construction. It is also relevant for policy and decision makers who oversee the setting up of building energy codes.

In 2011, the DYNASTEE and INIVE networks, through a series of workshops [6], shed light on the types of advanced facilities available across the world at that time. Stemming from these activities, in 2017, IEA EBC Annex 58 launched the international research collaboration topic called "*Reliable Building Energy performance characterization based on full-scale dynamic measurements*" [25]. A series of reports was therefore released in the field of dynamic testing and data analysis to support the characterization of the actual energy performance of both building components and whole buildings. In one of these, the use of Heat Flux Meter (HFM) measurements was explored for medium-to-heavy opaque assemblies and for a glazing unit; the strategy was described as a robust methodology, with appropriate limitations and advantages, for measuring the U-value (or R-value) in situ.

As highlighted earlier, the scientific literature available on assessing in situ performance via HFM mostly relates to single-leaf wall assemblies with internal plaster and external render, and to multilayered walls (including insulation layers and air cavities as per local construction techniques) [26–28]. Most of the time, these assessments are carried out via non-destructive methodologies in order to preserve the integrity of the wall assembly. The selection of the most representative area of the wall, which should be free from any alien materials, is done via infrared cameras. A relevant set of previous studies using these methodologies is described hereunder.

Dudek et al. [29] assessed a typical UK double-leaf wall with an external skin of face bricks, an air cavity, and a concrete block with a 30 mm PUR panel bonded to one face, plastered internally. They assessed the performance by using commercial software to perform an FEM analysis and then compared it with in situ HFM measurements to establish the performance gap.

Asdrubali et al. [30] selected six wall types from buildings implementing bio-architectural features located in the Umbria region of Italy. They made use of analytical calculations according to ISO 6946 [31] as theoretical values against which to compare the HFM results.

Baker [32] assessed traditional buildings in Scotland, most of them constructed with single-leaf stone walls; for one of them, the author compared the in situ assessment with an identical assembly purposely rebuilt under laboratory conditions and tested inside an environmental chamber (known as the hot box apparatus).

When assessing buildings in the Catalonia region in Spain, Gaspar et al. [33] implemented the "dynamic analysis method" and compared it to the "average method", both included in the ISO 9869-1:2014 [34] standard. The performance gap was then established by performing theoretical calculations according to ISO 6946 [31].

To reduce the effect of the oscillation of outdoor environmental variables when assessing existing buildings in Italy, Evola et al. [35] enclosed the HFM and related thermocouples in a small portable hot box and attached the whole apparatus to the external wall.

In order to shorten the measuring campaign without sacrificing precision, Rasooli and Itard [36] investigated the use of two HFM sensors installed in series: one on the inside and one on the outside face of the wall. The predicted U-value was calculated through an algorithm solved in MATLAB for the selected types of multilayer walls. When the insulation was sandwiched between layers or installed on the indoor side of the wall, the HFM placed inside converged faster due to the much more stable indoor environment. Conversely, when the insulation was applied outside, the measurements taken by the (shielded) outdoor HFM converged faster.

Some authors such as Atsonios et al. [37] focused on comparing the two main international standards for in situ U-value assessments via the HFM method, as described in ISO 9869-1 [34] and ASTM C1155 [38]. Using the ISO standard, they performed the "Average method" and "Dynamic Analysis methods" and the results were then compared to the equivalent "Summation Method" and the "Sum of the least squares" as provided by the ASTM standard.

It is also important to mention that earlier studies in Malta showed that, for a 230 mm thick HCB wall (without plaster and render layer), a typical U-value is in the region of 2.41 W/(m2 K) via the HFM method [39]. A single-leaf wall made up of these blocks would not be compliant with the local Maltese Building Energy code, Part F, prescribing 1.57 W/(m2 K) for exposed wall elements [40].

Micallef [7] carried out several tests on DCB prototypes made with a variety of handmade PUR foams, testing different constituents, and then prepared 25 blocks. These DCB prototypes were tested through a set of hot box experiments. The U-value was expressed through the measurement of the temperature differences across the hot–cold chambers and the heat provided by ceramic resistors, as shown in Table 1.

**Table 1.** The U-value obtained via a hotbox apparatus (adapted from Micallef [7]).


These tests included a 0.1 (m2K)/W surface resistance on both sides and were carried out on specimens laid without plaster. The results indicate that the DCB values were well within the Part F limit, as opposed to the conventional wall built with HCB units.

#### **3. Methodology**

The methodology described in this research is based on three different approaches to obtaining the U-value (and R-value) of both the DCB (UDCB and RDCB) and the HCB (UHCB and RHCB) under steady-state conditions. The block dimensions are shown in Figure 1.

**Figure 1.** Conventional HCB (**a**) and the new DCB (**b**).

This pilot study started in January 2022, but the data regarding the use of the HFM sensor on the full-scale test cells relate to measurements carried out between June and July 2022. The first approach, a purely theoretical one, involved the application of the two methodologies proposed by ISO 6946:2017 [31]. This standard proposes a theoretical calculation using the "simplified method", applicable, with some limitations, to elements containing inhomogeneous layers. In the same standard, a leeway to overcome these limitations is given by the "detailed method", wherein numerical simulations are carried out with established modelling rules in accordance with ISO 10211 [41]. In this research, software performing two-dimensional (2D) steady-state conduction and radiation heat-transfer analysis based on the Finite Element Method (FEM) was deployed specifically for this task.

The third methodology employed in situ measurements of the U-value (and R-value) by means of Heat Flux Meters placed on the walls of two geometrically identical test cells: one built with conventional HCB walls and the other built using DCB walls.

The overall dimensions of the test cells in terms of length, depth, and height were 5 × 4 × 3.15 m, comparable to the minimum dimensions described in the EBC Annex 58 report for full-scale test facilities [25]. The two test cells were also identical in terms of the ground slab and roof build-ups: both were equipped with 10 cm of EPS insulation. The roof finishes included a reflective white paint with a Solar Reflective Index (SRI) > 104 on top of a torch-welded black waterproofing membrane. Thermal bridge corrections at the wall-roof/ground slab edges were included too. Thermal bridge corrections around the window and door jambs, sills, and lintels were introduced in the DCB room only. Trickle ventilators (10 × 15 cm wall openings) were also provided on the eastern and western façades to resemble local construction practices. Both rooms were externally rendered in white with a lime and cement mix and internally plastered with gypsum. An air conditioning split unit with a heat pump was installed in each test cell to control and ensure stable indoor conditions.

In each test cell, two Heat Flux Meters and a total of four thermocouples for surface temperature readings were installed on the north-facing walls to avoid any interference from direct solar radiation. This methodology is described by ISO 9869-1:2014 [34] and was also informed by the previously mentioned peer-reviewed research regarding in situ measurements of full-scale single-leaf and multilayered walls, as shown in Figure 2 below. The assumptions made for the theoretical calculation performed according to ISO 6946 are listed in Table 2. The thermal performance of the unvented air cavities in the HCB is expressed via an equivalent thermal resistance as per the ISO 6946 rules. The value of this thermal resistance is 0.17 m2K/W. The equations provided by this standard consider the effect of the emissivity of the materials surrounding the cavity (assumed e = 0.93 for conventional concrete). A fictitious thermal conductivity was input into the FEM analysis. This value was obtained by dividing the thickness of the air cavities of the HCB, 130 mm, by the mentioned resistance, giving a value of 1.01 W/(mK), in order to satisfy the set of inputs required by the FEM software.


**Table 2.** List of material properties used for theoretical and FEM calculations.


**Figure 2.** (**a**) Overall dimensions of the test cells (top) and test cells as built at the University of Malta campus. (**b**) Axonometric view of the HCB and DCB test cell's external fabric, wall, and roof layers.

From the dataset listed in Table 2, the specific heat capacity of the whole block can be calculated as the sum, over the layers, of each layer's thickness multiplied by its specific heat capacity and density. For the DCB, the value is 278 kJ/(m2K), and for the HCB, it is 181 kJ/(m2K).
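The layer-by-layer sum described above can be sketched as follows. The layer values used here are illustrative placeholders (the Table 2 data are not reproduced in this text), so the result does not equal the quoted 278 or 181 kJ/(m2K):

```python
# Per-square-metre heat capacity of a block build-up:
# kappa = sum over layers of (thickness * specific heat * density).
# The layers below are hypothetical, NOT the Table 2 values.

def areal_heat_capacity(layers):
    """layers: iterable of (thickness_m, c_J_per_kgK, rho_kg_per_m3).
    Returns the capacity in kJ/(m2K)."""
    return sum(d * c * rho for d, c, rho in layers) / 1000.0

# Hypothetical two-skin concrete + PUR core cross-section:
example_layers = [
    (0.060, 1000.0, 2000.0),  # concrete skin
    (0.100, 1400.0, 35.0),    # PUR foam core
    (0.060, 1000.0, 2000.0),  # concrete skin
]
print(round(areal_heat_capacity(example_layers), 1))  # kJ/(m2K)
```

Substituting the actual per-layer thicknesses, specific heats, and densities from Table 2 would reproduce the 278 and 181 kJ/(m2K) figures quoted for the DCB and HCB.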

Table 3 includes the indoor and outdoor surface resistances (or film coefficients, Rsi and Rse) used for the theoretical calculations. The related temperatures, 20 °C and 10 °C, respectively, also constitute the chosen boundary conditions for the finite element simulation.


**Table 3.** List of boundary conditions applied in ISO 6946's "simplified method" and FEM analysis.

*3.1. First Method—ISO 6946:2017—U-Value—the "Simplified Method"*

ISO 6946:2017 is a recognized standard describing the approximate calculation of steady-state heat transfer by conduction through building assemblies whenever inhomogeneous layers are present, as is the case with air cavities or composite materials. Two accepted methods are described: the "simplified calculation method" and the "detailed calculation method". ISO 6946:2017 follows an electrical analogy of parallel and series circuits to address the presence of adjacent, thermally dissimilar material layers. Since heat behaves like a current flowing through the path of least resistance, the flow tends to bend towards the highly conducting concrete elements, maximizing the heat transfer rate.

For this reason, the simplified method requires two calculations: the first can be described as the "parallel path" (mono-dimensional flow), which can lead to an overestimate, Rtot,upper, while the second, called "isothermal planes", can lead to an underestimate, Rtot,lower, of the actual thermal resistance of the build-up. The overestimate is related to the exclusion of any lateral component of the heat flow, which is instead included in Rtot,lower. As also described in the ASHRAE fundamentals [20], since the actual value lies somewhere between Rtot,upper and Rtot,lower, ISO 6946:2017 proposes an arithmetic average of the two calculated thermal resistances, as shown in Equations (1)–(5). Given the copious number of subscripts, a nomenclature table was added at the end of this paper for clarity.

$$\frac{1}{R_{tot,upper}} = \frac{f_a}{R_{tot,a}} + \frac{f_b}{R_{tot,b}} + \dots + \frac{f_q}{R_{tot,q}},\tag{1}$$

$$\frac{1}{R_j} = \frac{f_a}{R_{aj}} + \frac{f_b}{R_{bj}} + \dots + \frac{f_q}{R_{qj}},\tag{2}$$

$$R_{tot,lower} = R_j + R_{se} + R_{si},\tag{3}$$

$$R_{tot} = \frac{R_{tot,upper} + R_{tot,lower}}{2},\tag{4}$$

$$re = \frac{R_{tot,upper} - R_{tot,lower}}{2 \cdot R_{tot}}.\tag{5}$$

Utot is thus the reciprocal of Rtot. In addition to the block itself, heat transfer also occurs through the mortar. Indeed, when insulation is introduced, ordinary mortars can create thermal bridges because they constitute an additional path for heat flow. The thermal transmittance was thus increased accordingly. Figures 3 and 4 graphically represent the mono-dimensional heat flux and the isothermal planes for both the DCB and HCB.

**Figure 3.** DCB-Schematic representation of the block dimensions (**a**) and the electrical analogy used for ISO 6946's "simplified method*"* (**b**); all dimensions in mm.

**Figure 4.** HCB-Schematic representation of the block dimensions (**a**) and the electrical analogy used for ISO 6946's *"*simplified method*"* (**b**); all dimensions in mm.
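The "simplified method" of Equations (1)–(5) can be sketched as follows. The area fractions and path resistances used here are hypothetical, not the actual DCB/HCB values:

```python
# Sketch of ISO 6946's "simplified method" (Equations (1)-(5)) for a single
# inhomogeneous layer between two surface resistances. The area fractions and
# path resistances below are illustrative, NOT the actual DCB/HCB values.

R_SI, R_SE = 0.13, 0.04  # indoor/outdoor surface resistances, m2K/W

def simplified_method(sections, r_si=R_SI, r_se=R_SE):
    """sections: list of (area fraction f, layer resistance R) heat-flow paths.
    Returns (R_tot, relative error re)."""
    # Eq (1): upper limit -- parallel paths, each with its surface resistances
    r_upper = 1.0 / sum(f / (r_si + r + r_se) for f, r in sections)
    # Eq (2): combined resistance of the single inhomogeneous layer
    r_j = 1.0 / sum(f / r for f, r in sections)
    # Eq (3): lower limit -- isothermal planes
    r_lower = r_j + r_se + r_si
    # Eq (4): arithmetic average of the two limits
    r_tot = (r_upper + r_lower) / 2.0
    # Eq (5): maximum relative error
    re = (r_upper - r_lower) / (2.0 * r_tot)
    return r_tot, re

# Hypothetical block: 80% concrete web (R = 0.20) and 20% insulated path (R = 2.0)
r_tot, re = simplified_method([(0.8, 0.20), (0.2, 2.0)])
print(round(1.0 / r_tot, 2), round(100 * re, 1))  # U in W/(m2K), re in %
```

A large spread between the two limits, and hence a large re, is exactly what disqualifies the simplified method for strongly heterogeneous blocks such as the DCB, as discussed in Section 4.1.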

#### *3.2. Second Method—ISO 6946:2017—U-Value—"Detailed Method"*

Since the first methodology is subject to some limitations, a detailed calculation can facilitate the assessment of complex geometries, including those containing composite materials, such as the DCB. The advantage of the FEM software described hereunder is that most of the material properties applied in the "simplified method" can also be implemented in the numerical analysis (i.e., surface resistances, film coefficients, etc.), so that a comparison is possible. Computer simulations were performed in THERM (version 7.8.16). This software numerically resolves the steady-state, two-dimensional heat conduction–radiation problem under the assumption of constant physical material properties for an isotropic medium; no heat is stored in the cross-section, so all energy that enters the cross-section at the interior surface leaves through the exterior surface.

In the program, the magnitude of the heat flux vector normal to the boundary is given by Fourier's law:

$$q_f + q_\varepsilon + q_r = -k \left( \frac{\partial T}{\partial x} n_x + \frac{\partial T}{\partial y} n_y \right),\tag{6}$$

$$q_\varepsilon = h(T - T_\infty),\tag{7}$$

$$q_r = \varepsilon_i \sigma T_i^4 - \alpha_i H_i,\tag{8}$$

where T = f(x,y) and qf is the heat flux boundary condition. Refer to the nomenclature at the end of the paper for greater clarity.

The numerical resolution via FEM was performed automatically using the proprietary Finite Quadtree Method (FQM) [42]. The mesh was generated and adapted through several iterations up to the desired accuracy. During the simulations, the parameters influencing the FQM were set according to ISO 10211: 10 iterations and a 5% maximum error. The program integrates the heat flux over the tagged boundary segment (or group of segments given the same tag), divides that flux by the projected length of the segment and the defined temperature difference, and returns a U-value. Hence, the U-values depend on the assigned boundary.
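The flux-integration step just described can be illustrated with a minimal sketch (trapezoidal integration of a hypothetical flux profile; this is not THERM's actual interface, and the sample values are invented):

```python
# Sketch of recovering a U-value from a simulated boundary heat flux:
# integrate q along the tagged boundary segment, then divide by the projected
# segment length and the imposed temperature difference. The flux samples
# below are invented for illustration.

def u_from_boundary_flux(xs, qs, delta_t):
    """xs: positions along the boundary (m); qs: normal heat flux (W/m2);
    delta_t: imposed temperature difference (K). Returns U in W/(m2K)."""
    # Trapezoidal integration of q over the segment
    total_q = sum((qs[i] + qs[i + 1]) / 2.0 * (xs[i + 1] - xs[i])
                  for i in range(len(xs) - 1))
    length = xs[-1] - xs[0]  # projected length of the segment
    return total_q / (length * delta_t)

# Hypothetical flux profile over a 0.45 m segment with a 10 K difference,
# with higher flux over the conductive concrete web:
xs = [0.0, 0.15, 0.30, 0.45]
qs = [6.0, 9.0, 9.0, 6.0]  # W/m2
print(round(u_from_boundary_flux(xs, qs, 10.0), 3))  # W/(m2K)
```

This also makes clear why the returned U-value depends on which boundary is tagged: a different segment yields a different integrated flux and projected length.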

#### *3.3. Third Method—In Situ Thermal Transmittance Measurements*

The experimental set-up was built on the north-facing façades of the two test cells and, as described earlier, was carried out according to ISO 9869:2014 [34]. To increase the heat flux sensitivity, and hence improve the measurements, a set of two Heat Flux Meters was used, and the related indoor and outdoor surface temperatures were recorded through type T thermocouples, as shown in Table 4. The reported measurements refer to the first two weeks of June 2022, during which no rainy days occurred.


**Table 4.** List of the equipment used during HFM in situ monitoring.

The indoor temperature was maintained at a constant 18 °C with an air conditioning split unit (with a selected cooling set point of 16 °C) turned on in order to guarantee at least a 5 °C temperature difference between indoors and outdoors. Heat flux and surface temperatures were recorded every three minutes and then averaged over 30 min so as to be comparable with the indoor and outdoor temperature sensor timesteps (Figure 5). The total recording session lasted 2 months (from June 2022 until the end of July 2022).

**Figure 5.** (**a**) Thermal image to ensure that the HFM location was representative of the wall; (**b**) outdoor and (**c**) indoor thermocouples and HFM sensor attached on the north façades of both test cells.

An infrared camera was used to identify the most thermally uniform area of the wall, where the sensors were then applied. For this purpose, the test cells were built with surface-mounted conduits in order to avoid placing any alien material underneath the plaster (Figure 5).

The equation used to calculate the U-value at which the measurement should converge was determined according to ISO 9869-1's [34] "average method", shown in Equation (9) below, where

qj is the density of the heat flow rate (W/m2),

Tij is the interior ambient temperature (°C),

Tej is the outdoor environmental temperature (°C),

and the index j enumerates the individual measurements according to the established sampling time.

$$U = \frac{\sum_{j=0}^{n} q_j}{\sum_{j=0}^{n} (T_{ij} - T_{ej})}.\tag{9}$$

In order to obtain reliable measurements, the difference in temperature between indoors and outdoors had to be more than 5 °C. The cooling set point was selected in order to stabilize the indoor temperature and establish a constant heat flux from the outdoor environment towards the indoor environment. Indeed, during the hottest days, temperatures above 30 °C were recorded. To ascertain the end of the test, the criterion used was the integer obtained from Equation (10) below, where DT is the overall duration of the test in days. This equation is valid for heavy elements with a specific heat capacity higher than 20 kJ/(m2K). Measurements should not deviate by more than ±5% from the values measured during this time.

$$INT\left(2 \times \frac{D_T}{3}\right).\tag{10}$$
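The "average method" of Equation (9) and the test-length criterion of Equation (10) can be sketched as follows; all sample data are invented for illustration:

```python
# Sketch of ISO 9869-1's "average method" (Equation (9)) and the test-length
# criterion of Equation (10). All sample data are invented for illustration.

def u_average(q, t_in, t_out):
    """Eq (9): U = sum(q_j) / sum(T_ij - T_ej).
    q in W/m2, temperatures in degrees C; flux taken from warm to cold side."""
    return sum(q) / sum(ti - te for ti, te in zip(t_in, t_out))

def analysis_start_day(test_days):
    """Eq (10): INT(2 * DT / 3), the day from which the measured values are
    checked against the +/-5% deviation criterion."""
    return int(2 * test_days / 3)

# Hypothetical averaged readings with a ~10 degC warm-to-cold difference:
q = [10.0, 12.0, 11.0, 9.0]        # heat flux density, W/m2
t_in = [20.0, 20.0, 20.0, 20.0]    # warm-side temperature, degC
t_out = [10.0, 9.0, 11.0, 10.0]    # cold-side temperature, degC
print(round(u_average(q, t_in, t_out), 2))  # W/(m2K)
print(analysis_start_day(12))               # for a 12-day campaign
```

In the actual campaign the roles are reversed in summer (heat flows inward), but the ratio of summed flux to summed temperature difference is computed in the same way.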

#### **4. Results**

#### *4.1. Theoretical U Calculation According to ISO 6946—Simplified Method*

In 1 m2 of wall, either DCB or HCB, the face area of each block was 450 × 250 mm2, while the area of ordinary mortar (assumed 10 mm thick) was approximately 0.03 m2 (only horizontal mortar joints were considered, to resemble typical local building practice). The mortar λd was assumed to be 0.75 W/(mK) (Figure 6).

**Figure 6.** Diagram of the wall dimensions including the layer of mortar.

The U-value of the mortar joint in the DCB wall is slightly different from that in the HCB wall because it follows the block thickness: 2.38 W/(m2K) for the DCB and 2.17 W/(m2K) for the HCB.

For the DCB, the U-value of the wall, corrected for the presence of ordinary premix mortar and calculated as the weighted average Ub+m, is 0.81 W/(m2K):

$$U_{b+m,DCB} = \frac{U_{tot,DCB} \times A_{DCB} + U_{mortar} \times A_{mortar}}{A_{DCB} + A_{mortar}}.\tag{11}$$

For Ub+m,HCB, a similar weighted average can be calculated by simply replacing the related U-values. This average is 2.4 W/(m2K).
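The weighted average of Equation (11) can be checked with the DCB figures quoted in this section (block area taken as 0.97 m2 per 1 m2 of wall, following the ~0.03 m2 of mortar):

```python
# Area-weighted combination of block and mortar U-values over 1 m2 of wall,
# as in Equation (11). Values taken from the text for the DCB case:
# U_DCB = 0.76 W/(m2K), U_mortar = 2.38 W/(m2K), mortar area ~0.03 m2.

def u_block_plus_mortar(u_block, a_block, u_mortar, a_mortar):
    """Weighted average U-value of the block + mortar assembly."""
    return (u_block * a_block + u_mortar * a_mortar) / (a_block + a_mortar)

u = u_block_plus_mortar(0.76, 0.97, 2.38, 0.03)
print(round(u, 2))  # 0.81 W/(m2K), matching the reported DCB value
```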

As shown in Table 5, the theoretically calculated U-value of the DCB without considering the effects of the mortar is 0.76 W/(m2K), with a relative error of 29%. UDCB then increases by about 7%, to 0.81 W/(m2K), when ordinary mortars are considered. Since the acceptable limit for the relative error should be equal to or less than 20%, the simplified method is not a valid way to calculate the theoretical UDCB.

The UHCB of the wall is 2.01 W/(m2K), and the relative error is 1%, which is within the limit established by the standard. When corrected to consider the presence of mortars, there is no increase.


**Table 5.** Application of the "simplified method" to the DCB and HCB with and without mortar joints.

It can be noted that the complexity of the DCB geometry has increased the likelihood of cumulative errors within this method. Both the isothermal and parallel path calculations, due to the simultaneous presence of a resistive material (PUR foam) and a conductive material (concrete), have led to a relative error (re) well beyond the limits established by ISO 6946. These uncertainties are probably located around the changes in direction of the s-shaped insulation. The overall homogeneity of the HCB mix design and of the mortar used have instead led to a relative error that is within the limit. The air cavity's equivalent thermal resistance is not as high as that of the PUR insulation, and hence the theoretical calculation for the conventional block was found to be within the limits of ISO 6946.

It is important to note that in the literature [20], a calculation method based on the same principles of ISO 6946's "simplified method" called the "modified zone method" is proposed. This method can be used for assemblies containing metal elements (with high thermal conductivity) that may locally increase the thermal transmittance of the overall buildup. It consists of the combination of thermal resistance calculated through a parallel path (when the insulation is not interfering with other materials) and an isothermal path when there are local conditions characterized by composite materials.

The authors believe that the significantly high error in the UDCB could be attributed to the stark difference between the thermal conductivities of concrete and PUR; assessing a theoretical UDCB method may be considered for further research on this aspect alone. Because the proposed methodology strictly follows the calculation methods described by ISO 6946, the mentioned UDCB cannot be used for comparison with the other methodologies.

#### *4.2. Numerical FEM Analysis According to ISO 6946:2017's Detailed Method*

FEM analysis includes the effect of mortars when solving Fourier's law. In Figure 7, the results of the simulations show the temperature gradients across the buildup.

**Figure 7.** DCB wall: the four sections used to calculate the UDCB; all units in °C.

These images show that the DCB isotherms exhibit a marked variation of the heat flux due to the presence of materials with large differences in thermal conductivity (concrete, foam, and cement mortar). The variations are highest in the proximity of the changes in direction of the insulation layer. The overall results for the UDCB based on the FEM analysis are listed in Table 6.

**Table 6.** UDCB calculated via FEM according to ISO 6946's "detailed method".


The weighted average of the four cross-sections is 0.68 W/(m2K).

The arithmetic average between the vertical and horizontal sections is 0.71 W/(m2K). This value was chosen for comparison with the HFM measurements.

A more uniform set of isotherms is shown throughout the HCB section so that there is no relevant distortion even when geometrical or material changes occur. The overall results for the UHCB based on the FEM analysis are listed in Table 7.

**Table 7.** UHCB calculated via FEM according to ISO 6946's "detailed method".


The weighted average of the vertical cross-sections is 1.97 W/(m2K) (Figure 8).

**Figure 8.** HCB wall: the calculated U and relative convergence; all units in °C.

The arithmetic average of the vertical and horizontal sections is 1.93 W/(m2K). This value was chosen for comparison with the HFM measurements.

#### *4.3. In Situ Measurement of the U-Value via HFM*

The raw data generated by the in situ measurements showed significant oscillations in the surface temperatures and related heat flux, especially at the beginning of the testing period. A steady single U-value could therefore not be obtained over the entire measurement campaign. ISO 9869-1 specifically recommends extending tests beyond 72 h when the specific heat capacity per unit area of the component is above 20 kJ/(m2K). This is certainly the case for the DCB (278 kJ/(m2K)) and the HCB (181 kJ/(m2K)). According to Equation (10), the measurement campaign could end after 12 days. Using this experimental setup, the U-value can be calculated at the selected timestep of the datalogger (3 min). The surface resistances were assumed fixed according to Table 3. The results plotted in Figure 9 are 24 h averages for each of the 12 days considered. Regarding the DCB, it can be observed that the thermal transmittance typically lies within 1 ≤ UDCB ≤ 1.5 W/(m2K). The convergence value of UDCB calculated via Equation (9) is 1.47 W/(m2K). Likewise, for the HCB, the thermal transmittance typically lies within 2 ≤ UHCB ≤ 3 W/(m2K). The UHCB according to the "average method" is 2.41 W/(m2K), which is in good accord with the results obtained by Caruana et al. [39].
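The "average method" underlying these convergence values divides the cumulative measured heat flux by the cumulative interior/exterior temperature difference. The following is a minimal sketch of that calculation; all sample data are synthetic, not the measured series of this campaign.

```python
# Minimal sketch of ISO 9869-1's "average method": the U-value estimate is the
# cumulative sum of measured heat-flux samples divided by the cumulative sum
# of interior/exterior temperature differences. All sample data are synthetic.

def running_u(heat_flux, t_in, t_out):
    """Progressive U-value estimate (W/(m2K)) after each sample."""
    q_sum, dt_sum, estimates = 0.0, 0.0, []
    for q, ti, te in zip(heat_flux, t_in, t_out):
        q_sum += q
        dt_sum += ti - te
        estimates.append(q_sum / dt_sum)
    return estimates

# Synthetic samples: flux oscillating around ~14.8 W/m2, dT around 10 K
q = [12.0, 18.0, 13.5, 16.2, 14.0, 15.1]
t_in = [24.0] * 6
t_out = [14.0, 13.5, 14.5, 14.0, 13.8, 14.2]

estimates = running_u(q, t_in, t_out)
print(f"final estimate: {estimates[-1]:.2f} W/(m2K)")  # 1.48 for this data
```

In a real campaign the estimate oscillates, as in Figure 9, and the test ends once the convergence criteria of the standard are met.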

**Figure 9.** Plotted results of the U-value measurement campaign, which lasted twelve days.

The relatively long duration of the test could be attributed to the high daily swings of the outdoor temperature and the excessive heat stored in the walls. Both walls have a relevant specific heat capacity, as previously mentioned. This increased the oscillation of the U-value, delaying convergence. It can also be noted that the insulation embedded in the DCB has the beneficial effect of smoothing the peaks experienced by the HCB. The comparison between the FEM results and the convergence values obtained by the in situ measurement is shown in Table 8.

**Table 8.** Comparison of the U-values obtained according to ISO 6946's "detailed method" and ISO 9869-1's "average method".


It is also worth noting that the values listed in Table 1, when the temperature difference is above 10 K, are very close to the one measured via HFM. However, previous studies on the DCB did not report the thermal conductivity of either the foam or the concrete, so a more detailed comparison is not possible.
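The performance gap summarized in Table 8 can be expressed as the deviation of the predicted U-value relative to the measured one. The sketch below uses the rounded values reported in Sections 4.2 and 4.3; with rounded inputs, the DCB figure evaluates to roughly 51.7%, slightly different from the 51.2% quoted in the conclusions, which presumably reflects unrounded intermediates.

```python
# Relative performance gap between predicted (FEM) and measured (HFM) U-values,
# expressed with respect to the measured value. Inputs are the rounded values
# reported in this paper; unrounded intermediates shift the DCB figure slightly.

def gap_percent(u_predicted, u_measured):
    return 100.0 * (u_measured - u_predicted) / u_measured

dcb_gap = gap_percent(0.71, 1.47)   # FEM vs. HFM for the DCB
hcb_gap = gap_percent(1.93, 2.41)   # FEM vs. HFM for the HCB
print(f"DCB: {dcb_gap:.1f}%  HCB: {hcb_gap:.1f}%")
```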

According to the authors, the relatively high range of uncertainties and the discrepancy between the theoretical and measured values could be attributed essentially to the thermophysical parameters of the building materials as shown in Table 2.

The declared values assumed in Table 2 may not be truly representative of the actual values of the materials exposed to external environmental conditions. ISO 6946 recommends deriving design thermal conductivity values from the declared data provided by the manufacturer in the technical sheets. In this way, designers could obtain more realistic thermal transmittance values beyond the reference conditions of 23 °C and 50% relative humidity. This difference could be taken into consideration in future papers. Alternatively, the thermal conductivity can be measured under laboratory conditions by means of a guarded hot plate, as per ISO 8302:1991 [43]; this methodology requires sampling of the material and is therefore a destructive approach. A hot box apparatus (either calibrated or guarded) could also be used according to ISO 8990:1994 [44], wherein a representative sample wall has to be built and monitored under laboratory conditions.

When it comes to the HFM method, the following refinement is being considered:


#### **5. Conclusions**

CMU is a popular construction technology manufactured in a variety of thicknesses, whose main application is building both load-bearing and non-load-bearing walls. On its own, this basic building technology falls short when an energy-efficient facade is required. This paper has demonstrated that building envelopes built with simple HCBs perform seriously below minimum requirements; there is therefore an urgent need to address the performance gap between predicted U-values (or R-values) and those measured on site. Studies of this kind are relevant not only for architects and building engineers but also for policy and decision makers who are advised by academics on the establishment of new or upgraded building energy codes.

In this context, the innovative Double C Block (DCB) presented in this paper purports to do just that: raise awareness of the relevance of the building envelope performance gap. The block features an original geometric design wherein a polyurethane (PUR) foam is inserted between two C-shaped concrete layers; the foam acts as the insulating layer and binds the two concrete skins together. This design outperforms the thermal performance of the HCB by completely eliminating thermal bridging between the concrete skins and by replacing the unvented air cavities with insulation. This approach differs from the traditional geometrical optimization achieved via a concrete web and an array of unvented air cavities. The role of high-performance thermophysical properties is also briefly explored. This paper also has the scope of enriching the set of case studies using the Heat Flow Method (HFM), which, to date, has mostly been applied to single- or multi-layered walls and less frequently to single-leaf walls made of composite CMU blocks.

Promising results were obtained when assessing the thermal performance of the block against three different methodologies: (i) theoretical steady-state U calculations; (ii) a two-dimensional radiation–conduction steady-state heat-transfer simulation based on FEM; and (iii) in situ monitoring of the U-value by means of the HFM.

The UDCB according to ISO 6946's "simplified method" had to be modified due to the effect of ordinary cement mortar, leading to a 7% increase; the value found was 0.81 W/(m2K). However, with a relative error of 29%, higher than the acceptable threshold, this value is less reliable than the numerical simulation. The reason for this high relative error could be found in the cumulative sources of error arising from the combination of thermally different layers (concrete, mortar, and insulation foam). For this reason, this theoretical UDCB was excluded from comparison with the other methodologies. This is not the case for the HCB, for which the UHCB is 2.01 W/(m2K) and the effect of the bedding mortar joints is deemed irrelevant; nevertheless, the theoretical UHCB was also excluded for the sake of coherence.

Instead, the output of ISO 6946's "detailed method" via FEM analysis led to a more reliable UDCB equal to 0.71 W/(m2K), including the effect of mortar. The UHCB was equal to 1.93 W/(m2K) (approximately 12% lower than the "simplified method" value).

The results obtained so far show that the first two methodologies corroborate each other, including when the effect of mortar is taken into consideration. The FEM results were then compared to in situ monitoring of a full-scale north-facing wall made with the same materials used in the computer simulations. After 12 days of monitoring, the campaign data tended towards the converged value according to Equation (10). UDCB converged at 1.47 W/(m2K), with the theoretical value 51.2% lower than the measured one. UHCB converged at 2.41 W/(m2K), with the theoretical value obtained via FEM approximately 19.9% lower than the in situ result.

There is an evident performance gap between predicted and measured U-values, as discussed in the cited scientific literature. Despite this gap, the DCB technology showed superior thermal performance (a lower U-value) compared to the conventional HCB across all the described methodologies. Moreover, the novel DCB now complies with, and actually outperforms, the minimum standards of the Maltese building energy code.

**Author Contributions:** Conceptualization, V.M.B.; methodology, V.M.B., L.C., and S.P.B.; formal analysis, L.C.; investigation, V.M.B. and L.C.; resources, V.M.B.; data curation, L.C.; writing—original draft preparation, L.C.; writing—review and editing, L.C., V.M.B., and S.P.B.; visualization, L.C.; supervision, V.M.B.; project administration, V.M.B. and L.C.; funding acquisition, V.M.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research is the outcome of the Double C-Block project, a three-year research project funded by the Malta Council for Science and Technology (MCST) under the Technology Development Programme (TDP), grant reference R&I\_2019\_010T.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data generated through all experiments, software, and calculations, as presented in this article, are being made available as an integral part of the article protected by international IP rights. Design geometry of the DCB is protected by a design registration, as filed to the Industrial Property Registrations Directorate, Ministry for the Economy, Investment and Small Business, Government of Malta Design Number 1462.

**Acknowledgments:** The manuscript is a revised version of an original scientific contribution presented at the 17th Sustainable Development of Energy, Water and Environmental Systems (SDEWES) conference, held between the 6th and the 10th of November 2022 in Paphos, Cyprus, which was subsequently invited for review for inclusion in a Special Issue of Sustainability dedicated to the said conference. Compared to the original conference paper, the abstract, introduction, methodology, and literature review have been extensively revised and expanded to further explain the research available on the subject matter and to address how the paper tackles existing research gaps. The results and the figure on page 14 were also updated to present an analysis carried out after the submission deadline of the said conference. The authors acknowledge the work on the data-logging setup by Nicholas Azzopardi and Alex Falzon, Assistant Laboratory Manager and Laboratory Officer, respectively, at the University of Malta. The authors are also grateful to Cementstone Ltd. as the commercial partner in the MCST-awarded grant, and to the Cuschieri Group for providing the fenestration in kind for both test cells.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **Nomenclature**


#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Implications of the Interrelations between the (Waste)Water Sector and Hydrogen Production for Arid Countries Using the Example of Jordan**

**Thomas Adisorn 1,\*, Maike Venjakob 1, Julia Pössinger 1, Sibel Raquel Ersoy 1, Oliver Wagner <sup>1</sup> and Raphael Moser <sup>2</sup>**


**Abstract:** In the energy sector, few topics, if any, are more hyped than hydrogen. Countries develop hydrogen strategies to provide a perspective for hydrogen production and use in order to meet climate-neutrality goals. However, in this topical field, the role of water is less accentuated. Hence, in this study, we seek to map the interrelations between the water and wastewater sector on the one hand and the hydrogen sector on the other hand, before reflecting upon our findings in a country case study. We chose the Hashemite Kingdom of Jordan because (i) hydrogen is politically discussed there, not least due to the country's high potential for solar PV, and (ii) Jordan is water-stressed, certainly a poor precondition for water-splitting electrolyzers. This research is based on a project called the German-Jordanian Water-Hydrogen-Dialogue (GJWHD), which started with comprehensive desk research, mostly to map the intersectoral relations and to scope the situation in Jordan. We then carried out two expert workshops, in Wuppertal, Germany, and Amman, Jordan, in order to further discuss the nexus with a diverse set of invited stakeholders. The mapping exercise shows various options for hydrogen production and opportunities for planning hydrogen projects in water-scarce contexts such as Jordan.

**Keywords:** hydrogen; water; wastewater; electrolysis; water scarcity; wastewater treatment plants; desalination; Jordan

#### **1. Introduction**

This paper is based on input given at the SDEWES conference in 2022 [1].

The European Union and its Member States such as Germany have set ambitious goals for the decarbonization of their economy, their buildings, their transport, and their society as a whole [2,3]. In order to achieve these goals, governments rely heavily on climate-neutral or green hydrogen [4–6], whose production routes emit substantially less CO2 into the atmosphere compared with the conventional production of hydrogen [7].

Today, approximately 100 megatons of such conventional hydrogen are produced worldwide, associated with significant carbon dioxide (CO2) emissions of approximately 100 megatons [8–10]. It is used in ammonia and methanol production or in refineries [10]. A substantial amount is produced from natural gas (or methane, CH4) using steam reforming and water–gas shift processes [8,11], consuming approximately 6% of global natural gas use [8,9]. Even though hydrogen is a colorless gas, the political debate has labeled this type of hydrogen as "gray" [7,12]. Germany produces 60 TWh or 1.8 megatons of hydrogen, most of which is emission-intensive [9]. If CO2 from reforming CH4 is captured, stored, or utilized, "blue" hydrogen is harvested. If CH4 is pyrolyzed, the resulting products are solid carbon and "turquoise" hydrogen. From a climate perspective, high hopes rest upon electrolysis, which splits water (H2O) into hydrogen and oxygen (O2). According to the Wuppertal Institute and DIW Econ, 5% of the hydrogen produced in Germany comes from electrolysis using water as a feedstock [13]; the global share is only 0.1% [8]. It is noteworthy that (sustainable) biomass including waste resources can also become part of the hydrogen economy through specific technologies/processes (e.g., thermochemical approaches, gasification, reformation, pyrolysis) [14,15].

**Citation:** Adisorn, T.; Venjakob, M.; Pössinger, J.; Ersoy, S.R.; Wagner, O.; Moser, R. Implications of the Interrelations between the (Waste)Water Sector and Hydrogen Production for Arid Countries Using the Example of Jordan. *Sustainability* **2023**, *15*, 5447. https://doi.org/10.3390/su15065447

Academic Editors: Oz Sahin and Russell Richards

Received: 3 February 2023 Revised: 10 March 2023 Accepted: 15 March 2023 Published: 20 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

As electrolyzers run on electricity, it is important to focus on electricity-related CO2 emissions and, thus, on how the electricity is produced. If the electricity comes from the grid, H2O-based hydrogen is labeled yellow; if it comes from nuclear power stations, pink. If the electricity is from renewables, electrolysis products are labeled green [7,12]. The three main types of electrolyzer technologies differ, for instance, in technological maturity, costs, energy inputs, and efficiencies [8,13,14,16].

One of hydrogen's great advantages is that it can be used in a variety of ways: in stationary or mobile fuel cells for producing electricity (and heat), to store excess electricity, or to manufacture steel [7,14,17–20]. Further processing hydrogen into other products (e.g., methane, kerosene, diesel, methanol, or ammonia) requires additional processes and technologies [7,18,21,22].

This flexible applicability, in combination with its potential to reduce emissions, creates large interest in hydrogen both in sectors that traditionally use it (e.g., chemistry, refineries) and in new sectors, such as the steel industry and transport, including air traffic. The growing interest of these sectors increases the expected global demand for hydrogen [5,12,20,23,24]. For them, low-carbon hydrogen is seen as a central option for becoming climate-neutral [7,22,25]. Moreover, in the energy sector, natural gas combustion plants can be retrofitted to use hydrogen for electricity generation. There are also proponents arguing for using hydrogen in other sectors such as heating in buildings [26]; however, direct electrification is almost always the (much) more efficient option [27,28]. Given this new demand for hydrogen, it is important that hydrogen production pathways help to tackle what Johan Rockström calls our planetary crisis [5,7,29].

Germany, for instance, will not be able to produce sufficient amounts of hydrogen on its own to serve domestic demand; hence, the country will rely on imports [5,7,13]. A key limiting factor in Germany (and other countries) is space restrictions for installing sufficient renewable energy capacity, in combination with less favorable renewable potentials [7]. However, there are parts of the world where such constraints do not apply, including countries in the Middle East and North Africa (MENA), a region that "receives 22–26% of all solar energy striking the earth" [30]. The MENA countries can be considered attractive partners for Europe and Germany given their proximity, so that imports via ships or even pipelines are possible at (rather) lower costs, while also offering opportunities for socio-economic development in the MENA countries [31].

However, despite the abundant potential of renewable energy resources in the MENA region, there is a substantial resource constraint: water. Research to address the immense water demand for green hydrogen production in water-scarce, arid regions is very limited. Potential hydrogen importing countries such as Germany do not have this problem (so far) and often underestimate the situation in potential exporting countries [32]. Jordan, for instance, belongs to those countries with the highest water stress in the world [33,34]. Structural issues such as poor planning as well as pressing dynamics including the intake of refugees and climate change exacerbate the situation [33,35,36]. Still, discussions about a future hydrogen economy in Jordan are taking off [37–39].

In the German context especially, energy-related challenges and potentials associated with hydrogen production have often been discussed from various angles [5,8,18,19,40–45]. For instance, hydrogen production and further downstream processes are energy-intensive and, thus, associated with energy losses [19]. In contrast, the debate on environmental impacts and the role of water is less advanced, even though more and more publications focus on socio-ecological concerns. They aim to inform about unintended side effects that might occur, especially if hydrogen production is triggered by external actors, including other European countries [7,46,47]. There are niche concerns about a global hydrogen uptake in general, and hydrogen leakage in particular, which would reduce the availability of hydroxyl radicals in the atmosphere and increase the lifespan of atmospheric methane [48,49]. Studies addressing water in the context of hydrogen often focus on water demand issues for electrolyzers [50–54] or other hydrogen processes [55–57]. The role of wastewater has been touched upon only very recently, especially by research from Australia [55,58–60]. However, German research has also investigated different options for hydrogen production at wastewater treatment plants (WWTPs) [61–64]. Very often, the question is then raised of how electrolysis-based oxygen can be used in wastewater treatment processes to also reduce the overall OPEX [58,63,64]. Although limited, just as important are the studies that investigate the water demand for equipment production and for the operation of auxiliary technologies, such as photovoltaics, to produce the electricity that runs electrolyzers or similar technologies [65,66]. Apart from the water needs for hydrogen production, there are studies on water recovery, mostly when hydrogen is used in fuel cells [67–70]. It is noteworthy that Germany's Water Strategy acknowledges the impact of hydrogen on water resources, seeking to establish safeguards to prevent negative effects on water [71].

This indicates that there is a limited number of publications on very specific relations between water and hydrogen production and that a systematic overview has been lacking thus far. Hence, the authors seek to systematically bring together and highlight the different relationships between the two sectors. Even though some innovative ideas, such as the use of electrolysis-based oxygen for wastewater treatment, have only been tested in small-scale projects, this overview helped to structure and facilitate the discussion around hydrogen with stakeholders from Jordan and to identify opportunities and challenges associated with hydrogen production in arid country contexts.

The paper builds on the assumption that relevant relationships exist between the water and hydrogen sectors. It is generally assumed that water is used as the basis for green hydrogen. It can also be assumed that in arid countries there may be complications with hydrogen production, as fresh water is needed.

After this introduction and an insight into our methodology (Section 2), the first part of our results section (Section 3.1) compiles and structures the various threads of water-related hydrogen research. The second part of our results section (Section 3.2) will then tilt towards the case of Jordan, where we discuss the different connections between the water and wastewater sector on the one hand and the hydrogen sector on the other hand. Finally, a discussion (Section 4) is followed by concluding remarks (Section 5).

This paper is based on the GJWHD project (German-Jordanian Water-Hydrogen-Dialogue), funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) through the Export Initiative Environmental Technologies (EXI) [72]. We wish to express our gratitude for the funding of our project and thank all the stakeholders involved in the GJWHD project. We also acknowledge financial support from Wuppertal Institut für Klima, Umwelt, Energie gGmbH within the funding programme Open Access Publishing.

#### **2. Materials and Methods**

As mentioned previously, this paper seeks initially to unveil the different threads of water-related hydrogen research and identify interconnections between the water and wastewater sector on the one hand and the hydrogen sector on the other hand. These findings will then be discussed for the case of Jordan, where policy makers are interested in developing the topic further despite the country's severe water restrictions.

In order to find answers to the research objectives, we applied two methods: (i) desk research, which was also a central preparatory step for (ii) two expert workshops conducted in Germany and Jordan.

The screening of the existing literature was firstly and predominantly used for identifying the interrelations between water, wastewater, and hydrogen. Secondly, the literature also helped to scope the situation in Jordan for the three individual sectors. Hence, in our search for information, we included scientific articles as well as gray literature. For the German case, the gray literature mostly covered recent or ongoing projects seeking to implement new technologies or processes. Media releases were also factored in, especially to learn about the latest developments in Jordan. Moreover, energy-related research projects funded by Federal Ministries in Germany are listed in a database called ENARGUS, which was also accessed for this project [73].

The screening of the literature, then, helped to structure and organize the two workshops and to identify relevant stakeholders as speakers and participants from the water, wastewater, energy, and hydrogen sectors. The overall aim of both workshops was to deliver knowledge to Jordanian and German stakeholders not only on the intersectoral relations between water, wastewater, and hydrogen but also on country-specific conditions. Within the framework of the German-Jordanian expert workshops, a community-based participatory research approach was chosen and carried out. The goal was to iteratively develop transferable and usable innovations in the water–hydrogen nexus. Experts from both countries were involved. The study design of the workshops is described in detail below.

#### *2.1. Study Design of the Workshops*

Methodologically, the study design followed the 4P framework of Gray et al. [74], as it has been used successfully in other nexus analyses [75]. The four Ps of the framework are purpose, processes, partnerships, and products. The 4P approach draws on frameworks identified in the literature that improve the practice of participatory processes [74]. It captures the essential questions of why, how, who, and what in a structured way.


#### 2.1.1. Purpose

The aim of the workshops was to enable practical networking between knowledge carriers and technology providers from Germany and Jordanian stakeholders, and to identify solution spaces for a sustainable use of water resources in the context of current and future challenges. The overall objective of the project was to transfer experiences from the German wastewater sector to relevant Jordanian stakeholders.

#### 2.1.2. Partnerships

The partnership component, i.e., the selection of the experts involved, required an intensive research process in both countries. On the one hand, the excellent networks of the Friedrich-Ebert-Stiftung in Jordan could be drawn upon, which had already led to the identification of important actors in a previous project. On the other hand, intensive research was required into technical solutions that have proven themselves in municipal practice. To this end, intensive preliminary discussions were held with associations; the German Association of Local Public Utilities was very helpful in identifying solutions that had been tried and tested in practice.

The network of stakeholders included academic, public, and non-profit partners with expertise in energy, water, and wastewater in urban as well as rural areas. Some participants also had expertise in transportation, economics, and policy. Jordanian parliamentarians were also involved in the discussion process at times. The participants were based almost equally in Germany and in Jordan.

#### 2.1.3. Process

Two workshops lasting several days were held in Jordan and in Germany, framed by side events, excursions to technical facilities, and various expert inputs. The workshops themselves were intensively prepared and planned. The inputs and the visits to technical facilities required a detailed schedule. Interpreters were present throughout the program to minimize language barriers.

#### 2.1.4. Products

The technical solutions visited during the excursions and the aspects of the regulatory framework, i.e., the laws and regulations in both countries, dealt with during the discussions can be described as products.

#### *2.2. Organization of the Workshops*

In line with the framework described above, the workshops were organized as follows: The first workshop was held in Wuppertal, Germany, from 26 to 30 September 2022. Participants from Jordan came from the Jordan Valley Authority (JVA) under the Ministry of Water and Irrigation (MWI), the water and wastewater services sector of the Jordanian city of Aqaba, the Ministry of Energy and Mineral Resources (MEMR), and the National Electric Power Company (NEPCO), as well as from EDAMA, a Jordanian business association. The agenda included, for instance, presentations by speakers from the Wuppertal Institute for Climate, Environment and Energy, the German Association of Local Public Utilities, the water supplier of the city of Sonneberg, the Association of Machinery and Plant Engineering (VDMA), the National Organisation Hydrogen and Fuel Cell Technology (NOW), and technology providers such as GRAFORCE. Field trips to wastewater treatment plants and hydrogen production sites were used to illustrate applied knowledge on the nexus.

The second workshop in Amman, Jordan, was held from 24 to 27 October 2022 and was supported by substantial efforts of the Jordanian Office of the Friedrich-Ebert-Stiftung. The delegation from Germany not only included representatives from academia such as the Wuppertal Institute, the Fraunhofer UMSICHT, the University of Applied Sciences in Saarbrücken, SRH University Heidelberg, and the German Institute of Development and Sustainability (IDOS) but also from Lower Saxony's Hydrogen Network and the Municipal Utility of the City of Aschaffenburg. Presentations from Jordanian stakeholders were given, for instance, by JVA, MEMR, NEPCO, EDAMA, the city of Aqaba's water and wastewater supplier, the Ministry of Transportation (MoT), and the Royal Scientific Society (RSS). In addition, researchers from the German-Jordanian University (GJU), the University of Jordan (UJ), and the Jordan University of Science and Technology (JUST) took part. In addition, representatives from the German Gesellschaft für Internationale Zusammenarbeit (GIZ) also participated in the workshop and provided interesting insights into the wastewater situation or the water–energy nexus in the country. In addition to presentations with discussions and a field trip to the WWTP close to the city of Irbid, interactive sessions were conducted based on the 6-3-5 brainwriting method [77]. For these sessions, we asked (i) what impacts need to be realized or avoided in a future Jordanian hydrogen economy and (ii) what steps are necessary to realize a sustainable future hydrogen economy in the country?

Put schematically, the workshop in Germany focused more on the various connections between the three sectors of water, wastewater, and hydrogen, whereas the workshop in Jordan focused on the nexus' implications for Jordan. In practice, however, the participants from Jordan, especially, were asked to reflect upon their home country's situation in discussions immediately after presentations or field trips. Hence, the content of the discussions was very fluid. Given that the stakeholders were deliberately chosen from different sectors, the discussions were rich in content and were seen as an important benefit of the project and workshops.

#### **3. Results**

#### *3.1. Mapping Water- and Wastewater-Related Hydrogen Issues*

#### 3.1.1. Water as an Input for Electrolysis-Based Hydrogen

Hydrogen production through electrolysis (but also other hydrogen processes) requires demineralized water. For instance, the stoichiometric minimum requirement for the generation of 1 kgH2 is 8.92 L of water [10,56,57,78].

However, water quality, cooling demand, and process losses also need to be taken into account. Already in the 2000s, Barbir reflected upon the advantages of a proton exchange membrane electrolyzer (PEMEL) in combination with variable renewable energy sources. Regarding the stoichiometric water consumption, he found that the actual water consumption is about 25% higher due to process losses [51]. Later, Mehmeti et al. assessed the water footprint not only of electrolysis pathways, including a PEMEL and a solid oxide electrolysis cell (SOEC), but also of biomass gasification, reforming, and dark fermentation. Based on the available literature, they found the water consumption for electrolysis to be between 9.1 kg/kgH2 (SOEC) and 18.04 kg/kgH2 (PEMEL), while hydrogen from biomass had a significantly higher water footprint [52]. Others state that it takes up to 30 L of water to produce 1 kg of hydrogen [53,54,58].
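The figures above can be tied together in a small sketch: the stoichiometric minimum of 8.92 L/kgH2 scaled by a process-loss factor (about 1.25 following Barbir); the factor is left as a parameter, since the literature range cited above extends to roughly 30 L/kgH2.

```python
# Feed-water demand for electrolysis: the stoichiometric minimum (8.92 L/kgH2)
# scaled by a process-loss factor (~1.25 per Barbir; higher factors from the
# other sources cited above would be expressed the same way).

STOICHIOMETRIC_L_PER_KG = 8.92

def water_demand_litres(kg_h2, loss_factor=1.25):
    """Litres of feed water for `kg_h2` of hydrogen under the assumed losses."""
    return kg_h2 * STOICHIOMETRIC_L_PER_KG * loss_factor

print(f"{water_demand_litres(1.0):.2f} L of water per kg of H2")  # ≈ 11.15 L
```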

Saulnier et al. reported the water demand of electrolyzers advertised on the market to be between 10 L/kgH2 and 11.1 L/kgH2. Owing to the additional water needed to purify surface or tap water into deionized water, the total water demand was assumed to be approximately 15.5 L/kgH2. In their paper, the authors conducted a thought experiment on how much water would be needed if 20% of the natural gas consumption in the Canadian province of Alberta were substituted with hydrogen on an "equivalent energy basis". They found a daily water demand of 134,000 m<sup>3</sup> for hydrogen through electrolysis, whereas methane reforming would require 114,000 m<sup>3</sup> of water. The authors reflected upon the water situation in Alberta and believed that an expanded hydrogen production would conflict with the agricultural sector, which is responsible for 67% of water consumption in the province. Moreover, in some parts of the region, the availability of surface water is negatively affected by climate change, and permissions for water extraction have been halted since 2007 [79].
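The scale of such a scenario is easy to reproduce. The following minimal sketch uses only the 15.5 L/kgH2 figure and the 134,000 m<sup>3</sup>/day water demand quoted above; the implied hydrogen output is derived here for illustration and is not a value from the cited paper:

```python
# Back-of-the-envelope check of the Alberta thought experiment:
# total water demand of electrolysis assumed at 15.5 L per kg H2
# (direct feed plus purification losses, per Saulnier et al.).
WATER_PER_KG_H2_L = 15.5

def water_demand_m3_per_day(h2_kg_per_day: float) -> float:
    """Daily water demand (m3) for a given electrolysis output (kg H2/day)."""
    return h2_kg_per_day * WATER_PER_KG_H2_L / 1000.0

def h2_output_kg_per_day(water_m3_per_day: float) -> float:
    """Invert: hydrogen output implied by a given daily water demand."""
    return water_m3_per_day * 1000.0 / WATER_PER_KG_H2_L

implied_h2 = h2_output_kg_per_day(134_000)  # m3/day from the Alberta scenario
print(f"Implied hydrogen output: {implied_h2 / 1000:,.0f} t/day")
# roughly 8,600 t of hydrogen per day
```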

A substantially higher water demand for electrolysis-based hydrogen is reported by Coertzen et al. They expect the total water demand to be somewhere between 60 and 95 L/kgH2, assuming, for instance, that 30 to 40 L of water is needed for cooling. Those estimates may increase due to a higher cooling demand over the lifetime of electrolyzer stacks. In a comparative perspective, Coertzen et al. show that, for other "colors" of hydrogen, both the stoichiometric water needs and the total water needs (e.g., for process cooling) are significantly lower for production processes relying on natural gas as a feedstock [55].

Apart from fresh water, seawater may constitute another type of water from which to produce hydrogen. However, state-of-the-art electrolyzers would require auxiliary processes to desalinate seawater. Even though the costs of operating a seawater desalination plant might be negligible, desalination is an energy-intensive process and, based on today's electricity mixes, would increase emissions [50].

In a demonstration project called H2-Mare, several companies, including Siemens Gamesa, Siemens Energy, and ThyssenKrupp, seek to build an offshore wind turbine with an integrated offshore electrolyzer. Green hydrogen will come from a PEMEL due to its quick start-up times. The facility will operate without an external power supply, which is why power consumption is to be reduced as much as possible. This also affects the seawater desalination process needed for purifying the water feedstock [80].

In contrast to H2-Mare, Tong et al. investigate opportunities for electrolysis with low-grade water, including seawater. While seawater is abundantly available, the authors highlight the downsides of desalination and purification processes, which add investment and operational costs to the end product (hydrogen) and which, in the longer run, could be avoided. In their paper, they review the most recent developments in electrode materials and catalysts for electrolysis with saline and low-grade water [81]. Generally, seawater electrolysis has a relatively low technology readiness level, in part due to the problems caused by chloride corrosion [23,82]. However, Chinese researchers announced that they successfully ran a demonstrator for 3200 h [83].

Wastewater treatment plants (WWTPs) are considered to provide opportunities, especially for decentralized hydrogen production in Germany [15]. Given that electrolyzers need highly purified water to protect the system from breakdown, using untreated incoming wastewater will not be an option in the near future. However, according to Jacobs and Yarra Valley Water, treated or recycled wastewater, combined with processes to further purify the (already) treated wastewater, has several advantages. These include consistent water supplies and less competition with domestic or industrial water needs, as recycled water is normally discharged into the environment [58,84]. In Germany, a few electrolysis projects, mostly at the research and development stage, have been realized at WWTPs. As early as 2002/03, a PEMEL was installed at the WWTP Barth together with a PV system to run the electrolyzer. A more recent project was realized at the WWTP in the city of Sonneberg [85]. Apart from Australia and Germany, similar projects have been completed in the U.S. and in Oman [86,87].

#### 3.1.2. Other Feedstocks from the Wastewater Sector for Hydrogen Production

Apart from H2O, other feedstocks exist in the wastewater sector from which hydrogen can be produced. As regards sewage sludge, one can differentiate between thermochemical processes (e.g., pyrolysis, gasification) and biological processes (e.g., dark fermentation, photo-fermentation) to be applied for hydrogen production [64,88]. As of 2019, Liu et al. found that, generally, reaction rates are faster for thermochemical processes, resulting in higher hydrogen yields [60]. Researchers of the Sludge2P project aim at developing a novel process concept in which dried sewage sludge is processed into a product gas and a usable fertilizer. In the process, hydrogen is to be separated from the product gas. The remaining residual gas is used to heat the melting reactor. All process stages are considered to be, in principle, suitable for onsite operation by the WWTP operators; as a result, energy self-sufficiency can be achieved to a large extent [89]. Similar developments on hydrogen generation from wastewater sludge can also be found in Ukraine [90].

Another feedstock can be concentrated ammonium (NH4) resulting from dewatering sewage sludge. Through plasmalysis, the German company GRAFORCE enables the recovery of hydrogen and other gases, including, e.g., nitrogen or CH4, which can then be stored individually. The overall advantage of this pathway is that the water does not have to be purified; however, the concentration of NH4 has to be relatively high. Other products are (waste) heat and water with a low NH4 content. The energy demand is lower than for electrolysis because the nitrogen–hydrogen bond is "looser" than the bonds in water molecules, and there are four hydrogen atoms per ammonium ion compared with two hydrogen atoms per water molecule [91,92].
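The atom-count argument can be made concrete as a hydrogen mass fraction. The short calculation below uses standard molar masses and is purely illustrative; the energy advantage claimed above rests on bond strengths, which this mass balance does not capture:

```python
# Hydrogen mass fraction of ammonium vs. water (standard molar masses).
# Illustrates the feedstock comparison: four H atoms per NH4+ ion
# versus two H atoms per H2O molecule.
M_H, M_N, M_O = 1.008, 14.007, 15.999  # g/mol

m_frac_nh4 = 4 * M_H / (M_N + 4 * M_H)  # NH4+: four hydrogen atoms
m_frac_h2o = 2 * M_H / (M_O + 2 * M_H)  # H2O: two hydrogen atoms

print(f"NH4+: {m_frac_nh4:.1%} hydrogen by mass")  # ~22%
print(f"H2O : {m_frac_h2o:.1%} hydrogen by mass")  # ~11%
```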

Another innovative process is currently being developed at the Fraunhofer Institute for Environmental, Safety, and Energy Technology UMSICHT. Researchers have developed an electrochemical cleaning process for industrial wastewater using diamond electrodes. The novel approach is energy-intensive but is considered interesting for companies that produce electricity onsite, as excess electricity could feed the cleaning process. The cleaning process yields a syngas containing hydrogen with a share of up to 60%. Researchers see an opportunity in applying this highly innovative water cleaning process in certain industries (e.g., in refineries for the desulfurization of petroleum products by hydrogenation) [62].

Another hydrogen-based product is methanol (CH3OH), which was produced at a WWTP in Dinslaken, Germany, in a research project. While biogas is often used in CHPs at German WWTPs to deliver both electricity and heat for relevant processes, the researchers converted biogas from the WWTP through methanol synthesis; the hydrogen was delivered through electrolysis. They assumed that it may make economic sense to produce an energy carrier that is easy to store and transport, particularly in summer, when there is excess grid electricity and minimal heat demand at WWTPs [61,93].

#### 3.1.3. Water Needs for Operating Auxiliary Technologies

For producing hydrogen, and green hydrogen in particular, the technology setup will include not only electrolyzers but also auxiliary technologies, for instance, power generation units running both hydrogen processes and water-related processes (e.g., water desalination, pumping). In a project for the MENA region, researchers collected data on the water demand of renewable energy power plants [65]. For instance, electricity production through solar PV needs water for the frequent cleaning of the modules (from 0.01 m3/MWh up to 0.1 m3/MWh). However, in their study, the authors assume a higher water demand of 0.4 m3/MWh for electricity production by solar PV in the MENA region, as high dust levels in the area would lead to significantly lower efficiencies. Others combine the gross water demand of PV modules (1500 full-load hours per year) with a PEMEL and expect a water demand of 19.1 L/kgH2 [57]. Apart from solar PV, concentrated solar power (CSP) plants also need water for cooling. While CSP plants achieve the highest energy generation, solar energy uses the least water when PV technology is applied. Especially in regions with water scarcity, the implementation of additional solar power plants can lead to further conflicts with other uses of water, such as agriculture [94]. For wind power combined with a PEMEL, a gross water demand of 11.0 L/kgH2 is estimated. Pink hydrogen, based on nuclear electricity, "uses about 270 kg of cooling water per kg of hydrogen" [65]. Given these differences, decision makers should carefully choose between the different electricity generation options.
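The structure of such a gross water figure can be sketched as direct electrolysis feed plus water embodied in the electricity supply. In the hedged sketch below, the specific electricity demand of 53 kWh/kgH2 and the 15.5 L/kgH2 direct feed are illustrative assumptions, not values from the cited studies, so the result does not reproduce the 19.1 L/kgH2 figure exactly:

```python
# Hedged sketch of a gross water footprint for electrolytic hydrogen:
# direct feed water plus indirect water for electricity generation.
# ELECTRICITY_KWH_PER_KG_H2 is an assumed PEMEL system demand.
ELECTRICITY_KWH_PER_KG_H2 = 53.0

def gross_water_l_per_kg_h2(direct_l_per_kg: float,
                            water_m3_per_mwh: float) -> float:
    """Direct feed water plus water embodied in the electricity supply."""
    indirect_l = (ELECTRICITY_KWH_PER_KG_H2 / 1000.0   # kWh -> MWh
                  * water_m3_per_mwh * 1000.0)          # m3  -> L
    return direct_l_per_kg + indirect_l

# PV in a dusty MENA setting (0.4 m3/MWh cleaning water, per the text):
print(gross_water_l_per_kg_h2(15.5, 0.4))  # direct 15.5 L + indirect 21.2 L
```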

#### 3.1.4. The Role of Water in Downstream and Co-Processes

There are several downstream processes that use hydrogen to produce, for example, ammonia or methanol. Certain synthesis processes (e.g., methanization, Fischer–Tropsch synthesis) produce water as a byproduct [65]. However, how this water can be used further remains an open question.

A representative from the HTW Saar (Germany), who developed a process called bio-energy storage (BEST), explained that BEST produces not only synthetic methane, using hydrogen from electrolysis and CO2 from wastewater processes, but also water as a by-product. He argued at the expert workshop that such water could be recovered and fed into the electrolyzers (even though purification would have to take place) [1].

Direct air capture (DAC) is a technology suitable for hydrogen-based products relying on CO2 as an additional feedstock. DAC technology filters CO2 from ambient air, which can then be forwarded to Fischer–Tropsch synthesis or methanol production. While state-of-the-art DAC may need up to 50 t of water/tCO2, recent developments promise that up to 2 t of water can be extracted per tCO2 [7]. Depending on the ambient air and humidity, other researchers point to processes that need approximately 4.7 L/tCO2 [65].

Industrial point sources (including cement plants) could be an alternative source of CO2 through carbon capture and use (CCU), for producing synthetic diesel or kerosene, for instance. However, the water demand is considered to be very high: capturing CO2 at power plants was found to increase water demand by 40% to 90%.

#### 3.1.5. Indirect Water Needs for Equipment Production

Shi, Liao, and Li assessed the impact of hydrogen production on water. Their paper establishes an approach to identify water footprints of hydrogen production from electrolysis, factoring in "the geographical distribution of the footprints along the supply chain" and different types of electricity to run the electrolyzers. The authors find that hydrogen produced with Australian grid electricity has the highest water footprint compared with solar PV and wind. Since the PV panels were assumed to be built in China, with the associated water demand, the largest proportion of water for Australian hydrogen is considered to be used in the Asian country. For grid electricity, the water is consumed locally. For the Australian case, the authors conclude that grid electricity is the less suitable option from a water perspective [66].

#### 3.1.6. End-Uses of Hydrogen and Hydrogen-Related Products

The use of hydrogen, in fact, offers the potential to recover water, e.g., from fuel cells, for which there are mobile and stationary applications. In 2011, researchers recovered from a PEM and a molten carbonate fuel cell "approximately 8% of the theoretical amount of water generated" without any additional condensing system, even though a recovery rate of 40% would be necessary to serve the water needs of a typical U.S. household [68]. Since then, several studies have verified water recovery opportunities, even though recovery rates could theoretically be further improved [70]. Apart from hydrogen fuel cells, there are also fuel cells using methanol (DMFC). Besides methanol and ambient oxygen, water is needed at the anode and produced at the cathode; in total, the water balance is considered positive [95]. Water recovery for drinking purposes has already taken place in aerospace [96].

#### 3.1.7. Benefits of Hydrogen and Its By-Products Applied in the Water and Wastewater Sectors

In water supply services, hydrogen can be used to denitrify groundwater intended for drinking. Fertilizers used in agriculture and transported to plants and soils may pollute waters, which then have to be denitrified. For instance, water pollution with NH3 takes place where groundwater resources—responsible for approximately 70% of Germany's potable water supply—lie below intensively cultivated areas (e.g., for vegetables). 27% of Germany's groundwater bodies exceed the maximum threshold of 50 mg nitrate/L. Such thresholds exist at the EU level because nitrogen can ultimately lead to limited oxygen uptake in infants between three and six months of age [97]. Groundwater resources can be treated biologically through autotrophic denitrification using hydrogen, an alternative to the heterotrophic path mostly applied to eliminate nitrogen [98]. For the autotrophic denitrification of groundwater resources and potable water supply, the municipal public utility in the city of Aschaffenburg, for example, needs approximately 30 t of hydrogen per year. As of today, this is natural gas-based hydrogen, but the utility is planning to use green hydrogen instead [99,100].

Apart from fresh water, wastewaters may include high loads of nitrate. Methanol, a hydrogen-based product, can and is used to denitrify wastewaters [101].

Apart from the role of hydrogen and derivates, oxygen as a by-product of electrolysis can also be used at WWTPs. In Germany, the WWTP in the city of Barth tested the use of pure oxygen in aeration tanks to deal with increased wastewater loads resulting from new camping grounds in the area [63]. In the project LocalHy, a small-scale test-WWTP was set up together with a PEMEL on a site of an operational WWTP in Sonneberg. Again, the focus was on making use of oxygen for wastewater treatment in the biological treatment stage [85]. Biological wastewater treatment consumes substantial electricity as turbo blowers need to blow ambient air into the aeration tank. As ambient air consists only of 21% oxygen, pure electrolysis-based oxygen could substitute for turbo blowers, at least partly. Electrolysis-based oxygen could also be further processed into ozone (O3). In particular, for more advanced WWTP contexts, ozonation allows elimination of very specific pollutants. Ozonation in the context of WWTPs belongs to the so-called fourth treatment stage [102].

The waste heat of the hydrogen production process can also be put to use. For instance, electrolyzers produce waste heat, which students from Sweden analyzed for district heating [103]. In Germany, especially, the sludge digestion process for producing digester gas at WWTPs has a heat demand, which is currently often met by combined heat and power (CHP) [104].

#### 3.1.8. Water-Related Impacts

In order to mitigate the impact of green hydrogen production, the German Advisory Council on the Environment recommends safeguards so that hydrogen production does not compete with other sectors such as agriculture/food security or with the well-being of the local population. Water-related safeguards could include that electrolyzers must not be built in areas with decreasing (ground)water levels and must not negatively affect local water supplies. As regards seawater desalination, the authors point to the risk that saline brine may destroy coastal and maritime ecosystems and biodiversity if returned to the sea without further measures [7]. Moreover, chemicals and metals may be present in the discharge stream of desalination plants. Altgelt et al. point to zero-liquid-discharge technologies that add only little cost [50].

According to the World Bank, hydrogen production will have to be accompanied by infrastructure works, depending on how and where hydrogen is produced and consumed. Whether new roads, pipelines, or terminals are to be developed needs careful consideration, factoring in environmental and social impacts, including water-related effects. With respect to water, it is essential to analyze the impacts of large(r)-scale hydrogen production on water availability and additional water infrastructure needs. This may include the modernization of the existing water network as well as its expansion. The authors also acknowledge that such additions come at a cost, and it needs to be clarified who pays for such investments [53]. Potentially, hydrogen production could result in an overall improvement of water supply in a region if, for instance, inefficiencies (leaks) are tackled [7].

Depending on the energy situation of countries, the German Advisory Council on the Environment notes that water consumption could even decrease if conventional energy production and processing is substituted by green hydrogen production. In this respect, the authors mention the high water demand of coal and gas extraction and power plants [7]. Saulnier et al. also refer to factoring in water savings resulting from demand reductions in other sectors [79]. However, one needs to scrutinize the local peculiarities as water savings in one region are not automatically beneficial to other regions due to the distributional characteristics of water resources. Figure 1 summarizes the water needs and role of hydrogen and related products in the water and wastewater sector.

**Figure 1.** Schematic overview of the water–hydrogen nexus focusing on water needs (black arrows) and the role of hydrogen and related products in the water and wastewater sector (blue bold arrow).

#### *3.2. Transferring Results to the Case of Jordan*

Renewable water resources, which include "groundwater aquifers and surface water like rivers and lakes" [105], amounted to 937 million m3/year in 2014. Groundwater reserves totaled 540 million m3, distributed among twelve aquifers, of which the Disi aquifer is the largest. While the Jafer aquifer has both renewable and non-renewable water resources, key renewable groundwater resources are mainly located in the Yarmouk, Amman-Zarqa, and Dead Sea basins. Even though their safe yield is 275.5 million m3/year, static groundwater levels drop by between 1 and 20 m annually. Water scarcity is and will remain a challenge for the economic development of the country. Intense drought events were registered, for instance, in the years 2005, 2007, 2008, 2010, and 2011 [34]. Water stress is considered to have increased due to the influx of refugees, while so-called non-revenue water (NRW) has remained a problem for years. NRW is water that is not billed, either because it is lost through leakages/inefficient water networks or due to illegal connections [106]. In the end, feedback from the expert workshops was that fresh water resources must definitely not be used for hydrogen production.

Water use in Jordan is met by groundwater (52%), surface water (30%), and wastewater (17%). The dominant users are the agricultural sector (51%) and households (45%), followed by industry (4%). While being the major water consumer, agriculture only generates 3% to 4% of Jordan's GDP. The overwhelming share of treated wastewater is directed to the agricultural sector, even though the sector also draws substantial resources from groundwater and surface water [107,108]. As approximately 90% of treated wastewater is already reused, using this type of unconventional water resource for hydrogen production appears problematic due to tradeoffs with agricultural production. However, Jordan's population is growing and expected to rise by 24% by 2040, from 9.5 million to almost 12 million people [109], and the share of people connected to the sewage system is intended to increase from 63% to 80% between 2014 and 2030 [106]. These prospects would increase wastewater loads and, hence, the amount of additional recycled water. This could be an opportunity to investigate the future use of recycled wastewater for electrolyzers, as well as of oxygen, in Jordanian WWTPs.

Hence, the option to use desalinated water was brought up by participants of the expert workshops. In fact, Jordan has already initiated plans together with USAID for a large-scale desalination plant at the Gulf of Aqaba. The project is expected to produce approximately 300 million m<sup>3</sup> of desalinated drinking water per year, of which 250 million m3 will be supplied to Amman and other regions. The use of the remaining 50 million m3 is still to be decided, or it can be sold by the operator. The build-up of renewable energies for powering the plant, which will likely also require additional water, will have to be considered by the developer [110,111]. If used completely for the purposes of electrolysis, substantial amounts of hydrogen could be produced. However, participants voiced concerns that 250 million m3 of additional drinking water may not be enough to meet even today's demand sustainably. In the end, hydrogen production may fuel water conflicts.
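To gauge what "substantial amounts" means, the following rough scale check assumes, purely for illustration, that the uncommitted 50 million m<sup>3</sup>/year were fed to electrolyzers; the 15.5 L/kgH2 and 53 kWh/kgH2 figures are assumptions carried over from the literature discussed in Section 3.1:

```python
# Rough scale check (illustrative assumptions): hydrogen potential of
# 50 million m3/year of desalinated water and the electricity required.
WATER_PER_KG_H2_L = 15.5          # total water demand, L per kg H2
ELECTRICITY_KWH_PER_KG_H2 = 53.0  # assumed PEMEL system demand

water_l_per_year = 50e6 * 1000    # 50 million m3/year -> litres/year
h2_kg = water_l_per_year / WATER_PER_KG_H2_L
electricity_twh = h2_kg * ELECTRICITY_KWH_PER_KG_H2 / 1e9  # kWh -> TWh

print(f"Hydrogen: {h2_kg / 1e9:.1f} Mt/year")          # ≈ 3.2 Mt
print(f"Electricity: {electricity_twh:.0f} TWh/year")  # ≈ 171 TWh
```

Under these assumptions, the electricity requirement would dwarf Jordan's current generation (the text below notes almost 16,000 GWh from natural gas alone), suggesting that the binding constraint for such a scenario would be power supply rather than desalinated water.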

The application of alternative pathways to hydrogen production through feedstocks provided by WWTPs (sludge, NH4) was further discussed. For instance, even for (rather low-tech) CHPs at Jordanian WWTPs, it is difficult to have service personnel or operators repair the respective plants in time. This example, which can be transferred to all types of hydrogen processes, stresses the importance of having operation and maintenance staff trained to safeguard continuous hydrogen production flows. Furthermore, 29 Jordanian WWTPs produce approximately 150,000 m<sup>3</sup> of semi-dry sludge and 357,000 m<sup>3</sup> of liquid sludge annually. According to GIZ, most of this sludge is stored onsite or is transported to unsanitary landfills, which, in turn, not only produce emissions but also become a problem for groundwater resources [112]. To what extent hydrogen produced from sewage sludge could offer a solution to this problem deserves attention.

As regards the production of sufficient electricity, Jordan is home to solar-PV module producers. Given that Shi et al. voice concerns over the water consumption associated with the production of solar-PV modules [66], new demand for renewable energies could also result in additional water demand by the solar-PV industry in Jordan. However, the experts argued that a substantial share of PV panels is imported, so that the domestic water demand for auxiliary technologies is and will remain limited.

The application of hydrogen was discussed most concretely for the industry and energy sectors. Saffouri (2022) explained that ammonia is the 78th most imported product in Jordan and that the country is the 27th largest importer of ammonia; in 2020, ammonia worth USD 56 million was imported. Domestic production of ammonia would reduce imports and increase domestic value creation [113]. In the energy sector, hydrogen could be an option to store excess electricity, offering an opportunity to further expand renewables without curtailment [114]. However, current projects to increase electricity system flexibility focus on battery energy storage and pumped storage facilities [115]. Even though Jordan produced almost 16,000 GWh of electricity from natural gas [116], the retrofitting of the existing plants has not yet been considered in Jordan.

Regarding the next steps for a future hydrogen economy in Jordan, stakeholders highlighted the role of both pilot projects and capacity development. Given the good research conditions, including at GJU, UJ, and JUST, pilot projects at universities would help researchers and students to gain hands-on experience with the technologies.

#### **4. Discussion**

In the first part of our results section, we mapped the various connections between the water and wastewater sector on the one side and the hydrogen sector on the other. Even though this may be obvious to several researchers, we identified three different types of water to be used for electrolyzers: freshwater, seawater, and treated wastewater. Since most of the research takes a rather narrow view on water, focusing only on one or two types, e.g., [50,53], we widened the perspective, which is also relevant for policy makers strategically thinking about a future hydrogen economy and reflecting upon the different water resources that can potentially be used in electrolysis. However, it needs to be acknowledged that the different types of water may result in different hydrogen structures. For instance, while a desalination plant is a more centralized way to produce water for electrolysis, WWTPs are normally organized decentrally. Hence, a decision on the water resources to be used has implications for the hydrogen structures to be developed.

We found large variations regarding the amount of water needed for producing 1 kgH2, depending also on how broadly the technology system is framed. While some only focus on the stoichiometric minimum of H2O, others differentiate between types of electrolysis and extend the system of analysis to cooling needs and the construction and operation of auxiliary technologies, e.g., [55,78]. In a water-scarce context such as Jordan, the analysis should cover the impacts on the national water situation as holistically as possible to identify all risks arising from hydrogen production.

These risks in arid countries include, for example, the conflicting goals of different sustainability approaches. With regard to a hydrogen economy to be established, aspects of social, ecological, and economic sustainability would have to be taken into account. For example, it would be important for a socially sustainable hydrogen economy that the population's water supply is not negatively affected at any time. Accordingly, it would be relevant for an ecologically sustainable solution that the environment does not suffer from the water requirements of the electrolysis processes. Last but not least, the development of a hydrogen industry must be profitable so that it can also be economically sustainable. The national water situation should be analyzed such that a functioning green hydrogen economy can be expected to produce multi-layered, sustainable benefits.

One approach to prevent the mentioned conflicts would be to investigate the possibilities of WWTPs in more detail. Even though decentrally structured in most country contexts, WWTPs deserve special attention because, first, they can provide different feedstocks for hydrogen, for which different processes need to be considered [58,60]. Second, they can also use hydrogen-based products, such as methanol or electrolysis-based oxygen, for wastewater treatment [85,101]. Such opportunities or co-benefits of decentralized hydrogen production should be taken into account when strategically planning a hydrogen economy. A driver for hydrogen production at WWTPs in Jordan could be the potential mitigation of challenges associated with sewage sludge disposal.

Opportunities for water recovery exist for different processes for the production and use of hydrogen or derivates [1,55,65]. Water can even be recovered from fuel cells [68,117]. Project planning in water-scarce contexts should pay particular attention to avoid leaks and inefficient water uses and consider water recovery where technically and economically feasible.

Hydrogen production and use can also help to substitute other forms of energy generation or consumption [79], which can be taken into account in a broader analysis, for instance, when fossil fuel extraction or use is to be substituted by green hydrogen. However, even then, a careful analysis of the hydrological situation in different areas is mandatory.

The results of this research are limited, on the one hand, by the fact that part of the chosen approach is a literature review. Here, the selected literature regarding the interaction of water and hydrogen was considered the core literature. Thus, the data basis was narrow, and there is a possibility that important information in the literature was overlooked during the research. Additional literature could, for example, provide further perspectives on the research question.

On the other hand, the format of the workshop represents a limitation of the research results. It should be noted here that the number of participants representing opinions and interests in the workshop was limited. Even though a multistakeholder approach was chosen to cover many topics, there is a possibility that geographical differences and demands, for example, were not sufficiently taken into account. Furthermore, when workshops of this type are held, there is a risk that individual contributions may be lost in the volume of information. To make the approach of conducting a workshop more representative, it would be useful to accompany the method with a quantitative survey. Another limitation of the workshop is the interest of the stakeholders. Here, interests from the technical and political fields are mainly represented. For extended research, it would be interesting to invite stakeholders with other interests (for example, primarily socio-ecological) to broaden the perspectives on the research question.

As regards the future research direction, we welcome feedback from other researchers to our concept on the water–hydrogen nexus. In fact, we provided an overview of the interplay between the sectors based on the existing literature, which was helpful to structure our expert workshops. In-depth and semi-structured interviews with planners of hydrogen projects and relevant stakeholders from the water sector could further provide insights into project realities.

Given that hydrogen technologies are mostly developed by manufacturers from industrialized countries, research apparently focuses mostly on the energy inputs. However, since the challenges related to energy are expected to be outsourced to countries with good conditions for renewable electricity but questionable water situations, water-sensitive research and development of the respective technologies and processes, together with innovative project planning, need to become a key theme.

Even though the role of WWTPs for hydrogen production depends on the strategy for a green economy, such infrastructures may deserve further attention. Innovative processes at WWTPs could contribute to local value creation and green jobs in local or decentralized areas. Furthermore, hydrogen and related products could contribute to improving wastewater treatment processes.

Our project has also sought to facilitate dialogue between experts of the energy and water sectors. For developing either a nationally or export-oriented hydrogen economy, it appears essential to develop a common vision, identify challenges from both sector perspectives, and work out solutions with minimal tradeoffs. In doing so, strategies and policies can be derived by policy makers from the water *and* the energy sector. This will most likely have a positive impact on acceptance by the population. The question of how such a process is to be initiated likely depends on country contexts and the stakeholders involved but needs proactive engagement from policy makers and practitioners, as well as applied research.

#### **5. Conclusions**

By presenting the interrelations between hydrogen and water in this article, we aim to begin filling the gap in systematic research in this field and to highlight the importance of water as a resource in potential hydrogen-exporting countries. We have focused mainly on the conditions of Jordan; however, water scarcity issues can be expected to play an increasingly important role in other (arid) countries as well.

First, we examined the different types of water that can be used for hydrogen production through electrolysis. These include higher-quality potable water, seawater, and recycled wastewater. WWTPs offer further feedstocks for hydrogen production, even though the use of non-H2O feedstocks would not qualify the product as green hydrogen. Water-sensitive hydrogen planning should also factor in the water needs of both auxiliary technologies (e.g., solar PV, desalination) and downstream or co-processes (e.g., synthesis, DAC). For hydrogen uses (e.g., in fuel cells), this can include technology or process operation and opportunities to recirculate water and to close water leaks in water-scarce contexts. Close attention should be paid to developing and enforcing water-related safeguards that help to avoid water conflicts and overuse induced by hydrogen production, as well as losses in marine biodiversity, as in the case of desalination plants.

The concept of the water–hydrogen nexus has been developed to structure the dialogue of the project called the German-Jordanian Water-Hydrogen-Dialogue. As Jordanian stakeholders discuss opportunities for future hydrogen production, a massive opportunity lies in the desalination plant to be commissioned by the end of the 2020s. Recycled water from WWTPs is currently used for agricultural purposes, so hydrogen production from treated water would also create tradeoffs if wastewater loads remain constant. However, since the population will likely grow and more people are expected to be connected to the sewage system, the amount of wastewater will likely increase. Given the challenging country conditions, a dialogue between relevant stakeholders of the water and energy sectors should be initiated on the topic of hydrogen. Pilot projects would help to develop human capacities for a green hydrogen economy.

**Supplementary Materials:** A brochure with further information on the German-Jordanian Water-Hydrogen-Dialogue project will be available at www.wupperinst.org.

**Author Contributions:** Conceptualization, T.A.; methodology, T.A. and M.V.; validation, T.A., J.P. and M.V.; formal analysis, T.A.; investigation, T.A., M.V., S.R.E., R.M. and J.P.; writing—original draft preparation, T.A.; writing—review and editing, M.V. and J.P.; visualization, T.A.; supervision, T.A.; project administration, T.A. and M.V.; funding acquisition, T.A., M.V. and O.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV), grant number 67EXI5503A. We acknowledge financial support by Wuppertal Institut für Klima, Umwelt, Energie gGmbH within the funding programme Open Access Publishing.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** The authors confirm that the data supporting the findings of this study are available within the article and its Supplementary Materials.

**Acknowledgments:** We would like to thank the National Organization for Hydrogen and Fuel Cell Technologies for exchanging ideas on the subject. We would also like to express our thanks to the speakers and participants in our expert workshops, whose engaging presentations and fruitful discussions brought the project to life. Moreover, the Jordanian Office of the Friedrich-Ebert-Foundation as well as the German Association of Local Municipalities made great efforts to support our workshops.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## *Article* **Thermal Comfort Analysis Using System Dynamics Modeling—A Sustainable Scenario Proposition for Low-Income Housing in Brazil**

**Cylon Liaw 1,\*, Vitória Elisa da Silva 2, Rebecca Maduro 1, Milena Megrè 1, Julio Cesar de Souza Inácio Gonçalves 2, Edmilson Moutinho dos Santos 1 and Dominique Mouette 3**


**Abstract:** As a riveting example of social housing in Brazil, the Minha Casa Minha Vida program was launched in 2009 to reduce the 6-million-home housing deficit by offering affordable dwellings to low-income families. However, thermal discomfort complaints are recurrent among dwellers, especially in the Baltimore Residential sample in Uberlândia City. To avoid the negative effects of energy poverty, such as family budget constraints from the purchase of electric appliances and extra costs from power consumption, a simulation based on system dynamics modeling evaluates a natural ventilation strategy combining sustainable and energy-efficient materials (a tilting window with up to 100% opening, green tempered glass, and an expanded polystyrene wall) to observe the internal room temperature variation over time. The scenario with a 50% window opening ratio combined with a 3 mm regular glass window and a 12.5 cm rectangular 8-hole brick wall presents the highest internal room temperature held during the entire period. From the worst to the best-case scenario, a substantial reduction in the peak temperature was observed from window size variation, demonstrating that natural ventilation and constructive elements of low complexity and wide market availability contribute to the thermal comfort of residential rooms.

**Keywords:** system dynamics; thermal comfort; Minha Casa Minha Vida; natural ventilation; bioclimatic architecture; social housing; energy poverty

#### **1. Introduction**

In Brazil, energy consumption in the residential sector relies on electricity, wood, and liquefied petroleum gas (LPG), with a growing use of natural gas, according to the Brazilian Energy Balance—2020 report [1]. By 2020, this sector consumed 10.8% of the total energy supply in Brazil, ahead of only the agribusiness and services sectors. Figure 1 illustrates how these energy resources have changed slightly over the last decade, with a strong renewable energy share (67%) due to electricity generation based on renewable resources such as hydro, solar, wind, and biomass.

Moreover, due to soaring LPG prices between 2016 and 2019, wood became a direct substitute for this fossil fuel in cooking, especially among low-income families. Furthermore, with a global economy aggravated by the COVID-19 pandemic's resonating effects, wood consumption is expected to maintain this upward trend until Brazilian families' purchasing power is regained. Brazil's Energy Research Office (EPE) had earlier pointed in this direction, reporting 1.8% growth in wood use in 2020 compared to the previous year [1]. Additionally, a rise in greenhouse gas (GHG) emissions related to this expansion is likely. In Brazil, anthropogenic CO2 emissions associated with the residential sector reached 19.4 Mt CO2-eq in 2020 [1]. Though it represented close to 5%

**Citation:** Liaw, C.; da Silva, V.E.; Maduro, R.; Megrè, M.; de Souza Inácio Gonçalves, J.C.; Santos, E.M.d.; Mouette, D. Thermal Comfort Analysis Using System Dynamics Modeling—A Sustainable Scenario Proposition for Low-Income Housing in Brazil. *Sustainability* **2023**, *15*, 5831. https://doi.org/10.3390/ su15075831

Academic Editors: Oz Sahin and Russell Richards

Received: 29 January 2023 Revised: 9 March 2023 Accepted: 15 March 2023 Published: 28 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

of Brazil's total emissions in the corresponding year, we highlight that the country held 12th place in the global emission ranking [2]; in other words, even a relatively small percentage represents a robust addition to the global warming threat and is therefore a volume not to be ignored.

Although subject to successive price rises during the same period, electricity consumption likewise kept increasing. An observed 4.05% growth between 2019 and 2020 resulted from a combination of factors linked to the COVID-19 pandemic, such as higher absence from work, extended home-office conditions, federal financial aid, and home appliance acquisition [1]. The latter creates demand for greater convenience associated with energy efficiency measures, as indicated by Lamberts et al. [3], especially for thermal comfort as a determinant of human well-being during prolonged indoor stays [3,4].

Throughout history, humanity has sought comfort, protection, and safety, with modern societies ending up primarily in urban concentrations and spending most of their day indoors. Nevertheless, according to Felix et al. [5], when faced with a poorly designed environment, the consequences for humans living under such conditions could represent, for example, loss of productivity and health problems due to inadequate indoor air quality and lack of thermal comfort.

Thermal comfort encompasses a whole range of data types and collection methods [4], including physiological and psychological aspects and ambient factors [6], whether from active and controlled technologies (ventilation, heating, and air conditioning), architecture (materials, building design, and landscaping), or even activities and clothing, as shown in Daghigh's [7] and Mallick's [8] articles. In essence, each residence presents a particular mix adapted to the regional climate and the alternatives available in the market.

The role of climate in defining the conditions and thermal performance of indoor environments has become more relevant. A study carried out in Portugal in 2017 [9] identified a high mortality rate related to exposure to excessive cold. This result signaled the impact of energy poverty during a certain season of the year, the urgency of proposing solutions for this affected layer of society, and the pressure put on the income of these families. Simões and Leder [10] describe energy poverty as a hurdle to accessing energy services that often threatens families, for instance in tropical climates, with exposure to indoor overheating due to the lack of cooling solutions.

Additionally, Almeida et al. [11] present important findings about the particular discussion related to families' income versus the necessity to improve thermal comfort in new homes, while observing the impact on carbon emissions. This time, research conducted with a family building in Porto, Portugal, explored the relationship between the embodied carbon emissions and embodied energy of the materials used, showing a mutual increase associated with each renovation package to this building [11]. The results show that while the use of renewable energy can be positive in terms of reducing the use of non-renewable primary energy, the embodied energy and implicit life-cycle emissions associated with the materials and processes applied to building solutions can potentially harm their sustainability [11]. This relationship between social housing and thermal comfort issues was further developed in [12–17].

In this sense, natural ventilation is an effective option that relies on natural airflow to ensure appropriate thermal conditioning of the environment, providing favorable comfort conditions to occupants and improved indoor air quality with no pressure on income [18]. However, over the last decades, buildings have adopted new technologies to provide such comfort, often depending on electric appliances, and both natural ventilation and lighting strategies were gradually put aside [18]. This can increase energy consumption for families who are often unable to pay for the energy needed to maintain a comfortable standard of living at home [19].

Regarding the home appliances used in Brazil for this purpose, a survey on the ownership and usage habits of electrical equipment in the residential class [20] indicated that almost 76% of the sample kept a fan or air circulator at home, whereas approximately 17% owned an air conditioner [20]. Additionally, according to this survey conducted between 2018 and 2019, air conditioners were mostly owned by the highest income strata, while other appliances were equally distributed among all classes [20]. The energy used for thermal comfort in buildings follows a rising trend not only in Brazil but also worldwide: between 1990 and 2016, the power demand for space cooling more than tripled [21].

With these findings, this paper sheds light on thermal comfort as a lingering issue in social housing in Brazil, considering that families with financial constraints can barely afford more efficient cooling appliances, such as air conditioners, and cannot absorb a higher electricity bill within a narrow budget (energy poverty) [10].

As a riveting example of social housing, the Minha Casa Minha Vida (MCMV) program launched in 2009 represented the Brazilian national push to diminish a 6-million-home housing deficit, equivalent to 10.2% of the total privately owned houses, by offering affordable dwellings to low-income families. However, Fundação João Pinheiro's report [22] showed flaws in the program's family selection process, which focused on income while putting aside other relevant preconditions, such as existing precarious housing, cohabitation of families, and excessive financial burden from renting [22]. For this reason, the MCMV program arguably did little to reduce the housing deficit in absolute terms a decade later, as this figure remained close to its initial level and has been stable since 2016, though the deficit did decrease in relative terms (dropping from 10.2% to 8%), as indicated by the official MCMV assessment report [23].

To enable such an ambitious housing program, the maximum number of residences had to be built with a limited volume of capital. Unsurprisingly, the low investment allocated to each house resulted in poor construction quality and weak energy efficiency measures [24]. As a consequence of inadequate building design that disregards the peculiarities of bioclimatic zones, the lack of thermal comfort remains a resonating complaint among MCMV residents. Occupants of Baltimore Residential, a condominium located in Uberlândia City, Minas Gerais state (Figure 2), describe the indoor apartments as "tight, hot, and stuffy", as indicated in the post-occupation assessment report [25], due to inefficient airflow often trapped in the center hall and only single-sided ventilation, as seen in Figure 3 [25].

**Figure 2.** Baltimore Residential's location in Uberlândia, Brazil (source: authors' own elaboration on Google My Maps).

**Figure 3.** Airflow in a Baltimore Residential apartment (source: authors' own elaboration based on [25] data and created on SmartDraw website).

Most MCMV homes are "one-size-fits-all" units that share a standardized size, room allocation, and building design, and this rigid structure limits any adaptation to local climate conditions and hence replicates thermal comfort issues previously described [25,26]. For this reason, social housing in Brazil presents no significant architectural pattern differences, leading to extra expenses for further renovation in an attempt to mitigate thermal discomfort [25].

For the above-mentioned reasons, this paper will take a single room from Baltimore Residential to run a system dynamics (SD) model based on the literature review's suggestions on sustainable solutions for thermal comfort improvement [27,28]. The SD model should shed light on the underlying cause-and-effect relationships that could explain why a particular behavior occurred in the first place, simulating the causality between factors through positive and negative feedback [27]. By applying this tool, it is possible to deal with high-order non-linear problems and other complex systems; therefore, it has been used by researchers to investigate, for example, green buildings [29] and construction management [28] promotion strategies.

The goal of the simulation is to verify possible benefits to indoor temperature from a mix of passive design strategies applied to windows, walls, and glazing. For example, modifying the window type and opening directly contributes to suitable ventilation, as opposed to the standard sliding window used in Brazilian social housing projects, which reduces the effective ventilation area by 50% [30].

With these low-cost modifications applied during the construction phase, dwellers are expected to rely on alternative forms of ventilation instead of electric appliances, thus contributing to lower carbon emissions without compromising the family budget. As described by Geels et al. [31], the transition toward a sustainable energy system includes:

"( ... ) major changes in buildings, energy and transportation systems to substantially increase energy efficiency, reduce demand or imply a shift from fossil fuels to renewable supplies. These transitions imply not only technical changes, but also changes in consumer behavior, markets, institutions, infrastructure, business models, and cultural discourses."

Throughout the literature review, thermal comfort in social housing has promoted various discussions regarding its potential solutions to particular climate characteristics. Based on bioclimatic architecture premises, this paper considers the local climate zone to determine the type of adapted architecture with no energy consumption and a low ecological footprint [32]. In this sense, natural ventilation is deemed to provide adequate thermal comfort and indoor air quality, relying on the building design and its interaction with the local environment [33].

Bodach and Hamhaber [24] performed a compelling simulation of cost assessment with an energy efficiency focus on a social housing project located in Rio de Janeiro. The Mangueira project shares Baltimore's building design and, therefore, the same flaws are present in both contexts. The authors presented an economic evaluation highlighting the additional cost of each proposed change and the potential savings from energy-efficient appliances, though they did not simulate to what extent temperature would fall due to these modifications.

Simões et al. [34] pointed out significant shortcomings due to unguided house renovation and expansion, mostly resulting in thermal discomfort and consequently unhealthy living conditions in low-income houses. According to the authors, residents in such an environment are forced to use adaptive strategies to minimize discomfort, leading to energy-intensive alternatives. To avoid such risks and unnecessary spending for these particular units, this paper considers applying the simulated mix of modifications not in a post-occupation phase but in the initial building plan. Echoing Baltimore's complaints, Simões et al. [34] gathered similar opinions in João Pessoa and identified other Brazilian cases of thermal issues in low-income housing in Campina Grande and Pato Branco, but did not carry out any simulation of passive ventilation mechanisms and the related temperature drops.

One of the latest literature reviews focused on the MCMV program covered the entire decade from 2009 to 2019, with Bavaresco et al. [35] collecting relevant information on energy performance in Brazilian social housing. Though neither proposing any solution for thermal comfort dissatisfaction nor running any simulation, the authors identified the most impactful factors for energy efficiency in this matter across a total of 93 national publications matching the predefined research parameters.

The authors researched studies on themes related to thermal comfort, natural ventilation, bioclimatic architecture, social housing, and energy poverty. This research identified a significant foundation on thermal comfort and natural ventilation in tropical regions such as Brazil [10,36], reinforced by articles that consider, in addition to thermal comfort, issues related to social housing [10,37]. Works on the application of natural ventilation techniques in different bioclimatic zones provided guidelines on scenarios involving building configuration, wind direction, and efficiency as important factors for applying natural ventilation solutions [38]. The authors of [39,40], dealing with social housing and energy poverty, focused on cooling solutions in social housing to minimize the thermal discomfort caused by high temperatures and reduce energy poverty in the regions considered. In addition, these articles corroborate other studies on the aforementioned themes, opening the way for new works with solutions better tailored to the bioclimatic zones experienced, allowing their improvement and application for families exposed to such climatic conditions [41].

As mentioned before, natural ventilation and sustainable insulated walls are shown as low-cost alternatives to diminish thermal discomfort. In addition, the authors highlighted that a single solution would not be possible as every locality shows a very unique set of bioclimatic aspects; moreover, given their specificities, even if two or more municipalities belong to the same bioclimatic zone, they may differ in construction aspects and should receive specific guidance [35]. Dorsey et al. [42] led a study on the use of natural building methods and how these could be incorporated into buildings and improve community development, land use planning, and architectural design as well as issues related to climate change. Another example draws attention to the use of straw bales in buildings and their proper adaptation to the official construction code, which could be promoted on a large scale [43]. These relevant discussions may also be replicated in the Brazilian MCMV context to promote low-carbon construction materials.

In conclusion, given the great variety of regional scenarios and their respective bioclimatic zones, much of the existing literature on thermal comfort in low-income housing has focused on describing related issues and listing possible alternatives rather than simulating the potential temperature drops linked to these solutions. This paper proposes a structured method based on SD to test some of the cited alternatives for Baltimore Residential's single rooms in order to increase thermal comfort.

#### **2. Materials and Methods**

The following tasks will be carried out to support social housing programs by providing thermal comfort to residents, with adequate thermal performance and energy efficiency.


This article presents the development of a model capable of predicting the internal thermal sensitivity of an individual room in Baltimore Residential using the SD method, a computer-aided approach to stock and flow diagram strategy development and better decision-making in dynamic and complex systems [44]. In this case, the software Vensim (Ventana Systems, Inc., Salisbury, UK) was used for the simulation exercise, which is based on feedback theory and complements systems thinking approaches.

For the model creation, three steps were essential to evaluating and prioritizing the most suitable projects that are relevant to communities on a social, environmental, and economic level, according to Castrillon-Gomez et al. [45]:

1. Problem identification: how to improve thermal comfort, knowing in advance the constructive elements that influence the thermal energy flow of a certain volume of social coexistence;


Further on, the completion of the stock and flow diagram considers the addition of auxiliary variables, which are explained later in this paper. These values remain the same in all scenarios; only the solar factor of the glass, the wall thermal transmittance, and the room window opening percentage are changed to simulate the behavior of the system over time. The constructive parameters of an individual room in Baltimore Residential are also transcribed into the computational model. After this data collection, the elements of the system are interrelated through mathematical equations. Ultimately, changing the values of the constructive elements of the system produces a variety of scenarios with different indoor thermal sensitivities for the room.

### *2.1. Selection Criteria for Building Materials*

A myriad of building materials is currently available to provide a desirable level of thermal comfort. Nevertheless, they come with a broad range of costs, with top-tier technologies requiring higher expenditure [46]. In the MCMV program, the use of poor-quality materials has contributed to general dissatisfaction [25,35]. Although dealing with a restricted budget, the federal housing program may rely on a better material selection that does not necessarily imply higher costs. In this sense, the following items were chosen from the literature reviewed in the introduction section [24,33–35], especially concerning cost/benefit [11], thermal efficiency [37,38,40], and national availability [10] aspects:


green tempered glass offers much higher protection for a reasonable cost and is considered an optimized solution among the distinct possibilities, especially given the budget restraints of the national program. The simulation with green tempered glass is only carried out when the window is half open, since in this situation the glass participates in the heat flow. When the window is completely open, however, the glass does not take part in the heat flow and does not affect the system.

#### *2.2. Calculation of Natural Ventilation*

According to Lamberts, Dutra, and Pereira [3], natural ventilation through unilateral openings is an energy-efficient method to adjust the thermal comfort of buildings. Thus, it is necessary to understand the climatic conditions of the region, such as wind speed and average temperature. The wind speed, provided by meteorological stations, must be corrected for the height of interest, also as a function of the distance between houses [3]. In addition, the wind pressure coefficients of the region must be observed. From this information, the average corrected wind speed must then be calculated [3], as given by Equation (1):

$$V_{corrected} = V_{average} \times F_t \times H^{F_r} \tag{1}$$

Vcorrected: average corrected wind speed (m/s);

Vaverage: average annual wind speed measured by the weather station (m/s);

H: building ridge height (m);

Ft: topographic factor;

Fr: terrain roughness factor.

The average annual wind speed gauged by the weather station in the Uberlândia region is close to 1.67 m/s [56]. According to Lamberts, Dutra, and Pereira [3], the topographic and terrain roughness factors are, respectively, 0.35 and 0.25. Moreover, as reported by Villa, Saramago, and Garcia, the building ridge height is equivalent to 15 m [25].
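As a quick numerical check, Equation (1) can be evaluated with the published Uberlândia parameters. The snippet below is our illustrative sketch (variable names are ours, not the authors'):

```python
# Equation (1) with the paper's Uberlândia parameters (illustrative sketch).
v_average = 1.67  # m/s, mean annual wind speed at the weather station [56]
f_t = 0.35        # topographic factor [3]
f_r = 0.25        # terrain roughness factor [3]
h = 15.0          # m, building ridge height [25]

v_corrected = v_average * f_t * h ** f_r
print(round(v_corrected, 2))  # ≈ 1.15 m/s
```

The corrected speed of roughly 1.15 m/s then enters the airflow calculation of Equation (2).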

Window opening is pivotal to estimating the corresponding airflow in the room [3]. Depending on the window opening ratio, its useful area may vary; in this case, a 50% or 100% opening is considered. In Baltimore Residential's rooms, only single-sided ventilation is available, which directly limits the amount of air carried into the room. The resulting airflow is given by Equation (2) [3]:

$$Q = 0.025 \times a \times V_{corrected} \tag{2}$$

Q: airflow with natural ventilation in the room (m3/s);

a: opening window useful area (m2);

Vcorrected: average corrected wind speed (m/s).

The living room window has a total area of 1.78 m<sup>2</sup> [25]. To ensure the air quality of a room, a minimum number of air changes per hour is defined. As airflow is given in cubic meters per second, the result is multiplied by 3600 to establish the number of air changes per hour, according to Equation (3) [3]:

$$n = \left(\frac{Q \times 3600}{v}\right) \tag{3}$$

n: number of air changes per hour;

Q: airflow with natural ventilation in the room (m3/s);

v: ventilated room volume (m3).

According to the floor plan of the Baltimore Residential apartment evaluation (Figure 3), the room has an area of 10.95 m<sup>2</sup> and a ceiling height of 2 m; thus, the volume of the room is 21.9 m3 [25]. The average annual temperature of the Uberlândia region is approximately 22.1 °C [57]. The internal room temperature is the target variable, whose values change as the constructive parameters of the environment are modified.
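Equations (2) and (3) can be combined into a small helper. The defaults below are the values reported above (corrected wind speed of about 1.15 m/s, 1.78 m2 window, 21.9 m3 room); the function itself is our illustrative sketch, not the authors' code:

```python
def air_changes(opening_ratio, v_corr=1.15, a_total=1.78, volume=21.9):
    """Equations (2)-(3): airflow (m3/s) and air changes per hour
    for single-sided natural ventilation."""
    a = a_total * opening_ratio   # useful opening area (m2)
    q = 0.025 * a * v_corr        # Equation (2): airflow (m3/s)
    n = q * 3600 / volume         # Equation (3): air changes per hour
    return q, n

# 100% vs. 50% window opening for the Baltimore Residential room:
print(air_changes(1.0))  # fully open window
print(air_changes(0.5))  # half the useful area, hence half the airflow
```

With the window fully open, this yields roughly 8.4 air changes per hour; halving the opening ratio halves both the airflow and the exchange rate, since the relations are linear.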

After determining the number of air exchanges for the room, which depends on the tightness of the air openings, it is important to understand that this exchange will translate into a heat removal in the room. To calculate the thermal load, the sensible heat must be determined, which is related to the temperature difference between the interior and exterior, as shown in Equation (4) [3]:

$$Q_{SE} = m_{air} \times c_{air} \times \left(\frac{n \times v}{3600}\right) \times \Delta T \tag{4}$$

QSE: sensible heat (W);

mair: air density (1.2 kg/m3);

cair: specific heat of the air (1000 J/kg·°C);

n: number of air changes per hour;

v: ventilated room volume (m3);

ΔT: internal and external temperature difference (°C).

Heat flux represents the heat transfer rate through a unit area section of the window and it is uniform (invariant) across the entire area of the window opening as well as the glass. The heat loss through the window area is given by Equation (5) [58]:

$$\Phi = \frac{Q_{SE}}{a} \tag{5}$$

φ: heat flux removed from the room (W/m2);

QSE: sensible heat (W);

a: opening window useful area (m2).
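Equations (4) and (5) then convert the air exchange rate into a heat removal. A minimal sketch follows (our variable names; the 3 °C temperature difference is an illustrative assumption):

```python
def heat_flux_removed(n, delta_t, a, volume=21.9, rho=1.2, c_air=1000.0):
    """Equation (4): sensible heat (W) removed by n air changes per hour;
    Equation (5): the corresponding heat flux per m2 of window opening."""
    q_se = rho * c_air * (n * volume / 3600.0) * delta_t  # W
    phi = q_se / a                                        # W/m2
    return q_se, phi

# Example: 8.41 air changes/h, a 3 °C indoor-outdoor difference, fully open window.
q_se, phi = heat_flux_removed(n=8.41, delta_t=3.0, a=1.78)
```

With these assumed values, the ventilation removes roughly 184 W, i.e., about 103 W per square meter of opening.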

#### *2.3. Calculation of the Internal Thermal Energy of the Room*

To understand which material has better thermal performance, it is necessary to know its thermal transmittance index, i.e., the amount of heat, in watts, that passes through one square meter of wall per degree of temperature difference between its surfaces [47].

In this project, two types of envelopes are simulated: a 12.5 cm 8-hole brick wall and an EPS (expanded polystyrene) monolithic panel. The thermal transmittance of the first material is 2.94 W/m2·°C [3], while that of the second is 0.42 W/m2·°C [59]. From these indexes, it is possible to calculate the heat flux of Baltimore Residential's rooms, as given by Equation (6) [3]:

$$q = U \times \Delta T \tag{6}$$

q: added heat flux in the room (W/m2);

U: thermal transmittance (W/m2·°C);

ΔT: internal and external temperature difference (°C).
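To get a feel for how strongly the envelope choice matters, Equation (6) can be evaluated for both simulated walls (the 3 °C temperature difference here is an illustrative assumption, not a value from the paper):

```python
# Equation (6) for the two simulated envelopes; ΔT = 3 °C is illustrative.
U_BRICK = 2.94  # W/m2·°C, 12.5 cm 8-hole brick wall [3]
U_EPS = 0.42    # W/m2·°C, EPS monolithic panel [59]

delta_t = 3.0
q_brick = U_BRICK * delta_t  # heat flux added through the brick wall (W/m2)
q_eps = U_EPS * delta_t      # heat flux added through the EPS panel (W/m2)
print(q_brick, q_eps)        # the EPS panel admits seven times less heat
```

The sevenfold ratio follows directly from the transmittance indexes (2.94/0.42 = 7) and holds for any temperature difference.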

Equation (6) calculates the heat flux added to the room due to the thermal transmittance of the wall, while Equation (5) calculates the heat flux removed from the room, determined by the number of air changes through the window. The difference between the added and removed heat fluxes results in the internal thermal energy of the room. This calculation is better understood by treating the room as a stock where energy accumulates: starting from the internal energy value at a given time "t", whatever flows in between time t and the next point in time, denoted "t + 1", is added to the stock to obtain its value at the "t + 1" point, as given by Equation (7) [60,61]:

$$E(t) = E(t_0) + \int_{t_0}^{t} \left[ (q \times A) - (\varphi \times a) \right] dt \tag{7}$$

E: internal energy of the room (J);

q: added heat flux in the room (W/m²);

A: area of the wall on which solar radiation falls (m²);

φ: heat flux removed from the room (W/m²);

a: ventilation area of the living room window (m²).

Thus, it is concluded that the room ambiance works as heat storage, where subtracting the outlet flow from the inlet flow yields the internal energy and, from it, the internal temperature of the room, given by Equation (8) [60]:

$$T_{internal} = \frac{E}{c_{air} \times m_{air} \times v} \tag{8}$$

T_internal: internal room temperature (°C); c_air: specific heat of air at constant pressure (1000 J/kg·°C); m_air: air density (1.2 kg/m³); v: ventilated room volume (m³).
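The stock-and-flow logic of Equations (6)–(8) can be sketched with a simple Euler integration. This is a minimal illustration, not the authors' Vensim model: the wall area, room volume, driving temperature series, and time step below are placeholder assumptions.

```python
# Euler integration of the room's energy stock (Equation (7)) and its
# conversion to temperature (Equation (8)). Illustrative sketch only.

C_AIR = 1000.0  # J/kg·°C, specific heat of air at constant pressure
M_AIR = 1.2     # kg/m3, air density (the paper's m_air)

def simulate_room(t_out, phi_out, u_wall, wall_area, vent_area,
                  volume, t_init, dt=60.0):
    """Return the internal temperature (°C) at each time step.

    t_out   -- outdoor temperature per step (°C)
    phi_out -- heat flux removed via the window per step (W/m2), Eq. (5)
    """
    heat_capacity = C_AIR * M_AIR * volume    # J/°C, denominator of Eq. (8)
    energy = t_init * heat_capacity           # initial stock E(t0), in J
    temps = []
    for t_ext, phi in zip(t_out, phi_out):
        t_int = energy / heat_capacity        # Eq. (8)
        temps.append(t_int)
        q = u_wall * (t_ext - t_int)          # Eq. (6), W/m2 through the wall
        energy += (q * wall_area - phi * vent_area) * dt  # Eq. (7), Euler step
    return temps

# Two hours at a constant 30 °C outdoors, no window ventilation: the
# brick-walled room (U = 2.94) warms from 22.1 °C towards 30 °C.
temps = simulate_room([30.0] * 120, [0.0] * 120, u_wall=2.94,
                      wall_area=10.0, vent_area=0.0, volume=30.0,
                      t_init=22.1)
print(round(temps[0], 1), round(temps[-1], 1))
```

Swapping in the EPS panel's U-value (0.42 W/m²·°C) slows the stock's response to the outdoor temperature by roughly a factor of seven, which is the behaviour the balancing loops in Figure 4 capture.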

#### *2.4. Model Construction*

A causal diagram depicts a reduced representation of causal links and mathematical equations to facilitate understanding of the variables' evolution and their interconnected rationale. As can be seen in Figure 4 (made with Vensim PLE, version 9.0.1 x64), there are two balancing (negative feedback) loops (B1/B2), whose function is to dampen the internal thermal energy of the room, which is directly related to the internal room temperature. Loop B1 is related to the heat flux that passes through the wall and window glass, while Loop B2 considers the heat flux removed from the room by the air changes through the window opening.

**Figure 4.** Causal diagram for Baltimore Residential's rooms (source: authors' own elaboration using the Vensim software).

To develop a stock and flow diagram (SFD), four building blocks are used, according to Ahmad et al. [62]: stock, flow, and auxiliary variables, and also a connector. A stock shows an accumulation of any variable; in the case under study, energy is the variable accumulated in the room. The flow is attached to a stock and responsible for increasing or depleting the stock level, with the variable "added heat flux in the room" as the input and the variable "heat flux removed from the room" as the output. Auxiliary variables can be parameters or values calculated from other system variables. All auxiliary variables and their values can be seen in Table 1. Finally, a connector or an arrow denotes the connection and control between the system variables. Ultimately, the four building blocks are displayed within the stock and flow diagram, as seen in Figure 5.


**Table 1.** List of auxiliary variables and their respective values (source: authors' own elaboration).

**Figure 5.** Stock and flow diagram for Baltimore Residential's rooms (source: authors' own elaboration using the Vensim software).

#### **3. Results and Discussion**

The thermal comfort evaluation of indoor environments is commonly performed according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 55 [64], which has been widely explored in the literature [65–71] and defines indoor operative temperatures ranging from 23.0 °C to 26.0 °C (for 35% relative humidity) and from 22.5 °C to 25.5 °C (for 65% relative humidity). In the Uberlândia region, where this simulation was carried out, the relative humidity is above 65% [63]. For analysis purposes, a temperature of 25.5 °C was adopted as the thermally comfortable value, being the maximum of the applicable operative range.

For modelling purposes, a five-month period between October 2021 and February 2022, taken from the National Institute of Meteorology (INMET) database [63], provided temperature data for Uberlândia during the hottest months of the 2021/2022 time frame. Bringing a harsh-condition sample to an extensive but not exhaustive simulation reflects how long thermal discomfort has been affecting Baltimore Residential's occupants, a condition critical to triggering health disorders, especially in the considered period. Moreover, the potential benefits of the building material combinations are also stressed to the fullest.

From the 3000 temperature samples, registered at least twice per day (including at night), it was noticed that 33% remained above ASHRAE's thermal comfort threshold of 25.5 °C. It was therefore reasonable to extract a random day from this selection for the simulation.
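The threshold screening described above amounts to a one-line filter. The sketch below uses synthetic readings standing in for the 3000 INMET samples, which are not reproduced here:

```python
# Share of temperature readings above the ASHRAE comfort threshold.
THRESHOLD_C = 25.5  # °C, upper bound of the 65%-relative-humidity range

def share_above(readings, threshold=THRESHOLD_C):
    """Fraction of readings strictly above the comfort threshold."""
    return sum(1 for t in readings if t > threshold) / len(readings)

# Hypothetical readings for illustration only.
sample = [24.0, 26.2, 25.5, 27.8, 23.1, 30.4]
print(share_above(sample))  # 3 of 6 readings exceed 25.5 °C -> 0.5
```

Applied to the full INMET series, this fraction is the 33% quoted in the text.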

The model also considers the average solar radiation in the region and the outdoor air temperature, obtained from the National Institute of Meteorology (INMET), specifically from the Uberlândia station [63]. In Figure 6, solar radiation is low at the beginning of the day, increases towards noon, and decreases at the end of the afternoon. On this particular day, however, between 10h00 and 15h00 the outdoor temperature kept rising even as solar radiation fell, as also seen in Figure 6. These data are displayed over time, between 06h00 and 18h00, covering the period of solar radiation.

**Figure 6.** Regional solar radiation and outdoor air temperature (source: authors' own elaboration using the Vensim software).

After incorporating the data into the model, the next step requires establishing values for the input variables such as useful ventilation area, glass material, and wall material, which make up each scenario, as shown in Table 2.


**Table 2.** Technical specifications of simulated scenarios (source: authors' own elaboration).

\* Once the window is completely open, the window glass material does not affect the system.

As an outcome of the simulation, once initially set to the average annual temperature of the region (22.1 °C) at 06h00, the internal room temperature variation over time is represented in Figure 7, in which it is possible to identify three local maxima similar to those in Figure 6, since the temperature is directly linked to solar radiation.

With a 50% window opening ratio combined with a 3 mm regular glass window and a 12.5 cm rectangular 8-hole brick wall, this scenario presents the highest internal room temperature held during the entire period, which illustrates the thermal discomfort peak pointed out by Villa, Saramago, and Garcia [25]. Even with the substitution of the window glass, the wall material, or both for allegedly more sustainable and efficient ones, while keeping the 50% window opening ratio, only a small drop in the internal room temperature was observed.

**Figure 7.** Internal room temperature variation over time—scenario performance (source: authors' own elaboration using the Vensim software).

On the other hand, the lowest temperatures over time were achieved by applying a fully open window (100% opening ratio), regardless of its material, with an additional reduction when used alongside an EPS wall. In this context of a wide-open window, it is worth mentioning that the window glass material did not impact the simulation since heat flow and solar radiation had no obstacles whatsoever. It is noticed that, even with the maximum outdoor temperature shown at 15h00, the room remained at a lower temperature.

The generated results in Figure 8 show the amount of time (as a percentage of the full period) during which each scenario exhibited temperature values above 25.5 °C.

**Figure 8.** Percentage of time in which the internal room temperature was above 25.5 °C (source: authors' own elaboration).

Regarding the worst-case scenario, the internal room temperature remains above 25.5 °C for the longest time, equivalent to 80% of the considered interval, peaking at 33.7 °C close to 15h00. The other three scenarios that also include a 50% window opening have lower percentages, with peak-temperature drops of the order of tenths of a degree:

(+) EPS wall: 0.6 °C peak temperature reduction and 79% of the time with an internal room temperature above 25.5 °C;

(+) Green tempered glass: 0.7 °C peak temperature reduction and 79% of the time with an internal room temperature above 25.5 °C;

(+) Green tempered glass and EPS wall: 1.6 °C peak temperature reduction and 78% of the time with an internal room temperature above 25.5 °C.

Indeed, the 100% window opening ratio contributes the greatest sensitivity to the system, exerting a strong influence on heat flux removal as the air change frequency increases. Other elements, such as the solar factor of the glass and the thermal transmittance of the wall, influence the amount of energy added to the environment. As heat struggles to enter the room, it also struggles to leave it, meaning that the impact of both does not significantly alter the internal temperature of the room.

The best scenario encompasses a fully opened window and an EPS wall, with the temperature staying above 25.5 °C for 70% of the time and peaking at 28.2 °C, which is still above the operative range for thermal comfort. However, it is considered that by applying this scenario to the entire residence, it would be possible to offer a thermally comfortable environment.

In addition to these results, the predicted mean vote (PMV) index method from ASHRAE Standard 55-2004 [64] can be applied to determine the thermal comfort of the room. The PMV index measures the level of thermal comfort, as shown in Table 3.


**Table 3.** PMV index scales with thermal perception (source: authors' own elaboration).
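The seven-point ASHRAE sensation scale behind Table 3 can be encoded directly. The labels below follow the standard scale; rounding to the nearest integer scale point is our simplification for illustration:

```python
# ASHRAE seven-point thermal sensation scale used by the PMV index.
PMV_SCALE = {
    -3: "cold",
    -2: "cool",
    -1: "slightly cool",
     0: "neutral",
     1: "slightly warm",
     2: "warm",
     3: "hot",
}

def thermal_perception(pmv: float) -> str:
    """Map a PMV value to its nearest scale label (clamped to the ±3 range)."""
    point = max(-3, min(3, round(pmv)))
    return PMV_SCALE[point]

print(thermal_perception(1.8))   # warm
print(thermal_perception(-0.3))  # neutral
```

Values outside ±3, as reported for the smallest window area, are clamped to the scale's extremes.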

The results of the scenarios taken from the SD model were analyzed according to the ASHRAE 55 PMV thermal comfort method using the CBE Thermal Comfort Tool [72,73], developed at the University of California, Berkeley. Table 4 summarizes the input values of the factors used in the method.

**Table 4.** Parameters used to calculate the thermal comfort indices (source: authors' own elaboration).


The inputs in Table 4 remain the same for all scenarios; only the internal room temperature varies according to each scenario. The peak temperature was used in order to analyze the most extreme situation presented by the SD model; for the PMV analysis using the CBE Thermal Comfort Tool, the operative temperature is therefore equal to the peak temperature, without variation over time.

As the window opening factor makes a compelling contribution to the temperature drop, modifying the window size may bring an additional benefit when combined with the previous construction elements [48,49], following the same rationale presented in Equation (2), in which the airflow volume is directly related to the useful area of the window. According to Uberlândia's Municipal Construction Code [50], each window area should be at least equivalent to 50% of the required illuminated area, which is 1/6 of the total room area. In this case, a minimum window area of 0.9125 m² complies with the local regulation; it is worth mentioning that the previous results were obtained with a window area of 1.78 m².
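The regulation check above reduces to simple arithmetic. In the sketch below, the 10.95 m² room area is back-calculated from the 0.9125 m² minimum quoted in the text, so treat it as an inferred value rather than a figure from the paper:

```python
# Minimum window area under Uberlândia's Municipal Construction Code [50]:
# the window must be at least 50% of the required illuminated area,
# which in turn is 1/6 of the total room area.

def min_window_area(room_area_m2: float) -> float:
    illuminated_area = room_area_m2 / 6.0  # required illuminated area
    return 0.5 * illuminated_area          # window >= 50% of it

ROOM_AREA = 10.95  # m2, inferred from the 0.9125 m2 minimum in the text
print(min_window_area(ROOM_AREA))  # ~0.9125 m2

MCMV_WINDOW = 1.78  # m2, standard window used in the earlier results
print(MCMV_WINDOW >= min_window_area(ROOM_AREA))  # True: complies
```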

A new simulation of the internal room temperature for different window sizes was performed, ranging from 1.00 to 3.00 m² (the minimum window area of 0.9125 m² was rounded up for better comparison). Figure 9 shows the internal room peak temperature for the previous six scenarios across the different window areas.

**Figure 9.** Internal room peak temperature behavior according to the window area variation (source: authors' own elaboration).

Compared to the MCMV standard window size (1.78 m²), it is observed that the smallest window area (1.00 m²) provides the highest peak temperature in the room for all scenarios, as a direct result of less airflow volume and air exchange. In this case, a smaller window can raise the internal room peak temperature by close to 11 °C ("50% Window Opening", "50% Window Opening + EPS", and "50% Window Opening + Tempered Glass + EPS" scenarios). With a 2.00 m² window area, a temperature decrease of up to 3 °C is detected in the "50% Window Opening + EPS" scenario, while the others remain below a 1 °C drop. When a 3.00 m² window area is applied, almost 5 °C is withdrawn from the room temperature, as shown in the "50% Window Opening" and "50% Window Opening + EPS" scenarios.

Both the "100% Window Opening" and "100% Window Opening + EPS" scenarios demonstrated a slight temperature drop for 2.00 m² and 3.00 m² window areas in comparison to the standard MCMV window measure (1.78 m²), whereas the smallest window area added up to 3 °C to the peak temperature. Moreover, it was noticed that as the window area approaches 3.00 m², the internal room peak temperature difference between the simulated scenarios reaches a maximum of 1.37 °C. In conclusion, there is a threshold that limits the benefits of increasing the window area, which decision makers should take into account alongside building material costs and building structure safety. Table 5 contains the PMV results from the CBE Thermal Comfort Tool [73].

Since the smallest window area (1.00 m²) brings the highest internal room temperature for all the simulated scenarios, it is also expected to show a PMV index that translates into thermal discomfort. In this case, a PMV index above 3 (hot) is shown for all four scenarios that include "50% Window Opening", whereas the "100% Window Opening" scenarios stay within the neutral range (close to 0).


**Table 5.** PMV results for each scenario and window area (source: authors' own elaboration).

\* Lowest PMV values among all scenarios.

For the MCMV standard window area (1.78 m²), the analysis results show a PMV index close to Scale 2 (warm), while the most favorable scenario (100% Window Opening + EPS) sits between Scale 0 (neutral) and Scale -1 (slightly cool). As can be seen, a fully opened window performs similarly whether combined with an EPS wall or the brick wall alternative; nevertheless, both scenarios differ markedly from the half-opened-window scenarios, an indication that wide-open windows are relevant to reaching the thermal comfort goal. With a larger window area (2.00 m² and 3.00 m²), a greater airflow volume and more frequent air exchanges are key to improving thermal comfort: the former shows PMV values between 1 (slightly warm) and 0 (neutral), whereas the latter resides mostly between 0 (neutral) and -1 (slightly cool). It is interesting to note that the "50% Window Opening" scenario benefits the most from the adoption of a larger window, which demonstrates the potential gains from window area variation on the internal room temperature.

#### **4. Conclusions**

This research demonstrates that natural ventilation and the use of constructive elements of low complexity and wide market availability (a tilting window with up to 100% opening, green tempered glass, and an EPS wall) can contribute to the thermal comfort of a residential room. From the worst- to the best-case scenario, a substantial reduction in the peak temperature was obtained from window size variation, without any use of electrical equipment such as a fan or air conditioning. In addition, this paper sheds light on solutions, such as natural ventilation and more sustainable and energy-efficient building materials, that incur no power consumption: most electric appliances are not affordable for the low-income population, would increase household electricity costs, and, above all, would elevate anthropogenic greenhouse gas emissions.

Moreover, the fact that a significant decrease in temperature can be obtained just by fully opening the window or modifying its size makes the solution more tangible: given the ease of its implementation, it can be applied even in existing homes through a well-planned renovation. Serving as a baseline to encourage continuous field development in new buildings and renovations [11], it will also avoid any increase in electricity consumption during the operational phase of buildings, considering their entire life cycle [10].

The EPS wall has other advantages in addition to thermal comfort: it is a light material, easy to install, and makes construction more agile; therefore, it is recommended, and its effect on thermal comfort remains positive. The green tempered glass alternative should be analyzed in terms of costs and benefits compared to the tilting window with 100% opening; if its cost is considerably higher, it is worth installing the fully opening window or a larger one.

The effect caused by green tempered glass and EPS wall in hampering the heat entrance is important, but it should be noted that the heat flow exit also depends on the same window opening, thus the prioritized constructive element should be the 100% opening tilting window, which also assures the privacy of residents.

In conclusion, the advancement of constructive technologies and materials used in civil construction significantly contributes to expanding access to more efficient buildings in the thermal field without increasing electrical consumption. The system dynamics method may be further adapted and replicated to other circumstances, considering various locations and different realities among low-income housing in Brazil. Ultimately, managing thermal comfort for a national-size social housing program means adding significant value to the quality of life for millions at a reasonable cost. For policymakers, social housing should not remain only a matter of using low-cost materials but elevating their benefits to the fullest on a sustainable path.

**Author Contributions:** Conceptualization, C.L., V.E.d.S. and R.M.; methodology, V.E.d.S. and J.C.d.S.I.G.; software, V.E.d.S. and J.C.d.S.I.G.; validation, D.M. and J.C.d.S.I.G.; formal analysis, C.L., V.E.d.S. and R.M.; investigation, C.L., V.E.d.S., R.M. and M.M.; resources, C.L., V.E.d.S., R.M. and M.M.; data curation, C.L. and V.E.d.S.; writing—original draft preparation, C.L., V.E.d.S. and R.M.; writing—review and editing, C.L., V.E.d.S. and J.C.d.S.I.G.; visualization, C.L. and V.E.d.S.; supervision, D.M. and J.C.d.S.I.G.; project administration, D.M.; funding acquisition, E.M.d.S. and J.C.d.S.I.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** We gratefully acknowledge the support of the RCGI—Research Centre for Greenhouse Gas Innovation, hosted by the University of São Paulo (USP) and sponsored by FAPESP—São Paulo Research Foundation (2014/50279-4 and 2020/15230-5) and Shell Brasil, and the strategic importance of the support given by ANP (Brazil's National Oil, Natural Gas and Biofuels Agency) through the R&D levy regulation. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001. This work is based upon financial support from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq - 407631/2021-6).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Special recognition to Samantha Maduro, whose valuable architectural experience within the Minha Casa Minha Vida program shed light on sustainable materials and state-ofthe-art practices.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **The Influence of Cultural Factors on Choosing Low-Emission Passenger Cars**

**Ioana Ancuta Iancu 1, Patrick Hendrick 1, Dan Doru Micu 2, Denisa Stet 2, Levente Czumbil <sup>2</sup> and Stefan Dragos Cirstea 3,\***


**Abstract:** The decrease in greenhouse gas emissions by passenger cars is one of the key factors for climate protection measures. Besides EU strategies for low-emission mobility, policy makers must consider the behavioural factors of buyers. This study aims to cover this gap by investigating the relation between national cultural dimensions (Hofstede model) and car adoption by fuel type in EU countries. This could help car sellers find better solutions for advertising cars with medium and low greenhouse gas emissions. To find better ways to increase the usage of medium- and low-emission cars using targeted advertising, correlations and a multiple regression analysis were used. The results show that consumer preference for one type of fuel is correlated with at least one of Hofstede's six cultural dimensions: the power distance index; individualism versus collectivism; masculinity versus femininity; the uncertainty avoidance index; long-term orientation versus short-term normative orientation; and indulgence versus restraint. The major conclusion of the study is that, as the individualism versus collectivism and indulgence versus restraint scores increase, the usage of low- and medium-emission cars also increases, and as the power distance and uncertainty avoidance indexes increase, the usage of low- and medium-emission cars decreases. At the same time, the driving preference for low- and medium-emission vehicles decreases with the tendency towards collectivism and restraint in EU countries.

**Keywords:** battery electric vehicles; plug-in hybrid electric vehicles; hybrid electric vehicles; CO2 emissions; Hofstede; advertising

#### **1. Introduction**

Climate and environmental change has become one of the most important topics of recent years. The release of carbon dioxide (CO2) and other greenhouse gas (GHG) emissions due to human activities has led to global warming [1] and, therefore, to climate change. Factors such as increases in urbanization, population, wealth, energy consumption, and agricultural activities have resulted in environmental change [2]. As a result, the European Green Deal for the European Union (EU) emerged. One of the aims of this commitment is to transform the EU into a region with no net GHG emissions by 2050 [3].

Worldwide, the transport sector was responsible for 16.2% of GHG emissions [4], and it was the main polluter in Europe [5], responsible for 22.3% of the total GHG emissions in 2020. Most of these GHGs come from passenger cars (44%) [6]. Therefore, the reduction in GHGs from passenger cars is one of the key factors for climate protection measures [7]. Besides EU strategies for low-emission mobility, policy makers must consider the behavioural factors of buyers. Buying behaviour is influenced by cultural, personal, social, and psychological factors [8]. A large body of literature focuses on three of them (personal, social, and psychological factors), and only a few studies focus on the cultural characteristics of car buyers by fuel type. Furthermore, the studied literature shows that considering only incentives when promoting a certain fuel type will influence buyer behaviour only for a short period of time. We do not dispute the relevance of price, incentives, taxes, and other already studied factors that affect the decision to buy a car with a certain engine.

**Citation:** Iancu, I.A.; Hendrick, P.; Micu, D.D.; Stet, D.; Czumbil, L.; Cirstea, S.D. The Influence of Cultural Factors on Choosing Low-Emission Passenger Cars. *Sustainability* **2023**, *15*, 6848. https://doi.org/10.3390/su15086848

Academic Editors: Oz Sahin and Russell Richards

Received: 8 March 2023; Revised: 7 April 2023; Accepted: 12 April 2023; Published: 19 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The objective of our study is to find out how specific traits of national culture, as described by Hofstede's culture dimensions, influence the usage of low-emission passenger cars. To complete the objective, after reviewing the literature, the first step was to find out whether there is a correlation between the cultural dimensions and fuel choice when driving a passenger car. Using multiple regression analysis, we found which cultural traits can best predict the choice of a passenger car.
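The first of the two analysis steps described above, pairwise correlation between a cultural dimension and adoption by fuel type, can be sketched in plain Python. The country scores and shares below are synthetic stand-ins for illustration, not the study's Hofstede data:

```python
# Pearson correlation between a cultural dimension and car-adoption share.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: individualism score vs. share of low-emission cars.
individualism = [30, 46, 55, 63, 71, 80]             # hypothetical scores
low_emission_share = [0.2, 0.5, 0.6, 0.9, 1.0, 1.3]  # hypothetical %
r = pearson_r(individualism, low_emission_share)
print(round(r, 3))  # strong positive correlation, close to 1
```

A multiple regression over all six dimensions would then identify which of the correlated dimensions best predict adoption, as the study does.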

To the best of our knowledge, there is no similar study that shows how to improve the marketing strategies of low-emission passenger cars using culture as a determinant. This study aims to complete the literature by analysing the relations between the national cultural dimensions and car adoption by fuel type in EU countries. By taking into consideration cultural factors, this study could help policy makers, scientists, and marketing specialists find better solutions for promoting the sales of cars with low CO2 emissions by properly addressing them to each country.

To accomplish the research objective, the paper begins with a concise overview of the research topic, and it then reveals an extensive literature review in Section 2, which explores carbon dioxide emissions from passenger cars and the cultural factors that impact the purchasing decisions for these vehicles. Section 3 provides details on the research methodology, while Section 4 presents the findings, results, and a discussion of the study. Finally, the paper concludes with a summary of the key insights from the research in the last section.

#### **2. Literature Review**

#### *2.1. Carbon Dioxide Emissions of Passenger Cars by Fuel Type*

Inside the EU, in recent years, total GHG emissions have dropped, but the transport sector has not followed the same trend. Moreover, in 2019, its GHG emissions increased by 0.8% (shipping not included) [9]. The GHG emissions of the transport industry consist mainly of NOx, CO, and NMVOCs, with CO2 holding the largest share [10].

In the transport sector, passenger cars and light-duty vehicles are the main pollutants, and together they are responsible for 70% of the total GHG emissions in the EU [11]. Due to this matter, the EU is obliged to find new ways to address and encourage the acquisition of less polluting passenger cars.

On EU roads, in 2021, the passenger cars by fuel type were as follows: 52.9% petrol; 42.3% diesel; 3.4% alternative fuels; 0.8% hybrid; 0.4% electric cars (BEVs) [12]. The most used passenger cars are those with internal combustion engines, which are the ones that emit the most GHGs [13]. Figure 1, which represents the average CO2 emissions from different fuel types, shows that BEVs have zero emissions, and that the cars that use E85, LPG, diesel, and petrol as fuel pollute the most.

The literature shows that cars that use bioethanol (E85) have the highest CO2 emissions (Figure 1), but they have the lowest total greenhouse effect when sugar cane-based fuel is used. The reason behind this contradiction dwells in the uptake of CO2 from the air during the growth of sugar cane [14]. In the EU, the countries with the highest rates of CO2 emissions from new cars that come from E85 are Germany (292 gCO2/km) and France (185.5 gCO2/km) [15].

**Figure 1.** Average NEDC emissions (gCO2/km) from new passenger cars in EU countries (2020). Source: [14].

Due to the high rate of GHG emissions from diesel cars, many cities in the EU are considering banning them from central areas [16]. Diesel cars are mostly used in Lithuania and Latvia (69.2% and 63.2%, respectively, of the total passenger car fleets), and least used in Greece (8.1%) [17]. Petrol is the most used fuel in the EU (52.9%), ranging between 91.1% of the cars in Greece and 26.1% of the cars in Lithuania [18]. The popularity of petrol and diesel cars stems from these vehicles' driving range, being designed for long-distance trips [18]. However, a 2019 study carried out in Portugal shows a decrease of up to 27% in emissions when cleaner vehicles complying with post-Euro 6 standards are used [19].

LPG is a colourless gas derived from petroleum, and it is most often used when converting existing passenger car technologies to run on a cheaper fuel with lower GHG emissions [20]. Of the 9,787,916 passenger cars registered in the EU in 2020, 151,999 used LPG as fuel, which emits, on average, 110.9 gCO2/km (NEDC). In this case, the differences between the CO2 emissions in EU countries are small, ranging between 121.42 gCO2/km in Belgium and 102.3 gCO2/km in Romania [14]. Most of the registered cars that use LPG are in Italy (93,339; 6.3% of the cars), with the fewest in Luxembourg (only five cars) [14,17]. Compressed natural gas (CNG) is used by 0.5% of the passenger car fleet in the EU, being most common in Italy (2.4%) [17]. In 2020, the average emissions (gCO2/km) from newly registered cars ranged between 127.4 (Poland) and 81 (Croatia) [9]. CNG is described as one of the promising low-emission alternatives for the short- and mid-term decarbonization of road transport in the EU [21], but studies show that there are no benefits in terms of GHG emissions [22].

Figure 2 shows the distribution of high-emission passenger vehicles in the EU-27. Most of the passenger cars using fuels with high polluting rates (petrol, diesel, LPG, CNG) are in Latvia, Croatia, Slovenia, and Estonia, while Sweden, the Netherlands, and Lithuania have fewer.

**Figure 2.** Percentages of passenger cars in EU-27 (except Bulgaria) with high CO2 emissions in 2019. Source: all the data from the ACEA [14], except for the following: Austria—all passenger cars [23]; Croatia—BEVs [24]; Cyprus, Estonia, France, Latvia, and Malta—all passenger cars [24]; Denmark—LPG [25].

The differences between the average emissions from new petrol–electric and diesel–electric cars are small (41.1 and 40.2 gCO2/km, respectively), these cars being most preferred by Germans in the EU [14]. Hybrid electric vehicles (HEVs) combine a combustion engine with an electric motor and are an intermediate solution between ICVs and BEVs [18]. These cars are mostly used in Sweden, the Netherlands, and Ireland (2.4% of the total passenger car fleet), and least used in Croatia and Romania (0.2% of the total passenger car fleet) [17]. Another solution is plug-in hybrid electric vehicles (PHEVs), which can be charged directly from the power grid and driven for 20–50 km using only electricity [26]. It is safe to say that, on short trips, the CO2 emissions of PHEVs are zero.

The available data (Figure 3) show higher rates of registered HEV and PHEV passenger cars in Sweden, the Netherlands, Finland, Ireland, and Belgium, and fewer in Poland, Romania, Croatia, and Latvia.

**Figure 3.** Percentages of passenger cars in EU-27 (except Bulgaria) with medium CO2 emissions in 2019. Source: all the data from ACEA [14], except for the following: Austria—all passenger cars [23]; Croatia—BEVs [24]; Cyprus, Estonia, France, Latvia, and Malta—all passenger cars [24]; Denmark—LPG [25].

A 2021 study shows that only electric and hydrogen fuel can help in achieving the goals of the Paris Agreement [27]. As Figure 4 illustrates, hydrogen used as a fuel for passenger cars emits only 1.2 gCO2/km, and BEVs have 0 gCO2/km emissions. Hydrogen cars use H2 to generate electricity, the main advantages being that they are easily refuelled at filling stations in 3–5 min and have a good driving range [26]. Many studies show that using electricity to power BEVs has the lowest climate impact. The Netherlands has the highest rate of BEVs (1.2%), followed by Austria, Denmark, and Sweden (0.6%) [17]. A study from 2018 indicates that GHG emissions are reduced by 50–60%, on average, when using BEVs compared with internal combustion engines [28]. The collected data show that cars using hydrogen and electricity as fuel are more common in the Netherlands, Sweden, Denmark, and Luxembourg.

**Figure 4.** Percentages of passenger cars in EU-27 with low CO2 emissions in 2019. Source: all the data from ACEA [14], except for the following: Austria—all passenger cars [23]; Croatia—BEVs [24]; Cyprus, Estonia, France, Latvia, and Malta—all passenger cars [24]; Denmark—LPG [25].

To decrease CO2 emissions, the main component of GHGs, EU policy makers use almost 700 measures to address road transport emissions [11], aiming to decrease the demand for polluting cars and to promote the use and production of more energy-efficient vehicles [16,29,30]. They use "push" or "pull" strategies [31,32]. The first type of strategy addresses car manufacturers and fuel suppliers, while the second (pull strategies) applies to the demand side [11]. EU and national policies related to taxes (on fuel, vehicles, and emissions) can decrease the demand for polluting cars and aim to promote the production of more fuel-efficient vehicles. Besides EU and national strategies for low-emission mobility, policy makers must consider the behavioural factors of buyers. Economic incentives motivate buyers only for the period during which they receive benefits; afterwards, they return to their old buying habits [33–35]. Another study demonstrates that, besides financial help, a certain level of self-sustainability is necessary [36].

#### *2.2. Cultural Factors Influencing Passenger-Car-Buying Behaviour*

Buying behaviour is influenced by cultural, social, personal, and psychological factors [37]. A large body of literature focuses on the personal, social, and psychological factors that influence passenger-car-buying behaviour when choosing the fuel type of the car, but only a few studies focus on the cultural characteristics of buyers.

Reviewing the literature, we find many diverse definitions of culture. In 1952, Kroeber and Kluckhohn listed 160 definitions [38]. Some authors explain culture through empirical studies, while others use more generic formulations [39]. Kotler and Armstrong explain culture as an accumulation of fundamental values, perceptions, wishes, and learned behaviours that differ from one society to another [8]. Moreover, they consider culture the most profound influence on consumer behaviour [37].

One of the most cited authors, Schwartz, sees culture as a "complex of meanings, beliefs, practices, symbols, norms, and values prevalent among people in a society" [40]. The model developed by the author introduces seven dimensions that can predict consumer behaviour: intellectual autonomy, affective autonomy, embeddedness, egalitarianism, hierarchy, harmony, and mastery [40].

Hofstede defines culture as a collective "programming of the mind" [41,42]. He explains that culture represents a set of elements, such as beliefs, attitudes, collective activities, role models, and the language common to a particular group [43]. Through repeated empirical research, Hofstede created one of the most comprehensive models for characterizing national cultures. His model provides a scale from 0 to 100 for each dimension, by country [42].

Hofstede's six cultural dimensions are as follows: the power distance index (PDI); individualism versus collectivism (IDV); masculinity versus femininity (MAS); the uncertainty avoidance index (UAI); long-term orientation versus short-term normative orientation (LTO); indulgence versus restraint (IND).

The model has been used in various sectors and for various perspectives; for example, for the prediction of proenvironmental behaviour in hospitality and tourism [44], changing the organisational culture to increase innovation and productivity [45], understanding the perceived risks related to self-driving cars [46], etc.

The PDI reflects the acceptance of the power distribution in a society. When a country has a score close to 100, the less powerful members will more easily accept the hierarchy and inequalities [47,48]. In countries with high PDIs (Figure 5), luxury articles, fashion items [49], and expensive cars [50] are used to make one's status clear.

**Figure 5.** EU-27 countries' power distance indexes (PDIs) according to Hofstede's culture dimension model [42].

In individualist cultures (Figure 6), people look after themselves and their immediate family, and they strive for identity [47]. Furthermore, they use explicit communication, contrary to collectivist cultures. In countries that score low in this dimension, trust must first be created in order to sell something [49]. In collectivist cultures, people are more inclined to develop a pro-environmental attitude, being ready to pay more for the wellbeing of all of society [51]. A study conducted in Germany, Mexico, and Spain shows that a higher level of collectivism develops stronger eco-friendly behaviours and stronger intentions to adopt renewable energy technologies [52]. The preferred advertisements in collectivist countries focus on the ideas of team, collaboration, and the victory of the community [53].

While in a masculine society (Figure 7) the main drivers are achievement and success, in a feminine society the values are caring for others and a good quality of life [49]. The same study shows that size plays an important role: big in masculine-dominated societies and small in feminine ones. Advertising in countries dominated by masculine values is focused on success (by showing luxury brands) [49], competitiveness, dreams, expectations, and nonfictional elements [54].

The UAI refers to the way individuals in a society relate to uncertainty and ambiguity [54]. In cultures with high scores for this dimension (Figure 8), individuals are more resistant to accepting new technologies and innovation [55], conflicts are perceived as threatening, individuals have a more aggressive driving style [54], and advertisements are structured and serious, with a great deal of technical information [56].

**Figure 6.** EU-27 countries' individualism indexes (IDVs) according to Hofstede's culture dimension model [42].

**Figure 7.** EU-27 countries' masculinity indexes (MAS) according to Hofstede's culture dimension model [42].

**Figure 8.** EU-27 countries' uncertainty avoidance indexes (UAIs) according to Hofstede's culture dimension model [42].

When a society is driven by elements such as perseverance [49], pragmatism, and a focus on the future [48], it scores high in the LTO dimension (Figure 9). Moreover, LTO correlates with pro-environmental behaviours [57].

**Figure 9.** EU-27 countries' long-term orientation indexes (LTO) according to Hofstede's culture dimension model [42].

The last dimension, IND, was introduced in 2010, and it describes the inclination of a society to enjoy life, as opposed to societies that suppress gratification [58]. People living in restrained societies with low IND indexes (Figure 10) tend to be more cynical and pessimistic [59]. This is the least studied dimension in the literature.

**Figure 10.** EU-27 countries' indulgence indexes (INDs) according to Hofstede's culture dimension model [42].

The Hofstede model is contested by some authors because heterogeneous groups can live within a single state; however, sharing the same education system, healthcare system, legal system, and institutions leads such groups to share common goals over a long period [43], and, if required, Hofstede's model can be applied to smaller communities as well. Many articles demonstrate the relevance of this model for cross-cultural studies in marketing, psychology, sociology, and management [60], and it has been described as the best way to measure national cultures [61]. The arguments supporting the choice of Hofstede's cultural dimensions in the present paper are as follows. The dimensions have been in use for a long time (since the 1970s) and have an extensive empirical base from all over the world; Hofstede's original research, conducted in the 1970s, has been expanded and validated over time, with the dimensions used in numerous studies across different countries and cultures. The dimensions are also recognized and used in the business world, allowing companies to tailor their marketing strategies to different cultures [62].

McLeay et al. recommend that future studies on eco-friendly consumption should consider cultural contexts [63]. Barbarossa et al. compared the EV adoption intentions among consumers in Denmark, Belgium, and Italy using Hofstede's cultural dimensions to explain cross-national differences [33]. According to the results, public policy and social marketing campaigns should give greater consideration to the impact of cultural values when promoting environmentally sustainable technologies. It is important to note that promotional efforts should vary based on the specific cultural elements and the level of innovation of the products being promoted [63].

#### **3. Research Methodology**

The purpose of this study is to establish the influence of national culture, as described by Hofstede's cultural dimensions, on the selection of fuel for automobile usage by following a series of methodological steps to arrive at a conclusive answer.

First, a literature review and data collection from different sources were conducted. Afterwards, an assessment of passenger cars by fuel type in every EU country was made. The analysis used the 2019 percentages of passenger cars by fuel type rather than the absolute numbers of cars: the share of each fuel type better reflects the preferred passenger car in a country. The authors consider that new car purchases during the COVID-19 pandemic years were influenced more by the availability of cars and their market prices (the component crisis) than by the cultural dimensions of the buyers. All the data come from five official sources: ACEA [14], except for the following: Austria—all passenger cars [23]; Croatia—BEVs [24]; Cyprus, Estonia, France, Latvia, and Malta—all passenger cars [24]; Denmark—LPG [25]. Among the EU countries, Bulgaria is the only one without official data on the numbers of diesel, petrol, and hybrid electric passenger cars. Bulgaria was not excluded from the study; the absent information was treated as missing data.

The second step was to collect data on the six dimensions of Hofstede's model from the official Geert Hofstede website [63]. The Hofstede model of cultural dimensions was last updated in 2010 in [42]. That edition includes an expanded analysis of cultural differences and updates to the original cultural dimensions based on data collected from additional countries and regions. Each dimension, for all the studied countries, has a score between 1 and 100. Cyprus was the only country without a characterization of its cultural dimensions. However, at least one study assumes that, due to the similarities with Greek culture, the same scores can be applied [33].

Next, the data were entered into SPSS 28 software, and a descriptive analysis of the elements was performed. To detect links between the preferred fuel in passenger cars and the cultural dimensions, we first tested the data for normality using Shapiro–Wilk tests, histograms, and plots. Pearson's correlation was then used for the normally distributed data (*p* > 0.05) and Spearman's rho for the non-normally distributed data (*p* < 0.05).
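The test-selection logic above can be sketched in Python with SciPy (illustrative only; the analysis in the paper was carried out in SPSS 28, and `correlate` is a hypothetical helper name):

```python
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Pick Pearson or Spearman based on Shapiro-Wilk normality of both variables."""
    both_normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if both_normal:
        r, p = stats.pearsonr(x, y)    # parametric test for normally distributed data
        return "pearson", r, p
    r, p = stats.spearmanr(x, y)       # rank-based test, robust to non-normality
    return "spearman", r, p
```

Here `x` could hold, for example, the per-country percentages of petrol cars and `y` the corresponding IND scores.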

The results show the strength of the correlation between two variables, ranging from −1 (a perfect negative correlation) to +1 (a perfect positive correlation). When the coefficient is close to 0, there is no relation between the variables. The significance value (sig. 2-tailed) must also be considered and must be less than 0.05 [64].

For answering the research question, three groups of passenger cars by CO2 emissions were considered:

- high-emission passenger cars: petrol, diesel, LPG, and CNG vehicles;
- medium-emission passenger cars: HEVs and PHEVs;
- low-emission passenger cars: H2 vehicles and BEVs.
To explain the usage of high-, medium-, and low-emission passenger cars by the cultural dimensions, multiple regression analysis was applied. The first step was to determine whether there is a correlation between passenger vehicles by CO2 emissions and the cultural dimensions. Shapiro–Wilk tests were used to determine which correlation tests fit best: Pearson's correlation for normally distributed data or Spearman's test for non-normally distributed data.

Next, in the multiple regression analysis, the cultural dimensions are considered as independent variables, and one of three groups of vehicles by CO2 emissions with which a correlation was found is the dependent variable. For the multiple regression, the equation used was as follows:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k$$

where $Y$ is the dependent variable (the percentage of high-, medium-, or low-emission passenger cars); $X_1$ to $X_k$ are the $k$ independent variables (the cultural dimensions); $\beta_1$ to $\beta_k$ are the regression coefficients; and $\beta_0$ is the intercept (regression constant).
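As a minimal sketch (using NumPy rather than the SPSS procedure actually employed in the paper), the coefficients of such a model can be estimated by ordinary least squares:

```python
import numpy as np

def fit_ols(X, y):
    """Estimate [b0, b1, ..., bk] for y = b0 + b1*x1 + ... + bk*xk.
    X is an (n, k) matrix of independent variables, y an (n,) vector."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend the intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```

With the cultural dimension scores as columns of `X` and an emission-group percentage as `y`, `beta[0]` is the intercept and `beta[1:]` are the slope coefficients.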

To identify the most significant cultural dimensions explaining the usage of high-, medium-, or low-emission passenger cars, the backward elimination method was applied. This widely used method starts from a multilinear regression model containing all the independent variables of interest and eliminates, one by one, the independent variables that are not significant in predicting the dependent variable, until the most accurate regression model is identified.
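The elimination loop can be sketched as follows (a simplified illustration that drops, at each step, the predictor with the largest p-value at or above a 0.05 threshold; the exact stopping rule used by SPSS's backward method may differ, and `ols_pvalues`/`backward_eliminate` are illustrative names):

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """OLS fit with intercept; returns coefficients and two-tailed p-values
    for the slope coefficients (intercept excluded)."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / (n - k - 1)            # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)           # coefficient covariance matrix
    t = beta / np.sqrt(np.diag(cov))                # t-statistics per coefficient
    p = 2 * stats.t.sf(np.abs(t), df=n - k - 1)     # two-tailed p-values
    return beta, p[1:]                              # skip the intercept's p-value

def backward_eliminate(X, y, names, alpha=0.05):
    """Repeatedly drop the least significant predictor until all remaining
    predictors have p < alpha. Returns the surviving predictor names."""
    names = list(names)
    while names:
        _, p = ols_pvalues(X, y)
        worst = int(np.argmax(p))
        if p[worst] < alpha:
            break                                   # all predictors significant
        X = np.delete(X, worst, axis=1)             # drop the worst predictor
        del names[worst]
    return names
```

Applied to the cultural dimensions as predictors, the loop keeps only the dimensions that remain significant for a given emission group.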

#### **4. Results and Discussion**

The descriptive analysis of the fuels used for passenger cars in the EU shows substantial differences between consumer preferences in each country in 2019 (Table 1). The differences between the lowest and highest rates of registered cars across countries are given by the minimum and maximum values. There are countries with low percentages of passenger cars fuelled by LPG (Ireland, Sweden) or CNG (Malta, Cyprus, Croatia, Latvia), or classified as HEVs (Latvia, Croatia, Romania), and other countries with high rates of these types of cars. Although studies show the importance of H2 in reducing CO2 emissions from passenger cars, this fuel is little used, due to barriers such as price, infrastructure, and distribution [65]. Petrol- and diesel-fuelled engines are the most common in the EU, with means of 54.23% and 42.07%, respectively.


**Table 1.** Descriptive statistics of variables.

Besides the economic-related factors, the cultural dimensions of the European countries may explain some of these differences.

To find out whether there is a link between the chosen fuel type and the national cultural dimensions, the data were first tested for normal distribution (Table 2). The Shapiro–Wilk tests show that the data for petrol, diesel, HEVs, low emissions, medium emissions, high emissions, the PDI, MAS, the UAI, LTO, and IND are normally distributed (sig. > 0.05). The remaining variables (LPG, CNG, PHEVs, H2, BEVs, and IDV) are not normally distributed (sig. < 0.05).


**Table 2.** Normal distribution of variables.

#### *4.1. High-Emission Passenger Cars*

To observe the links between the variables, a Pearson correlation test was applied to the normally distributed data. Table 3 shows that the percentage of petrol passenger cars in the EU-27 countries has a moderate positive correlation with the IND cultural dimension (r = 0.419, sig. = 0.03). This indicates that the inclination to buy petrol-fuelled cars is linked to the tendency of a nation's people to enjoy life without being constrained by social norms.

**Table 3.** Correlations between cultural dimensions and usage of high-emission passenger cars. \*, \*\*—Correlation is significant at the 0.05 and 0.01 levels (2-tailed), respectively, as flagged automatically by SPSS.


The next results show that the consumption of LPG in passenger cars is moderately positively linked with the PDI (r = 0.596, sig. = 0.001), and is strongly negatively correlated with IND (r = −0.716, sig. < 0.001). With the increase in the power distance within a society, the consumption of LPG also increases. Nations characterized by restraint will use more LPG-fuelled cars.

The number of cars fuelled by CNG is moderately positively correlated with a nation's tendency towards long-term orientation (r = 0.517, sig. = 0.006). These results differ from other studies, which show that societies with high LTO scores have more pro-environmental behaviours. The results may reflect consumers' perception of CNG, which has been promoted as having lower GHG emissions than traditional fuels, even though emission measurements show no difference between them. It can be summarized that individuals with pro-environmental behaviours are willing to buy cars fuelled by CNG, believing them to be low-emission vehicles.

Correlation only shows how two variables covary. To find which cultural dimensions predict the usage of high-emission passenger cars in general, multiple regression analysis was applied, with a backward elimination process used to identify the most significant cultural dimensions. The highest accuracy regression model was obtained for the combination of the UAI and IND independent variables. The results (Table 4) show that the UAI and IND dimensions describe 42.6% of the variability in the usage of high-emission vehicles in the EU-27 (adj. R square = 0.426). At the same time, for 60% of the analysed EU-27 countries, the evaluation error between the predicted values and the actual usage percentages of high-emission vehicles is less than 1%. The resulting regression model is *High*\_*Emiss* = 97.44 + 0.03·*UAI* − 0.04·*IND*: a one-point increase in the uncertainty avoidance index increases the usage of high-emission vehicles by 0.03 percentage points, while a one-point increase in the indulgence versus restraint index decreases it by 0.04 percentage points.


**Table 4.** Multiple regression analysis between rate of high-emission vehicles and UAI and IND.

Dependent variable: *High\_Emiss.*
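For illustration, the fitted high-emission model reported above can be applied directly (the country scores below are invented examples, not data from the study):

```python
def high_emission_share(uai: float, ind: float) -> float:
    """Predicted percentage of high-emission passenger cars, using the
    reported regression High_Emiss = 97.44 + 0.03*UAI - 0.04*IND."""
    return 97.44 + 0.03 * uai - 0.04 * ind

# A restrained, uncertainty-avoidant profile vs. an indulgent,
# uncertainty-tolerant one: the first predicts a higher share.
print(high_emission_share(90, 30))
print(high_emission_share(30, 70))
```

The narrow spread of the predictions reflects the small coefficients: cultural dimensions shift the high-emission share by only a few percentage points around a large baseline.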

Figure 11 shows the map of EU countries in relation to the UAI and IND scores. As already mentioned, countries with high UAI and low IND scores tend to use more high-emission vehicles.

**Figure 11.** EU map, based on uncertainty avoidance index and indulgence versus restraint index. Mean of UAI = 73.03; mean of IND = 42.74.

To increase the usage of medium- and low-emission vehicles in countries such as Bulgaria, Romania, Czechia, Italy, Poland, Portugal, Hungary, and Croatia, the advertising of passenger cars should emphasize low maintenance costs and low fuel prices. Moreover, the advertising should be well structured and contain technical information.

#### *4.2. Medium-Emission Passenger Cars*

Further, the data show that the usage of HEVs and PHEVs (vehicles with medium GHG emissions) is correlated with four of the six national dimensions (Table 5). HEVs have a strong positive correlation with IND (r = 0.635, sig. < 0.001), a moderate positive correlation with IDV (r = 0.501, sig. = 0.011), a moderate negative correlation with the PDI (r = −0.593, sig. = 0.002), and a moderate negative correlation with the UAI (r = −0.540, sig. = 0.005). PHEVs have a strong positive correlation with IND (r = 0.703, sig. < 0.001), a moderate positive correlation with IDV (r = 0.511, sig. = 0.006), a strong negative correlation with the PDI (r = −0.608, sig. = 0.001), and a moderate negative correlation with the UAI (r = −0.407, sig. = 0.035).

**Table 5.** Correlations between cultural dimensions and usage of medium-emission passenger cars. \*, \*\*—Correlation is significant at the 0.05 and 0.01 levels (2-tailed), respectively, as flagged automatically by SPSS.


The higher a country's power distance index, the less inclined its citizens are to use hybrid cars. The same applies to the UAI dimension: as theory notes, a high UAI indicates that individuals in these societies do not easily accept new technologies and innovation, and HEVs and PHEVs use a technology that is hard for them to accept. The correlations with IDV and IND indicate a link between registered hybrid cars and national tendencies towards individualism and indulgence: people striving for identity and enjoying life are more inclined to buy hybrid cars.

To investigate which cultural dimensions predict the usage of medium-emission vehicles in general, the same methods as previously described were applied. Following the backward elimination process, IDV and IND emerged as the most relevant cultural dimensions. The adjusted R square of 0.566 (Table 6) indicates that 56.6% of the variability in hybrid car usage can be explained by these two cultural dimensions. According to the multilinear regression model, *Medium*\_*Emiss* = −0.884 + 0.023·*IDV* + 0.031·*IND*, a one-point increase in the individualism index increases the usage of medium-emission vehicles by 0.023 percentage points.

**Table 6.** Multiple regression analysis between rate of medium-emission vehicles and IDV and IND.


Dependent variable: *Med\_Emiss.*

Moreover, a one-point increase in the indulgence versus restraint index increases the usage of medium-emission vehicles by 0.031 percentage points. In countries such as Bulgaria, Romania, Croatia, Portugal, Cyprus, Greece, and Slovenia, characterized by collectivism and restraint (Figure 12), the advertising of medium-emission vehicles should emphasize the wellbeing of society when choosing such a vehicle, as well as the low maintenance costs and fuel price.

**Figure 12.** EU map, based on individualism versus collectivism and indulgence versus restraint indexes. Mean of IDV = 56.63; mean of IND = 42.74.

#### *4.3. Low-Emission Passenger Cars*

The data presented in Table 7 show results similar to the HEV and PHEV cases. The use of H2 for passenger cars has a moderate negative correlation with the PDI (r = −0.498, sig. = 0.008) and the UAI (r = −0.384, sig. = 0.048), a moderate positive correlation with IDV (r = 0.505, sig. = 0.007), and a strong positive correlation with IND (r = 0.605, sig. < 0.001). The usage of BEVs has a strong negative correlation with the PDI (r = −0.635, sig. < 0.001), a moderate negative correlation with the UAI (r = −0.425, sig. = 0.027), a moderate positive correlation with IDV (r = 0.443, sig. = 0.021), and a strong positive correlation with IND (r = 0.639, sig. < 0.001).

**Table 7.** Correlations between cultural dimensions and usage of low-emission passenger cars. \*, \*\*—Correlation is significant at the 0.05 and 0.01 levels (2-tailed), respectively, as flagged automatically by SPSS.


Both low-emission fuels are used mostly in countries characterized by low power distance and low uncertainty avoidance. As the indulgence and individualism cultural dimensions increase, low-emission fuels are used more often.

As in the previous cases, backward elimination was applied to identify the cultural dimensions that most accurately predict the usage of low-emission vehicles in general. It was found (Table 8) that 32.7% of the logarithmic variability in low-emission passenger car usage can be accounted for by IND (adjusted R square = 0.327). The other cultural dimensions do not have a significant influence on the variability of low-emission vehicle usage within the EU-27. According to the regression *Log*\_*Low*\_*Emiss* = −1.607 + 0.018·*IND*, a one-point increase in the indulgence versus restraint index increases the (log-transformed) usage of low-emission vehicles by 0.018.


**Table 8.** Linear regression analysis between rate of low-emission vehicles and IND.

Dependent variable: *Log\_Low\_Emiss.*

#### **5. Conclusions**

The European Union, a conglomerate of 27 countries with different cultural characteristics, has the common objective of passing from traditional polluting energy sources to green energy. In recent years, considerable improvement has been seen, with GHG emissions dropping in all sectors, with one main exception: transport. Passenger cars and light-duty vehicles are the main polluters, together responsible for 70% of the total road transport GHG emissions in the EU [11].

The data show that Latvia, Romania, Croatia, Poland, the Czech Republic, Slovakia, Slovenia, Greece, and Italy have the lowest rates of medium- and low-emission passenger vehicles in their car fleets. To encourage the usage of these cars, policy makers have to find more effective ways to address individual buyers.

The results of the study show that the consumer preference for one type of fuel when using a passenger car is correlated with at least one of the following four national cultural dimensions: the PDI, IDV, UAI, and IND. With increases in the IDV and IND scores, the usage of low- and medium-emission cars also increases. With increases in the PDI and UAI, the usage of low- and medium-emission cars decreases. The marketing strategies for low- and medium-emission cars should be addressed according to these four cultural factors as well.

Studying the preference for high-emission passenger cars, it can be observed that these are preferred in countries that score high on the uncertainty avoidance index and low on the indulgence versus restraint index. In countries such as Bulgaria, Romania, Czechia, Italy, Poland, Portugal, Hungary, and Croatia, the advertising of medium- and low-emission passenger cars should be well structured, contain technical information, and emphasize the low maintenance and fuel costs. Medium- and low-emission passenger cars should be promoted as fashionable items in countries that score high on the PDI.

The driving preference for low- and medium-emission vehicles decreases with the tendency towards collectivism and restraint of EU countries. In Bulgaria, Romania, Croatia, Portugal, Cyprus, Greece, and Slovenia, the advertising of low- and medium-emission vehicles should first create trust and highlight the pro-environmental characteristics, low maintenance costs, and low fuel price.

Our results contradict other studies that have found a correlation between collectivist cultures and green behaviour. The reasons behind this contradiction will be studied in the future, but the literature suggests that pro-environmental behaviour is linked to the standard of living [66]. Notably, most of the EU countries characterized by collectivism and restraint are also the countries with the lowest GDPs per capita [67]. It is understandable that the high price of BEVs, HEVs, PHEVs, and H2 cars is a strong barrier to buying low-emission cars.

The symbolism of BEVs, HEVs, and PHEVs must also be considered. For many, it is more important to be "seen green" than "to be green" [68]. Further, we can argue that wanting to be seen as pro-environmental, and different from others, indicates a tendency towards individuality rather than collectivist behaviour. Our study, which uses official data on passenger cars already registered (2019), considers not the attitudes and perceptions towards low- and medium-emission cars but the actions taken by EU citizens.

As future research in this area, the authors intend to conduct a similar analysis for the pandemic period and the period of the conflict in Ukraine. This would enable an examination of the changes that have arisen as a result of these exceptional circumstances and their impact on the findings of the current article.

**Author Contributions:** Conceptualization, I.A.I., D.S. and S.D.C.; Methodology, I.A.I., P.H., D.D.M. and S.D.C.; Software, L.C.; Validation, D.D.M., L.C. and S.D.C.; Formal analysis, P.H.; Investigation, I.A.I., P.H., D.D.M., D.S. and L.C.; Resources, I.A.I.; Data curation, P.H. and L.C.; Writing—original draft, D.D.M., D.S. and S.D.C.; Writing—review & editing, D.S. and S.D.C.; Funding acquisition, I.A.I. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research and the APC were funded by European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant 801505.

**Institutional Review Board Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Sustainability Topics Integration in Supply Chain and Logistics Higher Education: Where Is the Middle East?**

**Maja Rosi and Matevž Obrecht \***

Faculty of Logistics, University of Maribor, 3000 Celje, Slovenia **\*** Correspondence: matevz.obrecht@um.si

**Abstract:** The global logistics industry has grown significantly in the last decade and has become essential to global business activities. In addition, the logistics industry is vital in transportation, urbanization in the Middle East, and transshipment through the Middle East. Due to its increasing importance and size, there is an increasing demand for adequately qualified people capable of managing logistics systems and supply chains holistically and sustainably, to avoid problems caused by unsustainable practices in mobility, transport, and supply chains. However, it is unclear whether the logistics and supply chain education of future leaders, managers, and engineers will follow SDG goals, the rise of new trends, and green technologies, or lag behind. This paper pioneered a systematic approach and analyzed Middle Eastern countries regarding their sustainability integration into higher education programs related to supply chain management and logistics studies. It revealed enormous differences among countries and a lack of sustainability topics in most studied countries. Some countries are also significantly more oriented toward partial logistics challenges such as transport efficiency instead of sustainable supply chains, which are becoming critical challenges for the near future and must be accompanied by formal and life-long learning on sustainability-related topics. The circular economy and corporate social responsibility are especially neglected. It was also revealed that sustainability integration in higher education does not correlate with sustainability scores and the ranking of countries within the sustainability index.

**Keywords:** sustainability integration; Middle East; supply chain management; responsible logistics; education for SDGs

#### **1. Introduction**

The global logistics industry is facing new infrastructure development, new technology adoption, high energy consumption, and political polarization, recently accompanied by new challenges in environmental protection, social challenges, better collaboration, and managing supply disruptions sustainably. In the Middle East (ME), the logistics industry plays a vital role in transportation development and urbanization and will be a crucial player contributing to regional sustainability. Due to its significant size and fast development, there is an increasing demand for adequately qualified people capable of managing not just distribution and supply disruptions but holistically dealing with business, environmental, and social issues. Information on whether logistics education follows this expansion based on a sustainable development agenda or lags behind is unavailable. Therefore, this paper focused on sustainability integration into logistics and supply chain studies in selected ME countries [1–3].

The ME has undergone tremendous cultural, political, and economic growth over the past few years. The region is facing a fundamental change in the oil market, where new technologies are increasing the oil supply on the one hand and, on the other, raising concerns over the environment, forcing a move away from oil. In order to reduce their reliance on oil and become more sustainable societies, oil-exporting countries, including ME countries, are establishing and implementing new reforms to diversify their economies with sustainable directives [1,2] and initiatives [3–5]. Similarly, new goals toward a more sustainable environment and green economy are set in the logistics industry, to become a transformative society [6]. Companies are encouraged to implement an environmentally sound approach and incentivized to lower their carbon footprint [7], and are challenged with the transition to a net-zero economy [8]. Alongside addressing environmental concerns, long-term value creation based on human capital development and good corporate governance is gaining importance [8].

**Citation:** Rosi, M.; Obrecht, M. Sustainability Topics Integration in Supply Chain and Logistics Higher Education: Where Is the Middle East? *Sustainability* **2023**, *15*, 6955. https://doi.org/10.3390/su15086955

Academic Editors: Oz Sahin, Russell Richards and GuoJun Ji

Received: 17 February 2023 Revised: 6 April 2023 Accepted: 14 April 2023 Published: 20 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The strategic location of the ME—at the junction of three continents and with the world's most critical natural resources, including over half of the world's proven oil reserves—has historically made it a crossroads for trade and people and a transition zone for political and cultural interaction [9,10]. The global logistics market is projected to grow by USD 71.96 billion by 2026 [11]. In terms of global competitiveness, the United Arab Emirates (UAE) was ranked among the top 20 countries in 2020 across 13 indexes related to transport [12]. The UAE is also considered one of the largest logistics hubs in the ME [13], and a large part of the country's economy is based on the logistics industry. Issues arising from this industry may significantly impact the business community, logistics organizations, and the overall economy of this region [14]. This shows that logistics is among the most highly prioritized industries in the region. According to Knight [15], oil-reliant economies need to maximize their attractiveness, diversify their economies, and improve their logistics industry competitiveness and performance, as well as make their supply chains more sustainable [16].

Along with the growth of the logistics sector, there is a growing need for adequately educated and skilled human resources, with an emphasis on a sustainable, green society [17]. This affects education providers, especially universities and colleges, which offer logistics-related programs and make essential contributions to society and the economy. Since logistics is a growing field, and industry demand for experts capable of sustainably managing logistics is expanding [18], logistics education can significantly influence the success of sustainability in the logistics sector. Integrating sustainability and embedding sustainable development in higher education is highly topical [19–21]. However, such information is not yet available for the ME; our research therefore focused on analyzing logistics-related higher education programs in selected ME countries and their integration of sustainability topics. The quality and adaptability of supply chain education are determined by how well it adapts to local and global socio-political, environmental, and economic challenges [22].

The analysis of ME study programs serves three purposes: first, it provides a database with novel findings. Second, it compares ME logistics-related study programs using various criteria, focusing mainly on sustainability topics. Third, the results can be used as an orientation tool for the ME logistics sector, and for other fast-growing economies and societies, to develop toward a more sustainable future. However, a study approach and method first had to be developed; this study's innovative contribution therefore also lies in the three-step framework developed for analyzing study programs. The main aim of this study was to explore logistics and supply chain-related study programs in the ME in terms of their environmental sustainability integration. Particular emphasis was placed on identifying specific sustainable development priorities (e.g., sustainability, environment, eco/green, circularity, CSR). We also comparatively analyzed sustainability integration on three levels of logistics—(1) transportation, (2) logistics, and (3) supply chain management. In addition, we assumed that countries with more sustainability topics and better sustainability integration in their study programs also had better sustainability scores. Therefore, potential correlations with the sustainability score/index were also investigated and examined.

#### **2. Review of Theoretical Background**

As sustainable development is recognized as one of the biggest societal challenges nowadays, higher education institutions (HEIs) should incorporate sustainability values into their mission, curriculum, and practice to align with the global sustainability agenda [23,24]. HEIs must enable sustainability-oriented education of future managers, experts, and other social stakeholders and raise their environmental awareness [25]. They have an important role in societal transformation by educating global citizens and delivering knowledge and innovation to society [26].

This also applies to HEIs in the ME, which are experiencing significant transformation, privatization, internationalization, and industry reforms [27]. Higher education institutions must adjust to these changes by reforming their study programs or adding new ones to meet growing market needs. Along with this, with the region's young population estimated to reach 65 million by 2030, education and qualified professionals are crucial elements for achieving sustained development and the sustainable development goals (SDGs) [21,28]. Namely, higher education influences the development of production and sustainable management systems [29].

Regarding environmental issues, universities in the region need to comply with international and environmental requirements, including policies toward reducing the carbon footprint and integrating environmental management into daily business [30]. HEIs, directly and indirectly, impact sustainable development through all their activities, influencing society, the environment, and the economy [31]. Integrating sustainable development and sustainability into their systems is challenging [23–25], particularly when it comes to analyzing sustainability topics and defining priorities such as the circular economy, green transition, social responsibility, and related topics.

Reviewing the literature, we found few studies about ME higher education and noted a recent lack of scientific research on the integration of sustainability-related topics in ME higher education.

Most publications on sustainability in HEIs focus on the Global North; little is known about the state of sustainability in HEIs in the Global South [32]. Similarly, Hassan et al. [33] noticed that no previous publications studied sustainability challenges in HEIs in the Middle Eastern region.

Romani [27] explored higher education issues as a critical political problem in the Arab ME. Education was also identified as a future challenge for the ME by Akkari [34]. Miller-Idriss and Hanauer [35] researched the landscape of transnational higher education in the ME, focusing on offshore educational institutions and programs that foreign institutions have set up in the region. Similarly, Rupp [36] focused on foreign universities and colleges in the ME. Alzyoud and Bani-Hani [37] discussed how universities in the ME could achieve development, sustainability, and competitiveness by applying University social responsibility concepts. Sherif [38] investigated a similar topic, emphasizing how corporate social responsibility (CSR) is implemented in universities in the ME. Another comprehensive survey was performed by Saab and coauthors [30] regarding the environmental content in school and university curricula across Arab countries, and Daneshjo proposed a new approach for teaching sustainability in Arab schools [39]. Keser [29] investigated the effects of higher education on global competitiveness in European countries and ME countries.

By reviewing publications and research studies related to sustainability issues in ME higher education, we found practically no findings on the integration of sustainability topics into logistics and supply chain-related education in those countries. Even studies addressing sectors other than logistics, or other geographical areas, did not cover an analysis of priority topics. Which sustainability topics are taught is therefore of particular importance for the education of future leaders and experts in the fields of the green transition, circular economy, and sustainable supply chains.

#### **3. Methods**

This study identified sustainability-related topics integrated into logistics and supply chain management-related studies in the Middle East. The Middle East was selected as a niche area poorly examined from the sustainability perspective. Since definitions of the ME countries vary, this study focused on 15 countries: Turkey, Syria, Cyprus, Lebanon, Israel, Jordan, Iraq, Iran, Kuwait, Saudi Arabia, Bahrain, Qatar, United Arab Emirates, Oman, and Yemen, as proposed by the World Population Review [40].

To perform this research, a conceptual framework for analyzing and identifying relevant variables for exploring sustainability integration into the logistics and supply chain management-related study programs in the selected ME countries was first developed (Figure 1). Conceptual frameworks represent a way of thinking about a study, or ways of representing how complex things work the way they do [41].

**Figure 1.** A developed conceptual framework for analyzing and identifying relevant variables and integrating sustainability topics into selected ME countries' logistics-related higher education.

We developed a conceptual framework in three steps, namely: (1) web content analysis of selected study programs, (2) grouping and categorizing analyzed topics, and (3) statistical analysis and cross-sections of included topics.

First, web-content analysis was used as a qualitative descriptive approach. Content analysis is a systematic coding and categorizing approach used to analyze and explore large amounts of textual information [42,43]. Private and public higher education institutions in the chosen ME countries with English web pages were searched and listed to create a data sample. A comprehensive study of publicly available online data, accessible on each institution's web pages, identified logistics and supply chain management-related study programs at all study levels (graduate and postgraduate) and cross-compared country specifics regarding the integration of specific sustainability topics. These specific topics were also seen as study program priorities.

The second step was divided into (a) identifying common keywords, (b1) grouping, and (b2) categorizing the obtained data, following Jabareen's [44] conceptual framework analysis. (a) The data obtained from the first step revealed nine keywords commonly used in logistics-related study programs: *logistics*, *supply chain*, *mobility*/*transport* (*air*, *road*, *sea*, *rail*, *maritime*, *ship*, etc.), *management*, *corporate social responsibility* (*CSR*), *environmental*, *eco*/*green*, *waste*/*circular*, *sustainable*, and their synonyms. (b1) We defined groups of keywords; the first group consisted of the most common logistics education-related keywords: "*logistics*", "*supply chain*", and "*transport/mobility*", as well as "*management*". According to our previous study of sustainability and environment-related higher education study programs [45], the second group of keywords referred to sustainability education: "*sustainable*", "*environmental*", "*eco/green*", "*waste/circular*", and "*corporate responsibility*". The database listed programs related to the keywords (e.g., in the program's name, curricula, and study outcomes) and was limited to programs accessible online and in English. (b2) Further, we categorized the keywords into logistics, management, and sustainability-related topics, as presented in Table 1.

**Table 1.** Grouping logistics, management, and sustainability-related topics.


The second step provided a detailed analysis of the most and least common topics in specific countries and study cycles and defined the share of programs with specific topics included in their curricula. This information is of practical importance for seeing whether programs follow the strategic goals of each country or group of countries. It enabled defining the priorities of logistics-related study programs in the investigated countries and exposing the potential for improvements.
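As an illustration, the keyword tagging in the second step can be sketched in a few lines of Python. The keyword lists and the example program title below are simplified assumptions for illustration, not the study's actual coding scheme (which also matched synonyms):

```python
# Hypothetical sketch of step two: tagging a study program with the
# topic groups of Table 1 (logistics, management, sustainability).
KEYWORD_GROUPS = {
    "logistics": ["logistics", "supply chain", "transport", "mobility"],
    "management": ["management"],
    "sustainability": ["sustainable", "environmental", "green",
                       "waste", "circular", "csr"],
}

def tag_program(description):
    """Return the set of topic groups whose keywords appear in the text."""
    text = description.lower()
    return {group for group, words in KEYWORD_GROUPS.items()
            if any(word in text for word in words)}

# Invented example program title:
print(tag_program("MSc in Sustainable Supply Chain Management"))
```

Applied to every program curriculum in the database, such tagging yields the per-country and per-cycle shares analyzed in the following steps.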

The third step was identifying correlations between the studied keywords. The study analyzed correlations in the obtained data with the SPSS software package and performed sequence analysis for different variables to find the correlation between sustainability integration and the sustainability index. The third step also included in-depth analysis by countries, study cycles, and sustainability-related topics in logistics and supply chain HEIs. Last but not least, the Venn diagram method was used to understand the crucial correlations and the fundamental concepts overlapping each other and to identify interconnectivity among the groups of sustainability-, logistics-, and management-related topics [46,47]. Examining the interconnectivity and overlap of these topics enabled us to identify programs that were simultaneously logistics-, management-, and sustainability-related. This combination was seen as the most promising for managing the sustainability-related challenges of the future in the logistics sector. This framework could therefore also be applied in other geographical areas/environments and adds additional value to this research.
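The correlation analysis in the third step (performed in SPSS in the study) can be reproduced with a hand-rolled Pearson coefficient. The per-country values below are invented placeholders for illustration, not the paper's data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented per-country values: sustainability score vs. share of
# programs that include sustainability topics.
scores = [70.1, 66.3, 72.8, 59.4, 68.0]
shares = [0.31, 0.48, 0.22, 0.40, 0.35]
print(round(pearson(scores, shares), 3))
```

A coefficient near zero, as the study reports for its variables, indicates no linear relationship between the two series.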

#### **4. Results**

This paper's dataset included 405 higher education programs (56% private, 43% public, and 1% with programs combined as public and private partnerships) in 15 ME countries. Program curricula were analyzed by nine study topics (keywords) to determine the integration of these topics into various study programs. The most significant share of study programs could be seen at the bachelor level (59%), followed by master's study programs (35%) and PhD study programs (6%). Most study programs were found in Israel.

Of all integrated topics, the highest share of programs focused on management (50.1% of all programs included in the ME database), followed by sustainability, transport, or logistics (31.6–34.8%), and finally eco/green and CSR (12.6–18.0%). Results indicate that only environment and mobility/transport study topics, not sustainability topics, were integrated to some degree in all researched ME countries. The share of study topics integrated into ME higher education programs can be observed in Figure 2.

**Figure 2.** Share of study topics integrated into study programs in ME countries with minimal and maximal integration.

Further, study topics were grouped per study cycle. Figure 3 indicates that management was the study topic commanding the highest share of the curriculum in 53% of all bachelor-level and 49% of all master's programs. Interestingly, mobility/transport was the study topic most often included in PhD programs.

**Figure 3.** Share of study topics included in ME study programs per study level.

Detailed analysis revealed the level of integration of specific topics by country. As can be seen from Table 2, Jordan had the highest share of integrated logistics and management study topics. Turkey indicated high integration of logistics and management study topics but low integration of sustainability-related topics. No study programs that included management study topics were found for the UAE and Bahrain. Results varied depending on the country, although meager inclusion rates were observed for CSR study topics in all researched ME countries.


**Table 2.** Most and least included topics by country.

To better understand the level of topic integration (nine keywords) by country, we analyzed the obtained data with a spider diagram (Figure 4). Countries such as Cyprus, Iran, Syria, Turkey, and the UAE showed stronger integration of logistics-related study topics. Iraq, Jordan, and Syria focused on management in their study programs. Meanwhile, Bahrain, Kuwait, Jordan, Cyprus, and Israel focused more on sustainability than other countries. Kuwait's focus was on environmental study topics, and Turkey focused on management and logistics study topics.

Regarding the focus on specific study topics and locations, correlations between the researched countries were not observed. Only Iraq and Iran were neighboring countries with similar trends in high integration of management and mobility/transport study topics in study program curricula. Similarly, Cyprus and Israel indicated an emphatic focus on sustainability compared with other ME countries.

**Figure 4.** Share of study topics in study programs in ME countries.

By grouping keywords into three priority topics (management, logistics, and sustainability) and using a Venn diagram, the correlations (differences and similarities) between the three priority topics were analyzed. In Figure 5, aggregated results are presented for all ME countries, revealing a stronger focus on management and logistics study topics. This aligns with the finding that 57% of all analyzed study programs included at least one management-group study topic, while sustainability-related priorities were less well integrated.

**Figure 5.** The number of study programs in grouped study topics.

The logistics group showed similar results: at least one logistics-related topic was included in 56% of all study programs in the ME database. The sustainability group indicated that approximately half of all study programs included one sustainability topic in their curricula. A strong connection was observed between the logistics and management groups: a 31% share of the researched study programs included both management and logistics courses in their curricula. For logistics and sustainability, on the other hand, the overlap was significantly weaker (approx. 23%).
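The Venn-style overlap shares reported above can be computed directly from programs tagged with topic groups. The four-program dataset below is a hypothetical toy example, not the study's database:

```python
# Toy example of the Venn overlap analysis: each program is represented
# by the set of topic groups found in its curriculum (invented data).
programs = [
    {"logistics", "management"},
    {"logistics", "sustainability"},
    {"management"},
    {"logistics", "management", "sustainability"},
]

def overlap_share(programs, *groups):
    """Fraction of programs whose curricula include all given topic groups."""
    hits = sum(1 for topics in programs if set(groups) <= topics)
    return hits / len(programs)

print(overlap_share(programs, "logistics", "management"))      # 2 of 4
print(overlap_share(programs, "logistics", "sustainability"))  # 2 of 4
```

The same function with all three group names gives the share of programs covering management, logistics, and sustainability simultaneously, i.e., the center of the Venn diagram.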

Since the spider diagrams (Figure 4) were arranged logically, with keywords related to a specific topic grouped together, notable trends could be observed. By grouping the results by country in terms of the three priority topics, it could be seen (Figure 6) that Bahrain and Kuwait had the highest proportion of programs focusing on "*Sustainability*", whereas Turkey was the country with the lowest share of "*Sustainability*" topics. "*Logistics*" as a study topic group was evident in Turkey, Syria, and Iran. The "*Management*" study topic group showed strong integration in Yemen, Turkey, Jordan, and Qatar. Bahrain had an insignificant share of programs that included management topics in study program curricula.

**Figure 6.** Share of sustainability, logistics, and management study topic groups per country.

Since higher education institutions are engaged in institutionalizing sustainable development, community, and engagement studies in their systems and operational practices, the research aimed to identify possible correlations between countries' SDG sustainability scores and the inclusion of sustainability topics in their curricula. Namely, adopting the SDGs represents a significant global challenge for higher education institutions, forcing them to assess how they engage with these goals and how they address future societal challenges [48]. As shown in Figure 7, the highest sustainability scores were seen in Israel and Cyprus. Although the inclusion of environmental and sustainability study topics in Israel was high, this was not true for Cyprus. Additionally, Turkey's results indicated a high sustainability score (around 70%), but its inclusion of environmental and sustainability study topics was the lowest. The results showed no statistically significant correlation between sustainability scores per country and the analyzed integration of sustainability study topics. This was confirmed with the SPSS Pearson correlation test, which showed no relationship (zero correlation) between the observed variables.

**Figure 7.** Sustainability score per country for 2021 [49] and found programs that include sustainability topics in their curricula.

Another essential aim of the study was to test whether there is a relationship between the integration of logistics study topics in higher education curricula in a given country and the logistics performance index (LPI) specific to that country. According to Kabak et al. [50], higher education institutions are expected to enhance logistics performance. Namely, despite the increase in automation, logistics is still a human-centric business, and more competitive global economies have a higher demand for highly qualified logistics-related labor [50]. It is vital to invest in human capital to improve logistics performance: well-educated logistics managers have more technical competence and problem-solving capability and are thus more efficient [51]. The LPI for the selected ME countries was retrieved from the World Bank (International LPI rankings for 2018) [52] and compared to our data regarding logistics study topics with the SPSS program. The Pearson correlation test showed no relationship (zero correlation) between the observed variables.

#### **5. Discussion**

The importance of sustainability education in higher education institutions is also recognized within the United Nations SDGs. SDG 4 (quality education), target 4.7, states that "all learners acquire the knowledge and skills needed to promote sustainable development, including, among others, through education for sustainable development and sustainable lifestyles, human rights, gender equality, promotion of a culture of peace and non-violence, global citizenship and appreciation of cultural diversity and culture's contribution to sustainable development" [53]. Universities in Arab countries play an increasingly important role in achieving the SDGs through their academic programs and research activities [35]; however, no statistically significant correlation with their sustainability scores was detected in this research. It could be speculated that such a dependency will appear in the long term or is more related to other fields than to education. Detailed and continuous monitoring should be performed to obtain datasets over a more extended period that might enable better calculations.

The SDGs should solve many challenges through "inclusive" or "sustainable" economic growth, assuming that economic growth can be conveniently decoupled from resource consumption. However, the current hegemony of the "sustainability and growth" paradigm has increased inequalities and pressure on natural resources, exacerbating biodiversity loss, climate change, and additional social tensions. Therefore, the paradoxes of sustainable development need to be defined. Integration of sustainability in curricula should focus on various examples of alternative education (e.g., indigenous learning, ecopedagogy, eco-centric education for a steady-state and circular economy, empowerment, and liberation), emphasizing planetary ethics and degrowth, and should be holistic [54,55]. According to Wals [56], some higher education institutions see a new way of organizing and profiling sustainability. However, higher educational institutions (HEIs) must first be systematically analyzed to transform the education system, and this research revealed that, e.g., CSR and the circular economy are among the least advanced topics in ME higher education. This is surprisingly distant from international climate agreements and leading countries' progress in sustainable development. Koleva's [57] study indicates that the interpretation and integration of CSR topics could be slightly different due to the main religion and its beliefs in the ME. In addition, challenging gender equality might affect systematic avoidance of such topics, e.g., in more conservative Arab countries.

HEIs can implement sustainability concepts and translate them into practice through education, curricula, research, campus operations, community outreach, and management [58]. Increasing student internationalization could also increase the availability of sustainability education, especially for students from lower-income countries [59]. Still, it also demands that they fly frequently and consequently live less sustainably, which has changed recently with online studies. Even though the impact of staff was not a part of this study, some other studies revealed that a lack of staff interest in improvements might also be challenging [60].

Since higher education considerably influences its graduates regardless of the academic field, environmental and sustainability topics should figure significantly in higher education curricula. The importance of these topics across a variety of academic fields is also noted by various authors: Mulder [61] focuses on engineering students, Boarin et al. [62] on architecture, Zoller [63] on chemistry and science literacy, Walshe [64] on geography, and Springett [65] on business studies.

Arab countries started taking note of education for sustainable development (ESD) in the early 1980s, and the emergence of ESD has provided a stimulus to reform environmental education. Despite the apparent gap between the Arab region and other parts of the world regarding ESD, there are promising achievements in the region on both the national and regional levels. For example, Jordan, Lebanon, Egypt, Qatar, and Oman include training on integrating ESD themes into their curricula, incorporate ESD into university courses, and fund ESD-related scholarships and programs. In Qatar, for instance, ESD is only included in selected courses [30].

On the other hand, this study revealed that most sustainability-related topics were integrated into logistics-related study programs in Bahrain, Kuwait, Jordan, Cyprus, and Israel. Cyprus and Israel emphasized sustainability compared with other ME countries, which could be related to location and culture, since these countries are closer to Europe and more culturally related to the EU and the "western" lifestyle. There are also some specifics, e.g., Israel, whose programs represent the highest share of study programs in the ME database. This might indicate that Israel focuses on international students more than other ME countries. The ME database included only the programs that provided information about the included curriculum.

The small share (23%) of researched ME study programs that include both logistics and sustainability topics indicates that logistics is not yet well connected to sustainability topics, and future logistics program graduates might lack knowledge about sustainable logistics development. "*Management*", on the other hand, is significantly better integrated into ME study program curricula, since more than half (53%) of the researched bachelor programs include management in their curricula; Iraq, Jordan, and Syria particularly prioritized it. Surprisingly, "*mobility*/*transport*" is most included in PhD programs, indicating that sustainability is still lagging even in research priorities. Logistics topics are also more visible in Turkey, Cyprus, Iran, Syria, and the UAE.

When checking topic overlap, only 11% of the researched study programs included all three study topic groups (management, logistics, and sustainability). "*Management*" and "*logistics*" were also more connected and integrated than "*management*" in combination with "*sustainability*". Population growth, lower incomes than, e.g., in the Global North, and significant oil and mineral reserves might lead to a focus on economic and practical engineering issues and the exploitation of natural reserves rather than sustainability, since environmentalism is costly and often driven by the pragmatism of countries lacking fossil fuel supplies.

Regarding the expected correlation between a country's level of integration of sustainability topics into its study programs and its sustainability score/index, no clear correlation was discovered, nor with GDP. A few countries with higher sustainability scores indicated higher inclusion of sustainability study topics in their study programs, but no proportional tendency was observed (countries with lower sustainability scores did not show lower integration of sustainability topics in study programs). This might be due to this study's particular focus on logistics and supply chain-related programs; assessing the complete set of higher education programs might show a different picture. Therefore, assessments of logistics programs and analyses of their dependence on the LPI were also performed, with similar results: no correlation between a country's LPI and its level of integration of logistics topics into study programs was visible. The results of Ekici et al. [66] show that the greater the education of the logistics manager, the greater the manager's performance, since their knowledge and skills are significantly influenced by the higher education and training offered by governments in different countries. Similarly, Yildiz [67] showed a positive correlation between logistics performance and education, although in some countries only a weak correlation was identified.

The interconnection of sustainability, management, and logistics is clear. Logistics professionals need knowledge and skills from different areas. According to Tatham and Kovács [68], general management skills, functional logistics skills, problem-solving, and people management are relevant to logistics labor. The authors' previous research [45] revealed that another critical skill is sustainability-related knowledge, which is essential for managers in the modern global economy [69]. Management education prepares human capital for jobs in logistics and provides knowledge for optimizing resources and maximizing economic returns through business management knowledge and skills [70], as well as for solving complex interdisciplinary problems [71]. Since logistics takes the lead in today's business administration fields, acquiring logistics- and supply chain management-related higher education knowledge is inevitable for the development of professional managers [72]. This knowledge endows professionals with more skills in subjects such as coaching, operations management, and crisis management, provides them with the opportunity to gain and maintain leadership ability [73], and prepares them for the future challenges of the sustainability paradigm demanded by the EU within the EU taxonomy and the recovery and resilience plan, which also calls for international cooperation as a focus of sustainability education [74].

Both topics, in turn, interrelate with sustainability, which has received increasing attention in management education over the last few years [75]. Under the UN-supported initiative "Principles for Responsible Management Education" (PRME), education institutions are expected to lead thought and action on social responsibility and sustainability issues; that is, to prepare current and future business professionals to engage in more responsible and sustainable practices [70]. The research ideas and results are especially applicable within the EU member states, since the EU recovery and resilience plan demands boosting sustainability operations, formal education, lifelong learning, knowledge, and practices. Further research should include detailed studies on teaching and integrating sustainability topics into individual processes at selected universities in individual countries, and the three-step research concept could also be applied in other countries. Moreover, the EU taxonomy demands sustainable investments, which must be accompanied by sustainability experts within the EU and globally, since many investments are global and interconnected with organizations headquartered in the EU. Further research could also address the cross-border dissemination of progress toward sustainable education outside the EU, with particular emphasis on the Middle East as a geographical area with continuous population growth, an economic focus on fossil fuels, and a position at the crossroads of international trade between Asian production and European consumers.

**Author Contributions:** Conceptualization, M.O.; methodology, M.O. and M.R.; validation, M.R.; formal analysis, M.R.; investigation, M.O. and M.R.; resources, M.O.; data curation, M.R.; writing original draft preparation, M.O. and M.R.; visualization, M.R.; supervision, M.O.; project administration, M.O. and M.R.; funding acquisition, M.O. All authors have read and agreed to the published version of the manuscript.

**Funding:** The European Union-Next Generation EU & The Ministry of Higher Education, Science, and Innovation funded the research. The research was carried out within the project titled "Establishing an environment for green and digital logistics and supply chain education within the Recovery and Resilience Plan scheme".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Special thanks go to Z. F. for cooperation within the process of data gathering and preparing graphs. The authors thank the funder for the research funding and for covering the APC.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Long-Range Wireless Communication for In-Line Inspection Robot: 2.4 km On-Site Test**

**Huseyin Ayhan Yavasoglu 1,2,\*, Ilhami Unal 3, Ahmet Koksoy 3, Kursad Gokce <sup>2</sup> and Yusuf Engin Tetik <sup>2</sup>**

<sup>1</sup> Mechatronics Engineering, Mechanical Engineering, Yildiz Technical University, Istanbul 34349, Türkiye


**Abstract:** This paper presents a study of the feasibility of using in-line inspection (ILI) techniques with long-range communication-capable robotic systems deployed with advanced inspection sensors in natural gas distribution pipelines; such systems are rare in the literature. The study involved selecting appropriate antennas and determining the appropriate communication frequency for an ILI robot operating on Istanbul 12" and 16" steel pipelines. The paper identifies the frequency windows with low losses, presents received signal strength indicator (RSSI) and signal-to-noise ratio (SNR) information for various scenarios, and evaluates the impact of T-junctions, which are known to be the worst components in terms of communication. To utilize the pipeline as a waveguide, low-attenuation frequency windows were determined, which improved communication by a factor of 500 compared to aerial communication. The results of laboratory tests on a 50 m pipeline and real-world tests on a 2.4 km pipeline indicate that long-distance communication and video transmission are possible at frequencies of around 917 MHz with low-gain antennas. The study also assessed the environmental impact of the early diagnosis of anomalies, before incidents occur, achievable with ILI robots using long-range wireless communication.

**Keywords:** natural gas pipelines; nondestructive testing (NDT); in-line inspection robot; wireless communication; antenna; circular waveguide

### **1. Introduction**

The transportation of natural gas and oil, both of which are potentially hazardous materials that can have significant environmental impacts, is commonly achieved through pipelines. According to the International Energy Agency (IEA), around 90% of natural gas is transported via pipelines globally [1]. The safe operation of pipelines is threatened by various factors, including corrosion, third-party damage, and natural disasters [2]. Faults in natural gas pipelines are a threat to the energy supply's continuity and can result in the catastrophic loss of life and property [3]. Therefore, it is essential to have a thorough understanding of the condition of pipelines in order to respond to emergency situations and prevent accidents and malfunctions. In summary, a maintenance and repair decision support system is necessary to ensure the safe and effective operation of these pipelines. When it comes to identifying pipeline anomalies, in-line inspection (ILI) methods are considered to be the most reliable, providing the most accurate results [4]. Pipelines are not harmed during ILI inspections; therefore, these technologies are referred to as nondestructive testing (NDT) or nondestructive inspection (NDI) technologies [5]. Numerous ILI sensor technologies, including magnetic flux leakage (MFL) sensors [6], ultrasonic sensors (UTs) [7], electromagnetic acoustic transducers (EMATs) [8], and laser profilometers (LPs) [9], are used to detect faults during in-line inspections [10].

The applicability of in-line inspection technology extends beyond pipelines transporting hazardous materials, such as natural gas and oil, to include those conveying non-hazardous materials, such as water pipelines [11]. The material composition of the pipeline is a crucial

**Citation:** Yavasoglu, H.A.; Unal, I.; Koksoy, A.; Gokce, K.; Tetik, Y.E. Long-Range Wireless Communication for In-Line Inspection Robot: 2.4 km On-Site Test. *Sustainability* **2023**, *15*, 8134. https://doi.org/10.3390/ su15108134

Academic Editors: Manuel Fernandez-Veiga, Oz Sahin and Russell Richards

Received: 14 January 2023 Revised: 10 April 2023 Accepted: 25 April 2023 Published: 17 May 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

factor in determining the appropriate sensor technology to be employed. Depending on the type of natural gas pipeline, the application of ILI sensors varies. On the other hand, the selection of the inspection tool that conveys the sensor is dependent on the characteristics of the pipeline itself. Natural gas pipelines can be broadly categorized into two distinct classifications: distribution and transmission lines. Transmission pipelines are typically larger than distribution pipelines and operate at higher pressures [12]. In contrast, the distribution system consists of smaller pipelines responsible for delivering natural gas to various end users, including commercial and residential buildings. The distinction between transmission and distribution pipelines is important as they have different design characteristics.

For transmission pipeline inspections, the high-sensitivity sensors are carried by tools called smart pigs [13] that move with the flow. One of the key advantages of these devices is their ability to operate autonomously, without the need for remote control. This eliminates the requirement for long-distance communication during their operation. Natural gas transmission lines are lengthy and uncomplicated, making them ideal for deploying self-contained smart pigs.

However, most gas pipeline systems are not designed or constructed with ILI in mind. Distribution lines are particularly complex, and they do not have the same structural characteristics as natural gas transmission lines. Due to the numerous special transitions and frequent diameter variations, smart pigs cannot be utilized in urban natural gas pipelines. Therefore, wireless robotic systems that can move independently of the gas flow, unlike smart pigs, are required for the ILI of city pipelines.

The scope of a robotic system capable of conducting inline inspections is extensive, encompassing a wide array of mechatronic systems with varying designs and capabilities. In a literature search conducted on this topic, numerous studies on pipeline inspection robotic systems were identified. For instance, the study [14] describes a fluid-driven small-size robotic system. The robot used in this study is more like the pig structure for transmission lines, and it is incompatible with distribution lines. The study [15] presents a crawler robot with three track-driving modules. This robot can be used in distribution lines but has a one-hour operating time and a short wireless range. Thus, it is better suited for visual inspections due to its lack of sophisticated sensors. The study [16] presents an automated vision-based navigation system for ILI pre-inspection. This is another small robotic application that drives on wheels rather than tracks. SAPER II [17] is an additional robotic system that inspects 4" to 14" natural gas pipelines and relies on a wire for power and communication.

However, these robotic systems do not include advanced sensor technology-based robotic systems, which are the subject of this study. The majority of these studies have focused on robotic systems that operate at close range and lack advanced sensors such as MFLs, UTs, and EMATs. In the literature, there are a few robotic systems equipped with MFL sensors that are capable of operating in natural gas distribution lines. One of these studies was on the Pibot robotic system, which is still in development [18]. The Pibot is a wireless [19] snake-like robot that is designed to operate in 16" steel pipes. Pipetel Explorer robots, which are used commercially, are an additional example of a robotic application that employs an MFL sensor [20]. This family of robots has been adapted for different pipe diameters and is primarily employed in North American nations.

Robotic systems with sophisticated sensors and long-range wireless communication, the subject of this paper, are extremely rare in the scientific literature. This is because developing robots with advanced sensors capable of operating in natural gas distribution pipelines presents numerous challenges. Establishing a secure, long-distance connection between the robot and the control station could be regarded as one of the greatest challenges associated with the development of an ILI robot, such as the use of wireless relay [21] for small robots. Additionally, in [22], some bi-directional relay nodes were utilized for the long-distance communication of an inspection robot in a water distribution system. Thanks to the relay nodes, the robot switches its communication between transceivers during

long-distance operation. For instance, in the study [23], an innovative concept of a robot chain for pipeline inspection based on wireless communication is presented. A robot chain is a subtype of a robot swarm in this context. These strategies are challenging to implement, as they require external structures in addition to the station and robot antenna and cause a delay in switching operations. Visible light communication (VLC) is a technique that can serve as an alternative to conventional RF communication strategies. For an actual visible light communication system, it is necessary to consider the uniformity of indoor illumination [24]. It is one of the newest technologies and has a high transmission rate and strong anti-interference capability. However, as shown in the experimental study [25], VLC was developed for short- and middle-range data communication. In [26], a microwave communication system was developed for oil and gas pipelines with a good signal-to-noise ratio (SNR) (15 dB) at 2.4 GHz. However, that study presents communication over a 36 m straight pipeline and indicates that the proposed system can provide communication of up to 150 m, a very short distance for ILI robots.

To increase the communication distance, there are two solutions using natural gas pipelines as a communication medium: a pipe as a waveguide for modems and a pipe as a signal conductor. A pipeline can act as a waveguide to frequencies in the few GHz range, which is suitable for commercially available radio modems. A pipeline can also support the direct signal injection of a signal with a frequency of a few kHz, which requires much more power for the same distance [26,27]. For example, in [28], a gas distribution pipeline was used as a communication channel to avoid installing a dedicated data transmission system. However, this research examined the transmission of data in the low-frequency band over a distance of approximately one kilometer. It was determined that the system under consideration was not capable of transmitting video, a crucial capability for ILI robots.

Despite the numerous studies in the literature on wireless communication, as highlighted above, there exists a gap in the ability to provide both long-distance transmission and video transmission capabilities. The implementation of ILI robots is hampered by this limitation.

Therefore, this study fills an important gap in this regard by examining the data-link system for long-distance communication without requiring an external structure. A preliminary version of the presented information was previously presented as a short paper at the 3rd Latin American Conference on Sustainable Development of Energy, Water and Environment Systems held in São Paulo [29].

The main contributions of this manuscript are as follows: (1) The introduction of a robot designed for deployment in pipelines with diameters of 12" and 16"; (2) a detailed description of the antenna placement on the robot; (3) a comparison of the performance of the station antennas selected for use in steel natural gas pipelines, accompanied by test results of both 10 m and 50 m lines composed of pipes with 12" and 16" diameters; (4) an evaluation of the impact of T-junction pipe components on wireless communication; (5) the determination of the low-attenuation windows to select the proper communication frequency; (6) contrary to the studies in the literature that have utilized straight pipes, this study used an experimental on-site test on a real 2.4 km long pipeline with 42 bends and various special transitions; (7) an evaluation of the environmental implications of enabling ILI robot operations.

This paper is organized as follows: In Section 2, the robotic system and antenna installation are described. In Section 3, an analysis of the frequency range and antenna gain is given. In Section 4, the laboratory test results for 12-inch and 16-inch pipelines are discussed, and T-junction cases are evaluated. In Section 5, the conducted on-site test results on a real 12-inch steel gas pipeline are presented. In Section 6, the findings are discussed, including the impact of this study on natural gas distribution line-related accidents and environmental consequences. Finally, Section 7 presents the conclusion.

#### **2. Robotic System**

The natural gas inspection robot is designed in a modular manner and can move in both directions independently of the flow direction. It is classified as a snake robot with articulated joints. The articulation allows for multiple degrees of freedom within a single system, making its approach to obstacles extremely versatile.

As depicted in Figure 1, the robot's mechanical structure has been completed, and functional testing is currently in progress. The robot can automatically adapt to 12" and 16" pipe diameters by opening its arms based on the data it receives from force sensors. As illustrated in Figure 2, the mechanical structure consists of four distinct types of modules: Camera Modules, Driver Modules, Orientation Modules, and an MFL module. The front and the back modules of the robot are called Camera Modules, and they are equipped with custom-designed communication antennas. The robot communicates with the control station utilizing these antennas.

**Figure 1.** The natural gas inspection robot.

**Figure 2.** Modules of the in-line inspection robot.

#### *Camera Modules and Antenna Installation*

The camera modules at both ends of the robot have a complex multi-joint structure and include several components: two arms with traction motors, a high-resolution fish-eye camera for image acquisition, a single board computer as a supervisory controller, an LED lighting unit for illumination, proximity sensors to avoid a collision during bend transitions, and a laser profiler to detect visual defects. A custom-designed communication antenna is also placed on the front of each camera module, as shown in Figure 3, to receive/transmit data between the robot and the station antenna.

**Figure 3.** Camera module and communication antenna.

#### **3. Analysis of Frequency Range and Antenna Gain**

To enable wireless communication between the robot and the station antenna over a long distance, the proper wave frequency must be chosen and the natural gas pipe used as a waveguide.

The cut-off frequencies of the modes propagating in a circular waveguide are given by

$$(f_c)_{mn} = \frac{\chi'_{mn}}{2\pi a \sqrt{\varepsilon\mu}}\tag{1}$$

where *a* is the radius of the waveguide, ε and μ are the permittivity and permeability of the medium filling it, and χ′*mn* is the *n*-th zero (*n* = 1, 2, 3, . . .) of the derivative *J*′*m*(*x*) of the Bessel function *Jm*(*x*), i.e., *J*′*m*(χ′*mn*) = 0 (for TM modes, χ′*mn* is replaced by χ*mn*, the *n*-th zero of *Jm*(*x*)) [18]. The propagation of waves can be either in the transverse electric (TE) mode or the transverse magnetic (TM) mode [30]. Since the permittivity (ε) and permeability (μ) of natural gas are approximately the same as those of free space [31], the first propagating TE and TM modes and their cut-off frequencies for the 12-inch diameter circular waveguide are shown in Figure 4.

TE11 is the dominant mode. In practice, it is preferable to excite the 12-inch diameter circular waveguide between 585.67 MHz and 764.97 MHz for the lowest attenuation along the pipeline and for polarization control. Since the commercial transceiver communication modules available on the market operate mainly at central frequencies of 868 MHz, 915 MHz, and 955 MHz under 1 GHz, the excitation of one more mode, TM01, would be unavoidable, bringing not only higher losses but also dispersion and attenuation distortion to the signal, since each mode exhibits a different phase velocity and attenuation. Any field propagating in the circular waveguide (pipeline) is a superposition of the modes that propagate at the frequency under consideration.

The TE11 dominant mode cut-off frequency of the 16-inch diameter circular waveguide is 390.7 MHz. The number of higher order modes that would be excited inside 16-inch diameter circular waveguides is at least four (TM01 = 510 MHz, TE21 = 647.4 MHz, TM11 = TE01 = 812.6 MHz, TE31 = 890.9 MHz, TM21 = 1090 MHz) under 1.1 GHz, compared to the only higher-order mode TM01 inside the 12-inch diameter circular waveguide. Due to the more common usage of underground circular pipes with a diameter of 12 inches for natural gas transportation in Turkey, fewer higher-order modes are excited compared to the 16-inch ones when commercial transceiver modules are utilized for communication inside the pipelines.
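As a numerical cross-check, Equation (1) can be evaluated directly. The sketch below is not from the paper; it assumes free-space ε and μ (as stated in the text) and an inner radius of about 0.15 m for the 12-inch line, the value consistent with the cut-off frequencies quoted above.

```python
import math

# Zeros used in Eq. (1): chi'_mn for TE modes (zeros of J'_m) and
# chi_mn for TM modes (zeros of J_m), taken from standard tables.
TE_ZEROS = {"TE11": 1.84118, "TE21": 3.05424, "TE01": 3.83171}
TM_ZEROS = {"TM01": 2.40483, "TM11": 3.83171}

C = 299_792_458.0  # 1/sqrt(eps0*mu0): natural gas is close to free space

def cutoff_mhz(chi: float, radius_m: float) -> float:
    """Cut-off frequency (MHz) of a circular-waveguide mode, Eq. (1)."""
    return chi * C / (2.0 * math.pi * radius_m) / 1e6

# Assumed inner radius of the 12" pipeline (illustrative value that
# reproduces the quoted cut-offs: TE11 ~ 585.7 MHz, TM01 ~ 765.0 MHz).
a12 = 0.15
print(f'12" TE11: {cutoff_mhz(TE_ZEROS["TE11"], a12):.1f} MHz')
print(f'12" TM01: {cutoff_mhz(TM_ZEROS["TM01"], a12):.1f} MHz')
```

With these assumptions the script reproduces the 585.67 MHz and 764.97 MHz figures to within rounding, confirming that TE11 is the dominant (lowest cut-off) mode.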

In addition to the aforementioned information, further parameters are used to determine the quality of wireless communication. In wireless communication, the signal-to-noise ratio (SNR) and received signal strength indicator (RSSI) determine the quality and dependability of the communication channel [32,33]. The SNR is the ratio between the desired signal power and the noise power and is a crucial metric for assessing communication performance. A high SNR value indicates a low noise level and high signal quality. The RSSI, on the other hand, is a measurement of the received signal's strength. It is a relative measure that can be used to estimate the distance between the receiver and the transmitter: a high RSSI value indicates a strong signal, while a low RSSI value indicates a weak one. The SNR and RSSI are important parameters in wireless communication because they provide information about the signal's quality and strength. In Sections 4 and 5, these parameters are examined for multiple data-link cases using laboratory and field testing.
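The relationship between the two metrics can be made concrete: when both levels are expressed in dBm, the SNR in dB is simply their difference. A minimal sketch follows; the RSSI and noise-floor values in it are illustrative assumptions, not measurements from this study.

```python
import math

def dbm(power_mw: float) -> float:
    """Convert a power in milliwatts to dBm."""
    return 10.0 * math.log10(power_mw)

def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    """SNR in dB is the difference of signal and noise levels in dBm."""
    return signal_dbm - noise_dbm

# Illustrative values (not from this study):
rssi = -70.0         # received signal strength, dBm
noise_floor = -95.0  # receiver noise floor, dBm
print(snr_db(rssi, noise_floor))  # 25.0 dB, well above a 10 dB threshold
print(dbm(1000.0))                # 30.0 dBm, i.e., the paper's 1 W output
```

The same conversion explains the 30 dBm figure used in the on-site tests: 1 W = 1000 mW corresponds to 30 dBm.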

#### **4. Laboratory Tests**

To investigate the effects of antenna gain on the transmission performance inside the pipeline, two different ultra-wideband double-ridged antennas with high- and low-gain characteristics were used. As illustrated in Figure 5, the tests were initially conducted in pipelines that were constructed around the research center.

**Figure 5.** Laboratory test pipeline.

The measurement setup for the laboratory tests mainly consisted of an Agilent Vector Network Analyzer (VNA) E8361A (10 MHz–67 GHz). VNA measurements were performed for the frequency interval of 500 MHz to 3 GHz with an output power of +5 dBm. Each spectral measurement was represented with 401 equally spaced frequency points (data points) and a 1 kHz IF bandwidth (BW) within the interval specified by the VNA, yielding a spectral resolution of 6.25 MHz. These parameters significantly reduced the noise floor and improved the dynamic range. The measurement system, including the connectors and cables, was then calibrated to remove the impairment caused by these components in each laboratory experiment and to ensure reproducibility. The calibration data were saved to the internal memory of the Agilent VNA E8361A. The frequency interval was measured using two different horn antennas attached to the VNA ports. The utilized low-gain antenna with a small aperture size operated between 0.9 and 18 GHz (1 dBi gain @ 915 MHz), and the high-gain antenna with a big aperture size operated between 0.5 and 4.5 GHz (10.5 dBi gain @ 915 MHz). The stored calibration data were applied to de-embed the setup from each measurement, and the scattering parameter S21 was recorded as the transmission loss (or attenuation) in dB for each data point.
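The quoted spectral resolution follows directly from the sweep settings: the span divided by the number of intervals between the equally spaced points. A minimal arithmetic check (the 121-point figure for the finer 900–930 MHz sweep described later is inferred from its stated resolution, not given in the text):

```python
# Resolution of the 500 MHz - 3 GHz sweep with 401 equally spaced points:
span_hz = 3e9 - 500e6
points = 401
resolution_hz = span_hz / (points - 1)  # intervals, not points
print(resolution_hz / 1e6)  # 6.25 (MHz), as stated in the text

# Conversely, a 900-930 MHz sweep at 0.25 MHz resolution implies:
points_fine = int((930e6 - 900e6) / 0.25e6) + 1
print(points_fine)  # 121 frequency points (inferred, not stated)
```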

Figure 6 compares the transmission loss within a 12-inch-diameter straight pipe when two antennas were utilized. Below 1.1 GHz, where the transmission loss was uniform and low, the communication channel within the circular waveguide was optimal, as depicted in the figure.

**Figure 6.** Transmission loss (dB) characteristics with respect to frequency between high- and low-gain antennas.

Although 12-inch diameter pipelines are widely used in Turkey, there are several underground transitions between 12-inch and 16-inch diameter pipes. The losses of 12-inch to 16-inch diameter transitions are no greater than 0.5–1.0 dB in the frequency range of 750 MHz to 950 MHz compared to the losses of same-length straight pipelines with a 12-inch diameter (Figure 7).

As shown in Figure 7, comparing the communication loss over air to that within a circular waveguide with a 12-inch diameter at a distance of 10 m revealed a minimum increase of 25 dB inside the pipeline, which corresponded to a 500-fold improvement in communication performance.

**Figure 7.** Transmission loss (dB) measurements using high-gain antennas among 12" straight pipelines, 12"–16" transitions, and free-space cases.

Attenuation tests were also conducted in a 50 m long pipeline constructed around the research center, including both 12-inch and 16-inch straight pipelines with bend and transition parts, as shown in Figure 8. The pipeline under test comprised eight 12-inch pipe sections, two 16-inch pipe sections, and two elbow joints. As depicted in Figure 8, the 12-inch pipes are denoted by yellow, while the 16-inch pipes are represented by blue.

In this case, new VNA measurements were performed for the frequency interval of 900 MHz to 930 MHz, with a higher spectral resolution of 0.25 MHz. The attenuation test results for the 50 m length test pipeline located in front of the laboratory were analyzed both with respect to the frequency and the distance/length. The evaluation results, as a function of frequency for the different lengths of the test pipelines, are presented in Figure 9, while the results based on distance for different frequency ranges are shown in Figure 10.

**Figure 9.** Attenuation results as a function of frequency for different lengths of test pipelines.

**Figure 10.** Attenuation test results as a function of distance/length for different frequency ranges on average.

According to the attenuation results with respect to frequency, there were four low-attenuation windows suitable for efficient communication inside the pipelines (Figure 9). The average attenuation level also increased as the pipeline became longer within these low-attenuation windows (Figure 10).

In another measurement study, the transmission characteristics of three different T-junction cases with different antenna gains were investigated over the range of 500 MHz to 1 GHz, with a spectral resolution of 6.25 MHz (Figure 11). Three types of antennas were attached to the VNA ports, as shown in Table 1: low-gain (1 dBi), middle-gain (4 dBi), and high-gain (10.5 dBi) double-ridged antennas.

**Figure 11.** Low-, middle-, and high-gain antennas used in the measurements: (**a**) Low-gain double-ridged horn antenna; (**b**) middle-gain double-ridged horn antenna; (**c**) high-gain double-ridged horn antenna.


**Table 1.** Site measurement results with different selections of antennas and their polarizations.

Figure 12 depicts the test pipeline designed to examine communication in three distinct scenarios for T-junctions, one of the worst wireless communication components.

**Figure 12.** Measurement set-up of transmission characteristics of three different T-junction cases.

Figure 13 only depicts the transmission response (loss) for the high-gain antenna because the graphs obtained from the tests conducted with three different antennas have comparable characteristics. Information about the tests conducted with other antennas is provided in the text that follows.

**Figure 13.** Transmission characteristics of high-gain antenna for horizontal polarization with respect to frequency for three different cases.

The transmission response (loss) in Figure 13 shows major attenuation (below −30 dB) at the central frequency of 737.5 MHz. Similarly, the transmission response for the vertical polarization of the high-gain antennas shows major attenuation at the central frequency of 914.5 MHz. On the other hand, the transmission response for both the vertical and horizontal polarizations of the middle-gain antennas shows major attenuation at the central frequency of 738 MHz. Finally, the transmission response for the vertical and horizontal polarizations of the low-gain antennas shows major attenuation at the central frequencies of 918.7 MHz and 737.5 MHz, respectively.

#### **5. On-Site Test Measurements**

As shown in Figure 14, real pipeline tests were conducted by installing antennas and measuring equipment (spectrum analyzer) at both ends of the pipeline.

**Figure 14.** Wireless communication on-site test.

Due to the uniform and low-loss transmission characteristics of the frequency band under 1.1 GHz, Microhard pDDL900 Dual Frequency OEM Ethernet and Serial Digital Data Link modules and accompanying evaluation boards were selected for the site measurements, owing to their 900 MHz frequency band of operation and adjustable output power (up to 1 W).

The preferred digital data link module for wireless communication was developed to provide flexibility within the 900 MHz frequency band (ranging from 902–928 MHz). This module has a maximum data rate of 25 Mbps, enabling the testing of various frequencies at various output power levels within a natural gas distribution pipeline. Here, the 2.4-km-long test pipeline contained 42 bends and 3 specialized transitions (such as river and railway crossings). The chosen pipeline contained numerous challenging pipe components and special transitions, and it was essential to evaluate how transmission can be achieved even under the most challenging conditions.

Two distinct ultra-wideband double-ridged antennas with high- and low-gain characteristics were used for the site measurements. These measurements were also carried out by considering different polarizations and a different selection (Tx/Rx) of antennas at a 917 MHz central frequency with a 6 MHz bandwidth (one of the three low-attenuation windows depicted in Figure 9) and 1 W (30 dBm) of output power. The received signal strength indicator (RSSI) [21] and signal-to-noise ratio (SNR) results of the conducted tests are given in Table 1. The SNR indicates how much the signal level was greater than the noise level. The higher the ratio (above 10 dB), the better the signal quality [34]. The digital modulation type was also set as adaptive to obtain the best SNR in each site measurement test.

#### **6. Discussion**

This section discusses the findings of the study and also addresses the possibility of reducing environmental consequences through the early detection of anomalies by utilizing ILI robots that can be deployed after long-range wireless communication capabilities have been established.

Limited research has been conducted on the application of pipeline monitoring for both water and gas distribution networks. The existing research has examined the viability of wireless propagation over 100 m at a frequency of 2.5 GHz [35] and the wireless control of an in-pipe robot at a lower frequency of 434 MHz [22]. Our study, however, presents measurements conducted on-site that demonstrate satisfactory results for communication at the central frequency of 917 MHz over a significantly longer range of 2400 m.

#### *6.1. Frequency and Antenna Selection*

The output power of the transmitter is an important factor to consider during testing. The tests were conducted at 30 dBm, which is comparable to the 25 dBm [35] level used in similar studies. Wireless communication systems can employ different frequency bands depending on the amount of data to be transmitted and the range of communication. For low-data-rate applications where a narrow bandwidth is sufficient, frequencies of 100 kHz [36] can be used, whereas for short-range communications requiring high data rates, higher frequencies, such as 2.4 GHz [35,37], can be chosen. The results of the laboratory tests provided crucial information regarding frequency selection. The use of high-gain antennas for pipeline communication was clearly better in terms of the lower transmission loss obtained. In addition, the transmission losses varied randomly and fluctuated over the frequency band due to the higher-order modes excited above approximately 1.1 GHz. Hence, it was hard to communicate properly above 1.1 GHz inside the 12-inch diameter pipes due to the large ripple in the losses at higher frequencies. The lower-frequency band (under 1.1 GHz) led to a better communication channel inside the circular waveguide, in terms of uniform and low-loss transmission characteristics over the related frequency band.

Regarding transmission over the air and through pipes of different diameters, the study yielded the following findings: 12-inch to 16-inch diameter transitions and their straight-pipeline equivalents exhibited almost identical transmission-loss characteristics, which permits the use of as many 12-inch to 16-inch transitions underground as needed. Moreover, comparing the free-space loss over the air with the loss inside the 12-inch diameter straight circular waveguide at the same distance (10 m), the dynamic range is at least 25 dB greater inside the pipeline. This corresponds to communication performance approximately 500 times superior within the pipeline compared to communication in free space (over air).
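For reference, the decibel figures above can be converted to linear power ratios with a short helper (25 dB corresponds to a power ratio of about 316, and a factor of ~500 corresponds to about 27 dB):

```python
import math

def db_to_ratio(db: float) -> float:
    """Convert a power difference in dB to a linear power ratio."""
    return 10 ** (db / 10)

def ratio_to_db(ratio: float) -> float:
    """Convert a linear power ratio to a dB value."""
    return 10 * math.log10(ratio)

print(f"25 dB -> x{db_to_ratio(25):.0f}")      # factor of about 316
print(f"x500  -> {ratio_to_db(500):.1f} dB")   # about 27.0 dB
```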

The attenuation tests employed in this study allow the following conclusions: Three wide low-attenuation windows were identified, namely 906–912 MHz, 914–920 MHz, and 922–928 MHz. For all three windows, the average attenuation level increased as the pipeline lengthened. According to the test results, electromagnetic waves within the pipelines were attenuated by approximately 0.5 dB per 5 m (i.e., 0.1 dB/m) on average.
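The lab-measured average rate of 0.5 dB per 5 m can be used for a first-order estimate of in-pipe attenuation over a straight run. This is an illustrative sketch only; real pipelines add junction, bend, and mismatch losses on top of this linear term:

```python
ATTEN_DB_PER_M = 0.1  # lab-measured average: 0.5 dB per 5 m

def inpipe_attenuation_db(length_m: float) -> float:
    """First-order straight-run attenuation estimate in dB."""
    return ATTEN_DB_PER_M * length_m

for length in (10, 50, 100):
    print(f"{length} m straight run: ~{inpipe_attenuation_db(length):.1f} dB")
```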

The examination of wireless communication in T-junctions shows that they are among the most critical parts of data-link systems, because these components are the most likely to degrade wireless communication along the transmission path. Because a pipeline containing a T-junction has three possible directions, the results varied depending on which ends of the pipe were used for communication; tests were therefore conducted for three distinct scenarios. Although the T-junctions negatively affected wireless communication in all cases, the worst case occurred when the station antenna and the robot did not communicate along the same direction. In this study, the Case 1 and Case 3 scenarios were found to significantly degrade wireless communication. When these tests were repeated with three distinct antenna types, similar results were obtained, and significant attenuation was observed regardless of antenna type.

As testing in a real pipeline is not always possible, the findings presented in this study are of critical importance. In highly complex pipelines containing numerous elbows, valves, and specialized transitions in addition to straight sections, the selection of antenna, polarization, and frequency becomes paramount. According to the measurement results in this study, it was possible to communicate with the lowest-gain antennas at both the Tx and Rx sites, regardless of polarization (V: vertical, H: horizontal), over a distance of 2.4 km inside a pipeline containing 42 bends and 3 special transitions. Additionally, live footage was successfully transferred through the 2.4-km-long pipeline.

#### *6.2. The Study's Impact on Environmental Consequences and Major Accidents*

Pipelines are widely acknowledged as a dependable and efficient method of energy transmission with minimal environmental impact, provided they are properly operated and maintained. However, pipeline accidents and malfunctions can have severe consequences, causing substantial harm to human life and the environment. The effects are especially pronounced in natural gas and oil pipelines, where such incidents can lead to explosions and environmental contamination, respectively.

In accordance with regulatory frameworks, operators must consistently identify and manage risks associated with pipeline segments located in high-consequence areas (HCAs), where any untoward incident can significantly affect public safety and the environment. As a crucial component of pipeline integrity management (PIM), pipeline defects and anomalies are meticulously analyzed and promptly remedied. This proactive approach ensures that the pipeline system operates safely and sustainably. Notably, under American regulations, PIM requirements have been in effect since 2002 for all hazardous liquid pipelines, since 2004 for natural gas transmission pipelines, and since 2010 for natural gas distribution pipelines [35].

Figure 15 demonstrates that, as a result of integrity management, the environmental impacts of both natural gas and oil pipelines have decreased despite their growing lengths.

**Figure 15.** Serious incidents per million miles for gas pipelines in the United States of America.

Based on Pipeline and Hazardous Materials Safety Administration (PHMSA) data [38], presented in Figure 15, serious accidents are decreasing as a result of pipeline inspections and PIM. As a part of PIM, the ILI approach produces the most precise and reliable results among pipeline inspection activities [39]. In distribution lines, ILI can only be performed by robots, but the difficulty of wireless communication over long distances has so far precluded their use; consequently, this improvement has been limited to gas transmission lines, where conventional ILI techniques can be applied.

This study presents a long-range communication method for robotic systems that enables the use of robots in unpiggable natural gas distribution pipelines. With this method, more accurate inspections can be performed, and more efficient and organized responses to pipeline issues can be achieved, reducing unnecessary excavations and fugitive gas emissions into the atmosphere. This approach also has the potential to mitigate accidents and decrease environmental impact, particularly for older pipelines. Overall, it not only ensures the continuity of the gas supply but also promotes safety and environmental sustainability.

#### **7. Conclusions**

In conclusion, this study has demonstrated the viability and significance of employing ILI robots in natural gas pipelines by addressing the problem of long-range wireless communication.

The laboratory tests on pipelines with 12-inch and 16-inch diameters and 10 m and 50 m lengths revealed communication losses regardless of the antenna type, particularly at T-junction transitions; the worst case for communication occurred when the transmission path changed planes at these junctions. However, utilizing the pipeline as a waveguide and selecting frequencies below 1.1 GHz can improve communication by a factor of roughly 500 compared with over-the-air communication. In addition, low-gain antennas successfully transmitted video over a pipeline length of 2.4 km with 42 bends and 3 special transitions at 917 MHz, a frequency selected from the identified low-attenuation windows.

Our findings indicate that a signal-to-noise ratio (SNR) slightly above 10 dB is required for high-quality long-range communication within a pipeline. Low-gain antennas with V-V/H-H polarization could transmit up to 2.4 km, whereas antennas with higher gain and vertical polarization improved communication quality. Despite the pipeline's irregularities, the reliable on-site measurements indicate promising results for vertically polarized communication in the 914–920 MHz range.
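To illustrate what the reported figures imply, the Shannon limit for the 6 MHz-wide 914–920 MHz window at the ~10 dB SNR threshold can be computed. This is a theoretical upper bound on channel capacity, not the throughput actually achieved by the link in this study:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR), SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# 914-920 MHz low-attenuation window, ~10 dB SNR threshold from the study
capacity = shannon_capacity_bps(6e6, 10.0)
print(f"Theoretical capacity: {capacity / 1e6:.1f} Mbit/s")  # ≈ 20.8 Mbit/s
```

At ~20 Mbit/s of theoretical headroom, live video transfer over the 2.4 km link is plausible well within the Shannon bound.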

ILI technology has a substantial influence on reducing fatalities and severe injuries and on preventing accidents with negative environmental impacts. The results of this study provide significant findings for incorporating ILI technology into distribution lines through the use of robots.

**Author Contributions:** Conceptualization, H.A.Y. and I.U.; methodology, I.U. and A.K.; software, Y.E.T. and A.K.; validation, I.U. and A.K.; investigation, I.U.; writing—original draft preparation, H.A.Y. and I.U.; writing—review and editing, all authors; supervision, H.A.Y.; project administration, H.A.Y. and K.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data used to support the findings of this study are available from the corresponding author upon approval of the project's administration.

**Acknowledgments:** This work was performed under the "Pipeline Inspection Robot for 12-inch and 16-inch Natural Gas Distribution Pipelines" project, which is supported and financed by IGDAS, a gas distribution company located in Istanbul, Turkey. The inspection robot was designed and developed by the Robotics and Smart Systems Department of TÜBİTAK RUTE.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Nomenclature**


*χ′mn* Zeros of the derivative of the Bessel function

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
