Article

ELECTRE III for Strategic Environmental Assessment: A “Phantom” Approach

by
Fabrizio Battisti
Department of Architecture, University of Florence, Via della Mattonaia 8, 50121 Florence, Italy
Sustainability 2022, 14(10), 6221; https://doi.org/10.3390/su14106221
Submission received: 13 April 2022 / Revised: 10 May 2022 / Accepted: 18 May 2022 / Published: 20 May 2022

Abstract

The Strategic Environmental Assessment (SEA) is a systematic evaluation process of the environmental consequences of urban and territorial plans and programs which aims to guarantee a high degree of environmental protection and to contribute to integrating environmental factors during the design, adoption, and approval of plans and programs. Although SEA had been included in the legislation of every European Member State by 2017, in these countries—and particularly in Italy—there is a diffuse lack of indications on procedures and/or evaluation protocols. In this article, the use of evaluation techniques in SEA is discussed. The specific objective of the research is the construction of an evaluation method to express a synthetic judgement—based on acknowledged, objective parameters—within the SEA procedure. According to the literature review results regarding the SEA procedure and its possible supporting methodologies, Multi-Criteria Decision Analysis (MCDA) appears to be the most SEA-coherent approach. Moreover, the ELECTRE method family has shown the highest suitability to perform the evaluation phase of SEA. Hence, an operational development of ELECTRE III is herein proposed and applied to a case study.

1. Introduction

Strategic Environmental Assessment (SEA) was conceived in the late 1980s [1]. It is a systematic evaluation process of the environmental consequences of urban and territorial plans and programs. It aims to guarantee a high degree of environmental protection and contributes to the integration of environmental factors during the design, adoption and approval of plans and programs to certify that they benefit sustainable development [2,3].
Hence, SEA aims to assess the possible environmental effects of national, regional and local policies, plans and programs (including their variants) during the design phase before they are approved, allowing for taking actions upstream on planning and programming choices [4,5].
SEA is widely used in several countries. The USA, Canada, and Australia were among the first to include in their frameworks legal and administrative provisions that impose the use of SEA under certain circumstances. More recently, SEA has been introduced in the frameworks/policies of China, Vietnam, South Africa, Ghana, the Dominican Republic, and Guatemala. Moreover, several key international agreements, among which are the Convention on Biological Diversity and the ESPOO Convention, include government pledges to implement SEA (and Environmental Impact Assessment, EIA) in national legislations following a trans-border logic [6].
In the European Union (EU), SEA has been used to enact community strategies for sustainable development, integrating the environmental dimension into strategic decisional processes at an operational level. Directive 2001/42/EC has imposed the inclusion of SEA in the national legislations of the 27 Member States, making it compulsory and a basis for the design of plans and programs that affect and transform physical spaces. In recent years, some 2000 SEA procedures per year have been recorded in the EU [7].
Hence, as of 2021, SEA no longer represents a novel, untested approach; rather, it is an internationally widespread assessment methodology with a wide track record.
Technically, SEA falls under the category of the so-called Decision Support Systems (DSS), as this tool is aimed at the preparatory assessment for decision making regarding shared strategic goals. The related scientific literature agrees that, in addition to the possible impacts of plans and programs on the traditional environmental components, economic and social aspects have to be considered within this assessment: this is crucial to verify eco-systemic compatibility and to include all the factors that affect overall sustainability [8,9]. The process of SEA could also significantly change the decisions (associated with the allocation of resources and land use) taken during the preparation of the development of plans and programs.
By 2017, SEA had been included in the legislation of every European Member State. The specific administrative characteristics of each Member State have influenced the organizational modalities established on a national scale for the adoption and the implementation of the Directive. In fact, regulatory frameworks concerning the implementation of Directives differ from state to state. More than half of the Member States have modified their national legislation to implement SEA, in order to adapt national provisions to the Directive and to solve cases of “bad” application; other Member States have implemented SEA through a new, specific national regulatory framework [9,10,11,12]; finally, other Member States have integrated its provisions into pre-existing laws, including those deriving from the implementation of the directives on EIA (85/337/EEC; 2011/92/EU; 2014/52/EU). Member States have kept a high degree of arbitrariness regarding the management of endo-procedural aspects related to SEA, specifically: (i) the choice of Competent Authorities (usually represented by the Ministries tasked with environmental management, specific sector Agencies, or a purposely instituted Supervisory Body); (ii) the evaluation protocols.
In Italy, SEA was introduced into the national framework by Legislative Decree n. 152/2006, Part II (Environmental Code, EC), with its subsequent modifications and integrations. This law imposed the use of SEA for the plans and programs carried out in Italy. The EC also allows for a simplified procedure (SEA screening) for plans and programs with a limited territorial scope (less than 10 hectares). In that case, the full SEA procedure (art. 13) can be avoided, unless significant environmental impacts are envisaged.
Thirteen years have passed since the implementation of the EC, and no enforcement regulations have been promulgated in Italy (on a national level) to define the methodologies to be used in the evaluation processes mentioned in articles 12 and 13 of the EC, nor their criteria/sub-criteria. Each region, in accordance with the competence on the environment shared between the state and the regions (Italian Constitution, Title V), has provided indications for the implementation of these assessments, regarding both the SEA screening and the full SEA procedure. However, the examination of regional SEA legislation throughout the Italian territory shows a diffuse lack of indications on procedures and/or evaluation protocols.
The Italian situation matches that of the other Member States: a web search on the national institutional sites of the Member States found no evaluation protocol. However, the scientific literature offers several evaluation techniques and tools which have proven feasible for use in SEA through scientific experimentation. Evaluation methods that can support SEA include the following: Scenario Construction [13,14,15,16]; checklist [17,18]; Multi-Criteria Decision Analysis (MCDA) [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33]; Cost–Benefit Analysis (CBA) [34,35,36,37,38,39,40]; Life Cycle Assessment [41,42,43,44,45,46,47,48,49,50]; Risk Analysis [47,48,49]. Moreover, there are several techniques that encourage the participation of experts and stakeholders [51,52,53,54,55,56,57,58,59,60,61,62,63,64], in accordance with the principles of the directive that introduced SEA, such as the Expert Panel, the Delphi Method, and Stakeholder Analysis [63,64,65,66,67]. Section 2 contains a related literature review.
The present work is part of the field of research concerning the use of evaluation techniques in SEA. The specific objective of the present work is the construction of an evaluation method to express a synthetic judgment—based on acknowledged, objective parameters—within the SEA procedure. Specifically, the research focuses on the final assessment of environmental compatibility and sustainability of plans and/or programs. Suitable evaluation methods must systematize the conclusions drawn during the SEA process. This is necessary to contribute to the Competent Authorities’ final decisions on the acceptance or rejection of plans; the evaluation model could also be used in the monitoring phase.
In this sense, this work comprises part of the studies on evaluation methodologies to support urban planning processes. The research deals with the sensitive relationship between evaluation and administrative decisions. It aims to offer methodological contents and preliminary results to push legislators and/or authorities in charge of SEA toward the adoption of regulations on the use of evaluation protocols based on procedures tested in the scientific literature. In general, the objective is to identify suitable assessment procedures whose “operational mechanics” have already been extensively tested and can be used in different territorial contexts (in various countries), but which at the same time can be suitably adapted to specific local needs. In other words, the purpose is to achieve a recognizable, tested, and effective—yet, at the same time, flexible—evaluation procedure. According to the literature review results regarding the SEA procedure and its possible supporting methodologies, Multi-Criteria Decision Analysis (MCDA) appears to be the most SEA-coherent approach. Moreover, the ELECTRE method family [68,69,70] has shown the highest suitability to perform the evaluation phase of SEA. Hence, an operational development of ELECTRE III [71,72,73,74,75,76,77,78,79] is herein proposed. In particular, this implementation of ELECTRE III introduces an unreal alternative—dubbed “phantom”—that is structured by assuming hypothetical and desirable “performances” deriving from the process of interaction and consultation between Competent Authorities, Competent Subjects in Environmental Matters from the institutions, possible stakeholders that represent specific trade associations, and citizens. The interaction and consultation of these subjects can be carried out through specific techniques for participation and inclusion in evaluation and decisional processes, such as the Delphi Method, Focus Group, and/or Stakeholder Analysis. This follows the principle of wide participation outlined in the European Directive that instituted SEA. In the proposed model, experts’ participation is hypothesized to take place through the Delphi Method.
The diversification of SEA procedures in the various Member States requires setting a specific context for this model by following the definition of the procedure of a single state. In the present case, the chosen context is the Italian legislation, with its corresponding SEA procedure. However, this evaluation model is designed to be used in the final phase of any decisional process with the presence of a decision-maker (Competent Authority), various experts or stakeholders, and known objectives. Hence, its use is not limited to the Italian case, but rather extended, at the very least, to the European Union cases.
In order to assess the operational applicability of the proposed model, it has been used for the final assessment of an urban plan focused on the development of a thermal center in the Province of Viterbo, whose SEA procedure has been suspended at a preliminary evaluation phase.
In the following, Section 2 “Materials and Methods” initially covers the analysis of national SEA codifications, with reference to the Italian case. Then, a literature review on the experimental evaluation tools for SEA is proposed. Finally, the section contains an in-depth examination of MCDA tools in relation to SEA, which shows the suitability of ELECTRE methods, and in particular of ELECTRE III, detailed in the last paragraph of Section 2. Section 3 “Experimentation and Results” outlines the proposed innovation, consisting of an integration between ELECTRE III and the Delphi Method, and presents its application to the case study and the results obtained. Section 4 “Discussion and Conclusions” analyzes the results obtained and tests their validity; finally, conclusions regarding the present work are drawn.

2. Materials and Methods

2.1. The SEA Procedure, as Codified in a Member State: The Italian Case

As outlined in Section 1, there is no univocal SEA procedure for all the Member States; the present work refers to its Italian codification, established by the Legislative Decree n. 152/2006 (Environmental Code or EC), which prescribes:
  • a first procedure, defined as SEA screening (art. 12 of Legislative Decree n. 152/2006), which evaluates plans and programs with a limited scope before their approval. This “synthetic” procedure aims to: (i) exempt plans and programs without significant environmental impacts, or those that can be generally regarded as sustainable, from the full SEA procedure, and perform their environmental validation; (ii) alternatively, if some elements require additional verifications, submit them to a full SEA procedure (art. 13 of Legislative Decree n. 152/2006);
  • a second procedure, referred to as the “full” SEA procedure, represented by an in-detail assessment of a plan or program under drafting, or not yet approved. In particular, it is aimed at evaluating its impacts, the possible alternatives, and the planned mitigations. This full procedure is aimed at formulating a provision with indications for the final drafting of the plan or program; if it validates the plan or program without modifications, it provides indications for its actuation.
The phases of the SEA screening and full SEA procedure are outlined below.

2.1.1. SEA Screening

Synthetic Studies

In this initial phase, the proceeding authority (with competence on the approval of the plan and/or program) carries out (synthetic) studies on the possible significant environmental impacts related to the implementation of the plan or program, to be included within a Preliminary Report of Eligibility (PRE).

Consultation of Competent Subjects in Environmental Matters

The PRE is sent to the Competent Authority (with competence on the SEA procedure and the duty of making related decisions), which in turn transmits it to the so-called Competent Subjects in Environmental matters (CSE, public bodies with competences on environmental themes), chosen together with the proceeding Authority, in order to acquire their opinion.

Evaluation

The Competent Authority evaluates the plan and/or program, taking into account the CSEs’ opinions, consulting the proceeding Authority and checking whether the plan and/or program will have significant environmental impacts.

Decision

The Competent Authority issues the final provision, which establishes whether a plan and/or program can be exempted from the full SEA procedure or if it must be subjected to it, according to whether its environmental impacts are not significant overall, or if they must be examined further.

2.1.2. Full SEA Procedure

Synthetic Studies

Regarding the full SEA procedure as well, the first step is represented by the development of synthetic studies on the possible environmental impacts deriving from the implementation of the plan or program, to be included in a Preliminary Environmental Report.

Scoping (Expert Consultation)

The proceeding Authority consults the Competent Authority and the CSEs, in order to define the scope and the level of detail of the information to include in the Environmental Report (ER).

In-Depth Studies

The proceeding Authority is tasked with the drafting of the ER, which is then included in the documentation of the plan or program itself during the process of elaboration and approval. The ER must include the identification, description, and assessment of the significant impacts on the environment and on the cultural heritage deriving from the implementation of the plan and/or program, and the outline of its possible reasonable alternatives, considering its objectives and territorial scope. The information to be provided in the ER is listed in Annex 6 of the EC. The ER must document that the Scoping phase was performed and detail how its contributions were received.

Public Participation

The plan/program proposal, together with the ER, is sent to the Competent Authority and made available to the CSEs and the interested community, allowing them to communicate their observations and provide new or further information and evaluations.

Evaluation

The Competent Authority, after receiving and assessing the documentation and the observations, objections, and suggestions provided during the consultation, performs administrative, technical, and evaluative activities on the plan and/or program. This is carried out in collaboration with the proceeding Authority and is based on the ER and the observations received.

Decision

The Competent Authority, on the basis of the evaluation performed, expresses its decision through a reasoned opinion on the plan and/or program. In collaboration with the Competent Authority, the proceeding Authority opportunely revises the plan and/or program before its approval, taking into account the conclusions outlined in the reasoned opinion and the results of cross-border consultations.

Monitoring

Monitoring ensures the control of significant environmental impacts deriving from the implementation of approved plans and programs, and the verification of the achievement of the established sustainability goals. This allows for the quick identification of negative impacts in order to adopt suitable corrective measures. This activity is performed by the proceeding Authority in collaboration with the Competent Authority, supported by the system of Environmental Agencies and the Italian Institute for Environmental Protection and Research.

2.2. SEA Supporting Evaluation Tools and Techniques

The scientific literature concerning the application of evaluation tools and techniques to the SEA procedure displays several methodologies which are studied and experimented with over time.
Even before the European Directive on SEA, Therivel and Partidario [10] and Noble and Storey [11] used Scenario Analysis as a forecasting tool to include future predictions within SEA. Dom [12] states that Scenario Analysis is the most suitable tool to evaluate future impacts in SEA. Among the various approaches to investigate the future, as detailed in the studies authored by Dreborg [13], Rescher [14], and Makridakis et al. [15], Scenario Analysis performs predictions according to past tendencies and events by introducing the concept of conditional hypotheses at the base of the constant changes in society. Hence, predictions that derive from conditional hypotheses are based partly on the extrapolation of historical data or mechanisms, and partly on scenario hypotheses and results. In this sense, a scenario represents a possible future and can be used in plans or programming activities to research robust strategies among a variety of possible futures, while admitting a correlation between individuals’ behavior and future scenarios. In this regard, Scenario Analysis represents a future modeling tool, and the series of events that determine future scenarios can include political–strategic decisions evaluated through SEA.
Before the European Directive which instituted SEA, some validated experimentations on the use of a checklist concerned EIA. The checklist is widely used in Great Britain and is a tool for quick evaluations which requires an in-depth knowledge of the object to be evaluated in order to identify the impact of a policy, plan, practice, or action on health and well-being, based on the analysis of its potential impact on the equity between given groups of people. In the environmental field, as noted by Pawłat–Zawrzykraj and Podawca [17], the checklist allows for an accurate planning of the control activities aiming to verify the entity of most critical impacts. A similar concept has been expressed by Donnelly et al. [18] and Therivel and Partidario [10], who focused on the measurability of impacts with respect to their verification, while admitting a simplified use of MCDA in SEA.
As early as 2004, Munier [19] identified the potential of MCDA for SEA. MCDA tools allow for evaluating various intervention alternatives, considering heterogeneous judgement criteria and multiple (also diverging) standpoints. Their versatility is now recognized in the literature, thanks to the multiple different approaches adopted [20,21,22,23,24,25,26,27,28,29,30,31,32,33]. Around 100 MCDA methods are reported [21], and attempts to systematize them have been performed by Guitoni, Martel, and Vincke [33], Roy and Bouyssou [28], and Ishizaka and Nemery [21]. The conclusions of these contributions are the following: no single tool can be considered perfect, nor applied to every possible problem; the high number of available procedures offers many different operational opportunities, yet it entails the risk of using unsuitable methods for a given decisional problem.
The specific theme related to the choice of the most suitable MCDA tool for SEA is a key item in the present work. The scientific literature reports several studies on this topic: in this regard, please refer to the Section 2.3, dedicated to the use of MCDA in SEA.
Cost–Benefit Analysis (CBA)—along with economic evaluation methods—is a popular theme in the scientific literature [34,35,36,37,38,39,40]. The first studies on the use of CBA for SEA were carried out by Markandya and Richardson [34], Wood and Dejeddour [35], and Kniesner and Viscusi [36] in the early 1990s. In 1994, the European Commission also recognized CBA among the tools to be used for SEA [37]. CBA attributes a monetary value to positive and negative externalities, which do not have an inherent one. To evaluate impacts without a monetary value, CBA employs auxiliary techniques that hypothesize markets that do not exist, such as willingness-to-pay (WTP) and willingness-to-accept (WTA). This allows for performing economic analyses that provide economic indicators (e.g., ENPV, EROI) and for expressing a judgement on the initiative under evaluation.
Life Cycle Assessment (LCA) is one of the key tools for the actuation of an Integrated Product Policy, and the main operational tool of “Life Cycle Thinking” [41,42,43,44,45,46]. It is an objective method for the evaluation and quantification of the energy and environmental loads associated with a product/process/activity over the full life cycle, from the acquisition of raw materials to the end-of-life. Hence, it is recognized among the suitable tools to be used within SEA. The relevance of this technique is determined by its innovative approach, which allows for evaluating all the phases of a productive process by considering them interrelated and interdependent. In the international framework, LCA methodology is regulated by the ISO 14040 series standards, according to which a life cycle assessment includes the following: the definition of the objective and of the application field of the analysis (ISO 14041), the drafting of the input and output inventory for a given system (ISO 14041), the evaluation of the potential environmental impact related to the input and output (ISO 14042), and, finally, the interpretation of results (ISO 14043). Several EU documents highlight the strategic importance of the adoption of LCA as a base tool and its scientific suitability with respect to the identification of significant environmental aspects. This is clearly expressed in the Green Paper (COM 2001/68/CE) and in the Integrated Product Policy (COM 2003/302/CE), and indirectly suggested by some EU Regulations, i.e., EMAS (Reg. 1221/2009) and Ecolabel (Reg. 66/2010). The application of LCA to SEA allows for an “empiric” evaluation of the product under evaluation, considering the relation with its environment during the life cycle. However, a detailed LCA can be very costly (both in terms of time and expense) and complex. Indeed, several environmental data must be acquired for each phase of the life cycle; moreover, an in-depth knowledge of both the standard methodological aspects and the support tools—such as software and databases—is required. For this reason, many “simplified LCA” tools are being developed, which allow for performing an immediate assessment of the life cycle of products even without the knowledge and resources required to realize a detailed study.
Risk Assessment has been tested in combination with SEA as well. In the scientific literature, this term refers to several typologies of assessments [47]. Regarding environmental aspects, Risk Assessment is performed on pollution or possible accidents. As detailed by Therivel, these factors are significant within SEA [48]. Even before the SEA Directive, many methodologies and protocols were used for the risk assessment of pollution (for example, from chemicals) in relevant international contexts, such as the EU and OECD. These methods allowed for assessing risk exposure according to the nature and size of the exposed elements, while considering the magnitude and the duration of the exposure and evaluating the related effects [49]. When evaluating accidental risks, the key items are represented by the accidental consequences and their frequency. This assessment is usually divided into three parts: danger identification, analysis of the consequences, and frequency estimation. The scientific literature related to risk analysis has developed notably since the early years of the 21st century [50], and the works by Therivel and Wood [51] and Therivel and Partidario [10] provide full scientific validation for its use in SEA.
Concerning SEA, as detailed both in regulatory frameworks and by several scientific studies, among which are those by Naddeo et al., Rega and Baldizzone, Kørnøv and Thissen, Brown and Therivel, Fischer, Sadler et al., Acharibasam and Noble, Björklund, Oppio and Dell’Ovo, and Gauthier et al. [52,53,54,55,56,57,58,59,60,61], a key item is experts’ and stakeholders’ participation.
The interpretation of regulatory frameworks, combined with the literature review—with a particular reference to the above-mentioned studies—shows that this corresponds—in Pretty and Hine’s terms [62]—to functional participation, i.e., the participation of different subjects who aim to reach their goals (in this case, the defense of the environmental category represented). This includes both stakeholders and experts who participate as “institutional delegates” of a Public Authority who is “competent” for a specific environmental component.
Among the various typologies discussed in the literature, the following paragraphs provide a synthetic outline of the most interesting ones—reporting a study by Oppio [63] on SEA regulations—among those supporting experts’ participation (Expert Panel, Delphi Method) and, in general, stakeholders’ participation (Stakeholder Analysis). Another focus is placed on the questionnaire method, which can be effectively integrated with the abovementioned techniques.
The Expert Panel (EP) is a work group comprised of experts with specialization in the themes that characterize the intervention and/or program under assessment. It allows for a rapid formulation of a value judgment on the program and its impacts, taking into account several information sources and each expert’s experiences. In fact, the choice of the experts is aimed at representing different standpoints in an equilibrated and impartial way. According to the available data and to individual knowledge and experiences, the experts are tasked with the formulation of a synthetic judgement on the intervention/program. The validity of the evaluation is guaranteed by the experts’ relevance and scientific reliability. The EP is useful both to express the relevance of an intervention/program and to estimate its impacts; it can also be used to outline and rank the criteria adopted for the assessment of an intervention/program [64].
The Delphi Method (DM) is another evaluation method based on experts’ judgements [65]. Unlike the EP, in DM, experts do not interact with each other directly, but rather through the evaluator who manages the process. The use of DM can be advised when the object of the assessment is complex, and the assessment cannot be performed through brief meetings [66]. Experts’ involvement modalities must follow standard procedures and possibly be implemented through surveys organized by the evaluator. On one hand, the lack of direct meetings among the experts does not favor local subjects and avoids possible conflictual situations; on the other hand, this limits direct confrontations, which might generate new and stimulating solutions [67].
Stakeholder Analysis (SA) can be implemented if each single subject—or group of subjects—has an interest in a given intervention/program [63]. SA aims to identify the interests of different groups with respect to a given program/intervention. The understanding of the social context of an intervention can be achieved through the exploration of the standpoints of the subjects with a specific interest, as well as of those who can affect the result of an intervention. It is a particularly relevant theme in relation to the subjects’ effective involvement and to the actuation of the policies. Stakeholder Analysis is aimed at determining the role played by the stakeholders for the success of a project, the expected positive effects, and the impact produced by the lack of a prior definition of these aspects. Other purposes are the encouragement of stakeholders’ participation, the assessment of the impact of negative responses, and consequently the employment of strategies to reduce them. SA can be integrated with the implementation of surveys, possibly making use of questionnaire analysis. Table 1 proposes a synthesis of some of the evaluation methods employed in the SEA procedure, related to the main evaluation nodes (i.e., phases).
Evaluation and decision phases are particularly relevant, as they are repeated, yet with different levels of detail, both in the SEA screening and the full SEA procedure.
In the following, the use of MCDA in the evaluation phase—which is preliminary to the decision of SEA—will be further detailed. This in-depth analysis appears to be interesting, as the scientific literature contains a significant number of studies on the use of MCDA in SEA. Specifically, a search on the “Scopus” database in the TITLE-ABS-KEY field with the keywords “strategic” AND “environmental” AND “assessment” AND “multi-criteria” found 232 papers; after verifying them by examining their abstracts, only 18 of them were found to explicitly discuss the employment of MCDA in SEA. Hence, it is necessary to carry out a more detailed study on the relationship between the environment, SEA, and MCDA.

2.3. MCDA for Environmental Assessments: Requirements for the Choice of the Most Suitable Technique

The Treaty on the Functioning of the European Union defines the environment as a complex dynamic system comprising the reciprocal interactions between its elements.
This dynamism and interrelation between constitutive components require the use of evaluation tools that can interpret this concept. In other words, they must be able to consider the numerous and heterogeneous components of environmental assessments, and the stakeholders’ many—and often, conflictual—standpoints and judgements. These elements may also have significant variations and oscillations over time.
As partially described in Section 2.2, the possibility to use MCDA in the environmental sector and in SEA has been widely validated by the scientific community.
The scientific literature highlights that the choice of the most suitable MCDA tool depends on the “mechanical” relationship between the method and the typology of decisional problem [21]. Hence, the choice of the MCDA model follows the individuation of the needs it must fulfill, in relation to SEA.
As analyzed in the previous Section 2.1, SEA evaluates the compatibility of plans and programs with the environment and their sustainability. It is not an administrative conformity check, but rather an assessment whose contours are less defined. In technical terms, this consideration foreshadows the suitability of an outranking solution, i.e., a method aiming to identify the degree of validity (not by choice, but by sorting) rather than a score or a best alternative in absolute terms. Moreover, considering the possibility/need to carry out a fuzzy evaluation, input data can have a “medium” level of definition, i.e., their obtainment does not require technically detailed survey campaigns.
The characteristic of interrelation between environmental components implies the need for a general coherency between evaluation criteria [80], to which the concordance principle is associated. In other terms, evaluations must not only consider each criterion separately, but rather the degree of “affinity” between them.
However, it must also be observed that SEA is part of an administrative procedure. In other terms, it includes experts and stakeholders, who are heterogeneous subjects with high-level competences. This means that the evaluation structure must be highly comprehensible and must have a significant, yet limited number of quantitative and qualitative criteria (sub-criteria) and indicators. Evaluations with a large number of criteria (above 60–70) would be risky, considering the above-mentioned coherency problems. Hence, SEA should be implemented with a defined and—as much as possible—low number of criteria (a number between 15 and 30, according to the specific characteristics of the plan or program, seems fitting) and subjects.
Among the main MCDA techniques with a large and consolidated literature, ELECTRE methods appear to fulfill the outlined requirements.
ELECTRE methods [68,69,70] seem to be coherent with environmental assessments, with respect to their specificity. They operate with “medium”-level input data and allow for expressing comparisons by measuring the “outranking” between each solution and the other ones, rather than in terms of net preference or indifference between alternatives. This operation is performed according to a principle of concordance/discordance, i.e., by verifying the concordance of reasons/criteria in favor of one alternative over another (concordance condition) and the absence of a strong degree of discordance between evaluations, as this factor could invalidate the concordance (veto expression, discordance condition).
From an operational standpoint, the evaluations performed with ELECTRE methods allow for finding “preferable” alternatives among various ones, with the following scenarios: (i) an alternative with higher performance than the other ones for the majority of judgement criteria/sub-criteria; (ii) a preferable alternative with lower performances than the other ones, considered acceptable due to the low entity of the difference with them. In other words, the ELECTRE method allows for avoiding the compensation between impacts that often occurs in other models (pairwise comparison (AHP; ANP; MACBETH); compensatory aggregation (PROMETHEE; WSM)). In fact, in environmental assessments, this might lead to overlooking significant negative impacts that must necessarily be considered.
The search on the Scopus database with respect to the use of ELECTRE methods in SEA found 12 papers by typing “strategic” AND “environmental” AND “assessment” AND “ELECTRE” in the TITLE-ABS-KEY search field; among these, only 1 paper [81] applied ELECTRE III in an SEA-related thematic application.
Despite the consistency between this type of model and environmental assessments, these models still have a limited diffusion in the related literature. Hence, it seems interesting to provide a detailed study of the ELECTRE methods, envisaging their possible use in the evaluation phase of SEA.

2.4. ELECTRE and ELECTRE III Methods

ELECTRE is the acronym for ELimination Et Choix Traduisant la REalité and represents a family of multi-criteria methods (ELECTRE I, II, III, IV, IS, TRI) based on the partial aggregation of preferences by outranking. They were developed by B. Roy in the late 1960s. The various versions of ELECTRE differ in their approach to decisional problems (the first one outputs choices, the others provide a ranking), in the nature of the employed data and the typology of criteria (the first and the second one adopt true criteria, with the former using cardinal scales and the latter using both cardinal and ordinal scales; the third and the fourth one use pseudo-criteria and cardinal scales with thresholds), and in the outranking modeling framework [71,72,73,74,75,76,77,78,79].
In general, ELECTRE methods are structured in two phases:
  • outranking modeling, which includes the pairwise comparison of the actions on each criterion and the aggregation of the obtained results through the construction of indexes—or the implementation of tests—to determine concordance/discordance conditions, at the base of the concept of outranking itself;
  • alternative ranking according to the investigated problem, and to the modeled decision rule.
The choice between the different methods is made according to the nature of the available data, the usable criteria, and the operating decision rule.
The third version of the model represents the first attempt at fuzzy outranking in the literature, introduced in 1978. The ELECTRE III model requires both the input data of the problem (alternatives and criteria) and the decision maker’s preferences; the latter are represented by the weight and three threshold values for each criterion. The weight associated with each criterion is a relative importance coefficient; it is one of the most delicate parts of the model, as it is the most direct and explicit expression of decisional preferences and can significantly influence the results of the method. The threshold values are implemented to reduce two types of risks: on one hand, the semantic distinction of two situations that correspond to very close, almost equivalent conditions and evaluations; on the other hand, the unification of two different preferential situations. In detail: the indifference threshold (qj) expresses the maximum difference between values of the j criterion that the decision maker does not consider significant; the preference threshold (pj) expresses the minimum difference between values of the j criterion beyond which the decision maker strictly prefers one alternative to the other; the veto threshold (vj) expresses the minimum difference between the values of the j criterion beyond which the decision maker considers that the gap between scores can no longer be compensated for by the performances of the other criteria.
This method differs from ELECTRE I and II mainly because it uses pseudo-criteria, i.e., criteria to which a degree of uncertainty in information and preferences can be associated. Hence, in the first phase of the method a “fuzzy” outranking is applied, associating a δ(a,a’) characteristic function with each relation between ordered pairs of actions. The function expresses the degree of credibility of the outranking relation [72].
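By way of illustration, the ingredients of an ELECTRE III pseudo-criterion (weight, the three thresholds, and the direction of the objective function) can be collected in a small record. The following Python sketch is purely illustrative; all names and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PseudoCriterion:
    """One possible encoding of an ELECTRE III pseudo-criterion (illustrative)."""
    name: str
    weight: float            # relative importance coefficient
    it: float                # indifference threshold (q_j in the literature)
    pt: float                # preference threshold (p_j)
    vt: float                # veto threshold (v_j)
    increasing: bool = True  # objective function: True = higher is better

# A decreasing criterion: lower noise levels yield more positive judgements.
noise = PseudoCriterion("noise level [dB]", weight=0.2,
                        it=1.0, pt=3.0, vt=10.0, increasing=False)
print(noise)
```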
The experimentation detailed in Section 3 uses ELECTRE III for the following reasons:
(i)
the relation between environmental impacts can be ambiguous; in other words, performance differences may not univocally correspond to differences in terms of impacts; ELECTRE III uses pseudo-criteria, i.e., criteria related to uncertainty in information and preference. Indifference, preference, and veto thresholds allow for solving the ambiguity of environmental assessments;
(ii)
unlike the first two ELECTRE methods, ELECTRE III uses a fuzzy outranking model, which associates a characteristic function (built on the indifference, preference, and veto thresholds) with each ordered pair of alternatives. That function expresses the degree of credibility of the outranking relation, which can vary within the [0;1] range;
(iii)
it allows for considering criteria expressed in heterogeneous scales, different from each other;
(iv)
even though there is much literature on the ELECTRE and ELECTRE III methods, as outlined in Section 2.2, their applications in SEA and in the environmental field in general are quite limited. That makes this experimentation inherently relevant, apart from its results.
One final consideration shows that a specific declination of ELECTRE III for SEA is required. In fact, these assessments may concern both the validation of one alternative (plan/program) among different possible ones, and the validation of a single alternative, in the absence of other possibilities. This condition implies the loss of the principle of “comparison”, which underlies not only ELECTRE, but any MCDA method.
The proposed model suggests a “phantom” operational declination of the ELECTRE model, i.e., a model that introduces one alternative (or more) that does not exist in reality, made up by considering “desirable” performances. This represents an innovative element, as ELECTRE III must be integrated with a further, preliminary evaluation method aimed at defining the “phantom” alternative. Tools for the inclusion of expert and non-expert subjects in evaluation/decisional processes—such as the Delphi Method, Expert Panel, and Stakeholder Analysis—can be used in this context, considering the nature of SEA as a “participated” procedure.
Finally, some hints have been obtained from the model of the TOPSIS method [82,83]. In the technical phase of judgement aggregation, this method constructs a geometric linear dimension for the collocation of the evaluation results, delimited by an ideal and a non-ideal solution. Similarly, the “phantom” determines the acceptable minimum category for the validation of the alternative.
Hence, decisions can be made according to the results of ELECTRE III on the basis of the relative classification of the alternatives under assessment and the phantom alternative. If the alternative—comprised of the plan/program evaluated by SEA—is in the same class as the phantom alternative, then it can be considered sustainable; otherwise, it must be revised.
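A minimal sketch of this decision rule, assuming the final pre-order produced by ELECTRE III is available as an ordered list of classes (labels are hypothetical; “pop” denotes the phantom planning option):

```python
def sea_decision(final_preorder, plan, phantom):
    """'Phantom' decision rule: the plan is validated only if the final
    pre-order places it in the same class as the phantom alternative."""
    for cls in final_preorder:
        if plan in cls and phantom in cls:
            return "sustainable"
        if plan in cls or phantom in cls:
            return "to be revised"  # plan and phantom fall in different classes
    return "to be revised"

# Hypothetical final pre-order (best class first) from the two distillations.
print(sea_decision([{"po2"}, {"po1", "pop"}], plan="po1", phantom="pop"))
```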

3. Experimentation and Results

The experimentation consists of the application of the ELECTRE III method, integrated with the Delphi Method and opportunely articulated and calibrated to be used in the final evaluation phase of the SEA procedure, to a case study: the SEA procedure of the “Urban Plan for the realization of the new thermal center in Viterbo”, a provincial capital in the Latium region of Italy.
The SEA of this urban plan has been ongoing since 2015, and it is currently (2022) suspended, despite having reached the final “decision” phase. In fact, the Ministry of Cultural Heritage recently performed a revision (2019) of the urban planning restrictions on a wide portion of the territory (around 1600 ha) of the City of Viterbo. As a consequence, even though the General Regulatory Plan of Viterbo has been in force since 1974, its dispositions can no longer be implemented, as (because of the new planning restrictions) the condition of landscape compatibility is no longer fulfilled.
Hence, the related SEA procedure is destined to be canceled without reaching a final decision. The case study is interesting for two reasons: in the first place, it allows for testing the operational capability of the proposed model (as articulated here); secondly, it provides results on the magnitude of the environmental impacts of an urban plan with a strategic value for the development of the city, yet no longer viable.
In this way, the experimentation simulates the possible results of the SEA procedure, which will be soon cancelled.
This application of the ELECTRE III model includes some innovations, with respect to the traditional and diffuse implementation mechanics observed in the literature.
The innovations consist of the following:
  • the addition of an “ideal”, “phantom” alternative, constructed through the consultation with expert subjects in environmental matters (procedural phase within SEA). In fact, the environmental field is highly specific, so subjects with expertise in it are required to define evaluation criteria and indicators, as well as their weight; this definition must be performed on the basis of the available data (criteria and indicators that require unavailable data would be meaningless). In the present case, the “phantom” solution is constructed on the basis of the satisfying performances defined by the Competent Authority or by the Competent Subjects in Environmental Matters;
  • the reasoned definition of indifference, preference, and veto thresholds, through the consultation of expert subjects in environmental matters; these thresholds can also be defined with a percentage deviation (higher/lower), in comparison to the “ideal” performance of each criterion.
The proposed innovations, and the need to define criteria and indicators, require the integration of the Delphi Method within ELECTRE III. It represents a sound balance between participation and rapidity: it includes the consultation of the experts who participate in SEA, while requiring less time than Stakeholder Analysis. The addition of the Delphi Method is aimed at the definition of (i) criteria, indicators, and objective functions; (ii) criterion weights; (iii) the “phantom” alternative; (iv) indifference, preference, and veto thresholds.
The proposed version of the ELECTRE III model is divided into the following phases:
  • Phase 1—Delphi Method (definition of criteria, indicators and objective functions, weights, indifference, preference, and veto thresholds)
  • Phase 2—Concordance and discordance matrices
  • Phase 3—Aggregate concordance matrix
  • Phase 4—Credibility matrix
  • Phase 5—Distillation and pre-orders

3.1. The Experimental Model: Phases

The present model can be used when making a decision concerning the validity of an alternative in the framework of a SEA procedure or the choice of the best alternative among various ones. Hence, (real) alternatives must be defined before the implementation of the model. Considering that SEA follows the whole approval process of a plan, other alternatives can be represented by different plan/program hypotheses. Moreover, the implementation of the model requires data regarding the performances of the alternatives.

3.1.1. Phase 1—The Delphi Method

In the implementation of the Delphi Method (DM), the expert subjects are represented by the (institutional) Competent Authority on SEA, and the Competent Subjects in Environmental Matters designated by said authority.
The DM can be carried out by consulting the following: (i) only the subjects associated with the Competent Authority; (ii) both the subjects associated with the Competent Authority and the Competent Subjects in Environmental Matters; (iii) only the Competent Subjects in Environmental Matters, while the Competent Authority defines relevance indices (weights) for the experts.
The choice between the three possibilities affects the assessment results, as these in turn depend on the results of the DM. It is made by the Proceeding Authority and must take into account the specific characteristics of the plan/program for which SEA is carried out.
The first phase of the DM takes place by asking the expert subjects (Competent Authority and/or Competent Subjects in Environmental Matters) to choose significant and exhaustive criteria and the related indicators (Formula (1)):
$E_j = \{c_1, c_2, c_3, \ldots, c_n\} \subseteq C$
where C represents the set of the possible significant and coherent criteria that can be considered in SEA. For each criterion, the objective function (of) must be indicated (↑ indicates increasing satisfaction at increasing performance; ↓ indicates increasing satisfaction at decreasing performance; the of can also be defined as a range of values). If the experts indicate different ofs (a very rare situation), the prevailing one is chosen.
Each expert subject assigns a weight to each criterion (Formula (2)):
$c_j = \left(w_j^{ex_j}\right)$
in order to obtain a vector of normalized weights $p = \{p_j : j \in J\}$, so that (Formula (3)):
$\forall j: \; 0 \le w_j \le 1 \quad \text{and} \quad \sum_{j \in J} w_j = 1$
Different expert subjects take part in the definition of weights, yet there must be a single weight for each criterion; hence, the weights chosen by the single experts could be processed with an arithmetic mean (Formula (4)):
$w = \frac{1}{n} \sum_{j=1}^{n} w_j$
or with a weighted arithmetic mean (Formula (5)):
$w = \frac{\sum_{j=1}^{n} w_j \, r(ex_j)_j}{\sum_{j=1}^{n} r(ex_j)_j}$
where $r(ex_j)_j$ represents the relevance factor of the j-th expert (among the experts), whose attribution is an exclusive prerogative of the Competent Authority for SEA.
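By way of illustration, the aggregation in Formulas (3)–(5) can be sketched in a few lines of Python (a minimal sketch; numpy is assumed, and all names and values are hypothetical):

```python
import numpy as np

def aggregate_weights(expert_weights, relevance=None):
    """Aggregate per-expert criterion weights (Formulas (4)-(5)) and
    normalize them so that they sum to one (Formula (3)).

    expert_weights: array of shape (n_experts, n_criteria);
    relevance: optional relevance factors r(ex_j), one per expert,
               attributed by the Competent Authority.
    """
    w = np.asarray(expert_weights, dtype=float)
    if relevance is None:
        agg = w.mean(axis=0)  # plain arithmetic mean, Formula (4)
    else:
        r = np.asarray(relevance, dtype=float)
        agg = (w * r[:, None]).sum(axis=0) / r.sum()  # weighted mean, Formula (5)
    return agg / agg.sum()  # normalization, Formula (3)

# Three experts weighting four criteria; the third expert counts twice as much.
print(aggregate_weights([[0.4, 0.3, 0.2, 0.1],
                         [0.3, 0.3, 0.2, 0.2],
                         [0.5, 0.2, 0.2, 0.1]],
                        relevance=[1.0, 1.0, 2.0]))
```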
After defining criteria, indicators, and weights, the “phantom” alternative(s) must be constructed.
For each of the evaluation criteria, each expert must define a performance through an indicator that represents a desirable outcome and an index of impact sustainability (Formula (6)):
$c_j = \left(dp_j^{ex_j}\right)$
The phantom alternative must be defined by a single desirable performance for each criterion; yet, each of them displays several desirable performances (one for each expert subject) at this stage. Desirable performances should be defined by considering the territorial context in which the plan/program is developed; therefore, their parameters should also be compatible with all other plans/programs under development in the same territorial context. In this sense, the compatibility of the phantom alternative is also related to the cumulative impacts. Hence, an arithmetic mean (Formula (4)) or a weighted arithmetic mean (Formula (5)) must be carried out, and the same equations for the weights are applied by substituting the w parameter with the dp parameters.
Alternatively, different phantom alternatives—up to one alternative for each expert—can be considered. The ELECTRE III mechanic allows this option.
Thus, the phantom alternative (or the various phantom alternatives) is considered as one of the evaluation alternatives, along with the real alternatives, and is added to the finite set of alternatives—here, planning options, $po = \{po_i : i \in I\}$—evaluated on the basis of a set of pseudo-criteria, $c = \{c_j : j \in J\}$.
The last phase of the DM is the threshold definition; the experts define three thresholds (itj, ptj, vtj) on the Ej scale of each criterion, in compliance with the following condition (Formula (7)):
$0 \le it_j \le pt_j \le vt_j$
where itj, ptj, vtj are, respectively, the indifference, preference, and veto thresholds.
Hence, it results that (Formulas (8)–(10)):
$c_j = \left(it_j^{ex_j}\right)$
$c_j = \left(pt_j^{ex_j}\right)$
$c_j = \left(vt_j^{ex_j}\right)$
As with weights and desirable performances, each threshold must be calculated through an arithmetic mean (Formula (4)) or a weighted arithmetic mean (Formula (5)) if there are multiple thresholds for each of the criteria; in this case, the w parameter in the formulas is substituted with the it, pt, and vt parameters.
The thresholds can also be defined as the admitted percentage deviation from the desirable performances of the phantom planning option.
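Under the same assumptions as the previous sketch, the construction of the phantom performances (Formula (6)) and of percentage-deviation thresholds compliant with Formula (7) might look as follows (all values hypothetical):

```python
import numpy as np

# Desirable performances stated by three experts on four criteria (Formula (6)).
desired = np.array([[55.0, 0.8, 120.0, 3.0],
                    [60.0, 0.7, 100.0, 2.5],
                    [50.0, 0.9, 110.0, 3.5]])

# One desirable performance per criterion: the arithmetic mean (Formula (4)).
phantom = desired.mean(axis=0)

# Thresholds as admitted percentage deviations from the phantom performances;
# 5% < 15% < 40% guarantees 0 <= it_j <= pt_j <= vt_j (Formula (7)).
it, pt, vt = 0.05 * phantom, 0.15 * phantom, 0.40 * phantom

print(phantom, it, pt, vt, sep="\n")
```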

3.1.2. Phase 2—Evaluation Matrix and Concordance and Discordance Matrices

The DM has allowed for the construction of the phantom alternative, hereinafter referred to as the phantom planning option (pop); together with the available data on the real alternatives, hereinafter called planning options (pon), it allows for drafting the evaluation matrix (Table 2), i.e., a matrix containing all the input data required for the evaluation.
After defining evaluation criteria and indicators, as well as outlining the framework of the actors involved in the evaluation, concordance (cnc) and discordance (dsc) marginal indices must be determined for each criterion. Concordance (Table 3) and discordance (Table 4) matrix models are herein reported.
For each pair of planning options (po, po’) and for each criterion (c), the concordance (cnc) marginal index is defined through the comparison between the evaluation deviation values $g_j(po) - g_j(po')$ and the $it_j$ and $pt_j$ thresholds. There are different cases according to whether the criterion is increasing (higher values of the criterion are related to more positive judgements on the alternative) or decreasing (higher values are related to more negative judgements). If the objective function of the criterion is increasing, 3 cases can occur (Formulas (11)–(13)):
$g_j(po') \le g_j(po) + it_j \;\Rightarrow\; cnc_j(po, po') = 1$
implying that the two planning options under comparison are indifferent;
$g_j(po') \ge g_j(po) + pt_j \;\Rightarrow\; cnc_j(po, po') = 0$
implying that the planning option po’ outranks the planning option po;
$g_j(po) + it_j < g_j(po') < g_j(po) + pt_j$
implying that an interpolation must be performed, and that the planning option po’ slightly outranks the planning option po.
Among the possible interpolations, linear interpolation takes the following form (Formula (14)):
$cnc_j(po, po') = \frac{pt_j - \left[g_j(po') - g_j(po)\right]}{pt_j - it_j}$
Conversely, if a criterion has a decreasing objective function, 3 different cases can take place (Formulas (15)–(17)):
$g_j(po') \ge g_j(po) - it_j \;\Rightarrow\; cnc_j(po, po') = 1$
implying that the two planning options under comparison are indifferent;
$g_j(po') \le g_j(po) - pt_j \;\Rightarrow\; cnc_j(po, po') = 0$
implying that the planning option po’ outranks the planning option po;
$g_j(po) - pt_j < g_j(po') < g_j(po) - it_j$
implying that an interpolation must be performed, and that the planning option po’ slightly outranks the planning option po.
In this case, a linear interpolation takes the following form (Formula (18)):
$cnc_j(po, po') = \frac{g_j(po') - \left[g_j(po) - pt_j\right]}{pt_j - it_j}$
This allows for obtaining a concordance matrix for each of the considered criteria; the elements of each matrix are the concordance coefficients between all the pairs of alternatives, with respect to the considered criterion.
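For instance, Formulas (11)–(18) can be condensed into a single marginal-concordance routine, with the direction of the objective function handled by a flag. The following Python sketch is a minimal illustration (names and values hypothetical):

```python
def marginal_concordance(g_po, g_po2, it, pt, increasing=True):
    """Marginal concordance cnc_j(po, po') for one criterion (Formulas (11)-(18)).

    g_po, g_po2: performances of po and po' on criterion j;
    it, pt: indifference and preference thresholds of criterion j.
    """
    # Performance gap of po' over po, oriented by the objective function.
    gap = (g_po2 - g_po) if increasing else (g_po - g_po2)
    if gap <= it:
        return 1.0                  # the two options are indifferent
    if gap >= pt:
        return 0.0                  # po' outranks po on this criterion
    return (pt - gap) / (pt - it)   # linear interpolation, Formulas (14)/(18)

print(marginal_concordance(100.0, 104.0, it=2.0, pt=10.0))  # 0.75
```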
Marginal discordance indices are determined through an analogous process, with the introduction of the veto threshold.
Again, if the objective function of the criterion is increasing, 3 cases can occur (Formulas (19)–(21)):
$g_j(po') \le g_j(po) + pt_j \;\Rightarrow\; dsc_j(po, po') = 0$
implying that the two planning options under comparison are indifferent;
$g_j(po') \ge g_j(po) + vt_j \;\Rightarrow\; dsc_j(po, po') = 1$
implying that the planning option po’ cannot outrank the planning option po;
$g_j(po) + pt_j < g_j(po') < g_j(po) + vt_j$
implying that an interpolation must be performed, and that the planning option po’ slightly outranks the planning option po.
Again, a linear interpolation, chosen among the various possible techniques, takes the form (Formula (22)):
$dsc_j(po, po') = \frac{\left[g_j(po') - g_j(po)\right] - pt_j}{vt_j - pt_j}$
Conversely, if a criterion has a decreasing objective function, 3 different cases can take place (Formulas (23)–(25)):
$g_j(po') \ge g_j(po) - pt_j \;\Rightarrow\; dsc_j(po, po') = 0$
implying that the two planning options under comparison are indifferent;
$g_j(po') \le g_j(po) - vt_j \;\Rightarrow\; dsc_j(po, po') = 1$
implying that the planning option po outranks the planning option po’;
$g_j(po) - vt_j < g_j(po') < g_j(po) - pt_j$
implying that an interpolation must be performed, and that the planning option po’ slightly outranks the planning option po.
In this case, a linear interpolation takes the form (Formula (26)):
$dsc_j(po, po') = \frac{\left[g_j(po) - g_j(po')\right] - pt_j}{vt_j - pt_j}$
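Analogously, Formulas (19)–(26) reduce to a single marginal-discordance routine; the sketch below mirrors the previous one (again, names and values are hypothetical):

```python
def marginal_discordance(g_po, g_po2, pt, vt, increasing=True):
    """Marginal discordance dsc_j(po, po') for one criterion (Formulas (19)-(26)).

    pt, vt: preference and veto thresholds of criterion j.
    """
    gap = (g_po2 - g_po) if increasing else (g_po - g_po2)
    if gap <= pt:
        return 0.0                  # no discordance on this criterion
    if gap >= vt:
        return 1.0                  # veto: full discordance
    return (gap - pt) / (vt - pt)   # linear interpolation, Formulas (22)/(26)

print(marginal_discordance(100.0, 125.0, pt=10.0, vt=40.0))  # 0.5
```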

3.1.3. Phase 3—Aggregate Concordance Matrix

After obtaining the J concordance matrices and the J discordance matrices of size I × I, the aggregate concordance matrix must be calculated. Its elements are the sums of the marginal concordance indices, weighted by the weights assigned to the criteria (Formula (27)):
a c n c ( p o , p o ) = j J w j c j ( p o , p o )
The aggregate concordance matrix always has size I×I; its standard form is reported below in Table 5.
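As an operational sketch, assuming the J marginal concordance matrices are stacked in a NumPy array and that the weights are normalized by their sum (consistent with the reconstruction of Formula (27) and with the values in Table 11):

```python
import numpy as np

# Sketch of Formula (27): the aggregate concordance is the
# weight-normalized sum of the J marginal concordance matrices.
def aggregate_concordance(cnc: np.ndarray, w: np.ndarray) -> np.ndarray:
    # cnc has shape (J, I, I); w has shape (J,)
    return np.tensordot(w, cnc, axes=1) / w.sum()

# Example with J = 2 criteria and I = 2 planning options
cnc = np.array([[[1.0, 0.5], [0.0, 1.0]],
                [[1.0, 1.0], [0.5, 1.0]]])
print(aggregate_concordance(cnc, np.array([7.5, 2.5])))
```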

3.1.4. Phase 4—Credibility Matrix

The credibility matrix is calculated on the basis of the aggregate concordance matrix and of the single discordance matrices; its elements are obtained as follows (Formula (28)):
$$\forall j \in J: \; dsc_j(po, po') = 0 \;\Rightarrow\; \delta(po, po') = acnc(po, po')$$
Conversely, if (Formula (29)):
$$\exists j \in J: \; dsc_j(po, po') > 0$$
then (Formulas (30) and (31)):
$$\forall j \in J: \; dsc_j(po, po') < acnc(po, po') \;\Rightarrow\; \delta(po, po') = acnc(po, po')$$
$$\exists j \in J: \; dsc_j(po, po') \geq acnc(po, po') \;\Rightarrow\; \delta(po, po') = acnc(po, po') \cdot \prod_{j^* \in J^*} \frac{1 - dsc_{j^*}(po, po')}{1 - acnc(po, po')}$$
where J* denotes the set of criteria j* for which dsc_{j*}(po, po') ≥ acnc(po, po').
A standard credibility matrix is reported below in Table 6.
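A sketch of Formulas (28)–(31), using the array shapes of the previous sketch; the treatment of the degenerate case acnc = 1 is an added safeguard, not stated in the original formulation.

```python
import numpy as np

# Sketch of Formulas (28)-(31): credibility equals the aggregate
# concordance unless some marginal discordance reaches it, in which
# case the attenuating product over the offending criteria J* applies.
def credibility(acnc: np.ndarray, dsc: np.ndarray) -> np.ndarray:
    I = acnc.shape[0]
    delta = acnc.copy()
    for a in range(I):
        for b in range(I):
            if a == b:
                continue
            c = acnc[a, b]
            over = dsc[:, a, b][dsc[:, a, b] >= c]   # the set J*
            if over.size == 0:
                continue                 # Formulas (28)/(30)
            if c >= 1.0:
                delta[a, b] = 0.0        # added safeguard (active veto)
            else:                        # Formula (31)
                delta[a, b] = c * np.prod((1.0 - over) / (1.0 - c))
    return delta

acnc = np.array([[1.0, 0.8], [0.6, 1.0]])
dsc = np.array([[[0.0, 0.9], [0.0, 0.0]]])     # J = 1 criterion
print(credibility(acnc, dsc))   # delta(0,1) = 0.8*(1-0.9)/(1-0.8) = 0.4
```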

3.1.5. Phase 5—Net Outranking, Distillation, and Pre-Orders

As previously analyzed, ELECTRE III has been widely discussed in the literature, and various operational versions have been devised. This research considers the original structure proposed by Roy with Enea's operational version [72].
The last phase of the ELECTRE III model is aimed at the construction of the final pre-order, i.e., the final ranking of alternatives—planning options, in the present case—by employing a distillation algorithm. Distillation is a procedure through which planning options are extracted from the credibility matrix and then ranked.
First, a discrimination threshold s(δ) is determined, representing the maximum distance between two credibility values for them to be considered within the same order of magnitude. This threshold is the starting point for constructing the net outranking, from which the alternatives are then extracted and ranked.
From an operational standpoint, two distillation algorithms are applied: a descending one, which extracts the alternatives from best to worst, and an ascending one, which extracts them from worst to best. This results in two pre-orders, and their intersection determines the final ranking. The procedure is detailed in the following.
Concerning the extraction of the alternatives, the maximum credibility degree δ0 in the credibility matrix is equal to (Formula (32)):
$$\delta_0 = \max_{(po, po') \in CM_i} \delta(po, po')$$
which represents the maximum among the values δ(po, po') for the i-th class (the credibility matrix for the i-th class is indicated by CMi). On its basis, only the values δ(po, po') with sufficient proximity to δ0 are considered. Next, the discrimination threshold s(δ) is considered; it represents the minimum difference between two credibility values to which the decision maker attributes significance, and it varies linearly with δ0 (Formula (33)):
$$s(\delta_0) = \alpha \cdot \delta_0 + \beta$$
with α and β generally taking the following values (Formulas (34) and (35)):
$$\alpha = -0.15$$
$$\beta = 0.30$$
$\delta'_0$ can therefore be calculated (Formula (36)):
$$\delta'_0 = \delta_0 - s(\delta_0)$$
It represents the lowest value at which the credibility degree of the outranking relation is considered significant by the decision maker, and it is used for the construction of preference classes.
The following step is the construction of a matrix for the synthesis of the outranking relations between alternatives, taking into account the restrictions imposed by the discrimination threshold, and expressing the so-called net outranking with Boolean variables.
The net outranking matrix is a Boolean matrix T (i.e., a binary (0–1) matrix), constructed as follows (Formula (37)):
$$T(po_i, po_j) = \begin{cases} 1 & \text{if } \delta(po_i, po_j) \geq \delta'_0 \;\text{and}\; \delta(po_i, po_j) - \delta(po_j, po_i) \geq \delta'_0 \\ 0 & \text{otherwise} \end{cases}$$
where 1 indicates that a planning option outranks or is at least equal to another planning option, while 0 indicates that a planning option is outranked by another planning option.
A standard Boolean matrix is reported below in Table 7.
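The following sketch condenses Formulas (32)–(37); it assumes α = −0.15 and β = 0.30 and reads both conditions of Formula (37) as tests against δ′0, which reproduces the all-ones Boolean matrix of Table 13 when applied to the credibility matrix of Table 12. The exact tie-handling in the study's implementation may differ.

```python
import numpy as np

ALPHA, BETA = -0.15, 0.30                 # Formulas (34)-(35)

# Sketch of Formulas (32)-(37): derive the discrimination threshold
# from the maximum off-diagonal credibility, then build T.
def net_outranking(delta: np.ndarray) -> np.ndarray:
    off_diag = ~np.eye(delta.shape[0], dtype=bool)
    d0 = delta[off_diag].max()            # Formula (32)
    s = ALPHA * d0 + BETA                 # Formula (33)
    d0_prime = d0 - s                     # Formula (36)
    T = ((delta >= d0_prime) &
         (delta - delta.T >= d0_prime)).astype(int)   # Formula (37)
    np.fill_diagonal(T, 0)
    return T

# Applied to the credibility matrix of Table 12 (an identity matrix),
# this yields the all-ones off-diagonal matrix of Table 13.
print(net_outranking(np.eye(4)))
```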
The qualification q(po_i) of a planning option po_i is herein defined as the difference between the number bsrc(po_i) of planning options outranked by po_i and the number src(po_i) of planning options that outrank it (Formula (38)):
$$q(po_i) = bsrc(po_i) - src(po_i)$$
In operational terms, it is simply the difference between the sum of the values in the row and the sum of the values in the column, for each alternative in the matrix T.
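In code, the qualification of Formula (38) reduces to row and column sums of T.

```python
import numpy as np

# Sketch of Formula (38): row sums count the options outranked by
# po_i, column sums count the options that outrank po_i.
def qualification(T: np.ndarray) -> np.ndarray:
    return T.sum(axis=1) - T.sum(axis=0)

# Example on an all-ones off-diagonal matrix such as Table 13:
T = 1 - np.eye(4, dtype=int)
print(qualification(T))   # [0 0 0 0]: no option discriminates another
```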
Hence, the descending distillation algorithm can be applied; it classifies the actions based on the maximum qualification, as expressed by (Formula (39)):
$$q^+ = \max_{po_i \in PO_i} q(po_i)$$
The result is the following subset of POi (Formula (40)), which represents the set of the planning options under assessment:
$$D_1^+ = \{ po_i \in PO_i : q(po_i) = q^+ \}$$
$D_1^+$ represents the first descending distillate; each class $C_i^+$ is constructed starting at the top from this entry. If $D_1^+$ contains only one alternative, then $C_i^+ = D_1^+$, and the process outlined above is repeated on the set of the remaining options. Otherwise, the algorithm is applied inside the set $D_1^+$, generating a sub-distillation, until a single alternative remains. The process is then repeated on the remaining planning options and is completed only when all of the elements of the set PO have been attributed to a class.
The ascending distillation process is similar to the previous case. However, the selection is performed according to the minimum qualification (Formula (41)):
$$q^- = \min_{po_i \in PO_i} q(po_i)$$
In this case as well, the result is a subset of POi (Formula (42)):
$$D_1^- = \{ po_i \in PO_i : q(po_i) = q^- \}$$
In this case, $D_1^-$ is the first ascending distillate, and each class $C_i^-$ is constructed according to an ascending process.
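Both distillation loops can be sketched as follows; unlike the full algorithm, ties inside a distillate are kept as a single class instead of triggering a sub-distillation, which is sufficient to illustrate the extraction logic.

```python
import numpy as np

# Simplified sketch of descending/ascending distillation: repeatedly
# extract, from the remaining options, those with maximum (descending)
# or minimum (ascending) qualification on the reduced matrix.
def distill(T: np.ndarray, descending: bool = True) -> list:
    remaining = list(range(T.shape[0]))
    classes = []
    while remaining:
        sub = T[np.ix_(remaining, remaining)]
        q = sub.sum(axis=1) - sub.sum(axis=0)      # qualification
        target = q.max() if descending else q.min()
        distillate = [po for po, qi in zip(remaining, q) if qi == target]
        classes.append(distillate)
        remaining = [po for po in remaining if po not in distillate]
    return classes   # best-to-worst if descending, worst-to-best otherwise

T = 1 - np.eye(4, dtype=int)
print(distill(T))        # [[0, 1, 2, 3]]: a single class, as in Table 14
```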
After obtaining the two pre-orders $P(PO)^+$ and $P(PO)^-$ from the distillation algorithms, the final ranking must be determined.
An "intersection" process can be used to define the final ranking. This process was first proposed in relation to PROMETHEE, but it can be applied to ELECTRE III as well. The final ranking is based on the following three rules: (i) an action in the final ranking cannot be preferred to another one unless the former is preferred to the latter in one of the two pre-orders $P(PO)^+$ or $P(PO)^-$ and preferred to, or indifferent with, the latter in the other one; (ii) two actions cannot occupy the same position in the final ranking unless they belong to the same class in both the descending and ascending rankings; (iii) two actions are incomparable in the final ranking if one is preferred to the other in one of the rankings (descending or ascending), but the other is preferred in the other ranking.
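These three rules can be sketched as a pairwise comparison of class positions in the two pre-orders; both pre-orders are assumed to be supplied best-to-worst, so the ascending distillate list must be reversed first.

```python
# Sketch of intersection rules (i)-(iii); `desc` and `asc` are lists of
# classes ordered best-to-worst (reverse the ascending distillation
# output before calling).
def position(classes: list, po: int) -> int:
    return next(r for r, cls in enumerate(classes) if po in cls)

def final_relation(desc: list, asc: list, i: int, j: int) -> str:
    d = position(desc, i) - position(desc, j)   # negative: i better
    a = position(asc, i) - position(asc, j)
    if d == 0 and a == 0:
        return "indifferent"        # rule (ii)
    if d <= 0 and a <= 0:
        return "i preferred"        # rule (i)
    if d >= 0 and a >= 0:
        return "j preferred"        # rule (i), symmetric
    return "incomparable"           # rule (iii)

desc = [[0, 1], [2, 3]]
asc = list(reversed([[2, 3], [0, 1]]))   # ascending output, reversed
print(final_relation(desc, asc, 0, 2))   # "i preferred"
```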
The pre-order produced by ELECTRE III allows for ranking the alternatives according to their "equivalence"; the introduction of a non-real, phantom alternative with a pilot function allows for a classification into 3 macro-classes: the class containing the phantom alternative gathers the satisfactory alternatives, higher classes contain very satisfactory alternatives, and lower classes contain unsatisfactory ones.
This implementation of ELECTRE III can be followed by a sensitivity analysis, in which some input data are modified in a reasoned way and ELECTRE III is implemented again. The results of the sensitivity analysis allow for verifying the magnitude of the resulting variations, in order to test the robustness of the results of the evaluation and, implicitly, of the method itself.

3.2. Results

The SEA procedure carried out on the "Urban Plan for the realization of the new thermal center in Viterbo" has 3 real planning options: (i) the Urban Plan adopted by the Municipal Administration of the City of Viterbo, about to be cancelled (Adopted Urban Plan); (ii) a second, low-impact project with a pending authorization, designed after the application of the planning restriction (Low impact project); (iii) the "zero" alternative, corresponding to the absence of interventions (No intervention).
In compliance with the proposed operational declination, a DM has been carried out on a sample of 10 expert subjects: 2 experts from the Competent Authority on SEA in the Latium region and 8 experts represented by Competent Subjects in Environmental Matters. The 3 thresholds (it, pt, and vt) have been defined as percentage variations from the performances of the phantom planning option.
The results of the DM are reported in Table 8; the experts have chosen the same threshold values for all criteria.
The three real planning options of the assessment are complemented by the phantom planning option. Information and data from the technical documentation of the SEA procedure (Table 9) have been used to draft the evaluation matrix.
The evaluation matrix (Table 10) contains all the data and performances needed for the implementation of the ELECTRE III model.
After the composition of the evaluation matrix and the conversion of the judgements related to the qualitative criteria into scores, it was possible to implement the following phases 2, 3, 4, and 5 through the software XLSTAT 2021 [84]. Among the existing software tools for MCDA, this one allows for performing the evaluation according to the proposed procedure. The software automatically processes the concordance and discordance matrices; it then calculates the aggregate concordance matrix (Table 11), the credibility matrix (Table 12), and, finally, the Boolean matrix (Table 13).
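The numeric coding used for the qualitative scales is not reported here; a hypothetical mapping, for illustration only, could look as follows.

```python
# Hypothetical score coding for the qualitative scales of Tables 8-10;
# the actual values used in the study are not reported.
QUALITATIVE = {"VH": 5, "H": 4, "M": 3, "L": 2, "VL or N": 1}
TERNARY = {"Yes": 1.0, "Partial": 0.5, "No": 0.0}

print(QUALITATIVE["VH"], TERNARY["Partial"])
```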
Then, the software performed the (descending and ascending) distillation of the alternatives, which have eventually been ranked and classified (Table 14).
In summary, the evaluation shows that the real alternatives fall within the same class as the phantom alternative. Hence, the overall impact of the planning option "Urban plan for the realization of the new thermal center in Viterbo" (n. 2) subjected to SEA matches the "phantom alternative" proposed by the experts. In this sense, this research shows that the outcome of the SEA procedure could have been the validation of the "Urban plan for the realization of the new thermal center in Viterbo".

4. Discussion and Conclusions

The quality and validity of the obtained results can be discussed after the implementation of a sensitivity analysis, showing the variations of the final pre-orders following the modification of certain parameters of the DM.
The results of the DM were "altered" (Table 15); the alteration was performed stochastically, gradually increasing the "severity" of the desirable performances in steps of 10%, 20%, 30%, and 40%; in two further hypotheses, the thresholds were reduced by 10% and 20% with respect to those established in the DM. The alteration is considered stochastic as it was performed only on some of the parameters, chosen randomly; the performances of full-score qualitative criteria were not subjected to any change.
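As a sketch of the perturbation scheme, assuming that increasing "severity" means demanding better performance in the direction of each criterion's objective function (the function name is illustrative):

```python
# Sketch of the alteration in Table 15: desirable performances are made
# stricter in 10% steps, in the direction of each criterion's objective.
def perturb(value: float, severity: float, higher_is_better: bool) -> float:
    factor = 1 + severity if higher_is_better else 1 - severity
    return value * factor

# c2 (public areas, increasing): 18 -> 19.80, 21.60, 23.40, 25.20
# c4 (road saturation, decreasing): 2.00% -> 1.80%, 1.60%, 1.40%, 1.20%
for s in (0.10, 0.20, 0.30, 0.40):
    print(round(perturb(18, s, True), 2), round(perturb(0.02, s, False), 4))
```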
This allowed for 15 new implementations of ELECTRE III through the software XLSTAT, generating 15 new rankings; it was then possible to compare the results of the experimentation with the results of the sensitivity analysis (Table 16).
The interpretation of the results of the sensitivity analysis shows that: (i) when the threshold values are not altered, the results of the evaluation change only if the phantom planning option is varied by 40% or above; (ii) when the thresholds are altered, the results vary if the desirable performances of the phantom planning option are varied by 20% or above and the thresholds are reduced by 20%; (iii) in all cases, the results vary regularly with the gradual modification of the values.
Hence, the sensitivity analysis has shown that the method is stable and that the results have good inertia to small variations of the input parameters. This model behavior seems coherent with use in environmental assessments; in fact, their purpose is not a conformity check, but rather the comprehension of the overall effects of a plan/program on the environment, understanding its opportunities and risks, in order to implement correct mitigation measures.
In other methods, such as compensatory aggregation or pairwise comparison, the modification of a single parameter (a performance or a weight) often has a noticeable effect on the results of the assessment, increasing their randomness and margins of error.
Hence, it can be concluded that the sensitivity analysis has proved the robustness of the obtained results and confirmed the validity of this version of the ELECTRE III method for SEA.
Further in-depth studies can be carried out by experimenting with different methods and sensitivity analyses; this will allow for verifying the variations of the results for each single method and, consequently, the differences between the methods themselves.
Considering the validity of the results, it is worth analyzing their content. The classification of the planning options under evaluation shows that the planning option "Urban plan for the realization of the new thermal center in Viterbo" could be validated by the SEA procedure, as it belongs to the same class as the phantom planning option. In this sense, the result of the SEA procedure could have been the approval of the "Urban Plan for the realization of the new thermal center in Viterbo", notwithstanding the prescriptions under the competence of the Proceeding Authority and the Competent Subjects in Environmental Matters.
However, it must be considered that the present experimentation adopted a limited number of criteria and experts included in the DM. A higher number of criteria and experts in the DM might affect the results of the assessment.
SEA is a wide-scope assessment aimed at finding opportunities in relation to the expected environmental impact of plans and programs, rather than a mere conformity check. Hence, the characteristics of this version of ELECTRE III make it compatible with employment in the SEA procedure. It provides robust results, which could be used by the Competent Authority to formulate the reasoned decision under its competence, as a support to the evaluation phase of the SEA procedure.
Based on the above, one last point must be made on the meaning of SEA. As mentioned in Section 2, the SEA procedure is intertwined with the drafting of plans. It seems evident that no SEA can be performed if no plan, not even an incomplete one, has been conceived, as there would be no evaluation object. Moreover, the result of SEA is not limited to a mere approval or refusal; rather, following its implementation, the competent institutional authority decides on the environmental impacts of the plan, leading to the validation of the proposed version (which represents the planning option) or to the request of modifications and/or mitigations.
The results of the experimentation (see Section 3) seem coherent with this point. The classification performed by this version of ELECTRE III allows for a general evaluation of the acceptability of the examined planning option; moreover, while maintaining compliance with the mandatory environmental parameters established in the framework of EU Member States, it allows for overcoming the typical rigidity of conformity assessments. In this sense, the proposed version of ELECTRE III combines mandatory environmental parameters with desirable performances, i.e., non-mandatory parameters deemed important for the formulation of a global judgement on the plan evaluated through SEA.
In conclusion, the employment of the ELECTRE III model, integrated with the DM, can be seen as a valid assessment tool for the final phases of the SEA procedure; in particular, the plan under evaluation could be validated if its classification reaches the class of the phantom alternative.
The analysis of the model shows that it is more suitable for the assessment of plans and/or programs with detailed prescriptions, rather than for plans whose function is the mere provision of a legal framework for a possible new settlement; in this sense, it shows a high degree of compatibility with SEA procedures for urban plans and programs, especially for local executive plans. It can also be used for the SEA screening.
No developments or changes are expected regarding the mechanics of the ELECTRE III model, since it has already been widely tested in the scientific literature; instead, the model could be developed through the implementation of EA, with the integration of computerized processes for the inclusion of all citizens, in addition to experts. In this case, Stakeholder Analysis (SA) as an EA must be structured in such a way as to "manage" a high number of inconsistent data, rectifying them to provide useful and significant input for the achievement of valid results.
In this sense, the integration of EA or SA in the SEA procedure contributes to the fulfillment of the principle of extended participation, encouraged by the European Directive that has established SEA.
Further possible developments of this research work are related to the evaluation of the possible cumulative nature of impacts. This occurs when multiple initiatives are planned/programmed at the same time, hence requiring the evaluation of the intensity and magnitude of the combined impacts. As mentioned above, the performance of the "phantom" alternative should also consider additional concurring initiatives (and the related impacts), allowing for the identification of tolerable parameters that take into account the cumulative effect produced by the co-presence of other plans and programs. However, in the presented case study, the territory was not the object of any other plans or programs; hence, this aspect was not discussed. Possible variations to the proposed approach could include aggregative–compensatory techniques for the construction of a "phantom" scenario as the outcome of the cumulative and synergic effects of the impacts following the transformation of the area under assessment, as a result of those plans/programs. When assessing a new plan/program, the prediction of its resulting scenario should also consider other in-progress plans/programs, compared with the "phantom" scenario.
This should be part of an “open access”, constantly updated evaluation process that incorporates new plans/programs to be subjected to SEA. This system will support the prediction of the future environmental and territorial scenarios, to be compared to the “phantom” scenario also using the proposed ELECTRE III method.
The construction of a database and open access software, on a regional or national scale, could contribute to supporting the competent authorities in the definition of the performance of “phantom” alternatives and scenarios, allowing for further testing of the proposed model.
Finally, we want to provide a reflection on the applicability of the proposed model to a much more extended territorial context, such as the EU.
As clarified several times in this article, the related EU Directives state that each Member State has legislative power in environmental matters—and so, on SEA. The objective of this research was the study of assessment methods and techniques with a significant relevance in the scientific literature (ELECTRE III and EA), to be used in SEA. These techniques have widespread use in Europe; however, each operational declination in force in each of the Member States considers different criteria, sub-criteria, indicators, and weights, as well as a different evaluation–decision ratio, in order to fit local goals, priorities, and administrative practices.
Hence, the use of established assessment methods and techniques (such as ELECTRE III and EA) in SEA should hopefully grow, recognizing that the knowledge developed and disseminated on evaluation methods and techniques at an international scale can contribute to the efficiency of the SEA process itself.

Funding

This research was funded by the University of Florence, Department of Architecture, research funds for the year 2022; research project description "Metodi e tecniche di valutazione per la Valutazione Ambientale Strategica (VAS); Assessment methods and techniques for Strategic Environmental Assessment (SEA)", scientific coordinator: Fabrizio Battisti.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they belong to the Public Administration and are obtainable only through formal document-access procedures.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Fischer, T.B.; González, A. Introduction to Handbook on Strategic Environmental Assessment. In Handbook on Strategic Environmental Assessment; Edward Elgar Publishing: Gloucester, UK, 2021. [Google Scholar]
  2. Therivel, R. Strategic Environmental Assessment in Action; Routledge: Oxfordshire, UK, 2012. [Google Scholar]
  3. Noble, B.F. Strategic environmental assessment: What is it? & what makes it strategic? J. Environ. Assess. Policy Manag. 2000, 2, 203–224. [Google Scholar]
  4. Partidario, M.R. Strategic environmental assessment: Key issues emerging from recent practice. Environ. Impact Assess. Rev. 1996, 16, 31–55. [Google Scholar] [CrossRef]
  5. Morgan, R.K. Environmental Impact Assessment: A Methodological Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  6. Organisation for Economic Co-operation and Development (OECD). Applying Strategic Environmental Assessment: Good Practice Guidance for Development Co-Operation; Organisation for Economic Cooperation and Development Publishing: Paris, France, 2006. [Google Scholar]
  7. European Commission, Strategic Environmental Assessment. Available online: https://europa.eu/capacity4dev/public-environment-climate/wiki/strategic-environmental-assessment (accessed on 4 April 2022).
  8. Partidario, M.R. Strategic thinking for sustainability (ST4S) in strategic environmental assessment. In Handbook on Strategic Environmental Assessment; Edward Elgar Publishing: Gloucester, UK, 2021. [Google Scholar]
  9. Therivel, R.; Wilson, E.; Heaney, D.; Thompson, S. Strategic Environmental Assessment; Routledge: Oxfordshire, UK, 2013. [Google Scholar]
  10. Therivel, R.; Paridario, M.R. The Practice of Strategic Environmental Assessment; Routledge: Oxfordshire, UK, 2013. [Google Scholar]
  11. Noble, B.; Storey, K. Towards a structured approach for strategic environmental assessment. J. Environ. Assess. Policy. Manag. 2001, 3, 483–508. [Google Scholar] [CrossRef]
  12. Dom, A. Environmental impact assessment of road and rail infrastructure. In Handbook of Environmental Impact Assessment; Petts, J., Ed.; Blackwell: London, UK, 1999; Volume 2. [Google Scholar]
  13. Dreborg, K.-H. Essence of Backcasting. Futures 1996, 28, 813–828. [Google Scholar] [CrossRef]
  14. Rescher, N. Predicting the Future: An Introduction to the Theory of Forecasting; State University of New York Press: Albany, NY, USA, 1998. [Google Scholar]
  15. Makridakis, S.; Wheelright, S.C.; Hyndman, R.J. Forecasting: Methods and Applications; Wiley: New York, NY, USA, 1998. [Google Scholar]
  16. Saleh, N.; Mostafa Qutb, S. Role of Strategic Environmental Assessment Tools (SEA) to Guide Strategic Plans in Egypt: New cities Case Study. SVU-Int. J. Eng. Sci. Appl. 2021, 2, 55–62. [Google Scholar] [CrossRef]
  17. Pawłat-Zawrzykraj, A.; Podawca, K. Analysis and evaluation of selected strategic environmental assessments for local land use plans. [Analiza i ocena wybranych prognoz oddziaływania na środowisko projektów miejscowych planów zagospodarowania przestrzennego]. Sci. Rev. Eng. Environ. Sci. 2015, 24, 331–341. [Google Scholar]
  18. Donnelly, A.; Prendergast, T.; Hanusch, M. Examining quality of environmental objectives, targets and indicators in environmental reports prepared for strategic environmental assessment. J. Environ. Assess. Policy Manag. 2008, 10, 381–401. [Google Scholar] [CrossRef]
  19. Munier, N. Multicriteria Environmental Assessment: A Practical Guide; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  20. Figueira, J.; Greco, S.; Ehrgott, M. Multiple Criteria Decision Analysis—State of the Art Survey; Springer: New York, NY, USA, 2005. [Google Scholar]
  21. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis, Methods and Software; Wiley and Sons Ltd.: Chichester, UK, 2013. [Google Scholar]
  22. Roy, B. Méthodologie Multicritére d’Aide à la Décision; Economica: Paris, France, 1985. [Google Scholar]
  23. Guitoni, A.; Martel, J.M. Tentative guidelines to help choosing an appropriate MCDA method. Eur. J. Oper. Res. 1998, 109, 501–521. [Google Scholar] [CrossRef]
  24. Vincke, P. L’aide Multicritère à la Décision, Édition de l’Université de Bruxelles; Bruxelles: Brussels, Belgium, 1989. [Google Scholar]
  25. Colson, G.; De Bruyn, C. Models and Methods in Multiple Objectives Decision Making, Models and Methods in Multiple Criteria Decision Making; Pergamon Press: Oxford, UK, 1989. [Google Scholar]
  26. Fishburn, P.C. A survey of multiattribute/multicriterion evaluation theories. In Multiple Criterion Problem Solving; Zionts, S., Ed.; Springer: Heidelberg, Germany, 1978; pp. 181–224. [Google Scholar]
  27. Guitouni, A.; Martel, J.M.; Vincke, P.; North, P.B. A Framework to Choose a Discrete Multicriterion Aggregation Procedure; Defence Research Establishment Valcatier (DREV): Ottawa, ON, Canada, 1998; Available online: https://pdfs.semanticscholar.org/27d5/9c846657268bc840c4df8df98e85de66c562.pdf (accessed on 28 July 2017).
  28. Roy, B.; Bouyssou, D. Aide Multicritère à la Décision: Methodes et Cas; Economica: Paris, France, 1993. [Google Scholar]
  29. Keeney, R.L.; Raiffa, H. Decisions with Multiple Objectives: Preferences and Value Trade-Offs; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  30. Cinelli, M.; Stuart, R.; Coles, K.K. Analysis of the potentials of multi criteria decision analysis methods to conduct sustainability assessment. Ecol. Indic. 2014, 46, 138–148. [Google Scholar] [CrossRef] [Green Version]
  31. Al-Shemmeri, T.; Bashar, A.; Pearman, A. Model choice in multi-criteria decision aid. Eur. J. Oper. Res. 1997, 97, 550–560. [Google Scholar] [CrossRef]
  32. Celik, M.; Deha, I.E. Fuzzy axiomatic design extension for managing model selection paradigm in decision science. Expert Syst. Appl. 2009, 36, 6477–6484. [Google Scholar] [CrossRef]
  33. Kurka, T.; Blackwood, D. Selection of MCA methods to support decision making for renewable energy developments. Renew. Sustain. Energy Rev. 2013, 27, 225–233. [Google Scholar] [CrossRef]
  34. Markandya, A.; Richardon, J. The Earthscan Reader in Environmental Economics; Earthscan: London, UK, 1992. [Google Scholar]
  35. Wood, C.; Dejeddour, M. Strategic environmental assessment: EA of policies, plans and programmes. Impact Assess. 1992, 10, 3–22. [Google Scholar] [CrossRef]
  36. Kniesner, T.J.; Viscusi, W.K. Cost-Benefit Analysis: Why Relative Economic Position Does Not Matter; Syracuse University: Syracuse, NY, USA, 2002. [Google Scholar]
  37. European Commission. Strategic Environmental Assessment: Existing Methodology; European Commission: Luxembourg, 1994. [Google Scholar]
  38. O’Mahony, T. Cost-Benefit Analysis and the environment: The time horizon is of the essence. Environ. Impact Assess. Rev. 2021, 89, 106587. [Google Scholar] [CrossRef]
  39. Bojö, J.; Mäler, K.-G.; Unemo, L. Environment and Development: An Economic Approach; Kluwer Academic Publishing: Dordrecht, The Netherlands, 1992. [Google Scholar]
  40. Mishan, E.J.; Quah, E. Cost-Benefit Analysis; Routledge: Oxfordshire, UK, 2020. [Google Scholar]
  41. Guinée, J.B.; Lindeijer, E. Handbook on Life Cycle Assessment: Operational Guide to the ISO standards; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002; Volume 7. [Google Scholar]
  42. Heijungs, R.; Huppes, G.; Guinée, J.B. Life cycle assessment and sustainability analysis of products, materials and technologies: Toward a scientific framework for sustainability life cycle analysis. Polym. Degrad. Stab. 2010, 95, 422–428. [Google Scholar] [CrossRef]
  43. Guinee, J.B.; Heijungs, R.; Huppes, G.; Zamagni, A.; Masoni, P.; Buonamici, R.; Rydberg, T. Life cycle assessment: Past, present, and future. Environ. Sci. Technol. 2011, 45, 90–96. [Google Scholar] [CrossRef] [PubMed]
  44. Tukker, A. Life cycle assessment as a tool in environmental impact assessment. Environ. Impact Assess. Rev. 2000, 20, 435–456. [Google Scholar] [CrossRef]
  45. Björklund, A. Life cycle assessment as an analytical tool in strategic environmental assessment: Lessons learned from a case study on municipal energy planning in Sweden. Environ. Impact Assess. Rev. 2012, 32, 82–87. [Google Scholar] [CrossRef]
  46. Nikkhah, A.; Emadi, B.; Soltanali, H.; Firouzi, S.; Rosentrater, K.A.; Allahyari, M.S. Integration of life cycle assessment and Cobb-Douglas modeling for the environmental assessment of kiwifruit in Iran. J. Clean. Prod. 2016, 137, 843–849. [Google Scholar] [CrossRef]
  47. Hansson, S.O. A philosophical perspective on risk. Ambio 1999, 28, 539–542. [Google Scholar] [CrossRef]
  48. Eduljee, G. Risk assessment. In Handbook of Environmental Impact Assessment; Petts, J., Ed.; Blackwell: London, UK, 1999; Volume 1, pp. 374–404. [Google Scholar]
  49. Olsen, S.I.; Christensen, F.M.; Hauschild, M.; Pedersen, F.; Larsen, H.F.; Törslöv, J. Life cycle impact assessment and risk assessment of chemicals—A methodological comparison. Environ. Impact. Assess. Rev. 2001, 21, 385–404. [Google Scholar] [CrossRef]
  50. Greiving, S. Risk Assessment and Management as an important tool for the EU Strategic Environmental Assessment. Disp-Plan. Rev. 2004, 40, 11–17. [Google Scholar] [CrossRef]
  51. Thérivel, R.; Wood, G. Tools for SEA. In Implementing Strategic Environmental Assessment; Springer: Berlin/Heidelberg, Germany, 2005; pp. 349–363. [Google Scholar]
  52. Naddeo, V.; Belgiorno, V.; Zarra, T.; Scannapieco, D. Dynamic and embedded evaluation procedure for strategic environmental assessment. Land Use Policy 2013, 31, 605–612. [Google Scholar] [CrossRef]
  53. Rega, C.; Baldizzone, G. Public participation in Strategic Environmental Assessment: A practitioners’ perspective. Environ. Impact Assess. Rev. 2015, 50, 105–115. [Google Scholar] [CrossRef]
  54. Kørnøv, L.; Thissen, W.A. Rationality in decision-and policy-making: Implications for strategic environmental assessment. Impact Assess. Proj. Apprais. 2000, 18, 191–200. [Google Scholar] [CrossRef]
  55. Brown, A.L.; Thérivel, R. Principles to guide the development of strategic environmental assessment methodology. Impact Assess. Proj. Apprais. 2000, 18, 183–189. [Google Scholar] [CrossRef] [Green Version]
  56. Fischer, T.B. The Theory and Practice of Strategic Environmental Assessment: Towards a More Systematic Approach; Routledge: Oxfordshire, UK, 2010. [Google Scholar]
  57. Sadler, B.; Dusik, J.; Fischer, T.; Partidario, M.; Verheem, R.; Aschemann, R. Handbook of Strategic Environmental Assessment; Routledge: Oxfordshire, UK, 2012. [Google Scholar]
  58. Acharibasam, J.B.; Noble, B.F. Assessing the impact of strategic environmental assessment. Impact Assess. Proj. Apprais. 2014, 32, 177–187. [Google Scholar] [CrossRef] [Green Version]
  59. Lee, N.; Walsh, F. Strategic environmental assessment: An overview. Proj. Apprais. 1992, 7, 126–136. [Google Scholar] [CrossRef]
  60. Oppio, A.; Dell’Ovo, M. Strategic Environmental Assessment (SEA) and Multi-Criteria Analysis: An Integrated Approach. In Strategic Environmental Assessment and Urban Planning; Springer: Cham, Switzerland, 2020; pp. 47–63. [Google Scholar]
  61. Gauthier, M.; Simard, L.; Waaub, J.P. Public participation in strategic environmental assessment (SEA): Critical review and the Quebec (Canada) approach. Environ. Impact Assess. Rev. 2011, 31, 48–60. [Google Scholar] [CrossRef]
  62. Pretty, J.; Hine, R. Participatory Appraisal for Community Assessment: Principles and Methods; Centre for Environment and Society, University of Essex: Colchester, UK, 1999. [Google Scholar]
  63. Mattia, S. Costruzione e Valutazione Della Sostenibilità dei Progetti; Franco Angeli: Milano, Italy, 2008; Volume 1. [Google Scholar]
  64. Callon, M.; Larédo, P.; Mustar, P. The Strategic Management of Research and Technology: Evaluation of Programmes; Brookings Institution Press: Washington, DC, USA, 1997. [Google Scholar]
  65. Godet, M. Prospective et Planification; Economica: Paris, France, 1985. [Google Scholar]
  66. Nadeau, M.A. L’evaluation de Programme; Laval University Press: Laval, QC, Canada, 1988. [Google Scholar]
  67. Witkin, B.R.; Altschuld, J.W. Planning Conducting Needs Assessments; Sage: London, UK, 1995. [Google Scholar]
  68. Figueira, J.R.; Mousseau, V.; Roy, B. ELECTRE methods. In Multiple Criteria Decision Analysis; Springer: New York, NY, USA, 2016; pp. 155–185. [Google Scholar]
  69. Govindan, K.; Jepsen, M.B. ELECTRE: A comprehensive literature review on methodologies and applications. Eur. J. Oper. Res. 2016, 250, 1–29. [Google Scholar] [CrossRef]
  70. Figueira, J.R.; Greco, S.; Roy, B.; Słowiński, R. ELECTRE methods: Main features and recent developments. In Handbook of Multicriteria Analysis; Springer: Berlin/Heidelberg, Germany, 2010; pp. 51–89. [Google Scholar]
  71. La Scalia, G.; Micale, R.; Certa, A.; Enea, M. Ranking of shelf life models based on smart logistic unit using the ELECTRE III method. Int. J. Appl. Eng. Res. 2015, 10, 38009–38015. [Google Scholar]
  72. Certa, A.; Enea, M.; Lupo, T. ELECTRE III to dynamically support the decision maker about the periodic replacements configurations for a multi-component system. Decis. Support Syst. 2013, 55, 126–134. [Google Scholar] [CrossRef]
  73. Aiello, G.; Enea, M.; Galante, G. A multi-objective approach to facility layout problem by genetic search algorithm and electre method. Robot. Comput.-Integr. Manuf. 2006, 22, 447–455. [Google Scholar] [CrossRef]
  74. Norese, M.F. ELECTRE III as a support for participatory decision-making on the localisation of waste-treatment plants. Land Use Policy 2006, 23, 76–85. [Google Scholar] [CrossRef]
  75. Buchanan, J.T.; Sheppard, P.J.; Vanderpooten, D. Project Ranking Using ELECTRE III; Department of Management Systems, University of Waikato: Hamilton, New Zealand, 1999. [Google Scholar]
  76. Hokkanen, J.; Salminen, P. ELECTRE III and IV decision aids in an environmental problem. J. Multi-Criteria Decis. Anal. 1997, 6, 215–226. [Google Scholar] [CrossRef]
  77. Li, H.F.; Wang, J.J. An improved ranking method for ELECTRE III. In Proceedings of the 2007 International Conference on Wireless Communications, Networking and Mobile Computing, Honolulu, HI, USA, 12–16 August 2007; IEEE: New York, NY, USA; pp. 6659–6662. [Google Scholar]
  78. Yu, X.; Zhang, S.; Liao, X.; Qi, X. ELECTRE methods in prioritized MCDM environment. Inf. Sci. 2018, 424, 301–316. [Google Scholar] [CrossRef]
  79. Bottero, M.; Ferretti, V.; Figueira, J.R.; Greco, S.; Roy, B. Dealing with a multiple criteria environmental problem with interaction effects between criteria through an extension of the Electre III method. Eur. J. Oper. Res. 2015, 245, 837–850. [Google Scholar] [CrossRef] [Green Version]
  80. Bottero, M.; Comino, E.; Dell’Anna, F.; Dominici, L.; Rosso, M. Strategic assessment and economic evaluation: The case study of yanzhou island (China). Sustainability 2019, 11, 1076. [Google Scholar] [CrossRef] [Green Version]
  81. Park, D.; Kim, Y.; Um, M.-J.; Choi, S.-U. Robust Priority for Strategic Environmental Assessment with Incomplete Information Using Multi-Criteria Decision Making Analysis. Sustainability 2015, 7, 10233–10249. [Google Scholar] [CrossRef] [Green Version]
  82. Lai, Y.J.; Liu, T.Y.; Hwang, C.L. Topsis for MODM. Eur. J. Oper. Res. 1994, 76, 486–500. [Google Scholar] [CrossRef]
  83. Scharlig, A. Pratiquer Electre et Promethee; PPUR: Lausanne, Switzerland, 1996. [Google Scholar]
  84. XLSTAT Software v. 2022.01; Addinsoft: Paris, France, 2022.
Table 1. Evaluation tools to use in the SEA.

| SEA Procedure | Phase | Objects of Evaluation | Tools |
|---|---|---|---|
| SEA screening | Synthetic studies | Analysis of the intervention hypothesis | Scenarios |
| | CSEs' opinion | Data and opinion detection; benchmarks and standard definition | Expert Panel; Delphi Method |
| | Evaluation | Definition of set of criteria and indicators; performance detection on the intervention hypothesis; benchmarks; detection of social impacts and shadow prices | Checklist; MCDA; CBA; LCA; Risk Analysis |
| | Decision | Comparison between synthetic judgement and benchmarks or standards; sustainability check | |
| SEA (complete procedure) | Synthetic studies | Definition of the alternatives | Scenarios |
| | Scoping (expert consultation) | Survey of qualitative and quantitative information | Expert analysis; Q-methodology; surveys |
| | In-depth studies | Analysis of the intervention hypothesis; definition of the alternatives | Scenarios; Forecast analysis |
| | Public participation | Detection of data and opinions; benchmarks and standard definition | Expert Panel; Delphi Method; Stakeholders Analysis |
| | Evaluation | Definition of set of criteria and indicators; performance detection on alternatives; benchmarks; detection of social impacts and shadow prices | Checklist; MCDA; CBA; LCA; Risk Analysis |
| | Decision | Comparison between synthetic judgement and benchmarks or standards; sustainability check | |
| | Monitoring | Comparison between expected impacts and real impacts | MCDA |
Table 2. The evaluation matrix.

| Criterion | Indicator | Objective Function | Indifference Threshold | Preference Threshold | Veto Threshold | Weight | Phantom Alternative | Real Alternative 1 | Real Alternative 2 | Real Alternative n |
|---|---|---|---|---|---|---|---|---|---|---|
| c1 | i(c1) | of(c1;ex.m) | it(c1;ex.m) | pt(c1;ex.m) | vt(c1;ex.m) | w(c1;ex.m) | i(pop;c1) | i(po1;c1) | i(po2;c1) | i(pon;c1) |
| c2 | i(c2) | of(c2;ex.m) | it(c2;ex.m) | pt(c2;ex.m) | vt(c2;ex.m) | w(c2;ex.m) | i(pop;c2) | i(po1;c2) | i(po2;c2) | i(pon;c2) |
| cn | i(cn) | of(cn;ex.m) | it(cn;ex.m) | pt(cn;ex.m) | vt(cn;ex.m) | w(cn;ex.m) | i(pop;cn) | i(po1;cn) | i(po2;cn) | i(pon;cn) |
Table 3. The concordance matrix model.

| cnc(poi;poj) | pop | po1 | po2 | pon |
|---|---|---|---|---|
| pop | - | cnc(pop;po1) | cnc(pop;po2) | cnc(pop;pon) |
| po1 | cnc(po1;pop) | - | cnc(po1;po2) | cnc(po1;pon) |
| po2 | cnc(po2;pop) | cnc(po2;po1) | - | cnc(po2;pon) |
| pon | cnc(pon;pop) | cnc(pon;po1) | cnc(pon;po2) | - |
Table 4. The discordance matrix model.

| dsc(poi;poj) | pop | po1 | po2 | pon |
|---|---|---|---|---|
| pop | - | dsc(pop;po1) | dsc(pop;po2) | dsc(pop;pon) |
| po1 | dsc(po1;pop) | - | dsc(po1;po2) | dsc(po1;pon) |
| po2 | dsc(po2;pop) | dsc(po2;po1) | - | dsc(po2;pon) |
| pon | dsc(pon;pop) | dsc(pon;po1) | dsc(pon;po2) | - |
Table 5. The standard aggregate concordance matrix.

| acnc(poi;poj) | pop | po1 | po2 | pon |
|---|---|---|---|---|
| pop | - | acnc(pop;po1) | acnc(pop;po2) | acnc(pop;pon) |
| po1 | acnc(po1;pop) | - | acnc(po1;po2) | acnc(po1;pon) |
| po2 | acnc(po2;pop) | acnc(po2;po1) | - | acnc(po2;pon) |
| pon | acnc(pon;pop) | acnc(pon;po1) | acnc(pon;po2) | - |
Table 6. The standard credibility matrix.

| δ(poi;poj) | pop | po1 | po2 | pon |
|---|---|---|---|---|
| pop | - | δ(pop;po1) | δ(pop;po2) | δ(pop;pon) |
| po1 | δ(po1;pop) | - | δ(po1;po2) | δ(po1;pon) |
| po2 | δ(po2;pop) | δ(po2;po1) | - | δ(po2;pon) |
| pon | δ(pon;pop) | δ(pon;po1) | δ(pon;po2) | - |
Table 7. A standard Boolean matrix.

| T(poi;poj) | pop | po1 | po2 | pon |
|---|---|---|---|---|
| pop | - | T(pop;po1) | T(pop;po2) | T(pop;pon) |
| po1 | T(po1;pop) | - | T(po1;po2) | T(po1;pon) |
| po2 | T(po2;pop) | T(po2;po1) | - | T(po2;pon) |
| pon | T(pon;pop) | T(pon;po1) | T(pon;po2) | - |
Table 8. The DM results.

| Id. | Criterion | Indicator | Ob. Func. | Weight | Ind. thr. | Pref. thr. | Veto thr. | Phantom Planning Option |
|---|---|---|---|---|---|---|---|---|
| c1 | Soil consumption | % of new urbanized area/entire municipal area | | 7.5 | 10.00% | 20.00% | 50.00% | 0.0001 |
| c2 | Public areas | SM of public areas per capita (EI) | | 5 | 10.00% | 20.00% | 50.00% | 18 |
| c3 | Increase in inhabitants | % of new residents/entire municipal population | | 5 | 10.00% | 20.00% | 50.00% | 0.05% |
| c4 | Effects on the road system | Level of saturation of road infrastructures (% of increase/residual capacity) | | 2.5 | 10.00% | 20.00% | 50.00% | 2.00% |
| c5 | New jobs | % of no. of new employees/unemployed in the Province | | 5 | 10.00% | 20.00% | 50.00% | 0.50% |
| c6 | Benefits on people's quality of life | Qualitative (VH, H, M, L, VL or N) | | 7.5 | 10.00% | 20.00% | 50.00% | H |
| c7 | Indirect effects on the local economy | % (presumed) local product increase | | 7.5 | 10.00% | 20.00% | 50.00% | 0.50% |
| c8 | Tourist flows variation | % of increase in tourist attendance in Viterbo Municipality | | 2.5 | 10.00% | 20.00% | 50.00% | 5.00% |
| c9 | Water supply | Water availability in compliance with the hydric needs of the plan/project (T-ST-P-L-N) | | 5 | 10.00% | 20.00% | 50.00% | T |
| c10 | Waste water purification | % of new EI/residual EI capacity of municipal purifier | | 2.5 | 10.00% | 20.00% | 50.00% | 4.00% |
| c11 | Soil sealing | % of SM of new sealed soil/intervention area | | 2.5 | 10.00% | 20.00% | 50.00% | 10.00% |
| c12 | Hydraulic invariance | Yes/Partial/No | | 5 | 10.00% | 20.00% | 50.00% | Yes |
| c13 | Geological suitability | Qualitative (T-ST-P-L-N) | | 2.5 | 10.00% | 20.00% | 50.00% | T |
| c14 | Effects on ecological connections | Qualitative (VH, H, M, L, VL or N) | | 5 | 10.00% | 20.00% | 50.00% | L |
| c15 | Effects on air quality | Qualitative (VH, H, M, L, VL or N) | | 5 | 10.00% | 20.00% | 50.00% | L |
| c16 | Waste production | Capacity to landfill new waste (T-ST-P-L-N) | | 2.5 | 10.00% | 20.00% | 50.00% | VH |
| c17 | Landscape compliance (regional) | Yes/Partial/No | | 5 | 10.00% | 20.00% | 50.00% | VH |
| c18 | Landscape compatibility (regional) | Qualitative (VH, H, M, L, VL or N) | | 7.5 | 10.00% | 20.00% | 50.00% | VH |
| c19 | Strategic planning orientation (provincial) | Qualitative (VH, H, M, L, VL or N) | | 7.5 | 10.00% | 20.00% | 50.00% | VH |
| c20 | Urban planning rules (municipal) | Qualitative (VH, H, M, L, VL or N) | | 7.5 | 10.00% | 20.00% | 50.00% | VH |
Table 9. Input data for the evaluation.

| Parameter | Unit | General Data | Original Plan Data | Low Impact Plan Data |
|---|---|---|---|---|
| Intervention area | sm | - | 121,478.00 | 3,580.00 |
| Public green areas | sm | - | 13,177.00 | 0.00 |
| Transformed areas (excluding public green areas) | sm | - | 108,301.00 | 3,580.00 |
| Municipal area | sm | 406,230,000.00 | - | - |
| New inhabitants | n. | - | 195.00 | 0.00 |
| Equivalent inhabitants | n. | - | 542.50 | 10.00 |
| Public parking | sm | - | 9,570.00 | 0.00 |
| Users during weekdays | n. | - | 200.00 | 40.00 |
| Users during weekends | n. | - | 600.00 | 100.00 |
| Annual users | n. | - | 80,080.00 | 14,560.00 |
| Estimated revenue | €/per year | - | 6,670,000.00 | 440,000.00 |
| Estimated revenue (for construction; 4 years) | €/per year | - | 7,098,550.00 | 75,000.00 |
| Tourists in Viterbo Municipality | n. | 226,274.00 | - | - |
| Tourists in Viterbo Province | n. | 1,252,111.00 | - | - |
| Permanent employees | n. | - | 100.00 | 5.00 |
| Provincial unemployment rate | % | 10.00% | - | - |
| Municipal purifier capacity (under design) | n. EI | 75,000.00 | - | - |
| Municipal population | n. | 65,050.00 | - | - |
| Rate of demographic change | % | −0.79 | - | - |
| Provincial population | n. | 317,030.00 | - | - |
| Road flow S.S. n. 2 "Cassia" (max detected) | v/h | 492.00 | - | - |
| Max road flow S.S. n. 2 "Cassia" (by law) | v/h | 1,000.00 | - | - |
| Weekday influx | v/h | - | 30.00 | 4.00 |
| Festive influx | v/h | - | 50.00 | 12.00 |
| Hydro-requirement of the thermal center | cm/per year | - | 5,200.00 | 945.45 |
| Overall hydric need | cm/per year | - | 13,900.00 | 945.45 |
| Building potential (volume), residential destination | sm | - | 15,627.20 | 0.00 |
| Building potential (volume), thermal services destination | sm | - | 62,508.80 | 310.00 |
| GUA residential destination | sm | - | 4,883.50 | 0.00 |
| GUA thermal services destination | sm | - | 19,534.00 | 108.60 |
| Residential max overall dimensions | sm | - | 2,441.75 | 0.00 |
| Thermal service max overall dimensions | sm | - | 9,767.00 | 108.60 |
| Non-permeable surfaces | sm | - | 12,208.75 | 119.46 |
| GDP per capita in the province of Viterbo | €/per year | 20,000.00 | - | - |
| GDP Viterbo municipality | | 1,301,000,000.00 | - | - |
| GDP increase (in the municipality) | % | - | 0.51% | 0.03% |
Table 10. The evaluation matrix.

| Id. | Criterion | Indicator | Phantom Planning Option (n. 1), from DM | Adopted Urban Plan Planning Option (n. 2), from adopted urban plan documents | Low Impact Planning Option (n. 3), from the presented project | No Intervention Planning Option (n. 4) |
|---|---|---|---|---|---|---|
| c1 | Soil consumption | % of new urbanized area/entire municipal area | 0.0001 | 0.0000 | 0.0000 | 0.0000 |
| c2 | Public areas | SM of public areas per capita (EI) | 18 | 0.00 | 0.00 | 0.00 |
| c3 | Increase in inhabitants | % of new residents/entire municipal population | 0.05% | 0.00% | 0.00% | 0.00% |
| c4 | Effects on the road system | Level of saturation of road infrastructures (% of increase/residual capacity) | 2.00% | 0.00% | 0.00% | 0.00% |
| c5 | New jobs | % of no. of new employees/unemployed in the Province | 0.50% | 0.00% | 0.00% | 0.00% |
| c6 | Benefits on people's quality of life | Qualitative (VH, H, M, L, VL or N) | H | VH | M | VL or N |
| c7 | Indirect effects on the local economy | % (presumed) local product increase | 0.50% | 0.00% | 0.00% | 0.00% |
| c8 | Tourist flows variation | % of increase in tourist attendance in Viterbo Municipality | 5.00% | 0.00% | 0.00% | 0.00% |
| c9 | Water supply | Water availability in compliance with the hydro-needs of the plan/project (T-ST-P-L-N) | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| c10 | Waste water purification | % of new EI/residual EI capacity of municipal purifier | 4.00% | 0.00% | 0.00% | 0.00% |
| c11 | Soil sealing | % of SM of new sealed soil/intervention area | 10.00% | 0.00% | 0.00% | 0.00% |
| c12 | Hydraulic invariance | Yes/Partial/No | Yes | Yes | Yes | Yes |
| c13 | Geological suitability | Qualitative (T-ST-P-L-N) | T | ST | T | T |
| c14 | Effects on ecological connections | Qualitative (VH, H, M, L, VL or N) | L | M | L | VL or N |
| c15 | Effects on air quality | Qualitative (VH, H, M, L, VL or N) | L | L | VL or N | VL or N |
| c16 | Waste production | Capacity to landfill new waste (T-ST-P-L-N) | T | T | T | T |
| c17 | Landscape compliance (regional) | Yes/Partial/No | Yes | Partial | Yes | Partial |
| c18 | Landscape compatibility (regional) | Qualitative (VH, H, M, L, VL or N) | VH | L | H | VL or N |
| c19 | Strategic planning orientation (provincial) | Qualitative (VH, H, M, L, VL or N) | H | VH | M | VL or N |
| c20 | Urban planning rules (municipal) | Qualitative (VH, H, M, L, VL or N) | H | VH | M | VL or N |
Table 11. The aggregate concordance matrix.

| | A1 | A2 | A3 | A4 |
|---|---|---|---|---|
| A1 | 1.000 | 0.825 | 0.775 | 0.625 |
| A2 | 0.775 | 1.000 | 0.750 | 0.625 |
| A3 | 0.800 | 0.800 | 1.000 | 0.825 |
| A4 | 0.575 | 0.650 | 0.723 | 1.000 |
Table 12. Credibility matrix.

| | A1 | A2 | A3 | A4 |
|---|---|---|---|---|
| A1 | 1.000 | 0.000 | 0.000 | 0.000 |
| A2 | 0.000 | 1.000 | 0.000 | 0.000 |
| A3 | 0.000 | 0.000 | 1.000 | 0.000 |
| A4 | 0.000 | 0.000 | 0.000 | 1.000 |
Table 13. Boolean matrix.

| a/b | A1 | A2 | A3 | A4 |
|---|---|---|---|---|
| A1 | - | 1 | 1 | 1 |
| A2 | 1 | - | 1 | 1 |
| A3 | 1 | 1 | - | 1 |
| A4 | 1 | 1 | 1 | - |
Table 14. Ranking and classification of the alternatives.

| Planning Option | Rank | Class |
|---|---|---|
| A1 | 1 | 1 |
| A2 | 1 | 1 |
| A3 | 1 | 1 |
| A4 | 1 | 1 |
Table 15. Alteration of the results of the DM.

| Id. Criterion | Objective Function | Experts (from the Main Assessment) | Stochastic Hypothesis 1 (10%) | Stochastic Hypothesis 2 (20%) | Stochastic Hypothesis 3 (30%) | Stochastic Hypothesis 4 (40%) |
|---|---|---|---|---|---|---|
| c1 | | 0.01% | 0.0090% | 0.0080% | 0.0070% | 0.0060% |
| c2 | | 18 | 19.80 | 21.60 | 23.40 | 25.20 |
| c3 | | 0.05% | 0.0450% | 0.0400% | 0.0350% | 0.0300% |
| c4 | | 2.00% | 1.80% | 1.60% | 1.40% | 1.20% |
| c5 | | 0.50% | 0.55% | 0.60% | 0.65% | 0.70% |
| c6 | | H | H/VH | H/VH | VH | VH |
| c7 | | 0.50% | 0.55% | 0.60% | 0.65% | 0.70% |
| c8 | | 5.00% | 5.50% | 6.00% | 6.50% | 7.00% |
| c9 | | T | T | T | T | T |
| c10 | | 4.00% | 3.60% | 3.20% | 2.80% | 2.40% |
| c11 | | 10.00% | 9.00% | 8.00% | 7.00% | 6.00% |
| c12 | | Yes | Yes | Yes | Yes | Yes |
| c13 | | T | T | T | T | T |
| c14 | | L | L/VL | L/VL | VL or N | VL or N |
| c15 | | L | L/VL | L/VL | VL or N | VL or N |
| c16 | | T | T | T | T | T |
| c17 | | Yes | Yes | Yes | Yes | Yes |
| c18 | | VH | VH | VH | VH | VH |
| c19 | | H | H/VH | H/VH | VH | VH |
| c20 | | H | H/VH | H/VH | VH | VH |
Table 16. A comparison of the results of the experimentation and sensitivity analysis.

Original Threshold

| Real Assessment | Stochastic Hypothesis +10% | Stochastic Hypothesis +20% | Stochastic Hypothesis +30% | Stochastic Hypothesis +40% |
|---|---|---|---|---|
| 1. Phantom alternative | 1. Phantom alternative | 1. Phantom alternative | 1. Phantom alternative | 1. Zero alternative |
| 1. Official plan proposal | 1. Official plan proposal | 1. Official plan proposal | 1. Official plan proposal | 2. Phantom alternative |
| 1. Low impact plan proposal | 1. Low impact plan proposal | 1. Low impact plan proposal | 1. Low impact plan proposal | 2. Official plan proposal |
| 1. Zero alternative | 1. Zero alternative | 1. Zero alternative | 1. Zero alternative | 3. Low impact plan proposal |

Threshold −10%

| Real Assessment | Stochastic Hypothesis +10% | Stochastic Hypothesis +20% | Stochastic Hypothesis +30% | Stochastic Hypothesis +40% |
|---|---|---|---|---|
| 1. Phantom alternative | 1. Phantom alternative | 1. Phantom alternative | 1. Phantom alternative | 1. Zero alternative |
| 1. Official plan proposal | 1. Official plan proposal | 1. Official plan proposal | 1. Official plan proposal | 2. Phantom alternative |
| 1. Low impact plan proposal | 1. Low impact plan proposal | 1. Low impact plan proposal | 1. Low impact plan proposal | 2. Official plan proposal |
| 1. Zero alternative | 1. Zero alternative | 1. Zero alternative | 1. Zero alternative | 3. Low impact plan proposal |

Threshold −20%

| Real Assessment | Stochastic Hypothesis +10% | Stochastic Hypothesis +20% | Stochastic Hypothesis +30% | Stochastic Hypothesis +40% |
|---|---|---|---|---|
| 1. Phantom alternative | 1. Phantom alternative | 1. Zero alternative | 1. Zero alternative | 1. Zero alternative |
| 1. Official plan proposal | 1. Official plan proposal | 2. Phantom alternative | 2. Phantom alternative | 2. Phantom alternative |
| 1. Low impact plan proposal | 1. Low impact plan proposal | 2. Official plan proposal | 2. Official plan proposal | 2. Official plan proposal |
| 1. Zero alternative | 1. Zero alternative | 3. Low impact plan proposal | 3. Low impact plan proposal | 3. Low impact plan proposal |