This study outlines a new research approach to foster stakeholder engagement in sustainable economic development decision making. The methodology combines qualitative and quantitative research designs, drawing on primary and secondary data sources to integrate stakeholder perspectives into the criteria weighting process and to strengthen decision-making integrity.
Study design: The research design combines qualitative and quantitative methodologies, each playing a distinct yet complementary role in the investigation. The exploratory qualitative component examines stakeholder perceptions and experiences in depth, while the quantitative component provides a structured, numerical framework for rigorous analysis and measurement.
Qualitative research design: The qualitative component involves an exploratory research design, facilitating a deeper understanding of stakeholder perspectives. This design uses in-depth interviews, workshops, and expert panels to uncover nuanced insights, motivations, and contextual factors influencing decision making in sustainable economic development.
Quantitative research design: The quantitative arm employs a descriptive research design, systematically quantifying and analysing specific aspects of the decision-making process. Surveys form a crucial instrument in this phase, generating structured data that can be statistically analysed to derive meaningful patterns, correlations, and trends.
Data collection: A structured Delphi process encompassing surveys, interviews, workshops, and expert panels is used to acquire stakeholder input. The qualitative methods contribute rich insights, while the surveys offer a quantifiable dataset; together they provide a holistic understanding of stakeholder perspectives.
Criteria weight determination: Introducing the SHARDA method, the research adopts a hierarchical approach to systematically evaluate the relative importance of criteria and sub-criteria. This method provides a structured and transparent process for stakeholders to assign weights, ensuring a robust foundation for decision making.

Evaluation and analysis of choice-option performance: The ARAS method is applied quantitatively to assess project manager performance and determine optimality. This approach uses effectiveness ratios and offers a quantitative lens on the decision-making process, enabling precise comparisons among alternatives.
Validation and analysis: Experts critically examine factors influencing decision outcomes in the validation and analysis phase. The SHARDA method is rigorously tested through expert opinions and benchmarking practices, ensuring its validity, reliability, and relevance in sustainable economic development.

Sample selection: The study includes stakeholders from diverse groups relevant to sustainable economic development. Experts are selected on the basis of relevance, expertise, and active involvement, ensuring a representative and comprehensive set of perspectives.
Table 1 presents the experts’ qualifications and additional conditions.

Ethical considerations: Adherence to ethical guidelines is paramount. The research team ensures informed consent, confidentiality, and respect for the rights of participants. The study is conducted in full compliance with ethical regulations governing research involving human participants.

The study emerges as a systematic approach for selecting and developing project managers in sustainable economic development. What sets the proposed methodology apart is its resilience against rank reversal and its ability to quantify the direct ratio of each considered choice relative to the optimal (Pareto) solution.
3.1. A Generic MCDM Model
A distinguishing feature of SHARDA is its emphasis on stakeholder involvement and transparency. The research recognises that decisions with far-reaching impacts require diverse perspectives. SHARDA ensures a comprehensive decision-making process that aligns seamlessly with global sustainability objectives.
By crafting a narrative that moves from the broader MCDM environment to the specifics of SHARDA, decision makers can follow a logical progression, understanding the context and significance of the introduced methodology.
In the realm of MCDM, the generic model (Figure 1) offers a systematic approach to decision making. The generic MCDM model emphasises problem definition, criteria identification, alternative generation, evaluation, and sensitivity analysis for optimal decision outcomes. The expert panel process involves a structured and collaborative approach guided by a neutral moderator or facilitator. The moderator plays a crucial role in guiding the process and ensuring productive exchanges of ideas among panel members. The panel provides expertise and knowledge to determine criteria weights and values through discussions, assessments, and consensus building.
The main steps include:
Moderator’s presentation: Introduction of the decision problem, objectives, and criteria.
Problem domain definition: Examine problem characteristics, considering alternatives, attributes, constraints, and relevant factors.
Discussion and knowledge sharing: Panel members’ meetings and expertise sharing on criteria importance and relative weights.
Criteria and objectives identification: Specify and prioritise criteria and objectives.
Eliciting judgments and perspectives: Open discussions or structured exercises elicit panel members’ judgments and perspectives.
Determining criteria weights: The expert panel assigns relative weights, considering relevance, impact, feasibility, and trade-offs.
Weight criteria: Determine relative criterion importance by assigning weights.
Calculate overall scores: Multiply ratings by weights and sum up to obtain overall scores.
Define rating scale: Establish a quantified rating scale for assessing candidates’ proficiency levels.
Alternative generation: Create diverse options or courses of action.
Gather alternative information: Collect relevant alternative information through resumes, interviews, etc.
Evaluation against criteria: Assess each alternative’s performance using qualitative or quantitative techniques. Appoint evaluators, provide consistent instructions, and ensure information consistency.
Consensus building: The facilitator aggregates opinions and judgments to reach a consensus on criteria, weights, and values.
Aggregate evaluations and calculate scores: Combine evaluations and calculate overall scores, considering criteria and their weights.
Analysis, validation, and decision making: Analyse results, validate with real-world data or expert opinions, and make the final decision for implementation.
By adhering to these steps, decision makers effectively analyse and evaluate alternatives, facilitating informed decisions and successful implementation.
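The rate–weight–sum logic at the core of the steps above ("weight criteria", "calculate overall scores") can be sketched in a few lines. The criteria names, weights, and candidate ratings below are illustrative assumptions, not values from the study:

```python
# Weighted-scoring sketch for the generic MCDM steps above.
# Criteria weights and candidate ratings are assumed example values.
criteria_weights = {"leadership": 0.4, "technical": 0.35, "communication": 0.25}

# Ratings on an assumed 1-10 scale for two hypothetical candidates.
ratings = {
    "candidate_A": {"leadership": 8, "technical": 6, "communication": 9},
    "candidate_B": {"leadership": 7, "technical": 9, "communication": 6},
}

def overall_score(candidate_ratings, weights):
    """Multiply each rating by its criterion weight and sum up."""
    return sum(candidate_ratings[c] * w for c, w in weights.items())

scores = {name: overall_score(r, criteria_weights) for name, r in ratings.items()}
best = max(scores, key=scores.get)  # alternative with the highest overall score
```

With these assumed inputs, candidate A scores 7.55 against candidate B’s 7.45, illustrating how the weighted sum turns per-criterion ratings into a single comparable figure.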
Using the SHARDA method, stakeholders assembled a diverse expert panel for a Delphi study, evaluating project managers’ characteristics in sustainable economic development.
Table 1 summarises the expert qualifications and additional conditions they meet.
With their respective qualifications and additional conditions, these experts form a diverse panel capable of evaluating project managers for sustainable project implementation using the SHARDA method.
Using the Delphi process, the experts formed the two-level criteria set for solving the considered problem (Table 2).
The hierarchical structure enables more detailed analysis and an assessment of project managers’ strengths and weaknesses, facilitating informed decision making in selecting and developing project managers for sustainable economic development initiatives.
3.2. Enhancing the SWARA Method with the Delphic Process for Criteria Weight Determination in Group MCDM–SHARDA (Systemic Hierarchical Attribute Ratio Assessment) Method
In MCDM, traditional attribute weights often fail to adequately represent the varying significance of attributes, as these weights are typically assigned based on expert preferences. The stepwise weight assessment ratio analysis (SWARA) methodology, introduced in 2010 and initially applied to dispute resolution, addresses this issue. Decision makers in sustainable economic development require systematic approaches for determining criteria weights and evaluating choices, which led to the emergence of the SHARDA method, integrating a hierarchical structure with the expanded SWARA technique.
Main steps in the SHARDA method:
Decision-making process using SHARDA: Involves assembling an expert panel, ensuring anonymity in their opinions, and employing the Delphic process to gather expert opinions and achieve consensus.
Problem definition and hierarchy: Decision makers define the problem, identify relevant criteria, and establish a hierarchical structure.
Enhanced SWARA for criteria: Decision makers utilise the SWARA method to evaluate criteria weights.
Enhanced SWARA for sub-criteria: Decision makers utilise the SWARA method to evaluate sub-criteria weights within each criterion group.
Weight assignment for criteria groups and aggregating expert opinions: Relative weights are assigned to criteria groups, considering project requirements and priorities. Expert opinions are combined to determine criteria weights at each hierarchy level.
Evaluation of candidate qualifications: Candidates are evaluated based on specific criteria such as leadership abilities, technical expertise, communication skills, adaptability, and ethical and social responsibility.
Criteria values normalisation: Criteria values are normalised using standard scaling techniques.
Criteria weight determination using the expanded SWARA method (Figure 2): The SWARA method offers a systematic process for determining criteria weights. It starts with defining the decision problem and identifying the evaluation criteria. The resulting criteria weights aid decision makers in further analyses, such as ranking alternatives or evaluating options.
Criteria weight calculation:
Decision makers assign criteria weights at the highest hierarchy level. First, the moderator ranks criteria and sub-criteria in descending order of importance. For this study, the authors selected the Eckenrode rating technique based on criteria importance ratings $t_{ik}$ (Table 3).

Each of the experts rates all criteria in the given hierarchy level:

$$t_{ik}, \qquad i = 1, 2, \ldots, n; \quad k = 1, 2, \ldots, r,$$

where $t_{ik}$ is the importance rating given by the $k$-th expert to the $i$-th criterion and $r$ is the number of experts.

The moderator calculates the sum of the importance ratings given to each criterion and the sum of the ratings given to all criteria in the considered hierarchy level, and then divides the geometric mean of the ratings given to each criterion by the sum of the ratings given to all criteria:

$$v_i = \frac{\left(\prod_{k=1}^{r} t_{ik}\right)^{1/r}}{\sum_{i=1}^{n} \sum_{k=1}^{r} t_{ik}},$$

where $n$ is the number of criteria in the considered hierarchy level and $v_i$ is the relative significance of the $i$-th criterion. This helps to assign ranks to all criteria in the regarded hierarchy level and to rank the criteria in descending order according to their importance level.
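This rating-aggregation and ranking step can be sketched numerically. The expert ratings below are assumed example values on a ten-point scale, for three hypothetical experts and four criteria:

```python
import math

# Each row: one expert's importance ratings for the criteria
# (assumed example values on a 1-10 scale).
ratings = [
    [9, 7, 5, 3],   # expert 1
    [8, 8, 4, 2],   # expert 2
    [10, 6, 5, 3],  # expert 3
]

n = len(ratings[0])                       # number of criteria in the level
total = sum(sum(row) for row in ratings)  # sum of all ratings in the level

# Geometric mean of each criterion's ratings, divided by the grand total.
geomeans = [math.prod(row[j] for row in ratings) ** (1 / len(ratings))
            for j in range(n)]
significance = [g / total for g in geomeans]

# Rank criteria in descending order of relative significance (rank 1 = most important).
order = sorted(range(n), key=lambda j: significance[j], reverse=True)
ranks = {j: pos + 1 for pos, j in enumerate(order)}
```

For these assumed ratings the first criterion receives rank 1 and the last rank 4, giving the descending ordering the subsequent SWARA comparisons start from.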
Each decision maker assigns criterion importance $q_1 = 1$ to the first criterion $x_1$. For each subsequent criterion $x_j$, where $j = 2, 3, \ldots, n$, the decision maker selects a comparative importance value $s_j$ from the relative importance scale.

The coefficient $k_j$ helps to determine the importance ratio of criterion $x_j$ compared to criterion $x_{j-1}$:

$$k_j = \begin{cases} 1, & j = 1, \\ s_j + 1, & j > 1. \end{cases}$$

Decision makers determine the importance level $q_j$ of each criterion using Equation (6):

$$q_j = \begin{cases} 1, & j = 1, \\ \dfrac{q_{j-1}}{k_j}, & j > 1. \end{cases} \qquad (6)$$

Decision makers determine the relative importance weight $w_j$ of each criterion using Equation (7):

$$w_j = \frac{q_j}{\sum_{i=1}^{n} q_i}. \qquad (7)$$

The moderator calculates the sums of the assigned coefficients $q_j^{(e)}$ for each criterion $x_j$ across the experts $e = 1, 2, \ldots, r$, and then for all criteria. Equation (10) helps to calculate the group criteria weights:

$$\bar{w}_j = \frac{\sum_{e=1}^{r} q_j^{(e)}}{\sum_{j=1}^{n} \sum_{e=1}^{r} q_j^{(e)}}. \qquad (10)$$
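The SWARA recurrence for a single expert can be sketched as follows. The comparative-importance values passed in are assumed examples drawn from the 0.1–1.0 relative-importance scale described later in this section:

```python
# SWARA weight-calculation sketch for one expert. Criteria are assumed to be
# already ranked in descending importance; s_values[j] is the comparative
# importance of criterion j+2 relative to criterion j+1 (assumed examples).

def swara_weights(s_values):
    """Return criteria weights from comparative-importance values s_2..s_n."""
    k = [1.0] + [s + 1.0 for s in s_values]  # k_1 = 1, k_j = s_j + 1
    q = [1.0]                                # q_1 = 1
    for kj in k[1:]:
        q.append(q[-1] / kj)                 # q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]          # w_j = q_j / sum of all q

# Four criteria with assumed comparative importances s_2=0.3, s_3=0.2, s_4=0.5.
weights = swara_weights([0.3, 0.2, 0.5])
```

The weights sum to one and decrease down the ranking, as the recurrence guarantees; the group-level weights of Equation (10) would then aggregate such per-expert results.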
In the second stage, decision makers prepare tables for calculating criteria weights using the expanded SWARA method (Table 2 and Table 3 show the calculation results). These tables allow experts to determine criteria weights based on group ranks established through the SWARA process (Table 4).
For instance, the expert must evaluate criterion x5 as less significant than, or at most as significant as, criterion x4, and criterion x1 as more significant than, or at least as significant as, criterion x2. Each expert first ranks the criteria: the most significant criterion receives rank 1 and the least significant receives rank 5. The overall ranks of the expert group are based on the sums of the individual ranks. The group criteria and weights established by applying SWARA are shown in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. The SHARDA method’s application of SWARA results in group criteria and weights, enhancing objectivity and transparency throughout the decision-making process.
Relative importance scale for the expanded SWARA method:
The authors introduce a new scale to estimate criteria weights based on their relative importance compared to other criteria (Table 5) in the expanded SWARA method.
The scale assigns values between 1.0 and 0.1 to reflect the relative importance of one criterion compared to the next. A higher value indicates a greater degree of importance.
The specific scale and values can be adjusted to suit the context of the decision problem and the decision maker’s preferences.
A ten-point Likert-type relative importance scale (Table 3) assesses managers’ qualitative skills in various decision-making contexts. This scale allows decision makers or experts to express their agreement or disagreement with the importance of specific skills exhibited by managers, facilitating nuanced evaluations.
The SHARDA method introduces a systematic approach that combines a hierarchical structure with the expanded SWARA technique for criteria weight determination, particularly in sustainable economic development contexts. It addresses limitations, enhances objectivity, and contributes to sustainable development objectives by providing a structured approach for criteria weight determination and decision making. It promotes transparency, consensus building, and documentation throughout the process.
The scale provides a structured and systematic approach to evaluating the relative importance of different qualitative skills exhibited by managers. When using this scale, decision makers or experts assign a score from 1 to 10 to each skill based on their perception of its importance. The assigned scores enable a more nuanced understanding of the importance and impact of each skill, facilitating informed decision-making processes related to the assessment and development of managerial capabilities.
Establish reference points: Identify reference points within the range corresponding to the qualitative categories. Determine specific numerical values for each qualitative category that best reflect their qualitative distinctions. For example:
Excellent: Assign a numerical value of 9–10, indicating a high level of knowledge.
Good: Assign a numerical value of 7–8, indicating above-average knowledge.
Average: Assign a numerical value of 5–6, indicating an average level of knowledge.
Below Average: Assign a numerical value of 3–4, indicating below-average knowledge.
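The qualitative-to-numerical mapping above can be expressed as a simple lookup. Representing each category by the midpoint of its range is an illustrative convention assumed here, not one prescribed by the study:

```python
# Map qualitative proficiency categories to the numerical ranges above.
# Using range midpoints is an assumed convention for illustration.
CATEGORY_RANGES = {
    "excellent": (9, 10),
    "good": (7, 8),
    "average": (5, 6),
    "below average": (3, 4),
}

def to_score(category):
    """Return the midpoint of the numerical range for a qualitative category."""
    low, high = CATEGORY_RANGES[category.lower()]
    return (low + high) / 2

score = to_score("Good")  # midpoint of the 7-8 range
```

Anchoring each verbal category to a fixed numerical reference point keeps different experts’ qualitative judgments commensurable before normalisation.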
3.3. Additive Ratio Assessment (ARAS) Method in Multiple Criteria Decision-Making
The additive ratio assessment (ARAS) method is essential in multiple criteria decision-making (MCDM). It effectively addresses the complex task of ranking and selecting the most suitable alternative when faced with multiple options. The method is particularly valuable in sustainable decision-making scenarios, where it considers the interplay between profits and losses, ensuring sound and informed decision analysis.
Historically, MCDM problems have involved ranking a finite number of decision alternatives, each characterised by a distinct set of criteria. This multifaceted nature of decision-making calls for robust methodologies. Bernoulli laid the groundwork for advanced MCDM methods with his work on additive utility functions, while further research has explored the challenges of achieving consensus among group decision-makers [52], underscoring the collective nature of decision-making processes.
When applied to sustainable decision making, the ARAS method focuses on effectiveness ratios related to profits and losses. It directly correlates the utility function value with an alternative’s relative efficiency, influenced by the interplay between the values and weights of critical criteria within a given project.
A vital advantage of the ARAS method is its ability to provide accurate and reliable results while mitigating the occurrence of rank reversal phenomena, a common issue in MCDM problems. It ensures that each alternative’s performance is assessed relative to the best attainable performance, resulting in a robust and equitable evaluation process.
The ARAS method comprises a systematic sequence of steps to facilitate the decision-making process:
Problem definition: It begins with a clear and comprehensive definition of the decision problem, including criteria and objectives.
Criteria weights determination: Decision makers assign weights to the criteria based on their relative importance.
Normalisation with sum of solution values: A normalisation procedure transforms the initial data into normalised scores.
Ratio assessment: The next step involves the calculation of the additive ratio for each alternative.
Rank alternatives: Alternatives are subsequently ranked based on their ratio scores, simplifying the complex decision-making process.
The heart of the decision-making process is the creation of a decision-making matrix (DMM), encapsulating the preferences for potential alternatives evaluated across multiple criteria. This foundational matrix ensures a fair and robust decision-making process in various contexts [53].
Decision makers populate the DMM with preferences for $m$ possible choices (rows) evaluated on $n$ significant criteria (columns):

$$X = \begin{bmatrix} x_{01} & \cdots & x_{0j} & \cdots & x_{0n} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{ij} & \cdots & x_{in} \\ \vdots & & \vdots & & \vdots \\ x_{m1} & \cdots & x_{mj} & \cdots & x_{mn} \end{bmatrix}, \qquad i = 0, 1, \ldots, m; \quad j = 1, 2, \ldots, n,$$

where $m$ is the number of alternatives, $n$ is the number of criteria describing each option, $x_{ij}$ represents the performance value of the $i$-th alternative in terms of the $j$-th criterion, and $x_{0j}$ is the optimal value of the $j$-th criterion.
When decision makers do not know the optimal value for a criterion, they can approximate it using Equation (12):

$$x_{0j} = \max_i x_{ij} \ \text{if} \ \max_i x_{ij} \ \text{is preferable}; \qquad x_{0j} = \min_i x_{ij} \ \text{if} \ \min_i x_{ij} \ \text{is preferable}. \qquad (12)$$
The performance values and criteria weights are typical entries of the DMM. Experts determine the criteria system and the initial values and weights of the criteria.
Decision makers could normalise the criteria values using the ratio to the optimal value to facilitate meaningful comparisons. This normalisation process transforms the criteria values into dimensionless values within the range of [0, 1]. Normalisation enhances the comparability of criteria and enables practical assessments and evaluations across different criteria.
In the next stage, decision makers normalise the initial values of all criteria to obtain the normalised decision-making matrix $\bar{X} = [\bar{x}_{ij}]$.

For criteria with preferable values as maxima, decision makers use the following normalisation procedure:

$$\bar{x}_{ij} = \frac{x_{ij}}{\sum_{i=0}^{m} x_{ij}}.$$

For criteria with preferable values as minima, decision makers employ a two-stage normalisation procedure:

$$x_{ij} = \frac{1}{x_{ij}^{*}}, \qquad \bar{x}_{ij} = \frac{x_{ij}}{\sum_{i=0}^{m} x_{ij}}.$$
The third stage involves constructing the normalised-weighted matrix, denoted as $\hat{X}$. Decision makers calculate this matrix by assigning weights to the criteria. Experts and stakeholders determine the values of the weights $w_j$, typically falling within the range $0 < w_j < 1$. The sum of the weights should satisfy the following condition:

$$\sum_{j=1}^{n} w_j = 1.$$

Subsequently, decision makers calculate the normalised-weighted values for each criterion:

$$\hat{x}_{ij} = \bar{x}_{ij} \, w_j, \qquad i = 0, 1, \ldots, m,$$

where $w_j$ is the weight (importance) of the $j$-th criterion and $\bar{x}_{ij}$ is the normalised rating of the $j$-th criterion for the $i$-th alternative.
Decision makers employ the optimality function, which calculates the utility degree of an alternative relative to the best alternative, to determine the priorities of the options. Decision makers determine the optimality function values as follows:

$$S_i = \sum_{j=1}^{n} \hat{x}_{ij}, \qquad i = 0, 1, \ldots, m,$$

where $S_i$ is the value of the optimality function of the $i$-th alternative. Higher values of the optimality function correspond to preferable options.
Decision makers determine the utility degree of an alternative by comparing the analysed choice with the ideally best one, $S_0$ (the Pareto-optimal solution). Equation (20) helps decision makers calculate the utility degree $K_i$ of an alternative:

$$K_i = \frac{S_i}{S_0}, \qquad i = 0, 1, \ldots, m, \qquad (20)$$

where $S_i$ and $S_0$ are the optimality criterion values obtained from Equation (19).

The values of $K_i$ fall within the interval $[0, 1]$. Decision makers can order these values in an increasing sequence, reflecting the desired order of precedence. The utility function values derived from this process enable decision makers to determine the complex relative efficiency of each potential alternative.
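Taken together, the ARAS steps described above (optimal row, sum-based normalisation, weighting, optimality function, utility degree) can be sketched end to end. All criterion values, weights, and the benefit/cost split below are assumed example data, not figures from the study:

```python
# End-to-end ARAS sketch with assumed example data.
# Rows: alternatives; columns: criteria. benefit[j] marks max-preferable criteria.
X = [
    [75.0, 4.0, 8.0],   # alternative 1
    [60.0, 2.0, 9.0],   # alternative 2
    [90.0, 5.0, 6.0],   # alternative 3
]
weights = [0.5, 0.2, 0.3]          # criteria weights, summing to 1
benefit = [True, False, True]      # criterion 2 is cost-type (min preferable)

m, n = len(X), len(X[0])

# Step 1: optimal row x_0j (max for benefit criteria, min for cost criteria).
x0 = [max(row[j] for row in X) if benefit[j] else min(row[j] for row in X)
      for j in range(n)]
rows = [x0] + X

# Step 2: normalisation. Cost criteria are first inverted (1/x), then every
# column is divided by its sum so entries fall within [0, 1].
inv = [[r[j] if benefit[j] else 1.0 / r[j] for j in range(n)] for r in rows]
col_sums = [sum(r[j] for r in inv) for j in range(n)]
norm = [[r[j] / col_sums[j] for j in range(n)] for r in inv]

# Steps 3-4: weight the normalised values and sum them into the
# optimality function S_i for each row (row 0 is the optimal alternative).
S = [sum(norm[i][j] * weights[j] for j in range(n)) for i in range(m + 1)]

# Step 5: utility degree K_i = S_i / S_0 relative to the optimal row.
K = [S[i] / S[0] for i in range(1, m + 1)]
best = max(range(m), key=lambda i: K[i])  # index of the highest-utility alternative
```

Because every $S_i$ is measured against the optimal row $S_0$, each utility degree lands in $(0, 1]$, and the ranking follows directly from ordering the $K_i$ values.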