Article

Framing Algorithm-Driven Development of Sets of Objectives Using Elementary Interactions

Bauhaus-Institute for Infrastructure Solutions (b.is), Bauhaus-Universität Weimar, Goetheplatz 7/8, 99423 Weimar, Germany
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2022, 5(3), 49; https://doi.org/10.3390/asi5030049
Submission received: 14 April 2022 / Revised: 5 May 2022 / Accepted: 7 May 2022 / Published: 9 May 2022

Abstract

Multi-criteria decision analysis (MCDA) is an established methodology to support the decision-making of multi-objective problems. For conducting an MCDA, in most cases, a set of objectives (SOO) is required, which consists of a hierarchical structure of objectives, criteria, and indicators. The development of an SOO is usually based on moderated development processes requiring high organizational and cognitive effort from all stakeholders involved. This article proposes elementary interactions as a key paradigm of an algorithm-driven development process for an SOO that requires little moderation effort. Elementary interactions are self-contained information requests that may be answered with little cognitive effort. The pairwise comparison of elements in the well-known analytic hierarchy process (AHP) is an example of an elementary interaction. Each elementary interaction in the development process presented contributes to the stepwise development of an SOO. Based on the hypothesis that an SOO may be developed exclusively using elementary interactions (EIs), a concept for a multi-user platform is proposed. Essential components of the platform are a Model Aggregator, an Elementary Interaction Stream Generator, a Participant Manager, and a Discussion Forum. While the latter component serves the professional exchange of the participants, the first three components are intended to be automatable by algorithms. The platform concept proposed has been partly evaluated in an explorative validation study demonstrating the general functionality of the algorithms outlined. In summary, the platform concept suggested demonstrates the potential to ease SOO development processes: it does not restrict the application domain, it is intended to work with little administrative and moderation effort, and it supports the further development of an existing SOO in the event of changes in external conditions. The algorithm-driven development of SOOs proposed in this article may ease the development of MCDA applications and, thus, may have a positive effect on the spread of MCDA applications.

1. Introduction

Multi-criteria decision analysis (MCDA) is a group of decision support approaches that analyses multi-objective problems [1]. In MCDA modeling, aspects such as stakeholder involvement and social participation are not essential but are considered outcome-enhancing [1,2,3,4,5]. Thus, multiple MCDA variants integrate stakeholder engagement. Among these variants are the decision analysis interview approach [4], stakeholder multi-criteria decision aid [6], participatory analytical hierarchy process (AHP) [7], decision conferencing [8], and multi-actor multi-criteria analysis (MAMCA) [9]. In general, stakeholders can be involved in many stages of an MCDA development process [3,10,11].
To situate MCDA approaches, a look at the discipline of operations research (OR) is expedient. OR is dedicated to the mathematics-based, data-driven, and model-driven contribution of methodologies to improve decision quality and has a clearly quantitative approach [12]. In contrast, when human stakeholders are involved in the development of OR methodologies, psychological aspects have to be taken into account; such approaches are also referred to as behavioral operational research (BOR) [13]. It is argued that the application of OR methodologies by humans introduces behavioral factors that should not be disregarded when assessing the quality of OR methodologies [14]. At the same time, it is postulated that the inclusion of behavioral aspects in a discipline is an indicator of the maturity of the underlying core discipline [15] and also reflects the increasing capability to accommodate more complex models [16]. Montibeller and von Winterfeldt [17] describe, among others, cognitive and motivational biases that may occur in decision analysis and lead to inaccurate decision models or decisions, as illustrated, for example, in [18]. Franco et al. [19] provide an overview of the current state of BOR, offering a detailed research agenda that includes problem structuring methods and model building goals, which are also addressed in this study. Overall, MCDA approaches remain shaped by the insights of the BOR discipline [20].
Thus, the development process of an MCDA application is inherently demanding, particularly when various stakeholder groups with diverse backgrounds have to be integrated into a joint, transdisciplinary process. Such an approach requires balancing various levels of cognitive skills, habits, and cultures [21]. For example, involved citizens and experts form a sharp contrast in terms of specific knowledge and experience [3]. The modeling process is also prone to behavioral effects, such as group interaction and influences by the facilitator based on communication with the group [13]. Moreover, MCDA development processes are commonly considered very time- and effort-consuming [4,11,22]. A relevant but particularly challenging part of the development of MCDA tools is the identification of objectives [23]. Bond et al. [24] discuss shortcomings in the definition of objectives and validate mitigating measures. Similarly, Haag et al. [25] suggest using a master list in brainstorming activities, combined with online questionnaires, to enlarge the number of participants. These approaches advantageously integrate software tools in their development processes; examples are the Decision Analysis Interview approach [26,27] and decision conferencing [28].
This article proposes the concept of using a multi-user software platform as a medium for the participatory development of an MCDA application involving all stakeholders. The central principle proposed is the use of short interactions between the participants and the platform. Participation from any location is enabled by the provision of the platform via the web. Time independence is enabled by the capability of asynchronous work, i.e., participants are not required to be online at the same time. Furthermore, time requirements for participation are flexible. Together, these characteristics enable a large number of participants to contribute to the development of an MCDA application. Further, negative group effects should be avoided. The platform concept proposed in this article is limited to the participatory creation of a set of objectives (SOO) as the core of an MCDA application.
This article is structured as follows: in the next section, the theoretical foundations of the software-supported participatory development of an SOO are outlined. The concept of elementary interactions and the envisioned platform is described in the two succeeding sections. Section 5 describes a pilot study based on the platform concept, whereas Section 6 discusses the results. The article closes with a summary and the conclusions.

2. Theoretical Background

2.1. Participatory MCDA

Stakeholder involvement and participation are affirmed in the MCDA literature [2,6,10,29,30]; they allow the incorporation of stakeholders’ knowledge and values, help bring structure to the planning, create discussion frameworks, and foster learning among stakeholders [4].
A variety of participatory methods is discussed [11,31,32,33]. These methods include workshops, stakeholder group meetings, interviews, written surveys, brainstorming and writing, morphological analysis, literature research, and expert panels [3,4,11,34,35,36]. The application of such methods consumes considerable time and staff resources [11,37,38]. When participatory approaches are applied in face-to-face meetings (e.g., workshops, sessions, panels), individuals may face strategic, tactical, social, and psychological issues in the decision modeling process [39]. Negative effects such as the dominance of stakeholders [40], strategic answers by stakeholders [41], and the groupthink phenomenon [42] have been observed.
There are structured communication techniques, such as the Delphi method [40], which aim to reduce negative group effects by employing repeated questionnaires and facilitator-based aggregation of answers to achieve consensus. Viera et al. [43] have proposed and evaluated a combination of the Delphi method with additional methods, such as decision conferencing, to open the design of MCDA tools to a large number of participants.
The approach proposed in this article likewise aims at reducing negative group effects; it accentuates asynchronous activities, algorithm-based aggregation of answers, and the inclusion of all stakeholder groups, while not requiring personal meetings.

2.2. Set of Objectives (SOO) Development

For conducting MCDAs, the following steps are typically taken: (1) clarify the decision context; (2) define objectives and attributes; (3) develop alternatives; (4) estimate consequences; (5) evaluate trade-offs and select alternatives, and (6) implement, monitor, and review [44].
The development of an SOO is carried out in the first two stages of conducting MCDAs according to Gregory et al. [44] and includes the definition of the assessment goal and the collection of objectives and criteria (the terms “attribute” and “criterion” are used synonymously) [11,45]. The assessment goal is divided into objectives. Each objective is specified in more detail by so-called criteria. A criterion is measured by indicators, which provide concrete values. Figure 1 depicts the general structure of an SOO. In addition to weights, the SOO may be supplemented by transformation functions for transforming and normalizing indicator values [46] to serve as the basis for an MCDA application.
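The hierarchical structure described above can be expressed as a small data model. The following sketch (in Python, with hypothetical class and field names that are not part of the article) illustrates one possible representation of an SOO with a goal, objectives, criteria, indicators, weights, and optional transformation functions for indicator values:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Indicator:
    name: str
    # optional transformation/normalization of raw indicator values, cf. [46]
    transform: Optional[Callable[[float], float]] = None

@dataclass
class Criterion:
    name: str
    weight: float = 0.0
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Objective:
    name: str
    weight: float = 0.0
    criteria: List[Criterion] = field(default_factory=list)

@dataclass
class SetOfObjectives:
    goal: str
    objectives: List[Objective] = field(default_factory=list)
```

An instance for a concrete application might then start as `SetOfObjectives(goal="sustainability of water infrastructure")` and grow as objectives, criteria, and indicators are added.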

2.3. Participatory MCDA Using Software Tools

There are many applications of MCDA software [47,48,49,50,51,52,53], as well as many case studies [54]. Recently, Cinelli et al. [55] presented a web-based tool that enables non-MCDA experts to rank alternatives based on a fixed set of indicators.
Marttunen et al. [4] discuss a list of potential problems occurring during personal interactive interviews with MCDA software. It is argued that software-based MCDA modeling requires time and commitment from stakeholders; that some participants have problems understanding or accepting the method and its principles; that support from an experienced decision analyst is required; and that interviewees’ answers may be influenced unintentionally [4].
The platform concept proposed prevents these problems: users choose their level of engagement themselves, elementary interactions (EIs) do not require a deeper understanding of MCDA modeling, and EIs may be adapted to users’ abilities. Further, there is no decision analyst who could influence the process. Moreover, the algorithms of the platform concept are inherently capable of driving the design of an SOO from scratch, without a predefined specification of SOO elements. If it becomes apparent that particular expertise is missing, additional experts with that expertise can be included to further supplement the SOO developed up to that point.
Mustajoki and Marttunen [51] provide a survey of MCDA software, especially in the context of environmental planning processes. Mustajoki and Marttunen state that there “are numerous MCDA software tools available”. Most of the software tools investigated support MCDA-related models and the elicitation of preferences via questionnaires. However, the development of an SOO with the help of many participants is not mentioned. They further state, “We think that none of the software tools in our analysis is such that users without any prior experience of MCDA could use it”. In contrast, in the platform concept proposed, only the initiators of an MCDA application need to be trained in the usage of the platform, while the participants simply perform self-explanatory elementary interactions with the platform.

3. The Concept of Elementary Interactions

3.1. Elementary Interactions

Elementary interactions (EIs) are a central construct of the platform concept but, to the best of our knowledge, EIs are not discussed in the literature in the context of developing an SOO. Therefore, EIs are presented in detail in this section. EIs are defined as short user interactions with the platform. Ideally, EIs are closed questions in which the user must choose from a predefined set of answers. EIs are self-contained and require only a short human processing time, i.e., they are accomplishable with a few clicks or by typing a term in less than a minute. Thus, the platform offers a low threshold for participation in the development process of an SOO.
Figure 2 shows three examples of website components asking for short interactions, which inspired the elementary interactions proposed here. The requested interactions require the participant to make a short decision and externalize this decision with one click. Although it is not always possible to restrict an EI’s cognitive complexity to such a low level (cf. EI Name in Table 1, which requires identifying a meaningful term and typing it in), low cognitive complexity is considered a main design trait of EIs. The goal is to keep the level of cognitive complexity as simple as possible so that EIs can be answered casually. A method for limiting the level of cognitive complexity is the utilization of closed questions. An example is asking for an intuitive and subjective assessment of the relative importance of two criteria: “Is criterion A or criterion B more important to measure goal C?” (This kind of question is well-known as a paired comparison from the priority evaluation within the AHP [30]). Using such a design allows for short feedback cycles: a participant is given a short task that can be completed in seconds. This should tempt the participant to the next EI, which might be just as easy to accomplish. To foster such a stream of EIs in a flow, answering should be framed as relaxed and playful in order to activate users’ intuitive abilities. This principle of a stream of EIs can be observed, for example, in surveys conducted in the field of public opinion research by the company Civey [56,57]. Participants may stop and resume answering elementary interactions at any time. All completed responses contribute to the SOO development.
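Since the pairwise comparison known from the AHP [30] is the prototypical elementary interaction, the following sketch illustrates how a set of such answers could be turned into priority weights. It uses the row geometric mean as an approximation of the AHP principal eigenvector; the comparison values are invented for illustration and are not taken from the article:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priorities from a pairwise comparison matrix
    via the row geometric mean, normalized to sum to 1."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Invented example: criterion A is judged 3x as important as B and 5x as important as C.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
print(ahp_weights(comparisons))  # roughly [0.65, 0.23, 0.12]
```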

3.2. Elementary Interaction Categories and Types

EIs must fulfill different purposes in SOO development, such as creating, structuring, or validating elements. In the following, EIs are categorized by purpose and described using examples. The list of purposes is a draft and considered incomplete, but it is meant to convey the underlying idea. The EIs are summarized in Table 2.
EI Category Create. The first necessity is to ask the participants for appropriate SOO elements. This is accomplished by the EI Name (cf. Table 2, Id 1), which asks, for example: “Please name a criterion, which is important to assess the objective time”. After having been answered by multiple users, EI Name results in a set of potential elements (Element Candidates). This EI is considered cognitively complex because participants have to think creatively of a suitable term, for example, one designating a criterion, and they must type it in.
EI Category Validate. As soon as an element has been named, it has to be validated. This is the goal of the EI Confirm (cf. Table 2, Id 2): the participant is asked whether a given element candidate should be considered an element, e.g., “Are direct costs a valid criterion to assess the objective economy?” If an element candidate reaches a certain validity level, the generation of elements of the subordinate level can be started, e.g., once a criterion has been validated, suitable indicators may be generated. The validation of SOO elements requires support by appropriate validity measures. For example, the percentage of confirmations compared to rejections of an SOO element is such a validity measure (a sketch of such a measure is given at the end of this section). Furthermore, the confirmations and rejections may be weighted by the element-related expertise of each answering participant. Arguments about each element may be exchanged in the discussion forum, and the respective discussion may be linked in the EI user interface.
EI Category Structure. The goal of structuring criteria and objectives is the identification of duplicates and of a hierarchical structure. The EI Identify duplicates (cf. Table 2, Id 5) works on two random elements. It helps to discover duplicates and elements with semantically similar meanings. If the results of this EI point to two (or more) potentially similar elements, the EI Determine common name (cf. Table 2, Id 6) requires the participant to enter a common name. If a provided name achieves a defined validity (resulting from confirming EIs similar to EI Confirm), the underlying similar elements are removed from the model and the resulting element is added. Further EIs evaluate the need to restructure the hierarchy of the elements. The EI Select parent element (cf. Table 2, Id 7) challenges the current assignment of an element (criterion or indicator) to its parent, e.g., “What is the most appropriate objective for the criterion ‘direct costs’: ‘economic objectives,’ ‘environmental objectives,’ or ‘social objectives’?” The answers to this EI either confirm the assignment, provide hints to relocate it, or identify new elements of the superordinate level.
EI Category Determine Weights. The determination of weights assigns priorities to the elements of an SOO. An example is a pairwise comparison, accomplished by using the EI Prioritize pairwise (cf. Table 2, Id 3), e.g., “Is the objective ‘direct costs’ more important than ‘indirect costs’?” (measured on a Likert scale). A variant of this EI is the specification of more than two answer options. The EI Choose set-based (cf. Table 2, Id 4) implements multiple answer options: “Which five of the following criteria are the most important criteria for measuring economic objectives of a water infrastructure system?” As mentioned above, “importance” is defined in the context of this platform as subjectively perceived importance, so that non-expert participants are not discouraged from participating by the pressure of demanding expectations.
While the EI categories have been described above, Table 2 describes the most important EI types.
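As a minimal sketch of the confirmation-based validity measure mentioned for the category Validate, the following function computes the expertise-weighted share of confirmations; the function name, the weighting scheme, and the example values are illustrative assumptions rather than part of the concept:

```python
def weighted_validity(answers):
    """answers: list of (confirmed: bool, expertise_weight: float).
    Returns the expertise-weighted share of confirmations,
    or None if no answers have been collected yet."""
    total = sum(weight for _, weight in answers)
    if total == 0:
        return None
    confirmed = sum(weight for ok, weight in answers if ok)
    return confirmed / total

# Illustrative data: two experts confirm a criterion, one layperson rejects it.
answers = [(True, 1.0), (True, 0.8), (False, 0.3)]
print(weighted_validity(answers))  # ~0.86
```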

4. Platform Concept

The platform concept proposed consists of seven numbered elements (Figure 3). The simulation model (2) represents a system of the real world (1). Based on the interactions of participants (3) with the platform, the Set of Objectives Designer (4) creates the SOO and weights (5) with the help of so-called elementary interactions. In the end, the assessment result (6) serves as a basis for decisions (7).
(1)
The assessment object includes the system boundaries and the alternatives for the assessment objective.
(2)
Each indicator of the SOO must be calculated, and each requires either an algorithm or manual data input, e.g., in the case of an expert estimation. The input values for the calculation of the indicators are stored in the simulation model, which represents a model of the real-world system to be evaluated. The development of an indicator is always accompanied by the modeling of suitable attributes in the simulation model. Thus, when an algorithm is defined—by means of an expression editor, which can explore the underlying model—it relies on the attributes already present or adds new attributes to the simulation model. The simulation model grows in parallel with advancing SOO development. This means that both the meta-model of the simulation model is developed and corresponding values for concrete assessment object examples are provided. At this point of the process, a lack of data may emerge and may require a redesign of indicators and their algorithms. In further approaches, the simulation model may be extended, for example, to accommodate dynamic simulations.
(3)
Participants are required for the functioning of the platform. Participants are managed through the Participant Manager (a platform element, described in more detail in Section 4.3), but they are not seen as part of the proposed platform as such.
(4)
The Set of Objectives Designer collects the information given by the participants through EIs. This includes the collection and structuring of objectives, criteria, and indicators. The Set of Objectives Designer uses the EIs described above and facilitates them through algorithms in an automated way so that no human facilitator is involved. Furthermore, the weighting of an SOO is conducted by the Set of Objectives Designer, which is described in more detail in Section 4.2.
(5)
The Set of Objectives results from the Set of Objectives Designer and the Simulation Model. Both components, their interactions, and the development of an SOO are explained in Section 4.2 and Section 4.3.
(6)
Joining the SOO with the corresponding weights and the example data of the simulation model yields a suggested assessment result (a minimal aggregation sketch follows this list).
(7)
Based on the assessment results, a decision can be made.
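Item (6) can be illustrated with a simple additive aggregation. The sketch below assumes a weighted-sum aggregation of normalized indicator values per criterion; the platform concept itself does not prescribe a particular aggregation method, and all names and numbers are invented for illustration:

```python
def assess(weights, indicator_values):
    """Weighted-sum aggregation of normalized indicator values.
    weights: {criterion: weight}, assumed to sum to 1.
    indicator_values: {criterion: value in [0, 1]} taken from the simulation model."""
    return sum(w * indicator_values[criterion] for criterion, w in weights.items())

# Invented example: two alternatives of a water infrastructure system.
weights = {"direct costs": 0.5, "resource consumption": 0.3, "acceptance": 0.2}
alternative_a = {"direct costs": 0.7, "resource consumption": 0.4, "acceptance": 0.9}
alternative_b = {"direct costs": 0.5, "resource consumption": 0.8, "acceptance": 0.6}
print(assess(weights, alternative_a), assess(weights, alternative_b))  # 0.65 vs. 0.61
```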

4.1. Use Case

To illustrate the intended workflow of the platform and describe the development process of an SOO, the following use case includes the relevant steps using the example of creating an SOO for an MCDA application assessing sustainable water infrastructure.
Step 1: Defining the assessment goal, selecting and activating platform participants. One or more persons—the initiators—recognize the need for an MCDA application. The initiators define the goal and the system boundaries of the real-world system. Further, the initiators identify relevant stakeholder groups. In the case of water infrastructure, typical stakeholder groups have been previously identified [11,22]. To reach a large number of potential participants, professional associations related to the assessment topic should be identified.
Step 2: Starting the development process. As soon as an invited participant creates an account on the platform, s/he is able to inform him-/herself about the purpose and aims of the MCDA application. During this introduction, the participant answers multiple-choice questions. These questions both inform the participant about the context and assess the status of the participant’s knowledge. Thereafter, the participant can browse through the current SOO (which at the beginning of the development process comprises the goal only). Alternatively, the participant may answer a sequence of elementary interactions. The sequence is created on a semi-random basis. The participant can stop answering EIs at any time. Depending on the status of the SOO, not all proposed EIs might be available yet. For example, if there are no criteria, requests for indicators are not yet possible, because each indicator requires a criterion.
Step 3: Development process. The development process for an SOO should run without the need for administrative intervention in most cases. Tasks such as evaluating the validity of the SOO elements proposed and generating the EI stream are performed using algorithms. However, initiators may monitor activities on the platform and intervene in situations when there is a lack of participants or when the goal has not been defined clearly.
Step 4: Evaluation of the resulting SOO. After threshold values of validity have been reached for all SOO elements, a milestone version of the SOO is created. This version of the SOO can be integrated into an MCDA application.
Step 5: Evolution. When external conditions have changed significantly, (e.g., civic preferences), the SOO developed may not be sound for the application any longer. In this case, the platform can be used for further development of the SOO based on the SOO elements already identified in the platform.
In the following, specific core components of the platform concept, which facilitate the implementation of the given use case, are highlighted. Among them are the platform core components Set of Objectives Designer, Participant Manager, and Model Aggregator.

4.2. Set of Objectives Designer

The Set of Objectives Designer is responsible for the development of a viable SOO and the assessment of the weights of an SOO’s elements. Figure 4 depicts the structure and workflow of the Set of Objectives Designer. The central component is the EI Stream Generator. It creates elementary interactions based on multiple sources of information. First, the current SOO is analyzed for missing information. For example, if a criterion lacks indicators, elementary interactions to survey indicators for that criterion are generated.
A further information source is the Participant Manager, which maintains a competency model (competence profile) of each participant. For example, if the Participant Manager has recorded little technical competency for a participant, it seems unreasonable to present this participant with EIs for naming subject-specific criteria. Rather, this participant should be asked EIs about preferences for the weighting of the criteria.
Participants’ answers to the elementary interactions are delivered to the Model Aggregator, which integrates the answers into the SOO. The Model Aggregator uses information provided by the Participant Manager. Based on the competency model, the potential reliability of the answers is weighted, i.e., the answer to a technical question given by a credentialed expert is given a higher weight than the answer given by a non-expert. Further, the answers are used to update the participant’s competency model in the Participant Manager, i.e., a correct answer to a question increases the value of a corresponding competence. An additional component of the Set of Objectives Designer is the Discussion Forum, in which discussions between participants about the elements of the SOO should be fostered to enable collaborative development. A discussion forum of this kind might be realized with the help of software packages such as MediaWiki [62] or Stack Overflow [63]. This component is integrated into the Set of Objectives Designer using hyperlinks; whenever an element appears in the user interface, e.g., in the question of an EI, a hyperlink leads to the corresponding description and discussion page of this element.
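A minimal sketch of the interplay just described, under the assumption that competencies and validities are represented as simple numeric values: the EI Stream Generator picks the next EI from missing SOO information filtered by the participant’s competency, and the Model Aggregator feeds the expertise-weighted answer back into the SOO. All function names, data structures, and thresholds are illustrative assumptions:

```python
def next_ei(soo, participant):
    """EI Stream Generator: pick the next elementary interaction.
    soo: {"criteria": [{"name", "indicators", "validity"}, ...]}
    participant: {"competency": float in [0, 1]}"""
    for criterion in soo["criteria"]:
        if criterion["validity"] is None:
            return ("Confirm", criterion["name"])          # validate newly named elements first
        if not criterion["indicators"] and participant["competency"] > 0.5:
            return ("Name", criterion["name"])             # ask more competent users for indicators
    return ("Prioritize pairwise", None)                   # otherwise elicit preferences

def aggregate(criterion, confirmed, participant):
    """Model Aggregator: update a criterion's validity with an
    answer weighted by the participant's competency."""
    key = "pro" if confirmed else "contra"
    criterion[key] = criterion.get(key, 0.0) + participant["competency"]
    total = criterion.get("pro", 0.0) + criterion.get("contra", 0.0)
    criterion["validity"] = criterion.get("pro", 0.0) / total if total else None
```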

4.3. Participant Manager

In general, various stakeholder groups influence the development process of an MCDA system [6,22,64,65]. A set of potential stakeholder groups includes decision-makers, interest groups, experts, and planners [11].
In the case of the platform proposed, the initiators are a distinct group in the MCDA application design process. The initiators define the goal of the MCDA application and the system boundaries and invite potential participants. They ensure that all involved stakeholders are represented, i.e., that the entirety of platform users can provide both the specialist knowledge and the preferences of the affected stakeholders. The initiators take the role of MCDA application users. The initiators are expected to know the goal of an MCDA application, know when it makes sense to develop an SOO, and know what specifications need to be given for the development of an SOO. However, the initiators need to know comparatively little about the development process itself, as the algorithms of the platform should be able to handle the process unattended. Depending on the level of sophistication of the algorithms achievable, minor manual intervention may still be necessary, as was the case in the validation study (Section 5) for the definition of an objective. The idea of the platform concept, however, is that no manual interventions are required for developing an SOO once the goal has been defined.
Furthermore, end-users are a specific group that is subsumed under the term interest group in the set given above. End-users can be defined as stakeholders who lack specialist knowledge about the assessment object but are impacted by a decision based on the MCDA application. In the context of the water infrastructure example, end-users are citizens.
The capabilities of each platform user must be estimated depending on his/her role. For example, the contributions of a proven expert to the SOO elements have to be weighted more heavily than the “guesses” of end-users. Hence, an open competency model, i.e., one visible to all participants of the platform, is created and maintained during platform operation. This model is used to weight the impact of answered EIs. For example, the more expertise a participant demonstrates, the more impact the participant’s contributions will have on the SOO elements. The competency model captures domain-specific competencies and provides all required user-characterizing attributes, such as reliability. Among the possible sources of the proposed competency model are the following (a sketch of how they might be combined follows the list):
  • Assessment results: When a user registers on the platform, an introductory test is done that assesses the technical expertise of the user regarding specific domains. The provision of an initial test has to be performed by the initiators. During the design process, the assessment of the technical expertise can be extended, e.g., by a collaborative question design tool [66].
  • Self-estimation: If a user identifies him-/herself as an end-user, the initial focus of the EIs may be set on contributing to preferences.
  • Reputation: In crowdsourcing systems, contributors often are assigned an attribute reputation [67], which is a measurement of the quality of their previous contributions to the system. At the same time, reputation is used to derive system permissions. An example is the Question and Answer software Stack Overflow [63]. ResearchGate is another example of a platform that assigns a reputation index to each user openly [61].
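One hedged way to combine the three sources above into a single competency value per domain is a weighted mixture; the 0.6/0.2/0.2 weighting and the reputation normalization below are purely illustrative assumptions, not values proposed by the article:

```python
def competency(test_score, self_estimation, reputation, max_reputation=1000):
    """Combine an introductory test score [0..1], a self-estimation [0..1],
    and a crowdsourcing-style reputation count into a value in [0, 1].
    The 0.6/0.2/0.2 weighting is purely illustrative."""
    reputation_norm = min(reputation / max_reputation, 1.0)
    return 0.6 * test_score + 0.2 * self_estimation + 0.2 * reputation_norm

# Illustrative participant: good test result, modest self-estimation, some reputation.
print(competency(test_score=0.8, self_estimation=0.5, reputation=250))  # 0.63
```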

4.4. Model Aggregator

A characteristic of the platform is that the development process continues while the MCDA application is already capable of supplying an assessment result. This leads to the question, “At which point in the development can such a model be considered stable?” It is suggested to introduce various attributes, each describing a validity measurement of an element. First, the attribute validity accumulates the element’s validity. It determines, for example, whether the name of the element is reasonable. Further, the attribute validityStructure holds a measure of the correct structural position of the element, i.e., whether the element is located correctly in the SOO structure. Another measurement of validity is the attribute validityChildren, which is a measure of the stability of the subordinate elements, e.g., whether those elements define a complete set and are mutually independent.
The values of validity-describing attributes are continuously updated by EIs that affect the related elements. For example, if multiple users name the same criterion via EI Name (e.g., direct costs), the validity of the element (represented by the attribute validity) is increased with each mention. The confirmation of an element (EI Confirm) increases the value of this attribute, whereas a rejection decreases it.
In general, the question of validity occurs on at least three levels. The first level is elements; the validity of an element is indicated by the attribute validity. The next level is made up of groups, which are the subordinate elements of a parent element, e.g., the criteria that belong to an objective. Group-based validity increases if EIs of the category Structure do not result in changes; each negation of an EI Identify duplicates increases the validity, and each confirmation decreases it again. The children’s validityStructure attributes contribute to the parent’s validityChildren attribute. The third level consists of tiers. There are two tiers. The first one is the hierarchy of objectives, criteria, and indicators; the second one is given by the weights of objectives and criteria. The determination of the weights is reasonable only when the underlying first tier has been captured in a milestone, i.e., when it is no longer subject to changes. The decision as to when the design of the first tier is completed can be made by the platform automatically, depending on the tier’s validity attributes. When the average validity of the elements has reached a threshold, an SOO milestone is created. Thereafter, the step of determining weights is based on this milestone version of the SOO.
Formulas to calculate the values of validity attributes have to take certain stipulations into account; for example, an element’s validity can be considered stable when its value has not changed significantly over the last affecting EIs.
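A sketch of this validity bookkeeping, following the attribute names used above; the threshold, the stability window, and the tolerance are illustrative assumptions:

```python
from statistics import mean

def is_stable(validity_history, window=10, tolerance=0.05):
    """An element's validity is considered stable when it has not changed
    by more than `tolerance` over the last `window` affecting EIs."""
    if len(validity_history) < window:
        return False
    recent = validity_history[-window:]
    return max(recent) - min(recent) <= tolerance

def milestone_reached(elements, threshold=0.75):
    """Create an SOO milestone when the average element validity exceeds
    the threshold and every element's validity history has stabilized."""
    return (mean(element["validity"] for element in elements) >= threshold
            and all(is_stable(element["validity_history"]) for element in elements))
```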
In the following section, a validation study of two core components of the platform is described.

5. Functional Core Component Validation Study

The EI Stream Generator and the Model Aggregator are core components of the platform concept. The functionality of the platform proposed largely depends on the functionality of these core components. Therefore, validating the functionality of these core components is the first step in validating the overall functionality of the platform concept. The objectives of this explorative study [68] were, on the one hand, to assess the effectiveness of the algorithms proposed for generating EIs and for aggregating the answers into a converging SOO and, on the other hand, to further develop the algorithms required where necessary. For the purpose of the validation study, the two core components were simulated using two standard software applications and manual collection and processing of the required data.

5.1. Study Design

The aim of the study is to validate the core components without having to implement them in software. Consequently, the study was largely conducted manually. Figure 5 shows the data storages involved and the core component validation process. The first data storage is the pool of questions in the learning management system Moodle [69], shown in Figure 5, item (2). The answers to the questions are additionally stored in Moodle. The SOO and supporting data, called the SOO model, are stored in a spreadsheet file, as shown in Figure 5, item (5). The steps of the iterative workflow employed in the validation study are:
Step 1: Generating EIs. EIs are mainly derived from the current state of the SOO. This step is performed manually by the study leader, using the current state of the SOO model as input. The output of this step is a set of questions that are used as EIs; in each iteration, a set of questions (typically 10–20) was generated.
Step 2: The questions generated in Step 1 are added to a question pool provided by Moodle. This step is performed manually by the study leader.
Step 3: The questions are made available to the participants via the Moodle Test activity (Figure 6). Participants are then required to answer a minimum of 10 questions but can additionally answer as many questions as they wish.
Step 4: The answers to the questions are used to update the SOO model. To do this, the study leader extracts the answers to the questions from Moodle and aggregates them into the SOO model, following aggregation rules. The manual aggregation includes the correction of spelling errors, which might be more complex in an automated algorithm.
Step 5: A new version of the SOO model is made available. The new version of the SOO model is then the basis for the next iteration, i.e., the workflow is started again with Step 1.
Step 6: The workflow, in particular Steps 1 and 4, was accompanied by supervision aimed at improving the workflow. After each iteration, a decision is made about whether changes should be made in Steps 1 and 4, especially regarding the rules to generate EIs, and the rules for aggregating the answers into the SOO model.
While the platform concept employs the concept of a continuous EI stream and model aggregation, for practical reasons, the study followed an iterative workflow. The iterative approach allows the algorithms for generating EIs and for model aggregation to be applied to a larger number of EIs at a time. This is useful for the manual handling of these steps, while software-based automated processing allows continuous processing after each EI. The SOO model updated in Step 5 was then used as the baseline for generating a set of EIs for the next round. Figure 7 shows an excerpt of the SOO model’s suggested criteria and their validity measures. The underlying spreadsheet is available as a digital supplement (Table S1) to this article.
For the study, the assessment goal of sustainability of water infrastructure was chosen, as many potential participants had at least basic expertise in this field. Additionally, the authors were involved in the development of another SOO for this goal, conceivably serving as a reference [46].
Participants. Participants were recruited from the scientific staff of a chair of urban wastewater management (n = 12) and from the acquaintances of the study leader (n = 14). Altogether, 26 participants were involved; participation was voluntary and not incentivized. Before entering the validation study, the participants received a written introduction to the study, its purpose, and the tasks to be performed. At the end of the study, more than six weeks after the beginning, 18 participants were still active. Eight participants stopped answering the questions during the course of the study; among the reasons were a lack of motivation due to not fully comprehending the type of questions to be answered, especially the many repetitions, as well as a lack of access to the platform due to travel. This high drop-out rate is probably also characteristic of the operational stage of the platform, since voluntary answering is a central characteristic of the platform concept.
Dependencies of EI types. Based on the goal of sustainability of water infrastructure, the first iteration consists of questions for naming criteria only. For this purpose, EI type 1 Name is used (“Please name a criterion, which is important to assess the goal sustainability of water infrastructure systems”.) The result of the first iteration is a set of criteria, each of which has to be checked in the next iteration as to whether (a) the criterion named is also seen as a criterion by other participants, and (b) whether differently named criteria are semantically the same criterion.
For checking item (a), EI type 2 is used (e.g., “Is cost a valid criterion to assess the sustainability of water infrastructure?”). For checking item (b), EI type 5 was used (“Do you think high user acceptance and high usability are the same criterion?”). If, in the case of (a), enough participants answer the question positively, the criterion is considered valid; otherwise, it is rejected and discarded. In the case of item (b), there is a threshold percentage value (e.g., 75%), which must be exceeded with a minimum number of answers (e.g., 10), to merge two criteria into one. If two criteria are merged into one, in the next iteration, a question is generated from EI type 6 Common Name, asking for a common name, e.g., “What is a common name for the criteria high user acceptance and high usability?”
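Rules (a) and (b) can be written down compactly. The following sketch uses the threshold values named above (75% and a minimum of 10 answers); applying the same thresholds to both rules follows the heuristic reported in Section 5.2 and is otherwise an assumption:

```python
def criterion_confirmed(yes_votes, no_votes, min_answers=10, threshold=0.75):
    """Rule (a): a proposed criterion is accepted once enough participants
    have answered and the confirmation rate is high enough."""
    total = yes_votes + no_votes
    return total >= min_answers and yes_votes / total >= threshold

def should_merge(same_votes, different_votes, min_answers=10, threshold=0.75):
    """Rule (b): two criteria are merged when the share of 'semantically the same'
    answers exceeds the threshold; the merge then triggers an EI of type 6
    (Common Name) in the next iteration."""
    total = same_votes + different_votes
    return total >= min_answers and same_votes / total >= threshold
```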

5.2. Results

In total, 2200 questions were answered, and 12 iterations were performed at a frequency of two iterations per week. The SOO achieved after 12 iterations is depicted in Figure 8. Although the SOO is apparently incomplete, similarities to other SOOs for assessing the sustainability of water infrastructures (e.g., [46]) are observable. The apparent incompleteness applies to objectives, criteria, and indicators and is reflected in unmet validity and stability measures, missing indicators, or unassigned criteria. A bottom-up approach was used to derive the objectives from the criteria. In the workflow of the study, the criteria were determined first. Then, with the help of the EI type 6, Common Name, an attempt was made to derive an objective from various criteria. Within two iterations, this attempt yielded no results.
Due to the limited time of the study, the objective ecological objectives was added to the SOO model manually so that criteria could be assigned. Further gaps in the SOO (e.g., no indicators for the four criteria in the right columns of Figure 7) are due to the pursuit of specific research aspects, since only a finite number of EIs (limited by the number of participants) could be answered per iteration.
Additionally, the study allowed for experience with validity measures. Heuristically, criteria validation requires a confirmation rate of 75% or more and at least 10 answers. This validity measure was applied for criteria and indicators and helped to identify the SOO depicted in Figure 8.
Challenges. Various challenges were observed during the study, which are described hereafter.
Unclear names. Challenges occurred when the names of the criteria were not clear. As a result, some participants were overwhelmed by the EI type 2, Confirm, and were unable to answer the question. Two measures have been taken. Firstly, the option “I don’t know” was added. Secondly, definitions for a criterion were requested; a criterion cannot be defined only by supplying a name. Using the EI type 4, Choose set-based, the participants were then able to determine a suitable definition for a criterion.
Stakeholder-specific questions. All participants received the same questions without any differentiation according to the stakeholder group to which they belonged. According to informal feedback, domain-specific questions were too demanding for some participants, especially those who identified themselves as end-users. Therefore, the participants should be assigned to stakeholder groups and the questions should be stakeholder group-specific. Further, some participants complained about the repetition of the questions.
Question designs. Various question designs were used in the study. For the EI Identify duplicates, for example, in addition to the original Yes/No variant, the question of how far both elements overlap was also asked on a 7-point Likert scale. No clear results could be found; it seems that participant-type-dependent preferences exist.
Dependencies of EI types. In general, dependencies between the uses of the EIs were clarified and confirmed in the validation study, such as the fact that participants can be asked for possible indicators for a criterion only after that criterion has been confirmed. Consequently, the description schema of EIs requires the naming of prerequisites for applying each EI. Further, state models (e.g., containing the states Criterion suggested, Criterion confirmed, and Criterion rejected) and state transition diagrams for SOO elements would be beneficial for the platform concept description.
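A minimal sketch of such a state model, using the states named above plus a hypothetical Merged state for criteria that are absorbed after duplicate detection; the transition set is an assumption for illustration:

```python
from enum import Enum, auto

class CriterionState(Enum):
    SUGGESTED = auto()
    CONFIRMED = auto()
    REJECTED = auto()
    MERGED = auto()  # absorbed into another element after duplicate detection

# Allowed state transitions; e.g., indicators may only be requested
# for criteria that are in the CONFIRMED state.
TRANSITIONS = {
    CriterionState.SUGGESTED: {CriterionState.CONFIRMED, CriterionState.REJECTED},
    CriterionState.CONFIRMED: {CriterionState.MERGED},
    CriterionState.REJECTED: set(),
    CriterionState.MERGED: set(),
}

def can_transition(current, target):
    """Check whether a state change is allowed by the state model."""
    return target in TRANSITIONS[current]
```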

5.3. Implications and Limitations

The study, which was discontinued due to time restrictions after 12 iterations without having achieved a complete SOO, confirmed the overall viability of the platform concept. The rules used for aggregating the SOO model allowed for the identification of different elements of the SOO. It is worth pointing out that core components of the platform concept have been validated and further developed, even without extensive software development, thus confirming the concept of the validation study. Further positive results of the validation study include the development of validity measures and the design of further question types.
The study also highlighted future tasks that needed to be worked on. Among them is the need to tailor EIs to the stakeholder group of the participant in order to not overburden the participants. Furthermore, it is necessary to reflect on the mechanisms by which the motivation of the participants can be maintained. In this study, the study leader needed to repeatedly motivate the participants to do a new iteration. It also became clear that the development of an SOO requires a large number of participants. The 26 participants involved in this study can be classified as a small number. Likewise, the 12 iterations completed turned out to be too few for the development of an SOO. From the experience gained, the magnitudes of these two parameters for a further validation study might be estimated at 50 participants over 40 rounds. Haag et al. [25] propose a single-digit number of participants to generate a list of objectives. Possible reasons for the discrepancy in the number of participants, such as motivation and qualification of participants, and differing processes, need to be investigated.
Future research tasks include an extended study using the findings obtained from this validation study, as well as the simulation of the other components of the platform concept, to provide a basis for the software implementation of the platform concept. In future studies, the methodology for identifying objectives in the SOO has to be elaborated as well. Furthermore, future participants of the study should be provided with the most important requirements for the elements to be defined. In this study, this was already partially covered by the written introduction and by explanatory texts that the participants could reach through the questions. For example, such an explanation might point out that, to be complete, an objective should also describe a direction (e.g., not “cost” but “low cost”).
As the validation study served the further development of the platform concept itself, the roles taken over in the validation study do not fully correspond to the roles that the platform concept suggests. The study leader, in the role of the initiator, also had to perform other tasks, in particular, checking to what extent the development process functions and where changes should be made. This role was exercised especially in Step 6, when the functionality of the development process was reflected upon in a joint review by the study leader and the authors of the article.

6. Discussion

The concept described, a platform-mediated, almost unadministered approach to creating an SOO, seems to be attainable. Such a platform enables the development of SOOs, and thus MCDA applications, for various purposes. Further, the platform could be made openly available. The platform concept might modularly integrate methods already established in decision tool development, such as stakeholder analysis, the determination of weights, and transformation functions for indicator values.
The platform concept relies on a great number of participants, as the validation study has revealed. In the case of citizen science projects, for example, it is known that user activity decreases over time [70]. The validation study also revealed evidence of declining motivation among some participants. For this reason, it is necessary to continually motivate the participants. The platform concept, thus, needs to include methods of motivation design for the participants. Gamification, the application of gaming principles to real-world tasks [71], might be a methodology to foster the motivation and engagement of participants. The platform is expected to offer multiple opportunities for gamification; it generates a large amount of usage data, e.g., the number of interactions of each user or the number of consecutive days with logins. In particular, the introduction of a reputation system is considered to foster engagement without negatively affecting participation [72,73]. As indicated in the specifications for the component Participant Manager, reputation attributes, such as those maintained at Stack Overflow or ResearchGate, are candidates for sustaining motivation. Moreover, immediate feedback is considered an important means of fostering engagement [74]. Immediate feedback can be given by an extensive statistics component, which would visualize the effects of any performed EI. Key figures, such as “participant’s number of EIs” and “platform EIs in the last 24 h”, might be motivating for some of the participants.
In general, the platform concept enables the use of visualizations, since the available information is integrated into the platform. Visualizations are known to be beneficial for the cognitive processing of information, especially when combined with interactions [75]. In multimedia learning, in particular, visualizations are attributed a prominent role [76]. There are already various approaches to visualizing the results of MCDA applications, especially comparisons of different diagram variants [77,78,79]. These capacities can be further supplemented within this platform by the integration of all important information over time, as well as the possibility of interactive visual evaluation of the platform-contained information, such as performing a sensitivity analysis. Further, the capacity of the platform to trace the changes of various components, such as the simulation model of the real-world system and the preference model, would support the visualization of these changes over time.
Among the assumptions originally made in the platform concept was that the cognitive complexity of EIs could be kept low. However, the validation study has shown that EIs may indeed lead to complex cognitive processes in the participants; it is not possible to limit the cognitive complexity of EIs consistently to the level of simple multiple-choice questions. For example, a multiple-choice question regarding the best definition of an SOO element requires considerable reading work. Another example of higher cognitive complexity is the creative work required when naming new elements. Therefore, further research needs to clarify what degree of cognitive complexity is workable for EIs without their being perceived as hard work, which would discourage participants from engaging with EIs.
The application of the SOOs developed may serve a variety of purposes. A probably common application scenario is an SOO that is used as a standard tool for certain classes of decisions. A further purpose is utilization as a foundation for a group discussion. For the respective application scenarios, advantages and disadvantages have to be elaborated. In the case of the standard tool, for example, attention has to be paid to balancedness; in the case of the group discussion, the emphasis on individual perspectives should be avoided. A major advantage of the platform might be the maintainability of existing platform-originated SOOs by another group of participants. If an SOO was not optimal or does not correspond to a new decision task, it can evolve further.

7. Conclusions

The development of MCDA applications is, according to consistent findings in the literature, a complex process that requires high organizational effort. This article describes a platform concept for developing an SOO. SOOs are essential components of MCDA applications. The key paradigm of the concept is the decomposition of SOO design decisions into short interactions, so-called elementary interactions (EIs). Based on the information collected by these EIs and the algorithms of the core components, a structured SOO consisting of objectives, criteria, and indicators evolves in an automated manner over time. Relevant components of the platform concept are:
  • A Participant Manager, which holds a competency model for each participant;
  • A Model Aggregator, which transforms the answers received by EIs into the SOO;
  • An EI Stream Generator, which creates streams of EIs based on the information required for completing the SOO and suited to each participant;
  • A Discussion Forum, which fosters communication between participants.
A validation study confirmed the general functional capability of core components of the platform concept. However, it also helped to identify further research demands, such as determining methodologies to cluster criteria into objectives and exploring the cognitive complexity of EIs. In summary, the platform concept offers the following advantages: (1) the platform concept is open to any MCDA application domain, (2) it is intended to work with little administrative and organizational effort, and (3) it supports the further development of an existing SOO in the event of significant changes in external conditions. The algorithm-driven development of SOOs may have a positive effect on the spread of MCDA applications due to less organizational and administrative effort required.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/asi5030049/s1, Table S1: Spreadsheet—MCDA Model.

Author Contributions

Conceptualization, H.S. and A.L.; methodology, H.S. and A.L.; software, H.S.; validation, H.S. and A.L.; formal analysis, H.S. and A.L.; investigation, H.S. and A.L.; resources, H.S. and A.L.; data curation, H.S. and A.L.; writing—original draft preparation, H.S. and A.L.; writing—review and editing, H.S. and A.L.; visualization, H.S.; funding acquisition, H.S. and A.L. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge support from the German Research Foundation (DFG) and Bauhaus-Universität Weimar within the program of Open Access Publishing. Further, we would like to acknowledge the support of the German Federal Ministry of Education and Research (BMBF) grant number FKZ 033W011B.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data is supplied as Supplementary Materials to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Belton, V.; Stewart, T.J. Multiple Criteria Decision Analysis—An Integrated Approach; Springer: Boston, MA, USA, 2002.
2. Keeney, R.L. Value-Focused Thinking: A Path to Creative Decisionmaking; Revised ed.; Harvard University Press: Cambridge, MA, USA, 1996.
3. Hendriksen, A.; Tukahirwa, J.; Oosterveer, P.J.M.; Mol, A.P.J. Participatory decision making for sanitation improvements in unplanned urban settlements in East Africa. J. Environ. Dev. 2012, 21, 98–119.
4. Marttunen, M.; Mustajoki, J.; Dufva, M.; Karjalainen, T.P. How to design and realize participation of stakeholders in MCDA processes? A framework for selecting an appropriate approach. EURO J. Decis. Process. 2015, 3, 187–214.
5. Eisenführ, F.; Weber, M.; Langer, T. Rational Decision Making; Springer: Berlin/Heidelberg, Germany, 2009.
6. Banville, C.; Landry, M.; Martel, J.; Boulaire, C. A stakeholder approach to MCDA. Syst. Res. Behav. Sci. 1998, 15, 15–32.
7. Antunes, P.; Karadzic, V.; Santos, R.; Beça, P.; Osann, A. Participatory multi-criteria analysis of irrigation management alternatives: The case of the Caia irrigation district, Portugal. Int. J. Agric. Sustain. 2011, 9, 334–349.
8. Phillips, L. People-Centred Group Decision Support; Halsted Press: Sydney, Australia, 1989.
9. Macharis, C.; De Witte, A.; Ampe, J. The multi-actor, multi-criteria analysis methodology (MAMCA) for the evaluation of transport projects: Theory and practice. J. Adv. Transp. 2009, 43, 183–202.
10. Lahdelma, R.; Salminen, P.; Hokkanen, J. Using multicriteria methods in environmental planning and management. Environ. Manag. 2000, 26, 595–605.
11. Lück, A.; Nyga, I. Experiences of stakeholder participation in multi-criteria decision analysis (MCDA) processes for water infrastructure. Urban Water J. 2018, 15, 508–517.
12. Mortenson, M.J.; Doherty, N.F.; Robinson, S. Operational research from Taylorism to Terabytes: A research agenda for the analytics age. Eur. J. Oper. Res. 2015, 241, 583–595.
13. Hämäläinen, R.P.; Luoma, J.; Saarinen, E. On the importance of behavioral operational research: The case of understanding and communicating about dynamic systems. Eur. J. Oper. Res. 2013, 228, 623–634.
14. Hämäläinen, R.P. Behavioural issues in environmental modelling—The missing perspective. Environ. Model. Softw. 2015, 73, 244–253.
15. Franco, L.A.; Hämäläinen, R.P. Behavioural operational research: Returning to the roots of the OR profession. Eur. J. Oper. Res. 2016, 249, 791–795.
16. Royston, G. The past, present and futures of behavioral operational research. In Behavioral Operational Research: Theory, Methodology and Practice; Kunc, M., Malpass, J., White, L., Eds.; Palgrave Macmillan UK: London, UK, 2016; pp. 359–381.
17. Montibeller, G.; von Winterfeldt, D. Cognitive and motivational biases in decision and risk analysis. Risk Anal. 2015, 35, 1230–1251.
18. Melnik-Leroy, G.A.; Dzemyda, G. How to influence the results of MCDM?—Evidence of the impact of cognitive biases. Mathematics 2021, 9, 121.
19. Franco, L.A.; Hämäläinen, R.P.; Rouwette, E.A.J.A.; Leppänen, I. Taking stock of behavioural OR: A review of behavioural studies with an intervention focus. Eur. J. Oper. Res. 2021, 293, 401–418.
20. Morton, A.; Fasolo, B. Behavioural decision theory for multi-criteria decision analysis: A guided tour. J. Oper. Res. Soc. 2009, 60, 268–275.
21. Walter, A.I.; Wiek, A.; Scholz, R.W. Constructing regional development strategies: A case study approach for integrated planning and synthesis. In Handbook of Transdisciplinary Research; Springer: Berlin/Heidelberg, Germany, 2008; pp. 223–243.
22. Lienert, J.; Schnetzer, F.; Ingold, K. Stakeholder analysis combined with social network analysis provides fine-grained insights into water infrastructure planning processes. J. Environ. Manag. 2013, 125, 134–148.
23. Keeney, R.L. Identifying, prioritizing, and using multiple objectives. EURO J. Decis. Process. 2013, 1, 45–67.
24. Bond, S.D.; Carlson, K.A.; Keeney, R.L. Improving the generation of decision objectives. Decis. Anal. 2010, 7, 238–255.
25. Haag, F.; Zürcher, S.; Lienert, J. Enhancing the elicitation of diverse decision objectives for public planning. Eur. J. Oper. Res. 2019, 279, 912–928.
26. Marttunen, M.; Hämäläinen, R.P. Decision analysis interviews in environmental impact assessment. Eur. J. Oper. Res. 1995, 87, 551–563.
27. Karjalainen, T.P.; Marttunen, M.; Sarkki, S.; Rytkönen, A.-M. Integrating ecosystem services into environmental impact assessment: An analytic–deliberative approach. Environ. Impact Assess. Rev. 2013, 40, 54–64.
28. Phillips, L.D.; Bana e Costa, C.A. Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann. Oper. Res. 2007, 154, 51–68.
29. Munda, G. Social multi-criteria evaluation: Methodological foundations and operational consequences. Eur. J. Oper. Res. 2004, 158, 662–677.
30. Saaty, T.L. How to make a decision: The analytic hierarchy process. Eur. J. Oper. Res. 1990, 48, 9–26.
31. Petkov, D.; Petkova, O.; Andrew, T.; Nepal, T. Mixing multiple criteria decision making with soft systems thinking techniques for decision support in complex situations. Decis. Support Syst. 2007, 43, 1615–1629.
32. Gabriel, A.; Camargo, M.; Monticolo, D.; Boly, V.; Bourgault, M. Improving the idea selection process in creative workshops through contextualisation. J. Clean. Prod. 2016, 135, 1503–1513.
33. Salo, A.; Hämäläinen, R.P. Multicriteria decision analysis in group decision processes. In Handbook of Group Decision and Negotiation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 269–283.
34. Domènech, L.; March, H.; Saurí, D. Degrowth initiatives in the urban water sector? A social multi-criteria evaluation of non-conventional water alternatives in metropolitan Barcelona. J. Clean. Prod. 2013, 38, 44–55.
35. Marques, R.C.; da Cruz, N.F.; Pires, J. Measuring the sustainability of urban water services. Environ. Sci. Policy 2015, 54, 142–151.
36. Palme, U.; Lundin, M.; Tillman, A.M.; Molander, S. Sustainable development indicators for wastewater systems—Researchers and indicator users in a co-operative case study. Resour. Conserv. Recycl. 2005, 43, 293–311.
37. De Brito, M.M.; Evers, M. Multi-criteria decision-making for flood risk management: A survey of the current state of the art. Nat. Hazards Earth Syst. Sci. 2016, 16, 1019–1033.
38. Lienert, J.; Scholten, L.; Egger, C.; Maurer, M. Structured decision-making for sustainable water infrastructure planning and four future scenarios. EURO J. Decis. Process. 2014, 3, 107–140.
39. Kilgour, D.M.; Chen, Y.; Hipel, K.W. Multiple criteria approaches to group decision and negotiation. In Trends in Multiple Criteria Decision Analysis; International Series in Operations Research & Management Science; Greco, S., Ehrgott, M., Figueira, J., Eds.; Springer: Boston, MA, USA, 2010; Volume 142, pp. 317–338.
40. Hsu, C.-C.; Sandford, B.A. The Delphi technique: Making sense of consensus. Pract. Assess. Res. Eval. 2007, 12, 10.
41. Jonsson, A.; Andersson, L.; Alkan-Olsson, J.; Arheimer, B. How participatory can participatory modelling be? Degrees of influence of stakeholder and expert perspectives in six dimensions of participatory modeling. Water Sci. Technol. 2007, 56, 207–214.
42. Kerr, N.L.; Tindale, R.S. Group performance and decision making. Annu. Rev. Psychol. 2004, 55, 623–655.
43. Vieira, A.C.L.; Oliveira, M.D.; Bana e Costa, C.A. Enhancing knowledge construction processes within multicriteria decision analysis: The collaborative value modelling framework. Omega 2020, 94, 102047.
44. Gregory, R.; Failing, L.; Harstone, M.; Long, G.; McDaniels, T.; Ohlson, D. Structured Decision Making: A Practical Guide to Environmental Management Choices; Wiley-Blackwell: Hoboken, NJ, USA, 2012.
45. Nyga, I.; Lück, A.; Raber, W.; Hillenbrand, T.; Zimmermann, M.; Eller, M.; Eismann, C.; Möller, K.; Felmeden, J.; Langer, M.; et al. Rahmenkonzepte zur integrierten Bewertung siedlungswasserwirtschaftlicher Systeme [Frameworks for Integrated Assessment of Urban Water Management Systems]. gwf-Wasser|Abwasser 2018, 1, 53–62.
46. Sartorius, C.; Lévai, P.; Niederste-Hollenberg, J.; Nyga, I.; Sorge, C.; Hillenbrand, T. Comparative multi-criteria performance assessment of alternative water infrastructure systems. Water Sci. Technol. Water Supply 2018, 18, 2188–2198.
47. Buede, D.M. Software review: Overview of the MCDA software market. J. Multi-Criteria Decis. Anal. 1992, 1, 59–61.
48. Buede, D.M. Second overview of the MCDA software market. J. Multi-Criteria Decis. Anal. 1996, 5, 312–316.
49. Weistroffer, H.R.; Li, Y. Multiple criteria decision analysis software. In Multiple Criteria Decision Analysis; International Series in Operations Research & Management Science; Greco, S., Ehrgott, M., Figueira, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 233, pp. 1301–1341.
50. Ishizaka, A.; Nemery, P. Multi-Criteria Decision Analysis: Methods and Software; John Wiley & Sons: Hoboken, NJ, USA, 2013.
51. Mustajoki, J.; Marttunen, M. Comparison of multi-criteria decision analytical software for supporting environmental planning processes. Environ. Model. Softw. 2017, 93, 78–91.
52. Vassilev, V.; Genova, K.; Vassileva, M. A brief survey of multicriteria decision making methods and software systems. Cybern. Inf. Technol. 2005, 5, 3–13.
53. Oleson, S. Decision analysis: Past, present and future of dynamic software emphasizes continuous improvement of vital O.R. tool. OR/MS Today 2016, 43, 36–39.
54. Korosuo, A.; Wikström, P.; Öhman, K.; Eriksson, L.O. An integrated MCDA software application for forest planning: A case study in southwestern Sweden. Int. J. Math. Comput. For. Nat. Sci. 2011, 3, 75–86.
55. Cinelli, M.; Spada, M.; Kim, W.; Zhang, Y.; Burgherr, P. MCDA Index Tool: An interactive software to develop indices and rankings. Environ. Syst. Decis. 2020, 41, 82–109.
56. Civey GmbH. Civey—Erfahre Was Deutschland Denkt [Civey—Learn What Germany Thinks]. Available online: https://civey.com/ (accessed on 20 June 2018).
57. Wurnig, D. Das taugen die Umfragen von Civey, die dir gerade überall im Internet begegnen [What the Civey Polls You Currently Encounter All Over the Internet Are Worth]. Krautreporter. Available online: https://krautreporter.de/2077-das-taugen-die-umfragen-von-civey-die-dir-gerade-uberall-im-internet-begegnen (accessed on 12 December 2021).
58. Opinary GmbH. Opinary—Opinary Makes Opinions Matter. Available online: http://opinary.com/ (accessed on 17 March 2017).
59. SPIEGEL ONLINE GmbH. SPIEGEL ONLINE. Available online: www.spiegel.de (accessed on 22 March 2017).
60. EasyBib. EasyBib: Free Bibliography Generator. Available online: http://www.easybib.com/ (accessed on 21 March 2017).
61. ResearchGate GmbH. ResearchGate|Share and Discover Research. Available online: https://www.researchgate.net/ (accessed on 22 March 2019).
62. Wikimedia Foundation Inc. MediaWiki. Available online: https://www.mediawiki.org/wiki/MediaWiki (accessed on 13 September 2017).
63. Stackoverflow.com. Stack Overflow. Available online: http://stackoverflow.com/ (accessed on 17 July 2012).
64. Ferretti, V. From stakeholders analysis to cognitive mapping and multi-attribute value theory: An integrated approach for policy support. Eur. J. Oper. Res. 2016, 253, 524–541.
65. Lienert, J.; Koller, M.; Konrad, J.; McArdell, C.S.; Schuwirth, N. Multiple-criteria decision analysis reveals high stakeholder preference to remove pharmaceuticals from hospital wastewater. Environ. Sci. Technol. 2011, 45, 3848–3857.
66. McClean, S. Implementing PeerWise to engage students in collaborative learning. Perspect. Pedagog. Pract. 2015, 6, 89–96.
67. Adler, B.T.; De Alfaro, L.; Mola-Velasco, S.M.; Rosso, P.; West, A.G. Wikipedia vandalism detection: Combining natural language, metadata, and reputation features. Lect. Notes Comput. Sci. 2011, 6609, 277–288.
68. Körting, D. Entwurf und Validierung eines Citizen-Science-gestützten Verfahrens zur Entwicklung von multikriteriellen Bewertungssystemen am Beispiel technischer Infrastruktur [Design and Validation of a Citizen Science-Based Approach for the Development of MCDA Tools]; Bauhaus-Universität Weimar: Weimar, Germany, 2018.
69. Moodle.org. Moodle—Open Source Learning Platform. Available online: https://moodle.org (accessed on 23 May 2018).
70. Sauermann, H.; Franzoni, C. Crowd science user contribution patterns and their implications. Proc. Natl. Acad. Sci. USA 2015, 112, 679–684.
71. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From game design elements to gamefulness: Defining gamification. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, 28–30 September 2011; ACM: New York, NY, USA, 2011; pp. 9–15.
72. Thiel, S.-K. Reward-based vs. social gamification. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction, Gothenburg, Sweden, 23–27 October 2016; Volume 104, pp. 1–6.
73. Thiel, S.-K.; Fröhlich, P. Gamification as motivation to engage in location-based public participation? In Progress in Location-Based Services 2016; Springer: Berlin/Heidelberg, Germany, 2018.
74. Garris, R.; Ahlers, R.; Driskell, J.E. Games, motivation, and learning: A research and practice model. Simul. Gaming 2002, 33, 441–467.
75. Liu, Z.; Stasko, J. Mental models, visual reasoning and interaction in information visualization: A top-down perspective. IEEE Trans. Vis. Comput. Graph. 2010, 6, 999–1008.
76. Mayer, R.E. Multimedia Learning, 2nd ed.; Cambridge University Press: New York, NY, USA, 2009.
77. Miettinen, K. Survey of methods to visualize alternatives in multiple criteria decision making problems. OR Spectr. 2014, 36, 3–37.
78. Lami, I.M.; Abastante, F.; Bottero, M.; Masala, E.; Pensa, S. Integrating multicriteria evaluation and data visualization as a problem structuring approach to support territorial transformation projects. EURO J. Decis. Process. 2014, 2, 281–312.
79. Haara, A.; Pykäläinen, J.; Tolvanen, A.; Kurttila, M. Use of interactive data visualization in multi-objective forest planning. J. Environ. Manag. 2018, 210, 71–86.
Figure 1. Generalized structure of a set of objectives (SOO).
Figure 2. EI examples: top left: request for a seamless personal evaluation of a company takeover [58,59]; top right: slide-in single-choice question about the reason for the web page visit [60]; bottom: in-passing request for additional attributes of content in a domain-specific content management system [61].
Figure 3. System Perspective.
Figure 4. Set of Objectives Designer: Components and Process.
Figure 5. Workflow of core component validation.
Figure 6. Example screenshot of the interface for conducting elementary interactions provisionally as part of the case study (Moodle Test activity).
Figure 7. Excerpt from the spreadsheet-held SOO model.
Figure 8. Validation study result: preliminary set of objectives (SOO).
Table 1. EI Schema Description.
Schema Element | Description
Id | Identifier of the EI.
Description | Describes the context and purpose of the EI.
Category | The category refers to the purpose an EI serves. Commonly, there may be different EIs for achieving one purpose, e.g., there is more than one EI to validate an element.
Elements Affected | Names the elements of the assessment model to which this EI is applicable (e.g., Objective, Criterion, Indicator).
Impact | The effect the EI has on the SOO model.
Sample Question | A sample question that illustrates the EI.
Interaction | The action the user has to take to fulfill the EI.
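To make the schema in Table 1 more tangible, the following minimal sketch shows one possible representation of an EI as a plain data record. Python is used here only for illustration; all names (such as ElementaryInteraction and its fields) are assumptions of this sketch and not part of the platform implementation described in the article.

from dataclasses import dataclass
from typing import List

@dataclass
class ElementaryInteraction:
    """One self-contained information request, mirroring the schema of Table 1."""
    ei_id: int                     # Id: identifier of the EI
    description: str               # Description: context and purpose of the EI
    categories: List[str]          # Category: purpose(s) served, e.g., "Validate", "Weigh"
    elements_affected: List[str]   # Elements Affected: "Objective", "Criterion", "Indicator"
    impact: str                    # Impact: effect of the EI on the SOO model
    sample_question: str           # Sample Question: question template shown to the user
    interaction: str               # Interaction: action required from the user

# Illustrative instance corresponding to EI 2 ("Confirm") in Table 2
confirm_ei = ElementaryInteraction(
    ei_id=2,
    description="Validate an element of a model by asking a user for confirmation.",
    categories=["Validate"],
    elements_affected=["Objective", "Criterion", "Indicator"],
    impact="Increases the validity.",
    sample_question="Is 'Direct Costs' a valid criterion to assess the objective 'Economy'?",
    interaction="Choosing confirmation or rejection",
)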
Table 2. Overview of Elementary Interactions.
ID | Name | Description | Category | Elements Affected | Purpose | Sample Question | Interaction
1 | Name | Used to add new elements to the model; therefore, it requires the explicit naming of such an element. | Element creation | Objective, Criterion, Indicator | Adds a new element of the given type to the model. | Please name a criterion that is important to assess the objective Time. | Typing in a name.
2 | Confirm | Validates an element of a model by asking a user for confirmation. | Validate | Objective, Criterion, Indicator | Increases the validity. | Is Direct Costs a valid criterion to assess the objective Economy? | Choosing confirmation or rejection.
3 | Prioritize pairwise | Prioritizes one element of a model over another by asking a user. | Validate, Weigh | Objective, Criterion, Indicator | Weighs; increases the validity. | Which criterion is more important to describe the objective Economic Objectives: Direct Costs or Indirect Costs? | Choosing one of two options.
4 | Choose set-based | Selects the most relevant elements of a set. Depending on the customization, the selection may be ordered or unordered. It should preferably be implemented via drag and drop in a graphical user interface (GUI). | Validate, Weigh | Objective, Criterion, Indicator | Increases the validity. | Which five of the following criteria are the most important for measuring Economic Objectives of a Travel Type? [Select in order of importance.] | Choosing up to five elements of the given set.
5 | Identify duplicates | Identifies duplicate elements, which may differ in name but probably have the same meaning. | Validate, Restructure | Objective, Criterion, Indicator | Increases the validity. | Do you think "Indirect Costs" and "Direct Costs" are the same criterion? [To which extent do the criteria "Indirect Costs" and "Direct Costs" overlap?] | Answering with Yes or No. A variant of this EI could ask for the degree of identity on a scale from 0 to 100%.
6 | Determine common name | Asks the user for a common name for two or more elements. | Name | Objective, Criterion, Indicator | Determines the validity of an element or restructures the model once a validity threshold has been reached. | What is a common name for the criteria "Direct Costs" and "Indirect Costs"? | Entering a name.
7 | Select parent element | Asks the user for the appropriate parent element. It offers all available parent elements of the respective hierarchy level and lets the user choose the most appropriate one. | Validate, Restructure | Criterion, Indicator | Determines the validity of an element or restructures the model once a validity threshold has been reached. | What is the most appropriate objective for the criterion "Direct Costs": Economic Objectives, Environmental Objectives, or Social Objectives? [Provide an alternative objective if no suggestion fits well.] | Choosing one of multiple options [or entering the name of an alternative].
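The EIs in Table 2 differ in how their answers affect the SOO model. The following sketch illustrates, under stated assumptions, how a Model Aggregator could apply answers from EIs 1, 2, 5, and 6 to an evolving SOO model. The class and method names, the validity bookkeeping, and the threshold value are hypothetical choices made for this sketch only; they are not the algorithms reported in the article.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class SOOElement:
    """One element (objective, criterion, or indicator) of the evolving SOO model."""
    name: str
    element_type: str             # "Objective", "Criterion", or "Indicator"
    parent: Optional[str] = None  # name of the parent element, if any
    confirmations: int = 0        # confirming answers received (EI 2)
    rejections: int = 0           # rejecting answers received (EI 2)

VALIDITY_THRESHOLD = 3  # illustrative value: net confirmations required for validity

class ModelAggregator:
    """Minimal sketch of a Model Aggregator applying EI answers to the SOO model."""

    def __init__(self) -> None:
        self.elements: Dict[str, SOOElement] = {}

    def apply_name(self, name: str, element_type: str, parent: Optional[str] = None) -> None:
        # EI 1 ("Name"): add a newly proposed element to the model if it is not yet known.
        self.elements.setdefault(name, SOOElement(name, element_type, parent))

    def apply_confirm(self, name: str, confirmed: bool) -> None:
        # EI 2 ("Confirm"): update the validity bookkeeping of an existing element.
        element = self.elements[name]
        if confirmed:
            element.confirmations += 1
        else:
            element.rejections += 1

    def is_valid(self, name: str) -> bool:
        # An element counts as validated once enough net confirmations have accumulated.
        element = self.elements[name]
        return element.confirmations - element.rejections >= VALIDITY_THRESHOLD

    def apply_merge(self, keep: str, drop: str) -> None:
        # EIs 5/6 ("Identify duplicates"/"Determine common name"): merge two elements
        # by re-attaching the children of the dropped element to the kept one.
        for element in self.elements.values():
            if element.parent == drop:
                element.parent = keep
        self.elements.pop(drop, None)

For instance, after three users confirm the criterion "Direct Costs" via EI 2, is_valid("Direct Costs") would return True under the illustrative threshold chosen above.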
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
