#### 2.1.7. Elicitation Format

The main elicitation formats are remote inquiries or individual interviews on the one hand and group interactions on the other; each brings its own advantages and drawbacks with respect to the types of bias it invokes.

Remote or separate elicitation implies no interaction among the experts, which excludes biases originating from group dynamics such as social pressure, hierarchy influence or personality traits [14]. Such an approach can be strengthened by expert calibration, weighting and mathematical aggregation (e.g., in the CM), which contribute to empirical control and methodological transparency.
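The mathematical aggregation mentioned above can be illustrated by the simplest case, a performance-weighted linear pool of expert estimates. This is a minimal sketch only; the CM derives its weights from calibration questions in a far more elaborate way, and the estimates and weights below are purely illustrative.

```python
# Minimal sketch of mathematical aggregation via a weighted linear pool.
# The calibration-based weights are invented for illustration; in the CM
# they would be derived from the experts' performance on seed questions.

def linear_pool(estimates, weights):
    """Aggregate the experts' point estimates using normalized weights."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Three experts estimate the same quantity; expert 2 earned the highest
# (hypothetical) calibration weight.
estimates = [120.0, 150.0, 140.0]
weights = [0.2, 0.5, 0.3]

print(linear_pool(estimates, weights))  # 141.0
```

The same pooling logic extends to entire probability distributions rather than point estimates, which is the form used in CM-style studies.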

Although the researcher should acknowledge the presence of groupthink biases within interactive groups, in-person meetings of expert panels have a spectrum of advantages as well [14]. In particular, they can foster creative thinking by combining different areas of expertise and generate a wider variety of ideas. Furthermore, more precise data can be acquired through the interactions.

#### 2.1.8. Qualitative versus Quantitative Variables

As previously mentioned, the majority of the studied sources on SEJE aim at the elicitation of physical parameters describing a system or a phenomenon. Such elicitation variables can be labeled as quantitative, since a precise correct value theoretically exists, which the experts should approximate.

However, as the purposes of the AMA have shown, a SEJE might be required under the following circumstances:


In such cases, the experts would have insufficient experience with, or knowledge of, the nonexistent concepts. Therefore, attempts to quantify system parameters would not only be challenging but would also lead to a significant increase in epistemic uncertainty.

The alternative is to use a qualitative scale for the assessment of non-deterministic criteria. Usually, it reflects linguistic evaluation grades such as "very good", "good" or "bad" in a categorical form. For the purposes of quantification and further data processing, these statements were transferred to a continuous numerical scale from 1 to 9 within the initial AMA, as defined in [29]. One of its main drawbacks is the ill-defined character of the numerical definitions for the assessment of technological alternatives: should a certain technology be evaluated with 7 ("very good") or 6 (between "good" and "very good")? This ambiguity is perceived especially by representatives of technical fields accustomed to precise quantities (as observed during both workshops conducted within the current project so far). A further disadvantage is the lack of a reference for the qualitative values, at least for the extrema of the scale. This issue is addressed within the Analytic Hierarchy Process (AHP) by Saaty [30], where pairwise comparisons are evaluated on a scale from 1 (equal) to 9 (absolutely superior) to denote the superiority of one alternative over another.
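The grade-to-number mapping and the ambiguity of the even scores can be sketched as follows. Only the placements implied by the text (7 = "very good", 6 lying between "good" and "very good") are taken from the source; the remaining grade assignments are assumptions for demonstration, and the original definitions are given in [29].

```python
# Illustrative encoding of the linguistic-to-numeric scale discussed above.
# Grades marked "assumed" are hypothetical placements, not taken from [29].

GRADES = {
    1: "very bad",    # assumed lower anchor
    3: "bad",         # assumed
    5: "good",        # implied by the text's example
    7: "very good",   # stated in the text
    9: "excellent",   # assumed upper anchor
}

def describe(score):
    """Return the linguistic grade for an odd 1-9 score; even scores fall
    between two grades, which is exactly the ambiguity noted in the text."""
    if score in GRADES:
        return GRADES[score]
    lo, hi = GRADES[score - 1], GRADES[score + 1]
    return f"between '{lo}' and '{hi}'"

print(describe(7))  # very good
print(describe(6))  # between 'good' and 'very good'
```

The even scores carry no linguistic label of their own, which is why experts accustomed to precise quantities find the scale ambiguous.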

At this point, one should recall the SEJE definition from the beginning of the current subsection, which denotes such a process as a quantification of subjective expert knowledge. Although such scales are considered "qualitative" in the current work, they still represent a way to quantify the DM's experience and knowledge in a certain data format. Hence, this can also be regarded as a form of elicitation.

#### 2.1.9. SEJE Integration with Multi-Criteria Decision-Making

SEJE methods aim to ensure a transparent and scientific elicitation of expert knowledge, sometimes by offering multiple steps of aggregation, reevaluation or discussion. However, the thorough execution of such routines may only be practicable for the elicitation of a relatively small number of variables, simply due to time constraints and limitations related to participant engagement and attention span. This was experienced during the first [12] and second workshop of the current project.

However, one might need to select among a significant number of alternative scenarios or technologies. Furthermore, the selection might be defined as the outcome of a multi-criteria assessment, which is the case for the AMA. Hence, a certain structure is required both for the elicitation process and for the subsequent data processing in order to obtain the option comparison results in the form of pairwise comparisons, ranks, etc. The algorithms that provide a framework to transform expert inputs into final option rankings or assessments by accounting for multiple criteria simultaneously are denoted as Multi-Criteria Decision-Making (MCDM) methods [31].
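The core MCDM step, transforming per-criterion expert inputs into a final ranking, can be sketched with the simplest such method, a weighted sum of criterion scores. Real MCDM methods (AHP, FAHP, etc.) are considerably more elaborate; the alternatives, scores and weights below are invented for illustration.

```python
# Minimal sketch of an MCDM ranking step: a weighted sum of criterion
# scores per alternative, sorted best-first. All data are hypothetical.

def rank_alternatives(scores, weights):
    """scores: {alternative: [score per criterion]}.
    Returns a list of (alternative, aggregate value), best first."""
    aggregate = {
        alt: sum(s * w for s, w in zip(vals, weights))
        for alt, vals in scores.items()
    }
    return sorted(aggregate.items(), key=lambda kv: kv[1], reverse=True)

scores = {  # hypothetical expert scores on three criteria
    "Technology A": [7, 5, 8],
    "Technology B": [6, 8, 7],
    "Technology C": [9, 4, 5],
}
weights = [0.5, 0.3, 0.2]  # criterion weights, summing to 1

for alt, value in rank_alternatives(scores, weights):
    print(f"{alt}: {value:.2f}")
```

In the AMA, both the scores and the criterion weights would themselves be products of the SEJE process rather than fixed inputs, which is precisely where the two methodologies must interlock.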

The selection and possible integration of an MCDM approach into the AMA process are given in Reference [4]. The source also justifies the choice of the FAHP for the problem structuring. Once an MCDM algorithm/framework has been selected, it is necessary to choose or adapt an appropriate SEJE approach.

This raises the question of the integration of the MCDM and SEJE methodologies, which is subject to the following challenges:


#### 2.1.10. Purposes for SEJE Applications

Regardless of the application domain, extended research on SEJE has yielded a spectrum of terms for participative approaches, such as "technology assessment", "scenario workshop", "future workshop" or "stakeholder workshop". Therefore, it is first necessary to distinguish among these notions in order to position the AMA workshops in this scientific context.

Technology Assessment (TA) has the general purpose of identifying promising innovations and evaluating the socio-technical impact of new technologies or improvements [32]. Ultimately, this can aid political decision-making at the administrative or company level. Such assessments might involve cost-benefit and risk analyses, as well as the potential relationships to markets and society [32]. In particular, some sources focus on the technology's integration in policy making and public acceptance [32,33], as well as the "co-evolution of technology and society" [32]. For this purpose, stakeholder workshops are conducted which gather representatives from the concerned circles, such as companies, society, political and research institutions [34]. In a structured process, the participants attempt to form visions of the innovative technology's interaction with the public and its administrative regulation, aiming to resolve existing concerns. Such a vision is denoted as a "scenario", which encompasses a possible implementation of the novelty and its impact on a spectrum of dimensions such as various levels of society, the environment and public/consumer acceptance [34]. There exist numerous types of workshops and conferences aiming to define multiple scenarios, select the most promising ones and/or detail their implementation. Prominent examples are the scenario workshop, future workshop, consensus conference, future search or search conference [35]. The current article will not describe these in detail. Instead, it is purposeful to introduce the use of creativity for the creation and definition of scenarios (or visions, futures, etc.).

Vidal [35] summarizes that creativity encompasses two types of thinking: divergent thinking, which allows one to see multiple perspectives of a certain situation, and convergent thinking, which helps to "continue to question until satisfaction is reached" [35]. Methods for creative problem solving involve switching between these two thinking approaches. In this context, both the scenario workshops [34] and the future workshops [36] include steps that encourage divergent thinking to generate new ideas and convergent thinking to select the most appropriate one(s) among them and to improve their level of detail. To date, the workshops conducted with the enhanced AMA have used only convergent thinking by asking the experts to compare technological alternatives.

#### *2.2. SEJE Applications in the Aerospace Domain*

The uncertain character of the conceptual design phase of aerospace vehicles has encouraged researchers to seek expert opinions on upcoming innovations. Extensive research on and development of SEJE techniques was conducted at NASA (National Aeronautics and Space Administration) to estimate weight, sizing and operations-support parameters for a launch vehicle [37]. In particular, probabilistic values were elicited by creating appropriate expert calibration [38] and aggregation [39] techniques in combination with a specially designed questionnaire [13].

Additionally, the use of scenarios in the aircraft design process has been discussed by Strohmayer [40]. The author argues that based on market analysis, one could derive requirements and identify technologies for new aircraft by organizing and evaluating these in the form of consistent scenarios.

Authorities in the aeronautical domain have also shown interest in expert judgments. The Federal Aviation Administration (FAA, United States Department of Transportation) adopted SEJE for the development of a risk assessment tool for the electrical wire interconnect system [41]. This idea was further studied and refined by Peng et al. [42], who validated the expert opinions and estimated the agreement within the panel.

Despite the available statistics and databases on accidents in aviation, Badanik et al. [43] used expert judgments as a possible aid for airlines to estimate accident probabilities. For this purpose, the CM was applied to the answers of airline pilots, who assessed the probabilities that certain IATA (International Air Transport Association) accident types occur for Flight Data Monitoring events on multiple aircraft.

#### *2.3. Justification of the Developed Workshop Concept*

In order to position the current research and justify the developed SEJE methodology, it is first necessary to summarize the objectives of the AMA and the elicitation as follows:


Therefore, one can categorize the AMA workshops as a variation of TA. However, depending on the selected use case and the attending participants, the research conducted so far does not necessarily focus on the social impact of new concepts, but rather on an optimal selection of technologies to use in vehicle designs (e.g., during the first and second workshops).

The consideration of novel or non-existent technologies with scarce or no historical performance data makes the elicitation of deterministic physical parameters of the configurations challenging. In addition, the professionals' limited experience with such components further increases the epistemic uncertainty during the quantitative elicitation of system parameters. Hence, a qualitative expression of the performance evaluations of the options has been defined. In combination with the FAHP by Buckley and Saaty, the experts are required to enter pairwise comparisons of the technological options in the form of trapezoidal fuzzy numbers on a scale from 1 (both technologies are equal) to 9 (technology A is absolutely superior to technology B); see Reference [4] for more details.
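One characteristic step of the fuzzy AHP in Buckley's variant can be sketched as follows: the fuzzy weight of an option is obtained as the component-wise geometric mean of its row of trapezoidal pairwise comparisons (a, b, c, d). This is a minimal sketch under stated assumptions; normalization, consistency checking and the full pipeline used in the AMA are described in Reference [4], and the comparison values and the simple averaging defuzzification below are invented for illustration.

```python
# Sketch of one Buckley-style FAHP step with trapezoidal fuzzy numbers
# (a, b, c, d). Comparison values and defuzzification are illustrative.

from math import prod

def geometric_mean_trapezoid(row):
    """Component-wise geometric mean of a row of trapezoidal fuzzy numbers."""
    n = len(row)
    return tuple(prod(tfn[i] for tfn in row) ** (1.0 / n) for i in range(4))

def defuzzify(tfn):
    """Simple defuzzification: average of the four defining points."""
    return sum(tfn) / 4.0

# Row of fuzzy pairwise comparisons of technology A against A, B and C:
# (1, 1, 1, 1) on the diagonal; e.g., "A moderately superior to B"
# expressed as the trapezoid (2, 3, 4, 5).
row_a = [(1, 1, 1, 1), (2, 3, 4, 5), (1, 2, 3, 4)]

fuzzy_weight = geometric_mean_trapezoid(row_a)
print(tuple(round(x, 3) for x in fuzzy_weight))
print(round(defuzzify(fuzzy_weight), 3))
```

The resulting trapezoid preserves the experts' imprecision through the aggregation and is only collapsed to a crisp value at the very end of the data processing.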

By adopting the qualitative evaluation character, the current work relies on the assumption that the intuitive elicitation of expert knowledge and experience is a reliable scientific basis for concept derivation. In this context, the motivational view on bias is selected, as defined by Meyer and Booker [14]. The reason is the dominant importance of a clear problem and methodology definition, leading to a cleaner elicitation of expert knowledge, rather than the accurate statistical estimation of precise physical parameters (as in the case of the cognitive view on bias).

In order to engage the full potential of the experts' knowledge and creativity, the method would benefit from both mathematical and behavioral aggregation. For this purpose, a modification of the IDEA protocol is derived to fit the contextual evaluation of the MM options. A detailed description of the entire methodology and its implementation can be found in the following section on the second AMA workshop.
