#### *3.5. Workshop Structure*

The previous workshop aimed to test only a single step of a SEJE methodology, namely the individual expert evaluations, leaving DM interactions and behavioral knowledge aggregation out of scope. The current workshop implements both features by adapting the IDEA protocol to the pairwise comparisons of MM options. Further emphasis was placed on the contextual consistency of the tasks (the discussion of an attribute's options follows immediately after the DMs have evaluated them) and on optimizing workshop duration and expert concentration.

The workshop was introduced with a presentation outlining the objectives of the study, the problem statement, and instructions for the interactive elicitation. The evaluations were entered on a software platform specially developed for SEJE workshops within the AMA. As an upgrade of the version used for the first workshop [12], it integrated a carefully designed User Interface (UI) in the front-end and a back-end that ensured the smooth execution of the background operations and the storage of the evaluations in a database. This architecture allowed the interactive input of the DMs' evaluations according to a specially developed questionnaire design, presented in the next subsection.
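For illustration, the following is a minimal sketch of how a single pairwise evaluation could be represented for storage in such a database; the class and field names are assumptions and do not reflect the platform's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class PairwiseEvaluation:
    """Illustrative record of one pairwise comparison entered via the UI.

    Field names are hypothetical, not the platform's actual schema.
    """
    expert_id: str         # anonymised identifier of the DM
    attribute: str         # MM attribute row the compared options belong to
    option_a: str          # first option of the compared pair
    option_b: str          # second option of the compared pair
    criterion: str         # criterion under which the pair is judged
    judgement: int         # preference value entered by the DM
    revised: bool = False  # True if edited after the discussion round

    def to_record(self) -> dict:
        """Serialise the evaluation for insertion into the database."""
        record = asdict(self)
        record["stored_at"] = datetime.now(timezone.utc).isoformat()
        return record
```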

The main steps of the IDEA protocol are repeated for each attribute row of the MM. In the Investigate part, the experts individually evaluate the pairwise option comparisons according to all criteria. Subsequently, a moderated discussion round is conducted to share ideas among the participants. Its purpose is to broaden their horizon and draw their attention to forgotten or possibly unknown aspects that might influence the evaluation. If this is the case, the DMs have the possibility to edit their previous evaluations. After the evaluations for one attribute row are finished, the same procedure is conducted for the next one.
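A minimal sketch of this per-row loop follows, assuming hypothetical `elicit` and `discuss` callbacks that stand in for the platform's interactive UI; it is an illustration of the sequencing, not the actual implementation.

```python
from itertools import combinations


def run_idea_round(options, criteria, experts, elicit, discuss):
    """One Investigate-Discuss-Estimate cycle for a single MM attribute row.

    `elicit(expert, pair, criterion)` returns an individual judgement and
    `discuss(evaluations)` returns the keys the DMs wish to revise; both are
    hypothetical callbacks, not the platform's actual API.
    """
    pairs = list(combinations(options, 2))

    # Investigate: individual pairwise evaluations against every criterion.
    evaluations = {
        (expert, pair, criterion): elicit(expert, pair, criterion)
        for expert in experts
        for pair in pairs
        for criterion in criteria
    }

    # Discuss: moderated round surfacing overlooked aspects.
    to_revise = discuss(evaluations)

    # Estimate (revision): DMs may edit their previous inputs.
    for expert, pair, criterion in to_revise:
        evaluations[(expert, pair, criterion)] = elicit(expert, pair, criterion)

    return evaluations
```

Repeating this cycle over all attribute rows reproduces the row-by-row procedure described above.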

The discussion rounds consisted of the following steps:


The result of such a mapping from the second workshop is described in Section 4.3.2. This approach benefits from divergent thinking and allows the group to gather more aspects in order to increase the objectivity of an evaluation. It should be stressed that a consensus is required only for defining the most suitable option for each sub-criterion, roughly based on majority agreement. Beyond this point, each expert decides for themselves whether the mapping of the options is convincing enough for them to edit their evaluations.
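As one illustrative reading of "roughly based on majority agreement", the sketch below determines the most suitable option for a sub-criterion from the experts' preferred choices; the simple-majority rule and the example votes are assumptions, not the workshop's prescribed procedure.

```python
from collections import Counter


def most_suitable_option(votes):
    """Return the option holding a simple majority of the votes for one
    sub-criterion, or None if no option does.

    `votes` holds one preferred option per expert; the simple-majority rule
    is an illustrative assumption only.
    """
    if not votes:
        return None
    option, count = Counter(votes).most_common(1)[0]
    return option if count > len(votes) / 2 else None


# Hypothetical example with five experts:
print(most_suitable_option(["option B", "option B", "option B", "option A", "option C"]))
# -> option B
```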

#### *3.6. Questionnaire Design*

The questionnaire consisted of two main parts: a professional background questionnaire and a technology assessment section.

#### 3.6.1. Professional Background Questionnaire

The background questionnaire aims to elicit the participants' level of expertise in the relevant domains based on their own perception. This approach was inspired by the questionnaire design of NASA's SEJE method mentioned in Section 2.2 and described in Reference [38]. That source, however, proposes asking for the DM's age and expertise as integers from 1 to 5 [38] and combining these into a coefficient. Instead, the current questionnaire requires a self-assessment of knowledge or experience in the domains as fuzzy numbers on a scale from 0 to 9, reflecting the progressive increase in theoretical knowledge and practical hands-on experience. An example is exhibited in Figure 5. Taking into account the MM structure and the domain expertise represented on the panel, the expertise of the DMs was elicited in the disciplines of aircraft design, aerostructures, aeroengines, flight mechanics, and aerodynamics.
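A minimal sketch of how such a fuzzy self-assessment could be processed is given below, assuming triangular fuzzy numbers on the 0 to 9 scale and a centroid defuzzification; both the example values and the defuzzification rule are assumptions for illustration, not the scheme prescribed by the study.

```python
def centroid(tfn):
    """Crisp score of a triangular fuzzy number (low, mode, high)."""
    low, mode, high = tfn
    return (low + mode + high) / 3.0


# Hypothetical self-assessment of one DM across the elicited disciplines,
# each value given as a triangular fuzzy number on the 0-9 scale.
dm_profile = {
    "aircraft design":  (6, 7, 8),
    "aerostructures":   (3, 4, 5),
    "aeroengines":      (1, 2, 3),
    "flight mechanics": (4, 5, 6),
    "aerodynamics":     (5, 6, 7),
}

# Defuzzified scores that could later weight this DM's judgements per
# discipline; the values and the centroid rule are assumptions.
crisp_scores = {discipline: centroid(tfn) for discipline, tfn in dm_profile.items()}
print(crisp_scores)
```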
